How to Tame Apache Impala Users with Admission Control



This blog post is intended for users who are familiar with Apache Impala. If you’d like to learn about Apache Impala, read more here.

Introduction

A common problem encountered with Apache Impala is resource management. Everyone wants to use as many resources (i.e., memory) as they can to increase speed and/or hide query inefficiency. However, this isn't fair to other users, and it can be detrimental to queries supporting important business processes. At many of our clients, resources are plentiful when the cluster is freshly built and the initial use cases are being onboarded. Resources aren't a concern until more use cases, data scientists, and business units running ad-hoc queries consume enough of them to prevent those original use cases from completing on time. The result is query failures, which are frustrating for users and problematic for existing use cases.

In order to effectively manage resources for Apache Impala, we recommend using the Admission Control feature. With Admission Control, we can set up resource pools for Impala. This means limiting the number of queries, the amount of memory, and enforcing settings for each query in a resource pool. There are many settings to Admission Control, which can be daunting at first. We will focus on the memory settings that are fundamental for a cluster that already has dozens of active users and applications running.

Step 1: Getting Memory Stats

The first challenge with Admission Control is manually gathering metrics about individual users and the queries they have run in order to define the memory settings for resource pools. You could use the Apache Impala queries pane and chart builder in Cloudera Manager to go through each user's queries and gather up some stats, but that is time consuming, tedious, and hard to repeat at a later date. To make informed and accurate decisions on how to allocate resources for various users and applications, we need to gather detailed metrics. We've written a Python script to streamline this process.

Our script can be found on GitHub: https://github.com/phdata/blog-2019-10-impala-admcontrol.

The script generates a CSV report and does not make any changes to your cluster. Please review the README before running the script in your environment.

The CSV report includes overall and per-user stats for:

  • (queries_count) – number of queries run
  • (queries_count_missing_stats) – number of queries run without stats
  • (aggregate_avg_gb) – average memory used across nodes
  • (aggregate_99th_gb) – 99th percentile memory used across nodes
  • (aggregate_max_gb) – max memory used across nodes
  • (per_node_avg_gb) – average memory used per node
  • (per_node_99th_gb) – 99th percentile memory used per node
  • (per_node_max_gb) – max memory used per node
  • (duration_avg_minutes) – average query duration (in minutes)
  • (duration_99th_minutes) – 99th percentile query duration (in minutes)
  • (duration_max_minutes) – max query duration (in minutes)
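To make the column definitions concrete, here is a minimal Python sketch of how per-user stats like these could be derived from raw per-query records. The records and helper below are illustrative only, not part of the published script.

```python
import statistics

# Hypothetical per-query records: (user, per-node peak memory in GiB, duration in minutes).
# The real script gathers these from Cloudera Manager; the values here are made up.
queries = [
    ("user1", 2.0, 1.5),
    ("user1", 3.5, 2.0),
    ("user1", 48.0, 30.0),   # one errant query skews the max
    ("svc_account2", 10.0, 4.0),
    ("svc_account2", 12.0, 5.0),
]

def user_stats(records, user):
    """Summarize one user's queries into report-style columns."""
    mem = sorted(m for u, m, _ in records if u == user)
    dur = sorted(d for u, _, d in records if u == user)
    # Crude percentile: index at 99% of the sorted list (with small samples
    # this collapses toward the max, which is fine for illustration).
    p99 = lambda xs: xs[min(len(xs) - 1, int(len(xs) * 0.99))]
    return {
        "queries_count": len(mem),
        "per_node_avg_gb": round(statistics.mean(mem), 2),
        "per_node_99th_gb": p99(mem),
        "per_node_max_gb": mem[-1],
        "duration_avg_minutes": round(statistics.mean(dur), 2),
        "duration_max_minutes": dur[-1],
    }

print(user_stats(queries, "user1"))
```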

Step 2: Immediate Actions and Concerns

Every workload on every cluster is going to be different and have a wide range of requirements. As you go through the report, there are a few high priority items to look for.

First, are users running queries that are missing stats (the queries_count_missing_stats column)? If so, we suggest investigating which tables are missing stats and making sure that computing stats is a standard procedure in your environment.
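The fix in Impala is the COMPUTE STATS statement. As a small hypothetical helper, the snippet below emits the statements for a list of flagged tables; the table names are made up, and the output would be run through impala-shell or any Impala client.

```python
# Hypothetical list of tables the report flagged as missing stats.
missing_stats_tables = ["sales.orders", "sales.order_items"]

# Build one COMPUTE STATS statement per flagged table.
statements = [f"COMPUTE STATS {table};" for table in missing_stats_tables]

for stmt in statements:
    print(stmt)
```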

Second, compare the max columns to the 99th columns. The 99th columns account for the vast majority (99%) of a user's queries, which lets us screen out bad or errant queries. If any of the max columns are more than 10-20% higher than the corresponding 99th column, investigate that user's most expensive queries to see whether they were bad queries or whether those few queries could be tuned to better utilize resources.

  • aggregate_max to aggregate_99th
  • per_node_max to per_node_99th
  • duration_max to duration_99th
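As a quick sketch of this comparison, the snippet below flags users whose max is more than ~20% above their 99th percentile. The rows and the threshold are illustrative; in practice you would load the rows from the generated CSV with Python's csv module.

```python
# Hypothetical per-user rows shaped like the CSV report; values are made up.
report = [
    {"user": "user1",        "per_node_99th_gb": 4.0, "per_node_max_gb": 48.0},
    {"user": "user3",        "per_node_99th_gb": 9.5, "per_node_max_gb": 10.0},
    {"user": "svc_account1", "per_node_99th_gb": 3.0, "per_node_max_gb": 12.0},
]

def flag_outliers(rows, threshold=1.2):
    """Return users whose max memory exceeds their 99th percentile by ~20%+."""
    return [r["user"] for r in rows
            if r["per_node_max_gb"] > threshold * r["per_node_99th_gb"]]

print(flag_outliers(report))  # these users warrant a closer look
```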

Step 3: Resource Pool Setup in Apache Impala

The settings we are going to define based on this report are:

  • Max Running Queries / Max Queued Queries
  • Default Query Memory Limit
  • Max Memory
  • Queue Timeout

We'll walk you through how to determine each of these settings for the necessary resource pools. Once that is determined, we'll use the "Create Resource Pool" wizard in Cloudera Manager to create each pool.

Max Running/Queued Queries

To gauge this precisely, we would need a separate report that uses query start times and durations to track the average, 99th percentile, and max concurrency for each user. We suggest keeping this setting as low as the use case allows, because it ultimately affects the Max Memory this user or group of users can consume. To keep things simple, set Max Queued Queries to the same number as Max Running Queries.
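One way such a concurrency report could work is a simple sweep over query start/end events. The sessions and function below are an illustrative sketch, not part of our script.

```python
# Hypothetical queries as (start, duration), both in minutes since some epoch.
sessions = [(0, 5), (2, 4), (3, 10), (20, 2)]

def max_concurrency(sessions):
    """Peak number of simultaneously running queries (sweep-line over events)."""
    events = []
    for start, duration in sessions:
        events.append((start, 1))              # query begins
        events.append((start + duration, -1))  # query ends
    events.sort()  # at equal timestamps, ends (-1) sort before starts (+1)
    peak = running = 0
    for _, delta in events:
        running += delta
        peak = max(peak, running)
    return peak

print(max_concurrency(sessions))
```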

Default Query Memory Limit

This is the maximum amount of memory that we want to give a query per node. The safest entry for this setting is the per_node_max column from our report. The exception is when you have investigated the user's highest-memory queries and found that per_node_99th is a better representation of the user's good queries; in that case, use per_node_99th.

Max Memory

This is calculated as Default Query Memory Limit × number of Impala hosts × Max Running Queries. For example, on a 20-node cluster, if we want a resource pool with a 4 GiB per-node query limit that can run 5 queries at a time, this comes out to 4 × 20 × 5 = 400 GiB of max memory.
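The calculation can be captured in a one-line helper (the function name is ours, not an Impala API):

```python
def max_memory_gib(default_query_mem_limit_gib, num_impala_hosts, max_running_queries):
    """Max Memory = per-node query limit * number of hosts * concurrent queries."""
    return default_query_mem_limit_gib * num_impala_hosts * max_running_queries

print(max_memory_gib(4, 20, 5))   # the example above: 400 GiB
print(max_memory_gib(12, 20, 1))  # e.g. a single-query pool with a 12 GiB limit
```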

Queue Timeout

This setting is determined by the concurrency, duration, and SLA of your queries. If a query has to finish within 30 seconds and is tuned to run in 20 seconds, sitting in the queue for more than 10 seconds will violate the SLA. Third-party applications running against Apache Impala may also have their own query timeouts that long queue waits could interfere with; in those cases, we would prefer to return an immediate error. For long-running ETL workloads, where data skew may increase query duration, you can extend the timeout to ensure that all queries are queued and run.
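The SLA arithmetic above boils down to: queue budget = SLA minus expected runtime. A hypothetical helper, with the numbers from the example:

```python
def queue_budget_seconds(sla_seconds, expected_runtime_seconds):
    """Longest a query can wait in the queue without violating its SLA."""
    return max(0, sla_seconds - expected_runtime_seconds)

# A 30-second SLA query tuned to run in 20 seconds can queue for about 10 seconds.
print(queue_budget_seconds(30, 20))
```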

Like Cloudera's Admission Control Sample Scenario, our cluster has 20 nodes with 128 GiB of memory for Impala on each node (2,560 GiB total for Impala).

Immediately, we can see that three users (svc_account3, user1, and user4) should be followed up with to see whether their memory stats can be improved with COMPUTE STATS or whether several of their queries were simply poorly written. We should also look into svc_account1 because its _99th and _max numbers are so far apart.

Default resource pool for users: This is our general pool for anyone on the platform that does not have a justified use case for additional resources. We’re setting aside 25% of the cluster resources.

  • Max Memory: 640 GiB (25% of the cluster)
  • Default Query Memory Limit: 3 GiB
  • Max Running Queries: 10
  • Max Queued Queries: 10
  • Queue Timeout: 60 seconds

Default resource pool for service accounts: This is a general resource pool for standard workloads being generated by applications or scheduled processes.

  • Max Memory: 1000 GiB
  • Default Query Memory Limit: 5 GiB
  • Max Running Queries: 10
  • Max Queued Queries: 10
  • Queue Timeout: 20 minutes

Power Users resource pool: This is the resource pool for users who require more resources. user3 may be the only one that qualifies for the Power Users resource pool.

  • Max Memory: 400 GiB
  • Default Query Memory Limit: 10 GiB
  • Max Running Queries: 2
  • Max Queued Queries: 2
  • Queue Timeout: 60 minutes

svc_account2 resource pool: Out of the service accounts, this is the only one that we found that really needs a dedicated resource pool.

  • Max Memory: 240 GiB
  • Default Query Memory Limit: 12 GiB
  • Max Running Queries: 1
  • Max Queued Queries: 1
  • Queue Timeout: 5 minutes

We recommend creating dedicated resource pools for each service account to ensure that resources are protected and not consumed by standard users.

Conclusion

After implementing the guard rails of Admission Control, our customers see much more reliability and consistency in their workloads. There is some care and feeding involved, however: in some scenarios, new use cases must go through a process of requesting and justifying resources above and beyond the defaults. As a reminder, every workload on every cluster is unique, and it may take some trial and error to fully implement Admission Control. Our hope is that this blog post gives you a head start in implementing Apache Impala Admission Control in your environment. Please reach out to phData for additional information or assistance in getting Admission Control implemented.
