Myth #5 of Apache Spark Optimization: Spark Dynamic Allocation

In this blog series we’re examining the Five Myths of Apache Spark Optimization. The fifth and final myth in this series relates to another common assumption of many Spark users: Spark Dynamic Allocation automatically prevents Spark from wasting resources.

The Value of Spark Dynamic Allocation

Spark Dynamic Allocation is a useful feature that was developed through the Spark community’s focus on continuous innovation and improvement. This feature optimizes the resource utilization of Spark applications by dynamically adding and removing executors based on workload requirements. It attempts to fully utilize the available task slots per executor, eliminating the need for developers to rightsize the number of executors before applications start running.

Because of these benefits, Spark Dynamic Allocation is widely considered a no-brainer: if the application architecture can handle it, most developers will enable it.
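
To make this concrete, here is a minimal sketch of what enabling the feature typically looks like in application code. The application name, executor bounds, and other values are illustrative assumptions rather than recommendations for any particular workload.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch of enabling Spark Dynamic Allocation. The app name and
// executor bounds below are illustrative assumptions only.
val spark = SparkSession.builder()
  .appName("dynamic-allocation-example") // hypothetical application name
  .config("spark.dynamicAllocation.enabled", "true")
  // Spark adds and removes executors at runtime between these bounds.
  .config("spark.dynamicAllocation.minExecutors", "2")
  .config("spark.dynamicAllocation.maxExecutors", "50")
  // Shuffle data must survive executor removal; shuffle tracking (Spark 3.0+)
  // or an external shuffle service satisfies that requirement.
  .config("spark.dynamicAllocation.shuffleTracking.enabled", "true")
  .getOrCreate()
```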

But an important question to ask is: What can Spark Dynamic Allocation not do?

What Spark Dynamic Allocation Cannot Do

  1. Tasks Cannot Use Their Full Allocation at All Times
    If an executor can run a certain number of tasks concurrently, then ideally that many tasks should be running in it. But for most applications this number is not constant, because most tasks do not use their full allocation inside the executor at all times—which means resources are wasted. As we saw with application resource requirements in Myth 4, allocations are typically set to accommodate peak usage, even though applications and the tasks within them don’t run at peak most of the time. In fact, the number of running tasks often varies quite dramatically over time.
  2. Spark Dynamic Allocation Leaves Waste on the Table Due to Task Variability
    No matter what a Spark developer does, there is no knob within Spark that forces all of the tasks to fully use all of the available executors. As a result, Spark executors underutilize resources, leading to waste and unneeded spend (see the sketch after this list for a rough illustration).
  3. Spark Dynamic Allocation Cannot Guarantee Equitable Resource Allocation in Multi-Tenant Environments
    Even when Spark Dynamic Allocation is enabled, a Spark application can request, and potentially consume, all of the cluster’s resources. If more than a few applications are running, these resource-hungry applications could starve or even stop other applications running in the same cluster. This problem can be amplified in a multi-tenant environment—a common setup for SaaS-based applications—possibly preventing users or teams from accessing or using the environment.
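
To put rough numbers on the first two points, here is a back-of-the-envelope sketch. Every figure in it is a made-up assumption for illustration, including the hypothetical SlotUtilizationSketch object itself.

```scala
// Back-of-the-envelope sketch of the slot-utilization gap described in
// points 1 and 2 above. Every number here is an illustrative assumption.
object SlotUtilizationSketch extends App {
  val executors        = 20   // executors currently allocated
  val executorCores    = 4    // spark.executor.cores
  val taskCpus         = 1    // spark.task.cpus
  val slotsPerExecutor = executorCores / taskCpus
  val totalSlots       = executors * slotsPerExecutor // 80 task slots

  // Hypothetical average number of concurrently running tasks over the job.
  val avgRunningTasks = 30.0

  val utilization = avgRunningTasks / totalSlots // 0.375
  println(f"Average slot utilization: ${utilization * 100}%.1f%%")
  println(f"Idle capacity you still pay for: ${(1 - utilization) * 100}%.1f%%")
}
```

Dynamic allocation removes an executor only once it has no running tasks at all; executors that are merely half-busy stay allocated, which is exactly the waste described above.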

Spark Dynamic Allocation: A Useful but Incomplete Solution

Spark Dynamic Allocation provides significant efficiency benefits by automatically adding executors when there is a backlog of pending tasks and removing them when they sit idle. It also eliminates the need for developers to rightsize the number of executors up front.
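
For reference, that request-and-remove behavior is governed by a handful of timeouts. The sketch below lists the relevant settings with their documented default values, assuming dynamic allocation is already enabled; it is a reference sketch, not a tuning recommendation.

```scala
import org.apache.spark.SparkConf

// Sketch of the knobs behind the add/remove behavior described above.
// The values shown are Spark's documented defaults, not tuned recommendations.
val conf = new SparkConf()
  // Request executors after pending tasks have been backlogged this long.
  .set("spark.dynamicAllocation.schedulerBacklogTimeout", "1s")
  // Keep requesting more at this interval while the backlog persists.
  .set("spark.dynamicAllocation.sustainedSchedulerBacklogTimeout", "1s")
  // Remove an executor after it has run no tasks for this long.
  .set("spark.dynamicAllocation.executorIdleTimeout", "60s")
```

All of these controls operate at the granularity of whole executors; none of them addresses how fully the tasks inside each executor actually use their allocation, which is the gap described next.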

However, Spark Dynamic Allocation is not a standalone solution to the problem of Spark optimization, because it cannot prevent low resource utilization inside Spark executors. Even when Spark Dynamic Allocation is implemented, resources are often still underutilized because tasks are not static, and they do not consume their peak allocation all the time. As a result, significant waste can still remain.

Summarizing the Five Myths

That wraps up our examination of the five myths around Apache Spark Optimization! Here’s a quick recap of each myth and why buying into these myths means that you still leave money and capacity on the table:

Myth 1. Observability & Monitoring

Observing and monitoring my Spark environment means I’ll be able to find the wasteful apps and tune them.

The Truth About Observability & Monitoring

Observing and monitoring your Spark environment can help you find pockets of waste, but finding the waste isn’t the same as fixing it. Recommendations for eliminating waste simply generate more work for developers, and that work becomes impossible to keep up with at scale. Busy developers may be unwilling to implement such recommendations for apps that aren’t actually broken. And Spark waste still exists even after tuning for peak resource usage, because the non-peak times are still driving peak-level costs.

Myth 2. Cluster Autoscaling

Cluster Autoscaling stops applications from wasting resources.

The Truth About Cluster Autoscaling

Cluster Autoscaling adds tremendous value in automatically responding to requests for resources and terminating instances when they're no longer needed. However, Spark applications—and specifically Spark executors—still generate waste by requesting resources and not using them, regardless of whether Cluster Autoscaling is enabled or not.

Myth 3. Instance Rightsizing

Choosing the right instances will eliminate the waste in my cluster.

The Truth About Instance Rightsizing

Instance Rightsizing can reduce costs by aligning application needs with instance resources. However, Instance Rightsizing cannot prevent inefficient applications from driving waste—even with optimal instance types. Furthermore, the choice of instance type cannot be made dynamically from second to second as application resource requirements change, which leads to waste.

Myth 4. Manual Application Tuning

Spark application tuning can eliminate all of the waste in my applications.

The Truth About Manual Application Tuning

Application tuning can pull down resource allocations to the peak of the utilization curve while preventing the application from failing due to too few resources. However, it cannot eliminate the Spark waste that still occurs when the utilization curve is not at peak—which is most of the time—nor can it account for changing needs as data characteristics change dynamically. This waste from non-peak times driving peak-level costs is still significant, typically 30% or more for most Spark applications. And, most of the time, busy developers want to be developing, not spending their time tuning applications.

Myth 5. Spark Dynamic Allocation

Spark Dynamic Allocation automatically prevents Spark from wasting resources.

The Truth About Spark Dynamic Allocation

As we saw above, Spark Dynamic Allocation is a "no-brainer" for many applications, since it eliminates the need for developers to rightsize the number of executors by attempting to fully utilize the available task slots per executor. However, Spark Dynamic Allocation cannot prevent low resource utilization inside Spark executors. Even when Spark Dynamic Allocation is implemented, Spark applications still underutilize resources because, most of the time, tasks are not consuming resources at their peak allocation levels.

We have one more blog article in this series—a bonus myth that we haven’t covered yet, along with a solution to the fundamental problem of Spark applications wasting resources. Stay tuned for a sneak peek!

Explore More

Looking for a safe, proven method to reduce waste and cost by up to 47% and maximize value for your cloud environment? Sign up now for a free Cost Optimization Proof-of-Value to see how Pepperdata Capacity Optimizer can help you start saving immediately.