AWS and Pepperdata invite you to join us for lunch and a hands-on lab session. Experience for yourself the “before and after” savings with Pepperdata.
Even after you manually tune the environment and run Cluster Autoscaling, you'll see how Pepperdata eliminates additional waste to reduce instance-hour costs in real time.
What You’ll Learn:
How large customers such as Autodesk and mid-sized customers such as Extole realized an additional 30–47% in cost savings for their Spark workloads on Amazon EMR with Pepperdata.
Who Should Attend:
Amazon EMR customers running Apache Spark for data-intensive workloads. We look forward to seeing you there!
| TIME | ACTIVITY |
| --- | --- |
| 12:00 pm – 1:00 pm | Lunch and Networking |
| 1:00 pm – 1:45 pm | Introduction and Problem Overview |
| 1:45 pm – 2:20 pm | Lab 1 – Build Your Cluster |
| 2:20 pm – 3:00 pm | Lab 2 – Optimize the Cluster |
| 3:00 pm – 3:30 pm | Lab 3 – Optimize the Application |
| 3:30 pm – 3:45 pm | Break |
| 3:45 pm – 4:15 pm | Lab 4 – Optimize with Pepperdata |
| 4:15 pm – 4:45 pm | Review Results and Next Steps |
If you’re using Amazon EMR or Amazon EKS, Pepperdata can save you 30–47% by automatically reducing waste in data-intensive applications such as Apache Spark.
Pepperdata Capacity Optimizer autonomously optimizes CPU and memory in real time with no application code changes, and saves time and resources by eliminating the need for ongoing manual tuning and recommendations.
Check out this quick explainer video for more details.
See how other customers have benefited:
Looking for a safe, proven method to reduce waste and cost by up to 47% and maximize value for your cloud environment? Sign up now for a free cost optimization demo to learn how Pepperdata Capacity Optimizer can help you start saving immediately.