Properly tuning Kubernetes microservice applications is a daunting task, even for experienced performance engineers and SREs. As a consequence, companies often face reliability and performance issues and unexpected costs, even after weeks of manual tuning effort.
In this session, we present results from real-world cases demonstrating how machine learning techniques can automatically tune both Kubernetes pod settings and application runtime parameters, identifying configurations that dramatically reduce cost and improve service resilience. We also discuss a general approach to tuning pods and autoscaling policies for Kubernetes applications.