Autoscaling Kubernetes Clusters on AWS EKS

Client Details

A private company that builds software for government agencies had deployed their application on a Kubernetes cluster on AWS.

They wanted to implement autoscaling of both workloads and infrastructure nodes, so that the cluster would scale up and down with demand and use resources optimally.


  • The customer’s monitoring platform was built on Datadog and was used extensively. Kubernetes’s native Horizontal Pod Autoscaler integrated only with the metrics server, for which no Datadog integration was available at the time. 
  • The path to building a horizontal pod autoscaling pipeline was not clear. After the first consultation with InfraCloud, the customer settled on a pipeline built from adapters feeding the newly introduced metrics server. During implementation, however, a better approach emerged, which meant pivoting to a different design than originally envisioned. 


Autoscaling of application - pod autoscaling

InfraCloud designed an end-to-end monitoring pipeline, as shown in the diagram below, using two adapters. The first adapter converted metrics emitted by the StatsD agent into Prometheus format; the second exposed Prometheus metrics in a metrics-server-compatible format. Together, the pipeline carried custom metrics from the application to the metrics server, so the Horizontal Pod Autoscaler could scale pods on them.
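Once custom metrics reach the metrics API, the Horizontal Pod Autoscaler consumes them through a manifest like the sketch below. This is an illustrative example, not the customer's actual configuration: the deployment name `app`, the metric name `http_requests_per_second`, and the target value are hypothetical placeholders.

```yaml
# Hypothetical HPA scaling a Deployment on a custom per-pod metric
# exposed through the custom metrics API (autoscaling/v2).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app            # placeholder workload name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second   # placeholder custom metric
      target:
        type: AverageValue
        averageValue: "100"              # scale to keep ~100 req/s per pod
```

With this in place, the HPA adds pods when the average per-pod value rises above the target and removes them when it falls below.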

While implementation of the initial design was underway and testing was in progress, Datadog introduced the metrics-server-compliant Datadog Cluster Agent. It would simplify the pipeline drastically by removing components, and it would be supported by the vendor. InfraCloud recommended switching to the new design for simplicity, and the implementation was changed to the new architecture, as shown in the diagram below.
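With the Datadog Cluster Agent serving metrics directly, the HPA can target a Datadog query as an external metric instead of going through the adapter chain. The sketch below assumes the Cluster Agent's external metrics provider is registered; the metric name `nginx.net.request_per_s`, the label selector, and the target value are illustrative, not taken from the customer's setup.

```yaml
# Hypothetical HPA driven by a Datadog metric via the Cluster Agent's
# external metrics provider (type: External instead of type: Pods).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app                  # placeholder workload name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: nginx.net.request_per_s   # illustrative Datadog metric
        selector:
          matchLabels:
            kube_container_name: nginx  # illustrative Datadog tag filter
      target:
        type: AverageValue
        averageValue: "100"
```

The key simplification is that no StatsD-to-Prometheus or Prometheus-to-metrics-server adapters are needed: the Cluster Agent queries Datadog and serves the result on the external metrics API itself.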

Autoscaling of application - node autoscaling

For scaling the nodes, InfraCloud evaluated Escalator and the Kubernetes Cluster Autoscaler, and chose the Cluster Autoscaler as the better fit for the requirements. It scales the cluster’s nodes up and down based on the resource needs of the application. Scale-up and scale-down policies were defined, along with pod disruption budgets for each application. 
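A pod disruption budget tells the Cluster Autoscaler (and any other voluntary disruption) how many pods of an application must stay up while nodes are drained during scale-down. A minimal sketch, with the name `app-pdb`, the label `app: my-app`, and the threshold chosen purely for illustration:

```yaml
# Hypothetical PodDisruptionBudget: keep at least 2 matching pods
# running during voluntary disruptions such as node scale-down.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app      # placeholder label selecting the application's pods
```

Defining one budget per application, as described above, lets the Cluster Autoscaler remove underutilized nodes without ever taking an application below its availability floor.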

Build modern and scalable cloud native applications with InfraCloud. Join the cloud native revolution.