A private company that builds software for government agencies had deployed their application on AWS in a Kubernetes cluster. They wanted to implement autoscaling of both workloads and infrastructure (nodes), so that the application could scale up and down based on demand and use resources optimally.
Autoscaling of application – pod autoscaling
InfraCloud designed an end-to-end monitoring pipeline, as shown in the diagram below, using two adapters. The first adapter converted metrics emitted by the StatsD agent into a format Prometheus could consume. The second adapter exposed Prometheus metrics in the metrics server format expected by Kubernetes. Together, this pipeline allowed custom metrics from the application to reach the metrics server and drive pod autoscaling.
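To illustrate how such a pipeline is typically consumed, here is a minimal sketch of a HorizontalPodAutoscaler scaling on a custom metric served through a Prometheus adapter. The metric, deployment, and threshold values are hypothetical, not taken from the actual implementation:

```yaml
# Hypothetical HPA driven by a custom metric exposed via the
# Prometheus adapter (names and values are illustrative only).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second   # custom metric served by the adapter
        target:
          type: AverageValue
          averageValue: "100"              # scale out above ~100 req/s per pod
```

With this in place, the HPA queries the custom metrics API (backed by the adapter) and adjusts the replica count of the target Deployment accordingly.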
While the implementation based on the initial design was underway and testing was being carried out, Datadog introduced the metrics-server-compliant "Datadog Cluster Agent". This simplified the pipeline drastically by removing components, and it would also be supported by the vendor. InfraCloud recommended switching to the new design for simplicity, and the implementation was changed to the new architecture, as shown in the diagram below.
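With the Datadog Cluster Agent serving the external metrics API, the HPA can reference a Datadog metric directly, with no intermediate adapters. A minimal sketch, assuming a hypothetical metric name and label selector:

```yaml
# Hypothetical HPA using an external metric served by the
# Datadog Cluster Agent (metric name and labels are illustrative).
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: External
      external:
        metric:
          name: custom.request.rate   # hypothetical Datadog metric
          selector:
            matchLabels:
              app: my-app
        target:
          type: AverageValue
          averageValue: "100"
```

The design win here is operational: the StatsD-to-Prometheus and Prometheus-to-metrics-server conversion hops disappear, leaving a single vendor-supported component in the scaling path.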
Autoscaling of application - node autoscaling
For scaling nodes, InfraCloud evaluated Escalator and the Kubernetes Cluster Autoscaler, and chose the Cluster Autoscaler as it was a better fit for the requirements. The Cluster Autoscaler scales the nodes in a cluster based on the resource needs of the application, adding nodes when pods cannot be scheduled and removing underutilized ones. Scale-up and scale-down policies were defined, along with pod disruption budgets for each application.
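A pod disruption budget limits how many replicas of an application can be evicted at once, which keeps the Cluster Autoscaler from draining a node in a way that takes down too much of a workload during scale-down. A minimal sketch, with an illustrative app label and threshold:

```yaml
# Hypothetical PDB ensuring node scale-down never leaves the
# application with fewer than 2 running replicas.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: app-pdb
spec:
  minAvailable: 2        # keep at least 2 replicas up during voluntary disruptions
  selector:
    matchLabels:
      app: my-app
```

Defining one such budget per application, as described above, lets the Cluster Autoscaler remove nodes aggressively while still honoring each application's availability floor.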