AWS EKS Auto Mode - Simplified Kubernetes for Everyone

Udaara Jayawardana
10 min read · Jan 13, 2025

--

Managing Kubernetes is like preparing a fine gourmet dinner without the original recipe. Rewarding, yet frequently chaotic and unpredictable. For all its promises of orchestration and scalability, Kubernetes can feel like attempting to host a party where each guest has a list of demands, dietary restrictions, and requires personal attention, all at the same time.

Enter AWS EKS, a service designed to take some of that chaos and make it manageable. EKS is like having a dependable butler for your Kubernetes needs. It sets the table, arranges the seats, and ensures that everyone plays along. Even with this capable helper, there is still the task of maintaining the infrastructure, scaling correctly, and keeping costs in check.

Here’s where EKS Auto Mode comes in, like an expert event planner you didn’t know you needed. It’s Kubernetes without the heavy lifting. A simple, serverless solution that allows you to focus on your apps while AWS manages the rest. Imagine the party now: everyone is having a good time, the drinks are flowing, and all you had to do was show up! Sounds AWSome, doesn’t it?

Auto Mode lets you EKS and Chill! :D

What AWS Says About EKS Auto Mode

Amazon EKS Auto Mode fully automates Kubernetes cluster management for compute, storage, and networking on AWS with a single click. It simplifies Kubernetes management by automatically provisioning infrastructure, selecting optimal compute instances, dynamically scaling resources, continuously optimizing costs, managing core add-ons, patching operating systems, and integrating with AWS security services

Leverage the scalability and operational excellence of AWS without deep Kubernetes expertise or ongoing infrastructure management overhead. By offloading cluster operations to AWS, you can get started quickly, improve performance, and reduce overhead, letting you focus on building applications that drive innovation instead of cluster management

— AWS EKS Auto Mode Documentation

EKS Auto Mode vs EKS Standard Mode

Although standard mode provided a managed control plane, running production-grade Kubernetes applications demanded significant expertise and constant effort. Users were responsible for:

  • Selecting and provisioning the right EC2 instances
  • Handling scaling via external tools like Karpenter for dynamic node provisioning
  • Installing and maintaining plug-ins
  • Performing cluster upgrades
  • Patching OS of nodes
  • Optimising resources
  • Managing costs

It was a continuous balancing act of infrastructure management and application development.

EKS Standard Mode

EKS Auto Mode addresses the limitations of Standard Mode by automating infrastructure tasks such as node provisioning, scaling, and maintenance. It eliminates the need for manual resource optimisation, reduces operational complexity, and simplifies Kubernetes management.

  • Node management is fully automated, removing the need to manually select, provision, or maintain EC2 instances
  • Dynamically scales and optimises the data plane to meet real-time application demands
  • Self-healing infrastructure with unhealthy nodes automatically detected and replaced to ensure HA
  • Compute, Networking, and Storage controllers are fully managed by EKS
  • Cost-efficiency as users only pay for resources consumed

However, it’s worth noting that even though EKS Auto simplifies compute and storage, users still need to handle networking configurations such as VPCs and security groups.

EKS Auto Mode

Interesting Insights About EKS Auto Mode

  1. EKS Auto Mode automatically provisions EC2 instances running the Bottlerocket OS for faster boot times, optimal performance and security
  2. Ephemeral compute resources are used, with instances being cycled on a regular basis to reduce security risks and keep infrastructure up to date
  3. Karpenter powers automatic node scaling, which dynamically provisions nodes to match workload demands
  4. Built-in GPU support enables high-performance tasks without requiring additional configuration

EKS Auto Mode vs ECS Fargate

EKS Auto Mode and ECS Fargate both provide serverless computing solutions, which eliminate the need to manage underlying infrastructure. They scale automatically to meet workload demands and connect easily with other AWS services.

So what’s the difference, you might ask?

  • Orchestration: EKS Auto Mode uses Kubernetes, making it ideal for Kubernetes-native applications. ECS Fargate relies on AWS ECS
  • Infrastructure: EKS Auto Mode automates EC2 instance management but still exposes Kubernetes node-level concepts. ECS Fargate fully eliminates the nodes and infrastructure
  • Scaling: EKS Auto Mode scales at the pod level, adhering to Kubernetes configurations. ECS Fargate scales tasks using ECS service definitions

Setting up EKS Auto Mode Cluster — Hands On!

Creating an EKS Auto cluster is, as advertised by AWS, just a single click! No pesky YAMLs!
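If you prefer the CLI over console clicks, recent versions of eksctl can also create an Auto Mode cluster from a config file. A sketch, assuming eksctl's `autoModeConfig` support (the cluster name and region below are placeholders):

```yaml
# eksctl ClusterConfig for an EKS Auto Mode cluster (sketch)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-auto-cluster   # placeholder name
  region: us-east-1       # placeholder region
autoModeConfig:
  # Enabling Auto Mode hands compute, storage, and networking over to EKS
  enabled: true
```

Applied with `eksctl create cluster -f cluster.yaml` — though then you don’t get the fun auto-generated names.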

Create EKS Auto Cluster

Remember, EKS Auto only works with Kubernetes 1.29 and above. So if you are running an older version of K8s, you’ll have to upgrade it.
Another thing I love is the auto-generated cluster names. Look what I got! :D

By default you do not get any nodes, as nothing is running in the cluster yet. It’s important to remember that EKS Auto Mode doesn’t expose nodes to the user either; even when nodes spin up once you deploy your apps, you cannot SSH into them.
However, you do get two Node Pools (General Purpose & System) at the start

  • System node pool runs the critical system components with a taint (eks.amazonaws.com/nodegroup-type=system:NoSchedule) to reserve resources for cluster operations
  • General-Purpose node pool handles all general workloads without taints, making it suitable for applications deployed by the user
Node Pools in kubectl
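To illustrate, a workload that should land on the System pool must tolerate the taint mentioned above. A minimal sketch (the pod name and image here are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: system-helper   # hypothetical name
spec:
  # Tolerate the System node pool taint so the pod can schedule there
  tolerations:
    - key: eks.amazonaws.com/nodegroup-type
      operator: Equal
      value: system
      effect: NoSchedule
  containers:
    - name: app
      image: public.ecr.aws/docker/library/busybox:stable
      command: ["sleep", "infinity"]
```

Regular workloads need no tolerations at all; they simply land on the General-Purpose pool.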

You can play around with these built-in node pools using custom configurations, and even disable them altogether. To learn more, refer to this AWS documentation
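As a taste of what that customisation looks like: custom node pools in Auto Mode follow the Karpenter NodePool API and reference the EKS-managed NodeClass. A hedged sketch (verify the exact field names against the AWS documentation):

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: custom-pool   # hypothetical name
spec:
  template:
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default   # the built-in Auto Mode NodeClass
      requirements:
        # Restrict this pool to compute-optimised and general-purpose families
        - key: eks.amazonaws.com/instance-category
          operator: In
          values: ["c", "m"]
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
```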

Default Compute Resources

The Metrics Server is the only add-on by default. This is because EKS Auto Mode makes cluster management easier by delegating many functions to AWS-managed infrastructure.
AWS automatically provisions and manages key components such as networking, DNS, and load balancing

Metrics Server is provided because it enables certain Kubernetes-native features that rely on resource metrics, such as scaling (Horizontal Pod Autoscaler) and monitoring (kubectl top)
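For example, with the Metrics Server in place, a standard HorizontalPodAutoscaler works out of the box (the deployment name `web` is a placeholder):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web          # placeholder deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```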

Default Add-Ons — Only Metrics Server

What about the other EKS capabilities, then? Well, here you go!

Core Capabilities Comparison

A Brief Look at Compute, Storage, and Networking in EKS Auto Mode

Compute

  • Dynamically selects the best instance types for the application
  • Runs managed EC2 instances. These are EC2 instances deployed in your account by EKS Auto Mode, which is responsible for launching, managing, and securing them
  • Continuously optimises the instances as demand on your application changes. Under-utilised instances get terminated, and if a cheaper instance is identified, it will replace the current nodes
  • User-defined auto Node Pools let users specify the required instance types
  • Also supports reserved instances, spot instances, and even Compute Savings Plans
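For instance, steering a workload onto Spot capacity is just a node selector on the well-known Karpenter label. A sketch with placeholder names:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: batch-worker   # hypothetical workload
spec:
  replicas: 2
  selector:
    matchLabels:
      app: batch-worker
  template:
    metadata:
      labels:
        app: batch-worker
    spec:
      # Ask Auto Mode for Spot capacity via the well-known Karpenter label
      nodeSelector:
        karpenter.sh/capacity-type: spot
      containers:
        - name: worker
          image: public.ecr.aws/docker/library/busybox:stable
          command: ["sleep", "infinity"]
```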

Networking

  • CoreDNS & Fully managed Network Proxy runs on every node
  • Fully managed VPC CNI

Load Balancers

  • Managed Load Balancers for EKS Auto Mode clusters, pre-configured with network best practices
  • Satisfies K8s Ingress resources by provisioning ALBs
  • Satisfies K8s Service resources by provisioning NLBs
  • A new Ingress Class is introduced to work specifically with this new Load Balancer Controller
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
spec:
  # Configures the IngressClass to use EKS Auto Mode
  controller: eks.amazonaws.com/alb
  parameters:
    apiGroup: eks.amazonaws.com
    kind: IngressClassParams
    name: alb
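An Ingress then simply references that class by name, and Auto Mode provisions the ALB behind it (the service name and port are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: alb   # the IngressClass defined above
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web      # placeholder service
                port:
                  number: 80
```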

Storage

  • Comes with EBS CSI Driver
  • New Storage Class introduced. Ensures that persistent volumes (PVs) are provisioned and managed in alignment with the requirements of the Auto Mode architecture
  • But EKS Auto Mode does not create a StorageClass for you. You must create a StorageClass referencing ebs.csi.eks.amazonaws.com to use the storage capability of EKS Auto Mode
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: auto-ebs-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.eks.amazonaws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
  encrypted: "true"
  • Other storage types such as EFS and S3 can be accessed through the existing EKS add-ons
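With that StorageClass created, a PersistentVolumeClaim references it as usual; thanks to WaitForFirstConsumer, the EBS volume is only provisioned once the first consuming pod is scheduled (the claim name below is hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data   # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: auto-ebs-sc   # the StorageClass defined above
  resources:
    requests:
      storage: 8Gi
```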

Deploying an App and Cluster Upgrade

I’ve used the same app shown in the EKS Auto demo at re:Invent 2024. If you are testing EKS Auto, it’s a great place to start!

https://github.com/aws-containers/retail-store-sample-app

As soon as you deploy an application to the EKS Auto Cluster, resources begin to spin up. Pods, deployments, stateful sets…

Workloads: Pods
Workloads: StatefulSets

Nodes

Another change is the deployment of new managed EC2 nodes. As mentioned earlier, these are handled by the EKS service, and you cannot SSH into them. They are not associated with a key pair, and no SSM agent is active.

New managed EC2 Nodes spin up
Node Details
I went above and beyond… and tried to SSH to the nodes! :D

Upgrading Kubernetes

This is by far the feature I love the most. Upgrading the cluster is just a single click, like everything else with EKS Auto

After upgrading the control plane, EKS Auto Mode will begin incrementally updating managed nodes to align with the new Kubernetes version. This process respects pod disruption budgets to minimize impact on your workloads. Therefore, no manual intervention is required for node updates

Single click K8s version Upgrade

Key Limitations compared to EKS Standard

Automated Node Maintenance
EKS Auto Mode provides automated node maintenance, and handles node upgrades with health checks, pod eviction, disruption budgets, and rolling updates. However,

  • ⚠️ Manual control plane upgrades required
  • ⚠️ Potential compatibility issues with older Helm charts, manifests, or deprecated APIs

Karpenter Dependency
EKS Auto Mode leverages Karpenter for dynamic provisioning and scaling. So it handles standardised configurations for instances, volumes, and networking automatically. However,

  • ⚠️ Specialised workloads requiring custom configurations, kernel modules, or CNIs (e.g. Calico) may face conflicts

NodePool Customisation

  • ⚠️ Limited UI options
  • ⚠️ Advanced configurations (e.g. custom instance types and sizes) require YAML manifests, AWS CLI, or eksctl
  • ⚠️ Predefined instance categories (c, m, r) are non-editable
  • ⚠️ System NodePools support ARM and AMD, while General-Purpose supports only AMD

Restricted Operations

  • ⚠️ No SSH/SSM access to nodes
  • ⚠️ Monitoring and troubleshooting require Kubernetes-native tools (kubectl) or AWS services like CloudWatch Logs and the EKS console. Therefore, the deep troubleshooting possible in standard mode is not possible with EKS Auto

Additional Charges

  • ⚠️ EKS Auto Mode adds a management fee on top of standard EC2 costs
  • 💡 See if the operational benefits outweigh the additional costs for your workload, before switching to Auto Mode

EKS Auto Best Practices (from AWS!)

These best practices are taken directly from AWS documentation, providing you with their exact recommendations for EKS Auto Mode.

  • Configure pod disruption budgets to protect workloads against voluntary disruptions: During voluntary disruptions, such as when EKS Auto Mode disrupts an underused node, disruption budgets help control the rate at which replicas of a deployment are interrupted, helping to preserve some workload capacity to continue serving traffic or processing work.
  • Schedule replicas across nodes and Availability Zones for high-availability: Use pod Topology Spread Constraints to spread workloads across nodes and to minimise the chance of running multiple replicas of a deployment on the same node.
  • Configure appropriate resource requests and limits: EKS Auto Mode launches EC2 instances based on the vCPU and memory requests of the workloads. Resource requests must be carefully configured otherwise resources could be over-provisioned. EKS Auto Mode doesn’t consider resource limits or usage.
  • Applications must handle graceful shutdowns: Your application must be able to shut down gracefully by handling a SIGTERM signal to prevent loss of work or an interrupted end-user experience during voluntary disruptions. When Kubernetes decides to evict a pod, a SIGTERM signal is sent to the main process of each container in the pods being evicted. After the SIGTERM signal is sent, Kubernetes gives the process some time (the grace period) before a SIGKILL signal is sent. This grace period is 30 seconds by default. You can override the default by declaring terminationGracePeriodSeconds in your pod specification.
  • Avoid overly constraining compute selection: The general-purpose EKS Auto Mode NodePool diversifies across c, m, r Amazon EC2 families of different sizes to maximise the opportunity to pick a right-sized EC2 instance for a workload. For workloads with specific compute requirements, you can use well-known Kubernetes labels to allow pods to request only certain instance types, architectures, or other attributes when creating nodes.
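Several of these recommendations can be sketched together in a single manifest (all names, the image, and the resource sizes below are placeholders):

```yaml
# A PodDisruptionBudget to limit voluntary disruptions
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: web
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      # Give the app time to handle SIGTERM (default grace period is 30s)
      terminationGracePeriodSeconds: 60
      # Spread replicas across Availability Zones
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: web
      containers:
        - name: web
          image: public.ecr.aws/nginx/nginx:stable   # placeholder image
          # Auto Mode sizes nodes from requests, so set them deliberately
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
```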

Closing Notes

So here we are, at the end of what could have been a very complicated story but wasn’t, thanks to EKS Auto Mode. It takes all the repetitive, time-consuming parts of Kubernetes management and just… handles them. No drama, no chaos, just quiet efficiency in the background.

It’s the kind of simplicity that makes you wonder if someone else is doing all of the work while you focus on building the applications. And, guess what? That’s exactly what’s happening.

So sit back, deploy your apps, relax, and let EKS Auto Mode handle the rest. Kubernetes has never been this simple!


Written by Udaara Jayawardana

A DevOps Engineer who specialises in the design and implementation of AWS and Containerized Infrastructure.
