This article covers what you need to know about Amazon Elastic Kubernetes Service (EKS). Kubernetes is an open-source platform for managing containerized applications. Containerization is a software deployment process that bundles an application and its dependencies into a single executable unit. AWS provides a managed cloud service, EKS, that helps you configure, run, and manage Kubernetes clusters.
What is Amazon EKS?
Amazon Elastic Kubernetes Service (EKS) is a fully managed service that makes it easy to run Kubernetes on AWS. Users don’t have to install or operate their own Kubernetes control plane or nodes. EKS eliminates the need to maintain the availability and scalability of Kubernetes clusters.
Think of Amazon EKS as a reliable platform for orchestrating containerized workloads. It automates key tasks like provisioning infrastructure, patching, and scaling resources. EKS integrates with other AWS services, including:
EC2 instance types for the compute needs of your workloads
Elastic Container Registry (ECR) for container images
Virtual private cloud (VPC) for isolating resources
Identity and Access Management (IAM) for providing permissions and securing clusters
Amazon CloudWatch, CloudTrail, and GuardDuty for monitoring
Why Amazon EKS?
Here are some of the reasons why organizations choose Amazon EKS:
Fully Managed Service: AWS manages the underlying infrastructure. The administrator focuses on the application running on their cluster.
Scalability: Amazon EKS allows you to easily scale your clusters up or down, depending on needs. Auto-scaling in EKS adjusts computing resources without manual input from the administrator.
High Availability: Amazon EKS provides high availability by running the control plane across multiple Availability Zones. This ensures your application remains available even when a single node or component fails.
Fargate Support: You can run Kubernetes pods on AWS Fargate, a serverless compute platform for containers.
Security: You have access to robust security services, including IAM for authentication, VPC for isolation, and AWS Key Management Service (KMS) for encryption.
Cost-Effective: Amazon EKS is a cost-effective way to run Kubernetes. It offers pay-as-you-go pricing, allowing users to pay only for the resources they utilize.
Amazon EKS Components
Amazon EKS aligns with the general architecture of Kubernetes. Here are the key components of Amazon EKS:
Image source: official Kubernetes documentation
Control Plane
The control plane, or master nodes, is responsible for managing Kubernetes clusters and API operations. It consists mainly of the following:
API server: This is the main point of entry to the cluster. It exposes the Kubernetes API that lets end users, different parts of the cluster, and external components communicate.
etcd: This is a distributed key-value store. It stores the configuration data relating to the state of the cluster.
Controller manager: The controller manager performs various cluster-level functions. For instance, it can create or delete pods and services. It ensures the current state of the cluster matches the desired state stored in etcd.
Scheduler: The Kubernetes scheduler is responsible for assigning pods to suitable nodes. It determines where containers should run based on various criteria, such as availability of resources, labels, and affinity rules.
Worker Nodes
The worker nodes, or data plane, run the containerized applications. They perform the actions triggered via the Kubernetes API. In EKS, worker nodes are typically EC2 instances. The administrator is responsible for creating and managing worker nodes on AWS.
All nodes run the following services:
Kubelet: This agent allows each worker node to communicate with the API server on the control plane. It also ensures the pods are running and healthy, including their containers.
Kube-proxy: This is a network proxy that helps to manage communication between pods and services. It monitors changes to Service objects and their endpoints and translates them into network rules in the node.
Container runtime: The administrator needs to install a container runtime on each node so pods can run there. EKS supports different container runtimes depending on the version of Kubernetes in use.
Amazon EKS Node Types
AWS allows users to either manage the EC2 instances that comprise the nodes (self-managed nodes) or let AWS manage them (managed node group). There is also an option to mix both types of nodes. Below are the primary node types available on Amazon EKS:
EKS Auto Mode
AWS launched EKS Auto Mode to automate Kubernetes management beyond the control plane. It allows AWS to set up and manage the infrastructure for your worker nodes. This feature integrates the following capabilities as built-in components:
Compute auto-scaling
Cluster DNS
GPU support
Load balancing
Storage
Networking
Enabling EKS Auto Mode offloads key cluster infrastructure management tasks to AWS. This option is excellent for organizations that want to reduce operational overhead and leverage AWS expertise for daily operations.
EKS with AWS Fargate
AWS Fargate provides on-demand compute capacity for containers. It eliminates the need to manage the underlying infrastructure or worry about maintaining worker nodes. With Fargate, you specify the resource needs of your application, and AWS automatically provisions, scales, and manages the infrastructure.
Self-managed Nodes
Self-managed nodes offer complete control over the EC2 instances within your EKS clusters. You are in charge of the underlying infrastructure, including managing, scaling, and maintaining the worker nodes. Amazon provides specialized Amazon Machine Images (AMIs) configured to work with EKS. They are called Amazon EKS optimized AMIs. The components of these AMIs include kubelet, containerd, and AWS IAM Authenticator.
Managed Node Groups
You can also allow AWS to manage your worker nodes. AWS eases operation by handling tasks like patching, updating, and scaling nodes. By leveraging managed node groups, you can create, scale, and delete nodes with a single operation. AWS also allows you to use a custom launch template to specify instance configuration information. Managed node groups offer a blend of automation and customization for managing EC2 instances within a cluster.
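As an aside, if you prefer defining clusters as code rather than clicking through the console, a managed node group can also be declared with the eksctl tool's config file. The names and sizes below are illustrative, not taken from the demo later in this article:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: example-cluster      # illustrative cluster name
  region: us-east-1
managedNodeGroups:
  - name: example-nodes      # illustrative node group name
    instanceType: t3.medium
    desiredCapacity: 2       # nodes to run initially
    minSize: 1               # lower bound for scaling
    maxSize: 3               # upper bound for scaling
```

Running `eksctl create cluster -f <file>` against a config like this provisions both the control plane and the managed node group in one operation.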
Demo: How to Create an EKS Cluster
There are many methods for creating an EKS cluster. Here, we’ll show how to set up a cluster through the AWS console. This will cover setting up IAM roles, creating the master node, configuring kubectl on a Linux machine, adding worker nodes, and verifying the cluster setup.
Step 1: Set Up IAM Roles
Navigate to the AWS console and search for IAM under services. Select IAM from the results.
In the left navigation panel on the IAM page, click Roles. On the page that appears, select Create role.
Under Select trusted entity, choose AWS service.
Under Use case, select EKS and EKS - Cluster. This sets the role for managing EKS clusters. Click Next to proceed.
On the Add Permissions page, we don’t need additional permissions. Click Next to proceed.
Enter a name of choice for your role, such as DemoClusterRole-3. Click the Create role button at the bottom of the page.
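Behind the scenes, the EKS - Cluster use case attaches a trust policy that allows the EKS service to assume this role. If you ever create the role from the CLI instead of the console, the trust policy looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```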
Step 2: Create the Master Node
Search for Elastic Kubernetes Service under the list of services and select it. On the EKS page, click the Create cluster button to begin setting up the cluster.
Select Custom configuration.
Provide the necessary details, including your desired cluster name, Kubernetes version, and role (the IAM role you just created). Note the Kubernetes version you’re selecting here.
Leave the other settings at their defaults and proceed to the last page to create the cluster. Creation may take up to 20 minutes.
Step 3: Configure Kubectl on a Linux Machine
To interact with your EKS cluster, you need to install and configure kubectl, the Kubernetes command-line tool.
Create a Linux client machine.
Select EC2 under services and click Launch instances.
Give your instance a descriptive name, such as demoEKSclient. Here, we’ll select Amazon Linux AMI and proceed to Launch instance.
On the IAM dashboard, create an access key to authenticate your AWS CLI and kubectl. You’ll need the access key ID and secret access key.
Go to the instance we created earlier and click on it.
Under instance summary, click the Connect button.
On your Linux machine, enter the following command:
aws configure
This will prompt you to enter your AWS Access Key, Secret Access Key, region, and output format.
To confirm our CLI is configured correctly, let’s check the status of our master node. Run the following command:
aws eks --region us-east-1 describe-cluster --name demoEKScluster --query cluster.status
Note: your region should match the region in which you deployed your cluster.
We can see our cluster is “Active.”
Let’s set up kubectl configuration.
First, update your kubeconfig file to interact with the newly created EKS cluster by running this command:
aws eks --region us-east-1 update-kubeconfig --name demoEKScluster
Install kubectl by downloading the binary from the official AWS documentation page.
Note that we're downloading kubectl for Kubernetes version 1.30, the same version as our master node cluster.
Make kubectl executable.
Move the kubectl binary to a directory in your PATH.
Check the kubectl client version to confirm the installation.
Run kubectl to view running services and confirm it can reach the cluster.
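The steps above can be sketched as shell commands. The download URL for the kubectl binary differs per Kubernetes version and is listed in the AWS EKS user guide, so a stand-in file marks the downloaded binary in this sketch:

```shell
# In practice, download the binary first, e.g.:
#   curl -O <kubectl URL for your Kubernetes version from the AWS EKS user guide>
# A stand-in file marks the downloaded binary for this sketch.
printf '#!/bin/sh\necho "stand-in kubectl"\n' > ./kubectl

# Make kubectl executable.
chmod +x ./kubectl

# Move the binary to a directory on your PATH.
mkdir -p "$HOME/bin"
mv ./kubectl "$HOME/bin/kubectl"
export PATH="$HOME/bin:$PATH"

# With the real binary in place, you would then confirm the client
# version and cluster access:
#   kubectl version --client
#   kubectl get svc
command -v kubectl   # prints the path the shell resolves for kubectl
```

With the real binary downloaded, the `kubectl version --client` output should report 1.30 to match the control plane, and `kubectl get svc` should list the default `kubernetes` service.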
Step 4: Create worker nodes
Now, let’s add worker nodes to our EKS cluster.
On the cluster we created earlier, open the Compute tab and click Add node group.
Name your node group.
Go to the IAM console to create a Node IAM role. The one we created earlier was for EKS cluster management. We need one for node group management.
Select EC2 as the use case and click Next.
On the Add permissions page, add the following policies:
AmazonEC2ContainerRegistryReadOnly
AmazonEKS_CNI_Policy
AmazonEKSWorkerNodePolicy
Give your role a name and click Create role.
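The node role uses a trust policy similar to the cluster role's, but with EC2 as the trusted service, since the worker nodes are EC2 instances that assume the role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```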
Once created, return to the node group configuration and select this IAM role. Then scroll down the page and click Next.
The next page is for configuring node settings.
We will choose t2.micro for demo purposes. Leave every other setting at its default, review your configuration, and click Create.
Verify the worker nodes from the console. Earlier, only the demoEKSclient instance was running. Now, two worker node instances are running as well.
Verify the worker nodes with the following command:
kubectl get nodes --watch
Step 5: Delete EKS cluster configurations
Delete your worker nodes by deleting the node group.
Then delete the master node cluster. Cleaning up these resources prevents unnecessary charges.
This sums up how to create EKS clusters from the AWS console.
Conclusion
Amazon EKS simplifies how software engineers deploy, manage, and scale containerized applications. It handles the underlying infrastructure, including managing the control plane.
EKS offers high availability, seamless integration with AWS tools, and support for both managed and self-managed node configurations. It allows organizations to focus on their applications rather than the intricacies of container orchestration.
Whether you are new to Kubernetes or looking to optimize your existing workloads, Amazon EKS provides a scalable and secure solution to efficiently manage containerized applications at scale.
If you have suggestions or other topics you’d like me to write on, feel free to reach out. I’m on LinkedIn and X (Twitter). I would love to hear from you.