Karpenter in AWS EKS with existing cluster

Karpenter is an open-source cluster autoscaler that automatically provisions new nodes in response to unschedulable pods. Karpenter evaluates the aggregate resource requirements of the pending pods and chooses the optimal instance type to run them. It will automatically scale in or terminate instances that don’t have any non-daemonset pods to reduce waste. It also supports a consolidation feature which will actively move pods around and either delete or replace nodes with cheaper versions to reduce cluster cost.

Karpenter watches for unschedulable pods in the cluster and launches a new node sized so those pods fit on it. If the cluster has more capacity than it needs, Karpenter will also replace a larger node with a smaller one that still fits the pods and delete the larger node (this works only if we set consolidation: enabled: true in the provisioner). This is what makes Karpenter unique.

To migrate an existing cluster, whether it runs the AWS Cluster Autoscaler or no autoscaler at all, to Karpenter for automatic node provisioning, we should have the following prerequisites:

  • We will use an existing EKS cluster.

  • We will use existing VPCs and subnets.

  • We will use existing security groups.

  • Our nodes are part of one or more node groups.

  • Our workloads should have pod disruption budgets that adhere to EKS best practices (a minimal example follows this list).

  • Our cluster has an OIDC provider for service accounts.
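
For reference, a minimal PodDisruptionBudget for the sample nginx deployment used later in this post could look like the following; the selector label is the one kubectl create deployment sets by default, so match it to your own workload's labels.

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb
spec:
  minAvailable: 1          # keep at least one nginx pod running during node drains
  selector:
    matchLabels:
      app: nginx           # default label added by kubectl create deployment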

Here, we will use the aws-cli to perform the steps below.

Creating Kubernetes Cluster

I am assuming you have already installed and configured the aws-cli on your system, along with eksctl and kubectl.

First, let's create the Kubernetes cluster using the following eksctl command.

eksctl create cluster --name karpenter-poc --region ap-south-1 --version 1.23 --nodegroup-name linuxgroup --node-type t2.medium --nodes 2

Update the kubeconfig to access the cluster.

aws eks update-kubeconfig --region ap-south-1 --name karpenter-poc

Check the nodes now (you should see 2 nodes).

kubectl get nodes

Deploy a sample nginx deployment and verify it.

kubectl create deployment nginx --replicas=3 --image=nginx

Check the pods.

kubectl get pods

Begin with Karpenter

Define variables

Set a variable for the cluster name.

export CLUSTER_NAME="karpenter-poc"

Here I have used karpenter-poc as the cluster name.

You may have to check and associate the OIDC identity provider for the cluster:

eksctl utils associate-iam-oidc-provider --cluster karpenter-poc --approve

Set other variables from cluster configuration.

AWS_PARTITION="aws"
AWS_REGION="$(aws configure list | grep region | tr -s " " | cut -d" " -f3)"
OIDC_ENDPOINT="$(aws eks describe-cluster --name ${CLUSTER_NAME} \
    --query "cluster.identity.oidc.issuer" --output text)"
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' \
    --output text)

Check the values.

echo $CLUSTER_NAME $AWS_REGION $AWS_ACCOUNT_ID $OIDC_ENDPOINT

We will use this information to create our IAM roles, inline policy, and trust relationships.

Create IAM roles

We first need to create two new IAM roles: one for the nodes provisioned with Karpenter and one for the Karpenter controller.

To create the Karpenter node role we will use the following policy and commands.

echo '{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}' > node-trust-policy.json

aws iam create-role --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
    --assume-role-policy-document file://node-trust-policy.json

Now attach the required managed policies to the role.

aws iam attach-role-policy --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy

aws iam attach-role-policy --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy

aws iam attach-role-policy --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly

aws iam attach-role-policy --role-name "KarpenterNodeRole-${CLUSTER_NAME}" \
    --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

Create an EC2 instance profile and attach the node IAM role to it.

aws iam create-instance-profile \
    --instance-profile-name "KarpenterNodeInstanceProfile-${CLUSTER_NAME}"

aws iam add-role-to-instance-profile \
    --instance-profile-name "KarpenterNodeInstanceProfile-${CLUSTER_NAME}" \
    --role-name "KarpenterNodeRole-${CLUSTER_NAME}"

Now we need to create an IAM role that the Karpenter controller will use to provision new instances. The controller uses IAM Roles for Service Accounts (IRSA), which requires the OIDC provider we associated earlier.

cat << EOF > controller-trust-policy.json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT#*//}"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "${OIDC_ENDPOINT#*//}:aud": "sts.amazonaws.com",
                    "${OIDC_ENDPOINT#*//}:sub": "system:serviceaccount:karpenter:karpenter"
                }
            }
        }
    ]
}
EOF

aws iam create-role --role-name KarpenterControllerRole-${CLUSTER_NAME} \
    --assume-role-policy-document file://controller-trust-policy.json

cat << EOF > controller-policy.json
{
    "Statement": [
        {
            "Action": [
                "ssm:GetParameter",
                "ec2:DescribeImages",
                "ec2:RunInstances",
                "ec2:DescribeSubnets",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeLaunchTemplates",
                "ec2:DescribeInstances",
                "ec2:DescribeInstanceTypes",
                "ec2:DescribeInstanceTypeOfferings",
                "ec2:DescribeAvailabilityZones",
                "ec2:DeleteLaunchTemplate",
                "ec2:CreateTags",
                "ec2:CreateLaunchTemplate",
                "ec2:CreateFleet",
                "ec2:DescribeSpotPriceHistory",
                "pricing:GetProducts"
            ],
            "Effect": "Allow",
            "Resource": "*",
            "Sid": "Karpenter"
        },
        {
            "Action": "ec2:TerminateInstances",
            "Condition": {
                "StringLike": {
                    "ec2:ResourceTag/karpenter.sh/provisioner-name": "*"
                }
            },
            "Effect": "Allow",
            "Resource": "*",
            "Sid": "ConditionalEC2Termination"
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:role/KarpenterNodeRole-${CLUSTER_NAME}",
            "Sid": "PassNodeIAMRole"
        },
        {
            "Effect": "Allow",
            "Action": "eks:DescribeCluster",
            "Resource": "arn:${AWS_PARTITION}:eks:${AWS_REGION}:${AWS_ACCOUNT_ID}:cluster/${CLUSTER_NAME}",
            "Sid": "EKSClusterEndpointLookup"
        }
    ],
    "Version": "2012-10-17"
}
EOF

aws iam put-role-policy --role-name KarpenterControllerRole-${CLUSTER_NAME} \
    --policy-name KarpenterControllerPolicy-${CLUSTER_NAME} \
    --policy-document file://controller-policy.json

Add tags to subnets and security groups

We need to add tags to our nodegroup subnets so Karpenter will know which subnets to use.

"Key=karpenter.sh/discovery,Value=karpenter-poc"

You can add this tag in the AWS console or via the aws-cli; I added it manually in the console. An example aws-cli command is shown after the security group step below.

Add tags to our security groups.

"Key=karpenter.sh/discovery,Value=karpenter-poc"

Update aws-auth ConfigMap

We need to allow nodes that are using the node IAM role we just created to join the cluster. To do that we have to modify the aws-auth ConfigMap in the cluster.

kubectl edit configmap aws-auth -n kube-system

You will need to add a section to mapRoles that looks something like this. Replace the ${AWS_ACCOUNT_ID} variable with your account ID and the ${CLUSTER_NAME} variable with your cluster name, but do not replace {{EC2PrivateDNSName}}.

- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::${AWS_ACCOUNT_ID}:role/KarpenterNodeRole-${CLUSTER_NAME}
  username: system:node:{{EC2PrivateDNSName}}

The full aws-auth ConfigMap should end up with two role mappings: one for your Karpenter node role and one for your existing node group role, along the lines of the sketch below.
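
For reference, the mapRoles data would then contain two entries similar to this (the first rolearn is whatever role eksctl created for your existing node group; the name below is only a placeholder):

mapRoles: |
  - groups:
    - system:bootstrappers
    - system:nodes
    rolearn: arn:aws:iam::${AWS_ACCOUNT_ID}:role/eksctl-karpenter-poc-nodegroup-NodeInstanceRole-EXAMPLE
    username: system:node:{{EC2PrivateDNSName}}
  - groups:
    - system:bootstrappers
    - system:nodes
    rolearn: arn:aws:iam::${AWS_ACCOUNT_ID}:role/KarpenterNodeRole-${CLUSTER_NAME}
    username: system:node:{{EC2PrivateDNSName}}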

Deploy Karpenter

First, set the Karpenter release

export KARPENTER_VERSION=v0.27.3

We can now generate the full Karpenter deployment YAML from the Helm chart.

helm template karpenter oci://public.ecr.aws/karpenter/karpenter --version ${KARPENTER_VERSION} --namespace karpenter \
    --set settings.aws.defaultInstanceProfile=KarpenterNodeInstanceProfile-${CLUSTER_NAME} \
    --set settings.aws.clusterName=${CLUSTER_NAME} \
    --set serviceAccount.annotations."eks\.amazonaws\.com/role-arn"="arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:role/KarpenterControllerRole-${CLUSTER_NAME}" \
    --set controller.resources.requests.cpu=1 \
    --set controller.resources.requests.memory=1Gi \
    --set controller.resources.limits.cpu=1 \
    --set controller.resources.limits.memory=1Gi > karpenter.yaml

Modify the following lines in the karpenter.yaml file

Set node affinity

Edit the karpenter.yaml file and find the Karpenter deployment's affinity rules. Modify the affinity so Karpenter will run on one of the existing node group nodes.

The rules should look something like this. Modify the value to match your node group name, one node group per line (my node group was linuxgroup).

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: karpenter.sh/provisioner-name
          operator: DoesNotExist
      - matchExpressions:
        - key: eks.amazonaws.com/nodegroup
          operator: In
          values:
          - ${NODEGROUP}
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - topologyKey: "kubernetes.io/hostname"

Now that our deployment YAML is ready, we can create the karpenter namespace, create the Provisioner and AWSNodeTemplate CRDs, and then deploy the rest of the Karpenter resources.

kubectl create namespace karpenter
kubectl create -f \
    https://raw.githubusercontent.com/aws/karpenter/${KARPENTER_VERSION}/pkg/apis/crds/karpenter.sh_provisioners.yaml
kubectl create -f \
    https://raw.githubusercontent.com/aws/karpenter/${KARPENTER_VERSION}/pkg/apis/crds/karpenter.k8s.aws_awsnodetemplates.yaml
kubectl apply -f karpenter.yaml
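
Before moving on, confirm that the Karpenter controller pods are running:

kubectl get pods -n karpenter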

Create default provisioner

We need to create a default provisioner so Karpenter knows what types of nodes we want for unschedulable workloads. Save the following to a file (for example provisioner.yaml) and apply it with kubectl apply -f.

apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  consolidation:
    enabled: true
  weight: 100
  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["on-demand"]
    - key: karpenter.k8s.aws/instance-size
      operator: In
      values: [small, medium, large]
    - key: kubernetes.io/arch
      operator: In
      values: ["amd64"]

#  ttlSecondsAfterEmpty: 30   # cannot be set together with consolidation.enabled: true
  providerRef:
    name: default
---
apiVersion: karpenter.k8s.aws/v1alpha1
kind: AWSNodeTemplate
metadata:
  name: default
spec:
  subnetSelector:
    karpenter.sh/discovery: karpenter-poc
  securityGroupSelector:
    karpenter.sh/discovery: karpenter-poc

Remove CAS

Now that Karpenter is running, we can disable the Cluster Autoscaler if the cluster has one. To do that, we will scale its deployment down to zero replicas.

kubectl scale deploy/cluster-autoscaler -n kube-system --replicas=0

Verify Karpenter

As the node group nodes are drained, you can verify that Karpenter is creating nodes for your workloads by tailing the controller logs.

kubectl logs -f -n karpenter -c controller -l app.kubernetes.io/name=karpenter

You should also see new nodes created in your cluster as the old nodes are removed.

kubectl get nodes
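
You can also trigger Karpenter directly by giving the sample nginx deployment a CPU request and scaling it beyond what the existing nodes can hold (the numbers below are only illustrative; pick values large enough to exceed your current node group capacity), then watch new nodes appear:

kubectl set resources deployment nginx --requests=cpu=500m
kubectl scale deployment nginx --replicas=10
kubectl get nodes -w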

You can modify the provisioner requirements to match the kinds of nodes you want Karpenter to launch.
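
For example, to also allow spot capacity and larger instance sizes, the requirements block could be adjusted along these lines (a sketch; tune it to your own workloads):

  requirements:
    - key: karpenter.sh/capacity-type
      operator: In
      values: ["spot", "on-demand"]
    - key: karpenter.k8s.aws/instance-size
      operator: In
      values: [medium, large, xlarge]
    - key: kubernetes.io/arch
      operator: In
      values: ["amd64"]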