Priyanka Sharma

In our previous blogs, we have covered deployment strategies, networking, and logging for Kubernetes clusters. By default, AWS provides EKS-optimized AMIs for the EKS worker nodes, which use Amazon Linux as the operating system. In this article, we will discuss how to configure RHEL workers for an AWS EKS cluster.

  • Red Hat Enterprise Linux 7.6
  • Kubernetes 1.13 on AWS EKS. We have opted for private subnets for the EKS Control Plane.

To provision a new EKS cluster, use the below command:
aws eks create-cluster --name <CLUSTER_NAME> --role-arn arn:aws:iam::<ACCOUNT>:role/<EKS_SERVICE_ROLE> --resources-vpc-config subnetIds=<PRIV_SUBNETA>,<PRIV_SUBNETB>,<PRIV_SUBNETC>,securityGroupIds=<EKS_SECURITYGROUP_ID>,endpointPublicAccess=false,endpointPrivateAccess=true --region ap-south-1

If the cluster is running an older version, upgrade to the latest one using the below command:

aws eks update-cluster-version --name <CLUSTER_NAME> --client-request-token updating-version --kubernetes-version 1.13 --region ap-south-1

Check the cluster status using the below command:

aws eks describe-cluster --name <CLUSTER_NAME> --query cluster.status --region ap-south-1

Update the kubeconfig. Ensure you are using the latest version of the AWS CLI; in our case, it is 1.16.195.

aws eks --region ap-south-1 update-kubeconfig --name <CLUSTER_NAME>
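To verify that kubectl can reach the new cluster (assuming kubectl and aws-iam-authenticator are already installed, and since the endpoint is private, running from a host with network access to the VPC), list the default service:

kubectl get svc
# The built-in "kubernetes" ClusterIP service should be returned if API access is working.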
The steps involved are:

  • Provision RHEL 7.6 as a standalone EC2 server.
  • Execute a shell script to make it EKS-optimized. The script is available in the Git repo.
  • Take an AMI of the RHEL server.
  • Pass the AMI ID to the CloudFormation template parameters to provision the worker nodes.
  • Create the aws-auth ConfigMap and pass the ARN of the instance role.
  • See the RHEL servers registering as workers.

Walking through these steps in detail:

  • Switch to the EC2 console and provision an EC2 server with the RHEL 7.6 AMI.
  • Install the dependencies using the below commands:
yum install -y git
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install -y python-pip
pip install --upgrade awscli
pip install --upgrade aws-cfn-bootstrap
mkdir -p /opt/aws/bin
ln -s /usr/bin/cfn-signal /opt/aws/bin/cfn-signal
# The container-selinux version below can be replaced with the version required by Docker
yum install -y http://mirror.centos.org/centos/7/extras/x86_64/Packages/container-selinux-2.74-1.el7.noarch.rpm
# If SELinux is not set to permissive, the Docker containers will not be able to provision and will throw a Permission Denied error
sed -i 's/enforcing/permissive/g' /etc/selinux/config
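Note that editing /etc/selinux/config only takes effect after a reboot. To switch the running instance to permissive mode immediately, additionally run:

setenforce 0
getenforce    # should now print "Permissive"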
  • Clone the Git repo and execute install-worker.sh.
git clone https://github.com/powerupcloud/aws-eks-rhel-workers.git
cd aws-eks-rhel-workers
sh install-worker.sh
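Before creating the AMI, it helps to confirm the worker components landed where the bootstrap expects them. Assuming install-worker.sh mirrors the EKS-optimized AMI layout (Docker, kubelet, and the EKS bootstrap script under /etc/eks), a quick check looks like:

docker --version
kubelet --version
ls -l /etc/eks/bootstrap.sh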
  • Go to EC2 Console and create an AMI of this server.
  • Provision a CloudFormation stack with the below template provided by AWS:
https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/amazon-eks-nodegroup.yaml
  • In the parameter “NodeImageId”, input the Image ID of the AMI created in the previous step.
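If you prefer the CLI over the console, a minimal create-stack sketch is shown below. The parameter names follow the AWS-provided nodegroup template (check its Parameters section for the full list), and the placeholder values are examples to be replaced with your own:

aws cloudformation create-stack --stack-name <CLUSTER_NAME>-rhel-workers \
  --region ap-south-1 \
  --capabilities CAPABILITY_IAM \
  --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/amazon-eks-nodegroup.yaml \
  --parameters ParameterKey=ClusterName,ParameterValue=<CLUSTER_NAME> \
    ParameterKey=ClusterControlPlaneSecurityGroup,ParameterValue=<EKS_SECURITYGROUP_ID> \
    ParameterKey=NodeGroupName,ParameterValue=rhel-workers \
    ParameterKey=NodeImageId,ParameterValue=<RHEL_AMI_ID> \
    ParameterKey=NodeInstanceType,ParameterValue=<INSTANCE_TYPE> \
    ParameterKey=KeyName,ParameterValue=<KEYPAIR_NAME> \
    ParameterKey=VpcId,ParameterValue=<VPC_ID> \
    ParameterKey=Subnets,ParameterValue='<PRIV_SUBNETA>\,<PRIV_SUBNETB>\,<PRIV_SUBNETC>'
# Commas inside the Subnets list value are escaped with a backslash so the CLI does not split them.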

Wait for the bootstrap script to execute inside the worker node. Get the Instance Role ARN from the CloudFormation stack outputs and provide it as the value of rolearn in the below YAML template (save it as aws-auth.yaml).
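The role ARN can also be pulled from the CLI. Assuming the AWS-provided template exposes the role under the output key NodeInstanceRole (verify the key in your stack's Outputs tab) and with <WORKER_STACK_NAME> as a placeholder for your nodegroup stack name:

aws cloudformation describe-stacks --stack-name <WORKER_STACK_NAME> --query "Stacks[0].Outputs[?OutputKey=='NodeInstanceRole'].OutputValue" --output text --region ap-south-1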

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

Execute “kubectl apply -f aws-auth.yaml”.
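To confirm the ConfigMap was applied as expected, it can be inspected with:

kubectl -n kube-system get configmap aws-auth -o yaml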

Run “kubectl get nodes”. The RHEL worker node is registered with the EKS Cluster.

And that’s all. At this point, we have RHEL 7.6 worker nodes running in the Kubernetes cluster.
