Amazon EKS is a managed service that makes it easier to run Kubernetes on AWS. With EKS, organizations can run Kubernetes without installing and operating their own Kubernetes control plane or worker nodes. Simply put, EKS is a managed containers-as-a-service (CaaS) offering that drastically simplifies Kubernetes deployment on AWS.
CREATE IAM USER
Before starting the task, I created an IAM user and gave it full administrative access.
After creating the IAM user, I configured it with the AWS CLI; here I have already added my user.
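Configuring the CLI with the new user's credentials looks roughly like this (the profile name and region below are illustrative, not from the original setup):

```shell
# Configure the AWS CLI with the IAM user's access keys
# ("eks-admin" is an illustrative profile name)
aws configure --profile eks-admin
#   AWS Access Key ID [None]: <access key of the IAM user>
#   AWS Secret Access Key [None]: <secret key>
#   Default region name [None]: ap-south-1
#   Default output format [None]: json

# Verify which identity the CLI is now using
aws sts get-caller-identity
```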
To launch the cluster, download the eksctl command-line tool for your operating system.
CREATE AN EKS CLUSTER USING THE eksctl COMMAND
Amazon manages the Kubernetes cluster master internally, but for the slaves (worker nodes) we need to specify the configuration: how many instances we want as slaves, and what the hardware configuration of those systems should be, i.e. the instance type.
For a serverless cluster we can go for Fargate profiles, where we do not even need to specify the configuration of the slave nodes.
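A minimal eksctl config file capturing these choices might look like the sketch below. The file name matches the one used in the delete command later in this article; the cluster name, node-group name, instance type, and key name are illustrative placeholders:

```shell
# Write a minimal eksctl config file
# (cluster/node-group names, instance type, and key name are placeholders)
cat > create_cluster.yml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: mycluster
  region: ap-south-1

nodeGroups:
  - name: ng1
    instanceType: t2.micro
    desiredCapacity: 2
    ssh:
      publicKeyName: mykey   # key pair created beforehand in the EC2 console
EOF

# Launch the cluster (this takes roughly 15-20 minutes)
eksctl create cluster -f create_cluster.yml
```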
Creating the cluster may take almost 15-20 minutes, as eksctl first contacts CloudFormation, a CloudFormation template is generated to launch the resources from code, and then the cluster and the node groups are created.
Along with the cluster, it automatically starts different AWS resources like a VPC, EIPs, subnets, and security groups.
Here my EC2 instances (nodes) are launched.
Then we need to update the kubeconfig file to set the current cluster details like the server IP, the certificate authority, and the kubectl authentication, so that clients can launch their apps in that cluster using the kubectl command.
aws eks update-kubeconfig --name <cluster name>
Then it is good practice to create a namespace and create your resources inside that namespace; this helps us manage our pods.
kubectl create ns <namespace name>
Our default namespace is "default", so I want to change the default namespace to my newly created one, for which I run the command below.
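The usual way to switch the default namespace of the current kubectl context is the following (the namespace name "eksns" is illustrative):

```shell
# Point the current kubectl context at the new namespace
kubectl config set-context --current --namespace=eksns

# Verify the change
kubectl config view --minify | grep namespace
```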
Here I log in to all the cluster nodes and download the utilities for the EFS service, so that later on we can mount the EFS on those instances. To log in to those instances we need the key, which has to be created before creating the cluster, in the AWS Management Console under the EC2 service section.
To log in to the instances, we can use SSH.
ssh -i <key.pemfile> -l <username> <Public Ip of the instance>
yum install amazon-efs-utils -y
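Once amazon-efs-utils is installed, the filesystem can later be mounted on a node with the EFS mount helper; a sketch, with a placeholder filesystem ID:

```shell
# Mount the EFS filesystem using the efs mount helper from amazon-efs-utils
# (fs-0123abcd is a placeholder for your own filesystem ID)
sudo mkdir -p /mnt/efs
sudo mount -t efs fs-0123abcd:/ /mnt/efs
```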
The efs-provisioner allows you to mount EFS storage as PersistentVolumes in Kubernetes. It consists of a container that has access to an AWS EFS resource. The container reads a ConfigMap which contains the EFS filesystem ID, the AWS region, and the name you want to use for your efs-provisioner.
CREATE AN EFS PROVISIONER
But before creating the efs-provisioner, create the EFS filesystem from which the pods will get the real storage.
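The ConfigMap the provisioner reads can be sketched as follows; the filesystem ID, region, and provisioner name below are placeholders you would replace with your own values:

```shell
# ConfigMap consumed by the efs-provisioner
# (file.system.id, aws.region, and provisioner.name are placeholders)
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-provisioner
data:
  file.system.id: fs-0123abcd
  aws.region: ap-south-1
  provisioner.name: example.com/aws-efs
EOF
```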
Here I chose the VPC created by the eksctl command, so it automatically attaches the subnets as well.
So it is compulsory that the security group you attach to the EFS is the same as that of the instances: one in which all the nodes can connect to each other and all ports are allowed within that security group, as the NFS service port needs to be allowed on every instance where the EFS is to be mounted.
Then, while creating the manifest file for EFS provisioning, we require the EFS ID and the EFS DNS name; both can be obtained from the EFS service after it is created.
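Both values can also be read back from the CLI; a sketch:

```shell
# List the filesystem IDs of the EFS filesystems in the current region
aws efs describe-file-systems --query 'FileSystems[].FileSystemId'

# The DNS name follows a fixed pattern:
#   <file-system-id>.efs.<region>.amazonaws.com
# e.g. fs-0123abcd.efs.ap-south-1.amazonaws.com  (placeholder ID)
```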
CREATE A CLUSTER ROLE BINDING FOR EFS PROVISIONER:
Create a file, cluster-role-binding.yaml, that defines a cluster role binding assigning the defined role to the service account.
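A sketch of such a binding is shown below; the binding name, namespace, and ClusterRole name are illustrative, and the ClusterRole itself is assumed to be defined in the provisioner's RBAC manifests:

```shell
# Bind the provisioner's ClusterRole to the service account
# (names and namespace below are placeholders)
cat > cluster-role-binding.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: efs-provisioner-role-binding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: eksns
roleRef:
  kind: ClusterRole
  name: efs-provisioner-runner   # assumed to be defined elsewhere
  apiGroup: rbac.authorization.k8s.io
EOF
kubectl apply -f cluster-role-binding.yaml
```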
The JOOMLA_DB_HOST value should be the same as the service name of the MySQL deployment, which in my case is joomla-mysql. Since we set clusterIP: None, we can use the name of that service to establish the connection between the two pods in the same cluster.
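One way to wire this up is to set the variable on the Joomla deployment directly; the deployment name "joomla" is an assumption here, while "joomla-mysql" is the service name from the text:

```shell
# Point the Joomla pods at the MySQL headless service by name
# (deployment name "joomla" is illustrative)
kubectl set env deployment/joomla JOOMLA_DB_HOST=joomla-mysql
```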
To delete the whole environment we need only a single command, but it deletes only the resources that were created by the eksctl command; resources like the EFS service we have to delete manually.
eksctl delete cluster -f create_cluster.yml
Here we can also use a Fargate profile for a serverless cluster.
Note that in the Mumbai region this service may not be available.