- Setting up Terraform to facilitate infrastructure provisioning.
- Using Terraform to provision the Jenkins master, build nodes, and Ansible.
- Configuring an Ansible server.
- Employing Ansible to configure the Jenkins master and build nodes.
- Creating a Jenkins pipeline job.
- Developing a Jenkinsfile from scratch.
- Configuring SonarQube and the Sonar scanner.
- Executing SonarQube analysis for code quality assessment.
- Configuring JFrog Artifactory.
- Creating a Dockerfile for containerization.
- Storing Docker images on Artifactory.
- Utilizing Terraform to provision a Kubernetes cluster.
- Creating Kubernetes objects.
- Deploying Kubernetes objects using Helm.
- Setting up Prometheus and Grafana using Helm charts.
- Monitoring the Kubernetes cluster using Prometheus.
As part of this, we should set up:
- Terraform
- VS Code
- AWS CLI
- Download the latest Terraform version from the official website
- Set up the environment variable:
  - Click Start --> search "Edit the environment variables" and click on it
  - Under the Advanced tab, choose "Environment Variables"
  - Under System variables, select the Path variable and add the Terraform location to it (System variables --> select Path --> New --> <terraform_path>)
  - In my system, this path location is C:\terraform_1.3.7
- Run the below command to validate the Terraform version
$ terraform -v
- Install the Visual Studio Code IDE
- Download & install the AWS CLI
- Create an IAM user in AWS & generate access keys
- Configure the IAM user access keys in environment variables
- Verify the IAM user access keys in cmd
$ aws configure list
- Create a key pair in EC2 (name: dpp.pem)
- Execute the Terraform script to create the VPC + EC2 instances
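A minimal sketch of what that Terraform script might contain, assuming a single public subnet and one instance; the AMI id, CIDR ranges, and resource names here are placeholders, not the course's actual values:

```hcl
# Hypothetical sketch: VPC + subnet + one EC2 instance (all names/values assumed)
provider "aws" {
  region = "us-east-1"
}

resource "aws_vpc" "dpp_vpc" {
  cidr_block = "10.0.0.0/16"
  tags       = { Name = "dpp-vpc" }
}

resource "aws_subnet" "dpp_subnet" {
  vpc_id                  = aws_vpc.dpp_vpc.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}

resource "aws_instance" "jenkins_master" {
  ami           = "<ubuntu_ami_id>"   # replace with a real Ubuntu AMI id
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.dpp_subnet.id
  key_name      = "dpp"               # the EC2 key pair created above
  tags          = { Name = "jenkins-master" }
}
```

The actual course script also provisions the build node and the Ansible VM; the same `aws_instance` pattern repeats for each.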
- Connect to the Ansible VM using the pem file
- Execute the below commands to install Ansible
$ sudo su -
$ sudo apt update
$ sudo apt install software-properties-common
$ sudo add-apt-repository --yes --update ppa:ansible/ansible
$ sudo apt install ansible
- Create the hosts file in /opt
$ cd /opt
$ vi hosts
- Add the below content in the hosts file
[jenkins-master]
18.209.18.194

[jenkins-master:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=/opt/dpp.pem

[jenkins-slave]
54.224.107.148

[jenkins-slave:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=/opt/dpp.pem
- Upload the dpp.pem file to the /opt directory and restrict its permissions
$ chmod 400 dpp.pem
- Test the connectivity
$ ansible -i hosts all -m ping
- Connect to the Jenkins master machine and check the Jenkins status
$ sudo su -
$ service jenkins status
- Execute the Ansible playbook to set up Jenkins on the master node
- Copy jenkins-master-setup.yaml into /opt
$ ansible-playbook -i hosts jenkins-master-setup.yaml
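A minimal sketch of what jenkins-master-setup.yaml might look like, assuming the standard Debian-package install of Jenkins; the repository URL, key URL, and Java package here are assumptions, not the course's actual playbook:

```yaml
# Hypothetical sketch of jenkins-master-setup.yaml (URLs and package names assumed)
---
- hosts: jenkins-master
  become: true
  tasks:
    - name: Install Java (Jenkins prerequisite)
      apt:
        name: openjdk-11-jdk
        state: present
        update_cache: yes

    - name: Add the Jenkins apt repository key
      apt_key:
        url: https://pkg.jenkins.io/debian-stable/jenkins.io.key
        state: present

    - name: Add the Jenkins apt repository
      apt_repository:
        repo: deb https://pkg.jenkins.io/debian-stable binary/
        state: present

    - name: Install Jenkins
      apt:
        name: jenkins
        state: present
        update_cache: yes

    - name: Ensure Jenkins is started and enabled
      service:
        name: jenkins
        state: started
        enabled: yes
```

The slave playbook follows the same shape, installing Java and Maven on the `jenkins-slave` host group instead.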
- Execute the Ansible playbook to set up Maven on the Jenkins slave
- Copy jenkins-slave-setup.yaml into /opt
$ ansible-playbook -i hosts jenkins-slave-setup.yaml
- Add credentials
- Manage Jenkins --> Manage Credentials --> System --> Global credentials --> Add credentials
- Provide the below info to add the credentials
Kind: SSH username with private key
Scope: Global
ID: maven_slave
Username: ubuntu
Private key: dpp.pem key content
- Add the slave node to the master node
Follow the below steps to add a new slave node to Jenkins
- Go to Manage Jenkins --> Manage nodes and clouds --> New node --> Permanent Agent
- Provide the below info to add the node
Number of executors: 3
Remote root directory: /home/ubuntu/jenkins
Labels: maven
Usage: Use this node as much as possible
Launch method: Launch agents via SSH
Host: <Private_IP_of_Slave>
Credentials: <Jenkins_Slave_Credentials>
Host Key Verification Strategy: Non verifying Verification Strategy
Availability: Keep this agent online as much as possible
- Create a Jenkins pipeline
- Add a build stage to the pipeline
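A minimal declarative Jenkinsfile with a build stage might look like the following; the `maven` label matches the slave node configured above, while the Maven goal itself is an assumption about the project:

```groovy
// Hypothetical Jenkinsfile sketch; runs on the slave labelled 'maven' added above
pipeline {
    agent {
        node {
            label 'maven'
        }
    }
    stages {
        stage('Build') {
            steps {
                // Maven was installed on the slave by jenkins-slave-setup.yaml
                sh 'mvn clean package'
            }
        }
    }
}
```

The Sonar, Docker, and deploy stages added in the later steps slot into the same `stages` block.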
- Create a SonarCloud account on https://sonarcloud.io
- Generate an authentication token on SonarQube
Account --> My Account --> Security --> Generate Tokens
Token: 8122aeaf0aabfa0708813588365299c33181de6a
- On Jenkins, create credentials
Manage Jenkins --> Manage Credentials --> System --> Global credentials --> Add credentials
- Credentials type: Secret text
- ID: sonarqube-key
- Install the SonarQube plugin
Manage Jenkins --> Available plugins --> Search for "SonarQube Scanner"
- Configure the SonarQube server
Manage Jenkins --> Configure System --> SonarQube servers --> Add SonarQube server
- Name: sonar-server
- Server URL: https://sonarcloud.io/
- Server authentication token: sonarqube-key
- Configure the SonarQube scanner
Manage Jenkins --> Global Tool Configuration --> SonarQube scanner --> Add SonarQube scanner
- SonarQube scanner: sonar-scanner
- Write the sonar-project.properties file
sonar.verbose=true
sonar.organization=ashokit
sonar.projectKey=ashokit_instalreels
sonar.projectName=instareels
sonar.language=java
sonar.sourceEncoding=UTF-8
sonar.sources=.
sonar.java.binaries=target/classes
sonar.coverage.jacoco.xmlReportPaths=target/site/jacoco/jacoco.xml
- Create an organization and project in SonarCloud, configure those details in the sonar-project.properties file, and push the sonar-project.properties file to the Git repo
- Add the Sonar stage to the Jenkins pipeline (the tool and server names must match the ones configured above)
stage('SonarQube analysis') {
    environment {
        scannerHome = tool 'sonar-scanner'
    }
    steps {
        withSonarQubeEnv('sonar-server') {
            sh "${scannerHome}/bin/sonar-scanner"
        }
    }
}
- Create a Quality Gate in SonarCloud and make it the default
- Create a JFrog Artifactory account (pwd: Jfrogashokit1)
- Generate an access token with a username (the username must be your email id)
Settings --> Platform Configurations --> User Mgmt --> Access Tokens
- Add the username and password under Jenkins credentials
Manage Jenkins --> Credentials --> Add Credentials --> Username with password
- Install the Artifactory plugin
- Create a Docker repository in JFrog Artifactory
- Install Docker on the jenkins-slave server using an Ansible playbook (v2)
- Install the Docker Pipeline plugin
- Configure the Docker build stage in the Jenkins pipeline
def imageName = 'ashokit.jfrog.io/ashokit-docker-local/insta'
def version = '2.1.4'

stage('Docker Build') {
    steps {
        script {
            echo '<--------------- Docker Build Started --------------->'
            app = docker.build(imageName + ":" + version)
            echo '<--------------- Docker Build Ended --------------->'
        }
    }
}
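`docker.build` expects a Dockerfile at the repo root. A minimal sketch for the `insta` Java app might look like this; the base image and jar path are assumptions about the project, not the course's actual Dockerfile:

```dockerfile
# Hypothetical Dockerfile sketch (base image and jar name assumed)
FROM openjdk:11-jre-slim
# The jar is produced by `mvn clean package` in the build stage
COPY target/*.jar /app/app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```

Port 8080 here matches the container port mapped by the `docker run -d -p 8000:8080` command used later.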
- Configure the Docker image push stage in the Jenkins pipeline
def registry = 'https://ashokit.jfrog.io'

stage('Docker Publish') {
    steps {
        script {
            echo '<--------------- Docker Publish Started --------------->'
            docker.withRegistry(registry, 'jfrog-cred') {
                app.push()
            }
            echo '<--------------- Docker Publish Ended --------------->'
        }
    }
}
- Check the JFrog Docker repository
- Connect to the Jenkins slave machine
- Switch to the root user
$ sudo su -
- Execute the below commands
$ docker images
$ docker run -d -p 8000:8080 <image_name>
$ docker ps -a
$ docker logs <container_id>
- Enable port 8000 in the security group and access the application
URL: http://public-ip:8000/
- Setup the EKS cluster using the Terraform scripts (eks, sg_eks, vpc)
- Setup kubectl
$ curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.24.9/2023-01-11/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ mv ./kubectl /usr/local/bin
$ kubectl version
- Install the AWS CLI
$ yum remove awscli
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install --update
- Configure the AWS CLI to connect with the AWS account
$ aws configure
Provide the access_key and secret_key
- Download the Kubernetes credentials and cluster configuration (.kube/config file) from the cluster
$ aws eks update-kubeconfig --region us-east-1 --name ashokit-eks-01
- Execute the namespace (ns) manifest file
- Execute the deployment manifest file (it will fail, because the cluster cannot yet pull the image from the private JFrog registry)
Integrate Jfrog with Kubernetes cluster
Create a dedicated user in jfrog to use for a docker login user menu --> new user user name: jfrogcred email address: [email protected] password: Jfrogashokit1
- To pull an image from JFrog at the Docker level, we should log into JFrog using the username and password
$ docker login https://ashokit.jfrog.io
- Generate the Base64-encoded value of the ~/.docker/config.json file
$ cat ~/.docker/config.json | base64 -w0
Note: use the above command output in the secret
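A quick sanity-check sketch of that encoding step; the config.json content below is a stand-in for illustration (the real file is written by `docker login`), and the round-trip decode confirms the value is safe to paste into the secret:

```shell
# Stand-in config.json content (real file comes from `docker login`)
mkdir -p /tmp/docker-demo
printf '%s' '{"auths":{"ashokit.jfrog.io":{"auth":"dXNlcjpwYXNz"}}}' > /tmp/docker-demo/config.json

# Encode without line wrapping (-w0), as required for a Kubernetes secret value
ENCODED=$(base64 -w0 < /tmp/docker-demo/config.json)
echo "$ENCODED"

# Round-trip check: decoding must reproduce the original file exactly
printf '%s' "$ENCODED" | base64 -d > /tmp/docker-demo/decoded.json
```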
- Create the secret
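One way to define that secret is a `kubernetes.io/dockerconfigjson` manifest; the secret name below is an assumption, and the data value is the Base64 output from the previous step:

```yaml
# Hypothetical sketch: image-pull secret built from the base64 output above
apiVersion: v1
kind: Secret
metadata:
  name: jfrog-registry-secret   # name assumed; use whatever your deployment references
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64_output_of_config.json>
```

The deployment that failed earlier can then pull from JFrog by referencing this secret under `spec.template.spec.imagePullSecrets`.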
- Configure the AWS CLI keys for the ubuntu user
- Update the kube config file for the ubuntu user
- Push the k8s manifest files into the Git repo
- Add the deploy stage in Jenkins
stage('Deployment') {
    steps {
        script {
            echo '<--------------- Deployment Started --------------->'
            sh 'sh deploy.sh'
            echo '<--------------- Deployment Ended --------------->'
        }
    }
}
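The stage above delegates to a deploy.sh script kept alongside the manifest files in the repo. A sketch of what it might contain, with manifest file names and namespace as placeholders rather than the course's actual values:

```shell
#!/bin/bash
# Hypothetical deploy.sh sketch (file names and namespace assumed)
set -e
kubectl apply -f ns.yaml
kubectl apply -f deployment.yaml -n <namespace>
# Wait for the rollout so the Jenkins stage fails fast on a bad deploy
kubectl rollout status deployment/<deployment_name> -n <namespace>
```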
- Install Helm on the jenkins-slave
$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ ./get_helm.sh
- Validate the Helm installation
$ helm version
$ helm list
- Setup the Helm repo
$ helm repo list
$ helm repo add stable https://charts.helm.sh/stable
$ helm repo update
$ helm search repo stable
- Create a Helm chart
$ helm create insta
- Replace the manifest files in the templates directory
- Package the Helm chart
$ helm package insta
- Install the Helm chart
$ helm install insta ./insta
- List the Helm deployments
$ helm list -a
- Copy the Helm chart zip file from the slave to our local machine and push it to GitHub
- Add a stage in Jenkins to install the Helm chart
stage('Deploy') {
    steps {
        script {
            echo '<--------------- Helm Deploy Started --------------->'
            sh 'helm install insta '
            echo '<--------------- Helm Deploy Ended --------------->'
        }
    }
}
Prerequisites for the monitoring setup:
- Kubernetes cluster
- Helm
- Create a dedicated namespace for Prometheus
$ kubectl create namespace monitoring
- Add the Prometheus Helm chart repository
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
- Update the Helm chart repository
$ helm repo update
$ helm repo list
- Install Prometheus
$ helm install prometheus prometheus-community/kube-prometheus-stack --namespace monitoring
- The above Helm chart creates all services as ClusterIP. To access Prometheus outside the cluster, we should change the service type to LoadBalancer
$ kubectl edit svc prometheus-kube-prometheus-prometheus -n monitoring
- Log in to the Prometheus dashboard to monitor the application: http://<ELB>:9090
- Query the node_load15 metric to verify cluster monitoring
- We can check similar graphs in the Grafana dashboard itself. For that, we should change the service type of Grafana to LoadBalancer
$ kubectl edit svc prometheus-grafana
- To log in to the Grafana account, use the below username and password
username: admin
password: prom-operator
- Here we should check the "Node Exporter / USE Method / Node" and "Node Exporter / USE Method / Cluster" dashboards (USE = Utilization, Saturation, Errors)
- We can even check the behavior of each pod, node, and cluster