This repository contains the resources to launch CloudBees Core on EKS using CloudFormation.
- Active AWS subscription for CloudBees Core. Subscribe here!
- Ensure you have IAM permissions to:
  - Create a role
  - Create a VPC
  - Create an EC2 instance

  You may also specify a role during the CloudFormation configuration.
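Before launching, you can confirm which IAM identity the AWS CLI is using (an optional sanity check; this is a standard AWS CLI command and requires configured credentials):

```
# Print the account and IAM user/role the AWS CLI is currently using.
aws sts get-caller-identity
```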
- Click on the Launch button below to go to your CloudFormation console and create the stack. This will create a new EKS cluster and deploy CloudBees Core into the cluster.
- Click Next to go to the CloudFormation Details section to enter your input values.
- Enter the following:
- Stack name (required) - This is the CloudFormation stack name; it is also used below as the Kubernetes namespace for CloudBees Core.
- BootstrapArguments (optional) - See files/bootstrap.sh in https://github.com/awslabs/amazon-eks-ami
- ClusterName (required) - This is the name of the new EKS cluster.
- EKSRoleARN (required) - This is the IAM role that EKS will use to access other AWS services. It must have the AmazonEKSClusterPolicy and AmazonEKSServicePolicy managed policies attached.
- KeyName (required) - This is the key pair to use for SSH.
- NodeAutoScalingGroupMaxSize, NodeAutoScalingGroupMinSize (required) - The maximum and minimum size of the EC2 node Auto Scaling group.
- NodeGroupName (required) - Name of the EC2 node group.
- NodeVolumeSize (required) - The size of the EBS volume attached to each node.
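As an alternative to the console, the same inputs can be supplied through the AWS CLI. This is a sketch only: the template URL and all parameter values are placeholders, not the repository's actual template location.

```
# Hypothetical sketch: create the stack from the CLI instead of the console.
# Replace the template URL and every parameter value with your own.
aws cloudformation create-stack \
  --stack-name cloudbees-core \
  --template-url https://<your-bucket>.s3.amazonaws.com/<template>.yaml \
  --capabilities CAPABILITY_IAM \
  --parameters \
    ParameterKey=ClusterName,ParameterValue=my-eks-cluster \
    ParameterKey=EKSRoleARN,ParameterValue=arn:aws:iam::<account-id>:role/<eks-role> \
    ParameterKey=KeyName,ParameterValue=my-keypair \
    ParameterKey=NodeGroupName,ParameterValue=my-nodes \
    ParameterKey=NodeAutoScalingGroupMinSize,ParameterValue=1 \
    ParameterKey=NodeAutoScalingGroupMaxSize,ParameterValue=3 \
    ParameterKey=NodeVolumeSize,ParameterValue=20
```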
- Click Next through the remaining steps, then create the stack.
- Monitor the status of the stack as the resources are created.
- Go to the EKS Console and wait for the EKS cluster to be created. This will take several minutes.
- Go to the EC2 console and ensure that your EKS worker nodes are ready (running and checks complete).
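The same status checks can be done from the AWS CLI instead of the console (stack and cluster names below are placeholders):

```
# Current CloudFormation stack status (CREATE_COMPLETE when finished).
aws cloudformation describe-stacks --stack-name <stack-name> \
  --query 'Stacks[0].StackStatus' --output text

# Current EKS cluster status (ACTIVE when ready).
aws eks describe-cluster --name <eks-cluster-name> \
  --query 'cluster.status' --output text
```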
- When completed and the EKS cluster shows Active, go to the EC2 console to get the public IP of the EC2 controller node.
- SSH to the controller node using the key pair that you specified.
```
ssh -i <key.pem> ec2-user@<controller-node-ip>
```
- Set the kubeconfig.
```
export KUBECONFIG=/home/ec2-user/.kube/config
```
- Set your Kubernetes context.
```
sudo aws eks update-kubeconfig --name <eks-cluster-name> --region <region> --kubeconfig $KUBECONFIG
```
- Execute the following to get the list of namespaces.
```
kubectl get namespaces
```
You should see something like this:
```
NAME               STATUS   AGE
cloudbees-core-1   Active   2m
default            Active   8m
ingress-nginx      Active   2m
kube-public        Active   8m
kube-system        Active   8m
```
- Get the URL for CloudBees Core.
```
kubectl -n ingress-nginx get svc ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```
- Get the initial admin password.
```
kubectl exec -n <cloudbees namespace> cjoc-0 -- cat /var/jenkins_home/secrets/initialAdminPassword
```
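For convenience, the two lookups above can be combined into one small script. The namespace is a placeholder; it matches the stack name you chose earlier.

```
#!/bin/sh
# Fetch the CloudBees Core URL and initial admin password in one go.
NS=<cloudbees-namespace>   # e.g. the stack name you entered above

HOST=$(kubectl -n ingress-nginx get svc ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
PASS=$(kubectl exec -n "$NS" cjoc-0 -- \
  cat /var/jenkins_home/secrets/initialAdminPassword)

echo "CloudBees Core URL:     http://$HOST"
echo "Initial admin password: $PASS"
```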
- Enter the CloudBees Core URL into your browser.
- You will be presented with the CloudBees Core setup wizard. The first step asks for the initial admin password; enter the value you retrieved above.
- Complete the next steps of the setup wizard to install plugins and create your first user. You may request a trial license or enter a commercial license.
Follow these steps to create a team and connect your repo.
Follow these steps to configure CloudBees Core for HTTPS.
Follow these steps to configure DNS for CloudBees Core.
Request a license by sending an email to [email protected] or [email protected].
View the events in CloudFormation for your stack and look for failed events.
The bootstrap.sh script creates the EKS cluster and deploys CloudBees Core. The output of the script can provide help for troubleshooting issues.
From the EC2 console, select the EC2 controller node. Access the context menu and select Instance Settings -> Get System Log.
Access /var/log/cloud-init-output.log on the EC2 controller node instance. See above for how to SSH into the EC2 controller node.
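The same system log can also be fetched without the console, using the AWS CLI (the instance ID is a placeholder for the controller node's ID):

```
# Retrieve the EC2 console/system log for the controller node.
aws ec2 get-console-output --instance-id <controller-instance-id> --output text
```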