Bootstrap a local two-node Kubernetes cluster on Ubuntu 22.04 or Debian 12 using Vagrant.
This project aims to help you quickly bootstrap a Kubernetes cluster for testing, development, or training purposes.
Please install these two components from their official websites.
vagrant up
Vagrant will now provision the cluster, starting with the controlplane node and then the worker node.
This setup provides two nodes: a master node (`controlplane`) and a worker node (`node01`).
If you wish to bootstrap additional worker nodes, update your `Vagrantfile` accordingly.
Once the bootstrap process has finished, inspect your cluster using the following commands:
vagrant global-status
vagrant ssh controlplane
Once inside the cluster, use the `kubectl` command to manage Kubernetes.
vagrant destroy -f
Name | IP address | CPUs | Memory (MiB) |
---|---|---|---|
controlplane | 192.168.56.10 | 2 | 2048 |
node01 | 192.168.56.11 | 1 | 1024 |
Version: 1.7.11
You should use one of the CNI-compliant network plugins available here.
I recommend one of the following three plugins, depending on your needs:
Network Plugin Provider | Network Policy Support |
---|---|
Calico | YES |
Weave Net | YES |
Flannel | NO |
We will use Weave Net for our environment.
Pod Network CIDR: 10.244.0.0/16
Component | Version |
---|---|
containerd | 1.7.11 |
runc | 1.1.11 |
cni plugin | 1.4.0 |
etcd | 3.5.11 (or Latest) |
kubelet | 1.29.1 (or Latest 1.29) |
kubeadm | 1.29.1 |
Once all VMs are provisioned, follow this guide to set up your cluster.
SSH into the `controlplane` node:
vagrant ssh controlplane
Initialize your Kubernetes cluster using `kubeadm`:
sudo kubeadm init --apiserver-cert-extra-sans=controlplane --apiserver-advertise-address=192.168.56.10 --pod-network-cidr=10.244.0.0/16
Note: depending on your cluster configuration, you may want to use `--ignore-preflight-errors=NumCPU` to bypass the minimum requirement of 2 CPUs:
sudo kubeadm init --apiserver-cert-extra-sans=controlplane --apiserver-advertise-address=192.168.56.10 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU
Next, follow the `kubeadm` instructions to complete your Kubernetes configuration:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
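Alternatively, if you are running as the root user, the `kubeadm init` output suggests pointing `kubectl` at the admin kubeconfig directly instead of copying it:

```shell
# Alternative for the root user: use the admin kubeconfig directly
# instead of copying it into $HOME/.kube.
export KUBECONFIG=/etc/kubernetes/admin.conf
```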
A Weave deployment file is available under `resources/weave-daemonset-k8s.yaml`.
You may want to install the `vagrant-scp` plugin on your host to transfer files from the host to your VMs.
vagrant plugin install vagrant-scp
Then use the following command to transfer files:
vagrant scp resources/weave-daemonset-k8s.yaml controlplane:.
Use that file on your `controlplane` node to deploy the network plugin:
kubectl apply -f weave-daemonset-k8s.yaml
Note: the Weave deployment file was updated to match the pod network CIDR `10.244.0.0/16` used above.
If you use a different CIDR, update this file accordingly by setting `IPALLOC_RANGE` to the right CIDR:
containers:
- name: weave
env:
- name: IPALLOC_RANGE
value: 10.244.0.0/16
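If you did pick a different CIDR, the substitution can be scripted. A minimal sketch, where `NEW_CIDR` and the `update_cidr` helper are hypothetical examples, not part of this project:

```shell
# Hypothetical helper: rewrite the pod CIDR in the Weave manifest.
# NEW_CIDR is an example value; substitute your own range.
NEW_CIDR="10.200.0.0/16"
update_cidr() {
  # print the manifest with the default CIDR replaced by NEW_CIDR
  sed "s|10.244.0.0/16|${NEW_CIDR}|" "$1"
}
# Usage (from the project root):
#   update_cidr resources/weave-daemonset-k8s.yaml > weave-patched.yaml
```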
SSH into every worker node (`node01`):
vagrant ssh node01
Use the join command printed by the previous `kubeadm init` command:
sudo kubeadm join 192.168.56.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
Or generate a new join command by running the following on the controlplane node:
sudo kubeadm token create --print-join-command
Use the following command to inspect your cluster:
kubectl cluster-info
kubectl get nodes -o wide
kubectl get all --all-namespaces
kubectl run test-pod --image=busybox -- sleep 1d
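To confirm the pod network is working, check that the test pod's IP (shown by `kubectl get pod test-pod -o wide`) falls inside the pod CIDR `10.244.0.0/16`. A small shell sketch of that check; the `in_pod_cidr` helper is a hypothetical example, not part of this project:

```shell
# Hypothetical helper: true if the given IP is inside 10.244.0.0/16,
# i.e. its first two octets are 10.244.
in_pod_cidr() {
  case "$1" in
    10.244.*.*) return 0 ;;
    *)          return 1 ;;
  esac
}

# Example: pass in the IP column reported by
#   kubectl get pod test-pod -o wide
if in_pod_cidr "10.244.1.7"; then
  echo "pod IP is inside the pod network CIDR"
fi
```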