
prom_testing's Introduction

Playbook Ordering:

  1. Clean up a previous bootstrap deployment if bootstrap_enabled=true
  2. Prepare the bootkube host
  3. Render assets, or use the BYOA folder (copy the bootkube folder into the files directory)
  4. Download the CNI requirements
  5. Deploy the kubelet (it will wait until the API server is available)
  6. Sync the bootkube directory to /tmp/bootkube on the remote hosts
  7. Load the declared SDN/CNI plugins
  8. Launch bootkube

Rendering Assets

Promenade is flexible and allows operators to generate their own rendered assets if they wish. To that end, there are two ways to provide rendered assets to Promenade: you can render the assets yourself (on your own host or elsewhere) and have them deployed, which is the default, or you can have Promenade render the assets on your behalf as part of the bootstrap process. The boolean variable bootkube_render determines which path is taken: setting bootkube_render=true at deployment time overrides the default "bring your own assets" behavior and renders the assets for you on the bootstrap host. Rendering the assets yourself looks roughly like this:

rkt run \
    --insecure-options=image \
    --volume=target,kind=host,source="${PWD}" \
    --mount volume=target,target=/target \
    --net=none \
    quay.io/coreos/bootkube:v0.4.0 \
    --exec=/bootkube \
    -- render \
    --asset-dir=/target/assets/bootkube \
    --api-servers=https://172.15.0.10:443 \
    --etcd-servers=http://master.k8s:2379 \
    --api-server-alt-names=DNS=master.k8s,IP=172.15.0.10
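
When Promenade drives the deployment, the same choice is made with an extra-var. A hypothetical ansible-playbook invocation is sketched below; the playbook and inventory file names are assumptions, only the bootstrap_enabled and bootkube_render variables come from this repository:

# playbook and inventory names are assumptions; the variables are the ones described above
ansible-playbook -i inventory/hosts deploy-bootkube.yml \
    -e bootstrap_enabled=true \
    -e bootkube_render=true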

Promenade: Manually Self-hosted Kubernetes via Bootkube

A short how-to on bringing up a self-hosted Kubernetes cluster.

We'll use bootkube to initiate the master components. First we'll render the assets necessary for bringing up the control plane (apiserver, controller-manager, scheduler, etc.). Then we'll start the kubelet, whose job it is to start those assets, although it can't do much yet because there is no API server. Running bootkube once then kicks everything off. At a high level, the bootstrapping process looks like this:

Self-Hosted

Image taken from the self-hosted proposal.

This is what the final cluster looks like from a kubectl perspective:

Screenshot

Let's start!

Temporary apiserver: bootkube

Download

wget https://github.com/kubernetes-incubator/bootkube/releases/download/v0.3.9/bootkube.tar.gz
tar xvzf bootkube.tar.gz
sudo cp bin/linux/bootkube /usr/bin/

Render the Assets

Replace 10.7.183.59 with the address of the node you are working on. If you have DNS available, group all master node IP addresses behind a single DNS name and provide that instead.

bootkube render --asset-dir=assets --experimental-self-hosted-etcd --etcd-servers=http://10.3.0.15:2379 --api-servers=https://10.7.183.59:443

This will generate several things:

  • manifests for running the apiserver, controller-manager, scheduler, flannel, etcd, DNS and kube-proxy
  • a kubeconfig file for connecting to and authenticating with the apiserver
  • TLS assets
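
On disk, the asset directory looks roughly like this (only the parts referenced later in this guide are shown; exact contents vary by bootkube version):

assets/
    auth/kubeconfig    # credentials used by kubectl and the kubelets
    manifests/         # self-hosted control plane, flannel, etcd, dns, kube-proxy
    tls/               # generated CA, API server and service-account credentials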

Start the Master Kubelet

Download hyperkube

wget http://storage.googleapis.com/kubernetes-release/release/v1.5.3/bin/linux/amd64/hyperkube -O ./hyperkube
sudo mv hyperkube /usr/bin/hyperkube
sudo chmod 755 /usr/bin/hyperkube
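
As a quick sanity check that the binary is in place (the kubelet subcommand accepts the usual --version flag):

hyperkube kubelet --version    # should report the v1.5.3 release downloaded above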

Install CNI

sudo mkdir -p /opt/cni/bin
wget https://github.com/containernetworking/cni/releases/download/v0.4.0/cni-amd64-v0.4.0.tbz2
sudo tar xjf cni-amd64-v0.4.0.tbz2 -C /opt/cni/bin/
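
The kubelet below points --cni-conf-dir at /etc/kubernetes/cni/net.d. If nothing in your setup lays down a CNI configuration there (bootkube's flannel manifests may already take care of this), a minimal flannel config is a common convention; the file name and contents below are an assumption, not something rendered by this repository:

sudo mkdir -p /etc/kubernetes/cni/net.d
cat <<'EOF' | sudo tee /etc/kubernetes/cni/net.d/10-flannel.conf
{
    "name": "podnet",
    "type": "flannel",
    "delegate": {
        "isDefaultGateway": true
    }
}
EOF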

Copy Configuration Files

sudo cp assets/auth/kubeconfig /etc/kubernetes/
sudo cp -a assets/manifests /etc/kubernetes/

Start the Kubelet

sudo hyperkube kubelet --kubeconfig=/etc/kubernetes/kubeconfig \
    --require-kubeconfig \
    --cni-conf-dir=/etc/kubernetes/cni/net.d \
    --network-plugin=cni \
    --lock-file=/var/run/lock/kubelet.lock \
    --exit-on-lock-contention \
    --pod-manifest-path=/etc/kubernetes/manifests \
    --allow-privileged \
    --node-labels=master=true \
    --minimum-container-ttl-duration=6m0s \
    --cluster_dns=10.3.0.10 \
    --cluster_domain=cluster.local \
    --hostname-override=10.7.183.59

The TLS credentials generated by bootkube render in assets/tls/ are copied to a secret: assets/manifests/kube-apiserver-secret.yaml.
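
To double-check that the credentials actually made it into the secret, compare the raw files with the manifest (a quick sanity check, not part of the original flow):

ls assets/tls                                     # the raw keys and certificates
less assets/manifests/kube-apiserver-secret.yaml  # the same material, base64-encoded into a Secret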

Start the Temporary API Server

bootkube will serve as the temporary apiserver so that the kubelet started above can bring up the real apiserver in a pod:

sudo bootkube start --asset-dir=./assets  --experimental-self-hosted-etcd --etcd-server=http://127.0.0.1:12379

bootkube should exit by itself after successfully bootstrapping the master components; it is only needed for the very first bootstrap.

Check the Output

watch hyperkube kubectl get pods -o wide --all-namespaces

Join Nodes to the Cluster

Copy the information on where to find the apiserver and how to authenticate:

scp 10.7.183.59:assets/auth/kubeconfig .
sudo mkdir -p /etc/kubernetes
sudo mv kubeconfig /etc/kubernetes/

Install the CNI binaries and download hyperkube:

sudo mkdir -p /opt/cni/bin
wget https://github.com/containernetworking/cni/releases/download/v0.4.0/cni-amd64-v0.4.0.tbz2
sudo tar xjf cni-amd64-v0.4.0.tbz2 -C /opt/cni/bin/
wget http://storage.googleapis.com/kubernetes-release/release/v1.5.3/bin/linux/amd64/hyperkube -O ./hyperkube
sudo mv hyperkube /usr/bin/hyperkube
sudo chmod 755 /usr/bin/hyperkube
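
Before starting the kubelet it is worth verifying that the copied kubeconfig can actually reach the existing API server (a quick sanity check, not part of the original flow):

sudo hyperkube kubectl --kubeconfig=/etc/kubernetes/kubeconfig get nodes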

Master Nodes

Start the kubelet:

sudo hyperkube kubelet --kubeconfig=/etc/kubernetes/kubeconfig \
    --require-kubeconfig \
    --cni-conf-dir=/etc/kubernetes/cni/net.d \
    --network-plugin=cni \
    --lock-file=/var/run/lock/kubelet.lock \
    --exit-on-lock-contention \
    --pod-manifest-path=/etc/kubernetes/manifests \
    --allow-privileged \
    --node-labels=master=true \
    --minimum-container-ttl-duration=6m0s \
    --cluster_dns=10.3.0.10 \
    --cluster_domain=cluster.local \
    --hostname-override=10.7.183.60

Worker Nodes

Note that the only difference is the removal of --node-labels=master=true:

sudo hyperkube kubelet --kubeconfig=/etc/kubernetes/kubeconfig \
    --require-kubeconfig \
    --cni-conf-dir=/etc/kubernetes/cni/net.d \
    --network-plugin=cni \
    --lock-file=/var/run/lock/kubelet.lock \
    --exit-on-lock-contention \
    --pod-manifest-path=/etc/kubernetes/manifests \
    --allow-privileged \
    --minimum-container-ttl-duration=6m0s \
    --cluster_dns=10.3.0.10 \
    --cluster_domain=cluster.local \
    --hostname-override=10.7.183.60
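
On a real node you will want something to bring the kubelet back after a reboot instead of running it from a shell; as noted further down, that something is usually systemd. A minimal unit wrapping the worker invocation above might look like the following (the unit name and path are assumptions, the flags are unchanged):

cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes kubelet (hyperkube)
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/hyperkube kubelet \
    --kubeconfig=/etc/kubernetes/kubeconfig \
    --require-kubeconfig \
    --cni-conf-dir=/etc/kubernetes/cni/net.d \
    --network-plugin=cni \
    --lock-file=/var/run/lock/kubelet.lock \
    --exit-on-lock-contention \
    --pod-manifest-path=/etc/kubernetes/manifests \
    --allow-privileged \
    --minimum-container-ttl-duration=6m0s \
    --cluster_dns=10.3.0.10 \
    --cluster_domain=cluster.local \
    --hostname-override=10.7.183.60
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now kubelet.service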

Scale Etcd

kubectl apply doesn't work for ThirdPartyResources (TPRs) at the moment; see kubernetes/kubernetes#29542. As a workaround, we use cURL to resize the cluster.

hyperkube kubectl --namespace=kube-system get cluster.etcd kube-etcd -o json > etcd.json && \
vim etcd.json && \
curl -H 'Content-Type: application/json' -X PUT --data @etcd.json http://127.0.0.1:8080/apis/etcd.coreos.com/v1beta1/namespaces/kube-system/clusters/kube-etcd
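
The field to change in etcd.json is the desired member count; a sketch using jq instead of an interactive edit (the spec.size path follows the etcd-operator Cluster schema and is an assumption here):

# bump the self-hosted etcd cluster to 3 members
jq '.spec.size = 3' etcd.json > etcd-resized.json && mv etcd-resized.json etcd.json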

If that doesn't work, re-run until it does. See kubernetes-retired/bootkube#346 (comment)

Challenges

Node setup

Some Broadcom NICs panicked with the default Ubuntu kernel.

  • upgrade the kernel to >4.8 because of the Broadcom NIC failure
  • move to --storage-driver=overlay2 instead of aufs as the Docker storage driver (see the sketch below)
  • disable swap on the node (running with swap becomes a fatal error in kube-1.6)
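
A sketch of the storage-driver and swap changes (the daemon.json path is standard Docker configuration; merge the key into an existing file if you already have one):

# switch Docker to the overlay2 storage driver
echo '{ "storage-driver": "overlay2" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker

# turn off swap now; comment out swap entries in /etc/fstab to make it permanent
sudo swapoff -a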

ToDo Items:

API server resilience

The master API servers need to be reachable at a single, stable address. Possible solutions:

  • use LB from the DC
  • use DNS from the DC with programmable API (e.g. powerdns)
  • use something like kube-keepalive-vip?
  • bootstrap DNS itself (skydns, coredns)

Etcd Challenges

Notes

Clean up Docker:

sudo su -
docker rm -f $(docker ps -a -q)
exit

Compile Bootkube

sudo docker run --rm -it -v $(pwd)/golang/src:/go/src/ -w /go/src golang:1.7 bash
# the remaining commands run inside the container
go get -u github.com/kubernetes-incubator/bootkube
cd $GOPATH/src/github.com/kubernetes-incubator/bootkube
make

RBAC

./bootkube-rbac render --asset-dir assets-rbac --experimental-self-hosted-etcd --etcd-servers=http://10.3.0.15:2379 --api-servers=https://10.7.183.59:443
sudo rm -rf /etc/kubernetes/*
sudo cp -a assets-rbac/manifests /etc/kubernetes/
sudo cp assets-rbac/auth/kubeconfig /etc/kubernetes/
sudo ./bootkube-rbac start --asset-dir=./assets-rbac --experimental-self-hosted-etcd --etcd-server=http://127.0.0.1:12379

Containerized Kubelet

The benefit here is using a Docker container instead of a kubelet binary; the hyperkube Docker image also packages and installs the CNI binaries. The downside is that, in either case, something needs to restart the kubelet after a node reboot. Usually that something is systemd, and systemd is better at managing binaries than Docker containers. Either way, this is how you would run a containerized kubelet:

sudo docker run \
    --rm \
    -it \
    --privileged \
    -v /dev:/dev \
    -v /run:/run \
    -v /sys:/sys \
    -v /etc/kubernetes:/etc/kubernetes \
    -v /usr/share/ca-certificates:/etc/ssl/certs \
    -v /var/lib/docker:/var/lib/docker \
    -v /var/lib/kubelet:/var/lib/kubelet \
    -v /:/rootfs \
    quay.io/coreos/hyperkube:v1.5.3_coreos.0 \
    ./hyperkube \
        kubelet \
        --network-plugin=cni \
        --cni-conf-dir=/etc/kubernetes/cni/net.d \
        --cni-bin-dir=/opt/cni/bin \
        --pod-manifest-path=/etc/kubernetes/manifests \
        --allow-privileged \
        --hostname-override=10.7.183.60 \
        --cluster-dns=10.3.0.10 \
        --cluster-domain=cluster.local \
        --kubeconfig=/etc/kubernetes/kubeconfig \
        --require-kubeconfig \
        --lock-file=/var/run/lock/kubelet.lock \
        --containerized

This is not quite working yet, though. The node comes up, registers successfully with the master and starts the daemonsets. Everything comes up except flannel:

main.go:127] Failed to create SubnetManager: unable to initialize inclusterconfig: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory

Resources and References
