
kubernetes-anywhere's Introduction

Kubernetes Anywhere

{concise,reliable,cross-platform} turnup of Kubernetes clusters

WARNING: kubernetes-anywhere is deprecated and will be retired in a future release.

Consider using one of the actively maintained Kubernetes deployment projects instead.

Goals and Motivation

Learning how to deploy Kubernetes is hard because the default deployment automation, cluster/kube-up.sh, is opaque. We can do better, and by doing better we enable users to run Kubernetes in more places.

This implementation will be considered successful if it:

  • is portable across many deployment targets (e.g. at least GCE/AWS/Azure)
  • allows for an easy and reliable first experience with running multinode Kubernetes in the cloud
  • is transparent (the opposite of opaque) and can be used as a reference when creating deployments to new targets

Getting Started

If you want to deploy a cluster to kick the tires of Kubernetes, check out one of the getting started guides for your preferred supported deployment target.

Diving Deeper

If you want to understand how it works, read on about the design and implementation, then dive into the code.

Deployment Design:

The input to the deployment is a cluster configuration object, specified as a JSON object. We use Kconfig to describe the structure of this object and to add configuration parameters; you may notice Kconfig files scattered around this repository that define those parameters. Running make config executes the configuration wizard and produces .config.json in the root of the repository, which stores this config object.
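
As a minimal sketch (assuming GNU make and a checkout of this repository; the parameter names shown in the comment are illustrative only, the authoritative list lives in the Kconfig files):

make config        # run the Kconfig-style wizard interactively
cat .config.json   # inspect the result, e.g. {".phase1.cluster_name": "...", ".phase1.cloud_provider": "gce", ...}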

Not counting generation of the config object, the deployment consists of three phases:

  1. Resource Provisioning
  2. Node Bootstrap
  3. Addon Deployment

Phase 1: Resource Provisioning

Provisioning consists of creating the physical or virtual resources that the cluster will run on (IPs, instances, persistent disks). Provisioning will be implemented per cloud provider; there will be implementations for GCE/AWS/Azure that utilize Terraform. This phase takes the cluster configuration object as input.
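
A rough sketch of how a Terraform-backed provisioning phase is typically driven (standard Terraform CLI commands; the variable-file name is an assumption, not this repository's actual layout):

terraform init                             # fetch the provider plugins (GCE/AWS/Azure)
terraform plan -var-file=cluster.tfvars    # preview the resources derived from the cluster config
terraform apply -var-file=cluster.tfvars   # create IPs, instances and persistent disks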

Phase 2: Node Bootstrap

Bootstrapping consists of on-host installation and configuration. This process installs Docker and a single init unit for the kubelet, which runs in a Docker container. On the master, it also places configuration files for the master component static pods into the kubelet manifest directory, thus starting the control plane.

The input to the bootstrap phase is the cluster configuration object along with a small amount of other information (e.g. the IP address of the master and cryptographic assets) output by phase 1. This step is currently implemented with a minimal Ignition configuration, run in a Docker container, that bootstraps the host over a chroot. This phase will ideally be implemented once for all deployment targets (with sufficient configuration parameters).
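
For illustration only, a heavily simplified sketch of the idea — a kubelet running in a Docker container, picking up static pod manifests for the master components from a host directory (the image name is hypothetical and the flags are not this project's actual unit):

docker run -d --name=kubelet \
  --privileged --net=host --pid=host \
  --volume=/etc/kubernetes/manifests:/etc/kubernetes/manifests:ro \
  --volume=/var/lib/kubelet:/var/lib/kubelet:rw,rshared \
  example/kubelet-image \
  kubelet --pod-manifest-path=/etc/kubernetes/manifests   # control-plane components start as static pods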

Phase 3: Deploying Cluster Addons

Addon deployment consists of deploying onto the Kubernetes cluster all the applications that make Kubernetes run. Examples of these apps are kube-dns, heapster monitoring, kube-proxy, and an SDN node agent if the deployment calls for one. These applications are managed with kubectl apply and can be deployed and managed with a single command.
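
For example, given a directory of addon manifests (the directory name here is hypothetical), everything can be applied in one go:

kubectl apply -R -f addons/       # create or update kube-dns, heapster, kube-proxy, etc.
kubectl get pods -n kube-system   # verify the addons came up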

Tying it all together

Phase 1 should be sufficiently decoupled from phase 2 such that phase 2 could be used with minimal modification on deployment targets that don't have a phase 1 implemented for them (e.g. baremetal).

At the end of these two phases:

  • The master will be running a kubelet in a Docker container and (apiserver, controller-manager, scheduler, etcd and addon-manager) in static pods.
  • The nodes will be running a kubelet in a Docker container that is registered securely to the apiserver using TLS client key auth.

Deployment of fluentd and kube-proxy will happen after this process with DaemonSets, through the addon manager. Deployment of heapster, kube-dns, and all other addons will likewise happen after this process through the addon manager.

There should be a reasonably portable default networking configuration. For this default, node connectivity will be configured during provisioning and pod connectivity will be configured during bootstrapping. Pod connectivity will (likely) use flannel and the kubelet CNI network plugin. The pod networking configuration should be sufficiently decoupled from the rest of the bootstrapping configuration that it can be swapped with minimal modification for other pod networking implementations.
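
To illustrate the decoupling, the kubelet side of the default pod networking amounts to a couple of flags plus a CNI configuration directory that can be swapped out (kubelet flag names as they existed at the time; the flannel-specific part lives entirely in /etc/cni/net.d):

kubelet --network-plugin=cni \
        --cni-conf-dir=/etc/cni/net.d \
        --cni-bin-dir=/opt/cni/bin   # drop a different CNI config into /etc/cni/net.d to change the pod network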

Contributing

Please see CONTRIBUTING.md for instructions on how to contribute.

kubernetes-anywhere's People

Contributors

abrarshivani, adamdang, avirat28, baludontu, colemickens, divyenpatel, errordeveloper, jcbsmpsn, jessicaochen, k8s-ci-robot, klizhentas, luxas, madhusudancs, mikedanese, mohgeek, mrhillsman, neolit123, nikhita, obi1kenobi, ozdanborne, paulbellamy, pipejakob, raeesiqbal, roberthbailey, steveruckdashel, theofpa, thockin, uthark, vpetersson, xiangpengzhao

kubernetes-anywhere's Issues

Remove kubelet hostname override with v1.2

From docker-images/kubelet-anywhere.sh, lines 17-20:

if [ ${CLOUD_PROVIDER} = 'aws' ]
then ## TODO: check if this is still needed once v1.2.0 is out (see kubernetes/kubernetes#11543)
  args="${args} --hostname-override=${AWS_LOCAL_HOSTNAME}"
fi

Wrap-up EC2/Terraform example

  • base Terraform code (e191dc8)
  • cleanup
    • purge useless resources
    • flatten what can be flattened
    • add basic sensible parameters
    • refactor as a module
  • fix cloud provider expectations (#26)
  • use systemd (#33)
  • find and solve cause of weird error on CLI calls in provisioning script (#27)
  • _write the docs_

Fix SkyDNS with TLS

Service account appears to be missing, but even creating it doesn't seem to make it work.

Add instructions using Docker for Mac

We could leverage CNI, or use the proxy, which should still work since from the host's perspective weave.sock is still available; the toolbox, however, would have to be launched via the Docker plugin.

lack of shared mount

Following the "single host" path of the README, in the first bash shell:

[root@cd55303fe5b1 resources]# compose up -d
Creating resources_controller-manager_1
Creating resources_scheduler_1
Creating resources_apiserver_1
Creating resources_kubelet_1
ERROR: Cannot start container 86fbeefafb7bce30dd3b6dfbe5bd9c7c1d15ccb4cc02140ba01e3fc8b78def29: Path /var/lib/kubelet is mounted on / but it is not a shared mount.
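
A possible workaround for this class of error (an assumption based on the message, not a documented fix) is to make the kubelet directory a shared mount on the host before starting the containers:

sudo mount --bind /var/lib/kubelet /var/lib/kubelet   # give the path its own mount point
sudo mount --make-rshared /var/lib/kubelet            # mark it shared so it can propagate into containers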

Avoid nameserver entry for WeaveDNS

One of the pods gets this in /etc/resolv.conf:

search default.svc.kube.local svc.kube.local kube.local
nameserver 10.16.0.3
nameserver 172.17.0.1
nameserver 172.17.0.1
options ndots:5

What's unclear is why it has a duplicate entry for 172.17.0.1.

Document GCE example

Now that #5 is fulfilled for the GCE example, we can document it properly, i.e. provide instructions and not just scripts with Obvious Naming Convention™.

Admission controls require some token foo in non-TLS setup

I0120 12:43:21.021807       1 event.go:206] Event(api.ObjectReference{Kind:"ReplicationController", Namespace:"default", Name:"redis-slave", UID:"3d4f8c1a-bf73-11e5-8bd6-0242ac110002", APIVersion:"v1", ResourceVersion:"282", FieldPath:""}): reason: 'FailedCreate' Error creating: Pod "redis-slave-" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
E0120 12:43:21.022792       1 replication_controller.go:357] unable to create pods: Pod "redis-slave-" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account

The service account exists, but there is no token for it.
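
One way to confirm that state (standard kubectl commands; namespace and account name taken from the error message):

kubectl get serviceaccount default -n default -o yaml   # the account exists, but its secrets list is empty
kubectl get secrets -n default                          # no service-account token secret present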

Document Docker Machine example

Obvious Naming Convention™ is great, but people shouldn't have to read the code right away to see whether there are params to be set etc. Also the kubectl sequences are slightly different due to TLS being enabled.

Fix AWS cloud provider expectations

E0208 22:42:32.342954       1 servicecontroller.go:187] Failed to process service delta. Retrying: Failed to create load balancer for service default/frontend: no instances found for name: ip-172-20-0-179

Anything else?

Deployment instructions are unclear

I guess I run the toolbox, then run some commands inside it?

Could there be a weaveworks/kubernetes-anywhere:deploy image that just does all this?
Or just a weaveworks/kubernetes-anywhere:toolbox deploy tool, or something?

Test Docker Machine example with EC2 driver

The first issue is with access to /var/run/docker.sock: it's owned by root:docker, so we need to either add the ubuntu user to the docker group or just prefix all commands with sudo...
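
A sketch of the first option (standard usermod invocation; ubuntu is the default user on the EC2 Ubuntu AMIs):

sudo usermod -aG docker ubuntu   # add the ubuntu user to the docker group
# log out and back in (or run: newgrp docker) for the group change to take effect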

The second issue is that the Weave Net ports need to be open, so we probably need to add some aws-foo (security group rules) for this.

Notes on etcd

From speaking to @jonboulle, there are a few things that need to be documented.

Firstly, it's enough to pass only one address of a node, so the API server doesn't really have to care about the ETCD_CLUSTER_SIZE parameter; we can just pass it etcd1.weave.local and drop our amazing node list generator.
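
In other words (an illustrative fragment, not the project's actual configuration), the API server could simply be pointed at a single DNS name:

kube-apiserver --etcd-servers=http://etcd1.weave.local:2379 ...   # no ETCD_CLUSTER_SIZE-derived node list needed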

Secondly, the etcd node replacement story won't just magically work because the DNS name is the same, as all nodes are considered unique (although this could have worked with some very old version). So all the user really gains is that no API server config change is needed when the etcd cluster changes in any way.

client is newer than server (client API version: 1.22, server API version: 1.21)

I just followed your tutorial "Get started using a single Docker host" using a trusty64 ubuntu host:

[root@22df29b2be99 resources]# setup-kubelet-volumes
+ KUBERNETES_ANYWHERE_TOOLS_IMAGE=weaveworks/kubernetes-anywhere:tools
++ docker inspect '--format={{.State.Status}}' kubelet-volumes
Error response from daemon: client is newer than server (client API version: 1.22, server API version: 1.21)
+ [[ '' = \c\r\e\a\t\e\d ]]
+ def_docker_root=/var/lib/docker
+ def_kubelet_root=/var/lib/kubelet
+ '[' -d /rootfs/etc ']'
+ '[' -f /rootfs/etc/os-release ']'
+ case "$(eval `cat /rootfs/etc/os-release` ; echo $ID)" in
+++ cat /rootfs/etc/os-release
++ eval 'NAME="Ubuntu"' 'VERSION="14.04.4' LTS, Trusty 'Tahr"' ID=ubuntu ID_LIKE=debian 'PRETTY_NAME="Ubuntu' 14.04.4 'LTS"' 'VERSION_ID="14.04"' 'HOME_URL="http://www.ubuntu.com/"' 'SUPPORT_URL="http://help.ubuntu.com/"' 'BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"'
+++ NAME=Ubuntu
+++ VERSION='14.04.4 LTS, Trusty Tahr'
+++ ID=ubuntu
+++ ID_LIKE=debian
+++ PRETTY_NAME='Ubuntu 14.04.4 LTS'
+++ VERSION_ID=14.04
+++ HOME_URL=http://www.ubuntu.com/
+++ SUPPORT_URL=http://help.ubuntu.com/
+++ BUG_REPORT_URL=http://bugs.launchpad.net/ubuntu/
++ echo ubuntu
+ docker_root_vol='          --volume="/var/lib/docker/:/var/lib/docker:rw"         '
+ kubelet_root_vol='           --volume="/var/lib/kubelet:/var/lib/kubelet:rw,rshared"         '
+ docker run --pid=host --privileged=true weaveworks/kubernetes-anywhere:tools nsenter --mount=/proc/1/ns/mnt -- mount --make-rshared /
docker: Error response from daemon: client is newer than server (client API version: 1.22, server API version: 1.21).
See 'docker run --help'.
+ docker create --volume=/:/rootfs:ro --volume=/sys:/sys:ro --volume=/dev:/dev --volume=/var/run:/var/run:rw '--volume="/var/lib/kubelet:/var/lib/kubelet:rw,rshared"' '--volume="/var/lib/docker/:/var/lib/docker:rw"' --volume=/var/run/weave/weave.sock:/docker.sock --name=kubelet-volumes weaveworks/kubernetes-anywhere:tools true
Error response from daemon: client is newer than server (client API version: 1.22, server API version: 1.21)

Minor bashisms

Still have a few non-critical ones like this:

+ [ = aws ]
/kubelet-anywhere: 15: [: =: unexpected operator
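
The usual fix (an assumption about intent; each call site needs checking) is to quote the expansion so the test still parses when the variable is empty:

if [ "${CLOUD_PROVIDER}" = 'aws' ]   # quoted: an empty value yields [ "" = aws ] rather than [ = aws ]
then
  args="${args} --hostname-override=${AWS_LOCAL_HOSTNAME}"
fi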
