
ansible-rpi-k8s-cluster's Introduction

ansible-rpi-k8s-cluster

This repo will be used for deploying a Kubernetes cluster on Raspberry Pi using Ansible.

Background

Why?

I have been looking at putting together a Kubernetes cluster using Raspberry Pis for a while now. I finally pulled it all together, building on numerous Ansible roles which I had already developed over time. I wanted this whole project to be provisioned with Ansible so that I would have a repeatable process to build everything out, as well as a way to share it with others. I am still putting all of the pieces together, so this will no doubt be a continually updated repo for some time.

How It Works

The following outlines the design of the current iteration. Basically, we have a 5 (or more) node Raspberry Pi cluster, with the first node connecting to wireless to act as our gateway into the cluster. The first node is by far the most critical. We also use the first node's wireless connection to do all of the provisioning of our cluster: we execute Ansible against all of the remaining nodes by using the first node as a bastion host via its wireless IP. Once you obtain the IP of the first node's wireless connection, update jumphost_ip: in inventory/group_vars/all/all.yml and change the ansible_host for rpi-k8s-1 (ansible_host=172.16.24.186) in inventory/hosts.inv. If you would like to change the subnet which the cluster will use, change dhcp_scope_subnet: in inventory/group_vars/all/all.yml to your desired subnet, as well as the ansible_host addresses for the following nodes in inventory/hosts.inv:

[rpi_k8s_slaves]
rpi-k8s-2 ansible_host=192.168.100.128
rpi-k8s-3 ansible_host=192.168.100.129
rpi-k8s-4 ansible_host=192.168.100.130
rpi-k8s-5 ansible_host=192.168.100.131
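
For reference, the corresponding variables in inventory/group_vars/all/all.yml would look roughly like the following. The IP and subnet shown are simply the example values used throughout this README, so substitute your own:

# inventory/group_vars/all/all.yml (excerpt, example values)
jumphost_ip: 172.16.24.186      # wireless IP obtained by rpi-k8s-1
dhcp_scope_subnet: 192.168.100  # first three octets of the isolated cluster subnet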

NOTE: We may switch to an automatically generated inventory if it makes things a little easier.

The first node provides the following services for our cluster:

  • DHCP for all of the other nodes (only listening on eth0)
  • Gateway services so that the other nodes can reach the internet.
    • An IPTABLES masquerade rule NATs traffic from eth0 out through wlan0 (see the example following this list)
  • Apt-Cacher NG - A package caching proxy to speed up package downloads/installs.
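
The exact firewall rules are handled by the Ansible roles in this repo; as a rough sketch (illustrative only, not the repo's exact tasks), the masquerade portion amounts to something like the following on the first node:

# NAT traffic from the wired cluster network out through the wireless interface
sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
# Allow forwarding between the cluster network (eth0) and wireless (wlan0)
sudo iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
sudo iptables -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT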

NOTE: You can also define a static route on your LAN network firewall (if supported) for the subnet (192.168.100.0/24 in my case) pointing to the wireless IP address that your first node obtains, or you may add a static route on your Ansible control machine. This will allow you to communicate with all of the cluster nodes once they get an IP via DHCP from the first node.

For Kubernetes networking we are using Weave Net.

Requirements

Cloning Repo

Because we use submodules for many components within this project, we need to ensure that we get them as part of the cloning process.

git clone https://github.com/mrlesmithjr/ansible-rpi-k8s-cluster.git --recurse-submodules
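
If you have already cloned the repo without the submodules, you can pull them in afterwards instead of re-cloning:

git submodule update --init --recursive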

Software

The following is a list of the required packages to be installed on your Ansible control machine (the machine you will be executing Ansible from).

Ansible

You can install Ansible in many different ways so head over to the official Ansible intro installation.

Kubernetes CLI Tools

You will also need to install the kubectl package. As with Ansible there are many different ways to install kubectl so head over to the official Kubernetes Install and Set Up kubectl.

NOTE: The Ansible playbook playbooks/deployments.yml fetches the admin.conf from the K8s master and copies it to your local $HOME/.kube/config. This allows us to run kubectl commands against the cluster remotely. There is a catch here though: the certificate is signed with the internal IP address of the K8s master. So in order for this to work correctly, you will need to set up a static route on your firewall (if supported) to the subnet 192.168.100.0/24 (in our case) via the wireless IP on your first node (also the K8s master), or you may add a static route on your Ansible control machine.
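
Once the admin.conf has been copied to $HOME/.kube/config and your static route is in place, a quick sanity check that remote kubectl access works is:

kubectl cluster-info
kubectl get nodes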

Hardware

The following list is the hardware which I am using currently while developing this.

OS

Currently I am using Raspbian Lite for the OS. I did not intentionally avoid HypriotOS originally, and I may give it a go at some point.

Downloading OS

Head over to the Raspberry Pi downloads page and download the RASPBIAN STRETCH LITE image.

Installing OS

I am using a Mac, so my process will be based on that; you may need to adjust based on your OS.

After you have finished downloading the OS, you will want to extract the zip file (2017-11-29-raspbian-stretch-lite.zip in my case). After extracting the file, you are ready to load the OS onto each and every SD card. In my case I am paying special attention to the first one: it is the one to which we will add the wpa_supplicant.conf file that connects us to wireless. We will use wireless as our gateway into the cluster; we want to keep this as isolated as possible.

First SD Card

With our zip file extracted we are now ready to load the image onto our SD card. Remember what I mentioned above, the first one is the one which we will use to connect to wireless.

Install OS Image

NOTE: Remember I am using a Mac so YMMV! You may also want to look into Etcher or PiBakery for a GUI based approach.

Open up your terminal and execute the following to determine the device name of the SD card:

diskutil list
...
/dev/disk0 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *500.3 GB   disk0
   1:                        EFI EFI                     209.7 MB   disk0s1
   2:          Apple_CoreStorage macOS                   499.4 GB   disk0s2
   3:                 Apple_Boot Recovery HD             650.0 MB   disk0s3

/dev/disk1 (internal, virtual):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:                  Apple_HFS macOS                  +499.0 GB   disk1
                                 Logical Volume on disk0s2
                                 7260501D-EA09-4048-91FA-3A911D627C9B
                                 Unencrypted

/dev/disk2 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *32.0 GB     disk2
   1:                 DOS_FAT_16 NEW VOLUME              32.0 GB     disk2s1

From the above in my case I will be using /dev/disk2 which is my SD card.

Now we need to unmount the disk so we can write to it:

diskutil unmountdisk /dev/disk2

Now that our SD card is unmounted we are ready to write the OS image to it. And we do that by running the following in our terminal:

sudo dd bs=1m if=/Users/larry/Downloads/2017-11-29-raspbian-stretch-lite.img of=/dev/disk2 conv=sync

After that completes we now need to remount the SD card so that we can write some files to it.

diskutil mountdisk /dev/disk2

First we need to create a blank file named ssh on the boot partition of the SD card to enable SSH when the Pi boots up.

touch /Volumes/boot/ssh

Next we need to create the wpa_supplicant.conf file which will contain the configuration to connect to wireless. The contents of this file are listed below:

vi /Volumes/boot/wpa_supplicant.conf

wpa_supplicant.conf:

country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="your_real_wifi_ssid"
    scan_ssid=1
    psk="your_real_password"
    key_mgmt=WPA-PSK
}

Now that you have finished creating these files you can then unmount the SD card:

diskutil unmountdisk /dev/disk2

Now set this first one aside or place it into your Raspberry Pi that you want to be the first node.

Remaining SD cards

For the remaining SD cards you will follow the same process as in First SD Card, except you will not create the wpa_supplicant.conf file on these. Unless, that is, you want to use wireless for all of your Pis, in which case that is out of scope for this project (for now!).

Deploying

Ansible Variables

Most variables that need to be adjusted per deployment can be found in inventory/group_vars/all/all.yml. Make sure to update jumphost_ip to the IP address that your first node obtained via DHCP, and rpi_nodes to the number of cluster nodes. After the first node is provisioned for DHCP, we wait for the number of DHCP leases to equal the number of cluster nodes (minus the first node). So if rpi_nodes is incorrect when you fire up all of your cluster nodes, provisioning will fail. Keep that in mind.

DHCP For Cluster

NOTE: We are treating our cluster as cattle and not pets here, folks. If you want to hand-hold your cluster nodes, you will need to go further than what this project is about. The only pet we have here is our first cluster node, because we need to know which one connects to wireless, routes, provides DHCP, and so on.

By default we are now using DNSMasq to provide DHCP for the cluster nodes. (Note: the first cluster node does not get its address via DHCP; it is statically assigned.) Because we are using DHCP for the cluster nodes, we first need to make sure that we account for the number of cluster nodes we are using. With that in mind, we need to adjust a few variables and the inventory accordingly. The assumption within this project is that we are using 5 cluster nodes, and DHCP is configured to accommodate that.

The important things to ensure are configured correctly are listed below:

You should change the dhcp_scope_subnet: 192.168.100, dhcp_scope_start_range: "{{ dhcp_scope_subnet }}.128", dhcp_scope_end_range: "{{ dhcp_scope_subnet }}.131", and rpi_nodes variables to meet your requirements. Please review the Ansible Variables section above for further explanation of why rpi_nodes must be correct.

# Defines DHCP scope end address
dhcp_scope_end_range: "{{ dhcp_scope_subnet }}.131"

# Defines DHCP scope start address
dhcp_scope_start_range: "{{ dhcp_scope_subnet }}.128"

# Defines dhcp scope subnet for isolated network
dhcp_scope_subnet: 192.168.100

# Defines the number of nodes in cluster
# Extremely important to define correctly, otherwise provisioning will fail.
rpi_nodes: 5

Based on the above, we hand out only 4 IP addresses to the cluster nodes, because the first node is statically assigned; this accounts for our 5-node cluster. So if you have a different number of cluster nodes, you will need to adjust the start and end ranges. Why is this important? Because it lets us define our inventory appropriately. And because we are treating all but our first node as cattle, we do not care which one in the stack is which, just as long as we can assign addresses to them and provision them.
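
As a worked example, a 7-node cluster (the first node plus 6 DHCP-assigned slaves) would need 6 leases, so the variables would be adjusted roughly like this:

dhcp_scope_start_range: "{{ dhcp_scope_subnet }}.128"
dhcp_scope_end_range: "{{ dhcp_scope_subnet }}.133"  # .128-.133 covers 6 addresses
rpi_nodes: 7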

Now, based on the details above, we need to ensure that our inventory is properly configured. Make sure that your inventory matches the DHCP range you defined and that the nodes in the group rpi_k8s_slaves are accurate, remembering that we are treating our slaves as cattle.

[rpi_k8s_slaves]
rpi-k8s-2 ansible_host=192.168.100.128
rpi-k8s-3 ansible_host=192.168.100.129
rpi-k8s-4 ansible_host=192.168.100.130
rpi-k8s-5 ansible_host=192.168.100.131

Ansible Playbook

To provision the full stack you can run the following:

ansible-playbook -i inventory playbooks/deploy.yml

Gotchas

If you happen to get the following error when attempting to deploy:

sshpass error
TASK [Gathering Facts] ***********************************************************************************************************************************************************
Sunday 11 February 2018  04:56:29 +0000 (0:00:00.029)       0:00:00.127 *******
fatal: [rpi-k8s-1]: FAILED! => {"msg": "to use the 'ssh' connection type with passwords, you must install the sshpass program"}
	to retry, use: --limit @/home/vagrant/ansible-rpi-k8s-cluster/playbooks/deploy.retry

You will need to install the sshpass program on your Ansible control machine to resolve that issue.

SSH Key Missing

If you happen to get the following error when attempting to deploy:

TASK [Adding Local User SSH Key] *************************************************************************************************************************************************
Sunday 11 February 2018  04:58:28 +0000 (0:00:00.022)       0:00:34.350 *******
 [WARNING]: Unable to find '/home/vagrant/.ssh/id_rsa.pub' in expected paths.

fatal: [rpi-k8s-1]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'file'. Error was a <class 'ansible.errors.AnsibleError'>, original message: could not locate file in lookup: /home/vagrant/.ssh/id_rsa.pub"}
	to retry, use: --limit @/home/vagrant/ansible-rpi-k8s-cluster/playbooks/deploy.retry

You will need to generate an SSH key for your local user that you are running Ansible as:

ssh-keygen
...
Generating public/private rsa key pair.
Enter file in which to save the key (/home/vagrant/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/vagrant/.ssh/id_rsa.
Your public key has been saved in /home/vagrant/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:pX5si9jHbpe2Ubss4eLjGlMs7J3iC7PHOwkiDCkPE74 vagrant@node0
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|.                |
|.o        .      |
|*.     . +       |
|.*.     S o   .  |
| E+ . .o = ... . |
|   . .oo*o*..o.  |
|       B=B*.*o . |
|      o.B@==.oo  |
+----[SHA256]-----+

Fixing Broken GlusterFS Repo

If you experience the following issue, you can run the playbook fix_glusterfs_repo.yml, which will remove the broken 3.10 repo. Once that is done you should be good to go and able to run deploy.yml once again.

Managing WI-FI On First Node

To manage the WI-FI connection on your first node, you can create a wifi.yml file in inventory/group_vars/all with the following variables defined:

NOTE: wifi.yml is added to the .gitignore to ensure that the file is excluded from Git. Use your best judgment here. It is probably a better idea to encrypt this file with ansible-vault. The task(s) to manage WI-FI are in playbooks/bootstrap.yml and will only trigger if the variables defined below exist.

k8s_wifi_country: US
k8s_wifi_password: mysecretwifipassword
k8s_wifi_ssid: mywifissid

CAUTION: If your WI-FI IP address changes, Ansible will fail as it will no longer be able to connect to the original IP address. Keep this in mind.

If you would simply like to manage the WI-FI connection, you may run the following:

ansible-playbook -i inventory playbooks/bootstrap.yml --tags rpi-manage-wifi

Routing

In order to use kubectl from your Ansible control machine, you need to ensure that you have a static route either on your LAN firewall or your local routing table on your Ansible control machine.

Adding Static Route On macOS

In order to add a static route on macOS you will need to do the following:

NOTE: Replace 172.16.24.186 with the IP that your first node obtained via DHCP. Also replace 192.168.100.0/24 with the subnet you set in the dhcp_scope_subnet variable in inventory/group_vars/all/all.yml, if you changed it.

sudo route -n add 192.168.100.0/24 172.16.24.186
...
Password:
add net 192.168.100.0: gateway 172.16.24.186

You can verify that the static route is definitely configured by executing the following:

netstat -nr | grep 192.168.100
...
192.168.100        172.16.24.186      UGSc            0        0     en0

Deleting Static Route on macOS

If you decide to delete the static route you can do so by executing the following:

sudo route -n delete 192.168.100.0/24 172.16.24.186
...
Password:
delete net 192.168.100.0: gateway 172.16.24.186
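
The commands above are macOS-specific. If your Ansible control machine runs Linux, the iproute2 equivalents would be roughly as follows (again, substitute your own subnet and wireless IP):

sudo ip route add 192.168.100.0/24 via 172.16.24.186
sudo ip route del 192.168.100.0/24 via 172.16.24.186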

Load Balancing And Exposing Services

MetalLB

Deploying MetalLB

MetalLB is a load-balancer implementation for bare metal Kubernetes clusters, using standard routing protocols.

Deploying MetalLB Using Kubectl
kubectl apply -f deployments/metallb/deploy.yaml
...
namespace/metallb-system created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
daemonset.apps/speaker created
deployment.apps/controller created
Configuring MetalLB

MetalLB remains idle until configured. This is accomplished by creating and deploying a configmap into the same namespace (metallb-system) as the deployment. We will be using MetalLB layer 2 and configuring the address space to be within our isolated cluster subnet 192.168.100.0/24.

NOTE: To access exposed services provided by MetalLB, you will need to ensure that you have routed access into your isolated subnet. Reference

kubectl apply -f deployments/metallb/config.yaml
...
configmap/config created
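
The repository's deployments/metallb/config.yaml is the file actually applied above and is not reproduced here. For reference, a layer 2 configuration for a pre-CRD MetalLB release generally looks something like the following; the pool name and address range are illustrative, so pick addresses inside 192.168.100.0/24 that do not overlap the DHCP scope:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
      - name: default
        protocol: layer2
        addresses:
          - 192.168.100.192-192.168.100.224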

Traefik

Deploying Traefik

We have included Traefik as an available load balancer which can be deployed to expose cluster services.

You can deploy Traefik using one of the following methods.

Deploy Traefik Using Kubectl
kubectl apply -f deployments/traefik/deploy.yaml
...
serviceaccount/traefik-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created
configmap/traefik-cfg created
deployment.apps/traefik-ingress-controller created
Deploy Traefik Using Helm
helm install stable/traefik --name traefik --values deployments/traefik/values.yaml --namespace kube-system

Accessing Traefik WebUI

You can access the Traefik WebUI by heading over to http://wirelessIP:8080/dashboard/ (replace wirelessIP with your actual IP of the wireless address on the first node).


Load Balanced NGINX Demo Deployment

We have included an example NGINX deployment using either MetalLB or Traefik which you can easily spin up for learning and testing. Both use the demo Namespace.

NGINX Load Balanced With MetalLB

You can deploy using kubectl by doing the following:

kubectl apply -f deployments/metallb/nginx-deployment.yaml

You may also deploy using Terraform by doing the following:

cd deployments/terraform
terraform init
terraform apply

NGINX Load Balanced With Traefik

This deployment creates the demo Namespace, an nginx-demo Deployment with 2 replicas using the nginx image, an nginx-demo Service, and an nginx-demo Ingress attached to the Traefik load balancer on the path /demo. The path prefix is stripped so that the default NGINX container(s) serve the default page at / rather than /demo, which would fail. You can then reach the default web page at http://wirelessIP/demo.
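
The full manifest lives in deployments/traefik/nginx-deployment.yaml and is not reproduced here. As a sketch of how such a setup is commonly wired up with Traefik 1.x, the Ingress portion looks roughly like the following (the annotation and names here are illustrative, not copied from the repo):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-demo
  namespace: demo
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/rule-type: PathPrefixStrip
spec:
  rules:
    - http:
        paths:
          - path: /demo
            backend:
              serviceName: nginx-demo
              servicePort: 80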

To spin up this demo simply execute the following:

kubectl apply -f deployments/traefik/nginx-deployment.yaml
...
namespace/demo created
deployment.extensions/nginx-demo created
service/nginx-demo created
ingress.extensions/nginx-demo created

To validate all is good:

kubectl get all --namespace demo
...
NAME                DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/nginx-demo   2         2         2            2           11m

NAME                       DESIRED   CURRENT   READY     AGE
rs/nginx-demo-76c897787b   2         2         2         11m

NAME                             READY     STATUS    RESTARTS   AGE
po/nginx-demo-76c897787b-gzwgl   1/1       Running   0          11m
po/nginx-demo-76c897787b-pzfrl   1/1       Running   0          11m

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
svc/nginx-demo   ClusterIP   10.102.204.13   <none>        80/TCP    11m

To tear down this demo simply execute the following:

kubectl delete -f deployments/traefik/nginx-deployment.yaml

Consul Cluster

We have included a 3-node Consul cluster to spin up in the default namespace. MetalLB is required for this deployment, as we are exposing the Consul UI over port 8500.

To spin up this Consul cluster simply execute one of the following:

Consul Cluster Using Kubectl

kubectl apply -f deployments/consul/deploy.yaml
...
serviceaccount/consul created
clusterrole.rbac.authorization.k8s.io/consul created
clusterrolebinding.rbac.authorization.k8s.io/consul created
statefulset.apps/consul created
service/consul-ui created

Consul Cluster Using Terraform

NOTE: Coming soon

Validating Consul Members

kubectl exec consul-0 consul members
...
Node      Address         Status  Type    Build  Protocol  DC   Segment
consul-0  10.36.0.3:8301  alive   server  1.6.0  2         dc1  <all>
consul-1  10.42.0.1:8301  alive   server  1.6.0  2         dc1  <all>
consul-2  10.35.0.2:8301  alive   server  1.6.0  2         dc1  <all>

Kubernetes Dashboard

We have included the Kubernetes dashboard as part of the provisioning. By default the dashboard is only available from within the cluster, so in order to connect to it you have a few options.

kubectl proxy

If you have installed kubectl on your local machine then you can simply drop to your terminal and type the following:

kubectl proxy
...
Starting to serve on 127.0.0.1:8001

Now you can open your browser of choice and head to the dashboard URL served by the proxy.
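
NOTE: For the dashboard versions of this era (deployed in the kube-system namespace and served over HTTPS), the proxied dashboard URL is typically http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/.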

SSH Tunnel

NOTE: This method will also only work if you have a static route into the cluster subnet 192.168.100.0/24.

You can also use an SSH tunnel to your Kubernetes master node (any cluster node would work, but the assumption is that the first node, which is also the master, is the only one accessible over WI-FI). First you need to find the kubernetes-dashboard ClusterIP, which you can do by executing the following:

kubectl get svc --namespace kube-system kubernetes-dashboard
...
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard   ClusterIP   10.106.41.154   <none>        443/TCP   2d

And from the above you will see that the ClusterIP is 10.106.41.154. Now you can create the SSH tunnel as below:

ssh -L 8001:10.106.41.154:443 pi@172.16.24.186

Now you can open your browser of choice and head to https://localhost:8001 (you will need to accept the dashboard's self-signed certificate).

Admin Privileges

If you would like to allow admin privileges without requiring either a kubeconfig or token then you can apply the following ClusterRoleBinding:

NOTE: You can find more details on this here.

kubectl apply -f deployments/dashboard-admin.yaml

And now when you connect to the dashboard you can click Skip and have full admin access. This is obviously not good practice, so you should delete this ClusterRoleBinding when you are done:

kubectl delete -f deployments/dashboard-admin.yaml
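
The repo's deployments/dashboard-admin.yaml is what actually gets applied above and is not reproduced here. A ClusterRoleBinding of roughly the following shape is what grants the dashboard's service account cluster-admin; the binding name is illustrative, and the subject assumes the dashboard runs as the kubernetes-dashboard service account in kube-system, as it does in this cluster:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kube-system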

Cluster DNS and Service Discovery

NOTE: CoreDNS is now the default so this is only for historical references and will likely be removed.

You may wish to replace the default DNS service with CoreDNS, whether to learn it or for whatever other reason. The good news is that CoreDNS will eventually be the default DNS, replacing kube-dns, so you may as well start testing now!

Update Existing Cluster Using kubectl

NOTE: CoreDNS can run in place of the standard Kube-DNS in Kubernetes. Using the kubernetes plugin, CoreDNS will read zone data from a Kubernetes cluster.

If you would like to replace the default DNS service installed during provisioning with CoreDNS, you can easily do so by doing the following:

cd deployments/archive
./deploy-coredns.sh | kubectl apply -f -
kubectl delete --namespace=kube-system deployment kube-dns

Update Existing Cluster Using kubeadm

The following can be used if you would rather use kubeadm to update your cluster to use CoreDNS rather than using the kubectl method above. First you should check to make sure that this method is possible within your cluster.

kubeadm upgrade plan  --feature-gates CoreDNS=true
...
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.9.3
[upgrade/versions] kubeadm version: v1.9.2
[upgrade/versions] Latest stable version: v1.9.3
[upgrade/versions] Latest version in the v1.9 series: v1.9.3

Awesome, you're up-to-date! Enjoy!

Verifying CoreDNS

Checking pod status:

kubectl get pods --namespace kube-system -o wide
...
NAME                                          READY     STATUS    RESTARTS   AGE       IP                NODE
coredns-7f969bcf8c-458jv                      1/1       Running   0          22m       10.34.0.3         rpi-k8s-4
coredns-7f969bcf8c-nfpf7                      1/1       Running   0          22m       10.40.0.3         rpi-k8s-5
etcd-rpi-k8s-1                                1/1       Running   0          4d        192.168.100.1     rpi-k8s-1
heapster-8556df7b6b-cplz6                     1/1       Running   0          4d        10.34.0.0         rpi-k8s-4
kube-apiserver-rpi-k8s-1                      1/1       Running   2          4d        192.168.100.1     rpi-k8s-1
kube-controller-manager-rpi-k8s-1             1/1       Running   2          4d        192.168.100.1     rpi-k8s-1
kube-proxy-644h6                              1/1       Running   0          4d        192.168.100.130   rpi-k8s-4
kube-proxy-8dfbr                              1/1       Running   0          4d        192.168.100.1     rpi-k8s-1
kube-proxy-fcpqp                              1/1       Running   0          4d        192.168.100.131   rpi-k8s-5
kube-proxy-kh4jq                              1/1       Running   0          4d        192.168.100.128   rpi-k8s-2
kube-proxy-tjckk                              1/1       Running   0          4d        192.168.100.129   rpi-k8s-3
kube-scheduler-rpi-k8s-1                      1/1       Running   2          4d        192.168.100.1     rpi-k8s-1
kubernetes-dashboard-6686846dfd-z62q4         1/1       Running   0          4d        10.40.0.0         rpi-k8s-5
monitoring-grafana-6859cdd4bd-7bk5c           1/1       Running   0          4d        10.45.0.0         rpi-k8s-3
monitoring-influxdb-59cb7cb77b-rpmth          1/1       Running   0          4d        10.46.0.0         rpi-k8s-2
tiller-deploy-6499c74d46-hjgcr                1/1       Running   0          2d        10.40.0.1         rpi-k8s-5
traefik-ingress-controller-6ffd67bfcf-wb5m2   1/1       Running   0          2d        192.168.100.1     rpi-k8s-1
weave-net-7xfql                               2/2       Running   12         4d        192.168.100.1     rpi-k8s-1
weave-net-kxw2h                               2/2       Running   2          4d        192.168.100.129   rpi-k8s-3
weave-net-rxsfg                               2/2       Running   2          4d        192.168.100.128   rpi-k8s-2
weave-net-rzwd2                               2/2       Running   2          4d        192.168.100.131   rpi-k8s-5
weave-net-wnk26                               2/2       Running   2          4d        192.168.100.130   rpi-k8s-4

Checking deployment status:

kubectl get deployment --namespace kube-system
...
NAME                         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
coredns                      2         2         2            2           28m
heapster                     1         1         1            1           4d
kubernetes-dashboard         1         1         1            1           4d
monitoring-grafana           1         1         1            1           4d
monitoring-influxdb          1         1         1            1           4d
tiller-deploy                1         1         1            1           2d
traefik-ingress-controller   1         1         1            1           2d

Checking dig results:

First you need to find the CLUSTER-IP:

kubectl get service --namespace kube-system kube-dns
...
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   50m

Now you can use dig:

dig @10.96.0.10 default.svc.cluster.local +noall +answer
...
; <<>> DiG 9.10.3-P4-Raspbian <<>> @10.96.0.10 default.svc.cluster.local +noall +answer
; (1 server found)
;; global options: +cmd
default.svc.cluster.local. 5	IN	A	10.45.0.2
default.svc.cluster.local. 5	IN	A	10.96.0.1
default.svc.cluster.local. 5	IN	A	10.34.0.2
default.svc.cluster.local. 5	IN	A	10.109.20.92

Helm

We have also enabled Helm as part of the provisioning of the cluster. However, because we are using Raspberry Pis and the ARM architecture, we need to make some adjustments post-deployment.

kubectl set image deploy/tiller-deploy tiller=jessestuart/tiller:v2.9.0 --namespace kube-system
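
After swapping the image you can confirm that the ARM tiller rolled out and that Helm can talk to it (this assumes helm is installed on your control machine and you have routed access into the cluster):

kubectl rollout status deployment/tiller-deploy --namespace kube-system
helm version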

Persistent Storage

GlusterFS

We have included GlusterFS as a backend for persistent storage to be used by containers. We are not using Heketi at this time, so all management of GlusterFS is done via Ansible. Check out the group_vars in inventory/group_vars/rpi_k8s/glusterfs.yml to define the backend bricks and client mounts.

GlusterFS can also be defined to be available in specific namespaces by defining the following in inventory/group_vars/all/all.yml:

k8s_glusterfs_namespaces:
  - default
  - kube-system

Defining GlusterFS in specific namespaces allows persistent storage to be consumed within those namespaces.

Deploying GlusterFS In Kubernetes

You must first deploy the Kubernetes Endpoints and Service defined in deployments/glusterfs/deploy.yaml. This file is dynamically generated during provisioning if glusterfs_volume_force_create: true.

kubectl apply -f deployments/glusterfs/deploy.yaml
...
endpoints/glusterfs-cluster created
service/glusterfs-cluster created

Using GlusterFS In Kubernetes Pod

In order to use GlusterFS for persistent storage you must define your pod(s) to do so. Below is an example of a pod definition:

---
apiVersion: v1
kind: Pod
metadata:
  name: glusterfs
spec:
  containers:
    - name: glusterfs
      image: armhfbuild/nginx
      volumeMounts:
        - mountPath: /mnt/glusterfs
          name: glusterfsvol
  volumes:
    - name: glusterfsvol
      glusterfs:
        endpoints: glusterfs-cluster
        path: volume-1
        readOnly: false
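
If you save the example above as, say, glusterfs-pod.yaml (the filename is only illustrative), you can create the pod and confirm that the GlusterFS volume is mounted:

kubectl apply -f glusterfs-pod.yaml
kubectl exec glusterfs -- df -h /mnt/glusterfs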

Monitoring

Resetting The Kubernetes Cluster

If for any reason you would like to reset the Kubernetes cluster, you can easily run the following Ansible playbook, which will take care of that for you.

ansible-playbook -i inventory/ playbooks/reset_cluster.yml

License

MIT

Author Information

Larry Smith Jr.

Buy Me A Coffee

ansible-rpi-k8s-cluster's People

Contributors

aaronkjones, mrlesmithjr


ansible-rpi-k8s-cluster's Issues

download.gluster.org cannot be resolved on slaves, but on master

I get

ok: [rpi-k8s-1]
fatal: [rpi-k8s-3]: FAILED! => {"changed": false, "msg": "Failed to connect to download.gluster.org at port 443: [Errno -3] Temporärer Fehler bei der Namensauflösung"}

...any idea why the master can resolve the URL, but the slaves cannot? (The German error message translates to "Temporary failure in name resolution.")

Kind regards,

Daniel

Adjust delays for wait_for_connection

During a fresh deployment I ran into some timing issues on wait_for_connection tasks. When a reboot occurred, the wait_for_connection task would catch the node as being up before it had actually gone down for the reboot, causing the following tasks to fail because the node was in the middle of rebooting.

GlusterFS breaks on Raspbian 10 Buster

Cause

I believe the glusterfs-server service was renamed to glusterd and glustereventsd

Error output

TASK [ansible-glusterfs : debian | starting GlusterFS] ***************************************************************************************************************************************************************************************
Wednesday 03 July 2019  20:58:07 -0700 (0:00:05.627)       0:07:54.650 ********
[DEPRECATION WARNING]: evaluating rpi_k8s_use_glusterfs as a bare variable, this behaviour will go away and you might need to add |bool to the expression in the future. Also see CONDITIONAL_BARE_VARS configuration toggle.. This feature
will be removed in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[DEPRECATION WARNING]: evaluating rpi_k8s_use_glusterfs as a bare variable, this behaviour will go away and you might need to add |bool to the expression in the future. Also see CONDITIONAL_BARE_VARS configuration toggle.. This feature
will be removed in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[DEPRECATION WARNING]: evaluating rpi_k8s_use_glusterfs as a bare variable, this behaviour will go away and you might need to add |bool to the expression in the future. Also see CONDITIONAL_BARE_VARS configuration toggle.. This feature
will be removed in version 2.12. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
fatal: [rpi-k8s-1]: FAILED! => {"changed": false, "msg": "Could not find the requested service glusterfs-server: host"}
ok: [rpi-k8s-2]
ok: [rpi-k8s-3]

Manually installing glusterfs-server and glusterfs-client

Setting up glusterfs-client (5.5-3) ...
Setting up glusterfs-server (5.5-3) ...
glusterd.service is a disabled or a static unit, not starting it.
glustereventsd.service is a disabled or a static unit, not starting it.
Processing triggers for man-db (2.8.5-2) ...

glusterd service

$ systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/lib/systemd/system/glusterd.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:glusterd(8)

glustereventsd service

systemctl status glustereventsd
● glustereventsd.service - Gluster Events Notifier
   Loaded: loaded (/lib/systemd/system/glustereventsd.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:glustereventsd(8)

Work-around

- name: debian | starting GlusterFS Daemon
  service:
    name: "glusterd"
    state: started
    enabled: yes
  when: >
        inventory_hostname in groups[glusterfs_server_group] or
        (groups[glusterfs_arbiter_group] is defined and
        inventory_hostname in groups[glusterfs_arbiter_group])

- name: debian | starting GlusterFS Events Daemon
  service:
    name: "glustereventsd"
    state: started
    enabled: yes
  when: >
        inventory_hostname in groups[glusterfs_server_group] or
        (groups[glusterfs_arbiter_group] is defined and
        inventory_hostname in groups[glusterfs_arbiter_group])
  • re-run task

Static Routing issue (Kali VM) and kube-apiserver process using 100%CPU

Configuration:
Ansible Control Machine (Kali 2018.4 VM), using VMWare Workstation (natt'ed network)
Pull of ansible-rpi-k8s-project of 11/7/18.

Since both issues described here (the static routing issue and the kube-apiserver CPU spike) might be related, I am keeping them together in one report for now.

Problem 1: Static Routing issue - SSH Kali host
Cannot ssh from Ansible Control Machine (192.168.80.135) to rpi-k8s-1 (192.168.1.100, statically assigned). ssh to WIFI (DHCP) address (15.15.15.147) works fine

Expected results:
ssh from 192.168.80.135 to 192.168.1.100 to work (Similar as to 15.15.15.147)

Problem 2: kube-apiserver process consumes CPU and freezes rpi-k8s-1 within minutes
After successful execution of ansible-playbook -i inventory playbooks/deploy.yml (made possible by resolving issues 1 & 2 listed below), the kube-apiserver process runs out of memory and freezes rpi-k8s-1 within minutes.
Workaround: manually kill the kube-apiserver process every few minutes

Expected results:
Expect rpi-k8s-1 to continue to function (cap memory usage of the kube-apiserver process)

Resolved Issue 1:
ssh to 15.15.15.147 (WIFI rpi-k8s-1) fails.
SOLVED: Updated ansible.cfg: ssh_args = -o IPQos=throughput

Resolved Issue 2:
ansible-playbook -i inventory playbooks/deploy.yml fails to contact slave nodes

Snippet error in output

…..
TASK [Gathering Facts] ********************************************************************************************
Tuesday 06 November 2018  13:03:26 -0800 (0:00:01.136)       0:00:50.157 ******
fatal: [rpi-k8s-2]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to remote host \"192.168.100.128\". Make sure this host can be reached over ssh", "unreachable": true}
fatal: [rpi-k8s-3]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to remote host \"192.168.100.129\". Make sure this host can be reached over ssh", "unreachable": true}

SOLVED: Updated inventory/group_vars/rpi_k8s_slaves/all.yml:
ansible_ssh_common_args: '-o ProxyCommand="ssh -o IPQos=throughput -W %h:%p -q {{ ansible_user }}@{{ jumphost_ip }}"'

After resolving issue 1 & 2 : ansible-playbook -i inventory playbooks/deploy.yml completes successfully

Continued/existing problems:

Problem 1: Cannot ssh to rpi-k8s-1 (192.168.1.100, statically assigned)
Ssh to rpi-k8s-1 (15.15.15.147, wifi, dhcp) works fine

Problem 2: kube-apiserver consumes 100% CPU every few minutes and freezes rpi-k8s-1

Inconsistent selection of k8s_master

I was having a strange thing where after a certain point in the deployment it would suddenly think that the cluster master was the first slave node, not the actual master node, leading to all sorts of weird behaviour.

The way the k8s_master is selected in roles/ansible-k8s/tasks/set_facts.yml is

- name: set_facts | Setting K8s Master
  set_fact:
    k8s_master: "{{ groups[k8s_master_group][0] }}"
  tags:
    - k8s_get_dashboard

However ordering in a group is not guaranteed, it's a python list, which aren't ordered (see ansible/ansible#31735).

A fix would be to have it look instead at groups["rpi_k8s_master"][0], rather than the rpi_k8s group, which has all the nodes in it.

The other place that this happens is in roles/ansible-k8s/defaults/main.yml, except here it's

k8s_cluster_group: k8s

I can't see how that works at all, k8s isn't even a group. I assume it's being overwritten in the task.

I have a PR ready to go that I think fixes these.

Kubernetes dashboard path changed

They changed the path of the dashboard YAML file:

  • Old path
    https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard-arm.yaml

  • New path
    https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard-arm.yaml

Error with deployment.

Hi,

I am getting:

TASK [Ensuring dnsmasq Is Started And Enabled On Boot] *******************************************************************************************************************************
Saturday 28 December 2019  22:29:33 -0800 (0:00:03.963)       0:02:26.570 *****
fatal: [rpi-k8s-1]: FAILED! => {"changed": false, "msg": "Could not find the requested service dnsmasq: host"}

PLAY RECAP ***************************************************************************************************************************************************************************
localhost                  : ok=2    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0
rpi-k8s-1                  : ok=16   changed=10   unreachable=0    failed=1    skipped=2    rescued=0    ignored=0

And it looks like the installation failed because kubectl or kubeadm are not installed on the master at all. Thanks for your help.

Deploy fails on capturing cluster nodes

To rule out this being related to #7, I tried using glusterfs 3.8 and 3.10. The deployment fails on capturing cluster nodes.

Ansible output

TASK [ansible-k8s : cluster_summary | Capturing Cluster Nodes] **************************************************************************************************************************
Tuesday 01 May 2018  10:04:48 -0700 (0:00:00.097)       0:17:13.327 ***********
skipping: [rpi-k8s-2]
skipping: [rpi-k8s-3]
skipping: [rpi-k8s-4]
FAILED - RETRYING: cluster_summary | Capturing Cluster Nodes (30 retries left).
FAILED - RETRYING: cluster_summary | Capturing Cluster Nodes (29 retries left).
FAILED - RETRYING: cluster_summary | Capturing Cluster Nodes (28 retries left).
FAILED - RETRYING: cluster_summary | Capturing Cluster Nodes (27 retries left).
fatal: [rpi-k8s-1]: FAILED! => {"msg": "Timeout (12s) waiting for privilege escalation prompt: "}

Actual error

root@rpi-k8s-1:/home/pi# kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes                    
Unable to connect to the server: net/http: TLS handshake timeout

API server listening

tcp6      78      0 :::6443                 :::*                    LISTEN      2132/kube-apiserver

Container logs

root@rpi-k8s-1:/var/log# docker logs k8s_kube-apiserver_kube-apiserver-rpi-k8s-1_kube-system_d7500013e70b20a01b9a66898ee099ff_9
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I0501 17:59:18.793334       1 server.go:135] Version: v1.10.2
I0501 17:59:18.793742       1 server.go:724] external host was not specified, using 192.168.100.1

Attempted to do kubeadm reset and re-run it (after rebooting the cluster). Same error occurs.

Adding K8s Repo step fails on Buster

TASK [ansible-k8s : debian | Adding K8s Repo] ************************************************************************************************************************************************************************************************
Thursday 04 July 2019  13:30:20 -0700 (0:00:05.721)       0:03:57.353 *********
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: apt.cache.FetchFailedException: E:The repository 'http://apt.kubernetes.io kubernetes-buster Release' does not have a Release file.
fatal: [rpi-k8s-1]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n  File \"<stdin>\", line 114, in <module>\n  File \"<stdin>\", line 106, in _ansiballz_main\n  File \"<stdin>\", line 49, in invoke_module\n  File \"/tmp/ansible_apt_repository_payload_Qw3oql/__main__.py\", line 554, in <module>\n  File \"/tmp/ansible_apt_repository_payload_Qw3oql/__main__.py\", line 546, in main\n  File \"/usr/lib/python2.7/dist-packages/apt/cache.py\", line 562, in update\n    raise FetchFailedException(e)\napt.cache.FetchFailedException: E:The repository 'http://apt.kubernetes.io kubernetes-buster Release' does not have a Release file.\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

Can it work with Rpi 2

This is awesome!

Can you foresee any challenges in trying to adapt this to the Rpi 2?

Maybe the OS.

Kubernetes Package version conflicts

When trying to install the kubernetes packages I get

"TASK [ansible-k8s : debian | Installing K8s Packages] *************************************************************************************************************************************
Sunday 28 April 2019 09:31:52 +0200 (0:00:00.061) 0:01:51.346 **********
failed: [rpi-k8s-1] (item=[u'kubelet', u'kubeadm', u'kubectl', u'kubernetes-cni']) => {"cache_update_time": 1556435223, "cache_updated": false, "changed": false, "item": ["kubelet", "kubeadm", "kubectl", "kubernetes-cni"], "msg": "'/usr/bin/apt-get -y -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold" install 'kubelet' 'kubeadm' 'kubectl' 'kubernetes-cni'' failed: E: Unable to correct problems, you have held broken packages.\n", "rc": 100, "stderr": "E: Unable to correct problems, you have held broken packages.\n", "stderr_lines": ["E: Unable to correct problems, you have held broken packages."], "stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nSome packages could not be installed. This may mean that you have\nrequested an impossible situation or if you are using the unstable\ndistribution that some required packages have not yet been created\nor been moved out of Incoming.\nThe following information may help to resolve the situation:\n\nThe following packages have unmet dependencies:\n kubeadm : Depends: kubernetes-cni (= 0.6.0) but 0.7.5-00 is to be installed\n kubelet : Depends: kubernetes-cni (= 0.6.0) but 0.7.5-00 is to be installed\n", "stdout_lines": ["Reading package lists...", "Building dependency tree...", "Reading state information...", "Some packages could not be installed. This may mean that you have", "requested an impossible situation or if you are using the unstable", "distribution that some required packages have not yet been created", "or been moved out of Incoming.", "The following information may help to resolve the situation:", "", "The following packages have unmet dependencies:", " kubeadm : Depends: kubernetes-cni (= 0.6.0) but 0.7.5-00 is to be installed", " kubelet : Depends: kubernetes-cni (= 0.6.0) but 0.7.5-00 is to be installed"]}"

Reset Cluster Fails

Description

Reset cluster playbook fails due to prompt Are you sure you want to proceed? [y/N]

Error

fatal: [rpi-k8s-2]: FAILED! => {"changed": true, "cmd": ["kubeadm", "reset"], "delta": "0:00:00.105217", "end": "2019-07-05 19:16:51.451584", "msg": "non-zero return code", "rc": 1, "start": "2019-07-05 19:16:51.346367", "stderr": "Aborted reset operation", "stderr_lines": ["Aborted reset operation"], "stdout": "[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.\n[reset] Are you sure you want to proceed? [y/N]: ", "stdout_lines": ["[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.", "[reset] Are you sure you want to proceed? [y/N]: "]}

Resolution

Use kubeadm reset -f instead of kubeadm reset

dnsmasq resolver doesn't forward (intermittent)

I had issues resolving raspbian.raspberrypi.org from the slaves (for installing NTP); after rebooting the master it worked OK. The clients later tried to resolve download.gluster.org and couldn't; rebooting the master again made it work. I also tried using ISC DHCP instead of dnsmasq, with no better outcome.

Error when updating hosts


TASK [Updating /etc/hosts] *****************************************************
Friday 22 February 2019 21:03:39 -0500 (0:00:01.352) 0:00:30.846 *******
fatal: [rpi-k8s-1]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'dict object' has no attribute 'ipv4'"}
to retry, use: --limit

sshpass

How do I eliminate the need to install the sshpass program? It appears to be blacklisted.

Error {"changed": false, "msg": "AnsibleUndefinedVariable: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'ansible_eth0'"}

Hi,

Thank you for your help. I did make sure to set the jump IP and the correct range following the doc, but I keep getting this error:

{"changed": false, "msg": "AnsibleUndefinedVariable: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'ansible_eth0'"}

The log:

TASK [Ensuring netfilter-persistent Is Enabled and Starts On Reboot] **************************************************************************************************************************************
Wednesday 20 February 2019 15:12:06 -0800 (0:00:03.741) 0:00:12.074 ****
ok: [rpi-k8s-1]

TASK [Setting Up IP Forwarding On Primary] ****************************************************************************************************************************************************************
Wednesday 20 February 2019 15:12:06 -0800 (0:00:00.765) 0:00:12.840 ****
ok: [rpi-k8s-1]

TASK [Configuring Static IP On Primary] *******************************************************************************************************************************************************************
Wednesday 20 February 2019 15:12:07 -0800 (0:00:00.612) 0:00:13.453 ****
ok: [rpi-k8s-1]

TASK [Updating /etc/hosts] ********************************************************************************************************************************************************************************
Wednesday 20 February 2019 15:12:09 -0800 (0:00:01.793) 0:00:15.246 ****
fatal: [rpi-k8s-1]: FAILED! => {"changed": false, "msg": "AnsibleUndefinedVariable: 'ansible.vars.hostvars.HostVarsVars object' has no attribute 'ansible_eth0'"}
to retry, use: --limit @/Volumes/Fifth/Backup/Softwares/Raspberry Pi/ansible-rpi-k8s-cluster/playbooks/deploy.retry

PLAY RECAP ************************************************************************************************************************************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0
rpi-k8s-1 : ok=6 changed=0 unreachable=0 failed=1

Wednesday 20 February 2019 15:12:09 -0800 (0:00:00.372) 0:00:15.619 ****

Updating APT Cache --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 6.10s
Installing iptables-persistent --------------------------------------------------------------------------------------------------------------------------------------------------------------------- 3.74s
Gathering Facts ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2.06s
Configuring Static IP On Primary ------------------------------------------------------------------------------------------------------------------------------------------------------------------- 1.79s
Ensuring netfilter-persistent Is Enabled and Starts On Reboot -------------------------------------------------------------------------------------------------------------------------------------- 0.77s
Setting Up IP Forwarding On Primary ---------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.61s
Updating /etc/hosts -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.37s
Fail When DHCP Scope Ending Is Not Correct --------------------------------------------------------------------------------------------------------------------------------------------------------- 0.03s
Checking To Make Sure Both DNSMasq and ISC-DHCP Are Not True --------------------------------------------------------------------------------------------------------------------------------------- 0.03s
Capturing The Current DHCP Scope Ending IP --------------------------------------------------------------------------------------------------------------------------------------------------------- 0.03s
Calculating The Required DHCP Scope Ending IP ------------------------------------------------------------------------------------------------------------------------------------------------------ 0.03s
