
microk8s's Introduction

The smallest, fastest Kubernetes

Single-package fully conformant lightweight Kubernetes that works on 42 flavours of Linux. Perfect for:

  • Developer workstations
  • IoT
  • Edge
  • CI/CD

Canonical might have assembled the easiest way to provision a single node Kubernetes cluster - Kelsey Hightower

Why MicroK8s?

  • Small. Developers want the smallest K8s for laptop and workstation development. MicroK8s provides a standalone K8s compatible with Azure AKS, Amazon EKS, Google GKE when you run it on Ubuntu.

  • Simple. Minimize administration and operations with a single-package install that has no moving parts for simplicity and certainty. All dependencies and batteries included.

  • Secure. Updates are available for all security issues and can be applied immediately or scheduled to suit your maintenance cycle.

  • Current. MicroK8s tracks upstream and releases beta, RC and final bits the same day as upstream K8s. You can track latest K8s or stick to any release version from 1.10 onwards.

  • Comprehensive. MicroK8s includes a curated collection of manifests for common K8s capabilities and services:

    • Service Mesh: Istio, Linkerd
    • Serverless: Knative
    • Monitoring: Fluentd, Prometheus, Grafana, Metrics
    • Ingress, DNS, Dashboard, Clustering
    • Automatic updates to the latest Kubernetes version
    • GPGPU bindings for AI/ML

Drop us a line at MicroK8s in the Wild if you are doing something fun with MicroK8s!

Quickstart

Install MicroK8s with:

snap install microk8s --classic

MicroK8s includes a microk8s kubectl command:

sudo microk8s kubectl get nodes
sudo microk8s kubectl get services

To use MicroK8s with your existing kubectl:

sudo microk8s kubectl config view --raw > $HOME/.kube/config

User access without sudo

The microk8s user group is created during the snap installation. Users in that group are granted access to microk8s commands. To add a user to that group:

sudo usermod -a -G microk8s <username>
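
For the group change to take effect, the user needs a new login session. A minimal sketch (assuming a standard shell; either command works):

su - $USER
newgrp microk8s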

Kubernetes add-ons

MicroK8s installs a barebones upstream Kubernetes. Additional services like dns and the Kubernetes dashboard can be enabled using the microk8s enable command.

sudo microk8s enable dns
sudo microk8s enable dashboard

Use microk8s status to see a list of enabled and available addons. You can find the addon manifests and/or scripts under ${SNAP}/actions/, with ${SNAP} pointing by default to /snap/microk8s/current.

Documentation

The official docs are maintained in the Kubernetes upstream Discourse.

Take a look at the build instructions if you want to contribute to MicroK8s.

Get it from the Snap Store


microk8s's Issues

Setup tracks for 1.10, 1.11, etc

While I think we should expect most users to install from plain stable once released, there will be cases where users want to track a specific minor version of K8s or prevent auto-updates from moving them from 1.X to 1.Y.
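
As a sketch of what this could look like from the user's side once tracks exist (the channel names below are illustrative assumptions, not channels that are necessarily published):

sudo snap install microk8s --classic --channel=1.11/stable
sudo snap refresh microk8s --channel=1.11/stable   # switch an existing install to a track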

Missing ingress rule for http://www.microk8s.io/? -> :'(

Hi @marcoceppi,

I think you have a missing ingress rule or misconfigured LB for the .io site, the following work fine:

But the following do not work:

Good work here @ktsakalozos, better than Conjure-up for single node installs. My only other suggestion would be to add instructions for expanding the cluster through discoverer/CDK to get production support.

Cheers

kubeflow 0.1.2 deployment -> ambassador running into CrashLoopBackOff

I'm working on a VM (GCP Compute Engine, 8 cpu, 20G mem, 25G drive). I install microk8s and kubeflow in the following way:

wget https://bit.ly/2tp2aOo -O install-kubeflow-pre-micro.sh && chmod a+x install-kubeflow-pre-micro.sh && sudo  ./install-kubeflow-pre-micro.sh
export KUBECONFIG=/snap/microk8s/current/client.config
export GITHUB_TOKEN=${YOUR_GITHUB_TOKEN}
wget https://bit.ly/2tndL0g -O install-kubeflow.sh && chmod a+x install-kubeflow.sh && ./install-kubeflow.sh
sudo iptables -P FORWARD ACCEPT

I then inspect the deployment. I run the following:

kubectl get svc -n=kubeflow
kubectl get pods -n=kubeflow  # shows state of ambassador pods
kubectl get pods -n=kube-system # shows state of kube-dns

Might be similar to what happens in minikube; here's an open issue: kubeflow/kubeflow#734

Feels like it could be related to the state of kube-dns, i.e. whether it is ready before kubeflow runs.

docker images ImagePullBackOff

How do you make Docker images built locally available to the microk8s docker daemon?
I cannot find any documentation about that.
The microk8s docker registry is only in --edge, which is unstable; I am running the --beta version.
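
One hedged workaround sketch, assuming the snap still ships a microk8s.docker wrapper around its bundled daemon (the image name is a placeholder): export the locally built image and load it into the MicroK8s docker daemon.

docker save mylocal/image:latest > image.tar
microk8s.docker load < image.tar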

DNS is crashlooping

$ microk8s.kubectl get all --all-namespaces 
NAMESPACE     NAME                                                  READY     STATUS             RESTARTS   AGE
kube-system   pod/heapster-v1.5.2-84f5c8795f-m466m                  4/4       Running            0          23m
kube-system   pod/kube-dns-864b8bdc77-6mst4                         2/3       CrashLoopBackOff   15         23m
kube-system   pod/kubernetes-dashboard-6948bdb78-262gm              0/1       CrashLoopBackOff   8          23m
kube-system   pod/monitoring-influxdb-grafana-v4-7ffdc569b8-dbmvg   2/2       Running            0          23m

NAMESPACE     NAME                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
default       service/kubernetes             ClusterIP   10.152.183.1     <none>        443/TCP             23m
kube-system   service/heapster               ClusterIP   10.152.183.109   <none>        80/TCP              23m
kube-system   service/kube-dns               ClusterIP   10.152.183.10    <none>        53/UDP,53/TCP       23m
kube-system   service/kubernetes-dashboard   ClusterIP   10.152.183.178   <none>        443/TCP             23m
kube-system   service/monitoring-grafana     ClusterIP   10.152.183.68    <none>        80/TCP              23m
kube-system   service/monitoring-influxdb    ClusterIP   10.152.183.252   <none>        8083/TCP,8086/TCP   23m

NAMESPACE     NAME                                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/heapster-v1.5.2                  1         1         1            1           23m
kube-system   deployment.apps/kube-dns                         1         1         1            0           23m
kube-system   deployment.apps/kubernetes-dashboard             1         1         1            0           23m
kube-system   deployment.apps/monitoring-influxdb-grafana-v4   1         1         1            1           23m

NAMESPACE     NAME                                                        DESIRED   CURRENT   READY     AGE
kube-system   replicaset.apps/heapster-v1.5.2-84f5c8795f                  1         1         1         23m
kube-system   replicaset.apps/kube-dns-864b8bdc77                         1         1         0         23m
kube-system   replicaset.apps/kubernetes-dashboard-6948bdb78              1         1         0         23m
kube-system   replicaset.apps/monitoring-influxdb-grafana-v4-7ffdc569b8   1         1         1         23m

Microk8s will not start after a reboot

Seems etcd is failing to start because the Unix socket already exists.

ubuntu@ip-172-31-24-132:~$ sudo systemctl status snap.microk8s.daemon-etcd                                                                                                                                         
snap.microk8s.daemon-etcd.service - Service for snap application microk8s.daemon-etcd
   Loaded: loaded (/etc/systemd/system/snap.microk8s.daemon-etcd.service; enabled)
   Active: failed (Result: start-limit) since Thu 2018-06-07 17:06:00 UTC; 14s ago
  Process: 3171 ExecStart=/usr/bin/snap run microk8s.daemon-etcd (code=exited, status=1/FAILURE)
Jun 07 16:58:50 ip-172-31-24-132 microk8s.daemon-etcd[1491]: 2018-06-07 16:58:50.031560 I | etcdmain: Go Version: go1.9.5
Jun 07 16:58:50 ip-172-31-24-132 microk8s.daemon-etcd[1491]: 2018-06-07 16:58:50.031786 I | etcdmain: Go OS/Arch: linux/amd64
Jun 07 16:58:50 ip-172-31-24-132 microk8s.daemon-etcd[1491]: 2018-06-07 16:58:50.032001 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
Jun 07 16:58:50 ip-172-31-24-132 microk8s.daemon-etcd[1491]: 2018-06-07 16:58:50.032252 N | etcdmain: the server is already initialized as member before, starting as etcd member...
Jun 07 16:58:50 ip-172-31-24-132 microk8s.daemon-etcd[1491]: 2018-06-07 16:58:50.032651 I | embed: listening for peers on http://localhost:2380
Jun 07 16:58:50 ip-172-31-24-132 microk8s.daemon-etcd[1491]: 2018-06-07 16:58:50.032918 C | etcdmain: listen unix etcd.socket:2379: bind: address already in use

The workaround is to

sudo rm  /var/snap/microk8s/current/etcd.socket\:2379 
sudo systemctl restart snap.microk8s.daemon-etcd

No public DNS resolution inside pods

I cannot resolve public DNS inside running pods (with and without the dns addon), although internal k8s DNS works fine with the dns addon enabled. ufw is disabled. Running on DigitalOcean.

$ snap version
snap    2.33.1ubuntu2
snapd   2.33.1ubuntu2
series  16
ubuntu  16.04
kernel  4.4.0-130-generic
$ snap list
Name      Version    Rev   Tracking  Developer  Notes
core      16-2.33.1  4917  stable    canonical  core
microk8s  v1.11.0    104   beta      canonical  classic
any-pod$ curl google.com
curl: (6) Could not resolve host: google.com

Feature request: Support IPv6 on pod side

Hi,

It would be awesome if pods can get an IPv6 connectivity.
It's possible using Calico as the CNI provider, but I don't know if it's possible in your current topology.

MicroK8s could become our local development tool if I'm able to reach IPv6 services.

Thanks !

Pods do not get killed

Killing a pod moves it to Terminated, but it gets stuck there. This happens in devmode, where AppArmor just gives a warning. However, in this case we have the following in the logs:

[ 6206.004450] audit: type=1400 audit(1525881242.146:57393): apparmor="DENIED" operation="signal" profile="docker-default" pid=16609 comm="containerd" requested_mask="receive" denied_mask="receive" signal=term peer="snap.microk8s.daemon-docker"

Seems that docker runs with its own profile even though it is snapped. It seems to be the issue described here: https://forum.snapcraft.io/t/htop-snap-unable-to-signal-aa-enforced-processes/5222/2

systemd logs an issue with service status

I see this in my logs:

Jul 22 12:14:54 mark-X1Y2 snapd[21742]: 2018/07/22 12:14:54.247087 snap.go:291: cannot get status of service "daemon-apiserver": cannot get service status: empty field "Type" in ‘systemctl show’ output
Jul 22 12:14:54 mark-X1Y2 snapd[21742]: 2018/07/22 12:14:54.252146 snap.go:291: cannot get status of service "daemon-controller-manager": cannot get service status: empty field "Type" in ‘systemctl show’ output
Jul 22 12:14:54 mark-X1Y2 snapd[21742]: 2018/07/22 12:14:54.256705 snap.go:291: cannot get status of service "daemon-docker": cannot get service status: empty field "Type" in ‘systemctl show’ output
Jul 22 12:14:54 mark-X1Y2 snapd[21742]: 2018/07/22 12:14:54.261645 snap.go:291: cannot get status of service "daemon-etcd": cannot get service status: empty field "Type" in ‘systemctl show’ output
Jul 22 12:14:54 mark-X1Y2 snapd[21742]: 2018/07/22 12:14:54.267227 snap.go:291: cannot get status of service "daemon-kubelet": cannot get service status: empty field "Type" in ‘systemctl show’ output
Jul 22 12:14:54 mark-X1Y2 snapd[21742]: 2018/07/22 12:14:54.273328 snap.go:291: cannot get status of service "daemon-proxy": cannot get service status: empty field "Type" in ‘systemctl show’ output
Jul 22 12:14:54 mark-X1Y2 snapd[21742]: 2018/07/22 12:14:54.280097 snap.go:291: cannot get status of service "daemon-scheduler": cannot get service status: empty field "Type" in ‘systemctl show’ output

Is that an issue? Can it be avoided?

0/1 nodes are available 1 node(s) had diskpressure

I am trying to schedule a pod on my local microk8s cluster. In the events section I see the warning "0/1 nodes are available: 1 node(s) had disk pressure". How do I check how much space the node has, and how do I set a bigger threshold?
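
A minimal inspection sketch, assuming a default snap install (paths are the usual snap locations):

microk8s.kubectl describe node $(hostname) | grep -A 8 Conditions   # node conditions seen by the kubelet
df -h /var/snap/microk8s/common                                     # free space on the filesystem backing the snap data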

Installing microk8s breaks existing workload e.g. mysql on LXD due to br_netfilter + kubenet SNAT

I have "lxdbr0" as a LXD network bridge for my Juju test bed. After installing microk8s, mysql / percona-cluster charm deployments failed because those charms relies on peers' source IP addresses.

br_netfilter and the kubenet SNAT iptables rule are applied unconditionally after installing microk8s, so LXD private network communication is also affected by the MASQUERADE rule. For example, a packet from 10.0.8.102 to 10.0.8.2 will be rewritten as 10.0.8.1 -> 10.0.8.2 on lxdbr0, which will then be blocked by MySQL source IP address ACLs.

Although the iptables rule is added by kubelet and kubenet, it would be nice if microk8s could apply some conditions to the rule.
https://github.com/kubernetes/kubernetes/blob/692f9bb7b1fa6ca72ddd5a305607d79f9684e907/pkg/kubelet/dockershim/network/kubenet/kubenet_linux.go#L169-L181

[existing lxdbr0 bridge]

$ ip a s dev lxdbr0
8: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:22:7a:6f:a3:09 brd ff:ff:ff:ff:ff:ff
    inet 10.0.8.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::30b2:deff:fe7a:383f/64 scope link 
       valid_lft forever preferred_lft forever

[existing iptables rules]

$ sudo iptables -t nat -L POSTROUTING -v
Chain POSTROUTING (policy ACCEPT 164 packets, 12290 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   30  1836 MASQUERADE  all  --  any    any     10.0.8.0/24         !10.0.8.0/24          /* generated for LXD network lxdbr0 */
    4   309 MASQUERADE  all  --  any    any     10.112.155.0/24     !10.112.155.0/24     
    0     0 MASQUERADE  udp  --  any    any     10.112.155.0/24     !10.112.155.0/24      masq ports: 1024-65535
    0     0 MASQUERADE  tcp  --  any    any     10.112.155.0/24     !10.112.155.0/24      masq ports: 1024-65535
    0     0 RETURN     all  --  any    any     10.112.155.0/24      255.255.255.255     
    0     0 RETURN     all  --  any    any     10.112.155.0/24      base-address.mcast.net/24 
    5   332 RETURN     all  --  any    any     192.168.122.0/24     base-address.mcast.net/24 
    0     0 RETURN     all  --  any    any     192.168.122.0/24     255.255.255.255     
    0     0 MASQUERADE  tcp  --  any    any     192.168.122.0/24    !192.168.122.0/24     masq ports: 1024-65535
    0     0 MASQUERADE  udp  --  any    any     192.168.122.0/24    !192.168.122.0/24     masq ports: 1024-65535
    0     0 MASQUERADE  all  --  any    any     192.168.122.0/24    !192.168.122.0/24    

[traffic 10.0.8.102 -> 10.0.8.2]

Jul  4 17:44:08    462 10.0.8.102 TCP_MISS/304 360 GET http://archive.ubuntu.com/ubuntu/dists/xenial/InRelease - HIER_DIRECT/91.189.88.162 -
Jul  4 17:44:08    473 10.0.8.102 TCP_REFRESH_UNMODIFIED/200 107244 GET http://security.ubuntu.com/ubuntu/dists/xenial-security/InRelease - HIER_DIRECT/91.189.88.162 -
Jul  4 17:44:08    235 10.0.8.102 TCP_REFRESH_UNMODIFIED/200 109644 GET http://archive.ubuntu.com/ubuntu/dists/xenial-updates/InRelease - HIER_DIRECT/91.189.88.162 -
Jul  4 17:44:08    476 10.0.8.102 TCP_REFRESH_UNMODIFIED/200 107272 GET http://archive.ubuntu.com/ubuntu/dists/xenial-backports/InRelease - HIER_DIRECT/91.189.88.162 -

-> source IP = 10.0.8.102

[install microk8s]

$ sudo snap install microk8s --classic --edge

[new iptables rules]

$ sudo iptables -t nat -L POSTROUTING -v
Chain POSTROUTING (policy ACCEPT 49 packets, 2947 bytes)
 pkts bytes target     prot opt in     out     source               destination         
  341 20493 KUBE-POSTROUTING  all  --  any    any     anywhere             anywhere             /* kubernetes postrouting rules */
    0     0 MASQUERADE  all  --  any    !docker0  172.17.0.0/16        anywhere            
   30  1836 MASQUERADE  all  --  any    any     10.0.8.0/24         !10.0.8.0/24          /* generated for LXD network lxdbr0 */
    4   309 MASQUERADE  all  --  any    any     10.112.155.0/24     !10.112.155.0/24     
    0     0 MASQUERADE  udp  --  any    any     10.112.155.0/24     !10.112.155.0/24      masq ports: 1024-65535
    0     0 MASQUERADE  tcp  --  any    any     10.112.155.0/24     !10.112.155.0/24      masq ports: 1024-65535
    0     0 RETURN     all  --  any    any     10.112.155.0/24      255.255.255.255     
    0     0 RETURN     all  --  any    any     10.112.155.0/24      base-address.mcast.net/24 
    6   405 RETURN     all  --  any    any     192.168.122.0/24     base-address.mcast.net/24 
    0     0 RETURN     all  --  any    any     192.168.122.0/24     255.255.255.255     
    0     0 MASQUERADE  tcp  --  any    any     192.168.122.0/24    !192.168.122.0/24     masq ports: 1024-65535
    0     0 MASQUERADE  udp  --  any    any     192.168.122.0/24    !192.168.122.0/24     masq ports: 1024-65535
    0     0 MASQUERADE  all  --  any    any     192.168.122.0/24    !192.168.122.0/24    
    3   222 MASQUERADE  all  --  any    any     anywhere            !10.152.183.0/24      /* kubenet: SNAT for outbound traffic from cluster */ ADDRTYPE match dst-type !LOCAL

[traffic 10.0.8.102 -> 10.0.8.2]

Jul  4 17:48:36    390 10.0.8.1 TCP_REFRESH_UNMODIFIED/200 107244 GET http://security.ubuntu.com/ubuntu/dists/xenial-security/InRelease - HIER_DIRECT/91.189.91.23 -
Jul  4 17:48:36    514 10.0.8.1 TCP_MISS/304 360 GET http://archive.ubuntu.com/ubuntu/dists/xenial/InRelease - HIER_DIRECT/91.189.88.162 -
Jul  4 17:48:36    234 10.0.8.1 TCP_REFRESH_UNMODIFIED/200 109644 GET http://archive.ubuntu.com/ubuntu/dists/xenial-updates/InRelease - HIER_DIRECT/91.189.88.162 -
Jul  4 17:48:37    472 10.0.8.1 TCP_REFRESH_UNMODIFIED/200 107272 GET http://archive.ubuntu.com/ubuntu/dists/xenial-backports/InRelease - HIER_DIRECT/91.189.88.162 -

-> source IP = 10.0.8.1

FWIW,

$ sudo rmmod br_netfilter

OR

$ sudo iptables -t nat -D POSTROUTING ! -d 10.152.183.0/24 \
    -m comment --comment "kubenet: SNAT for outbound traffic from cluster" \
    -m addrtype ! --dst-type LOCAL -j MASQUERADE

will temporarily disable the unwanted behavior, but it will break Kubernetes...

Problems connecting to the API server on a system with multiple NICs

See #70 (comment) for the originating thread.

When I try installing microk8s on a system with multiple NICs, kube-system pods are unable to connect to the kube-apiserver. For example, hostpath-provisioner consistently gets stuck in an error state, logging:

$ microk8s.kubectl logs pod/hostpath-provisioner-9979c7f64-n5jjl --namespace kube-system
F0816 20:40:47.825000       1 hostpath-provisioner.go:162] Error getting server version: Get https://10.152.183.1:443/version: dial tcp 10.152.183.1:443: i/o timeout

I think in this case kube-apiserver (and possibly other pieces?) might need to be configured to listen on specific network addresses that the pods will be able to access. I propose a solution similar to lxd init, a microk8s.init (or .setup, as you like) that prompts for any information necessary to get the kubelet set up properly on the host when the default or autodetected configuration doesn't work.

If such a configuration is possible... I'm still not quite sure how to get kube-apiserver working on this machine. I tried modifying its command line args to bind to the primary NIC, but that didn't seem to help.
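
For reference, a hedged sketch of one way to adjust those args on a snap install (the args path is the usual snap location and --advertise-address is a standard kube-apiserver flag; as noted above, this alone did not seem to help here):

sudo vi /var/snap/microk8s/current/args/kube-apiserver    # e.g. add --advertise-address=<primary NIC IP>
sudo systemctl restart snap.microk8s.daemon-apiserver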

microk8s.disable doesn't shut down k8s processes - v1.10.3

I would like to be able to completely shut down microk8s without uninstalling it, but microk8s.disable leaves kubelet and other processes running.

If I kill them, they restart.

ubuntu 18.04 LTS

Name      Version    Rev   Tracking  Developer  Notes
core      16-2.32.8  4650  stable    canonical  core
microk8s  v1.10.3    55    beta      canonical  classic

# microk8s.disable
# sleep 20
# ps x | grep microk | colrm 80

 2046 ?        Ssl  258:16 /snap/microk8s/55/kube-apiserver --v=4 --insecure-bi
 2053 ?        Ssl  110:32 /snap/microk8s/55/etcd --data-dir=/var/snap/microk8s
 2063 ?        Ssl   29:56 /snap/microk8s/55/usr/bin/dockerd -H unix:///var/sna
 2560 ?        Ssl   10:44 containerd -l unix:///var/snap/microk8s/common/var/r
25494 ?        Ssl    0:16 /snap/microk8s/55/kubelet --kubeconfig=/snap/microk8
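
A hedged workaround sketch until this is addressed, using snapd's generic service handling rather than anything microk8s-specific:

sudo snap stop microk8s       # stop all of the snap's services
sudo snap start microk8s      # bring them back later
sudo snap disable microk8s    # or disable the snap entirely without uninstalling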

microk8s.enable --help seems off

It is now showing

Usage: microk8s.enable ADDON...
Enable one or more ADDON included with microk8s
Example: microk8s.enable dns storage

Available addons:

  crds
  dashboard
  dns
  gpu
  ingress
  istio
  istio-demo
  istio-demo-auth
  registry
  storage

This is a good opportunity to do the work of placing addons in folders.

Pod metrics are not available

After installing heapster with microk8s.enable dashboard, only the node metrics are available in Grafana and in the output of microk8s.kubectl top, like:

$ microk8s.kubectl top node
NAME       CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%
microk8s   89m          8%        1237Mi          42%

However the pod metrics are not available:

$ microk8s.kubectl top pod -n kube-system
W0825 05:49:24.131657   10654 top_pod.go:263] Metrics not available for pod kube-system/heapster-v1.5.2-577898ddbf-wzq4w, age: 4m8.131641237s
error: Metrics not available for pod kube-system/heapster-v1.5.2-577898ddbf-wzq4w, age: 4m8.131641237s

You also don't get the per-pod CPU and memory graphs in the dashboard.

I think I've tracked it down to the summary API having an empty "pods" array:

$ curl http://localhost:10255/stats/summary
{
  "node": {
...
   }
  },
  "pods": []
 }

I haven't been able to figure out why - I can't find anything of relevance in the kubelet service logs. This is reproducible on a fresh microk8s install. Any help would be appreciated, thanks!

No node created when installing microk8s

I installed microk8s, and unfortunately there were no Nodes inside the cluster, so no Pods could be started.

When running:

$ microk8s.kubectl get nodes
No resources found.

All Pods failed with the message: Warning FailedScheduling 6s (x37 over 10m) default-scheduler no nodes available to schedule pods

As per this article, I stopped Docker and reinstalled the snap, and started it again, but that did not solve the issue.

You can see the video of my attempt on Twitch for the next 14 days, so you can see all the steps and workarounds I took.

I'll add the YouTube version when it's uploaded, so it's here forever.


OS: Debian, Stretch

Docker Version:
Client:
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:10:06 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm

Server:
Engine:
Version: 18.03.0-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:08:35 2018
OS/Arch: linux/amd64
Experimental: false

Can't 'sudo snap remove microk8s'

Installed microk8s.
Installed kubeflow.
Rather than remove kubeflow, I decided to just remove microk8s. But it errors:

$ sudo snap remove microk8s
error: cannot perform the following tasks:
- Stop snap "microk8s" services ([start snap.microk8s.daemon-apiserver.service snap.microk8s.daemon-docker.service snap.microk8s.daemon-etcd.service snap.microk8s.daemon-proxy.service snap.microk8s.daemon-scheduler.service snap.microk8s.daemon-kubelet.service snap.microk8s.daemon-controller-manager.service] failed with exit status 5: Failed to start snap.microk8s.daemon-apiserver.service: Unit snap.microk8s.daemon-apiserver.service not found.
Failed to start snap.microk8s.daemon-docker.service: Unit snap.microk8s.daemon-docker.service not found.
Failed to start snap.microk8s.daemon-etcd.service: Unit snap.microk8s.daemon-etcd.service not found.
Failed to start snap.microk8s.daemon-proxy.service: Unit snap.microk8s.daemon-proxy.service not found.
Failed to start snap.microk8s.daemon-scheduler.service: Unit snap.microk8s.daemon-scheduler.service not found.
Failed to start snap.microk8s.daemon-kubelet.service: Unit snap.microk8s.daemon-kubelet.service not found.
Failed to start snap.microk8s.daemon-controller-manager.service: Unit snap.microk8s.daemon-controller-manager.service not found.
)

After that, still can't remove it.

$ snap list
Name              Version  Rev   Tracking  Developer         Notes
core              16-2.33  4830  stable    canonical         core
kubectl           1.10.3   405   stable    canonical         classic
microk8s          v1.10.3  55    beta      canonical         disabled,classic
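
Since snap list shows the snap as disabled, one hedged guess at a workaround (generic snap commands, untested against this exact failure) is to re-enable it so snapd can stop its services cleanly, and then remove it:

sudo snap enable microk8s
sudo snap remove microk8s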

Error: out of disk space when installing in an LXC container

Hi,

I get an error when trying to install microk8s in an LXC container with nesting and privileged security set to true. I follow these steps:

  1. After accessing the container, I install it: sudo snap install microk8s --edge --classic
  2. Enable the dns and dashboard addons: microk8s.enable dns dashboard
  3. List all pods: microk8s.kubectl get pods --all-namespaces
  4. Inspect the heapster pod: microk8s.kubectl describe pod heapster-v1.5.2-84f5c8795f-c8hld --namespace kube-system

At that moment I see this info:

Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  21m (x6 over 21m)  default-scheduler  0/1 nodes are available: 1 node(s) were not ready.
  Warning  FailedScheduling  21m                default-scheduler  0/1 nodes are available: 1 node(s) were not ready, 1 node(s) were out of disk space.

I have checked my ZFS storage and I do not have a problem with disk space.

Can anyone reproduce this with an LXC container? Any ideas?

Thank you very much
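
For anyone trying to reproduce this, a minimal sketch of how such a container can be created with LXD (container name is a placeholder; the config keys match the nesting/privileged settings described above):

lxc launch ubuntu:18.04 microk8s-test -c security.nesting=true -c security.privileged=true
lxc exec microk8s-test -- snap install microk8s --edge --classic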

microk8s needs a script to perform self introspection

Something like microk8s.status, which would check for common issues and outages and provide a clear resolution path when issues are found.

👌 microk8s is running and active
☑️ kubelet is running
  ☑️ has enough disk space
  ☑️ is not under pressure
☑️ api-server is active
☑️ etcd is active
☑️ docker is active

Failure scenarios may look like this

✋ microk8s is running with issues
🤔 kubelet is running
  ❗ needs more diskspace
  ❗ is under pressure
☑️ api-server is active
❎ etcd is not running
☑️ docker is active

Each service that we perform checks on would have one or more rules we use to ferret out common problems. This would grow with each snap revision as issues were discovered and resolved.
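
A minimal sketch of the kind of check this could start from, assuming the snap.microk8s.daemon-* systemd unit names seen elsewhere in this tracker (the service list is illustrative, not exhaustive):

# report which microk8s snap services are active
for svc in apiserver controller-manager scheduler kubelet proxy etcd docker; do
    if systemctl is-active --quiet "snap.microk8s.daemon-$svc"; then
        echo "OK   daemon-$svc"
    else
        echo "FAIL daemon-$svc"
    fi
done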

Cannot schedule PODs: NodeHasDiskPressure

I am trying microk8s on Ubuntu 18.04 and it cannot run any Pod; these are the statuses after each command:

after fresh install (no dns, no dashboard)

Name:               monotop
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=monotop
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Sat, 26 May 2018 11:09:11 -0600
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Sat, 26 May 2018 11:09:31 -0600   Sat, 26 May 2018 11:09:11 -0600   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Sat, 26 May 2018 11:09:31 -0600   Sat, 26 May 2018 11:09:11 -0600   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     True    Sat, 26 May 2018 11:09:31 -0600   Sat, 26 May 2018 11:09:21 -0600   KubeletHasDiskPressure       kubelet has disk pressure
  PIDPressure      False   Sat, 26 May 2018 11:09:31 -0600   Sat, 26 May 2018 11:09:11 -0600   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sat, 26 May 2018 11:09:31 -0600   Sat, 26 May 2018 11:09:11 -0600   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.36.100
  Hostname:    monotop
Capacity:
 cpu:                4
 ephemeral-storage:  575354004Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             8074920Ki
 pods:               110
Allocatable:
 cpu:                4
 ephemeral-storage:  530246249209
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             7972520Ki
 pods:               110
System Info:
 Machine ID:                 813e56ef1c4f171bda95b46b5448007c
 System UUID:                CA17CB86-CBF2-E054-A153-18E4E8C4154B
 Boot ID:                    d69b0b06-7267-40bd-80e8-b53992bf96c5
 Kernel Version:             4.15.0-12-generic
 OS Image:                   Ubuntu 18.04 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://1.13.1
 Kubelet Version:            v1.10.3
 Kube-Proxy Version:         v1.10.3
ExternalID:                  monotop
Non-terminated Pods:         (0 in total)
  Namespace                  Name    CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----    ------------  ----------  ---------------  -------------
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  0 (0%)        0 (0%)      0 (0%)           0 (0%)
Events:
  Type    Reason                   Age                From                 Message
  ----    ------                   ----               ----                 -------
  Normal  Starting                 33s                kube-proxy, monotop  Starting kube-proxy.
  Normal  Starting                 30s                kubelet, monotop     Starting kubelet.
  Normal  NodeHasSufficientPID     28s (x5 over 30s)  kubelet, monotop     Node monotop status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  28s                kubelet, monotop     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientDisk    27s (x6 over 30s)  kubelet, monotop     Node monotop status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  27s (x6 over 30s)  kubelet, monotop     Node monotop status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    27s (x6 over 30s)  kubelet, monotop     Node monotop status is now: NodeHasNoDiskPressure

after microk8s.enable dns, node status

Name:               monotop
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=monotop
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Sat, 26 May 2018 11:09:11 -0600
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Sat, 26 May 2018 11:12:30 -0600   Sat, 26 May 2018 11:09:11 -0600   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Sat, 26 May 2018 11:12:30 -0600   Sat, 26 May 2018 11:09:11 -0600   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     True    Sat, 26 May 2018 11:12:30 -0600   Sat, 26 May 2018 11:11:10 -0600   KubeletHasDiskPressure       kubelet has disk pressure
  PIDPressure      False   Sat, 26 May 2018 11:12:30 -0600   Sat, 26 May 2018 11:09:11 -0600   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sat, 26 May 2018 11:12:30 -0600   Sat, 26 May 2018 11:11:10 -0600   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  192.168.36.100
  Hostname:    monotop
Capacity:
 cpu:                4
 ephemeral-storage:  575354004Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             8074920Ki
 pods:               110
Allocatable:
 cpu:                4
 ephemeral-storage:  530246249209
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             7972520Ki
 pods:               110
System Info:
 Machine ID:                 813e56ef1c4f171bda95b46b5448007c
 System UUID:                CA17CB86-CBF2-E054-A153-18E4E8C4154B
 Boot ID:                    d69b0b06-7267-40bd-80e8-b53992bf96c5
 Kernel Version:             4.15.0-12-generic
 OS Image:                   Ubuntu 18.04 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://1.13.1
 Kubelet Version:            v1.10.3
 Kube-Proxy Version:         v1.10.3
ExternalID:                  monotop
Non-terminated Pods:         (0 in total)
  Namespace                  Name    CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----    ------------  ----------  ---------------  -------------
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  0 (0%)        0 (0%)      0 (0%)           0 (0%)
Events:
  Type     Reason                   Age              From                 Message
  ----     ------                   ----             ----                 -------
  Normal   Starting                 3m               kube-proxy, monotop  Starting kube-proxy.
  Normal   Starting                 3m               kubelet, monotop     Starting kubelet.
  Normal   NodeAllocatableEnforced  3m               kubelet, monotop     Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientPID     3m (x5 over 3m)  kubelet, monotop     Node monotop status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientDisk    3m (x6 over 3m)  kubelet, monotop     Node monotop status is now: NodeHasSufficientDisk
  Normal   NodeHasNoDiskPressure    3m (x6 over 3m)  kubelet, monotop     Node monotop status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientMemory  3m (x6 over 3m)  kubelet, monotop     Node monotop status is now: NodeHasSufficientMemory
  Normal   NodeNotReady             1m               kubelet, monotop     Node monotop status is now: NodeNotReady
  Normal   Starting                 1m               kubelet, monotop     Starting kubelet.
  Normal   NodeHasSufficientDisk    1m               kubelet, monotop     Node monotop status is now: NodeHasSufficientDisk
  Normal   NodeHasNoDiskPressure    1m (x2 over 1m)  kubelet, monotop     Node monotop status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     1m               kubelet, monotop     Node monotop status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  1m               kubelet, monotop     Node monotop status is now: NodeHasSufficientMemory
  Normal   NodeAllocatableEnforced  1m               kubelet, monotop     Updated Node Allocatable limit across pods
  Normal   NodeHasDiskPressure      1m               kubelet, monotop     Node monotop status is now: NodeHasDiskPressure
  Normal   NodeReady                1m               kubelet, monotop     Node monotop status is now: NodeReady
  Warning  EvictionThresholdMet     39s              kubelet, monotop     Attempting to reclaim imagefs
  Warning  EvictionThresholdMet     9s (x8 over 1m)  kubelet, monotop     Attempting to reclaim nodefs

As you can see, it reports DiskPressure, but there is around 30G free on my system. The Pod statuses are:

$ microk8s.kubectl get pods --all-namespaces
NAMESPACE     NAME                        READY     STATUS    RESTARTS   AGE
kube-system   kube-dns-598d7bf7d4-q26rl   0/3       Pending   0          2m

Any advice is appreciated.
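
If it helps anyone debugging this, a hedged sketch for checking and relaxing the kubelet's eviction thresholds (the args path is the usual snap location, --eviction-hard is a standard kubelet flag, and the values are placeholders):

grep eviction /var/snap/microk8s/current/args/kubelet
# e.g. append: --eviction-hard=memory.available<100Mi,nodefs.available<1Gi,imagefs.available<1Gi
sudo systemctl restart snap.microk8s.daemon-kubelet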

bind everything to localhost instead of all interfaces by default [security]

I suggest binding all Kubernetes components to localhost instead of all interfaces by default.

Having port 8080/tcp (kube-proxy -> kubernetes api server) exposed across all the interfaces completely compromises the system running this snap if the system does not have strict firewall rules.

Likely the same for 6443/tcp as the certificates are known, IIUC.

$ sudo netstat -tulpan |grep LISTEN |grep kube
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      30263/kubelet       
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      29203/kube-proxy    
tcp        0      0 127.0.0.1:42779         0.0.0.0:*               LISTEN      30263/kubelet       
tcp6       0      0 :::10250                :::*                    LISTEN      30263/kubelet       
tcp6       0      0 :::36555                :::*                    LISTEN      29203/kube-proxy    
tcp6       0      0 :::6443                 :::*                    LISTEN      29224/kube-apiserve 
tcp6       0      0 :::10251                :::*                    LISTEN      29196/kube-schedule 
tcp6       0      0 :::10252                :::*                    LISTEN      29205/kube-controll 
tcp6       0      0 :::10255                :::*                    LISTEN      30263/kubelet       
tcp6       0      0 :::8080                 :::*                    LISTEN      29224/kube-apiserve 
tcp6       0      0 :::10256                :::*                    LISTEN      29203/kube-proxy    
tcp6       0      0 :::46261                :::*                    LISTEN      29203/kube-proxy    
tcp6       0      0 :::30842                :::*                    LISTEN      29203/kube-proxy   

I am not sure if someone is planning to change this, but at the very least this deserves a security note in the main readme file, preferably before or right after the snap install microk8s instruction, so that users are aware of it.
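
As a sketch of the direction I'm suggesting (these are standard kube-apiserver flags of this era; whether MicroK8s exposes them through its args files is an assumption):

--insecure-bind-address=127.0.0.1    # bind the insecure port to loopback only
--bind-address=127.0.0.1             # and, where appropriate, the secure port as well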

Pods get 'Unauthorized' when talking to master

Just upgraded from 1.11.0 to 1.11.1 and getting the following in kube-dns pods:

E0730 20:47:05.692976       1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Unauthorized
E0730 20:47:05.695262       1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Unauthorized
I0730 20:47:05.929969       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
F0730 20:47:06.429965       1 dns.go:167] Timeout waiting for initialization

Other pods such as nginx-ingress-controller are failing as well.

It's a bit weird as I have --authorization-mode=AlwaysAllow on the kube-apiserver.

Feature Request: A way to disable k8s

It would be nice not to have to choose between re-installing microk8s every time you want to use it and leaving it running all the time. A way to disable it all with a single switch would be nice, even if it was just a single systemd thing.

networking in microk8s

This issue probably stems from my own confusion about the network that's created and being used by microk8s...

"kubectl get all" shows my services running on a 10.152.183.* network. But on my laptop, I don't see any bridge or interface associated with this subnet. What/where is this network?

On a related note, I've killed my wifi and put my laptop in airplane mode. I'd expect to continue to be able to hit my locally running kubernetes services without wired/wireless uplink. But I can't. What's going on here?
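
For what it's worth, 10.152.183.* is the cluster's service CIDR (the ClusterIP range visible in the kubectl get svc output elsewhere in this tracker). Service IPs are virtual: kube-proxy implements them as iptables NAT rules rather than as a real bridge or interface, which is why nothing shows up in ip addr. A hedged way to see them (the chain name is the one kube-proxy conventionally creates):

sudo iptables -t nat -L KUBE-SERVICES -n | head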

exposing service not working outside a vm

Using multipass to launch a VM.
With the VM launched, and access to its shell, I do the following (roughly):

sudo snap install microk8s --classic --beta
microk8s.enable dns dashboard
alias kubectl=microk8s.kubectl
kubectl run echoserver --image=gcr.io/google_containers/echoserver:1.4 --port=8080 --replicas=2
kubectl expose deployment echoserver --type=NodePort
kubectl describe services/echoserver     # get its assigned nodeport
lynx http://localhost:<nodeport>   ## this works

from outside the vm

  • chrome: http://<vm_ip_addr>:<nodeport>
  • this fails

But, back in the VM

sudo iptables -P FORWARD ACCEPT

Now I can access from outside VM - chrome: http://<vm_ip_addr>:<nodeport>

There's a related bug on GitHub - kubernetes/kubernetes#58908 - but this was closed, as it seemed that there was a networking configuration issue, or something like that.

connection to server refused

josh@ubuntu:~$ snap install microk8s --classic --beta
snap "microk8s" is already installed, see 'snap help refresh'

josh@ubuntu:~$ microk8s.kubetl get all
microk8s.kubetl: command not found

josh@ubuntu:~$ microk8s.enable dns dashboard
Applying DNS manifest
The connection to the server 127.0.0.1:8080 was refused - did you specify the right host or port?
The connection to the server 127.0.0.1:8080 was refused - did you specify the right host or port?

josh@ubuntu:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04 LTS
Release: 18.04
Codename: bionic

kubectl connection error

$ microk8s.kubectl get all
The connection to the server 127.0.0.1:8080 was refused - did you specify the right host or port?

$ snap list
Name Version Rev Tracking Developer Notes
microk8s v1.11.0 104 beta canonical classic

RBAC: cluster-admin not installed by default

I was facing an issue installing a chart with helm. The template contains some clusterrole and clusterrolebinding objects, and it was failing because tiller didn't have permissions:

$ helm install --name concourse stable/concourse
Error: release concourse failed: clusterroles.rbac.authorization.k8s.io "concourse-web" is forbidden: attempt to grant extra privileges: [{[get] [] [secrets] [] []}] user=&{system:serviceaccount:kube-system:tiller 2b72831d-94bb-11e8-9677-1866dae5f69c [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]

It turns out that the problem is that cluster-admin is actually not found:

$ kubectl get clusterrole cluster-admin
Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "cluster-admin" not found

However, in multiple places one can read something like "The cluster-admin ClusterRole exists by default in your Kubernetes cluster" (For example here and here)

After installing the cluster role

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: null
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: cluster-admin
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
- nonResourceURLs:
  - '*'
  verbs:
  - '*'

I can now install the helm template. I'm not sure if this is a known limitation in microk8s, but I'm writing it down in case somebody faces the same issue.
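
For completeness, a hedged sketch of applying the manifest above and, if tiller still lacks rights, binding its service account to the role (the file name is a placeholder; the clusterrolebinding command is standard kubectl):

kubectl apply -f cluster-admin.yaml
kubectl create clusterrolebinding tiller-admin --clusterrole=cluster-admin --serviceaccount=kube-system:tiller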

microk8s snap corrupts LXD bridge, breaking Juju local provider

It appears that installing the microk8s snap affects the LXD bridge. In particular, connections to units provisioned by the Juju local provider from their peers appear to come from the LXD bridge address, rather than the peer container's address (the egress address advertised on the peer relation). I think that this means NAT is happening between lxd containers.

Can someone other than me confirm, in case this odd problem is unique to my system?

sudo snap install microk8s --classic --beta
juju bootstrap --no-gui localhost lxd --config automatically-retry-hooks=false
juju deploy -n2 cs:postgresql

One of the two units will fail, as the secondary database is unable to connect to the primary due to IP address restrictions; the secondary will fail to clone the primary, and the IP address mentioned in the error will be the bridge IP address.

After uninstalling the microk8s snap, and rebooting to clear out the bridge it leaves behind, things work as expected.

Pods stuck in ContainerCreating status, Failed create pod sandbox

When running "microk8s.enable dns dashboard", the pods will stay in ContainerCreating status:

$ sudo snap install microk8s --beta --classic
microk8s (beta) v1.10.3 from 'canonical' installed

$ microk8s.kubectl get all 
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.152.183.1   <none>        443/TCP   25s

$ microk8s.enable dns dashboard
Applying DNS manifest
service "kube-dns" created
serviceaccount "kube-dns" created
configmap "kube-dns" created
deployment.extensions "kube-dns" created
Restarting kubelet
Done
deployment.extensions "kubernetes-dashboard" created
service "kubernetes-dashboard" created
service "monitoring-grafana" created
replicationcontroller "monitoring-influxdb-grafana-v4" created
service "monitoring-influxdb" created

$ microk8s.kubectl get all --all-namespaces
NAMESPACE     NAME                                        READY     STATUS              RESTARTS   AGE
kube-system   pod/kube-dns-598d7bf7d4-f8lbm               0/3       ContainerCreating   0          9s
kube-system   pod/kubernetes-dashboard-545868474d-ltkg8   0/1       Pending             0          4s
kube-system   pod/monitoring-influxdb-grafana-v4-5qxm6    0/2       Pending             0          4s

NAMESPACE     NAME                                                   DESIRED   CURRENT   READY     AGE
kube-system   replicationcontroller/monitoring-influxdb-grafana-v4   1         1         0         4s

NAMESPACE     NAME                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
default       service/kubernetes             ClusterIP   10.152.183.1     <none>        443/TCP             1m
kube-system   service/kube-dns               ClusterIP   10.152.183.10    <none>        53/UDP,53/TCP       9s
kube-system   service/kubernetes-dashboard   ClusterIP   10.152.183.204   <none>        80/TCP              4s
kube-system   service/monitoring-grafana     ClusterIP   10.152.183.115   <none>        80/TCP              4s
kube-system   service/monitoring-influxdb    ClusterIP   10.152.183.228   <none>        8083/TCP,8086/TCP   4s

NAMESPACE     NAME                                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/kube-dns               1         1         1            0           9s
kube-system   deployment.apps/kubernetes-dashboard   1         1         1            0           4s

NAMESPACE     NAME                                              DESIRED   CURRENT   READY     AGE
kube-system   replicaset.apps/kube-dns-598d7bf7d4               1         1         0         9s
kube-system   replicaset.apps/kubernetes-dashboard-545868474d   1         1         0         4s

The pods will stay in status ContainerCreating.

$ microk8s.kubectl describe pod/kubernetes-dashboard-545868474d-ltkg8 --namespace kube-system
Name:           kubernetes-dashboard-545868474d-ltkg8
Namespace:      kube-system
Node:           <hostname>/192.168.1.17
Start Time:     Tue, 12 Jun 2018 14:33:39 -0400
Labels:         k8s-app=kubernetes-dashboard
                pod-template-hash=1014240308
Annotations:    scheduler.alpha.kubernetes.io/critical-pod=
Status:         Pending
IP:             
Controlled By:  ReplicaSet/kubernetes-dashboard-545868474d
Containers:
  kubernetes-dashboard:
    Container ID:   
    Image:          gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.0
    Image ID:       
    Port:           9090/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:        100m
      memory:     50Mi
    Liveness:     http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vxq5n (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          False 
  PodScheduled   True 
Volumes:
  default-token-vxq5n:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-vxq5n
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
Events:
  Type     Reason                  Age                From                     Message
  ----     ------                  ----               ----                     -------
  Normal   Scheduled               13m                default-scheduler        Successfully assigned kubernetes-dashboard-545868474d-ltkg8 to <hostname>
  Normal   SuccessfulMountVolume   13m                kubelet, <hostname>  MountVolume.SetUp succeeded for volume "default-token-vxq5n"
  Warning  FailedCreatePodSandBox  13m                kubelet, <hostname>  Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin kubenet failed to set up pod "kubernetes-dashboard-545868474d-ltkg8_kube-system" network: Error adding container to network: failed to Statfs "/proc/6763/ns/net": permission denied
  Normal   SandboxChanged          3m (x40 over 13m)  kubelet, <hostname>  Pod sandbox changed, it will be killed and re-created.

Document rules needed if ufw enabled

As shown by #66 and #67, if the user has ufw enabled, some rules need to be added to allow traffic for the apiserver and dns (and maybe others).

Let's add the necessary rules to the readme.
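
A hedged starting point for those rules, assuming ufw plus a kubenet-style bridge (the bridge name cbr0 is an assumption; adjust it to whatever bridge microk8s actually creates on the host):

sudo ufw allow in on cbr0
sudo ufw allow out on cbr0
sudo ufw default allow routed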

Using hostPort

I'm a newbie to k8s and this got me up and running super quick, so thanks for that! I wasn't able to get hostPort working, and came across this comment:

https://github.com/ubuntu/microk8s/blob/11fe17a5c52055eca1959b65d48510eb488ecd3a/microk8s-resources/actions/ingress.yaml#L82

I can't use hostNetwork or nodePort for my particular use case. Is that comment still correct? Digging around, it seems like it can work in newer versions of Calico but requires a portmap plugin; I don't really know how to go about installing or enabling that.

Problem connecting to the apiserver and rules for ufw

Written in case it can help others (I've been trying to debug this problem for a couple of days). It can be closed (maybe minimal changes in documentation).

I've spun up a "cluster"

$ sudo snap install microk8s --edge --classic
microk8s (edge) v1.11.0 from 'canonical' installed

After a while, the journal for snap.microk8s.daemon-<...> looks OK (no strange messages at the end).

Then I enable dns and dashboard

$ microk8s.enable dns dashboard
Enabling DNS
Applying manifest
service/kube-dns created
serviceaccount/kube-dns created
configmap/kube-dns created
deployment.extensions/kube-dns created
Restarting kubelet
DNS is enabled
Enabling dashboard
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
service/monitoring-grafana created
service/monitoring-influxdb created
service/heapster created
deployment.extensions/monitoring-influxdb-grafana-v4 created
serviceaccount/heapster created
configmap/heapster-config created
configmap/eventer-config created
deployment.extensions/heapster-v1.5.2 created
dashboard enabled

However, both dns and dashboard fail to start

$ microk8s.kubectl get all --all-namespaces
NAMESPACE     NAME                                                  READY     STATUS             RESTARTS   AGE
kube-system   pod/heapster-v1.5.2-84f5c8795f-fvs7v                  4/4       Running            0          3m
kube-system   pod/kube-dns-864b8bdc77-jl2lb                         2/3       CrashLoopBackOff   3          3m
kube-system   pod/kubernetes-dashboard-6948bdb78-5k4z4              0/1       CrashLoopBackOff   3          3m
kube-system   pod/monitoring-influxdb-grafana-v4-7ffdc569b8-t42vc   2/2       Running            0          3m

NAMESPACE     NAME                           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
default       service/kubernetes             ClusterIP   10.152.183.1     <none>        443/TCP             10m
kube-system   service/heapster               ClusterIP   10.152.183.122   <none>        80/TCP              3m
kube-system   service/kube-dns               ClusterIP   10.152.183.10    <none>        53/UDP,53/TCP       3m
kube-system   service/kubernetes-dashboard   ClusterIP   10.152.183.49    <none>        443/TCP             3m
kube-system   service/monitoring-grafana     ClusterIP   10.152.183.142   <none>        80/TCP              3m
kube-system   service/monitoring-influxdb    ClusterIP   10.152.183.95    <none>        8083/TCP,8086/TCP   3m

NAMESPACE     NAME                                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/heapster-v1.5.2                  1         1         1            1           3m
kube-system   deployment.apps/kube-dns                         1         1         1            0           3m
kube-system   deployment.apps/kubernetes-dashboard             1         1         1            0           3m
kube-system   deployment.apps/monitoring-influxdb-grafana-v4   1         1         1            1           3m

NAMESPACE     NAME                                                        DESIRED   CURRENT   READY     AGE
kube-system   replicaset.apps/heapster-v1.5.2-84f5c8795f                  1         1         1         3m
kube-system   replicaset.apps/kube-dns-864b8bdc77                         1         1         0         3m
kube-system   replicaset.apps/kubernetes-dashboard-6948bdb78              1         1         0         3m
kube-system   replicaset.apps/monitoring-influxdb-grafana-v4-7ffdc569b8   1         1         1         3m

Looking at the logs

$ microk8s.kubectl logs pod/kube-dns-864b8bdc77-jl2lb sidecar -n kube-system
[...]
Waiting for services and endpoints to be initialized from apiserver... [multiple times]
[...]

Similarly

$ microk8s.kubectl logs -f kubernetes-dashboard-6948bdb78-hf5q7 --namespace kube-system
2018/07/11 16:39:28 Starting overwatch
2018/07/11 16:39:28 Using in-cluster config to connect to apiserver
2018/07/11 16:39:28 Using service account token for csrf signing
2018/07/11 16:39:28 No request provided. Skipping authorization
2018/07/11 16:39:58 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.152.183.1:443/version: dial tcp 10.152.183.1:443: i/o timeout
Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ

While preparing this issue, I came to realise that I have ufw enabled

$ sudo ufw disable

solves the problem.
Maybe add some reference at

https://github.com/juju-solutions/microk8s/blob/11fe17a5c52055eca1959b65d48510eb488ecd3a/README.md

or document the required ufw rules.
