OS Version: CentOS Linux release 7.4.1708 (Core)
K8s Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Docker Version: docker-ce 17.09.0
#######
After installing all the Kubernetes 1.9 components on my server, I finished the necessary preparation steps: turning off swap, enabling the kubelet service, making the cgroup driver setting of Docker and the kubelet match, and so on (sketched below).
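For reference, a minimal sketch of those preparation steps on CentOS 7 (the kubelet drop-in path assumes a standard kubeadm RPM install; adjust to your setup):

swapoff -a                                    # disable swap immediately
sed -i '/ swap / s/^/#/' /etc/fstab           # keep swap disabled across reboots
systemctl enable kubelet                      # start the kubelet on boot
docker info | grep -i 'cgroup driver'         # e.g. "Cgroup Driver: cgroupfs"
# The kubelet's --cgroup-driver flag must match the value above; on a kubeadm RPM
# install it is set in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
systemctl daemon-reload && systemctl restart kubelet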
I then tried to initialize the master by running:
kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.199.211
The command produced the following output:
[init] Using Kubernetes version: v1.9.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.09.0-ce. Max validated version: 17.03
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing sa key.
[certificates] Using the existing front-proxy-ca certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Using existing up-to-date KubeConfig file: "admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 19.501140 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8s.master as master by adding a label and a taint
[markmaster] Master k8s.master tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: c2d65f.2c885de26823599e
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token c2d65f.2c885de26823599e 192.168.199.211:6443 --discovery-token-ca-cert-hash sha256:6fa47ddd6b6e05f01520314bee1c64b47d721d6da27e148de843fc60d103cffc
#######
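The output above says to deploy a pod network next. Since I passed --pod-network-cidr=10.244.0.0/16, which is flannel's default CIDR, the add-on to apply would presumably be flannel, along the lines of:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

(The manifest URL is an assumption, taken from the commonly used coreos/flannel location; pick the manifest that matches the cluster version.)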
Then I ran kubectl get po -n kube-system and got this result:
NAME                                 READY     STATUS              RESTARTS   AGE
etcd-k8s.master                      1/1       Running             0          28m
kube-apiserver-k8s.master            1/1       Running             0          28m
kube-controller-manager-k8s.master   1/1       Running             0          28m
kube-dns-6f4fd4bdf-ljwrd             0/3       ContainerCreating   0          29m
kube-proxy-t5lwg                     0/1       CrashLoopBackOff    2          29m
kube-scheduler-k8s.master            1/1       Running             0          28m
Both DNS and the proxy are broken. Checking the proxy logs with kubectl logs -f kube-proxy-t5lwg -n kube-system gives this message:
I0207 04:54:28.263076 1 feature_gate.go:184] feature gates: map[]
error: unable to read certificate-authority /var/run/secrets/kubernetes.io/serviceaccount/ca.crt for default due to open /var/run/secrets/kubernetes.io/serviceaccount/ca.crt: no such file or directory
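That error means the service-account token was never mounted into the kube-proxy pod at /var/run/secrets/kubernetes.io/serviceaccount. As a diagnostic sketch (not a fix), one could first confirm the token secret itself exists; kube-proxy is the service account kubeadm normally creates for the proxy:

kubectl get sa kube-proxy -n kube-system -o yaml      # should reference a token secret
kubectl get secrets -n kube-system | grep kube-proxy  # the token secret should be listed

If the secret exists, the failure is in mounting it on the node rather than in the token itself.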
Then I described the DNS pod with kubectl describe pod kube-dns-6f4fd4bdf-ljwrd -n kube-system and got this:
Name:           kube-dns-6f4fd4bdf-ljwrd
Namespace:      kube-system
Node:           k8s.master/192.168.199.211
Start Time:     Wed, 07 Feb 2018 12:52:51 +0800
Labels:         k8s-app=kube-dns
                pod-template-hash=290980689
Annotations:
Status:         Pending
IP:
Controlled By:  ReplicaSet/kube-dns-6f4fd4bdf
Containers:
  kubedns:
    Container ID:
    Image:           gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
    Image ID:
    Ports:           10053/UDP, 10053/TCP, 10055/TCP
    Args:
      --domain=cluster.local.
      --dns-port=10053
      --config-dir=/kube-dns-config
      --v=2
    State:           Waiting
      Reason:        ContainerCreating
    Ready:           False
    Restart Count:   0
    Limits:
      memory:  170Mi
    Requests:
      cpu:     100m
      memory:  70Mi
    Liveness:        http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:       http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
    Environment:
      PROMETHEUS_PORT:  10055
    Mounts:
      /kube-dns-config from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-rkst9 (ro)
  dnsmasq:
    Container ID:
    Image:           gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
    Image ID:
    Ports:           53/UDP, 53/TCP
    Args:
      -v=2
      -logtostderr
      -configDir=/etc/k8s/dns/dnsmasq-nanny
      -restartDnsmasq=true
      --
      -k
      --cache-size=1000
      --no-negcache
      --log-facility=-
      --server=/cluster.local/127.0.0.1#10053
      --server=/in-addr.arpa/127.0.0.1#10053
      --server=/ip6.arpa/127.0.0.1#10053
    State:           Waiting
      Reason:        ContainerCreating
    Ready:           False
    Restart Count:   0
    Requests:
      cpu:     150m
      memory:  20Mi
    Liveness:        http-get http://:10054/healthcheck/dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:
    Mounts:
      /etc/k8s/dns/dnsmasq-nanny from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-rkst9 (ro)
  sidecar:
    Container ID:
    Image:           gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
    Image ID:
    Port:            10054/TCP
    Args:
      --v=2
      --logtostderr
      --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
      --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
    State:           Waiting
      Reason:        ContainerCreating
    Ready:           False
    Restart Count:   0
    Requests:
      cpu:     10m
      memory:  20Mi
    Liveness:        http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-rkst9 (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  kube-dns-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-dns
    Optional:  true
  kube-dns-token-rkst9:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-dns-token-rkst9
    Optional:    false
QoS Class:       Burstable
Node-Selectors:
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age                From                 Message
  Normal   Scheduled               7m                 default-scheduler    Successfully assigned kube-dns-6f4fd4bdf-ljwrd to k8s.master
  Normal   SuccessfulMountVolume   7m                 kubelet, k8s.master  MountVolume.SetUp succeeded for volume "kube-dns-config"
  Normal   SuccessfulMountVolume   7m                 kubelet, k8s.master  MountVolume.SetUp succeeded for volume "kube-dns-token-rkst9"
  Warning  FailedCreatePodSandBox  7m (x12 over 7m)   kubelet, k8s.master  Failed create pod sandbox.
  Normal   SandboxChanged          2m (x290 over 7m)  kubelet, k8s.master  Pod sandbox changed, it will be killed and re-created.
#############
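To get the underlying reason for the FailedCreatePodSandBox warning, the kubelet log on the node should show the actual error; for example:

journalctl -u kubelet --no-pager | grep -i sandbox   # the kubelet's reason for the sandbox failure
docker ps -a | grep -E 'pause|kube-dns'              # check whether pause containers are created and then killed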
I don't know where the initialization process went wrong. Any suggestions would be appreciated. Thank you.