thenewnormal / kube-cluster-osx
Local development multi node Kubernetes Cluster for macOS made very simple
License: Apache License 2.0
Hi there,
I recently reinstalled completely to be on the newest release. The NFS mount is no longer triggered, and I don't get the menu item that lets me trigger the NFS mount of /Users/myself on the VMs. This was a major feature for my use case of kube-cluster. Would it be possible to please fix/reinstate it?
Thanks!
Pablo
Hi,
Thank you for your great work. I tried installing deis v2 alpha with "install_deis" and I got the message : "Waiting for Deis PaaS to be ready... but first, coffee!"
It seems that in my case the node1 IP is 192.168.64.3, but the Deis workflow is running at 192.168.64.4, and "deis register http://deis.192.168.64.4.xip.io" correctly reaches the controller.
install_deis should be modified to wait for the workflow on either of the two nodes, not only the first one.
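A minimal sketch of such a change, assuming install_deis can probe each candidate IP (the helper and probe names below are illustrative, not from the actual script):

```shell
# Hypothetical helper: return the first node IP for which a probe succeeds.
# The probe command is injected, so the real script could pass e.g. a curl
# against http://deis.<ip>.xip.io/v2/ for each candidate.
first_ready() {
    probe="$1"; shift
    for ip in "$@"; do
        if "$probe" "$ip"; then
            echo "$ip"
            return 0
        fi
    done
    return 1
}

# Usage sketch: first_ready controller_up 192.168.64.3 192.168.64.4
```

install_deis would then wait on whichever IP first_ready reports instead of hard-coding node1.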
I've tried the latest version on a Mac running 10.10.5 (@0.2.9). The VMs are always OFF; it doesn't start any VMs. Does anyone know where I can find the logs about it?
Thanks
(Version 0.1.9; installed via brew cask install kube-cluster)
Creating 6GB Data disk for Node2 (it could take a while for big disks)...
6GiB 0:00:09 [ 640MiB/s] [==================================>] 100%
Created 6GB Data disk for Node2
Starting k8smaster-01 VM ...
> booting k8smaster-01
Error: open alpha: no such file or directory
Usage:
corectl load path/to/yourProfile [flags]
Examples:
corectl load profiles/demo.toml
Global Flags:
--debug adds extra verbosity, and options, for debugging purposes and/or power users
All flags can also be configured via upper-case environment variables prefixed with "COREOS_"
For example, "--debug" => "COREOS_DEBUG"
Master VM has not booted, please check '~/kube-cluster/logs/master_vm_up.log' and report the problem !!!
(Now that I have corectl.app running...) Running initial setup fails when creating the VMs. Each disk fails with logs like:
Creating 5GB sparse disk (QCow2) for Master ...
/Applications/Kube-Cluster.app/Contents/Resources/functions.sh: line 160: /usr/local/sbin/qcow-tool: No such file or directory
-
Created 5GB Data disk for Master
Obviously, I don't have /usr/local/sbin/qcow-tool (I checked), but I am not sure where it was supposed to come from. It's not a part of corectl.app, CoreOS.app, or Kube-Cluster.app, nor is it available via brew as a quick workaround. Did I miss a prerequisite?
With corectl at /usr/local/bin/corectl, Kube-Cluster.app didn't start, with the error that it couldn't find /usr/local/sbin/corectl. Symlinking corectl to /usr/local/sbin/corectl fixed the problem:
$ ln -s /usr/local/bin/corectl /usr/local/sbin/corectl
After getting what I think is a clean install of kube-cluster, I ran the install_deis command. At the very end, it failed while trying to register the admin user. It appears the hostnames it is trying to use aren't resolving properly. I believe this is because it is trying to resolve via my LAN DNS server instead of the k8s one:
Registering Deis Workflow user ...
http://deis.192.168.64.3.nip.io/v2/ does not appear to be a valid Deis controller.
Make sure that the Controller URI is correct, the server is running and
your client version is correct.
Error: Get http://deis.192.168.64.3.nip.io/v2/: dial tcp: lookup deis.192.168.64.3.nip.io on 192.168.1.1:53: no such host
I wasn't sure about the whole nip.io thing. I read the docs on the DNS addon that you link to in your readme ( https://github.com/kubernetes/kubernetes/blob/release-1.2/cluster/addons/dns/README.md ), and it talked about using the 10.100.0.10 DNS server. I followed the instructions on that page to run a simple busybox pod and tested out the names found there, but even with those, I still can't get deis to register. I'm assuming that is because the 10.100.0.10 address is only visible inside the VMs:
bash-3.2$ deis register
Usage: deis auth:register <controller> [options]
bash-3.2$ kubectl exec busybox -- nslookup deis-router.deis
Server: 10.100.0.10
Address 1: 10.100.0.10
Name: deis-router.deis
Address 1: 10.100.138.0
bash-3.2$ kubectl exec busybox -- nslookup deis-controller.deis
Server: 10.100.0.10
Address 1: 10.100.0.10
Name: deis-controller.deis
Address 1: 10.100.69.107
bash-3.2$ host deis-controller.deis
Host deis-controller.deis not found: 3(NXDOMAIN)
bash-3.2$ deis auth:register deis-controller.deis
http://deis-controller.deis/v2/ does not appear to be a valid Deis controller.
Make sure that the Controller URI is correct, the server is running and
your client version is correct.
Error: Get http://deis-controller.deis/v2/: dial tcp: lookup deis-controller.deis on **192.168.1.1:53**: no such host
bash-3.2$
Is this a botched configuration on my end or is there some sort of conflict with my LAN DNS?
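For context on "the whole nip.io thing": nip.io names carry the answer inside the hostname, so any honest resolver simply returns the embedded dotted quad (no cluster DNS involved). If your LAN resolver (192.168.1.1 above) answers NXDOMAIN for them (some routers deliberately filter answers that point at private addresses), that is the conflict. A small sketch, not part of kube-cluster, showing what the name should resolve to:

```shell
# Extract the IPv4 address embedded in a nip.io-style hostname; a correct
# resolver would return exactly this address for the name.
nip_ip() {
    echo "$1" | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}'
}

nip_ip deis.192.168.64.3.nip.io   # prints 192.168.64.3
# Compare against what a public resolver says, e.g.:
#   dig +short deis.192.168.64.3.nip.io @8.8.8.8
```

If the public resolver returns the embedded IP but your LAN DNS does not, the router's rebind protection is the likely culprit.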
Working with a clean install of kube-cluster 2.8.
After starting the cluster, I can run FleetUI, but when I try to run Kubernetes Dashboard, I see the following 500 error:
Internal Server Error (500)
Get https://10.100.0.1:443/api/v1/replicationcontrollers: x509: certificate signed by unknown authority
When performing "up" on the system, I see the following:
Starting k8smaster-01 VM ...
> booting k8smaster-01
[corectl] stable/835.11.0 already available on your system
[corectl] '/Users/sganyo' was already available to VMs via NFS
[corectl] started 'k8smaster-01' in background with IP 192.168.64.3 and PID 14695
Starting k8snode-01 VM ...
> booting k8snode-01
[corectl] stable/835.11.0 already available on your system
[corectl] '/Users/sganyo' was already available to VMs via NFS
[corectl] started 'k8snode-01' in background with IP 192.168.64.4 and PID 14723
Starting k8snode-02 VM ...
> booting k8snode-02
[corectl] stable/835.11.0 already available on your system
[corectl] '/Users/sganyo' was already available to VMs via NFS
[corectl] started 'k8snode-02' in background with IP 192.168.64.5 and PID 14741
Waiting for k8smaster-01 to be ready...
fleetctl list-machines:
MACHINE IP METADATA
012edea6... 192.168.64.3 role=control
Waiting for Kubernetes cluster to be ready. This can take a few minutes...
But then it waits forever. Ideas?
Fresh install using v0.1.9 and default values.
Status shows kube-apiserver failed with result 'dependency' without any details on what that might be.
Set CoreOS Release Channel:
1) Alpha
2) Beta
3) Stable (recommended)
Select an option: 3
Please type Nodes RAM size in GBs followed by [ENTER]:
[default is 2]:
Changing Nodes RAM to 2GB...
Creating 1GB Data disk for Master ...
1GiB 0:00:00 [ 1.3GiB/s] [=============================================================================================>] 100%
Created 1GB Data disk for Master
Please type Nodes Data disk size in GBs followed by [ENTER]:
[default is 10]:
Creating 10GB Data disk for Node1...
10GiB 0:00:08 [ 1.2GiB/s] [=============================================================================================>] 100%
Created 10GB Data disk for Node1
Creating 10GB Data disk for Node2...
10GiB 0:00:07 [ 1.3GiB/s] [=============================================================================================>] 100%
Created 10GB Data disk for Node2
Starting k8smaster-01 VM ...
> booting k8smaster-01
[corectl] downloading and verifying stable/835.13.0
28.93 MB / 28.93 MB [=============================================================================================================] 100.00 %
[corectl] SHA512 hash for coreos_production_pxe.vmlinuz OK
181.16 MB / 181.16 MB [===========================================================================================================] 100.00 %
[corectl] SHA512 hash for coreos_production_pxe_image.cpio.gz OK
[corectl] stable/835.13.0 ready
[corectl] '/Users/seren' was already available to VMs via NFS
[corectl] started 'k8smaster-01' in background with IP 192.168.64.2 and PID 76245
Starting k8snode-01 VM ...
> booting k8snode-01
[corectl] stable/835.13.0 already available on your system
[corectl] '/Users/seren' was already available to VMs via NFS
[corectl] started 'k8snode-01' in background with IP 192.168.64.3 and PID 76279
Starting k8snode-02 VM ...
> booting k8snode-02
[corectl] stable/835.13.0 already available on your system
[corectl] '/Users/seren' was already available to VMs via NFS
[corectl] started 'k8snode-02' in background with IP 192.168.64.4 and PID 76313
Installing Kubernetes files on to VMs...
Installing into k8smaster-01...
[corectl] uploading 'kube.tgz' to 'k8smaster-01:/home/core/kube.tgz'
41.46 MB / 41.46 MB [=============================================================================================================] 100.00 %
Done with k8smaster-01
Installing into k8snode-01...
[corectl] uploading 'kube.tgz' to 'k8snode-01:/home/core/kube.tgz'
41.46 MB / 41.46 MB [=============================================================================================================] 100.00 %
Done with k8snode-01
Installing into k8snode-02...
[corectl] uploading 'kube.tgz' to 'k8snode-02:/home/core/kube.tgz'
41.46 MB / 41.46 MB [=============================================================================================================] 100.00 %
Done with k8snode-02
fleetctl is up to date ...
Downloading latest version of helm for OS X
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 4760k 100 4760k 0 0 4585k 0 0:00:01 0:00:01 --:--:-- 6990k
Installed latest helm 0.3.1%2Bd4c0fa8 to ~/kube-cluster/bin ...
---> Checking repository charts
Already up-to-date.
---> Checking repository kube-charts
Already up-to-date.
---> Done
[ERROR] Remote kube-charts already exists, and is pointed to https://github.com/TheNewNormal/kube-charts
---> Checking repository charts
Already up-to-date.
---> Checking repository kube-charts
Already up-to-date.
---> Done
fleetctl list-machines:
MACHINE IP METADATA
a7a1af21... 192.168.64.3 role=node
e33df12e... 192.168.64.2 role=control
efdfb600... 192.168.64.4 role=node
Starting all fleet units in ~/kube-cluster/fleet:
Unit fleet-ui.service inactive
Unit fleet-ui.service launched on e33df12e.../192.168.64.2
Unit kube-apiserver.service inactive
Unit kube-apiserver.service launched on e33df12e.../192.168.64.2
Unit kube-controller-manager.service inactive
Unit kube-controller-manager.service launched on e33df12e.../192.168.64.2
Unit kube-scheduler.service inactive
Unit kube-scheduler.service launched on e33df12e.../192.168.64.2
Unit kube-apiproxy.service
Triggered global unit kube-apiproxy.service start
Unit kube-kubelet.service
Triggered global unit kube-kubelet.service start
Unit kube-proxy.service
Triggered global unit kube-proxy.service start
fleetctl list-units:
UNIT MACHINE ACTIVE SUB
fleet-ui.service e33df12e.../192.168.64.2 activating auto-restart
kube-apiserver.service e33df12e.../192.168.64.2 inactive dead
kube-controller-manager.service e33df12e.../192.168.64.2 inactive dead
kube-scheduler.service e33df12e.../192.168.64.2 inactive dead
Generate kubeconfig file ...
cluster "k8smaster-01" set.
context "default-context" set.
switched to context "default-context".
Waiting for Kubernetes cluster to be ready. This can take a few minutes...
bash-3.2$ fleetctl status kube-apiserver.service
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/run/fleet/units/kube-apiserver.service; linked-runtime; vendor preset: disabled)
Active: inactive (dead)
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Feb 19 09:45:14 k8smaster-01 systemd[1]: Dependency failed for Kubernetes API Server.
Feb 19 09:45:14 k8smaster-01 systemd[1]: kube-apiserver.service: Job kube-apiserver.service/start failed with result 'dependency'.
fleetctl list-machines:
MACHINE IP METADATA
a7a1af21... 192.168.64.3 role=node
e33df12e... 192.168.64.2 role=control
efdfb600... 192.168.64.4 role=node
fleetctl list-units:
UNIT MACHINE ACTIVE SUB
fleet-ui.service e33df12e.../192.168.64.2 activating auto-restart
kube-apiproxy.service a7a1af21.../192.168.64.3 active running
kube-apiproxy.service efdfb600.../192.168.64.4 active running
kube-apiserver.service e33df12e.../192.168.64.2 inactive dead
kube-controller-manager.service e33df12e.../192.168.64.2 inactive dead
kube-kubelet.service a7a1af21.../192.168.64.3 activating start-pre
kube-kubelet.service efdfb600.../192.168.64.4 activating start-pre
kube-proxy.service a7a1af21.../192.168.64.3 activating start-pre
kube-proxy.service efdfb600.../192.168.64.4 activating start-pre
kube-scheduler.service e33df12e.../192.168.64.2 inactive dead
I installed Kube-Cluster.app and corectl.app on my Mac (El Capitan).
But the initial setup of Kube-Cluster VMs cannot be completed because the following error occurs.
How can I solve it?
~$ /Applications/Kube-Cluster.app/Contents/Resources/first-init.command; exit;
Setting up Kubernetes Cluster for macOS
Reading ssh key from /Users/minoru/.ssh/id_rsa.pub
/Users/minoru/.ssh/id_rsa.pub found, updating configuration files ...
Set CoreOS Release Channel:
1) Alpha (may not always function properly)
2) Beta
3) Stable (recommended)
Select an option:
Set CoreOS Release Channel:
1) Alpha (may not always function properly)
2) Beta
3) Stable (recommended)
Select an option: 3
Please type Nodes RAM size in GBs followed by [ENTER]:
[default is 2]:
Changing Nodes RAM to 2GB...
Creating 5GB sparse disk (QCow2) for Master ...
dyld: Library not loaded: /usr/local/opt/libev/lib/libev.4.dylib
Referenced from: /Users/minoru/bin/qcow-tool
Reason: image not found
/Applications/Kube-Cluster.app/Contents/Resources/functions.sh: line 152: 7550 Trace/BPT trap: 5 ~/bin/qcow-tool create --size=5GiB master-data.img
-
Created 5GB Data disk for Master
Please type Nodes Data disk size in GBs followed by [ENTER]:
[default is 15]:
Creating 15GB sparse disk (QCow2) for Node1...
dyld: Library not loaded: /usr/local/opt/libev/lib/libev.4.dylib
Referenced from: /Users/minoru/bin/qcow-tool
Reason: image not found
/Applications/Kube-Cluster.app/Contents/Resources/functions.sh: line 152: 7552 Trace/BPT trap: 5 ~/bin/qcow-tool create --size=15GiB node-01-data.img
-
Created 15GB Data disk for Node1
Creating 15GB sparse disk (QCow2) for Node2...
dyld: Library not loaded: /usr/local/opt/libev/lib/libev.4.dylib
Referenced from: /Users/minoru/bin/qcow-tool
Reason: image not found
/Applications/Kube-Cluster.app/Contents/Resources/functions.sh: line 152: 7553 Trace/BPT trap: 5 ~/bin/qcow-tool create --size=15GiB node-02-data.img
-
Created 15GB Data disk for Node2
Starting k8smaster-01 VM ...
> booting k8smaster-01 (1/1)
---> downloading and verifying stable/1185.3.0
32.54 MB / 32.54 MB [====================================================================] 100.00 %
---> SHA512 hash for coreos_production_pxe.vmlinuz OK
215.79 MB / 215.79 MB [==================================================================] 100.00 %
---> SHA512 hash for coreos_production_pxe_image.cpio.gz OK
---> stable/1185.3.0 ready
[ERROR] stat master-data.img: no such file or directory
Master VM has not booted, please check '~/kube-cluster/logs/master_vm_up.log' and report the problem !!!
Press [Enter] key to continue...
Each attempt to initialize the cluster results in the below error on the master node:
Installing into k8smaster-01...
2016-07-20 22:37:16.686969 I | uploading 'kube.tgz' to 'k8smaster-01:/home/core/kube.tgz'
89.79 MB / 89.79 MB [===========================================================================================] 100.00 %
tar: ./kubelet: Wrote only 2048 of 10240 bytes
tar: Exiting with failure status due to previous errors
[ERROR] Process exited with status 2
Done with k8smaster-01
Upon checking the master node, the root file system is full:
core@k8smaster-01 ~ $ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 371M 0 371M 0% /dev
tmpfs 499M 0 499M 0% /dev/shm
tmpfs 499M 13M 486M 3% /run
tmpfs 499M 0 499M 0% /sys/fs/cgroup
tmpfs 499M 499M 0 100% /
/dev/loop0 226M 226M 0 100% /usr
tmpfs 499M 0 499M 0% /media
tmpfs 499M 0 499M 0% /tmp
tmpfs 100M 0 100M 0% /run/user/500
This only occurs on the master node; the worker nodes are fine. I can't seem to track down the difference, and was hoping someone had some thoughts as to the cause and a fix, please.
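The tar error above ("Wrote only 2048 of 10240 bytes") is exactly what extracting onto a full filesystem produces, and the df output confirms / is at 100%. A pre-flight check in the installer could fail fast instead of letting tar die half-way; a sketch (the ~200MB threshold is my guess at the unpacked kube.tgz size, not a measured value):

```shell
# Sketch: warn before untarring kube.tgz when / lacks room.
need_kb=$((200 * 1024))                        # assumed unpacked size, in KB
avail_kb=$(df -k / | awk 'NR==2 {print $4}')   # Avail column for /
if [ "$avail_kb" -lt "$need_kb" ]; then
    echo "only ${avail_kb}KB free on /, need ~${need_kb}KB for kube.tgz" >&2
fi
```

On the master shown above this would have reported 0KB free before any upload started.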
I read here coreos/bugs#1114 that CoreOS won't ship with "socat", which is needed for "kubectl port-forward", although it was included in the CoreOS beta release. This makes local testing a bit hard for apps like the new Kubernetes Helm/Tiller (client/server). Since Docker is being used and not rkt, doesn't it make sense to ship kube-cluster with socat? And is there any workaround?
I've tried stable, and alpha CoreOS channels without success, and have rebooted. Each attempt to install results in an infinite wait:
$ /Applications/Kube-Cluster.app/Contents/Resources/first-init.command; exit;
Setting up Kubernetes Cluster for OS X
Reading ssh key from /Users/cgwong/.ssh/id_rsa.pub
/Users/cgwong/.ssh/id_rsa.pub found, updating configuration files ...
Your Mac user's password will be saved in to 'Keychain'
and later one used for 'sudo' command to start VM !!!
This is not the password to access VMs via ssh or console !!!
Please type your Mac user's password followed by [ENTER]:
The sudo password is fine !!!
Set CoreOS Release Channel:
1) Alpha (may not always function properly)
2) Beta
3) Stable (recommended)
Select an option: 1
sed: can't read s/channel = "stable"/channel = "alpha"/g: No such file or directory
sed: can't read s/channel = "beta"/channel = "alpha"/g: No such file or directory
Please type Nodes RAM size in GBs followed by [ENTER]:
[default is 2]:
Changing Nodes RAM to 2GB...
sed: can't read s/\(memory = \)\(.*\)/\12048/g: No such file or directory
sed: can't read s/\(memory = \)\(.*\)/\12048/g: No such file or directory
Creating 1GB Data disk for Master ...
1GiB 0:00:01 [ 738MiB/s] [=======================================================================================>] 100%
Created 1GB Data disk for Master
Please type Nodes Data disk size in GBs followed by [ENTER]:
[default is 10]:
Creating 10GB Data disk for Node1...
10GiB 0:00:13 [ 739MiB/s] [=======================================================================================>] 100%
Created 10GB Data disk for Node1
Creating 10GB Data disk for Node2...
10GiB 0:00:13 [ 739MiB/s] [=======================================================================================>] 100%
Created 10GB Data disk for Node2
Starting k8smaster-01 VM ...
> booting k8smaster-01
[corectl] alpha/1068.0.0 already available on your system
[corectl] '/Users/cgwong' was already available to VMs via NFS
[corectl] started 'k8smaster-01' in background with IP 192.168.64.3 and PID 47560
sed: can't read s/_MASTER_IP_/192.168.64.3/: No such file or directory
sed: can't read s/_MASTER_IP_/192.168.64.3/: No such file or directory
Starting k8snode-01 VM ...
> booting k8snode-01
[corectl] alpha/1068.0.0 already available on your system
[corectl] '/Users/cgwong' was already available to VMs via NFS
[corectl] started 'k8snode-01' in background with IP 192.168.64.4 and PID 47673
Starting k8snode-02 VM ...
> booting k8snode-02
[corectl] alpha/1068.0.0 already available on your system
[corectl] '/Users/cgwong' was already available to VMs via NFS
[corectl] started 'k8snode-02' in background with IP 192.168.64.5 and PID 47813
Installing Kubernetes files on to VMs...
Installing into k8smaster-01...
[corectl] uploading 'kube.tgz' to 'k8smaster-01:/home/core/kube.tgz'
60.42 MB / 60.42 MB [=======================================================================================================] 100.00 %
Done with k8smaster-01
Installing into k8snode-01...
[corectl] uploading 'kube.tgz' to 'k8snode-01:/home/core/kube.tgz'
60.42 MB / 60.42 MB [=======================================================================================================] 100.00 %
Done with k8snode-01
Installing into k8snode-02...
[corectl] uploading 'kube.tgz' to 'k8snode-02:/home/core/kube.tgz'
60.42 MB / 60.42 MB [=======================================================================================================] 100.00 %
Done with k8snode-02
fleetctl is up to date ...
Downloading latest version of helmc for OS X
Installed latest helmc helmc/0.8.1%2Be4b3983 to ~/kube-cluster/bin ...
---> Checking repository charts
Already up-to-date.
---> Checking repository kube-charts
Already up-to-date.
---> Done
[ERROR] Remote kube-charts already exists, and is pointed to https://github.com/TheNewNormal/kube-charts
---> Checking repository charts
Already up-to-date.
---> Checking repository kube-charts
Already up-to-date.
---> Done
fleetctl list-machines:
MACHINE IP METADATA
209da0fb... 192.168.64.3 role=control
Starting all fleet units in ~/kube-cluster/fleet:
Unit fleet-ui.service inactive
Unit fleet-ui.service launched on 209da0fb.../192.168.64.3
Unit kube-apiserver.service inactive
Unit kube-apiserver.service launched on 209da0fb.../192.168.64.3
Unit kube-controller-manager.service inactive
Unit kube-controller-manager.service launched on 209da0fb.../192.168.64.3
Unit kube-scheduler.service inactive
Unit kube-scheduler.service launched on 209da0fb.../192.168.64.3
Unit kube-apiproxy.service
Triggered global unit kube-apiproxy.service start
Unit kube-kubelet.service
Triggered global unit kube-kubelet.service start
Unit kube-proxy.service
Triggered global unit kube-proxy.service start
fleetctl list-units:
UNIT MACHINE ACTIVE SUB
fleet-ui.service 209da0fb.../192.168.64.3 active running
kube-apiserver.service 209da0fb.../192.168.64.3 active running
kube-controller-manager.service 209da0fb.../192.168.64.3 active running
kube-scheduler.service 209da0fb.../192.168.64.3 active running
Generating kubeconfig file ...
cluster "k8smaster-01" set.
context "default-context" set.
switched to context "default-context".
Waiting for Kubernetes cluster to be ready. This can take a few minutes...
|
Any thoughts?
I had the same issue with dlite mentioned in TheNewNormal/kube-solo-osx#30. I renamed my /etc/exports and tried again. Now, when I try to bring the cluster up it just sits there. There is an error right at the beginning about running connection boilerplate: default, but it keeps running and then just hangs (still spinning though). Here is the whole output:
Last login: Mon May 16 17:14:21 on ttys011
~ > /Applications/Kube-Cluster.app/Contents/Resources/up.command; exit;
Starting k8smaster-01 VM ...
> booting k8smaster-01
[corectl] stable/899.17.0 already available on your system
[corectl] '/Users/gigi' was made available to VMs via NFS
[corectl] started 'k8smaster-01' in background with IP 192.168.64.3 and PID 4273
Starting k8snode-01 VM ...
> booting k8snode-01
[corectl] stable/899.17.0 already available on your system
[corectl] '/Users/gigi' was already available to VMs via NFS
[corectl] started 'k8snode-01' in background with IP 192.168.64.4 and PID 4300
Starting k8snode-02 VM ...
> booting k8snode-02
[corectl] stable/899.17.0 already available on your system
[corectl] '/Users/gigi' was already available to VMs via NFS
[corectl] started 'k8snode-02' in background with IP 192.168.64.5 and PID 4324
Generating kubeconfig file ...
cluster "k8smaster-01" set.
context "default-context" set.
switched to context "default-context".
Waiting for k8smaster-01 to be ready...
fleetctl list-machines:
MACHINE IP METADATA
085cc448... 192.168.64.5 role=node
3c1a4267... 192.168.64.3 role=control
b7f872c1... 192.168.64.4 role=node
Waiting for Kubernetes cluster to be ready. This can take a few minutes...
Sorry for my ignorance, but I am trying to use an ENV variable set by Kubernetes in a replication controller's container, and all the Kubernetes ENV variables come up empty:
bash-3.2$ kubectl exec curlrc-mgu06 -- printenv | grep API
API_PORT=tcp://10.100.63.201:8080
API_PORT_8080_TCP_PROTO=tcp
API_PORT_8080_TCP_ADDR=10.100.63.201
API_PORT_8080_TCP_PORT=8080
API_PORT_8080_TCP=tcp://10.100.63.201:8080
API_SERVICE_HOST=10.100.63.201
API_SERVICE_PORT=8080
bash-3.2$ kubectl exec curlrc-mgu06 -- echo $API_SERVICE_HOST
bash-3.2$
whereas echo $HOME works as expected.
What is the correct way to reference these?
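What's happening: your local shell expands $API_SERVICE_HOST before kubectl ever runs, and since that variable isn't set on your Mac, the container receives an empty argument ($HOME only "works" because it is set locally). The fix is to let a shell inside the container do the expansion. A purely local demonstration of the quoting difference (plain sh stands in for the container's shell here):

```shell
unset DEMO_VAR
# Double quotes: the OUTER shell expands $DEMO_VAR first (unset -> empty).
outer=$(sh -c "DEMO_VAR=remote; echo $DEMO_VAR")
# Single quotes: the INNER shell expands it, after setting it.
inner=$(sh -c 'DEMO_VAR=remote; echo $DEMO_VAR')
echo "outer=[$outer] inner=[$inner]"   # outer=[] inner=[remote]
```

So for the pod, single-quote the command and hand it to a shell in the container:
kubectl exec curlrc-mgu06 -- sh -c 'echo $API_SERVICE_HOST'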
I did the following:
At that point creation of the initial nodes was started, but failed:
Data disks do not exist, they will be created now ...
Creating 1GB Data disk for Master ...
1GiB 0:00:03 [ 310MiB/s] [================================>] 100%
Created 1GB Data disk for Master
Please type Nodes Data disk size in GBs followed by [ENTER]:
[default is 10]:
Creating 10GB Data disk for Node1...
10GiB 0:00:36 [ 283MiB/s] [================================>] 100%
Created 10GB Data disk for Node1
Creating 10GB Data disk for Node2...
10GiB 0:00:32 [ 317MiB/s] [================================>] 100%
Created 10GB Data disk for Node2
Starting k8smaster-01 VM ...
> booting k8smaster-01
Error: open alpha: no such file or directory
Usage:
corectl load path/to/yourProfile [flags]
Examples:
corectl load profiles/demo.toml
Global Flags:
--debug adds extra verbosity, and options, for debugging purposes and/or power users
All flags can also be configured via upper-case environment variables prefixed with "COREOS_"
For example, "--debug" => "COREOS_DEBUG"
Master VM has not booted, please check '~/kube-cluster/logs/master_vm_up.log' and report the problem !!!
The master_vm_up.log
file contains the following:
> booting k8smaster-01
Error: open alpha: no such file or directory
Usage:
corectl load path/to/yourProfile [flags]
Examples:
corectl load profiles/demo.toml
Global Flags:
--debug adds extra verbosity, and options, for debugging purposes and/or power users
All flags can also be configured via upper-case environment variables prefixed with "COREOS_"
For example, "--debug" => "COREOS_DEBUG"
This is a known issue; it should be fixed in the new Deis Workflow v2.2: deis/builder#382
OSX 10.11.5
Last login: Tue Jul 12 16:58:43 on ttys001
| ~ @ Antons-MBP-2 (necrogami)
| => /Applications/Kube-Cluster.app/Contents/Resources/up.command; exit;
Data disks do not exist, they will be created now ...
Creating 2GB Data disk for Master ...
2GiB 0:00:01 [1.49GiB/s] [=======================================================================================>] 100%
Created 2GB Data disk for Master
Please type Nodes Data disk size in GBs followed by [ENTER]:
[default is 10]:
Creating 10GB Data disk for Node1...
10GiB 0:00:06 [1.48GiB/s] [=======================================================================================>] 100%
Created 10GB Data disk for Node1
Creating 10GB Data disk for Node2...
10GiB 0:00:06 [1.48GiB/s] [=======================================================================================>] 100%
Created 10GB Data disk for Node2
Starting k8smaster-01 VM ...
[WARN] unable to run "defaults read /Library/Preferences/SystemConfiguration/com.apple.vmnet.plist" Shared_Net_Address ...
[WARN] ... assuming macOS default value (192.168.64.1)
[WARN] unable to run "defaults read /Library/Preferences/SystemConfiguration/com.apple.vmnet.plist" Shared_Net_Mask ...
[WARN] ... assuming macOS default value (255.255.255.0)
[ERROR] exit status 1
Master VM has not booted, please check '~/kube-cluster/logs/master_vm_up.log' and report the problem !!!
Press [Enter] key to continue...
I freshly installed the latest corectl and kube-cluster-osx today on OS X El Capitan 10.11.2 (15C50). When I click Setup->Initial setup of Kube-Cluster VMs, I get this error:
[ERROR] openpgp: invalid data: user ID packet not followed by self-signature
Full console log:
22:17:53-myusername~$ /Applications/Kube-Cluster.app/Contents/Resources/first-init.command; exit;
Setting up Kubernetes Cluster for macOS
Reading ssh key from /Users/myusername/.ssh/id_rsa.pub
/Users/myusername/.ssh/id_rsa.pub found, updating configuration files ...
Set CoreOS Release Channel:
1) Alpha (may not always function properly)
2) Beta
3) Stable (recommended)
Select an option: 3
Please type Nodes RAM size in GBs followed by [ENTER]:
[default is 2]:
Changing Nodes RAM to 2GB...
Creating 5GB sparse disk (QCow2) for Master ...
-
Created 5GB Data disk for Master
Please type Nodes Data disk size in GBs followed by [ENTER]:
[default is 15]:
Creating 15GB sparse disk (QCow2) for Node1...
-
Created 15GB Data disk for Node1
Creating 15GB sparse disk (QCow2) for Node2...
-
Created 15GB Data disk for Node2
Starting k8smaster-01 VM ...
> booting k8smaster-01 (1/1)
[WARN] /Users/myusername/.coreos/images/stable/1122.2.0/coreos_production_pxe.vmlinuz missing - stable/1122.2.0 ignored
---> downloading and verifying stable/1122.2.0
[ERROR] openpgp: invalid data: user ID packet not followed by self-signature
Master VM has not booted, please check '~/kube-cluster/logs/master_vm_up.log' and report the problem !!!
Press [Enter] key to continue...
The Initial Setup deploys an outdated SkyDNS, as well as an outdated Kubernetes Dashboard.
Even when the "Update to latest Stable Kubernetes Version" option is used, it still re-deploys old services. It looks like you've statically added in an old kube.tar and rc/svc YAMLs.
It would be better to run the command:
export KUBERNETES_PROVIDER=vagrant; curl -sS https://get.k8s.io | bash
vs using hard-coded files for setting up kubernetes.
Hi, I have corectl and kube-cluster installed (and iTerm). When I hit 'up' from the kube-cluster menu I get the following. Any help to get this working would be appreciated, thanks.
Kube-Cluster data disks do not exist, they will be created now ...
Created 5GB Data disk for Master
Please type Nodes Data disk size in GBs followed by [ENTER]:
[default is 15]:
We are using a private registry and, with your help, we are now able to access it, but it requires credentials. These credentials are normally put in /var/lib/kubelet/.dockercfg, and putting them there works fine. However, they are removed after a halt or reload. Is there any way to run a script on startup to put them there from a shared directory, or to use cloud-init?
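cloud-init is indeed the usual CoreOS answer for "run this at every boot". A hypothetical cloud-config fragment along those lines; the unit name and source path are mine, and it assumes the credentials file is reachable from inside the VM (for example via the NFS-mounted home directory):

```yaml
#cloud-config
coreos:
  units:
    - name: install-dockercfg.service
      command: start
      content: |
        [Unit]
        Description=Install private registry credentials for kubelet
        [Service]
        Type=oneshot
        ExecStart=/usr/bin/mkdir -p /var/lib/kubelet
        ExecStart=/usr/bin/cp /Users/you/registry/.dockercfg /var/lib/kubelet/.dockercfg
```

Because the unit runs at boot, the file is recreated after every halt or reload instead of being lost with the rest of the ephemeral filesystem.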
I would like to find out if this is possible using the xhyve setup and, if so, what the steps would be to enable it.
My use case is to have the local cluster available for dev, and to be able to connect to a test cluster over a flannel network which has other containers running that the local resources can talk to.
If this is not possible, any guidance as to how to achieve this on macOS (i.e. access a flannel network) would also be much appreciated.
Hi
Are you going to update fleetctl to the new version?
kube-cluster-osx/src/bin/fleetctl is 0.11.5; the latest version is fleet v0.11.7.
Thanks.
I have dnsmasq running on my laptop for development purposes, and this conflicts with skydns, which also wants to use port 53. I'd suggest two things: 1) add detection that warns the user when something is already running on port 53 and will conflict with skydns, and 2) later, develop a solution for cascading DNS services.
I'm getting the following error when attempting to create a pod in the environment:
bash-3.2$ kubectl get po
NAME READY STATUS RESTARTS AGE
ingester-hgdcr 1/2 PullImageError 1 29m
The image in question is just a public image being pulled from dockerhub:
containers:
- name: nginx
image: scottganyo/nginx-sandwich-nginx
I've deployed the same file in other Kubernetes environments without issue... is there something different you're doing here than the standard setup?
Related: Is there a way I can just use local docker images in this K8s environment instead of going through docker hub?
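On the last question: the kubelet will use an image already present in a node's Docker daemon if the pull policy allows it; you just have to get the image onto each node yourself first (for example, docker save piped into docker load over ssh to the VM). A sketch of the pod spec change:

```yaml
containers:
  - name: nginx
    image: scottganyo/nginx-sandwich-nginx
    # IfNotPresent skips the registry when the image is already on the node;
    # Never forbids pulling entirely (the image must be pre-loaded).
    imagePullPolicy: IfNotPresent
```

With Never, a missing local image fails the pod immediately instead of falling back to Docker Hub, which makes it a good fit for purely local development images.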
I've tried this multiple times now, via destroying and re-upping. I get through creating the nodes, but it hangs at waiting for the etcd service to be ready:
Last login: Wed Jul 13 10:34:46 on ttys007
| ~ @ aswartz (necrogami)
| => /Applications/Kube-Cluster.app/Contents/Resources/up.command; exit;
Data disks do not exist, they will be created now ...
Creating 2GB Data disk for Master ...
2GiB 0:00:01 [1.45GiB/s] [================================>] 100%
Created 2GB Data disk for Master
Please type Nodes Data disk size in GBs followed by [ENTER]:
[default is 10]:
Creating 10GB Data disk for Node1...
10GiB 0:00:06 [1.44GiB/s] [================================>] 100%
Created 10GB Data disk for Node1
Creating 10GB Data disk for Node2...
10GiB 0:00:06 [1.45GiB/s] [================================>] 100%
Created 10GB Data disk for Node2
Starting k8smaster-01 VM ...
booting k8smaster-01 (1/1)
---> 'k8smaster-01' started successfuly with address 192.168.64.2 and PID 35507
---> 'k8smaster-01' boot logs can be found at '/Users/necrogami/.coreos/running/7E79BC00-7E3C-4B19-9CDF-83F99FD73A23/log'
---> 'k8smaster-01' console can be found at '/Users/necrogami/.coreos/running/7E79BC00-7E3C-4B19-9CDF-83F99FD73A23/tty'
Starting k8snode-01 VM ...
booting k8snode-01 (1/1)
---> 'k8snode-01' started successfuly with address 192.168.64.3 and PID 35680
---> 'k8snode-01' boot logs can be found at '/Users/necrogami/.coreos/running/31D7ADDF-96D7-428A-A17F-14CB902CCD05/log'
---> 'k8snode-01' console can be found at '/Users/necrogami/.coreos/running/31D7ADDF-96D7-428A-A17F-14CB902CCD05/tty'
Starting k8snode-02 VM ...
booting k8snode-02 (1/1)
---> 'k8snode-02' started successfuly with address 192.168.64.4 and PID 35748
---> 'k8snode-02' boot logs can be found at '/Users/necrogami/.coreos/running/0ECA856A-ACE8-4EA2-A906-1B53BB071C69/log'
---> 'k8snode-02' console can be found at '/Users/necrogami/.coreos/running/0ECA856A-ACE8-4EA2-A906-1B53BB071C69/tty'
Waiting for etcd service to be ready on k8smaster-01 VM...
There are a few errors in the log, but everything looks pretty good until it reaches "Waiting for Kubernetes cluster to be ready. This can take a few minutes..."; half an hour later, it's still spinning and waiting.
[~]$ /Applications/Kube-Cluster.app/Contents/Resources/first-init.command; exit;
Setting up Kubernetes Cluster for OS X
Reading ssh key from /Users/markharris/.ssh/id_rsa.pub
/Users/markharris/.ssh/id_rsa.pub found, updating configuration files ...
Your Mac user's password will be saved in to 'Keychain'
and later one used for 'sudo' command to start VM !!!
This is not the password to access VMs via ssh or console !!!
Please type your Mac user's password followed by [ENTER]:
The sudo password is fine !!!
Usage: add-generic-password [-a account] [-s service] [-w password] [options...] [-A|-T appPath] [keychain]
-a Specify account name (required)
-c Specify item creator (optional four-character code)
-C Specify item type (optional four-character code)
-D Specify kind (default is "application password")
-G Specify generic attribute (optional)
-j Specify comment string (optional)
-l Specify label (if omitted, service name is used as default label)
-s Specify service name (required)
-p Specify password to be added (legacy option, equivalent to -w)
-w Specify password to be added
-A Allow any application to access this item without warning (insecure, not recommended!)
-T Specify an application which may access this item (multiple -T options are allowed)
-U Update item if it already exists (if omitted, the item cannot already exist)
By default, the application which creates an item is trusted to access its data without warning.
You can remove this default access by explicitly specifying an empty app pathname: -T ""
If no keychain is specified, the password is added to the default keychain.
Add a generic password item.
Set CoreOS Release Channel:
1) Alpha (may not always function properly)
2) Beta
3) Stable (recommended)
Select an option: 3
Please type Nodes RAM size in GBs followed by [ENTER]:
[default is 2]: 2
Changing Nodes RAM to 2GB...
Creating 1GB Data disk for Master ...
1GiB 0:00:00 [1.09GiB/s] [=====================================================>] 100%
Created 1GB Data disk for Master
Please type Nodes Data disk size in GBs followed by [ENTER]:
[default is 10]:
Creating 10GB Data disk for Node1...
10GiB 0:00:08 [1.15GiB/s] [=====================================================>] 100%
Created 10GB Data disk for Node1
Creating 10GB Data disk for Node2...
10GiB 0:00:09 [ 1.1GiB/s] [=====================================================>] 100%
Created 10GB Data disk for Node2
security: SecKeychainSearchCopyNext: The specified item could not be found in the keychain.
security: SecKeychainSearchCopyNext: The specified item could not be found in the keychain.
Starting k8smaster-01 VM ...
Password:
> booting k8smaster-01
[corectl] stable/835.13.0 already available on your system
[corectl] NFS started in order for '/Users/markharris' to be made available to the VMs
[corectl] started 'k8smaster-01' in background with IP 192.168.64.2 and PID 48275
Error: 'k8smaster-01' not found, or dead
Usage:
corectl query [VMids] [flags]
Aliases:
query, q
Flags:
-a, --all display extended information about a running CoreOS instance
-i, --ip displays given instance IP address
-j, --json outputs in JSON for easy 3rd party integration
Global Flags:
--debug adds extra verbosity, and options, for debugging purposes and/or power users
All flags can also be configured via upper-case environment variables prefixed with "COREOS_"
For example, "--debug" => "COREOS_DEBUG"
Error: 'k8smaster-01' not found, or dead
(corectl query usage output repeated)
Starting k8snode-01 VM ...
> booting k8snode-01
[corectl] stable/835.13.0 already available on your system
[corectl] NFS started in order for '/Users/markharris' to be made available to the VMs
[corectl] started 'k8snode-01' in background with IP 192.168.64.3 and PID 48528
Error: 'k8snode-01' not found, or dead
(corectl query usage output repeated)
Error: 'k8snode-01' not found, or dead
(corectl query usage output repeated)
Starting k8snode-02 VM ...
> booting k8snode-02
[corectl] stable/835.13.0 already available on your system
[corectl] NFS started in order for '/Users/markharris' to be made available to the VMs
[corectl] started 'k8snode-02' in background with IP 192.168.64.4 and PID 48555
Error: 'k8snode-02' not found, or dead
(corectl query usage output repeated)
Error: 'k8snode-02' not found, or dead
(corectl query usage output repeated)
Error: 'k8smaster-01' not found, or dead
(corectl query usage output repeated)
Installing Kubernetes files on to VMs...
Installing into k8smaster-01...
Error: 'k8smaster-01' not found, or dead
Usage:
corectl put path/to/file VMid:/file/path/on/destination [flags]
Aliases:
put, copy, cp, scp
Examples:
// copies 'filePath' into '/destinationPath' inside VMid
corectl put filePath VMid:/destinationPath
Global Flags:
--debug adds extra verbosity, and options, for debugging purposes and/or power users
All flags can also be configured via upper-case environment variables prefixed with "COREOS_"
For example, "--debug" => "COREOS_DEBUG"
Error: 'k8smaster-01' not found, or dead
Usage:
corectl ssh VMid ["command1;..."] [flags]
Aliases:
ssh, attach
Examples:
corectl ssh VMid // logins into VMid
corectl ssh VMid "some commands" // runs 'some commands' inside VMid and exits
Global Flags:
--debug adds extra verbosity, and options, for debugging purposes and/or power users
All flags can also be configured via upper-case environment variables prefixed with "COREOS_"
For example, "--debug" => "COREOS_DEBUG"
Error: 'k8smaster-01' not found, or dead
(corectl ssh usage output repeated)
Done with k8smaster-01
Installing into k8snode-01...
Error: 'k8snode-01' not found, or dead
(corectl put usage output repeated)
Error: 'k8snode-01' not found, or dead
(corectl ssh usage output repeated)
Error: 'k8snode-01' not found, or dead
(corectl ssh usage output repeated)
Done with k8snode-01
Installing into k8snode-02...
Error: 'k8snode-02' not found, or dead
(corectl put usage output repeated)
Error: 'k8snode-02' not found, or dead
(corectl ssh usage output repeated)
Error: 'k8snode-02' not found, or dead
(corectl ssh usage output repeated)
Done with k8snode-02
Error: 'k8smaster-01' not found, or dead
(corectl ssh usage output repeated)
fleetctl is up to date ...
Downloading latest version of helm for OS X
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 4764k 100 4764k 0 0 1657k 0 0:00:02 0:00:02 --:--:-- 2267k
Installed latest helm 0.5.0%2B1689ee4 to ~/kube-cluster/bin ...
---> Creating /Users/markharris/.helm/config.yaml
---> Checking repository charts
---> Cloning into '/Users/markharris/.helm/cache/charts'...
Already up-to-date.
---> Done
---> Cloning into '/Users/markharris/.helm/cache/kube-charts'...
---> Hooray! Successfully added the repo.
---> Checking repository charts
Already up-to-date.
---> Checking repository kube-charts
Already up-to-date.
---> Done
fleetctl list-machines:
2016/03/10 16:52:54 ERROR fleetctl.go:216: error attempting to check latest fleet version in Registry: dial tcp :2379: connection refused
Error retrieving list of active machines: dial tcp :2379: connection refused
Starting all fleet units in ~/kube-cluster/fleet:
2016/03/10 16:52:54 ERROR fleetctl.go:216: error attempting to check latest fleet version in Registry: dial tcp :2379: connection refused
Error creating units: error retrieving Unit(fleet-ui.service) from Registry: dial tcp :2379: connection refused
2016/03/10 16:52:54 ERROR fleetctl.go:216: error attempting to check latest fleet version in Registry: dial tcp :2379: connection refused
Error creating units: error retrieving Unit(kube-apiserver.service) from Registry: dial tcp :2379: connection refused
2016/03/10 16:52:54 ERROR fleetctl.go:216: error attempting to check latest fleet version in Registry: dial tcp :2379: connection refused
Error creating units: error retrieving Unit(kube-controller-manager.service) from Registry: dial tcp :2379: connection refused
2016/03/10 16:52:54 ERROR fleetctl.go:216: error attempting to check latest fleet version in Registry: dial tcp :2379: connection refused
Error creating units: error retrieving Unit(kube-scheduler.service) from Registry: dial tcp :2379: connection refused
2016/03/10 16:52:54 ERROR fleetctl.go:216: error attempting to check latest fleet version in Registry: dial tcp :2379: connection refused
Error creating units: error retrieving Unit(kube-apiproxy.service) from Registry: dial tcp :2379: connection refused
2016/03/10 16:52:54 ERROR fleetctl.go:216: error attempting to check latest fleet version in Registry: dial tcp :2379: connection refused
Error creating units: error retrieving Unit(kube-kubelet.service) from Registry: dial tcp :2379: connection refused
2016/03/10 16:52:54 ERROR fleetctl.go:216: error attempting to check latest fleet version in Registry: dial tcp :2379: connection refused
Error creating units: error retrieving Unit(kube-proxy.service) from Registry: dial tcp :2379: connection refused
fleetctl list-units:
2016/03/10 16:52:54 ERROR fleetctl.go:216: error attempting to check latest fleet version in Registry: dial tcp :2379: connection refused
Error retrieving list of units from repository: dial tcp :2379: connection refused
Generate kubeconfig file ...
cluster "k8smaster-01" set.
context "default-context" set.
switched to context "default-context".
Waiting for Kubernetes cluster to be ready. This can take a few minutes...
Each of the three logs in ~/kube-cluster/logs look good, for example, here's master_vm_up.log:
> booting k8smaster-01
[corectl] stable/835.13.0 already available on your system
[corectl] NFS started in order for '/Users/markharris' to be made available to the VMs
[corectl] started 'k8smaster-01' in background with IP 192.168.64.2 and PID 48275
Master VM successfully started !!!
If I try to, e.g., ssh to k8smaster-01 or run "Preset OS Shell", a notification pops up telling me "VMs are OFF!".
Any help figuring out what's going on would be really appreciated.
% cat ~/kube-cluster/logs/first-init_master_vm_up.log
> booting k8smaster-01
[corectl] alpha/926.0.0 already available on your system
[corectl] '/Users/adamreese' was already available to VMs via NFS
[corectl] signal: abort trap
Error: VM exited with error while attempting to start in background
Usage:
corectl load path/to/yourProfile [flags]
Examples:
corectl load profiles/demo.toml
Global Flags:
--debug adds extra verbosity, and options, for debugging purposes and/or power users
All flags can also be configured via upper-case environment variables prefixed with "COREOS_"
For example, "--debug" => "COREOS_DEBUG"
I am having trouble getting the cluster to work with our VPN on. When I turn the VPN off everything works fine, but I need the VPN to access a private registry which is behind the VPN connection.
The process is stuck at: Waiting for Kubernetes cluster to be ready. This can take a few minutes...
We added 192.168.0.0/16 to the list of routes that should NOT use the VPN, but I probably need to add some more routes to get it working.
Ping, ssh and traceroute seem to return sane output:
$ traceroute 192.168.64.5
traceroute to 192.168.64.5 (192.168.64.5), 64 hops max, 52 byte packets
1 192.168.64.5 (192.168.64.5) 0.430 ms 0.326 ms 0.371 ms
$ ping 192.168.64.5
PING 192.168.64.5 (192.168.64.5): 56 data bytes
64 bytes from 192.168.64.5: icmp_seq=0 ttl=64 time=0.255 ms
64 bytes from 192.168.64.5: icmp_seq=1 ttl=64 time=0.345 ms
^C
--- 192.168.64.5 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.255/0.300/0.345/0.045 ms
$ ssh 192.168.64.5
The authenticity of host '192.168.64.5 (192.168.64.5)' can't be established.
ED25519 key fingerprint is SHA256:/AJqX7ZOEvwGB8nsrDqxF8myOmstKVczRMUq26IJ6sA.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.64.5' (ED25519) to the list of known hosts.
[email protected]'s password:
curl-ing the api-server fails:
$ curl http://192.168.64.4:8080
curl: (7) Failed to connect to 192.168.64.4 port 8080: Connection refused
When I run the 'up' command without the VPN everything works fine, but the nodes are not able to reach the private registry when I connect the VPN.
I realize this is not an issue with the project, but I'd really appreciate some help/hints with this.
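One thing that sometimes helps with VPN clients that grab the default route is pinning a static route for the VM subnet to the host-side bridge, so that traffic never enters the tunnel. This is only a sketch: the interface name below is an assumption; check ifconfig for the bridge xhyve actually created (often bridge100).

```shell
# Derive the /24 subnet from any VM address, e.g. 192.168.64.5 -> 192.168.64.0/24.
vm_subnet() { echo "${1%.*}.0/24"; }

echo "route target: $(vm_subnet 192.168.64.5)"
# Then, with admin rights (interface name is an assumption):
#   sudo route -n add -net "$(vm_subnet 192.168.64.5)" -interface bridge100
```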
Hello,
I have a concern about the security of a feature of this application that I would like to discuss privately. Could you please provide a suitable email address?
Craig
I updated to macOS Sierra yesterday (in hindsight, I should've waited) and just fixed all of the included binaries to run correctly on Sierra. After a few hours wrestling with k8s the cluster is now up on macOS Sierra and Go 1.7.1 (compiled from source).
To make someone else's life easier, I could fork the repo and provide the updated binaries if you want. The included fleetctl, etcdctl, and kubectl need to be recompiled. Also, I noticed that fleetctl is included twice in the app; is that intended? I found the binaries in /Applications/Kube-Cluster.app/Contents/Resources/bin/ as well as an additional fleetctl in /Applications/Kube-Cluster.app/Contents/Resources/files/.
Please advise, and thanks for your hard work!
After using "Change Kubernetes version" to switch to 1.6.2, restarting Kube-Cluster produces the following error:
Starting Kubernetes fleet units ...
####################################################################
WARNING: fleetctl (0.11.7) is older than the latest registered
version of fleet found in the cluster (0.11.8). You are strongly
recommended to upgrade fleetctl to prevent incompatibility issues.
####################################################################
Any suggestions?
I would like to use a private registry that I have running in a docker-machine, exposed to my local machine.
What is the proper way to expose the host to the nodes?
I cannot reach it by the IP mentioned at login or by the docker-machine IP.
Last login: Fri Jan 29 15:50:33 on ttys002
$ /Applications/Kube-Cluster.app/Contents/Resources/ssh_node1.command; exit;
Last login: Fri Jan 29 14:48:25 2016 from 192.168.64.1
CoreOS stable (835.11.0)
Update Strategy: No Reboots
$ curl http://192.168.64.1:5000/v2
curl: (7) Failed to connect to 192.168.64.1 port 5000: Connection refused
$ curl http://192.168.99.100/:5000/v2
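A likely explanation: the xhyve VMs see the Mac host as 192.168.64.1, but the registry lives inside docker-machine's separate VirtualBox network (192.168.99.x), so nothing answers on the host's port 5000. One hedged workaround, assuming socat is installed and the usual docker-machine IP, is to relay the port on the Mac and point the nodes at the host gateway (note the registry v2 endpoint is /v2/):

```shell
# Assumption: docker-machine VM is at 192.168.99.100 and the registry
# listens on 5000. A host-side relay would make it reachable from the VMs
# via the 192.168.64.1 gateway:
#   socat TCP-LISTEN:5000,fork,reuseaddr TCP:192.168.99.100:5000 &
registry_url() { echo "http://$1:5000/v2/"; }
echo "nodes should use: $(registry_url 192.168.64.1)"
```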
(Using 0.2.0 installed by download from github releases & drag to applications folder, but renamed Kube-Cluster 0.2.0.app)
$ /Applications/Kube-Cluster\ 0.2.0.app/Contents/Resources/first-init.command; exit;
Setting up Kubernetes Cluster for OS X
Reading ssh key from /Users/dmankin/.ssh/id_rsa.pub
/Users/dmankin/.ssh/id_rsa.pub found, updating configuration files ...
Your Mac user's password will be saved in to 'Keychain'
and later one used for 'sudo' command to start VM !!!
This is not the password to access VMs via ssh or console !!!
Please type your Mac user's password followed by [ENTER]:
The sudo password is fine !!!
Set CoreOS Release Channel:
1) Alpha (may not always function properly)
2) Beta
3) Stable (recommended)
Select an option: 3
Please type Nodes RAM size in GBs followed by [ENTER]:
[default is 2]:
Changing Nodes RAM to 2GB...
Creating 1GB Data disk for Master ...
1GiB 0:00:01 [ 718MiB/s] [===============================================================================>] 100%
Created 1GB Data disk for Master
Please type Nodes Data disk size in GBs followed by [ENTER]:
[default is 10]: 6
Creating 6GB Data disk for Node1 (it could take a while for big disks)...
6GiB 0:00:09 [ 669MiB/s] [===============================================================================>] 100%
Created 6GB Data disk for Node1
Creating 6GB Data disk for Node2 (it could take a while for big disks)...
6GiB 0:00:08 [ 711MiB/s] [===============================================================================>] 100%
Created 6GB Data disk for Node2
Starting k8smaster-01 VM ...
> booting k8smaster-01
[corectl] downloading and verifying stable/835.13.0
28.93 MB / 28.93 MB [===============================================================================================] 100.00 %
[corectl] SHA512 hash for coreos_production_pxe.vmlinuz OK
181.16 MB / 181.16 MB [=============================================================================================] 100.00 %
[corectl] SHA512 hash for coreos_production_pxe_image.cpio.gz OK
[corectl] stable/835.13.0 ready
Error: unable to validate /etc/exports ('[]')
Usage:
corectl load path/to/yourProfile [flags]
Examples:
corectl load profiles/demo.toml
Global Flags:
--debug adds extra verbosity, and options, for debugging purposes and/or power users
All flags can also be configured via upper-case environment variables prefixed with "COREOS_"
For example, "--debug" => "COREOS_DEBUG"
Master VM has not booted, please check '~/kube-cluster/logs/master_vm_up.log' and report the problem !!!
Press [Enter] key to continue...
During my first start up, a console window opened and the install seemed to be proceeding fine. Then I received a never-ending stream of these error messages:
error: stat /Users/mattbutcher/kube-cluster/kube/kubeconfig: no such file or directory
(the same line repeats endlessly, each occurrence prefixed with a spinner character: \, |, /, -)
I seem to be stuck now.
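The repeating error above is a wait loop polling for a kubeconfig that was never generated, usually because the master VM did not come up. A small hedged check, with the path taken from the log:

```shell
# Report whether the kubeconfig the wait loop is polling for exists yet.
kubeconfig_state() {
  [ -f "$1" ] && echo present || echo missing
}
kubeconfig_state "$HOME/kube-cluster/kube/kubeconfig"
```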
Hi,
I tried to install KUBE-cluster-osx multiple times. Every time, I ran into the same problem.
All three nodes were launched, with IPs 192.168.64.2/3/4 respectively. I could see the processes in the terminal and I could ping the IPs without any problem.
[corectl] started 'k8smaster-01' in background with IP 192.168.64.2 and PID 7565
[corectl] started 'k8snode-01' in background with IP 192.168.64.3 and PID 7637
[corectl] started 'k8snode-02' in background with IP 192.168.64.4 and PID 7661
However, when fleetctl tried to talk to the etcd process (I believe on the k8smaster-01 node), for some unknown reason the host in the "dial" field is blank. The installation just hangs afterwards.
fleetctl list-machines:
2016/03/22 23:27:19 ERROR fleetctl.go:216: error attempting to check latest fleet version in Registry: dial tcp :2379: connection refused
Error retrieving list of active machines: dial tcp :2379: connection refused
Starting all fleet units in ~/kube-cluster/fleet:
2016/03/22 23:27:19 ERROR fleetctl.go:216: error attempting to check latest fleet version in Registry: dial tcp :2379: connection refused
Error creating units: error retrieving Unit(fleet-ui.service) from Registry: dial tcp :2379: connection refused
2016/03/22 23:27:19 ERROR fleetctl.go:216: error attempting to check latest fleet version in Registry: dial tcp :2379: connection refused
Error creating units: error retrieving Unit(kube-apiserver.service) from Registry: dial tcp :2379: connection refused
2016/03/22 23:27:19 ERROR fleetctl.go:216: error attempting to check latest fleet version in Registry: dial tcp :2379: connection refused
Error creating units: error retrieving Unit(kube-controller-manager.service) from Registry: dial tcp :2379: connection refused
2016/03/22 23:27:19 ERROR fleetctl.go:216: error attempting to check latest fleet version in Registry: dial tcp :2379: connection refused
Error creating units: error retrieving Unit(kube-scheduler.service) from Registry: dial tcp :2379: connection refused
2016/03/22 23:27:19 ERROR fleetctl.go:216: error attempting to check latest fleet version in Registry: dial tcp :2379: connection refused
Error creating units: error retrieving Unit(kube-apiproxy.service) from Registry: dial tcp :2379: connection refused
2016/03/22 23:27:19 ERROR fleetctl.go:216: error attempting to check latest fleet version in Registry: dial tcp :2379: connection refused
Error creating units: error retrieving Unit(kube-kubelet.service) from Registry: dial tcp :2379: connection refused
2016/03/22 23:27:19 ERROR fleetctl.go:216: error attempting to check latest fleet version in Registry: dial tcp :2379: connection refused
Error creating units: error retrieving Unit(kube-proxy.service) from Registry: dial tcp :2379: connection refused
Waiting for Kubernetes cluster to be ready. This can take a few minutes...
I tried both "stable" and "beta" release. The result is the same.
How can I perform further troubleshooting?
Thanks!
Shixiong
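The blank host in "dial tcp :2379" means fleetctl is dialing an empty endpoint. fleetctl accepts an explicit endpoint via its --endpoint flag (or the FLEETCTL_ENDPOINT environment variable), so one hedged troubleshooting step is to point it at the master directly, using the master IP from the log:

```shell
# Build the etcd client endpoint for a given master IP, then try fleetctl
# against it explicitly:
#   fleetctl --endpoint "http://192.168.64.2:2379" list-machines
endpoint_for() { echo "http://$1:2379"; }
endpoint_for 192.168.64.2
```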
Hardcode /usr/bin/sed as the sed binary to be used.
When I try to install the app by copying to /Applications I get this error:
This file can’t be copied because there is a problem with the file.
This is from a fresh download of 0.4.3 and 0.4.2. I've also tried building the app myself, but I get an Info.plist error.
Hi,
After successfully experimenting last weekend with kube-cluster (and kube-solo), I decided to demo Kubernetes at work. Unfortunately, both implementations complained about not being able to connect to the internet. I tried many possible ways of connecting (ethernet, wifi, VPN), but nothing helped.
Back at home, using my own wifi connection, everything seems to work again!
Is it true that kube-cluster and kube-solo only work with the internet connection used the first time? Can I configure this somewhere? Firewall rules?
Kind regards,
Peter
Getting the below when attempting initial setup:
$ /Applications/Kube-Cluster.app/Contents/Resources/first-init.command; exit;
Setting up Kubernetes Cluster for macOS
Reading ssh key from /Users/cgwong/.ssh/id_rsa.pub
/Users/cgwong/.ssh/id_rsa.pub found, updating configuration files ...
Set CoreOS Release Channel:
1) Alpha (may not always function properly)
2) Beta
3) Stable (recommended)
Select an option: 3
Please type Nodes RAM size in GBs followed by [ENTER]:
[default is 2]:
Changing Nodes RAM to 2GB...
Creating 5GB sparse disk (QCow2) for Master ...
Created 5GB Data disk for Master
Please type Nodes Data disk size in GBs followed by [ENTER]:
[default is 15]:
Creating 15GB sparse disk (QCow2) for Node1...
Created 15GB Data disk for Node1
Creating 15GB sparse disk (QCow2) for Node2...
Created 15GB Data disk for Node2
Starting k8smaster-01 VM ...
> booting k8smaster-01 (1/1)
---> 'k8smaster-01' started successfuly with address 192.168.64.3 and PID 77647
---> 'k8smaster-01' boot logs can be found at '/Users/cgwong/.coreos/running/7E79BC00-7E3C-4B19-9CDF-83F99FD73A23/log'
---> 'k8smaster-01' console can be found at '/Users/cgwong/.coreos/running/7E79BC00-7E3C-4B19-9CDF-83F99FD73A23/tty'
Starting k8snode-01 VM ...
> booting k8snode-01 (1/1)
[ERROR] Unable to grab VM's IP after 30s (!)... Aborted
Node1 VM has not booted, please check '~/kube-cluster/logs/node1_vm_up.log' and report the problem !!!
Press [Enter] key to continue...
Checked the master boot log:
tail -f ~/.coreos/running/7E79BC00-7E3C-4B19-9CDF-83F99FD73A23/log
SSH host key: SHA256:dVhd86dKSBRW/DYPIGwyOZasoQ7BZRt7rrfvKkR+tWc (RSA)
eth0: 192.168.64.3 fe80::dc1d:85ff:fe1f:cf32
k8smaster-01 login: core (automatic login)
CoreOS stable (1068.8.0)
Update Strategy: No Reboots
Failed Units: 1
coreos-setup-environment.service
core@k8smaster-01 ~ $
Corectl: 0.2.4
Kube-Cluster: 0.4.9
Let me know if you need anything else; the log indicated above has the same content as what's on screen. Any suggestions?
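Since the console reports exactly one failed unit, the standard systemd commands, run inside the VM, should show why coreos-setup-environment failed. A sketch (requires the VM console or ssh access):

```shell
unit="coreos-setup-environment.service"
# Inside the VM (requires systemd):
#   systemctl status "$unit"
#   journalctl -u "$unit" --no-pager | tail -n 20
echo "inspect: $unit"
```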