projectatomic / adb-utils
A set of utilities for managing services provided in the Atomic Developer Bundle.
License: GNU General Public License v2.0
OPTIONS="-r #{PUBLIC_HOST} -host #{PUBLIC_ADDRESS}"
PUBLIC_ADDRESS is really an IP, but the parameter is -host. And on the other hand, PUBLIC_HOST is passed to a -r option. This seems confusing.
Something like this would probably be better:
OPTIONS="-host #{PUBLIC_HOST} -ip #{PUBLIC_IP}"
This would help address the issue that an application developer should be able to set up OpenShift in CDK/ADB and run a template without connecting to the internet.
Currently the registry is exposed as hub.rhel-cdk.10.1.2.2.xip.io (see services/openshift/scripts/openshift_provision). Naming it hub.openshift.10.1.2.2.xip.io would make clearer what this exposed route/registry is.
Testing the CDK v2 Beta 4 build on Windows with the rhel-ose Vagrantfile: OpenShift is consuming lots of CPU and logging continuously while idle.
No actual oc commands have been run. I just started up the vagrant box and waited for it to quiesce. I let it run for an extended period of time to make sure it wasn't just startup activity.
The docker daemon is running but has very low CPU usage (< 5%). OpenShift is running at around 40% inside the Vagrant box according to top. docker stats, however, says the openshift container is using 0% CPU.
Test Environment: Windows 7/64 bit. VirtualBox, Cygwin.
Log messages are visible on the RHEL host with journalctl. This block of messages is repeated roughly every 10 seconds:
Jan 28 16:59:55 localhost.localdomain docker[1171]: time="2016-01-28T16:59:55.075872560-05:00" level=info msg="GET /version"
Jan 28 16:59:59 localhost.localdomain docker[1171]: time="2016-01-28T16:59:59.176319846-05:00" level=info msg="GET /version"
Jan 28 17:00:00 localhost.localdomain docker[1171]: time="2016-01-28T17:00:00.179259156-05:00" level=info msg="GET /version"
Jan 28 17:00:00 localhost.localdomain docker[1171]: time="2016-01-28T17:00:00.802765068-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:00 localhost.localdomain docker[1171]: time="2016-01-28T17:00:00.839968013-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:00 localhost.localdomain docker[1171]: time="2016-01-28T17:00:00.991650616-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.055773418-05:00" level=info msg="GET /containers/json?all=1"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.060927739-05:00" level=info msg="GET /containers/bbf2286717c3f2dbc88d1c36faa2900a2968b0d48f0b6c08723c37f76c1a8663/json"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.064328705-05:00" level=info msg="GET /containers/json?all=1"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.065548738-05:00" level=info msg="GET /containers/cd41ed52635b18a9ca0394e41d1aa4554ed5a4e9f1e32428b4ba0a0553020122/json"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.067715155-05:00" level=info msg="GET /containers/9429a8e8ac82b39d9d077bdcdaa185d8f5aa4376e80e4b7c6531b65d156a94e0/json"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.070379807-05:00" level=info msg="GET /containers/0d8f726aa18bd335cd18e31ea68cc28cd72d5eb8894e48d307cfaaf9165f7648/json"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.070821688-05:00" level=info msg="GET /containers/69f2f4475928bbc5606df6717f2eb2fd6ea6ab224754db808eea785bfb452755/json"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.117298514-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.119719263-05:00" level=info msg="GET /containers/69f2f4475928bbc5606df6717f2eb2fd6ea6ab224754db808eea785bfb452755/json"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.129658054-05:00" level=info msg="GET /containers/json?all=1"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.141560702-05:00" level=info msg="GET /containers/bbf2286717c3f2dbc88d1c36faa2900a2968b0d48f0b6c08723c37f76c1a8663/json"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.142161709-05:00" level=info msg="GET /containers/0d8f726aa18bd335cd18e31ea68cc28cd72d5eb8894e48d307cfaaf9165f7648/json"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.144853797-05:00" level=info msg="GET /containers/json?all=1"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.150078023-05:00" level=info msg="GET /containers/9429a8e8ac82b39d9d077bdcdaa185d8f5aa4376e80e4b7c6531b65d156a94e0/json"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.150679163-05:00" level=info msg="GET /containers/cd41ed52635b18a9ca0394e41d1aa4554ed5a4e9f1e32428b4ba0a0553020122/json"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.155718909-05:00" level=info msg="GET /containers/0d8f726aa18bd335cd18e31ea68cc28cd72d5eb8894e48d307cfaaf9165f7648/json"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.156188710-05:00" level=info msg="GET /containers/69f2f4475928bbc5606df6717f2eb2fd6ea6ab224754db808eea785bfb452755/json"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.231848928-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.341555664-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.449176246-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.561188899-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.672235028-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.783471199-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:01 localhost.localdomain docker[1171]: time="2016-01-28T17:00:01.935331616-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:02 localhost.localdomain docker[1171]: time="2016-01-28T17:00:02.054839788-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:02 localhost.localdomain docker[1171]: time="2016-01-28T17:00:02.164359472-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:02 localhost.localdomain docker[1171]: time="2016-01-28T17:00:02.269183092-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:02 localhost.localdomain docker[1171]: time="2016-01-28T17:00:02.398596626-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:02 localhost.localdomain docker[1171]: time="2016-01-28T17:00:02.584494063-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:02 localhost.localdomain docker[1171]: time="2016-01-28T17:00:02.689632571-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:02 localhost.localdomain docker[1171]: time="2016-01-28T17:00:02.798564792-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:02 localhost.localdomain docker[1171]: time="2016-01-28T17:00:02.912972810-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:05 localhost.localdomain docker[1171]: time="2016-01-28T17:00:05.191618278-05:00" level=info msg="GET /version"
Jan 28 17:00:09 localhost.localdomain docker[1171]: time="2016-01-28T17:00:09.242204525-05:00" level=info msg="GET /version"
Jan 28 17:00:10 localhost.localdomain docker[1171]: time="2016-01-28T17:00:10.202793514-05:00" level=info msg="GET /version"
Jan 28 17:00:10 localhost.localdomain docker[1171]: time="2016-01-28T17:00:10.805228317-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:10 localhost.localdomain docker[1171]: time="2016-01-28T17:00:10.885071782-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:10.997267424-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.117632861-05:00" level=info msg="GET /containers/json?all=1"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.124385943-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.128865813-05:00" level=info msg="GET /containers/bbf2286717c3f2dbc88d1c36faa2900a2968b0d48f0b6c08723c37f76c1a8663/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.138254973-05:00" level=info msg="GET /containers/cd41ed52635b18a9ca0394e41d1aa4554ed5a4e9f1e32428b4ba0a0553020122/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.144795546-05:00" level=info msg="GET /containers/69f2f4475928bbc5606df6717f2eb2fd6ea6ab224754db808eea785bfb452755/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.182006488-05:00" level=info msg="GET /containers/json?all=1"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.186998500-05:00" level=info msg="GET /containers/69f2f4475928bbc5606df6717f2eb2fd6ea6ab224754db808eea785bfb452755/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.187607727-05:00" level=info msg="GET /containers/9429a8e8ac82b39d9d077bdcdaa185d8f5aa4376e80e4b7c6531b65d156a94e0/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.191831320-05:00" level=info msg="GET /containers/0d8f726aa18bd335cd18e31ea68cc28cd72d5eb8894e48d307cfaaf9165f7648/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.195752836-05:00" level=info msg="GET /containers/json?all=1"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.205331985-05:00" level=info msg="GET /containers/bbf2286717c3f2dbc88d1c36faa2900a2968b0d48f0b6c08723c37f76c1a8663/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.209223580-05:00" level=info msg="GET /containers/cd41ed52635b18a9ca0394e41d1aa4554ed5a4e9f1e32428b4ba0a0553020122/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.209855193-05:00" level=info msg="GET /containers/0d8f726aa18bd335cd18e31ea68cc28cd72d5eb8894e48d307cfaaf9165f7648/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.214677151-05:00" level=info msg="GET /containers/69f2f4475928bbc5606df6717f2eb2fd6ea6ab224754db808eea785bfb452755/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.215050480-05:00" level=info msg="GET /containers/json?all=1"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.219800762-05:00" level=info msg="GET /containers/9429a8e8ac82b39d9d077bdcdaa185d8f5aa4376e80e4b7c6531b65d156a94e0/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.223444120-05:00" level=info msg="GET /containers/0d8f726aa18bd335cd18e31ea68cc28cd72d5eb8894e48d307cfaaf9165f7648/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.231932902-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.338820838-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.452009089-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.560872695-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.669613620-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.708016048-05:00" level=info msg="GET /containers/json?all=1"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.711173333-05:00" level=info msg="GET /containers/a98bdc039c09be6836b5d37332a0f132a0fb8e5be291b09ee72c103ff97c21a8/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.713492768-05:00" level=info msg="GET /containers/5fb8bde3ab6fac1a9b64f641900e30f2c1f1f54b933e805a8f56d07c62d41f9c/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.715679337-05:00" level=info msg="GET /containers/bbf2286717c3f2dbc88d1c36faa2900a2968b0d48f0b6c08723c37f76c1a8663/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.717612619-05:00" level=info msg="GET /containers/0d8f726aa18bd335cd18e31ea68cc28cd72d5eb8894e48d307cfaaf9165f7648/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.718807364-05:00" level=info msg="GET /containers/8f780c77a9ffe84b3d3ab02274e46d522c87bad17b776bfb854698b59a183a09/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.722435720-05:00" level=info msg="GET /containers/4773a35ece65a19a8be2739b9712fbf6f5bc83cc6df62dbed02d8a29343c3153/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.724204610-05:00" level=info msg="GET /containers/cd41ed52635b18a9ca0394e41d1aa4554ed5a4e9f1e32428b4ba0a0553020122/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.726388667-05:00" level=info msg="GET /containers/69f2f4475928bbc5606df6717f2eb2fd6ea6ab224754db808eea785bfb452755/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.728190493-05:00" level=info msg="GET /containers/9429a8e8ac82b39d9d077bdcdaa185d8f5aa4376e80e4b7c6531b65d156a94e0/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.782638250-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:11 localhost.localdomain docker[1171]: time="2016-01-28T17:00:11.890911857-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:12 localhost.localdomain docker[1171]: time="2016-01-28T17:00:12.110874601-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:12 localhost.localdomain docker[1171]: time="2016-01-28T17:00:12.213582574-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:12 localhost.localdomain docker[1171]: time="2016-01-28T17:00:12.319213402-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:12 localhost.localdomain docker[1171]: time="2016-01-28T17:00:12.424532690-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:12 localhost.localdomain docker[1171]: time="2016-01-28T17:00:12.543582359-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:12 localhost.localdomain docker[1171]: time="2016-01-28T17:00:12.647404422-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:12 localhost.localdomain docker[1171]: time="2016-01-28T17:00:12.751936079-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:12 localhost.localdomain docker[1171]: time="2016-01-28T17:00:12.858706535-05:00" level=info msg="GET /containers/json"
Jan 28 17:00:12 localhost.localdomain docker[1171]: time="2016-01-28T17:00:12.970662728-05:00" level=info msg="GET /containers/json"
From @hferentschik on January 25, 2016 13:18
Running sudo journalctl --unit=openshift shows heaps of output for the OpenShift service.
Jan 25 05:21:30 localhost.localdomain sh[12992]: + set -o pipefail
Jan 25 05:21:30 localhost.localdomain sh[12992]: + set -o nounset
Jan 25 05:21:30 localhost.localdomain sh[12992]: + export ORIGIN_DIR=/var/lib/origin
Jan 25 05:21:30 localhost.localdomain sh[12992]: + ORIGIN_DIR=/var/lib/origin
Jan 25 05:21:30 localhost.localdomain sh[12992]: + export OPENSHIFT_DIR=/var/lib/origin/openshift.local.config/master
Jan 25 05:21:30 localhost.localdomain sh[12992]: + OPENSHIFT_DIR=/var/lib/origin/openshift.local.config/master
Jan 25 05:21:30 localhost.localdomain sh[12992]: + export KUBECONFIG=/var/lib/origin/openshift.local.config/master/admin.kubeconfig
Jan 25 05:21:30 localhost.localdomain sh[12992]: + KUBECONFIG=/var/lib/origin/openshift.local.config/master/admin.kubeconfig
Jan 25 05:21:30 localhost.localdomain sh[12992]: + echo 'export KUBECONFIG=/var/lib/origin/openshift.local.config/master/admin.kubeconfig'
Jan 25 05:21:30 localhost.localdomain sh[12992]: + '[' 4 -gt 0 ']'
Jan 25 05:21:30 localhost.localdomain sh[12992]: + case $1 in
Jan 25 05:21:30 localhost.localdomain sh[12992]: + shift
Jan 25 05:21:30 localhost.localdomain sh[12992]: + route=cdk.vm
Jan 25 05:21:30 localhost.localdomain sh[12992]: + shift
Jan 25 05:21:30 localhost.localdomain sh[12992]: + '[' 2 -gt 0 ']'
Jan 25 05:21:30 localhost.localdomain sh[12992]: + case $1 in
Jan 25 05:21:30 localhost.localdomain sh[12992]: + shift
Jan 25 05:21:30 localhost.localdomain sh[12992]: + host=10.1.2.2
Jan 25 05:21:30 localhost.localdomain sh[12992]: + shift
Jan 25 05:21:30 localhost.localdomain sh[12992]: + '[' 0 -gt 0 ']'
Jan 25 05:21:30 localhost.localdomain sh[12992]: + binaries=(oc oadm)
Jan 25 05:21:30 localhost.localdomain sh[12992]: + for n in '${binaries[@]}'
Jan 25 05:21:30 localhost.localdomain sh[12992]: + '[' -f /usr/bin/oc ']'
Jan 25 05:21:30 localhost.localdomain sh[12992]: + echo '[INFO] Copying the OpenShift '\''oc'\'' binary to host /usr/bin/oc'
Jan 25 05:21:30 localhost.localdomain sh[12992]: [INFO] Copying the OpenShift 'oc' binary to host /usr/bin/oc
Jan 25 05:21:30 localhost.localdomain sh[12992]: + docker run --rm --entrypoint=/bin/cat openshift /usr/bin/oc
Jan 25 05:21:34 localhost.localdomain sh[12992]: + chmod +x /usr/bin/oc
Jan 25 05:21:34 localhost.localdomain sh[12992]: + for n in '${binaries[@]}'
Jan 25 05:21:34 localhost.localdomain sh[12992]: + '[' -f /usr/bin/oadm ']'
Jan 25 05:21:34 localhost.localdomain sh[12992]: + echo '[INFO] Copying the OpenShift '\''oadm'\'' binary to host /usr/bin/oadm'
Jan 25 05:21:34 localhost.localdomain sh[12992]: [INFO] Copying the OpenShift 'oadm' binary to host /usr/bin/oadm
Jan 25 05:21:34 localhost.localdomain sh[12992]: + docker run --rm --entrypoint=/bin/cat openshift /usr/bin/oadm
Jan 25 05:21:40 localhost.localdomain sh[12992]: + chmod +x /usr/bin/oadm
Jan 25 05:21:40 localhost.localdomain sh[12992]: + echo 'export OPENSHIFT_DIR=#{ORIGIN_DIR}/openshift.local.config/master'
Jan 25 05:21:40 localhost.localdomain sh[12992]: + rm_openshift_container
Jan 25 05:21:40 localhost.localdomain sh[12992]: + docker inspect openshift
Jan 25 05:21:40 localhost.localdomain sh[12992]: + docker rm -f -v openshift
Jan 25 05:21:40 localhost.localdomain sh[12992]: + master_config=/var/lib/origin/openshift.local.config/master/master-config.yaml
Jan 25 05:21:40 localhost.localdomain sh[12992]: ++ hostname
Jan 25 05:21:40 localhost.localdomain sh[12992]: + node_config=/var/lib/origin/openshift.local.config/node-localhost.localdomain/node-config.yaml
Jan 25 05:21:40 localhost.localdomain sh[12992]: + '[' '!' -f /var/lib/origin/openshift.local.config/master/master-config.yaml ']'
Jan 25 05:21:40 localhost.localdomain sh[12992]: + dirs=(openshift.local.volumes openshift.local.config openshift.local.etcd)
Jan 25 05:21:40 localhost.localdomain sh[12992]: + for d in '${dirs[@]}'
Jan 25 05:21:40 localhost.localdomain sh[12992]: + mkdir -p /var/lib/origin/openshift.local.volumes
...
Looks like xtrace is enabled. It is really hard to find useful information this way; the log should be less verbose.
Copied from original issue: projectatomic/adb-atomic-developer-bundle#211
Currently, when we try to implement an openshift_stop script, systemctl status openshift always ends up in a failed state because the openshift process does not exit gracefully.
As per article https://www.ctl.io/developers/blog/post/gracefully-stopping-docker-containers/ when running our own application in a container, we must decide how the different signals will be interpreted by our app.
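As a sketch of that pattern in shell (illustrative only; the real openshift binary is Go, so the actual fix would live there): trap the stop signals and exit 0 explicitly so that docker stop reports a clean exit.

```shell
#!/bin/bash
# Illustrative sketch of the pattern from the linked article: a containerized
# entrypoint that traps the signals `docker stop` / `docker kill` send, so the
# process exits 0 instead of 143 (SIGTERM) or 130 (SIGINT).
cleanup() {
    # Graceful shutdown work would go here.
    exit 0
}
trap cleanup TERM INT
# ... main work loop would follow ...
```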
# docker stop openshift
openshift
# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a0786157a986 openshift "/usr/bin/openshift s" 2 minutes ago Exited (143) 4 seconds ago openshift
# docker kill --signal=SIGINT openshift
openshift
# docker ps -a
8d9d8616e791 openshift "/usr/bin/openshift s" 52 seconds ago Exited (130) 2 seconds ago openshift
# docker kill --signal=SIGQUIT openshift
193cad9ae9e0 openshift "/usr/bin/openshift s" 42 seconds ago Exited (2) 18 seconds ago openshift
None of the above produces exit status 0, which puts systemd into a failed state. There is another option, -t, described as "Seconds to wait for stop before killing it", but I don't want to use an arbitrary waiting time.
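One alternative to a stop timeout, assuming the systemd unit can be amended: SuccessExitStatus= is a standard systemd directive, and the codes 130/143 are exactly the SIGINT/SIGTERM exits observed above.

```ini
# Drop-in sketch for openshift.service: count the signal-induced exit codes
# observed above (130 = SIGINT, 143 = SIGTERM) as successful stops.
[Service]
SuccessExitStatus=130 143
```

Whether this is preferable to making the process itself exit cleanly is open for discussion; it only changes how systemd interprets the existing exits.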
It would be nice to have some default project created for the openshift-dev user defined here:
https://github.com/projectatomic/adb-utils/blob/master/services/openshift/scripts/openshift#L113
When you start the CDK in Eclipse/JBDS, a connection to OpenShift is created automatically in the OpenShift Explorer view. When you connect as the openshift-dev user, at first you don't see anything under that connection, which may make it seem like it's not working properly. So if there is a project to start with, it will be nicer for the user.
From @hferentschik on January 25, 2016 13:52
ATM all the configuration of OpenShift occurs on the first startup of the service. This could be improved by doing everything upfront so that the VM and OpenShift are fully prepared.
Also, instead of generating the OpenShift config on the fly, one might as well check it in and save some of the service "provisioning".
Copied from original issue: projectatomic/adb-atomic-developer-bundle#216
We should support oc auto-completion, which is currently not working.
$ oc des<TAB>
$ oc describe
OpenShift has such scripts: https://github.com/openshift/origin/tree/master/contrib/completions/bash
From @hferentschik on January 25, 2016 13:48
I think it is not good practice to use an unauthenticated OpenShift. This is of course open for discussion, but I think one should at least use basic authentication.
One also needs to discuss whether an admin user should be pre-created.
Copied from original issue: projectatomic/adb-atomic-developer-bundle#215
Having a single interface to handle setting up Kubernetes, OpenShift, etc. would help improve the user experience. The interface could be called from the Vagrantfile or inside the Vagrant box.
This would allow using, for example, http://openshift3swagger-claytondev.rhcloud.com/ to explore the OpenShift REST API with Swagger.
To do this, the corsAllowedOrigins parameter in master-config.yaml must be adjusted.
ATM, the default host is hard-coded:
OPTIONS="-r adb.vm -host 10.1.2.2"
This ties the VM to a specific IP. The IP should really be determined dynamically. Or maybe we can get it to work without one!?
add_insecure_registry does the following:
function init_insecure_reg()
{
if grep -q '# INSECURE_REGISTRY' ${docker_conf_file}
then
sed -i.orig "/# INSECURE_REGISTRY=*/c\INSECURE_REGISTRY=\"\"" ${docker_conf_file}
# Get machine IP address
local local_ip=$(get_ip_address)
sed -i.back "s/INSECURE_REGISTRY=\"\(.*\)\"/INSECURE_REGISTRY=\"\1 --insecure-registry 172.30.0.0\/16 --insecure-registry $local_ip\"/" ${docker_conf_file}
is_restart_req=true
fi
}
In particular, it adds the local IP as an insecure registry. I am wondering what the use case for that is; there is no registry running under this IP (at least none that I am aware of).
Any idea what it is supposed to be good for?
We should be able to use https://github.com/projectatomic/adb-utils/blob/master/services/openshift/scripts/docker_config, i.e. /opt/adb/openshift/docker_config, to add new docker registries (as many times as the user wants).
The OpenShift startup should use sd_notify to allow systemd to correctly report readiness and sequence things that need to wait for all of OpenShift to be ready.
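A minimal sketch of what that could look like on the unit side (Type= and NotifyAccess= are standard systemd directives; wiring the actual notification into the startup script is an assumption):

```ini
# Sketch of the sd_notify approach: switch the unit to Type=notify so that
# systemd considers the service started only once readiness is signalled.
# The startup script would then run `systemd-notify --ready` (or call
# sd_notify(3) from code) after OpenShift's health endpoint responds.
[Service]
Type=notify
# Allow the wrapper script, not just the main PID, to send the notification.
NotifyAccess=all
```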
Currently, to test a patch set, we create a local RPM manually and then deploy it to the box. Creating the RPM is a trivial task and can be part of CI.
We shouldn't be restarting docker here unless we really need to.
We create two users in OpenShift, i.e. openshift-dev and admin. We should either print these usernames/passwords on the command line or add them to the documentation.
The script should allow being called multiple times, specifying new registries to add. This way it could also be called from the Vagrantfile during provisioning, in order to add for example hub.cdk.10.1.2.2.xip.io (the external route for the OpenShift registry).
For this to work, one needs to be able to specify a host or IP; the script seems to only deal with IPs. It should also check the INSECURE_REGISTRY option to see whether the registry to add is already present: if so, do nothing; if not, add it.
Also, the current sed expression seems wrong, resulting in a trailing '/':
INSECURE_REGISTRY="--insecure-registry 172.30.0.0/16 --insecure-registry 10.1.2.2"/
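A minimal sketch of an idempotent variant (hypothetical: the function name and sysconfig layout are assumptions mirroring the existing script, not the actual implementation):

```shell
#!/bin/bash
# Hypothetical sketch: add a registry (host or IP) to INSECURE_REGISTRY in a
# sysconfig-style file, doing nothing if it is already listed. The file path
# is passed in; the option name mirrors /etc/sysconfig/docker.
add_insecure_registry() {
    local registry=$1 conf=$2
    # Create the option line if the file does not have one yet.
    grep -q '^INSECURE_REGISTRY=' "$conf" || echo 'INSECURE_REGISTRY=""' >> "$conf"
    # Idempotence: bail out if this registry is already present.
    grep -q -- "--insecure-registry $registry" "$conf" && return 0
    # Append the new entry inside the existing quotes.
    sed -i "s|^INSECURE_REGISTRY=\"\(.*\)\"|INSECURE_REGISTRY=\"\1 --insecure-registry $registry\"|" "$conf"
}
```

Called twice with the same host, the second call is a no-op, so it would be safe to invoke from the Vagrantfile on every vagrant up.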
Currently we have:
wait_for_openshift_api() {
# Wait for container to show up
ATTEMPT=0
until docker inspect openshift > /dev/null 2>&1 || [ $ATTEMPT -eq 10 ]; do
sleep 1
((ATTEMPT++))
done
# Wait for API to be accessible
ATTEMPT=0
until curl -ksSf https://127.0.0.1:8443/api > /dev/null 2>&1 || [ $ATTEMPT -eq 30 ]; do
sleep 1
((ATTEMPT++))
done
}
In particular, we are checking whether the API endpoint responds. We should use https://127.0.0.1:8443/healthz/ready instead, which is the proper health URL. I am not sure whether /api can respond while OpenShift is not ready yet, but using the health URL seems the safer bet.
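The polling loops could also be factored into a generic retry helper; a sketch (not the project's code — the readiness URL in the comment is the one suggested in this issue):

```shell
#!/bin/bash
# Sketch: poll a command until it succeeds or the attempt budget runs out.
# The command runs once more at the end so the exit status reflects the
# final attempt.
wait_for() {
    local max=$1 attempt=0
    shift
    until "$@" > /dev/null 2>&1 || [ "$attempt" -eq "$max" ]; do
        sleep 1
        attempt=$((attempt + 1))
    done
    "$@" > /dev/null 2>&1
}

# Example usage (readiness URL as suggested above):
#   wait_for 30 curl -ksSf https://127.0.0.1:8443/healthz/ready
```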
$ sudo systemctl enable openshift
$ sudo systemctl start openshift
Job for openshift.service failed because the control process exited with error code. See "systemctl status openshift.service" and "journalctl -xe" for details.
$ sudo journalctl --unit=openshift
Feb 15 09:21:36 localhost.localdomain systemd[1]: openshift.service: main process exited, code=exited, status=143/n/a
Feb 15 09:21:36 localhost.localdomain systemd[1]: Failed to start Docker Application Container for OpenShift.
Feb 15 09:21:36 localhost.localdomain systemd[1]: Unit openshift.service entered failed state.
Feb 15 09:21:36 localhost.localdomain systemd[1]: openshift.service failed.
Feb 15 09:22:06 localhost.localdomain systemd[1]: openshift.service holdoff time over, scheduling restart.
Feb 15 09:22:06 localhost.localdomain systemd[1]: Starting Docker Application Container for OpenShift...
Feb 15 09:22:06 localhost.localdomain docker[13560]: openshift
Feb 15 09:22:07 localhost.localdomain docker[13565]: openshift
Feb 15 09:22:07 localhost.localdomain docker[13571]: Error response from daemon: Conflict: Tag latest is already set to image 62208f15133798af253a651386ae2753da9130fcc68f1863cbd735c547356173, if you want to replace it, please use -f opt
Feb 15 09:22:08 localhost.localdomain sh[13667]: [INFO] Starting OpenShift server
The OpenShift registry is now exposed as a route in openshift_provision:
registry_host_name="hub.openshift.${ip}.xip.io"
oc expose service docker-registry --hostname ${registry_host_name}
But to be able to push to the registry one also needs to add the route/host to /etc/sysconfig/docker.
For now I have to add:
/opt/adb/openshift/add_insecure_registry -ip #{REGISTRY_HOST}
to my Vagrantfile.
I am pretty sure we already discussed the need for this insecure registry entry somewhere, I think in the context of the changes to add_insecure_registry.
$ oc get nodes
Error from server: User "admin" cannot list all nodes in the cluster
I would have expected that the admin user could do that!?
To align with the other scripts in this directory.
@hferentschik pointed out the situation below, and we should have a better use case for the host value in openshift_option.
why do you hardcode xip.io? What if someone wants to go the Landrush route?
Is the problem not in openshift_option. There we just use the hostname, but it should be something like 10.1.2.2.xip.io or
Discussion : #67
Since we now have the hostname set in the kickstart file, we should use it instead of hard-coding, which will also make things easier for downstream repos.
The script should be able to stop OpenShift if required and set up Kubernetes.
From @hferentschik on January 25, 2016 13:37
$sudo systemctl reload openshift
Failed to reload openshift.service: Job type reload is not applicable for unit openshift.service.
Copied from original issue: projectatomic/adb-atomic-developer-bundle#213
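systemd rejects reload for units that do not define an ExecReload= command. A hedged sketch of what enabling it could look like (whether the openshift process actually does anything useful on SIGHUP is a separate question):

```ini
# Drop-in sketch: give openshift.service a reload action. This merely
# forwards a HUP to the container; it assumes the process responds to HUP.
[Service]
ExecReload=/usr/bin/docker kill --signal=HUP openshift
```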
To expose the OpenShift registry, the current provisioning is doing something like
oc expose service docker-registry --hostname ${registry_host_name}
This won't really help much without also editing the Docker daemon options and adding registry_host_name
to the list of insecure registries. See also https://github.com/redhat-developer-tooling/openshift-vagrant/blob/master/cdk-v2/scripts/configure_docker.sh#L39
Related
We need the ability to switch the configuration of docker daemon to run on
The implementation should reside in the box so that external tooling can call the script to toggle between docker daemon configurations.
Is having this implementation as part of sccli a good option?
Following are the contents of /etc/sysconfig/openshift_option
# /etc/sysconfig/openshift_options
# Modify these options if you want to change openshift route and hostname
OPTIONS="-host adb.vm -ip 10.1.2.2"
IMAGE="openshift/origin:v1.1.1"
DOCKER_REGISTRY="docker.io"
The box IP 10.1.2.2 is hard-coded in this config. External client connections to the OpenShift server in the box will not work if the box IP is different.
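A sketch of determining the box IP at startup instead of hard-coding it (the helper functions are hypothetical; the parsing assumes the usual iproute2 `ip route get` output format):

```shell
#!/bin/bash
# Sketch: derive the primary IP at boot rather than baking 10.1.2.2 into
# /etc/sysconfig/openshift_option.

# Pull the `src` address out of an `ip route get` line on stdin.
extract_src_ip() {
    awk '{for (i = 1; i < NF; i++) if ($i == "src") {print $(i + 1); exit}}'
}

# Primary IP of the box, per the default route (hypothetical helper).
primary_ip() {
    ip route get 1 2>/dev/null | extract_src_ip
}

# Provisioning could then write: OPTIONS="-host adb.vm -ip $(primary_ip)"
```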
In /opt/adb/openshift/openshift_provision we have:
registry_host_name="hub.$(hostname)"
oc expose service docker-registry --hostname ${registry_host_name}
hostname does not seem to be set anywhere. The resulting route becomes (accidentally!?) hub.localhost.localdomain, but it should be something like hub.cdk.10.1.2.2.xip.io.
It seems nothing is passed to the provision script either (/usr/lib/systemd/system/openshift.service):
ExecStartPost=/usr/bin/sh /opt/adb/openshift/openshift_provision
From @hferentschik on January 25, 2016 13:36
Running sudo systemctl stop openshift does not stop the service. The OpenShift container stays running and the OpenShift services are still accessible.
I did the following:
vagrant up --no-provision to bring up a new instance of the VM
vagrant ssh
sudo systemctl status openshift
openshift.service - Docker Application Container for OpenShift
Loaded: loaded (/etc/systemd/system/openshift.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Docs: https://docs.openshift.org/
sudo systemctl enable openshift
sudo systemctl start openshift
sudo systemctl status openshift
- the service starts, but note that two commands reported ERRORS. In this case this is harmless, but it should still be avoided.
openshift.service - Docker Application Container for OpenShift
Loaded: loaded (/etc/systemd/system/openshift.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2016-01-25 08:26:58 EST; 1min 9s ago
Docs: https://docs.openshift.org/
Process: 13231 ExecStartPost=/usr/bin/sh -x /opt/adb/openshift_provision $OPTIONS (code=exited, status=0/SUCCESS)
Process: 13033 ExecStartPost=/bin/sleep 20 (code=exited, status=0/SUCCESS)
Process: 13027 ExecStartPre=/usr/bin/docker tag ${DOCKER_REGISTRY}/${IMAGE} openshift (code=exited, status=0/SUCCESS)
Process: 13020 ExecStartPre=/usr/bin/docker rm openshift (code=exited, status=1/FAILURE)
Process: 13016 ExecStartPre=/usr/bin/docker stop openshift (code=exited, status=1/FAILURE)
Main PID: 13032 (sh)
Memory: 107.8M
CGroup: /system.slice/openshift.service
├─13032 /usr/bin/sh -x /opt/adb/openshift -r cdk.vm -host 10.1.2.2
└─13226 docker run --name openshift --privileged --net=host --pid=host -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys:ro -v /var/lib/docker:/var/lib/docker:rw -v /var/lib/origin/openshift.local.volumes:/var/lib/origi...
Jan 25 08:26:57 localhost.localdomain sh[13231]: + oc login https://10.1.2.2:8443 -u test-admin -p test --certificate-authority=/var/lib/origin/openshift.local.config/master/ca.crt
Jan 25 08:26:57 localhost.localdomain sh[13231]: + oc new-project test '--display-name=OpenShift 3 Sample' '--description=This is an example project to demonstrate OpenShift v3'
Jan 25 08:26:57 localhost.localdomain sh[13231]: + sudo touch /var/lib/origin/configured.user
Jan 25 08:26:57 localhost.localdomain sudo[13600]: root : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/touch /var/lib/origin/configured.user
Jan 25 08:26:58 localhost.localdomain systemd[1]: Started Docker Application Container for OpenShift.
Jan 25 08:27:48 localhost.localdomain sh[13032]: W0125 08:27:48.618395 13264 manager.go:1892] Hairpin setup failed for pod "docker-registry-1-deploy_default": open /sys/devices/virtual/net/veth182179f/brport/hairpin...only file system
Jan 25 08:27:50 localhost.localdomain sh[13032]: I0125 08:27:50.837607 13264 event.go:216] Event(api.ObjectReference{Kind:"ReplicationController", Namespace:"default", Name:"docker-registry-1", UID:"444d15e5-c367-11e5-916d-52540045...
Jan 25 08:27:52 localhost.localdomain sh[13032]: W0125 08:27:52.240466 13264 manager.go:1892] Hairpin setup failed for pod "docker-registry-1-v2mul_default": open /sys/devices/virtual/net/veth9c98c9f/brport/hairpin_...only file system
Jan 25 08:27:56 localhost.localdomain sh[13032]: W0125 08:27:56.562703 13264 manager.go:1892] Hairpin setup failed for pod "router-1-deploy_default": open /sys/devices/virtual/net/vethf8ebeda/brport/hairpin_mode: read-only file system
Jan 25 08:27:58 localhost.localdomain sh[13032]: I0125 08:27:58.826657 13264 event.go:216] Event(api.ObjectReference{Kind:"ReplicationController", Namespace:"default", Name:"router-1", UID:"44586451-c367-11e5-916d-5...: router-1-b7te3
Hint: Some lines were ellipsized, use -l to show in full.
docker ps
- OpenShift running
4cd005fc5b72 openshift "/usr/bin/openshift s" 4 minutes ago Up 4 minutes openshift
sudo systemctl stop openshift
docker ps
- container still running. No shutdown!!
4cd005fc5b72 openshift "/usr/bin/openshift s" 6 minutes ago Up 6 minutes openshift
sudo systemctl status openshift
- the status output seems wrong now as well
openshift.service - Docker Application Container for OpenShift
Loaded: loaded (/etc/systemd/system/openshift.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Mon 2016-01-25 08:32:09 EST; 2min 10s ago
Docs: https://docs.openshift.org/
Process: 13231 ExecStartPost=/usr/bin/sh -x /opt/adb/openshift_provision $OPTIONS (code=exited, status=0/SUCCESS)
Process: 13033 ExecStartPost=/bin/sleep 20 (code=exited, status=0/SUCCESS)
Process: 13032 ExecStart=/usr/bin/sh -x /opt/adb/openshift $OPTIONS (code=killed, signal=TERM)
Process: 13027 ExecStartPre=/usr/bin/docker tag ${DOCKER_REGISTRY}/${IMAGE} openshift (code=exited, status=0/SUCCESS)
Process: 13020 ExecStartPre=/usr/bin/docker rm openshift (code=exited, status=1/FAILURE)
Process: 13016 ExecStartPre=/usr/bin/docker stop openshift (code=exited, status=1/FAILURE)
Main PID: 13032 (code=killed, signal=TERM)
Memory: 106.5M
CGroup: /system.slice/openshift.service
└─13226 docker run --name openshift --privileged --net=host --pid=host -v /:/rootfs:ro -v /var/run:/var/run:rw -v /sys:/sys:ro -v /var/lib/docker:/var/lib/docker:rw -v /var/lib/origin/openshift.local.volumes:/var/lib/origi...
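The main process is only a sh wrapper around docker run, so when systemd kills the wrapper with SIGTERM the container itself keeps running. One hedged fix (a sketch, not the shipped unit) would be an explicit ExecStop in the service file:

```ini
# Hypothetical addition to /etc/systemd/system/openshift.service:
# make `systemctl stop openshift` actually stop the container
# that ExecStart launched.
ExecStop=-/usr/bin/docker stop openshift
```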
Copied from original issue: projectatomic/adb-atomic-developer-bundle#212
It needs to be added to the RPM
This would allow us to clean out some of the code which deals with removing an existing container; stopping the container would automatically remove it.
The only drawback might be that one cannot inspect a failing container after it exits, but that is really a very advanced use case.
Currently we don't have ELK (Elasticsearch logging) or metrics (like Hawkular) enabled, which would be good for the developer experience.
Refer: #45
Currently /var/lib/origin is used, even though all other scripts use just 'openshift'.
Maybe we should even put it under /opt/adb/openshift!?
openshift_provision does copy the following binaries:
binaries=(openshift oc oadm)
for n in ${binaries[@]}; do
echo "[INFO] Copying ${n} binary to VM"
docker cp openshift:/usr/bin/${n} /usr/bin/${n}
done
However, with (at least) OSE 3.1 and OSE 3.2, oadm is really a symbolic link to openshift within the container. Downloading all three binaries like this seems to break the links. Trying to execute oadm results in the following error:
/usr/bin/oadm
-bash: /usr/bin/oadm: Too many levels of symbolic links
Not quite sure why this has not popped up earlier.
I guess we just need to copy oc and openshift and then create the symbolic link ourselves.
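The fix sketched above could look like this. It is a simulation: plain cp and temp directories stand in for docker cp and /usr/bin so the snippet is self-contained and runnable:

```shell
# Sketch of the proposed fix: copy only the real binaries and recreate
# the oadm symlink ourselves, instead of copying the (broken) link.
# In openshift_provision, `cp "${src}/${n}"` would be
# `docker cp openshift:/usr/bin/${n}`.
set -e
src=$(mktemp -d)   # stands in for the container's /usr/bin
dst=$(mktemp -d)   # stands in for the VM's /usr/bin
printf '#!/bin/sh\necho openshift\n' > "${src}/openshift"
printf '#!/bin/sh\necho oc\n'        > "${src}/oc"
chmod +x "${src}/openshift" "${src}/oc"

for n in openshift oc; do          # note: oadm is deliberately left out
    cp "${src}/${n}" "${dst}/${n}"
done
ln -sf "${dst}/openshift" "${dst}/oadm"   # create the link ourselves

echo "oadm -> $(basename "$(readlink "${dst}/oadm")")"
```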
As part of the OpenShift service config we have:
ExecStartPre=-/usr/bin/docker tag ${DOCKER_REGISTRY}/${IMAGE} openshift
This won't work if the image is not pulled yet, which makes the IMAGE option in /etc/sysconfig/openshift_option useless. One needs to pull the image first and then tag it (in order to be able to run against a different OpenShift version).
Also, in order to allow running, for example, the latest Origin, it does not make sense to prefix DOCKER_REGISTRY. The image should be given fully specified.
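A hedged sketch of what the unit could do instead (not the actual shipped unit): pull the fully specified image first, then tag it locally, with no registry prefix.

```ini
# Hypothetical revision of the openshift.service ExecStartPre lines:
# IMAGE is fully qualified, so no DOCKER_REGISTRY prefix is needed.
ExecStartPre=-/usr/bin/docker pull ${IMAGE}
ExecStartPre=-/usr/bin/docker tag ${IMAGE} openshift
```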
Currently the cert generation script used by adbinfo is part of the kickstart file. As part of our effort we can move it into the adb-utils repo and include it in the RPM.
Currently, if one wants to custom-configure Docker or OpenShift, one needs to either write one's own 'options' file or make sure to add the right values via sed or similar. For example:
cat << EOF > /etc/sysconfig/openshift_option
OPTIONS="-r #{PUBLIC_HOST} -host #{PUBLIC_ADDRESS}"
IMAGE="openshift3/ose:v3.1.0.4"
DOCKER_REGISTRY="registry.access.redhat.com"
EOF
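A sed-based variant edits a single key instead of overwriting the whole file, so usage comments survive. This is a sketch: the file contents here are a throwaway copy so the snippet is self-contained; the real path would be /etc/sysconfig/openshift_option.

```shell
# Sketch: change only the IMAGE key in place (GNU sed assumed).
opts=$(mktemp)
cat > "${opts}" <<'EOF'
# Modify these options if you want to change openshift route and hostname
OPTIONS="-host adb.vm -ip 10.1.2.2"
IMAGE="openshift/origin:v1.1.1"
DOCKER_REGISTRY="docker.io"
EOF
sed -i 's|^IMAGE=.*|IMAGE="openshift/origin:v1.1.3"|' "${opts}"
grep '^IMAGE=' "${opts}"
```

Even so, this remains fragile, which is why provisioning-driven configuration (below) seems preferable.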
This is error prone and not "easy" for the user. Also, in the case where the whole file is overwritten (as above), useful information like usage comments is removed.
If we assume that adbinfo (or better, its successor plugin) is a requirement for the CDK VM, we could add some provisioning capabilities which allow setting a few simple config values; the updates to the appropriate files would then happen during provisioning (similar to how the generation of the certificates should happen during provisioning).
This way we could make the user experience much nicer, offering proper configuration options to change things like the default route, IP, etc.
From @hferentschik on January 25, 2016 13:46
/opt/adb/openshift_provision does some basic provisioning for OpenShift. It sets up the OpenShift registry as well as the router. However, it also changes the default permissions of OpenShift to allow running any image. I think this goes too far; this should most likely be user driven.
The same applies to the imported templates and the created test-admin user. Both are use-case specific and should be driven by user provisioning.
Copied from original issue: projectatomic/adb-atomic-developer-bundle#214
Refer #45
If Docker images are already present for a particular tag and the user asks for that tag, we should not run docker pull and wait for the command to report that the image is up to date. A docker pull for an already-present tag takes 4-5 seconds just to say the image is up to date, so skipping it for 5 images can save the user some time. We should not rely on the docker pull command to tell us whether we have the images; the code should figure that out by itself, locally.

Following are the contents of /etc/sysconfig/openshift_option:
# /etc/sysconfig/openshift_options
# Modify these options if you want to change openshift route and hostname
OPTIONS="-host adb.vm -ip 10.1.2.2"
IMAGE="openshift/origin:v1.1.1"
DOCKER_REGISTRY="docker.io"
IMAGE should not be hardcoded.
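The local check described above (skip docker pull when the tag is already present) could be sketched as follows. The docker function here is a hypothetical stub so the logic is runnable without a daemon; in the real script the stub would simply be the docker CLI:

```shell
# Stub standing in for the docker CLI: pretends v1.1.1 is cached locally.
docker() {
    case "$1 $2" in
        "images -q")
            [ "$3" = "openshift/origin:v1.1.1" ] && echo abc123
            ;;
        "pull "*)
            echo "pulling $2"
            ;;
    esac
}

# Pull only when the tag is not already present locally.
pull_if_missing() {
    if [ -z "$(docker images -q "$1")" ]; then
        docker pull "$1"
    else
        echo "[INFO] $1 already present, skipping pull"
    fi
}

pull_if_missing "openshift/origin:v1.1.1"   # cached: no pull
pull_if_missing "openshift/origin:v1.1.2"   # missing: triggers pull
```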