
catasb's Introduction


Our issue tracker is located at http://bugzilla.redhat.com

Fusor API documentation is located here (or here for single-page).

To file bugs or enhancement requests, use https://bugzilla.redhat.com/enter_bug.cgi?product=Red%20Hat%20Quickstart%20Cloud%20Installer (will require an account).

The form will be pre-filled with a template for filing a bug. If you are filing an RFE, an alternative template is provided below.

Request for Enhancement (RFE)

Motivation:

  • Why do you want this feature?
  • Is it a priority? Why?
  • Who needs it (internal, customer, etc)?

Current Behavior:

  • If you are proposing a change to an existing feature, where does the existing feature fall short?
  • If it is too difficult to use in its current state, where does the difficulty lie?

Desired Behavior:

  • If this feature did exist, how would you use it?
  • If it has a UI component, how would you like it to be presented to you? How would you like to interact with it?
  • If there are multiple options/paths/configurations, which ones are necessary for your use case?

Additional information:

  • Relevant product/documentation links
  • Caveats
  • Contacts
  • Communities
  • Anything else that may be of use

catasb's People

Contributors

alessfg, cfchase, david-martin, djwhatle, djzager, dymurray, eriknelson, fabianvf, geekgonecrazy, jaymccon, jmontleon, jmrodri, jwmatthews, karmab, mhrivnak, philbrookes, shawn-hurley, tchughesiv


catasb's Issues

With ec2 setup add a mechanism to clean up all created resources in the account.

@thoraxe raised the issue that catasb is not cleaning up after itself with ec2 provisioning.

There is no mechanism to clean up the various resources beyond the instance/volume that are provisioned.
Example: VPC, Security Groups, Subnets, etc.

This issue is tracking an ability to provide a "nuke_all.sh" or equivalent that a user can run to cleanup any resources created by catasb.

One implementation idea: tag every resource we create with a "catasb" identifier, then have the nuke operation query the account for all resources carrying that tag and remove them.

One concern is that this should not become part of the typical workflow in a shared IAM account, where multiple users are expected to reuse the same VPC, subnets, etc. In the shared use case we expect catasb to create a specific VPC once and all users to reuse it; we wouldn't want individual users to "nuke" these shared resources after their testing completes, as that would adversely impact others in the account.
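The tag-and-query idea could be sketched as a dry-run generator: feed it resource type/id pairs (for example, collected via aws CLI queries filtered on a "catasb" tag) and it prints the delete commands for review before anything is actually nuked. Everything below, including the tag name and the helper's name, is an illustration rather than existing catasb behavior.

```shell
#!/usr/bin/env bash
# Sketch: read "resource-type resource-id" pairs on stdin and emit the
# matching aws CLI delete command for each. Dry-run only: nothing is
# executed, the output is meant to be reviewed (or piped to sh) by a human.
nuke_preview() {
  while read -r rtype rid; do
    case "$rtype" in
      instance)       echo "aws ec2 terminate-instances --instance-ids $rid" ;;
      vpc)            echo "aws ec2 delete-vpc --vpc-id $rid" ;;
      subnet)         echo "aws ec2 delete-subnet --subnet-id $rid" ;;
      security-group) echo "aws ec2 delete-security-group --group-id $rid" ;;
      *)              echo "# unknown resource type: $rtype $rid" ;;
    esac
  done
}
```

The input pairs could come from queries such as `aws ec2 describe-vpcs --filters "Name=tag:created-by,Values=catasb"` (the `created-by` tag key is an assumed convention, not something catasb sets today).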

Duplicate files

There are a few files that are byte-for-byte identical but have different filenames.

$ fdupes -R .

./ansible/roles/aws_display_info/tasks/main.yml                                       
./ansible/roles/aws_terminate_ec2_instances/tasks/main.yml                            
./ansible/roles/aws_packages/tasks/main.yml                                           

./ansible/reset_local_environment.retry    
./ansible/setup_local_environment.retry    

./local/linux/reset_environment.sh         
./local/linux/run_setup_local.sh           

./local/gate/run_gate.sh                   
./local/gate/reset_environment.sh          

./local/mac/reset_environment.sh           
./local/mac/run_mac_local.sh      
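For anyone without fdupes installed, a minimal stand-in groups files by checksum and prints any group with more than one member. This is a sketch assuming GNU md5sum and paths without embedded spaces.

```shell
# Minimal fdupes stand-in: checksum every file under a directory, sort by
# hash, and print each file whose hash matches the previous line's hash
# (plus the first file of that group). Assumes md5sum (GNU coreutils) and
# paths without spaces, since awk splits on whitespace.
find_dupes() {
  find "$1" -type f -exec md5sum {} + | sort | awk '
    { if ($1 == prev) { print prevfile; print $2 } prev = $1; prevfile = $2 }'
}
```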

Support both modes

catasb used to set up a libvirt environment locally. That functionality has been moved to an archive in favor of EC2. I propose we create an ec2 directory and a libvirt directory, if this is going to be the setup for running the ansible-service-broker and catalog as both a development environment and a testing ground.

shouldn't default to anyuid scc

Currently, during catasb setup, it appears that all authenticated users are being given anyuid scc rights.

shell: "{{ oc_cmd }} adm policy add-scc-to-group anyuid system:authenticated"

I assume this is to provide a more pleasant experience for devs, but it's not how OpenShift runs by default, and it could prove problematic when an apb makes the transition to OpenShift Online/Dedicated/etc. Images expecting to run as a certain uid, root or otherwise, could error out.

Maybe, instead, we add an scc option to the apb.yml spec and allow a dev to ask the broker for a certain scc allocated to a certain sa in a specific project, while keeping the defaults as they are?

We can drive some of this work if folks are agreeable to it...
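To make the proposal concrete, one hypothetical shape for such an apb.yml option could be the following. The `scc` block and its field names are invented for illustration; nothing like this exists in the APB spec today.

```yaml
# Hypothetical apb.yml fragment: the scc block is an invented illustration
# of the proposal above, not an existing field in the APB spec.
name: my-apb
image: docker.io/example/my-apb
scc:
  name: anyuid            # scc the broker would be asked to grant
  service_account: my-sa  # sa receiving the scc, scoped to the project
```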

sudo: a password is required

TASK [openshift_setup : Resetting cluster, True] *********************************************************************************************************************************************************
changed: [localhost]

TASK [openshift_setup : Install docker through pip as it's a requirement of ansible docker module] *******************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "module_stderr": "sudo: a password is required\n", "module_stdout": "", "msg": "MODULE FAILURE", "rc": 1}
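Assuming the failing task escalates with become on localhost, two common ways around this are passing `--ask-become-pass` to ansible-playbook, or granting the invoking user passwordless sudo. A sudoers fragment for the latter (the username is a placeholder; edit with visudo):

```text
# /etc/sudoers.d/catasb-dev  (edit with: visudo -f /etc/sudoers.d/catasb-dev)
youruser ALL=(ALL) NOPASSWD: ALL
```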

Need to update docker library to > 2.0

According to ansibleplaybookbundle/ansible-playbook-bundle#36 (comment), we need to update the docker library to > 2.0; otherwise it will conflict with apb build.

# pip list|grep docker
docker (2.3.0)
docker-pycreds (0.2.1)

Local install failed with:
TASK [openshift_setup : Pulling all docker images we require] ******************
failed: [localhost] (item={u'tag': u'summit', u'img': u'docker.io/ansibleplaybookbundle/ansible-service-broker-apb'}) => {"failed": true, "item": {"img": "docker.io/ansibleplaybookbundle/ansible-service-broker-apb", "tag": "summit"}, "msg": "Failed to import docker-py - cannot import name Client. Try `pip install docker-py`"}
failed: [localhost] (item={u'tag': u'summit', u'img': u'docker.io/ansibleplaybookbundle/ansible-service-broker-asb'}) => {"failed": true, "item": {"img": "docker.io/ansibleplaybookbundle/ansible-service-broker-asb", "tag": "summit"}, "msg": "Failed to import docker-py - cannot import name Client. Try `pip install docker-py`"}
failed: [localhost] (item={u'tag': u'summit', u'img': u'docker.io/ansibleplaybookbundle/ansible-service-broker-etcd'}) => {"failed": true, "item": {"img": "docker.io/ansibleplaybookbundle/ansible-service-broker-etcd", "tag": "summit"}, "msg": "Failed to import docker-py - cannot import name Client. Try `pip install docker-py`"}
failed: [localhost] (item={u'tag': u'summit', u'img': u'docker.io/ansibleplaybookbundle/postgresql-demo-apb'}) => {"failed": true, "item": {"img": "docker.io/ansibleplaybookbundle/postgresql-demo-apb", "tag": "summit"}, "msg": "Failed to import docker-py - cannot import name Client. Try `pip install docker-py`"}
failed: [localhost] (item={u'tag': u'summit', u'img': u'docker.io/ansibleplaybookbundle/rds-postgres-apb'}) => {"failed": true, "item": {"img": "docker.io/ansibleplaybookbundle/rds-postgres-apb", "tag": "summit"}, "msg": "Failed to import docker-py - cannot import name Client. Try `pip install docker-py`"}
failed: [localhost] (item={u'tag': u'app-latest', u'img': u'manageiq/manageiq-pods'}) => {"failed": true, "item": {"img": "manageiq/manageiq-pods", "tag": "app-latest"}, "msg": "Failed to import docker-py - cannot import name Client. Try `pip install docker-py`"}
failed: [localhost] (item={u'tag': u'memcached-latest', u'img': u'manageiq/manageiq-pods'}) => {"failed": true, "item": {"img": "manageiq/manageiq-pods", "tag": "memcached-latest"}, "msg": "Failed to import docker-py - cannot import name Client. Try `pip install docker-py`"}
failed: [localhost] (item={u'tag': u'postgresql-latest', u'img': u'manageiq/manageiq-pods'}) => {"failed": true, "item": {"img": "manageiq/manageiq-pods", "tag": "postgresql-latest"}, "msg": "Failed to import docker-py - cannot import name Client. Try `pip install docker-py`"}
failed: [localhost] (item={u'tag': u'postgis', u'img': u'docker.io/fabianvf/postgresql'}) => {"failed": true, "item": {"img": "docker.io/fabianvf/postgresql", "tag": "postgis"}, "msg": "Failed to import docker-py - cannot import name Client. Try `pip install docker-py`"}
failed: [localhost] (item={u'tag': u'summit', u'img': u'docker.io/ansibleplaybookbundle/controller-manager'}) => {"failed": true, "item": {"img": "docker.io/ansibleplaybookbundle/controller-manager", "tag": "summit"}, "msg": "Failed to import docker-py - cannot import name Client. Try `pip install docker-py`"}
failed: [localhost] (item={u'tag': u'summit', u'img': u'docker.io/ansibleplaybookbundle/apiserver'}) => {"failed": true, "item": {"img": "docker.io/ansibleplaybookbundle/apiserver", "tag": "summit"}, "msg": "Failed to import docker-py - cannot import name Client. Try `pip install docker-py`"}
failed: [localhost] (item={u'tag': u'latest', u'img': u'docker.io/centos/python-35-centos7'}) => {"failed": true, "item": {"img": "docker.io/centos/python-35-centos7", "tag": "latest"}, "msg": "Failed to import docker-py - cannot import name Client. Try `pip install docker-py`"}
failed: [localhost] (item={u'tag': u'latest', u'img': u'docker.io/centos/python-34-centos7'}) => {"failed": true, "item": {"img": "docker.io/centos/python-34-centos7", "tag": "latest"}, "msg": "Failed to import docker-py - cannot import name Client. Try `pip install docker-py`"}
failed: [localhost] (item={u'tag': u'latest', u'img': u'docker.io/centos/python-27-centos7'}) => {"failed": true, "item": {"img": "docker.io/centos/python-27-centos7", "tag": "latest"}, "msg": "Failed to import docker-py - cannot import name Client. Try `pip install docker-py`"}
failed: [localhost] (item={u'tag': u'summit', u'img': u'docker.io/ansibleplaybookbundle/origin'}) => {"failed": true, "item": {"img": "docker.io/ansibleplaybookbundle/origin", "tag": "summit"}, "msg": "Failed to import docker-py - cannot import name Client. Try `pip install docker-py`"}
failed: [localhost] (item={u'tag': u'summit', u'img': u'docker.io/ansibleplaybookbundle/origin-sti-builder'}) => {"failed": true, "item": {"img": "docker.io/ansibleplaybookbundle/origin-sti-builder", "tag": "summit"}, "msg": "Failed to import docker-py - cannot import name Client. Try `pip install docker-py`"}
failed: [localhost] (item={u'tag': u'summit', u'img': u'docker.io/ansibleplaybookbundle/origin-deployer'}) => {"failed": true, "item": {"img": "docker.io/ansibleplaybookbundle/origin-deployer", "tag": "summit"}, "msg": "Failed to import docker-py - cannot import name Client. Try `pip install docker-py`"}
failed: [localhost] (item={u'tag': u'summit', u'img': u'docker.io/ansibleplaybookbundle/origin-docker-registry'}) => {"failed": true, "item": {"img": "docker.io/ansibleplaybookbundle/origin-docker-registry", "tag": "summit"}, "msg": "Failed to import docker-py - cannot import name Client. Try `pip install docker-py`"}
failed: [localhost] (item={u'tag': u'summit', u'img': u'docker.io/ansibleplaybookbundle/origin-haproxy-router'}) => {"failed": true, "item": {"img": "docker.io/ansibleplaybookbundle/origin-haproxy-router", "tag": "summit"}, "msg": "Failed to import docker-py - cannot import name Client. Try `pip install docker-py`"}
	to retry, use: --limit @/data/src/github.com/fusor/catasb/ansible/setup_local_environment.retry
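The version check above can be scripted. A minimal sketch that parses a `pip list`-style line (such as `docker (2.3.0)`) and decides whether the installed library is older than 2.0; the helper name is invented, and it does pure text parsing so it can run without pip:

```shell
# Decide whether a "pip list"-style line like "docker (2.3.0)" describes a
# docker library older than 2.0. Uses sort -V for version comparison.
needs_docker_upgrade() {
  ver="$(echo "$1" | sed -n 's/.*(\([0-9][0-9.]*\)).*/\1/p')"
  # sort -V puts the lowest version first; if that's $ver and it isn't
  # exactly 2.0, the installed library predates 2.0
  lowest="$(printf '%s\n2.0\n' "$ver" | sort -V | head -n1)"
  [ "$lowest" = "$ver" ] && [ "$ver" != "2.0" ] && echo yes || echo no
}
```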

`local_oc_client` doesn't respect the PATH

I have a .bin-override/ directory early in my PATH where I sometimes drop self-built oc binaries. I would expect local_oc_client to respect this and use my override rather than /usr/bin/oc, which I don't think it does right now. Need to double-check this and make a small change if that's true.
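A sketch of the desired lookup, resolving oc through PATH first and only falling back to a fixed location (the function name is invented; the fallback mirrors the /usr/bin/oc mentioned above):

```shell
# Resolve oc via PATH first, so override directories like ~/.bin-override
# win; fall back to /usr/bin/oc only when nothing is found on PATH.
resolve_oc() {
  if command -v oc >/dev/null 2>&1; then
    command -v oc
  else
    echo /usr/bin/oc
  fi
}
```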

Cloning to /tmp/ansible-service-broker fails if the directory already exists with local

Seeing this failure. It's probably isolated to me since I do this a lot, but I already had /tmp/ansible-service-broker present on my local system, so it failed to clone the broker:

TASK [ansible_service_broker_setup : git clone ansible-service-broker] *********
fatal: [localhost]: FAILED! => {"changed": false, "cmd": "/usr/bin/git clone --origin origin https://github.com/fusor/ansible-service-broker.git /tmp/ansible-service-broker", "failed": true, "msg": "fatal: destination path '/tmp/ansible-service-broker' already exists and is not an empty directory.", "rc": 128, "stderr": "fatal: destination path '/tmp/ansible-service-broker' already exists and is not an empty directory.\n", "stdout": "", "stdout_lines": []}
        to retry, use: --limit @/tmp/retry/setup_local_environment.retry
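One way to make the clone step tolerant of an existing checkout is a clone-or-update helper; this is a sketch, and the Ansible git module may handle the same case natively when pointed at an existing repository of the same remote.

```shell
# Clone the repo if the destination is absent, otherwise fast-forward the
# existing checkout, so a pre-existing /tmp/ansible-service-broker no
# longer aborts setup.
clone_or_update() {
  repo="$1"; dest="$2"
  if [ -d "$dest/.git" ]; then
    git -C "$dest" pull --ff-only
  else
    git clone "$repo" "$dest"
  fi
}
```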

Unable to register existing brokers

When I register a new external broker it errors getting the catalog. The error is:

I0716 08:59:19.549125 1 event.go:217] Event(v1.ObjectReference{Kind:"Broker", Namespace:"", Name:"anynines-postgres", UID:"74e8215f-69fb-11e7-a06d-0242ac110003", APIVersion:"servicecatalog.k8s.io", ResourceVersion:"45", FieldPath:""}): type: 'Warning' reason: 'ErrorFetchingCatalog' Error getting broker catalog for broker "anynines-postgres": Status: 401; ErrorMessage: <nil>; Description: <nil>; ResponseError: invalid character 'H' looking for beginning of value I0716 08:59:19.550599 1 controller_broker.go:427] Updated ready condition for Broker anynines-postgres to False

However, this worked in a previous release (built on alpha-1). I suspect it is no longer passing the authentication details through correctly. Querying the service catalog for the broker status returns:

{
  "metadata": {
    "name": "anynines-postgres",
    "selfLink": "/apis/servicecatalog.k8s.io/v1alpha1/brokersanynines-postgres",
    "uid": "74e8215f-69fb-11e7-a06d-0242ac110003",
    "resourceVersion": "45",
    "creationTimestamp": "2017-07-16T07:50:37Z",
    "finalizers": [ "kubernetes-incubator/service-catalog" ]
  },
  "spec": {
    "url": "http://postgresql-service-broker.service.dc1.consul:3000/"
  },
  "status": {
    "conditions": [
      {
        "type": "Ready",
        "status": "False",
        "lastTransitionTime": "2017-07-16T07:50:37Z",
        "reason": "ErrorFetchingCatalog",
        "message": "Error fetching catalog. Error getting broker catalog for broker \"anynines-postgres\": Status: 401; ErrorMessage: \u003cnil\u003e; Description: \u003cnil\u003e; ResponseError: invalid character 'H' looking for beginning of value"
      }

Notice the spec contains no authentication details. On a previous release version, however, I get the following, which does correctly contain a reference to the authentication details.

{
  "metadata": {
    "name": "anynines-postgres",
    "selfLink": "/apis/servicecatalog.k8s.io/v1alpha1/brokersanynines-postgres",
    "uid": "e32c6982-6877-11e7-9e6a-0242ac110002",
    "resourceVersion": "28",
    "creationTimestamp": "2017-07-14T09:36:17Z",
    "finalizers": [ "kubernetes" ]
  },
  "spec": {
    "url": "http://postgresql-service-broker.service.dc1.consul:3000/",
    "authSecret": {
      "namespace": "openshift",
      "name": "anynines-secret"
    }
  },
  "status": {
    "conditions": [
      {
        "type": "Ready",
        "status": "True",
        "lastTransitionTime": "2017-07-14T09:36:18Z",
        "reason": "FetchedCatalog",
        "message": "Successfully fetched catalog entries from broker."
      }
    ]
  }
}

Any help would be much appreciated.

ERROR! the playbook: /setup_local_environment.yml could not be found

OS: Ubuntu 16.0
OpenShift setup: oc cluster up (using v1.5.0)

When I execute run_setup_local.sh, I get the following error.

ERROR! the playbook: /setup_local_environment.yml could not be found

After export ANS_CODE=/catasb/local/linux/ansible/ that error went away, but then I got the following error.

root@ip-172-31-26-21:~/catasb/local/linux# ./run_setup_local.sh
 [WARNING]: provided hosts list is empty, only localhost is available

Enter your dockerhub username: prasenforu
Enter your dockerhub password:
Enter the dockerhub organization you'd like to pull images from: ansibleplaybookbundle

PLAY [localhost] *******************************************************************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************************************************************
ok: [localhost]

TASK [openshift_setup : set_fact] **************************************************************************************************************************************
skipping: [localhost]

TASK [openshift_setup : set_fact] **************************************************************************************************************************************
skipping: [localhost]

TASK [openshift_setup : set_fact] **************************************************************************************************************************************
ok: [localhost]

TASK [openshift_setup : set_fact] **************************************************************************************************************************************
skipping: [localhost]

TASK [openshift_setup : set_fact] **************************************************************************************************************************************
ok: [localhost]

TASK [openshift_setup : set_fact] **************************************************************************************************************************************
ok: [localhost]

TASK [openshift_setup : set_fact] **************************************************************************************************************************************
ok: [localhost]

TASK [openshift_setup : set_fact] **************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'openshift_build_type' is undefined\n\nThe error appears to have been in '/root/catasb/ansible/roles/openshift_setup/tasks/main.yml': line 31, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n  - set_fact:\n    ^ here\n"}
        to retry, use: --limit @/root/catasb/ansible/setup_local_environment.retry

PLAY RECAP *************************************************************************************************************************************************************
localhost                  : ok=5    changed=0    unreachable=0    failed=1


Failed to run run_setup_local.sh: Error connecting: Error while fetching server API version: ('Connection aborted.', error(13, 'Permission denied'))"

failed: [localhost] (item={u'tag': u'summit', u'img': u'docker.io/ansibleplaybookbundle/ansible-service-broker-apb'}) => {"failed": true, "item": {"img": "docker.io/ansibleplaybookbundle/ansible-service-broker-apb", "tag": "summit"}, "msg": "Error connecting: Error while fetching server API version: ('Connection aborted.', error(13, 'Permission denied'))"}
failed: [localhost] (item={u'tag': u'summit', u'img': u'docker.io/ansibleplaybookbundle/ansible-service-broker-asb'}) => {"failed": true, "item": {"img": "docker.io/ansibleplaybookbundle/ansible-service-broker-asb", "tag": "summit"}, "msg": "Error connecting: Error while fetching server API version: ('Connection aborted.', error(13, 'Permission denied'))"}

local/linux/run_setup_local.sh ends in error waiting for ASB deployment configs

My deployment was on a CentOS 7 VM.

snip
==> default: ok: [localhost] => {
==> default: "msg": [
==> default: "oc v3.9.0-alpha.0+a0adcf4",
==> default: "kubernetes v1.8.1+0d5291c",
==> default: "features: Basic-Auth GSSAPI Kerberos SPNEGO"
==> default: ]
==> default: }
snip
==> default: FAILED - RETRYING: Waiting 10 minutes for ASB deployment configs (3 retries left).

==> default: FAILED - RETRYING: Waiting 10 minutes for ASB deployment configs (2 retries left).

==> default: FAILED - RETRYING: Waiting 10 minutes for ASB deployment configs (1 retries left).

==> default: failed: [localhost] (item=asb) => {"attempts": 60, "changed": true, "cmd": ""/root/bin/oc" get deploymentconfig "asb" -o go-template='{{if eq .spec.replicas .status.availableReplicas}}good{{end}}' | grep 'good'", "delta": "0:00:00.162655", "end": "2017-12-05 01:43:03.310830", "failed": true, "item": "asb", "msg": "non-zero return code", "rc": 1, "start": "2017-12-05 01:43:03.148175", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
==> default: changed: [localhost] => (item=asb-etcd)
==> default: to retry, use: --limit @/home/vagrant/catasb/ansible/setup_local_environment.retry
==> default:
==> default: PLAY RECAP *********************************************************************
==> default: localhost : ok=70 changed=36 unreachable=0 failed=1

Error on macOS with pip install, related to 'six' package and macOS's python: copystat\n os.chflags(dst, st.st_flags)\nOSError: [Errno 1] Operation not permitted: '/tmp/pip-QDYDAr-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info'\n"}

Below is a sample of an issue people may hit on macOS.

TASK [ansible_service_broker_setup : git clone ansible-service-broker] ******************************************************************************************************************
changed: [localhost]

TASK [ansible_service_broker_setup : Install asbcli requirements] ***********************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "cmd": "/usr/local/bin/pip2 install -r /tmp/ansible-service-broker/scripts/asbcli/requirements.txt", "failed": true, "msg": "stdout: Requirement already satisfied: appdirs==1.4.0 in /Library/Python/2.7/site-packages (from -r /tmp/ansible-service-broker/scripts/asbcli/requirements.txt (line 1))\nCollecting packaging==16.8 (from -r /tmp/ansible-service-broker/scripts/asbcli/requirements.txt (line 2))\n Using cached packaging-16.8-py2.py3-none-any.whl\nCollecting pyparsing==2.1.10 (from -r /tmp/ansible-service-broker/scripts/asbcli/requirements.txt (line 3))\n Using cached pyparsing-2.1.10-py2.py3-none-any.whl\nCollecting requests==2.13.0 (from -r /tmp/ansible-service-broker/scripts/asbcli/requirements.txt (line 4))\n Using cached requests-2.13.0-py2.py3-none-any.whl\nCollecting six==1.10.0 (from -r /tmp/ansible-service-broker/scripts/asbcli/requirements.txt (line 5))\n Using cached six-1.10.0-py2.py3-none-any.whl\nInstalling collected packages: six, pyparsing, packaging, requests\n Found existing installation: six 1.4.1\n Uninstalling six-1.4.1:\n\n:stderr: DEPRECATION: Uninstalling a distutils installed project (six) has been deprecated and will be removed in a future version. 
This is due to the fact that uninstalling a distutils project will only partially uninstall the project.\nException:\nTraceback (most recent call last):\n File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/basecommand.py", line 215, in main\n status = self.run(options, args)\n File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/commands/install.py", line 342, in run\n prefix=options.prefix_path,\n File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/req/req_set.py", line 778, in install\n requirement.uninstall(auto_confirm=True)\n File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/req/req_install.py", line 754, in uninstall\n paths_to_remove.remove(auto_confirm)\n File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/req/req_uninstall.py", line 115, in remove\n renames(path, new_path)\n File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/utils/init.py", line 267, in renames\n shutil.move(old, new)\n File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move\n copy2(src, real_dst)\n File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2\n copystat(src, dst)\n File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat\n os.chflags(dst, st.st_flags)\nOSError: [Errno 1] Operation not permitted: '/tmp/pip-QDYDAr-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/six-1.4.1-py2.7.egg-info'\n"}
to retry, use: --limit @/git/fusor/catasb/ansible/reset_mac_environment.retry

The workaround is to reinstall Python from brew rather than using Apple's system Python, which has a conflict with the 'six' package.

To do this:
brew reinstall python
sudo pip install ansible boto boto3

That resolved the issues for me.

Destination /var/lib/origin/openshift.local.config not writable

When running local, the following permission error arises on a fresh system:

TASK [openshift_setup : Add extension script to oc config to talk to svc catalog] ***********************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "failed": true, "msg": "Destination /var/lib/origin/openshift.local.config not writable"}
        to retry, use: --limit @/home/ernelson/cluster/catasb/ansible/setup_local_environment.retry

zeus ran into this his first time as well. I think he got around it by adding become: True to the task, but I'm unsure whether we actually want that long term.
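For reference, a task-level escalation would look roughly like the fragment below. The module and its arguments are illustrative placeholders, not the actual catasb task.

```yaml
# Illustrative only: escalating a single task rather than the whole play.
- name: Add extension script to oc config to talk to svc catalog
  template:
    src: extension.j2                                           # placeholder
    dest: /var/lib/origin/openshift.local.config/extension.js   # placeholder
  become: true
```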

terminate_instance script for ec2 is deleting more than desired

I had several instances provisioned that had a tag consisting of "jwm".
Some of the instances were from the multi-node work, so I had node01-jwm, node02-jwm, master-jwm.

In addition I had a single node, jwm.ec2.dog8code.com.

I ran terminate_instance.sh in the minimal directory, assuming that only the single node, jwm.ec2.dogcode.com, would be terminated, but it looks like all of the nodes whose tag contained 'jwm' were deleted.

We should re-examine the tagging and terminate behavior between single node in ec2 and multi node.
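Whatever the fix, termination should match tags exactly rather than by substring. A toy sketch of the difference, filtering "instance-id tag" lines client-side (the helper name is invented; note that AWS-side filters like `--filters "Name=tag:Name,Values=jwm"` already match exactly, so the observed behavior suggests a substring compare somewhere in the script, which is an assumption):

```shell
# Keep only rows whose tag equals the target exactly, so terminating "jwm"
# no longer catches "node01-jwm" or "master-jwm".
exact_tag_match() {
  target="$1"
  awk -v t="$target" '$2 == t { print $1 }'
}
```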

run_setup_local.sh fails - SeLinux

Possibly related: openshift/openshift-ansible#3303

$ ./run_setup_local.sh
SUDO password:
[WARNING]: Host file not found: /etc/ansible/ec2.py

[WARNING]: provided hosts list is empty, only localhost is available

PLAY [localhost] *****************************************************************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************************************************************
ok: [localhost]

TASK [openshift_setup : set_fact] ************************************************************************************************************************************************************
skipping: [localhost]

TASK [openshift_setup : set_fact] ************************************************************************************************************************************************************
ok: [localhost]

TASK [openshift_setup : set_fact] ************************************************************************************************************************************************************
skipping: [localhost]

TASK [openshift_setup : set_fact] ************************************************************************************************************************************************************
ok: [localhost]

TASK [openshift_setup : set_fact] ************************************************************************************************************************************************************
ok: [localhost]

TASK [openshift_setup : set_fact] ************************************************************************************************************************************************************
ok: [localhost]

TASK [openshift_setup : set_fact] ************************************************************************************************************************************************************
ok: [localhost]

TASK [openshift_setup : set_fact] ************************************************************************************************************************************************************
ok: [localhost]

TASK [openshift_setup : set_fact] ************************************************************************************************************************************************************
ok: [localhost]

TASK [openshift_setup : set_fact] ************************************************************************************************************************************************************
ok: [localhost]

TASK [openshift_setup : file] ****************************************************************************************************************************************************************
ok: [localhost]

TASK [openshift_setup : Download oc binary "https://github.com/openshift/origin/releases/download/v3.6.0-alpha.1/openshift-origin-client-tools-v3.6.0-alpha.1-46942ad-linux-64bit.tar.gz"] ***
ok: [localhost]

TASK [openshift_setup : extract archive] *****************************************************************************************************************************************************
skipping: [localhost]

TASK [openshift_setup : Untar openshift-origin-client-tools-v3.6.0-alpha.1-46942ad-linux-64bit.tar.gz] ***************************************************************************************
skipping: [localhost]

TASK [openshift_setup : Install oc] **********************************************************************************************************************************************************
skipping: [localhost]

TASK [openshift_setup : Install oc] **********************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!"} to retry, use: --limit @/home/rlourenc/workspace/go/src/github.com/catasb/ansible/setup_local_environment.retry
PLAY RECAP ***********************************************************************************************************************************************************************************
localhost : ok=11 changed=0 unreachable=0 failed=1

✘-2 ~/workspace/go/src/github.com/catasb/local/linux [master|✔]
12:46 $ sudo dnf install libselinux*
Last metadata expiration check: 2:27:29 ago on Tue Jun 20 10:19:09 2017.
Package libselinux-devel-2.5-13.fc25.x86_64 is already installed, skipping.
Package libselinux-python-2.5-13.fc25.x86_64 is already installed, skipping.
Package libselinux-python3-2.5-13.fc25.x86_64 is already installed, skipping.
Package libselinux-utils-2.5-13.fc25.x86_64 is already installed, skipping.
Package libselinux-static-2.5-13.fc25.x86_64 is already installed, skipping.
Package libselinux-2.5-13.fc25.x86_64 is already installed, skipping.
Package libselinux-2.5-13.fc25.i686 is already installed, skipping.

12:43 $ cat /etc/redhat-release
Fedora release 25 (Twenty Five)

oc cluster up fails if --insecure-registry is missing

The Docker daemon on the host needs to run with --insecure-registry 172.30.0.0/16; the oc cluster up task fails if the flag is missing.


TASK [openshift_setup : Run oc cluster up] ***************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/home/ernelson/bin/oc cluster up --routing-suffix=172.33.0.1.nip.io --public-hostname=172.33.0.1.nip.io --host-pv-dir=/persistedvolumes --version=latest --image=docker.io/ansibleplaybookbundle/origin --host-config-dir=/var/lib/origin/openshift.local.config --use-existing-config", "delta": "0:00:24.078209", "end": "2017-04-27 17:54:09.664760", "failed": true, "rc": 1, "start": "2017-04-27 17:53:45.586551", "stderr": "", "stderr_lines": [], "stdout": "Starting OpenShift using docker.io/ansibleplaybookbundle/origin:latest ...\nPulling image docker.io/ansibleplaybookbundle/origin:latest\nPulled 3/6 layers, 50% complete\nPulled 4/6 layers, 83% complete\nPulled 5/6 layers, 98% complete\nPulled 6/6 layers, 100% complete\nExtracting\nImage pull complete\n-- Checking OpenShift client ... OK\n-- Checking Docker client ... OK\n-- Checking Docker version ... OK\n-- Checking for existing OpenShift container ... OK\n-- Checking for docker.io/ansibleplaybookbundle/origin:latest image ... \n   Pulling image docker.io/ansibleplaybookbundle/origin:latest\n   Pulled 3/6 layers, 50% complete\n   Pulled 4/6 layers, 83% complete\n   Pulled 5/6 layers, 98% complete\n   Pulled 6/6 layers, 100% complete\n   Extracting\n   Image pull complete\n-- Checking Docker daemon configuration ... FAIL\n   Error: did not detect an --insecure-registry argument on the Docker daemon\n   Solution:\n\n     Ensure that the Docker daemon is running with the following argument:\n     \t--insecure-registry 172.30.0.0/16", "stdout_lines": ["Starting OpenShift using docker.io/ansibleplaybookbundle/origin:latest ...", "Pulling image docker.io/ansibleplaybookbundle/origin:latest", "Pulled 3/6 layers, 50% complete", "Pulled 4/6 layers, 83% complete", "Pulled 5/6 layers, 98% complete", "Pulled 6/6 layers, 100% complete", "Extracting", "Image pull complete", "-- Checking OpenShift client ... OK", "-- Checking Docker client ... 
OK", "-- Checking Docker version ... OK", "-- Checking for existing OpenShift container ... OK", "-- Checking for docker.io/ansibleplaybookbundle/origin:latest image ... ", "   Pulling image docker.io/ansibleplaybookbundle/origin:latest", "   Pulled 3/6 layers, 50% complete", "   Pulled 4/6 layers, 83% complete", "   Pulled 5/6 layers, 98% complete", "   Pulled 6/6 layers, 100% complete", "   Extracting", "   Image pull complete", "-- Checking Docker daemon configuration ... FAIL", "   Error: did not detect an --insecure-registry argument on the Docker daemon", "   Solution:", "", "     Ensure that the Docker daemon is running with the following argument:", "     \t--insecure-registry 172.30.0.0/16"]}
        to retry, use: --limit @/home/ernelson/cluster/catasb/ansible/setup_local_environment.retry
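The change the error message asks for can also be made via Docker's daemon configuration file instead of a command-line flag. A sketch, assuming a stock Linux Docker install (the target path /etc/docker/daemon.json is an assumption):

```shell
# Write a daemon.json fragment declaring the OpenShift service subnet as an
# insecure registry, then validate that it is well-formed JSON. Copying it
# into place and restarting Docker is left commented out.
cat > daemon.json <<'EOF'
{
  "insecure-registries": ["172.30.0.0/16"]
}
EOF
python3 -m json.tool daemon.json >/dev/null && echo "daemon.json OK"
# sudo cp daemon.json /etc/docker/daemon.json
# sudo systemctl restart docker
```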

local installation fails due to a new oc binary version

Running the script on a CentOS 7.5 VM produces the following error:

TASK [openshift_setup : debug] ************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": "Looking at oc cluster up command: '/root/bin/oc cluster up --routing-suffix=192.168.100.90.nip.io --public-hostname=192.168.100.90 --base-dir=/tmp/openshift.local.clusterup --tag=v3.11 --image=docker.io/openshift/origin-\${component}:\${version} --enable=service-catalog,router,registry,web-console,persistent-volumes,sample-templates,rhel-imagestreams,automation-service-broker,template-service-broker'"
}

TASK [openshift_setup : Run oc cluster up to start the cluster] ***************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/root/bin/oc cluster down && /root/bin/oc cluster up --routing-suffix=192.168.100.90.nip.io --public-hostname=192.168.100.90 --base-dir=/tmp/openshift.local.clusterup --tag=v3.11 --image=docker.io/openshift/origin-\${component}:\${version} --enable=service-catalog,router,registry,web-console,persistent-volumes,sample-templates,rhel-imagestreams,automation-service-broker,template-service-broker", "delta": "0:00:00.215353", "end": "2018-10-16 02:41:17.937460", "msg": "non-zero return code", "rc": 1, "start": "2018-10-16 02:41:17.722107", "stderr": "Error: unknown flag: --routing-suffix\n\n\nUsage:\n oc cluster up [flags]\n\nExamples:\n # Start OpenShift using a specific public host name\n oc cluster up --public-hostname=my.address.example.com\n\nOptions:\n --base-dir='': Directory on Docker host for cluster up configuration\n --image='openshift/origin-${component}:${version}': Specify the images to use for OpenShift\n --public-hostname='': Public hostname for OpenShift cluster\n --server-loglevel=0: Log level for OpenShift server\n\nUse "oc options" for a list of global command-line options (applies to all commands).", "stderr_lines": ["Error: unknown flag: --routing-suffix", "", "", "Usage:", " oc cluster up [flags]", "", "Examples:", " # Start OpenShift using a specific public host name", " oc cluster up --public-hostname=my.address.example.com", "", "Options:", " --base-dir='': Directory on Docker host for cluster up configuration", " --image='openshift/origin-${component}:${version}': Specify the images to use for OpenShift", " --public-hostname='': Public hostname for OpenShift cluster", " --server-loglevel=0: Log level for OpenShift server", "", "Use "oc options" for a list of global command-line options (applies to all commands)."], "stdout": "", "stdout_lines": []}
to retry, use: --limit @/root/catasb/ansible/setup_local_environment.retry

PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=43 changed=12 unreachable=0 failed=1

You can see that the oc binary uploaded to https://apb-oc.s3.amazonaws.com/ was changed today (2018-10-16), and its version moved from 3.11-alpha to 4.0-alpha.

[root@localhost ~]# oc version
oc v4.0.0-alpha.0+6f594bd-337
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
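A guard along these lines could fail fast before the playbook runs against a client that dropped the old flags (a sketch; matching on "v3.11" in the version string is an assumption about what the role expects):

```shell
# Check the installed oc client version before running the playbook; the
# 4.0-alpha client removed --routing-suffix and other flags the role passes.
ver="$(oc version 2>/dev/null | head -n 1)"
case "$ver" in
  *v3.11*) msg="oc client looks like v3.11, proceeding" ;;
  "")      msg="oc not found on PATH" ;;
  *)       msg="unexpected oc client version: $ver" ;;
esac
echo "$msg"
```

Pinning a fixed v3.11 client release instead of fetching the rolling "latest" binary would avoid the breakage entirely.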

Support local setup running on Mac environment

Hi,

I created a local directory on my Mac for persistedvolumes and added it to the Docker Preferences.

  1. I set the cluster IP to 127.0.0.1.

  2. When I ran the installation, I got the following error.

What am I missing? If I run oc cluster up directly, the cluster starts normally.

TASK [openshift_setup : debug] ***********************************************************************************************************************************************************************************************************************************************
ok: [localhost] => {
"changed": false,
"msg": "oc_cluster_up_first_run = False, oc_cluster_status.stdout = 'The OpenShift cluster was started 6 minutes ago\n\nWeb console URL: https://localhost:8443\n\nConfig is at host directory /var/lib/origin/openshift.local.config\nVolumes are at host directory /var/lib/origin/openshift.local.volumes\nPersistent volumes are at host directory /persistedvolumes\nData will be discarded when cluster is destroyed'"
}

TASK [openshift_setup : Add extension script to oc config to talk to svc catalog] ********************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "failed": true, "msg": "Destination directory /var/lib/origin/openshift.local.config does not exist"}
to retry, use: --limit @/Users/asanthan/Downloads/openshift-origin-client-tools-v1.5.0-031cbe4-mac/catasb/ansible/setup_local_environment.retry

PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************************
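A likely cause (an assumption: on Docker for Mac, /var/lib/origin exists only inside the Docker VM, so an Ansible task running on the host cannot see it) can be checked with a quick host-side probe:

```shell
# Probe sketch: does the config directory the failing task writes to exist
# on the host filesystem? If it exists only inside the Docker VM, that would
# explain the "Destination directory ... does not exist" failure on macOS.
cfg_dir=/var/lib/origin/openshift.local.config
if [ -d "$cfg_dir" ]; then
  probe="present on host"
else
  probe="missing on host"
fi
echo "$cfg_dir: $probe"
```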

Failed to connect to PostgreSQL after creating the binding

Hi ,

I have successfully provisioned the EC2 instance and everything works, except that when I create the binding, the secret is created with all the PostgreSQL parameters but the application is unable to retrieve it.

Am I missing any steps?

[screenshot attached to the original issue]

cluster up fails on ec2 with latest origin image

If running catasb on EC2 with the latest origin image, it fails to start the cluster with the following error:

TASK [openshift_setup : Login as admin] **************************************************************************************************************
fatal: [34.236.78.118]: FAILED! => {"changed": true, "cmd": "/usr/bin/oc login --insecure-skip-tls-verify -u admin -p admin", "delta": "0:00:00.425830", "end": "2017-09-28 05:08:48.842779", "failed": true, "rc": 1, "start": "2017-09-28 05:08:48.416949", "stderr": "error: x509: certificate signed by unknown authority", "stderr_lines": ["error: x509: certificate signed by unknown authority"], "stdout": "", "stdout_lines": []}

This started happening about 12 hours ago; previous latest builds worked as expected.

In my tests, modifying line 362 of ansible/roles/openshift_setup/tasks/main.yml to add --insecure-skip-tls-verify and the hostname:port fixes it:

shell: "{{ oc_cmd }} --insecure-skip-tls-verify login {{ hostname }}:8443 -u {{ cluster_user }} -p {{ cluster_user_password }}"
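With example values substituted (the hostname and credentials below are placeholders taken from the log above, not real settings), the fixed task renders to a login command like:

```shell
# Render the corrected login command with placeholder values; the Ansible
# task templates {{ oc_cmd }}, {{ hostname }}, {{ cluster_user }} and
# {{ cluster_user_password }} into the same shape.
oc_cmd=/usr/bin/oc
hostname=34.236.78.118
cluster_user=admin
cluster_user_password=admin
login_cmd="$oc_cmd --insecure-skip-tls-verify login ${hostname}:8443 -u $cluster_user -p $cluster_user_password"
echo "$login_cmd"
```

Passing the explicit hostname:port avoids the certificate mismatch that produces the x509 error.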
