
Glance requires an entry for snet endpoint even when not using swift for the image backend.

Issue by cloudnull
Monday Aug 25, 2014 at 20:43 GMT
Originally opened as https://github.com/rcbops/ansible-lxc-rpc-orig/issues/463


When running a new installation, the system attempts to lay down a hosts entry for a service_net endpoint even when using a file backend. The desired outcome is that all swift-related options are optional and are only parsed when the image backend is set to swift.

The task that is attempting to execute:
https://github.com/rcbops/ansible-lxc-rpc/blob/ead8d1d698aa2820f53a20c753f4f6db03e44b64/rpc_deployment/roles/glance_snet_override/tasks/main.yml

Valid Endpoints:
https://github.com/rcbops/ansible-lxc-rpc/blob/ead8d1d698aa2820f53a20c753f4f6db03e44b64/rpc_deployment/inventory/group_vars/glance_all.yml#L98-L116

Glance Options:
https://github.com/rcbops/ansible-lxc-rpc/blob/master/etc/rpc_deploy/user_variables.yml#L56-L65

  • Running the code results in an error whether snet is true or false:

glance_swift_enable_snet: false
TASK: [glance_snet_override | Remove hosts entry if glance_swift_enable_snet is False] ***
fatal: [infra1_glance_container-dbd3a387] => One or more undefined variables: 'dict object' has no attribute 'SomeRegion'

FATAL: all hosts have already failed -- aborting

glance_swift_enable_snet: true

TASK: [glance_snet_override | Add hosts entry if glance_swift_enable_snet is True] ***
fatal: [infra1_glance_container-dbd3a387] => One or more undefined variables: 'dict object' has no attribute 'SomeRegion'

FATAL: all hosts have already failed -- aborting

Related Issue: rcbops/ansible-lxc-rpc#260
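
A hedged sketch of the desired behavior: guard the snet override tasks with a condition on the configured image store, so they are skipped entirely for a file backend (the variable names and task body here are illustrative, not taken from the repo):

```yaml
# Hypothetical sketch: only touch /etc/hosts when swift is actually the
# glance backend. "glance_default_store", "swift_snet_ip" and
# "swift_snet_hostname" are assumed names for illustration.
- name: Add hosts entry if glance_swift_enable_snet is True
  lineinfile:
    dest: /etc/hosts
    line: "{{ swift_snet_ip }} {{ swift_snet_hostname }}"
    state: present
  when:
    - glance_default_store == 'swift'
    - glance_swift_enable_snet | bool
```

With both conditions on the task, a file-backend deployment never evaluates the swift endpoint lookup, so the undefined-variable failure above cannot occur.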

Pinned Repo broken for linux-image-extra-virtual

Issue by johnmarkschofield
Tuesday Aug 19, 2014 at 17:26 GMT
Originally opened as https://github.com/rcbops/ansible-lxc-rpc-orig/issues/415


The vhost_net kernel module is installed by default in the Ubuntu Trusty image on mycloud.rackspace.com. In the VagrantCloud Trusty image and the official Ubuntu image, that kernel module is not present.

The fix is to install the linux-image-extra package, which includes the vhost_net module. To avoid specifying kernel versions in my scripts, I install linux-image-extra-virtual.

With the pinned repo, I am unable to install any of the linux-image-extra-* packages. With a standard ubuntu repo, I can.

So I believe we have mismatches between different kernel packages present in the repo. All need to be updated to a consistent version.
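
The fix described above could be sketched as an Ansible task (the package name is from the issue; the task structure itself is illustrative):

```yaml
# Sketch: install the metapackage so the vhost_net module is available
# regardless of the running kernel version.
- name: Install extra kernel modules (provides vhost_net)
  apt:
    pkg: linux-image-extra-virtual
    state: present
```

This only works once the pinned repo carries linux-image-extra-* packages matching the kernel version it ships.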

[Cinder] iSCSI doesn't require running nova-compute outside of a container after all!

Issue by Apsu
Wednesday Aug 06, 2014 at 16:13 GMT
Originally opened as https://github.com/rcbops/ansible-lxc-rpc-orig/issues/314


After lots of digging around and reading kernel code, trying to figure out how to fix iscsitarget's crackheadedness, I discovered that there's another iscsi targeting system built into recent kernels such as 3.13. If we load the scsi_transport_iscsi module, tgtd can talk to it from inside of a container with no problem! The initiator still only needs the iscsi_tcp module.

I made a crappy asciinema recording to demonstrate here: https://asciinema.org/a/11317

We should be able to just add the scsi_transport_iscsi module to cinder's module list and turn is_metal back off by default.
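
Adding the module to cinder's list might look like the following sketch (the task structure is illustrative, not the repo's actual task):

```yaml
# Sketch: load the in-kernel iSCSI target transport on cinder hosts so
# tgtd can talk to it from inside a container.
- name: Load scsi_transport_iscsi module
  modprobe:
    name: scsi_transport_iscsi
    state: present
```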

Rabbit FQDN Issues returning

Issue by andymcc
Friday Aug 29, 2014 at 14:29 GMT
Originally opened as https://github.com/rcbops/ansible-lxc-rpc-orig/issues/486


Rabbit sometimes fails to start because it can't connect to itself (when using an FQDN):

TASK: [rabbit_common | Install rabbit packages] *******************************
failed: [node12.domain.com_rabbit_mq_container-48e7bee8] => (item=rabbitmq-server)

Since the package install also starts the service, this fails; the logs show the following:

rabbit@node12:

  • unable to connect to epmd (port 4369) on node12: address (cannot connect to host/port)

hosts shows the following:
root@node12:~# cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 node12.domain.com_rabbit_mq_container-48e7bee8

Moving the "Fix /etc/hosts" task in rabbit_common to run before the package install will fix this.
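
The proposed ordering could be sketched as follows (task names and variables are illustrative):

```yaml
# Sketch: fix /etc/hosts *before* installing the package, since the
# install starts the rabbitmq-server service, which needs to resolve
# the node's short hostname.
- name: Fix /etc/hosts
  lineinfile:
    dest: /etc/hosts
    regexp: "^127.0.1.1"
    line: "127.0.1.1 {{ ansible_hostname }}"

- name: Install rabbit packages
  apt:
    pkg: rabbitmq-server
    state: present
```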

README out of date

Issue by jcourtois
Monday Aug 18, 2014 at 15:35 GMT
Originally opened as https://github.com/rcbops/ansible-lxc-rpc-orig/issues/398


The README hasn't been updated for about a month. Since then, at least one container (nova-spice-console) has been added, another has been removed (nova-compute) and a number of other changes have been made at least some of which are probably material.

Can someone give the README a close reading and edit it for accuracy and usability?

Some containers are using unconfined apparmor profiles

Issue by Apsu
Monday Jul 28, 2014 at 16:18 GMT
Originally opened as https://github.com/rcbops/ansible-lxc-rpc-orig/issues/239


Some containers, such as neutron-agents, nova-compute and cinder-volumes are disabling apparmor by setting their profile to "unconfined". This is primarily due to us failing to figure out the right way to provide access to all the host resources and capabilities ( https://www.kernel.org/pub/linux/libs/security/linux-privs/kernel-2.2/capfaq-0.2.txt ) required to do the needful.

We should figure out how to use apparmor correctly so we can at least make a passable attempt at locking these containers down as much as possible, in line with the rest of the cluster containers.
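
One possible direction, sketched as an Ansible task (the profile name "lxc-openstack" is hypothetical, as is the task itself):

```yaml
# Sketch: replace "unconfined" with a dedicated, permissive-but-confined
# profile in the container config. The profile would still need to be
# written and loaded separately.
- name: Set a confined apparmor profile for the container
  lineinfile:
    dest: "/var/lib/lxc/{{ container_name }}/config"
    regexp: "^lxc.aa_profile"
    line: "lxc.aa_profile = lxc-openstack"
```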

All Package repos need to be replaced with the frozen repo

Issue by cloudnull
Monday Aug 25, 2014 at 14:58 GMT
Originally opened as https://github.com/rcbops/ansible-lxc-rpc-orig/issues/458


The repo URL needs to be updated to use the CNAME; we presently have:

These repos need to use the rpc_repo_url variable:

Related Issues:
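
A hedged sketch of how a single rpc_repo_url variable could drive the apt source (the "rpc_repo_release" variable and values are illustrative):

```yaml
# Sketch: render every apt source from one variable so all repo
# references go through the CNAME instead of a hard-coded CDN URL.
- name: Point apt at the frozen repo
  apt_repository:
    repo: "deb [arch=amd64] {{ rpc_repo_url }} {{ rpc_repo_release }} main"
    state: present
```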

All git repos should be set to a TAG or SHA.

Issue by cloudnull
Monday Aug 25, 2014 at 15:06 GMT
Originally opened as https://github.com/rcbops/ansible-lxc-rpc-orig/issues/459


All of the git repos we use need to be pinned to a tag or SHA so that we never install anything unexpected from source. Additionally, we should create a tarball of all sources we are installing and add them to our frozen repo. Installing from tarballs and falling back to git repos only when needed will not only speed up installations (clones can take a long time and vary due to uncontrollable network conditions) but should also give us a mechanism for moving into a CDC environment. IMO, having the repo contain a source directory holding our tarred-up git repos, from which software is downloaded and installed, is the first step toward fixing the CDC situation where no internet access is available.

Here are all of the git repos we have and the respective branches that need to be stabilized:

Cinder:

Cinder Git Branch:

  • inventory/group_vars/cinder_all.yml:65:git_install_branch: stable/icehouse

Glance:

Glance Git Branch:

  • inventory/group_vars/glance_all.yml:75:git_install_branch: stable/icehouse

Heat:

Heat Git Branch:
  • inventory/group_vars/heat_all.yml:70:git_install_branch: stable/icehouse

Horizon:

Horizon Git Branch:

  • inventory/group_vars/horizon.yml:46:git_install_branch: stable/icehouse

Keystone:

Keystone Git Branch:

  • inventory/group_vars/keystone_all.yml:60:git_install_branch: stable/icehouse

Neutron:

Neutron Git Branch:

  • inventory/group_vars/neutron_all.yml:83:git_install_branch: stable/icehouse

Nova:

Nova Git Branch:

  • inventory/group_vars/nova_all.yml:79:git_install_branch: stable/icehouse

RaxMon:

RaxMon Git Branch:

  • etc/rpc_deploy/user_variables.yml:112:maas_repo_version: master

Holland:

Holland Git Branch:

  • playbooks/rpc_support.yml:34: holland_release: "{{ rpc_support_holland_branch|default('v1.0.10') }}"
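
The change for each group_vars file above would amount to a one-line pin; a sketch, with a placeholder value (2014.1.2 shown as an example of an icehouse-series release tag, not a verified commit):

```yaml
# Sketch: pin to a release tag or exact SHA instead of a moving branch,
# so "stable/icehouse" can no longer drift between deploys.
git_install_branch: 2014.1.2
```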

Glance image replication, or other such mechanisms

Issue by byronmccollum
Thursday Aug 28, 2014 at 03:20 GMT
Originally opened as https://github.com/rcbops/ansible-lxc-rpc-orig/issues/484


Glance image replicator redux...

Registering an image lands the bits on only one infra node. Spawning an instance from that image can cause the nova scheduler to retry multiple times until the image fetch call goes to the infra node containing the image. If scheduler retries >= number of infra nodes, this should eventually succeed (after unnecessary delay and an artificially lumpy compute distribution), but with any amount of load it's quite possible that all scheduler retries go to infra nodes without the image.

So, are there plans to reintroduce the glance image replicator, or some other such mechanism? Or is the preferred/recommended configuration to use Swift if you have more than one infra node?

Time out on initial Network check is probably too long

Issue by andymcc
Friday Aug 29, 2014 at 13:20 GMT
Originally opened as https://github.com/rcbops/ansible-lxc-rpc-orig/issues/485


The initial timeout check is 60 seconds, which seems too long; we could do this at 10 seconds, since it exists only to determine whether we should reboot the container. The post-reboot check can remain longer (as it is, 60 seconds).

This adds a lot of time to the initial run, and if the network is already up before we even restart it (e.g. it was set up by a previous run), it won't take 60 seconds to confirm.
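
The shortened pre-restart check might look like this sketch (host/port values are illustrative):

```yaml
# Sketch: a quick 10-second connectivity probe before deciding whether
# to restart the container; the post-reboot check keeps its 60 seconds.
- name: Check for container connectivity
  wait_for:
    host: "{{ ansible_ssh_host }}"
    port: 22
    timeout: 10
  ignore_errors: true
  register: net_check
```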

dnsmasq package not available in pinned apt repo

Issue by johnmarkschofield
Wednesday Aug 20, 2014 at 16:36 GMT
Originally opened as https://github.com/rcbops/ansible-lxc-rpc-orig/issues/426


When using the pinned apt repo:

root@schof-aio:~# cat /etc/apt/sources.list
deb [arch=amd64] http://dc0e2a2ef0676c3453b1-31bb9324d3aeab0d08fa434012c1e64d.r5.cf1.rackcdn.com LA main
root@schof-aio:~#

The dnsmasq package is not available:

root@schof-aio:~# apt-get update
Hit http://mirror.jmu.edu trusty InRelease
Hit http://mirror.jmu.edu trusty/main amd64 Packages
Hit http://mirror.jmu.edu trusty/main i386 Packages
Hit http://dc0e2a2ef0676c3453b1-31bb9324d3aeab0d08fa434012c1e64d.r5.cf1.rackcdn.com LA InRelease
Ign http://mirror.jmu.edu trusty/main Translation-en_US
Ign http://mirror.jmu.edu trusty/main Translation-en
Hit http://dc0e2a2ef0676c3453b1-31bb9324d3aeab0d08fa434012c1e64d.r5.cf1.rackcdn.com LA/main amd64 Packages
Ign http://dc0e2a2ef0676c3453b1-31bb9324d3aeab0d08fa434012c1e64d.r5.cf1.rackcdn.com LA/main Translation-en_US
Ign http://dc0e2a2ef0676c3453b1-31bb9324d3aeab0d08fa434012c1e64d.r5.cf1.rackcdn.com LA/main Translation-en
Reading package lists... Done
root@schof-aio:~# apt-get install dnsmasq
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package dnsmasq is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
  dnsmasq-base

E: Package 'dnsmasq' has no installation candidate
root@schof-aio:~#

This causes the openstack-common.yml playbook to fail:

TASK: [container_common | Ensure container packages are installed] ************
<10.51.50.1> ESTABLISH CONNECTION FOR USER: root
<10.51.50.1> REMOTE_MODULE apt pkg=libpq-dev,dnsmasq,dnsmasq-utils state=present
<10.51.50.1> EXEC ['ssh', '-C', '-vvv', '-o', 'ControlMaster=auto', '-o', 'ControlPersist=60s', '-o', 'ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r', '-o', 'StrictHostKeyChecking=no', '-o', 'Port=22', '-o', 'KbdInteractiveAuthentication=no', '-o', 'PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey', '-o', 'PasswordAuthentication=no', '-o', 'ConnectTimeout=10', u'10.51.50.1', u"/bin/sh -c 'LC_CTYPE=en_US.UTF-8 LANG=en_US.UTF-8 /usr/bin/python'"]
failed: [infra1] => (item=libpq-dev,dnsmasq,dnsmasq-utils) => {"failed": true, "item": "libpq-dev,dnsmasq,dnsmasq-utils"}
msg: No package matching 'dnsmasq' is available

FATAL: all hosts have already failed -- aborting

PLAY RECAP ********************************************************************
           to retry, use: --limit @/root/openstack-common.retry

infra1                     : ok=21   changed=3    unreachable=0    failed=1

OpenStackInstaller: 2014-08-19 20:00:33,021 - CRITICAL: Failed running playbook 'playbooks/openstack/openstack-common.yml' 3 times. Aborting...

This is a release-blocker.

Should be able to limit setup plays to a set of hosts or containers

Issue by hughsaunders
Friday Aug 22, 2014 at 11:38 GMT
Originally opened as https://github.com/rcbops/ansible-lxc-rpc-orig/issues/449


Currently the destroy-containers.yml has variables host_group and container_group that can be supplied with -e to limit the containers that will be removed.

It would be useful if all the playbooks directly included by host-setup.yml followed the same convention, so that, for example, all the galera containers could be rebuilt, or all the galera containers on a single node.

Playbooks included by host-setup.yml:

  • include: setup-common.yml #could use host_group
  • include: build-containers.yml #already uses host_group, could use container_group
  • include: restart-containers.yml #already uses host_group, could use container_group
  • include: host-common.yml #could use host_group
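
The convention the issue proposes can be sketched as follows (group names are illustrative): each included play targets an overridable group, so `-e host_group=...` (or `container_group=...`) limits the run without editing the playbook.

```yaml
# Sketch: default to the full group, but let -e host_group=galera_hosts
# (for example) narrow the target set at run time.
- hosts: "{{ host_group | default('hosts') }}"
  tasks:
    - debug:
        msg: "runs only on the supplied host_group"
```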
