

Kubernetes Collection for Ansible

IMPORTANT The community.kubernetes collection is being renamed to kubernetes.core. As of version 2.0.0, the collection consists solely of deprecated redirects that point all of its content to kubernetes.core. If you are using FQCNs starting with community.kubernetes, please update them to kubernetes.core.
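For example, a task written against the old FQCN maps one-to-one onto the new namespace:

    # Before (now a deprecated redirect):
    - name: Get all Pods in the default Namespace.
      community.kubernetes.k8s_info:
        kind: Pod
        namespace: default

    # After:
    - name: Get all Pods in the default Namespace.
      kubernetes.core.k8s_info:
        kind: Pod
        namespace: default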

This repo hosts the community.kubernetes (a.k.a. kubernetes.core) Ansible Collection.

The collection includes a variety of Ansible content to help automate the management of applications in Kubernetes and OpenShift clusters, as well as the provisioning and maintenance of clusters themselves.

Installation and Usage

Installing the Collection from Ansible Galaxy

Before using the Kubernetes collection, you need to install it with the Ansible Galaxy CLI:

ansible-galaxy collection install community.kubernetes

You can also include it in a requirements.yml file and install it via ansible-galaxy collection install -r requirements.yml, using the format:

---
collections:
  - name: community.kubernetes
    version: 2.0.1

Installing the OpenShift Python Library

Content in this collection requires the OpenShift Python client to interact with Kubernetes' APIs. You can install it with:

pip3 install openshift
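If you prefer to manage that dependency from a playbook rather than the shell, a small sketch using Ansible's pip module (assuming pip itself is already available on the host) does the same thing:

    - name: Ensure the OpenShift Python client is installed.
      pip:
        name: openshift
        state: present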

Using modules from the Kubernetes Collection in your playbooks

It's preferable to use content from this collection by its Fully Qualified Collection Name (FQCN), for example community.kubernetes.k8s_info:

---
- hosts: localhost
  gather_facts: false
  connection: local

  tasks:
    - name: Ensure the myapp Namespace exists.
      community.kubernetes.k8s:
        api_version: v1
        kind: Namespace
        name: myapp
        state: present

    - name: Ensure the myapp Service exists in the myapp Namespace.
      community.kubernetes.k8s:
        state: present
        definition:
          apiVersion: v1
          kind: Service
          metadata:
            name: myapp
            namespace: myapp
          spec:
            type: LoadBalancer
            ports:
            - port: 8080
              targetPort: 8080
            selector:
              app: myapp

    - name: Get a list of all Services in the myapp namespace.
      community.kubernetes.k8s_info:
        kind: Service
        namespace: myapp
      register: myapp_services

    - name: Display number of Services in the myapp namespace.
      debug:
        var: myapp_services.resources | count

If you are upgrading older playbooks that were written before Ansible 2.10 and this collection's existence, you can also define collections in your play and refer to this collection's modules by their short names, as you did in Ansible 2.9 and below:

---
- hosts: localhost
  gather_facts: false
  connection: local

  collections:
    - community.kubernetes

  tasks:
    - name: Ensure the myapp Namespace exists.
      k8s:
        api_version: v1
        kind: Namespace
        name: myapp
        state: present

For documentation on how to use individual modules and other content included in this collection, please see the links in the 'Included content' section earlier in this README.

Testing and Development

If you want to develop new content for this collection or improve what's already here, the easiest way to work on the collection is to clone it into one of the configured COLLECTIONS_PATHS, and work on it there.

See Contributing to community.kubernetes.

Testing with ansible-test

The tests directory contains configuration for running sanity and integration tests using ansible-test.

You can run the collection's test suites with the commands:

make test-sanity
make test-integration

Testing with molecule

There are also integration tests in the molecule directory which are meant to be run against a local Kubernetes cluster, e.g. one created with KinD or Minikube. To set up a local cluster using KinD and run Molecule:

kind create cluster
make test-molecule

Publishing New Versions

Releases are automatically built and pushed to Ansible Galaxy for any new tag. Before tagging a release, make sure to do the following:

  1. Update the version in the following places:
    1. The version in galaxy.yml
    2. This README's requirements.yml example
    3. The DOWNSTREAM_VERSION in utils/downstream.sh
    4. The VERSION in Makefile
  2. Update the CHANGELOG:
    1. Make sure you have antsibull-changelog installed.
    2. Make sure there are fragments for all known changes in changelogs/fragments (see the example fragment after this list).
    3. Run antsibull-changelog release.
  3. Commit the changes and open a PR. Wait for tests to pass, then merge it.
  4. Tag the version in Git and push to GitHub.
  5. Manually build and release the kubernetes.core collection (see following section).
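For reference, a changelog fragment (step 2.2) is a small YAML file placed in changelogs/fragments; a minimal illustrative example (the file name and entries below are made up):

# changelogs/fragments/add-example-feature.yml
minor_changes:
  - k8s - illustrative entry describing a new option or behaviour.
bugfixes:
  - k8s_info - illustrative entry describing a fix.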

After the version is published, verify it exists on the Kubernetes Collection Galaxy page.

Publishing kubernetes.core

Until the contents of this repository are moved into a new kubernetes.core repository on GitHub, this repository is the source of both the kubernetes.core and community.kubernetes collections on Ansible Galaxy.

To publish the kubernetes.core collection on Ansible Galaxy, do the following:

  1. Run make downstream-release (on macOS, add LC_ALL=C before the command).

The process for uploading a supported release to Automation Hub is documented separately.

More Information

For more information about Ansible's Kubernetes integration, join the #ansible-kubernetes channel on irc.libera.chat, and browse the resources in the Kubernetes Working Group Community wiki page.

License

GNU General Public License v3.0 or later

See LICENSE for the full text.


community.kubernetes's Issues

Add roles to install Kubernetes

SUMMARY

It would be interesting if this collection also included roles and playbook(s) for installing a cluster.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

roles

ADDITIONAL INFORMATION

To have a community supported and well structured set of roles to install Kubernetes with Ansible would be a good starting point for people with Ansible knowledge to create a Kubernetes cluster.
The initial effort could be to have a few simple roles to set up a cluster with very few options, but the collection could easily grow to support different scenarios.
Internally the roles could use the existing Ansible modules in the collection, where applicable.

---
- hosts: kubernetes_nodes
  collections:
    - community.kubernetes
  roles:
    - role: kubernetes_node

- hosts: kubernetes_masters
  collections:
    - community.kubernetes
  roles:
    - role: etcd
    - role: kubernetes_master

Migrate doc_fragments into collection

Open CVE against the kubernetes connection plugin

SUMMARY

There is an unembargoed security issue with the kubectl connection plugin:

https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2020-1753

The kubectl connection plugin has two parameters that hold secret data: the password and token options.

In addition, the kubectl_extra_args argument could be passed a password and token: https://github.com/ansible-collections/kubernetes/blob/master/plugins/connection/kubectl.py#L63

All three of these parameters are passed to the kubectl command on the command line. Any user on the system can then read the secret information with a simple call to ps, since command-line arguments are not hidden from other users.

ISSUE TYPE
  • Bug Report
COMPONENT NAME
  • lib/ansible/plugins/connection/kubectl.py
ANSIBLE VERSION
All
CONFIGURATION
N/A
OS / ENVIRONMENT

N/A

ADDITIONAL INFORMATION

There appear to be a couple of ways to fix this:

  • The password and token appear to be settable in the configuration file (see https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-set-credentials-em-), so you can put them into a configuration file instead of handing them over on the command line. (Note: do not run that kubectl command from the plugin either, as it would just shift the problem from one command-line invocation to another; use it only to see where the token and password land in the configuration.) Make sure the configuration file is mode 0600 so that it closes the hole.

    • The kubectl connection plugin does take a config file parameter already so you will have to work out whether you can pass kubectl two config files or whether you'll have to create a temporary config file which merges the user specified one with the password or token.

    • Also be sure to delete the temporary config file afterwards. If Ansible is interrupted with kill -9, the config file could be left lying around. However, since Unix permissions protect the file but do not protect the process list, reading the config file would require a privileged account on the box rather than just a normal user account, so this approach is still preferable.

  • Remove the token and password options. I hear that you are planning to release a backwards incompatible version of the collection. This is the ideal time to get rid of the CVE by removing these options. People who need to use token or password can put them into their own config file so they can still handle those use cases, just without the vulnerability.

Both of the above only address the specific password and token fields. As noted above, the extra-args field can also contain the password and token. To close that hole you should add code to scan the extra args for password and token command-line options. If found, the connection plugin should either error (my preference if you are making a backwards incompatible release) or warn that using password and token in extra-args is insecure.
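As a hedged illustration of the config-file approach (the file paths and names below are hypothetical), the credentials can live in a kubeconfig file with restrictive permissions and be referenced by path instead of being passed on the command line; the same idea applies to the connection plugin's existing config-file parameter mentioned above, shown here with the k8s_info module's kubeconfig parameter:

    - name: Write the kubeconfig that holds the credentials; mode 0600 keeps other users out.
      copy:
        src: files/secure-kubeconfig.yml      # hypothetical source file
        dest: /home/deploy/.kube/ci-config    # hypothetical destination
        mode: '0600'

    - name: Reference the kubeconfig by path instead of passing password/token as options.
      k8s_info:
        kind: Pod
        namespace: default
        kubeconfig: /home/deploy/.kube/ci-config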

Rename repo to community.kubernetes

SUMMARY

Hi,
To be more consistent with the other repos under https://github.com/ansible-collections/, I'd like to rename this GitHub repo to community.kubernetes

  • Old URLs for issues and PRs will still work
  • Old git URLs will work, though it will be good to update them locally
  • We will need to update the URLs in galaxy.yml

Are you happy with this?

ISSUE TYPE
  • Bug Report

Publish new collection versions from git tags automatically

SUMMARY

Right now the Kubernetes collection release process is fully manual, and therefore prone to error. We've already hit our first minor bump in #29; having a fully automated process should make releases more foolproof and also lighten the load on collection maintainers.

Using a process similar to the one outlined in this post (Automatically building and publishing Ansible Galaxy Collections), we can set up a GitHub Actions workflow called build.yml that would run only on tags, and it would build and publish the collection at the same version as the new tag.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

CI

ADDITIONAL INFORMATION

There are a couple of things that I'm slightly worried about with regard to permissions and an Ansible Galaxy token:

Until those issues are resolved, the only way to do automated publishing of new release tags would be to have one of the people with access to the Galaxy community namespace/org add their galaxy token to GitHub Actions as a secret... and doing that would expose that user to potential impersonation attacks on Galaxy.
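A minimal sketch of such a tag-triggered workflow, assuming a GALAXY_API_KEY repository secret (all names and steps here are illustrative, not a finalized workflow):

# .github/workflows/build.yml
name: Publish collection
on:
  push:
    tags:
      - '*'
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install Ansible
        run: pip3 install ansible
      - name: Build and publish the collection at the tagged version
        run: |
          ansible-galaxy collection build
          ansible-galaxy collection publish ./*.tar.gz --api-key "${{ secrets.GALAXY_API_KEY }}"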

Allow templates to be passed directly through to k8s module

SUMMARY

Frequently, when using the k8s module to apply a Kubernetes manifest, I have a manifest template with one or more resource definitions that I would like to have templated via Ansible/Jinja.

The current way I do this is:

    - name: Create app resources.
      k8s:
        definition: '{{ item }}'
        state: present
      loop:
        - "{{ lookup('template', 'templates/app.yml.j2') | from_yaml_all | list }}"

It seems like it would be much simpler to have a template option for k8s which would use Ansible's Templar (only available if the template is being passed through on localhost, I guess, since it can't be templated if running as a module on the remote host (or can it?)) to parse and template the manifest file:

    - name: Create app resources.
      k8s:
        template: templates/app.yml.j2
        state: present

This feature request was originally posted in the ansible/ansible repo here: ansible/ansible#60134

That issue's original description (from @chris-short):

I think it'd be beneficial for the k8s module (and subsequently operator-sdk) to avoid using Jinja2 templating when possible. In a nutshell, I'd like a cleaner, simpler way to invoke templating of k8s YAML files as they're deployed via Ansible. Example of the current state: https://github.com/geerlingguy/mcrouter-operator/blob/a3babecae85ece431f5cf316bd8a407ba60a5b42/roles/mcrouter/tasks/main.yml

It's 431 bytes in total, I get that. It's not a lot of Jinja2, I get that too. But Jinja2 is another thing to add to the pile of things to learn before you can harness it, and removing that barrier will be beneficial. The effort of learning Jinja2 for this use case (globally) far outstrips our effort to help solve this problem on behalf of our users.

The overarching goal is to minimize the amount of Jinja2 needed in playbooks to make this work.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

k8s

ADDITIONAL INFORMATION

Please see the earlier discussion in ansible/ansible#60134 for more background (especially why this could be challenging to implement, or involve a few caveats).

Seamless handling of ansible vaulted files in K8s modules

SUMMARY

The kubeconfig parameter in the k8s module (and all others like it) should seamlessly detect and decrypt ansible-vault'ed files.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

module_utils/common.py (This seems to be where the underlying logic is handled.)

ADDITIONAL INFORMATION

If you are storing a kubeconfig file in a repo -- maybe you are working with multiple clusters, or perhaps it's for a service account -- whatever the reason, you don't want to store it in plain text where anyone can read the credentials and use them to control the associated cluster.

Using ansible-vault rectifies this by encrypting the credentials away from prying eyes so only those with the passphrase can use what's there.

Unfortunately I found out the hard way that the kubeconfig parameter in the k8s module (and really many others) cannot work with ansible-vault encrypted files.

Here is what I had to do to work around not having that feature:

    - name: prepare api access config
      template:
        src: kubeconfig.j2
        dest: kubeconfig.tmp
        mode: '0400'
    - name: define cache service resource
      k8s:
        state: "{{ cache_service_state | default(omit) }}"
        definition: "{{ lookup('template', 'k8s-cache-service-def.yaml.j2') | from_yaml }}"
        kubeconfig: kubeconfig.tmp
    - name: remove kubeconfig.tmp
      file:
        path: kubeconfig.tmp   # delete the temporary decrypted kubeconfig
        state: absent

Where kubeconfig.j2 simply contained this:

{{ lookup('file', k8s_kubeconfig_src) }}

This caused Ansible to decrypt the vaulted kubeconfig file and write it out to a temp file that could be read by the k8s module to issue its API calls. It's flawed because, for a brief moment, the credential is exposed on disk, though only the user running the playbook can read that temp file.

Ideally I would have liked to just do something like:

    - name: define cache service resource
      k8s:
        state: "{{ cache_service_state | default(omit) }}"
        definition: "{{ lookup('template', 'k8s-cache-service-def.yaml.j2') | from_yaml }}"
        kubeconfig: "{{ k8s_kubeconfig_src }}"

Where Ansible would read the kubeconfig file, detect that it's vault-encrypted, decrypt it in memory, and then proceed to execute API calls to the cluster with those credentials. No temp file or one-line template needed.

Add k8s_event module to collection

SUMMARY

A k8s_event module would be nice to have in certain circumstances. If you want to be able to add events attached to certain resources, this module could help you do that.

See: https://github.com/operator-framework/operator-metering/blob/master/images/metering-ansible-operator/roles/meteringconfig/library/k8s_event.py

ISSUE TYPE
  • Feature Idea
ADDITIONAL INFORMATION

Example task:

- name: Create Kubernetes Event
  k8s_event:
    state: present
    name: test-https-emily109
    namespace: default
    message: Event created
    reason: Created
    reportingComponent: Reporting components
    type: Normal
    source:
      component: Metering components
    involvedObject:
      apiVersion: v1
      kind: Service
      name: test-https-emily107
      namespace: default

Figure out what to do with OpenShift tests

SUMMARY

Currently, the tests run by ansible-test (e.g. ansible-test integration --docker -v --color) are running on OpenShift 3.9.0 (openshift/origin:v3.9.0), which has been out of support for some time.

The current release of OpenShift 3 is 3.11, though the Docker Hub image for that version is 2 years old (https://hub.docker.com/r/openshift/origin/tags), and also uses Kubernetes 1.11 as a base, which has not been supported upstream for some time either (see the K8s version skew policy).

It would be good to use a supported image for the CI environment's k8s cluster testing. For Operator SDK there's bsycorp/kind (see tags), and there's also the official SIG kind, which I've successfully used on other Ansible testing projects (example).

Ideally, we would have something from CRC or OKD that's equivalent to the single-container approach in openshift/origin, but I'm not sure if there's any timeline for something CI/local-friendly for OpenShift 4.x.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

ansible-test / CI tests

Find a home for new module documentation

SUMMARY

We have two new modules in this collection as of today:

These modules have appropriate documentation with examples, but currently there is nowhere public that the docs are displayed. For the other existing modules, which were included in Ansible 2.9 and earlier, we link in the README file to the Ansible module documentation.

For this issue, we need to figure out:

  • Will documentation be displayed in the collection UI in Ansible Galaxy?
  • When will docs in the main Ansible documentation go away? (If ever)
  • Should we move the links in the README to somewhere else? (right now k8s_exec and k8s_log link to their actual module python files, which are not very readable)
ISSUE TYPE
  • Documentation
ADDITIONAL INFORMATION

Related issues:

CI integration test started failing last night with missing 'pip' module

SUMMARY

Example build: https://github.com/ansible-collections/kubernetes/runs/474129151#step:5:401

This error occurs during the CI 'integration' workflow:

ERROR! couldn't resolve module/action 'pip'. This often indicates a misspelling, missing collection, or incorrect module path.

The error appears to be in '/root/ansible/ansible_collections/community/kubernetes/tests/output/.tmp/integration/kubernetes-jcg040db-ÅÑŚÌβŁÈ/tests/integration/targets/kubernetes/tasks/main.yml': line 11, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- pip:
  ^ here
Command exited with status 4 after 0.996269702911377 seconds.
ISSUE TYPE
  • Bug Report
COMPONENT NAME

ansible-test integration

ANSIBLE VERSION

ansible-base (latest)

CONFIGURATION

N/A

OS / ENVIRONMENT

N/A

STEPS TO REPRODUCE
pip3 uninstall ansible
pip3 install git+https://github.com/ansible-collection-migration/ansible-base
ansible-test integration --docker -v
EXPECTED RESULTS

Tests should pass.

ACTUAL RESULTS

Tests fail (see above error message).

Bare variable deprecation warning in CI tests

SUMMARY

The CI molecule test currently results in a few deprecation warnings; we should fix those so we don't run into failures once Ansible 2.12 is out.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

CI

ANSIBLE VERSION

N/A

CONFIGURATION

N/A

OS / ENVIRONMENT

N/A

STEPS TO REPRODUCE
molecule test
EXPECTED RESULTS

No deprecation warnings are in the output.

ACTUAL RESULTS
TASK [Assert that recreating crd is as expected] *******************************
[DEPRECATION WARNING]: evaluating 'recreate_crd_default_merge_…' as a bare
variable, this behaviour will go away and you might need to add |bool to
the expression in the future. Also see CONDITIONAL_BARE_VARS configuration
toggle. This feature will be removed in version 2.12. Deprecation warnings can
be disabled by setting deprecation_warnings=False in ansible.cfg.
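The fix the warning points at is to make the conditional explicitly boolean; a small sketch with illustrative task and variable names:

    # Before: a bare variable in the conditional triggers the deprecation warning.
    - name: Assert that recreating crd is as expected
      assert:
        that: recreate_crd_result is successful
      when: recreate_crd_default_merge

    # After: cast explicitly with |bool so the behaviour stays the same in Ansible 2.12+.
    - name: Assert that recreating crd is as expected
      assert:
        that: recreate_crd_result is successful
      when: recreate_crd_default_merge | bool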

Figure out the best way to handle CHANGELOG updates

SUMMARY

Ansible core uses 'changelog fragments' (see docs for creating a changelog fragment) to make building a release-specific CHANGELOG file easier.

That might be overkill for a project with the reduced scope we have in this collection, but we still need a way to make sure all relevant changes (e.g. anything other than a grammar fix to the README or an adjustment to CI to fix failing tests) are encapsulated in the CHANGELOG file.

Right now, there are already two new module additions that are not documented in the CHANGELOG, but should be:

Ideally, we would have some mechanism by which the CHANGELOG could be updated on a per-PR basis (and this would be one of the gates that would have to be passed before merging). But as with any other project where that's a requirement, one of the hardest parts is making sure users don't have to babysit their PRs while other PRs come in and cause merge conflicts in the CHANGELOG file; there's no better way to discourage contributors than to make them do annoying tasks over and over again while their PR rots.

We could use a changelog generator similar to the one ansible/ansible uses, with sections including:

  • major_changes
  • minor_changes
  • deprecated_features
  • removed_features
  • bugfixes
  • known_issues

But we would need to integrate reno (or is it https://github.com/openstack/reno ?) into the release process (see more reno docs).

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

CHANGELOG.md

ADDITIONAL INFORMATION

N/A

Update README with installation and usage instructions

SUMMARY

Currently the README covers the former role whose history this project inherited. The README needs to be updated to cover the current scope of the project, as well as installation and usage instructions.

ISSUE TYPE
  • Documentation Report
COMPONENT NAME
README.md

unrelated files provided in collection release

SUMMARY
ISSUE TYPE
  • Bug Report
COMPONENT NAME

I just ran ansible-galaxy collection install community.kubernetes, looked at what was provided, and found files such as these:

tests
    ├── output
    │   ├── bot
    │   │   ├── ansible-test-sanity-ansible-doc.json
    │   │   ├── ansible-test-sanity-future-import-boilerplate.json
    │   │   ├── ansible-test-sanity-metaclass-boilerplate.json
    │   │   ├── ansible-test-sanity-pep8.json
    │   │   └── ansible-test-sanity-validate-modules.json
    │   ├── coverage
    │   └── data
    │       └── integration-2020-01-30-20-10-01.json

and also .DS_Store files:

.DS_Store
plugins/.DS_Store
plugins/module_utils/.DS_Store
plugins/modules/.DS_Store
tests/.DS_Store
tests/integration/targets/kubernetes/.DS_Store
ANSIBLE VERSION

N/A

CONFIGURATION

N/a

OS / ENVIRONMENT

N/a
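One hedged mitigation, shown only as an illustration, is to exclude such paths from the built artifact with build_ignore in galaxy.yml (available in newer ansible-galaxy releases; the patterns below are illustrative):

# galaxy.yml (excerpt)
build_ignore:
  - tests/output
  - .DS_Store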

CI Builds started failing with "cannot import name '_distro'"

SUMMARY

CI builds (using ansible-base) started failing in the past day or two with "cannot import name '_distro'"

ISSUE TYPE
  • Bug Report
COMPONENT NAME

ansible-test

ANSIBLE VERSION

ansible-base devel

CONFIGURATION

N/A

OS / ENVIRONMENT

N/A

STEPS TO REPRODUCE
EXPECTED RESULTS

Tests pass.

ACTUAL RESULTS
Run command: ansible-doc -t connection community.kubernetes.kubectl
ERROR: Command "ansible-doc -t connection community.kubernetes.kubectl" returned exit status 250.
>>> Standard Error
ERROR! Unexpected Exception, this is probably a bug: cannot import name '_distro'
Run command: importer.py
See documentation for help: https://docs.ansible.com/ansible/devel/dev_guide/testing/sanity/import.html
ERROR: Found 8 import issue(s) on python 3.6 which need to be resolved:
ERROR: plugins/module_utils/common.py:27:0: traceback: ImportError: cannot import name '_distro'
ERROR: plugins/module_utils/raw.py:29:0: traceback: ImportError: cannot import name '_distro'
ERROR: plugins/module_utils/scale.py:26:0: traceback: ImportError: cannot import name '_distro' (at plugins/module_utils/common.py:27:0)
ERROR: plugins/modules/k8s.py:279:0: traceback: ImportError: cannot import name '_distro' (at plugins/module_utils/raw.py:29:0)
ERROR: plugins/modules/k8s_auth.py:152:0: traceback: ImportError: cannot import name '_distro'
ERROR: plugins/modules/k8s_info.py:140:0: traceback: ImportError: cannot import name '_distro' (at plugins/module_utils/common.py:27:0)
ERROR: plugins/modules/k8s_scale.py:121:0: traceback: ImportError: cannot import name '_distro' (at plugins/module_utils/common.py:27:0)
ERROR: plugins/modules/k8s_service.py:172:0: traceback: ImportError: cannot import name '_distro' (at plugins/module_utils/common.py:27:0)

Remove module inheritance

SUMMARY

Currently we have the KubernetesAnsibleModule and KubernetesRawModule that most of our modules inherit from, but this is a bad pattern and has already bitten us with the k8s_scale module. We have an AnsibleMixin that works fine, though we should remove the argspec from it.

The main reason this pattern is harmful is that parameters are shared across modules, and adding or changing arguments in one will passively propagate to subclasses of that module. k8s_scale, for example, inherits parameters from k8s, and has broken in the past when we added arguments to the k8s module (issue here). Each module should own the arguments it accepts, and code sharing should be limited to shared utilities rather than module definitions.

ISSUE TYPE
  • Technical Debt

Support for helm package management

SUMMARY

This collection should include a module for Helm package management.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

helm and/or helm_cli

ADDITIONAL INFORMATION

Helm is the Kubernetes package manager and is maintained by the CNCF. While not every Kubernetes tool belongs in this one collection, Helm is too vital not to be included. It should be a core part of any Ansible Kubernetes collection, given its status in the ecosystem and the common use of Ansible to deploy, upgrade, manage, and remove applications of any type.

The existing helm module in Ansible core is insufficient in that it uses special Python libraries that emulate helm instead of wrapping the helm tool itself. It also currently only supports Helm 2, while the ecosystem has quickly moved to Helm 3. There are many other known outstanding issues.

I propose we instead consider this implementation that utilizes the actual Helm CLI for better compatibility and reliability over time: ansible/ansible#62450
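For illustration, a task using a CLI-wrapping helm module could look roughly like this (the module and parameter names are indicative of the proposal in ansible/ansible#62450, not a finalized interface):

    - name: Deploy a chart using a CLI-backed helm module (sketch).
      helm:
        name: my-release
        chart_ref: stable/nginx-ingress
        release_namespace: ingress
        state: present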

Fix validate_certs test in molecule integration test 'full.yml'

SUMMARY

As part of fixing ansible/ansible#57418 (ansible/ansible PR ansible/ansible#56640), a task was added to the full.yml test playbook which is run in molecule's default scenario:

    - name: Setting validate_certs to true causes a failure
      k8s:
        name: testing
        kind: Namespace
        validate_certs: yes
      ignore_errors: yes
      register: output
    
    - name: assert that validate_certs caused a failure (and therefore was correctly translated to verify_ssl)
      assert:
        that:
          - output is failed

This fails now that we have molecule running tests on a full cluster (see #22) with the following message:

    TASK [assert that validate_certs caused a failure (and therefore was correctly translated to verify_ssl)] ***
    fatal: [localhost]: FAILED! => {
        "assertion": "output is failed",
        "changed": false,
        "evaluated_to": false,
        "msg": "Assertion failed"
    }

We should make that test correct so it passes, and/or fix the underlying bug if the failure shows that validate_certs actually doesn't work as it should.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

CI

ANSIBLE VERSION

N/A

CONFIGURATION

N/A

OS / ENVIRONMENT

N/A

STEPS TO REPRODUCE
  1. Uncomment the commented tasks in molecule/default/tasks/full.yml
  2. Run molecule test
EXPECTED RESULTS

Tests should pass.

ACTUAL RESULTS

Tests fail on:

    TASK [assert that validate_certs caused a failure (and therefore was correctly translated to verify_ssl)] ***
    fatal: [localhost]: FAILED! => {
        "assertion": "output is failed",
        "changed": false,
        "evaluated_to": false,
        "msg": "Assertion failed"
    }

Get tests working for k8s modules

SUMMARY

The old modules used openshift/origin, which is no longer maintained. We could use kind like the Operator SDK uses for Ansible tests, or we could find some other way to run a test cluster and test against it with all the modules in this collection.

ISSUE TYPE
  • Feature Idea

k8s_info ignores authorization parameters with OCP 4.3

From @kazito1 on Mar 16, 2020 22:34

SUMMARY

k8s_info is ignoring username: and password: fields

ISSUE TYPE
  • Bug Report
COMPONENT NAME

k8s_info

ANSIBLE VERSION
ansible 2.9.5
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /var/lib/awx/venv/ansible2_8/lib/python2.7/site-packages/ansible
  executable location = bin/ansible
  python version = 2.7.5 (default, Feb 20 2018, 09:19:12) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]

CONFIGURATION
DEFAULT_ROLES_PATH(/etc/ansible/ansible.cfg) = [u'/etc/ansible/roles', u'/usr/share/ansible/roles']
OS / ENVIRONMENT

RHEL 7.7, ansible 2.9. The remote OCP cluster is OCP 4.3

STEPS TO REPRODUCE
  1. Create a playbook that connects to a RHEL 7.7 box that is trying to reach an OCP cluster
  2. Make sure that the aforementioned box has python2-openshift and python2-kubernetes installed
  3. Use the k8s_info module
- name: Test for the k8s_info module
  hosts: ProxyA

  vars_files:
    - site/network_variables.yaml
    - site/ocp_variables.yaml

  tasks:
    - name: Try to login to OCP cluster
      k8s_auth:
        host: https://api.{{ ClusterName }}.{{ DNS_Domain }}:6443
        username: "{{ OCP_admin }}"
        password: "{{ OCP_Password }}"
        validate_certs: no
      register: k8s_auth_result
      ignore_errors: yes

    - name: test the description of a role
      k8s_info:
        host: https://api.{{ ClusterName }}.{{ DNS_Domain }}:6443
        username: "{{ OCP_admin }}"
        password: "{{ OCP_Password }}"
        validate_certs: no
        kind: ClusterRoleBinding
      register: blah
EXPECTED RESULTS

Receive information about the OCP requested resource in blah variable

ACTUAL RESULTS

The cluster returns Forbidden even if the user has cluster-admin privileges

Traceback (most recent call last):\r\n  File \"/root/.ansible/tmp/ansible-tmp-1584395020.86-82300935789067/AnsiballZ_k8s_info.py\", line 102, in <module>\r\n    _ansiballz_main()\r\n  File \"/root/.ansible/tmp/ansible-tmp-1584395020.86-82300935789067/AnsiballZ_k8s_info.py\", line 94, in _ansiballz_main\r\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\r\n  File \"/root/.ansible/tmp/ansible-tmp-1584395020.86-82300935789067/AnsiballZ_k8s_info.py\", line 40, in invoke_module\r\n    runpy.run_module(mod_name='ansible.modules.clustering.k8s.k8s_info', init_globals=None, run_name='__main__', alter_sys=True)\r\n  File \"/usr/lib64/python2.7/runpy.py\", line 176, in run_module\r\n    fname, loader, pkg_name)\r\n  File \"/usr/lib64/python2.7/runpy.py\", line 82, in _run_module_code\r\n    mod_name, mod_fname, mod_loader, pkg_name)\r\n  File \"/usr/lib64/python2.7/runpy.py\", line 72, in _run_code\r\n    exec code in run_globals\r\n  File \"/tmp/ansible_k8s_info_payload_yE6iRL/ansible_k8s_info_payload.zip/ansible/modules/clustering/k8s/k8s_info.py\", line 179, in <module>\r\n  File \"/tmp/ansible_k8s_info_payload_yE6iRL/ansible_k8s_info_payload.zip/ansible/modules/clustering/k8s/k8s_info.py\", line 175, in main\r\n  File \"/tmp/ansible_k8s_info_payload_yE6iRL/ansible_k8s_info_payload.zip/ansible/modules/clustering/k8s/k8s_info.py\", line 148, in execute_module\r\n  File \"/tmp/ansible_k8s_info_payload_yE6iRL/ansible_k8s_info_payload.zip/ansible/module_utils/k8s/common.py\", line 200, in get_api_client\r\n  File \"/usr/lib/python2.7/site-packages/openshift/dynamic/client.py\", line 108, in __init__\r\n    self.__init_cache()\r\n  File \"/usr/lib/python2.7/site-packages/openshift/dynamic/client.py\", line 137, in __init_cache\r\n    self.__resources.update(self.parse_api_groups())\r\n  File \"/usr/lib/python2.7/site-packages/openshift/dynamic/client.py\", line 187, in parse_api_groups\r\n    groups_response = load_json(self.request('GET', '/{}'.format(prefix)))['groups']\r\n  File \"/usr/lib/python2.7/site-packages/openshift/dynamic/client.py\", line 395, in request\r\n    _return_http_data_only=params.get('_return_http_data_only', True)\r\n  File \"/usr/lib/python2.7/site-packages/kubernetes/client/api_client.py\", line 321, in call_api\r\n    _return_http_data_only, collection_formats, _preload_content, _request_timeout)\r\n  File \"/usr/lib/python2.7/site-packages/kubernetes/client/api_client.py\", line 155, in __call_api\r\n    _request_timeout=_request_timeout)\r\n  File \"/usr/lib/python2.7/site-packages/kubernetes/client/api_client.py\", line 342, in request\r\n    headers=headers)\r\n  File \"/usr/lib/python2.7/site-packages/kubernetes/client/rest.py\", line 231, in GET\r\n    query_params=query_params)\r\n  File \"/usr/lib/python2.7/site-packages/kubernetes/client/rest.py\", line 222, in request\r\n    raise ApiException(http_resp=r)\r\nkubernetes.client.rest.ApiException: (403)\r\nReason: Forbidden\r\nHTTP response headers: HTTPHeaderDict({'Date': 'Mon, 16 Mar 2020 21:43:41 GMT', 'Content-Length': '189', 'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'Cache-Control': 'no-cache, private'})\r\nHTTP response body: {\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"forbidden: User \\\"system:anonymous\\\" cannot get path \\\"/apis\\\"\",\"reason\":\"Forbidden\",\"details\":{},\"code\":403}\r\n\r\n\r\n

Copied from original issue: ansible/ansible#68267

Release version 0.10.0

SUMMARY

With the addition of k8s_exec and k8s_log, we need to bump a minor pre-1.0 release version. There are also a number of other small cleanups that will be included in this release. The CHANGELOG needs to be updated (manually, for now, since #40 is not complete), and then I'll make sure to follow the new process to make sure extra files aren't dropped in (thus preventing #29 from happening for this next release).

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

N/A

ADDITIONAL INFORMATION

N/A

support for synchronize / rsync module

SUMMARY

Support for the synchronize (rsync front end) module in the kubectl and oc connection plugins. Or create an equivalent.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME
  • oc
  • kubectl
  • (external) synchronize
ADDITIONAL INFORMATION

This likely requires modification of the synchronize module. I'm not even sure this is possible! :)

An alternative, which might be equivalent, is support for the oc/kubectl cp command.

references:

Sanity tests failing with "No module named 'jinja2'"

SUMMARY
ERROR: Command "importer.py" returned exit status 1.
>>> Standard Error
Traceback (most recent call last):
  File "/root/ansible/ansible_collections/community/kubernetes/tests/output/.tmp/sanity/import/minimal-py36/bin/importer.py", line 447, in <module>
    main()
  File "/root/ansible/ansible_collections/community/kubernetes/tests/output/.tmp/sanity/import/minimal-py36/bin/importer.py", line 51, in main
    from ansible.utils.collection_loader import AnsibleCollectionLoader
  File "/root/ansible/lib/ansible/utils/collection_loader.py", line 15, in <module>
    from ansible import constants as C
  File "/root/ansible/lib/ansible/constants.py", line 12, in <module>
    from jinja2 import Template
ModuleNotFoundError: No module named 'jinja2'

See failed run: https://github.com/ansible-collections/kubernetes/runs/541824333

ISSUE TYPE
  • Bug Report

Add k8s_status module to collection

SUMMARY

A k8s_status module would be nice to have in certain circumstances. If you want to be able to set a status for a given resource, this module could help you do that.

See: https://github.com/operator-framework/operator-metering/blob/master/images/metering-ansible-operator/roles/meteringconfig/library/k8s_status.py

ISSUE TYPE
  • Feature Idea
ADDITIONAL INFORMATION

Example task:

- name: Set custom status fields on TestCR
  k8s_status:
    api_version: apps.example.com/v1alpha1
    kind: TestCR
    name: my-test
    namespace: testing
    status:
        hello: world
        custom: entries

Decide whether to add a meta/action_groups.yml file

SUMMARY

In #49, I found that there was an extra action_groups.yml file added to the automatically-migrated Kubernetes collection: https://github.com/ansible-collection-migration/community.kubernetes/blob/master/meta/action_groups.yml

The file seems to be useful for defining module_defaults groups (see docs on Module Defaults), and is currently waiting on a PR (ansible/ansible#67291) to make it into ansible/ansible before the file does anything useful.

For the Kubernetes collection, I'm not sure whether it's valuable to have this file/support or not... it could add convenience in the case of specifying a bunch of defaults for a bunch of tasks relating to different k8s_* family modules, but if we don't include it, would much be lost?
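For context, the convenience in question is the module_defaults group syntax; a hedged sketch (the exact group name for this collection would be determined by the action_groups.yml file, so treat it as illustrative):

---
- hosts: localhost
  connection: local
  gather_facts: false
  module_defaults:
    group/k8s:
      namespace: myapp
  tasks:
    - name: The namespace comes from the group default rather than being repeated per task.
      k8s_info:
        kind: Deployment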

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

Metadata

ADDITIONAL INFORMATION

Related issues / PRs:

k8s module src directive does not reference files directory when used in a role

SUMMARY

The k8s module has a src directive for providing a path to a file containing a valid YAML definition of an object or objects to be created or updated. It appears that this requires a full path name. When used in a role, this directive doesn't appear to search the role's files/templates/tasks directories by default. It does work if "{{ role_path }}/files/my_file.yml" is used, however (see the sketch below).
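For illustration, the workaround mentioned above looks like this inside a role (the file name mirrors the reproducer further down):

    - name: Create secret for htpasswd (explicit role_path workaround).
      k8s:
        state: present
        src: "{{ role_path }}/files/secret.yml"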

ISSUE TYPE
  • Bug Report
COMPONENT NAME
ANSIBLE VERSION
ansible 2.7.8
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
CONFIGURATION
OS / ENVIRONMENT

RHEL 7.7

STEPS TO REPRODUCE
- name: Create secret for htpasswd
  k8s:
    kubeconfig: clusters/{{ cluster }}/ocp4/auth/kubeconfig
    state: present
    src: files/secret.yml
EXPECTED RESULTS

Secret created based on the contents of files/secret.yml

ACTUAL RESULTS

Playbook fails with "Error accessing files/secret.yml. Does the file exist?"

ansible-playbook 2.7.8
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.7.5 (default, Jun 11 2019, 14:33:56) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /etc/ansible/ansible.cfg as config file
/root/ocp4/inventories/hosts.bal1 did not meet host_list requirements, check plugin documentation if this is unexpected
/root/ocp4/inventories/hosts.bal1 did not meet script requirements, check plugin documentation if this is unexpected
/root/ocp4/inventories/hosts.bal1 did not meet yaml requirements, check plugin documentation if this is unexpected
Parsed /root/ocp4/inventories/hosts.bal1 inventory source with ini plugin
 
PLAYBOOK: postconfig.yml ********************************************************************************************
1 plays in postconfig.yml
 
PLAY [localhost] ****************************************************************************************************
META: ran handlers
 
TASK [user-registry-backdoor : Create secret for htpasswd] ******************************************************
task path: /root/ocp4/roles/user-registry-backdoor/tasks/main.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1585752693.84-12716652062932 `" && echo ansible-tmp-1585752693.84-12716652062932="` echo /root/.ansible/tmp/ansible-tmp-1585752693.84-12716652062932 `" ) && sleep 0'
Using module file /usr/lib/python2.7/site-packages/ansible/modules/clustering/k8s/k8s.py
<127.0.0.1> PUT /root/.ansible/tmp/ansible-local-48900CduyEt/tmpc6jyoA TO /root/.ansible/tmp/ansible-tmp-1585752693.84-12716652062932/AnsiballZ_k8s.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1585752693.84-12716652062932/ /root/.ansible/tmp/ansible-tmp-1585752693.84-12716652062932/AnsiballZ_k8s.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python2 /root/.ansible/tmp/ansible-tmp-1585752693.84-12716652062932/AnsiballZ_k8s.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1585752693.84-12716652062932/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "api_key": null,
            "cert_file": null,
            "context": null,
            "force": false,
            "host": null,
            "key_file": null,
            "kubeconfig": "clusters/bal1/ocp4/auth/kubeconfig",
            "merge_type": null,
            "password": null,
            "ssl_ca_cert": null,
            "state": "present",
            "username": null,
            "verify_ssl": null
        }
    },
    "msg": "Error accessing files/secret.yml. Does the file exist?"
}
        to retry, use: --limit @/root/ocp4/postconfig.retry
 
PLAY RECAP **********************************************************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1
 

k8s module with multi resources manifest has a bad "changed" behaviour

SUMMARY

When creating a task with the k8s module that uses a multi-resource manifest, Ansible always reports "changed": true.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

k8s module

ANSIBLE VERSION
ansible 2.9.6
  config file = None
  configured module search path = ['/Users/hennessy/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/Cellar/ansible/2.9.6_1/libexec/lib/python3.8/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.8.2 (default, Mar 11 2020, 00:29:50) [Clang 11.0.0 (clang-1100.0.33.17)]
CONFIGURATION
Empty
OS / ENVIRONMENT
Darwin 19.3.0 Darwin Kernel Version 19.3.0: Thu Jan  9 20:58:23 PST 2020; root:xnu-6153.81.5~1/RELEASE_X86_64 x86_64
ProductName:	Mac OS X
ProductVersion:	10.15.3
BuildVersion:	19D76
STEPS TO REPRODUCE
---
- name: Deploy kubernetes-dashboard
  hosts: localhost
  tasks:
    - name: Create recommended dashboard deployment from url
      k8s:
        state: present
        definition: "{{ lookup('url', 'https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml', split_lines=False) }}"
EXPECTED RESULTS
changed: [localhost] => {

>     "changed": false,

    "invocation": {
        "module_args": {
            "api_key": null,
            "api_version": "v1",
            "append_hash": false,
            "apply": false,
            "ca_cert": null,
            "client_cert": null,
            "client_key": null,
            "context": null,
            "force": false,
            "host": null,
            "kind": null,
            "kubeconfig": "/Users/hennessy/.kube/config",
            "merge_type": null,
            "name": null,
            "namespace": null,
            "password": null,
            "proxy": null,
            "resource_definition": "# Copyright 2017 The Kubernetes Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: kubernetes-dashboard\n\n---\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard\n  namespace: kubernetes-dashboard\n\n---\n\nkind: Service\napiVersion: v1\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard\n  namespace: kubernetes-dashboard\nspec:\n  ports:\n    - port: 443\n      targetPort: 8443\n  selector:\n    k8s-app: kubernetes-dashboard\n\n---\n\napiVersion: v1\nkind: Secret\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard-certs\n  namespace: kubernetes-dashboard\ntype: Opaque\n\n---\n\napiVersion: v1\nkind: Secret\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard-csrf\n  namespace: kubernetes-dashboard\ntype: Opaque\ndata:\n  csrf: \"\"\n\n---\n\napiVersion: v1\nkind: Secret\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard-key-holder\n  namespace: kubernetes-dashboard\ntype: Opaque\n\n---\n\nkind: ConfigMap\napiVersion: v1\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard-settings\n  namespace: kubernetes-dashboard\n\n---\n\nkind: Role\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard\n  namespace: kubernetes-dashboard\nrules:\n  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.\n  - apiGroups: [\"\"]\n    resources: [\"secrets\"]\n    resourceNames: [\"kubernetes-dashboard-key-holder\", \"kubernetes-dashboard-certs\", \"kubernetes-dashboard-csrf\"]\n    verbs: [\"get\", \"update\", \"delete\"]\n    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.\n  - apiGroups: [\"\"]\n    resources: [\"configmaps\"]\n    resourceNames: [\"kubernetes-dashboard-settings\"]\n    verbs: [\"get\", \"update\"]\n    # Allow Dashboard to get metrics.\n  - apiGroups: [\"\"]\n    resources: [\"services\"]\n    resourceNames: [\"heapster\", \"dashboard-metrics-scraper\"]\n    verbs: [\"proxy\"]\n  - apiGroups: [\"\"]\n    resources: [\"services/proxy\"]\n    resourceNames: [\"heapster\", \"http:heapster:\", \"https:heapster:\", \"dashboard-metrics-scraper\", \"http:dashboard-metrics-scraper\"]\n    verbs: [\"get\"]\n\n---\n\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard\nrules:\n  # Allow Metrics Scraper to get metrics from the Metrics server\n  - apiGroups: [\"metrics.k8s.io\"]\n    resources: [\"pods\", \"nodes\"]\n    verbs: [\"get\", \"list\", \"watch\"]\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard\n  namespace: 
kubernetes-dashboard\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: Role\n  name: kubernetes-dashboard\nsubjects:\n  - kind: ServiceAccount\n    name: kubernetes-dashboard\n    namespace: kubernetes-dashboard\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: kubernetes-dashboard\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: kubernetes-dashboard\nsubjects:\n  - kind: ServiceAccount\n    name: kubernetes-dashboard\n    namespace: kubernetes-dashboard\n\n---\n\nkind: Deployment\napiVersion: apps/v1\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard\n  namespace: kubernetes-dashboard\nspec:\n  replicas: 1\n  revisionHistoryLimit: 10\n  selector:\n    matchLabels:\n      k8s-app: kubernetes-dashboard\n  template:\n    metadata:\n      labels:\n        k8s-app: kubernetes-dashboard\n    spec:\n      containers:\n        - name: kubernetes-dashboard\n          image: kubernetesui/dashboard:v2.0.0-rc7\n          imagePullPolicy: Always\n          ports:\n            - containerPort: 8443\n              protocol: TCP\n          args:\n            - --auto-generate-certificates\n            - --namespace=kubernetes-dashboard\n            # Uncomment the following line to manually specify Kubernetes API server Host\n            # If not specified, Dashboard will attempt to auto discover the API server and connect\n            # to it. Uncomment only if the default does not work.\n            # - --apiserver-host=http://my-address:port\n          volumeMounts:\n            - name: kubernetes-dashboard-certs\n              mountPath: /certs\n              # Create on-disk volume to store exec logs\n            - mountPath: /tmp\n              name: tmp-volume\n          livenessProbe:\n            httpGet:\n              scheme: HTTPS\n              path: /\n              port: 8443\n            initialDelaySeconds: 30\n            timeoutSeconds: 30\n          securityContext:\n            allowPrivilegeEscalation: false\n            readOnlyRootFilesystem: true\n            runAsUser: 1001\n            runAsGroup: 2001\n      volumes:\n        - name: kubernetes-dashboard-certs\n          secret:\n            secretName: kubernetes-dashboard-certs\n        - name: tmp-volume\n          emptyDir: {}\n      serviceAccountName: kubernetes-dashboard\n      nodeSelector:\n        \"beta.kubernetes.io/os\": linux\n      # Comment the following tolerations if Dashboard must not be deployed on master\n      tolerations:\n        - key: node-role.kubernetes.io/master\n          effect: NoSchedule\n\n---\n\nkind: Service\napiVersion: v1\nmetadata:\n  labels:\n    k8s-app: dashboard-metrics-scraper\n  name: dashboard-metrics-scraper\n  namespace: kubernetes-dashboard\nspec:\n  ports:\n    - port: 8000\n      targetPort: 8000\n  selector:\n    k8s-app: dashboard-metrics-scraper\n\n---\n\nkind: Deployment\napiVersion: apps/v1\nmetadata:\n  labels:\n    k8s-app: dashboard-metrics-scraper\n  name: dashboard-metrics-scraper\n  namespace: kubernetes-dashboard\nspec:\n  replicas: 1\n  revisionHistoryLimit: 10\n  selector:\n    matchLabels:\n      k8s-app: dashboard-metrics-scraper\n  template:\n    metadata:\n      labels:\n        k8s-app: dashboard-metrics-scraper\n      annotations:\n        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'\n    spec:\n      containers:\n        - name: dashboard-metrics-scraper\n          image: kubernetesui/metrics-scraper:v1.0.4\n          
ports:\n            - containerPort: 8000\n              protocol: TCP\n          livenessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /\n              port: 8000\n            initialDelaySeconds: 30\n            timeoutSeconds: 30\n          volumeMounts:\n          - mountPath: /tmp\n            name: tmp-volume\n          securityContext:\n            allowPrivilegeEscalation: false\n            readOnlyRootFilesystem: true\n            runAsUser: 1001\n            runAsGroup: 2001\n      serviceAccountName: kubernetes-dashboard\n      nodeSelector:\n        \"beta.kubernetes.io/os\": linux\n      # Comment the following tolerations if Dashboard must not be deployed on master\n      tolerations:\n        - key: node-role.kubernetes.io/master\n          effect: NoSchedule\n      volumes:\n        - name: tmp-volume\n          emptyDir: {}\n",
            "src": null,
            "state": "present",
            "username": null,
            "validate": null,
            "validate_certs": null,
            "wait": false,
            "wait_condition": null,
            "wait_sleep": 5,
            "wait_timeout": 120
        }
    },
    "result": {
        "results": [
            {
                "changed": false,
                "diff": {},
                "method": "patch",
                "result": {
                    "apiVersion": "v1",
                    "kind": "Namespace",
                    "metadata": {
                        "creationTimestamp": "2020-04-02T23:13:39Z",
                        "name": "kubernetes-dashboard",
                        "resourceVersion": "611598",
                        "selfLink": "/api/v1/namespaces/kubernetes-dashboard",
                        "uid": "3a19ebb3-964c-4d85-9acc-8d83a3bb0bb5"
                    },
                    "spec": {
                        "finalizers": [
                            "kubernetes"
                        ]
                    },
                    "status": {
                        "phase": "Active"
                    }
                },
                "warnings": []
            },
            {
                "changed": false,
                "diff": {},
                "method": "patch",
                "result": {
                    "apiVersion": "v1",
                    "kind": "ServiceAccount",
                    "metadata": {
                        "creationTimestamp": "2020-04-02T23:13:39Z",
                        "labels": {
                            "k8s-app": "kubernetes-dashboard"
                        },
                        "name": "kubernetes-dashboard",
                        "namespace": "kubernetes-dashboard",
                        "resourceVersion": "611611",
                        "selfLink": "/api/v1/namespaces/kubernetes-dashboard/serviceaccounts/kubernetes-dashboard",
                        "uid": "375f02c5-085a-48b1-a3a4-26c88da56f63"
                    },
                    "secrets": [
                        {
                            "name": "kubernetes-dashboard-token-hnbj9"
                        }
                    ]
                },
                "warnings": []
            },
            ... list continue
ACTUAL RESULTS
changed: [localhost] => {

>     "changed": true,

    "invocation": {
        "module_args": {
            "api_key": null,
            "api_version": "v1",
            "append_hash": false,
            "apply": false,
            "ca_cert": null,
            "client_cert": null,
            "client_key": null,
            "context": null,
            "force": false,
            "host": null,
            "kind": null,
            "kubeconfig": "/Users/hennessy/.kube/config",
            "merge_type": null,
            "name": null,
            "namespace": null,
            "password": null,
            "proxy": null,
            "resource_definition": "# Copyright 2017 The Kubernetes Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n#     http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\napiVersion: v1\nkind: Namespace\nmetadata:\n  name: kubernetes-dashboard\n\n---\n\napiVersion: v1\nkind: ServiceAccount\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard\n  namespace: kubernetes-dashboard\n\n---\n\nkind: Service\napiVersion: v1\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard\n  namespace: kubernetes-dashboard\nspec:\n  ports:\n    - port: 443\n      targetPort: 8443\n  selector:\n    k8s-app: kubernetes-dashboard\n\n---\n\napiVersion: v1\nkind: Secret\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard-certs\n  namespace: kubernetes-dashboard\ntype: Opaque\n\n---\n\napiVersion: v1\nkind: Secret\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard-csrf\n  namespace: kubernetes-dashboard\ntype: Opaque\ndata:\n  csrf: \"\"\n\n---\n\napiVersion: v1\nkind: Secret\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard-key-holder\n  namespace: kubernetes-dashboard\ntype: Opaque\n\n---\n\nkind: ConfigMap\napiVersion: v1\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard-settings\n  namespace: kubernetes-dashboard\n\n---\n\nkind: Role\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard\n  namespace: kubernetes-dashboard\nrules:\n  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.\n  - apiGroups: [\"\"]\n    resources: [\"secrets\"]\n    resourceNames: [\"kubernetes-dashboard-key-holder\", \"kubernetes-dashboard-certs\", \"kubernetes-dashboard-csrf\"]\n    verbs: [\"get\", \"update\", \"delete\"]\n    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.\n  - apiGroups: [\"\"]\n    resources: [\"configmaps\"]\n    resourceNames: [\"kubernetes-dashboard-settings\"]\n    verbs: [\"get\", \"update\"]\n    # Allow Dashboard to get metrics.\n  - apiGroups: [\"\"]\n    resources: [\"services\"]\n    resourceNames: [\"heapster\", \"dashboard-metrics-scraper\"]\n    verbs: [\"proxy\"]\n  - apiGroups: [\"\"]\n    resources: [\"services/proxy\"]\n    resourceNames: [\"heapster\", \"http:heapster:\", \"https:heapster:\", \"dashboard-metrics-scraper\", \"http:dashboard-metrics-scraper\"]\n    verbs: [\"get\"]\n\n---\n\nkind: ClusterRole\napiVersion: rbac.authorization.k8s.io/v1\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard\nrules:\n  # Allow Metrics Scraper to get metrics from the Metrics server\n  - apiGroups: [\"metrics.k8s.io\"]\n    resources: [\"pods\", \"nodes\"]\n    verbs: [\"get\", \"list\", \"watch\"]\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard\n  namespace: 
kubernetes-dashboard\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: Role\n  name: kubernetes-dashboard\nsubjects:\n  - kind: ServiceAccount\n    name: kubernetes-dashboard\n    namespace: kubernetes-dashboard\n\n---\n\napiVersion: rbac.authorization.k8s.io/v1\nkind: ClusterRoleBinding\nmetadata:\n  name: kubernetes-dashboard\nroleRef:\n  apiGroup: rbac.authorization.k8s.io\n  kind: ClusterRole\n  name: kubernetes-dashboard\nsubjects:\n  - kind: ServiceAccount\n    name: kubernetes-dashboard\n    namespace: kubernetes-dashboard\n\n---\n\nkind: Deployment\napiVersion: apps/v1\nmetadata:\n  labels:\n    k8s-app: kubernetes-dashboard\n  name: kubernetes-dashboard\n  namespace: kubernetes-dashboard\nspec:\n  replicas: 1\n  revisionHistoryLimit: 10\n  selector:\n    matchLabels:\n      k8s-app: kubernetes-dashboard\n  template:\n    metadata:\n      labels:\n        k8s-app: kubernetes-dashboard\n    spec:\n      containers:\n        - name: kubernetes-dashboard\n          image: kubernetesui/dashboard:v2.0.0-rc7\n          imagePullPolicy: Always\n          ports:\n            - containerPort: 8443\n              protocol: TCP\n          args:\n            - --auto-generate-certificates\n            - --namespace=kubernetes-dashboard\n            # Uncomment the following line to manually specify Kubernetes API server Host\n            # If not specified, Dashboard will attempt to auto discover the API server and connect\n            # to it. Uncomment only if the default does not work.\n            # - --apiserver-host=http://my-address:port\n          volumeMounts:\n            - name: kubernetes-dashboard-certs\n              mountPath: /certs\n              # Create on-disk volume to store exec logs\n            - mountPath: /tmp\n              name: tmp-volume\n          livenessProbe:\n            httpGet:\n              scheme: HTTPS\n              path: /\n              port: 8443\n            initialDelaySeconds: 30\n            timeoutSeconds: 30\n          securityContext:\n            allowPrivilegeEscalation: false\n            readOnlyRootFilesystem: true\n            runAsUser: 1001\n            runAsGroup: 2001\n      volumes:\n        - name: kubernetes-dashboard-certs\n          secret:\n            secretName: kubernetes-dashboard-certs\n        - name: tmp-volume\n          emptyDir: {}\n      serviceAccountName: kubernetes-dashboard\n      nodeSelector:\n        \"beta.kubernetes.io/os\": linux\n      # Comment the following tolerations if Dashboard must not be deployed on master\n      tolerations:\n        - key: node-role.kubernetes.io/master\n          effect: NoSchedule\n\n---\n\nkind: Service\napiVersion: v1\nmetadata:\n  labels:\n    k8s-app: dashboard-metrics-scraper\n  name: dashboard-metrics-scraper\n  namespace: kubernetes-dashboard\nspec:\n  ports:\n    - port: 8000\n      targetPort: 8000\n  selector:\n    k8s-app: dashboard-metrics-scraper\n\n---\n\nkind: Deployment\napiVersion: apps/v1\nmetadata:\n  labels:\n    k8s-app: dashboard-metrics-scraper\n  name: dashboard-metrics-scraper\n  namespace: kubernetes-dashboard\nspec:\n  replicas: 1\n  revisionHistoryLimit: 10\n  selector:\n    matchLabels:\n      k8s-app: dashboard-metrics-scraper\n  template:\n    metadata:\n      labels:\n        k8s-app: dashboard-metrics-scraper\n      annotations:\n        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'\n    spec:\n      containers:\n        - name: dashboard-metrics-scraper\n          image: kubernetesui/metrics-scraper:v1.0.4\n          
ports:\n            - containerPort: 8000\n              protocol: TCP\n          livenessProbe:\n            httpGet:\n              scheme: HTTP\n              path: /\n              port: 8000\n            initialDelaySeconds: 30\n            timeoutSeconds: 30\n          volumeMounts:\n          - mountPath: /tmp\n            name: tmp-volume\n          securityContext:\n            allowPrivilegeEscalation: false\n            readOnlyRootFilesystem: true\n            runAsUser: 1001\n            runAsGroup: 2001\n      serviceAccountName: kubernetes-dashboard\n      nodeSelector:\n        \"beta.kubernetes.io/os\": linux\n      # Comment the following tolerations if Dashboard must not be deployed on master\n      tolerations:\n        - key: node-role.kubernetes.io/master\n          effect: NoSchedule\n      volumes:\n        - name: tmp-volume\n          emptyDir: {}\n",
            "src": null,
            "state": "present",
            "username": null,
            "validate": null,
            "validate_certs": null,
            "wait": false,
            "wait_condition": null,
            "wait_sleep": 5,
            "wait_timeout": 120
        }
    },
    "result": {
        "results": [
            {
                "changed": false,
                "diff": {},
                "method": "patch",
                "result": {
                    "apiVersion": "v1",
                    "kind": "Namespace",
                    "metadata": {
                        "creationTimestamp": "2020-04-02T23:13:39Z",
                        "name": "kubernetes-dashboard",
                        "resourceVersion": "611598",
                        "selfLink": "/api/v1/namespaces/kubernetes-dashboard",
                        "uid": "3a19ebb3-964c-4d85-9acc-8d83a3bb0bb5"
                    },
                    "spec": {
                        "finalizers": [
                            "kubernetes"
                        ]
                    },
                    "status": {
                        "phase": "Active"
                    }
                },
                "warnings": []
            },
            {
                "changed": false,
                "diff": {},
                "method": "patch",
                "result": {
                    "apiVersion": "v1",
                    "kind": "ServiceAccount",
                    "metadata": {
                        "creationTimestamp": "2020-04-02T23:13:39Z",
                        "labels": {
                            "k8s-app": "kubernetes-dashboard"
                        },
                        "name": "kubernetes-dashboard",
                        "namespace": "kubernetes-dashboard",
                        "resourceVersion": "611611",
                        "selfLink": "/api/v1/namespaces/kubernetes-dashboard/serviceaccounts/kubernetes-dashboard",
                        "uid": "375f02c5-085a-48b1-a3a4-26c88da56f63"
                    },
                    "secrets": [
                        {
                            "name": "kubernetes-dashboard-token-hnbj9"
                        }
                    ]
                },
                "warnings": []
            },
            ... (results list continues)

k8s module should have some basic filtering power (JSON output)

SUMMARY

While using the k8s_info module, we get the output as JSON in the registered variable, and then we have to add an extra task (for example a set_fact) to filter out the useful values. Could the k8s_info module gain some basic filtering capability so that we don't need an extra task just to extract one value from the JSON output?

ISSUE TYPE

  • Feature Idea

COMPONENT NAME

MODULE NAMES

  • k8s
  • k8s_info / k8s_facts (before Ansible 2.9)

ADDITIONAL INFORMATION

I'm just correlating these tasks with the equivalent shell commands.
With kubectl we do -o jsonpath={.metadata.name} to filter out the name (for example); how can we filter this or any other field using the k8s Ansible modules?
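
For now, the extra task looks something like this (a sketch of the register/set_fact pattern described above; the resource names are only examples):

- name: Get the myapp Service.
  community.kubernetes.k8s_info:
    kind: Service
    namespace: myapp
    name: myapp
  register: myapp_service

- name: Extract just the Service name (the extra task this issue would like to avoid).
  set_fact:
    myapp_service_name: "{{ myapp_service.resources[0].metadata.name }}"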

Test with Molecule 3.0 alpha

SUMMARY

Molecule 3 will be a bit of a rearchitecture, which means some things are going away and some things are changing. We are using the delegated provider for KinD (for now, at least), which should work fine with Molecule 3. See the Molecule 3 migration checklist and the Molecule 3 changelog PR for details.

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

CI - Molecule tests

ADDITIONAL INFORMATION

N/A

Stop using json_query in integration test waiter.yml

SUMMARY

Currently there's a task in the molecule integration tests (part of #22 / #10) which uses the json_query() filter, which is moving out of Ansible core into the community.general collection:

    - name: Check that paused deployment wait worked
      assert:
        that:
          - condition.reason == "DeploymentPaused"
          - condition.status == "Unknown"
      vars:
        condition: '{{ pause_deploy.result.status.conditions | community.general.json_query("[?type==`Progressing`]") | first }}'

This filter depends on the jmespath library, which might not be actively maintained anymore, and it also requires the installation of the community.general collection just to make integration tests pass (afaict, there are no other dependencies on non-core plugins/filters/modules).

If possible, we should remove this dependency so we can drop the installation of the community.general collection from our CI workflow, and also drop the dependency on jmespath in our CI environment, especially since it seems the module and underlying library may not be in the best state, maintenance-wise.
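
If we go this route, one possible replacement (a sketch using only core Jinja2 filters, so neither jmespath nor community.general would be needed) is:

    - name: Check that paused deployment wait worked
      assert:
        that:
          - condition.reason == "DeploymentPaused"
          - condition.status == "Unknown"
      vars:
        condition: '{{ pause_deploy.result.status.conditions | selectattr("type", "equalto", "Progressing") | first }}'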

ISSUE TYPE
  • Feature request
COMPONENT NAME

CI

Add `k8s_wait` module

SUMMARY

Just wanted to throw this out here, since it's something that would be convenient in a number of circumstances.

A common pattern for K8s deployments is:

  1. Apply a manifest to create a new Deployment.
  2. Wait for all the Pods in this Deployment to be Ready.
  3. Do other stuff.

Currently, for step 2, you can futz around with the returned data from k8s or k8s_info and use until/retries to get something working (see the sketch after the kubectl examples below), or, if you have kubectl available, you can use the simpler kubectl wait approach:

kubectl wait --for=condition=Ready pods --selector app=my-app-name --timeout=60s

At a basic level, I'd want something like:

- name: Wait for my-app-name pods to be ready.
  k8s_wait:
    for: condition=Ready
    type: pods
    selector:
      - app=my-app-name
    timeout: 60s

Something along those lines... not sure. But it would be nice to be able to specify this in a more structured way, and not have to rely on kubectl being present for a task like:

- name: Wait for my-app-name pods to be ready.
  command: >
    kubectl wait --for=condition=Ready
    pods --selector app=my-app-name --timeout=60s
  changed_when: false
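
For completeness, the until/retries workaround mentioned above looks roughly like this (a sketch; the namespace, label and readiness check are assumptions and may need adjusting for real workloads):

- name: Wait for my-app-name pods to be ready (current workaround).
  community.kubernetes.k8s_info:
    kind: Pod
    namespace: default
    label_selectors:
      - app=my-app-name
  register: my_app_pods
  until: >-
    my_app_pods.resources | length > 0 and
    my_app_pods.resources | map(attribute='status.phase') | unique | list == ['Running']
  retries: 12
  delay: 5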
ISSUE TYPE
  • Feature Idea
COMPONENT NAME

N/A

ADDITIONAL INFORMATION

N/A

Add Probot/stale bot to mark issues as stale and close after certain period

SUMMARY

For my own GitHub repos, I've configured the probot/stale bot to mark issues as stale after 90 days with no activity, and close after an additional 30 days if no further activity is found.

For the long-term maintenance of this collection, it might be nice to do the same thing here, just to make sure that 'rotting' issues and PRs (especially ones with no active interest) are automatically pruned.

Not sure of the exact timings we'd want to configure, but it's pretty easy to set up via the stale.yml configuration file.
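
A minimal stale.yml along the lines described above might look like this (a sketch; the timings and exempt labels are only suggestions):

# .github/stale.yml (probot/stale configuration)
daysUntilStale: 90   # mark issues/PRs stale after 90 days of inactivity
daysUntilClose: 30   # close 30 days after being marked stale
staleLabel: stale
exemptLabels:
  - pinned
  - security
markComment: >
  This issue has been marked stale because it has had no recent activity.
  It will be closed if no further activity occurs.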

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

Organization / maintenance

ADDITIONAL INFORMATION

N/A

Code coverage support

SUMMARY

Once ansible-minimal is working again:

  • Copy what grafana has for code coverage (see the sketch after this list)
  • Generate a codecov.io token
  • Enable Codecov commenting on PRs
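
A rough sketch of what the upload step could look like in the GitHub Actions workflow (the step name and secret name are assumptions; the details should follow whatever grafana does):

# Added to the test job in the CI workflow
- name: Upload coverage to codecov.io
  run: bash <(curl -s https://codecov.io/bash) -t ${{ secrets.CODECOV_TOKEN }}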
ISSUE TYPE
  • Feature Idea
COMPONENT NAME
ADDITIONAL INFORMATION

Add k8s_exec module to collection

SUMMARY

A new k8s_exec module would allow execution of a command in a pod container through the API:
https://docs.okd.io/latest/dev_guide/executing_remote_commands.html#protocol

See: ansible/ansible#55029

Currently the only easy way to do this (without using a custom module) is via kubectl and the command or shell module, which requires more dependencies and setup, and is more fragile.
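
For reference, the kubectl-based workaround described above looks roughly like this (a sketch; the pod and namespace variables match the example playbook below):

- name: Run a command in the pod via kubectl (current workaround).
  command: kubectl exec -n {{ namespace }} {{ pod }} -- zuul --version
  changed_when: false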

ISSUE TYPE
  • Feature Idea
COMPONENT NAME

k8s_exec

ADDITIONAL INFORMATION

Example playbook:

- hosts: localhost
  gather_facts: no
  vars:
    pod: example-sf-zuul-scheduler-84bf645594-kw9lr
    namespace: myproject
  tasks:
    - name: Test Exec
      k8s_exec:
        pod: "{{ pod }}"
        namespace: "{{ namespace }}"
        command: zuul --version

    - name: Test Stderr
      k8s_exec:
        pod: "{{ pod }}"
        namespace: "{{ namespace }}"
        command: sh -c "echo toto > /dev/stderr"

Results in:

$ ansible-playbook -v test-exec.yaml
PLAY [localhost] *************************************************************

TASK [Test Exec] *************************************************************
changed: [localhost] => {
  "changed": true, "stderr": "", "stderr_lines": [],
  "stdout": "Zuul version: 3.7.2.dev37\n",
  "stdout_lines": ["Zuul version: 3.7.2.dev37"]
}

TASK [Test Stderr] ***********************************************************
changed: [localhost] => {
  "changed": true, "stderr": "toto\n", "stderr_lines": ["toto"],
  "stdout": "", "stdout_lines": []
}

PLAY RECAP *******************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=0

Code coverage upload issues to codecov.io

SUMMARY

Code coverage reports from master branch CI runs are not being uploaded to codecov.io.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

CI

ADDITIONAL INFORMATION

See:

It looks like maybe when I fixed the syntax issue in the coverage.sh file (see commit 426cf88), it stopped uploading results?

The logs from the CI runs are showing:

WARNING: Reviewing previous 1 warning(s):
WARNING: Ignored 28190 characters from 28 invalid coverage path(s).
+ for file in tests/output/coverage/coverage=*.xml
+ flags='*.xml'
+ flags='*'
+ flags='*'
+ flags=_
+ bash /dev/fd/63 -f 'tests/output/coverage/coverage=*.xml' -F _ -t d6ff3062-7455-4de8-a8cb-55b3b1ddf5b2 -X coveragepy -X gcov -X fix -X search -X xcode -K
++ curl -s https://codecov.io/bash

  _____          _
 / ____|        | |
| |     ___   __| | ___  ___ _____   __
| |    / _ \ / _` |/ _ \/ __/ _ \ \ / /
| |___| (_) | (_| |  __/ (_| (_) \ V /
 \_____\___/ \__,_|\___|\___\___/ \_/
                              Bash-20191211-b8db533


==> GitHub Actions detected.
    project root: .
    Yaml found at: codecov.yml
--> No coverage report found.
    Please visit http://docs.codecov.io/docs/supported-languages

What to do about vendor-specific Kubernetes features

SUMMARY

Currently in the k8s module we do a bit of special casing for Projects, an OpenShift-specific resource, and there is another PR raised to do something similar for BuildRequests, also OpenShift-specific. As a project, we should decide whether we want to accept and support vendor-specific Kubernetes resources, or whether we should target vanilla Kubernetes and encourage vendors to add their own collections (perhaps building off this one, if that is possible) to handle the special-casing of their resources. If we decide on the latter, we should also refactor the OpenShift-specific code out of this collection and move it to an OpenShift-specific collection.

We could also consider a hybrid, where we remove the vendor-specific bits from the core k8s modules but allow additional vendor-specific modules, so instead of special casing Projects in the k8s module, we may have an openshift or openshift_project module that properly handles the behavior.

I'd be happy with any of these decisions, but I think it would be good to have general guidance documented, to avoid the perception of favouritism between vendors.

ISSUE TYPE
  • Documentation Report

Two sanity tests are failing in CI

SUMMARY

ansible-test sanity checks are currently failing in CI:

Running sanity test 'validate-modules' with Python 3.6
Run command: /usr/bin/python3.6 /root/ansible/test/lib/ansible_test/_data/sanity/validate-modules/validate-modules --format json --arg-spec plugins/modules/k8s.py plugins/modules/k8s_auth.py plugins/modules/k8s_exec.py plugins/modules/k8s_info.py plugins/modules/k8s_log.py plugins/modules/k8s_scale.py plugins/modules/k8s_service.py --collection ansible_collections/community/kubernetes
ERROR: Found 2 validate-modules issue(s) which need to be resolved:
ERROR: plugins/modules/k8s_service.py:0:0: mutually_exclusive-unknown: mutually_exclusive contains terms which are not part of argument_spec: apply
ERROR: plugins/modules/k8s_service.py:0:0: mutually_exclusive-unknown: mutually_exclusive contains terms which are not part of argument_spec: src
See documentation for help: https://docs.ansible.com/ansible/devel/dev_guide/testing/sanity/validate-modules.html
ISSUE TYPE
  • Bug Report
COMPONENT NAME

CI / Documentation

k8s_log fails to read job logs

SUMMARY

k8s_log module fails to read logs for k8s jobs

ISSUE TYPE
  • Bug Report
ANSIBLE VERSION
ansible 2.9.6
  config file = None
  configured module search path = ['/Users/1041791/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/Cellar/ansible/2.9.6_1/libexec/lib/python3.8/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.8.2 (default, Mar 11 2020, 00:28:52) [Clang 11.0.0 (clang-1100.0.33.17)]
STEPS TO REPRODUCE
- hosts: localhost
  connection: local
  gather_facts: false

  collections:
    - community.kubernetes

  tasks:
  - community.kubernetes.k8s:
      state: present
      wait: yes
      wait_timeout: 120
      wait_condition:
        type: Complete
        status: 'True'
      definition:
        apiVersion: batch/v1
        kind: Job
        metadata:
          name: pi
          namespace: test
        spec:
          template:
            spec:
              containers:
              - name: pi
                image: perl
                command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
              restartPolicy: Never
          backoffLimit: 4

  - community.kubernetes.k8s_log:
      api_version: batch/v1
      kind: Job
      namespace: test
      name: pi
EXPECTED RESULTS

It should read logs for the pi job

ACTUAL RESULTS
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: 'float' object is not subscriptable
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n  File \"/Users/gabology/.ansible/tmp/ansible-tmp-1585628952.305501-193777297450161/AnsiballZ_k8s_log.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/Users/gabology/.ansible/tmp/ansible-tmp-1585628952.305501-193777297450161/AnsiballZ_k8s_log.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/Users/gabology/.ansible/tmp/ansible-tmp-1585628952.305501-193777297450161/AnsiballZ_k8s_log.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible_collections.community.kubernetes.plugins.modules.k8s_log', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py\", line 206, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py\", line 96, in _run_module_code\n    _run_code(code, mod_globals, init_globals,\n  File \"/usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.8/lib/python3.8/runpy.py\", line 86, in _run_code\n    exec(code, run_globals)\n  File \"/var/folders/n9/q00g1dkd0p17ct_99j_0ybjrn6w01w/T/ansible_community.kubernetes.k8s_log_payload_fa6l6osp/ansible_community.kubernetes.k8s_log_payload.zip/ansible_collections/community/kubernetes/plugins/modules/k8s_log.py\", line 236, in <module>\n  File \"/var/folders/n9/q00g1dkd0p17ct_99j_0ybjrn6w01w/T/ansible_community.kubernetes.k8s_log_payload_fa6l6osp/ansible_community.kubernetes.k8s_log_payload.zip/ansible_collections/community/kubernetes/plugins/modules/k8s_log.py\", line 232, in main\n  File \"/var/folders/n9/q00g1dkd0p17ct_99j_0ybjrn6w01w/T/ansible_community.kubernetes.k8s_log_payload_fa6l6osp/ansible_community.kubernetes.k8s_log_payload.zip/ansible_collections/community/kubernetes/plugins/modules/k8s_log.py\", line 185, in execute_module\n  File \"/usr/local/Cellar/ansible/2.9.6_1/libexec/lib/python3.8/site-packages/openshift/dynamic/client.py\", line 94, in get\n    return self.request('get', path, **kwargs)\n  File \"/usr/local/Cellar/ansible/2.9.6_1/libexec/lib/python3.8/site-packages/openshift/dynamic/client.py\", line 49, in inner\n    return serializer(self, json.loads(resp.data.decode('utf8')))\n  File \"/usr/local/Cellar/ansible/2.9.6_1/libexec/lib/python3.8/site-packages/openshift/dynamic/resource.py\", line 276, in __init__\n    kind = instance['kind']\nTypeError: 'float' object is not subscriptable\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

k8s_module fails on subsequent runs for unpatchable resources

Cross posted from ansible/ansible#68547. Not sure if this issue tracker should be used instead for k8s module issues.

SUMMARY

The module attempts to patch K8s Jobs (and possibly other k8s resources), which is not supported, causing a fatal error.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

k8s_module

ANSIBLE VERSION
ansible 2.9.6
  config file = None
  configured module search path = ['/Users/foo/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/Cellar/ansible/2.9.6_1/libexec/lib/python3.8/site-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 3.8.2 (default, Mar 11 2020, 00:28:52) [Clang 11.0.0 (clang-1100.0.33.17)]
STEPS TO REPRODUCE

Create a K8s Job using the Ansible k8s module. Execute the task once, then run it again.

EXPECTED RESULTS

Playbook execution should be successful on subsequent invocations, regardless of whether the Job is already present.

ACTUAL RESULTS

The following error is returned from K8s:

FAILED! => {"changed": false, "error": 422, "msg": "Failed to patch object: ...", "reason": "Unprocessable Entity", "status": 422}

[WARNING]: - collection was NOT installed successfully: Content has no field named 'owner'

SUMMARY

This collection will not install as described in README.md because an owner field is missing.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

Ansible Galaxy

ANSIBLE VERSION
user:~] 1 $ ansible --version
ansible 2.8.3
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/ostraaten/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
  executable location = /usr/local/bin/ansible
  python version = 2.7.15+ (default, Oct  7 2019, 17:39:04) [GCC 7.4.0]
[user:~] $ 

CONFIGURATION
[user:~] $ ansible-config dump --only-changed
[user:~] $ 


OS / ENVIRONMENT

Ubuntu 18.04 LTS

STEPS TO REPRODUCE

You only need Ansible installed to reproduce this bug

EXPECTED RESULTS

I expect Ansible to download and install the collection.

ACTUAL RESULTS
[user:~] 5 $ ansible-galaxy collection install community.kubernetes
- downloading role 'collection', owned by 
 [WARNING]: - collection was NOT installed successfully: Content has no field
named 'owner'

ERROR! - you can use --ignore-errors to skip failed roles and finish processing the list.

When ignoring errors, it still will not install:

- downloading role 'collection', owned by 
 [WARNING]: - collection was NOT installed successfully: Content has no field named 'owner'

- downloading role 'kubernetes', owned by community
 [WARNING]: - community.kubernetes was NOT installed successfully: - sorry, community.kubernetes was not found on
https://galaxy.ansible.com.

k8s inventory plugin not working as expected

From @magick93 on Dec 14, 2018 01:04

SUMMARY

I'm trying to create a dynamic inventory from a Kubernetes cluster using the k8s plugin, but I'm unable to get it to work. I've tried various plugin configuration options but almost always get the same result.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

k8s plugin

ANSIBLE VERSION
ansible 2.7.4
  config file = /home/anton/git/k8s_inventory/ansible.cfg
  configured module search path = [u'/home/anton/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0]

CONFIGURATION
INVENTORY_ENABLED(env: ANSIBLE_INVENTORY_ENABLED) = [u'k8s']
OS / ENVIRONMENT
NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.1 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

STEPS TO REPRODUCE

I've created a very simple play, enabled the k8s plugin, and created a k8s.yaml file that is passed as the -i parameter value.

k8s.yaml attempts

Attempt 1:

plugin: k8s
connections:
    namespaces:
    - awx

Attempt 2:

plugin: k8s
connections:
    host: https://REDACTED:6443
    token: REDACTED
    ssl_verify: false

Playbook

---
- name: Hello World!
  hosts: awx
 
  tasks:
 
  - name: Hello World!
    shell: echo "Hi! Tower is working."

EXPECTED RESULTS

Expected that the simple hello world playbook would run on containers within the awx kubernetes namespace.

ACTUAL RESULTS

The above attempts resulted in:

ansible-playbook playbook.yaml -i k8s.yaml -e hosts=k8s  -vvvv
ansible-playbook 2.7.4
  config file = /home/anton/git/k8s_inventory/ansible.cfg
  configured module search path = [u'/home/anton/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible-playbook
  python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0]
Using /home/anton/git/k8s_inventory/ansible.cfg as config file
setting up inventory plugins
 [WARNING]:  * Failed to parse /home/anton/git/k8s_inventory/k8s.yaml with ini plugin: /home/anton/git/k8s_inventory/k8s.yaml:4: Expected key=value host variable assignment, got: k8s

  File "/usr/lib/python2.7/dist-packages/ansible/plugins/inventory/ini.py", line 132, in parse
    self._parse(path, data)
  File "/usr/lib/python2.7/dist-packages/ansible/plugins/inventory/ini.py", line 210, in _parse
    hosts, port, variables = self._parse_host_definition(line)
  File "/usr/lib/python2.7/dist-packages/ansible/plugins/inventory/ini.py", line 308, in _parse_host_definition
    self._raise_error("Expected key=value host variable assignment, got: %s" % (t))
  File "/usr/lib/python2.7/dist-packages/ansible/plugins/inventory/ini.py", line 137, in _raise_error
    raise AnsibleError("%s:%d: " % (self._filename, self.lineno) + message)

 [WARNING]: Unable to parse /home/anton/git/k8s_inventory/k8s.yaml as an inventory source

 [WARNING]: No inventory was parsed, only implicit localhost is available

 [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/default.pyc

PLAYBOOK: playbook.yaml ******************************************************************************************************************************************************************************************************************************************************************
1 plays in playbook.yaml
 [WARNING]: Could not match supplied host pattern, ignoring: awx


PLAY [Hello World!] **********************************************************************************************************************************************************************************************************************************************************************
skipping: no hosts matched

PLAY RECAP *******************************************************************************************************************************************************************************************************************************************************************************

Copied from original issue: ansible/ansible#49918

Add a diff option to k8s module

SUMMARY

Is it possible to add a "diff" option to the k8s module to run the equivalent of kubectl diff instead of kubectl apply?

I'm using this code to run a diff before applying my configuration:

- name: kubectl diff config
  shell: >
    cat <(echo "$tmp_manifest") | {{ kubernetes_kubectl }} diff --kubeconfig=environments/{{ kubernetes_env }}/kubeconfigs/$(whoami).kubeconfig.yml -f - 2>&1
  args:
    executable: /bin/bash
  environment:
    tmp_manifest: "{{ lookup('template', 'roles/test/templates/test.yml.j2') }}"
  register: diff_output_secrets
  check_mode: false
  ignore_errors: true
  failed_when: false
ISSUE TYPE
  • Feature Idea

Proposed syntax:

- name: kubectl diff
  k8s:
    definition: "{{ lookup('template', 'roles/test/templates/test.yml.j2') }}"
    kubeconfig: "environments/{{ kubernetes_env }}/kubeconfigs/$(whoami).kubeconfig.yml"
    diff: true
COMPONENT NAME

k8s_module

ADDITIONAL INFORMATION

My role :

- name: kubectl diff config
  shell: >
    cat <(echo "$tmp_manifest") | {{ kubernetes_kubectl }} diff --kubeconfig=environments/{{ kubernetes_env }}/kubeconfigs/$(whoami).kubeconfig.yml -f - 2>&1
  args:
    executable: /bin/bash
  environment:
    tmp_manifest: "{{ lookup('template', 'roles/test/templates/test.yml.j2') }}"
  register: diff_output_config
  check_mode: false
  ignore_errors: true
  failed_when: false

- name: show {{ kubernetes_kubectl }} diff config output
  debug:
    var: diff_output_config.stdout 
  when:
    - diff_output_config is defined and diff_output_config.stdout != ''
	
- name: "{{ kubernetes_kubectl }} apply config"
  k8s:
    state: present
    definition: "{{ lookup('file', 'environments/' + kubernetes_env + '/k8sconfigs/test.yml') }}"
    kubeconfig: "environments/{{ kubernetes_env }}/kubeconfigs/$(whoami).kubeconfig.yml"
  register: apply_config_output
  when:
    - diff_output_config is defined and diff_output_config.stdout != ''
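
As a partial alternative in the meantime, the k8s module supports check mode, so a preview of whether anything would change can be done without kubectl (a sketch; it only reports changed/unchanged, not a full diff):

- name: Preview config changes without applying them.
  k8s:
    state: present
    definition: "{{ lookup('file', 'environments/' + kubernetes_env + '/k8sconfigs/test.yml') }}"
    kubeconfig: "environments/{{ kubernetes_env }}/kubeconfigs/$(whoami).kubeconfig.yml"
  check_mode: true
  register: preview_config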

Thanks

Use Github Actions 'checkout@v2' instead of v1 to prevent CI re-run errors

SUMMARY

While testing PR #61, I found errors like:

git checkout --progress --force 29436c4030820f6feb760e730d09c2d819ee5a57
##[error]fatal: reference is not a tree: 29436c4030820f6feb760e730d09c2d819ee5a57
##[error]Git checkout failed with exit code: 128
##[error]Exit code 1 returned from process: file name '/home/runner/runners/2.169.0/bin/Runner.PluginHost', arguments 'action "GitHub.Runner.Plugins.Repository.v1_0.CheckoutTask, Runner.Plugins"'.

It seems like that was tracked in actions/checkout#23, and that issue recommends upgrading to the checkout action v2 (we're currently using v1). I don't believe this will break anything in our CI workflow, but it should lead to better stability when re-running failed jobs like I was doing for the PR linked earlier.
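
The change itself should just be bumping the checkout action version in the workflow steps (the exact workflow file paths under .github/workflows/ are assumed):

steps:
  - uses: actions/checkout@v2   # was: actions/checkout@v1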

ISSUE TYPE
  • Bug Report
COMPONENT NAME

CI

k8s inventory plugin not working

I'm trying to create a dynamic inventory from a Kubernetes cluster using the k8s plugin, but I'm unable to get it to work. I can't figure out the right configuration from the documentation. How should it be configured?

1. Deploy playbook

[dev] [root@k8s-node1 ~]# cat k8s.yml 
---
- hosts: localhost
  gather_facts: false
  connection: local

  collections:
    - community.kubernetes
  tasks:
    - name: Ensure the myapp Namespace exists.
      k8s:
        api_version: v1
        kind: Namespace
        name: testing
        state: present

2. Playbook and kubeconfig

[dev] [root@k8s-node1 ~]# cat k8s.yml 
---
- hosts: localhost
  gather_facts: false
  connection: local

  collections:
    - community.kubernetes
  tasks:
    - name: Ensure the myapp Namespace exists.
      k8s:
        api_version: v1
        kind: Namespace
        name: testing
        state: present
[dev] [root@k8s-node1 ~]# cat /root/.kube/
cache/       .config.swd  .config.swf  .config.swh  .config.swj  .config.swl  .config.swn  .config.swp
config       .config.swe  .config.swg  .config.swi  .config.swk  .config.swm  .config.swo  http-cache/
[dev] [root@k8s-node1 ~]# cat /root/.kube/config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR2akNDQXFhZ0F3SUJBZ0lVS25YeHg1NjV1aGpSSFFuY3QvOXY3VS8xdUVVd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEREQUtCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEU1TURjeE1EQTJOVGN3TUZvWERUSTBNRGN3T0RBMk5UY3dNRm93WlRFTE1Ba0cKQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbGFXcHBibWN4RERBSwpCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByZFdKbGNtNWxkR1Z6Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBc0I1TndHRyt3QzVYdTdPNFV3QUIKcENhNFBSbVhzM2NDZFpsTW8rV2huSVVvT3UwRmVqdDNCQ1U4RHBvTXZpNGRZNzBVVnIrWVVNRS9BenczR1UvTgpYclBiR2drZHlmL291ZEIzMk94ZmxyOXhYeThCeGVUWnhqWlp5TkE0RmVVWFI1VXV3NWxoL0ErQkVRV1U2MW1MClRRSU4xYUk3RXMvREZMUDVHT3lXYkNOcnVwNVRMU1ZCZ3dEVzM5Rkh5YWhzNytPV0xhM1JyRTYxZFc2blRMdE4KbmZwazZEV05GMzRtRC8vM1BnTDZ2N0VteXEwZWcxbFplWVEzT1h1d1ZGUWV2ZlVnYzlaK2RpaTdTWVhzY3FKRwpLRDVwb1lNRzM5bmxTQnhVQytFMDFVemV4dTl5cFRCNXJ3M1hOL013SmVxTU93K21TTnJCTFZCbGZ2TTdRb1gzCjFRSURBUUFCbzJZd1pEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0VnWURWUjBUQVFIL0JBZ3dCZ0VCL3dJQkFqQWQKQmdOVkhRNEVGZ1FVK29HMm1QYmtaTDNyeEN0cEc3KzFXeHpSRVhFd0h3WURWUjBqQkJnd0ZvQVUrb0cybVBiawpaTDNyeEN0cEc3KzFXeHpSRVhFd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJQVdEOUpibWdOOHJneWVGM2dRCjRnUGFZQ1BiMit3QzQzd1BVVUtvcjlnTnZjNkVUMVFWY2lmV0dMZHExS3JDREN0QWxJSzVGUVVHak8xNUVMV3MKdWFOeGMzZURDY0NqaUE4SlBhUXNyeWpkeit2R0Iyd0xtN0VGQTVtdy9TcnN3cG5uWGo1RlpoVDdubHlDOTZwRwpOakZyQzlvK2taelRzVndqaUZmdzRUL0J3TWxXaUw2YTZ3bFowY0x1V1JWUnVZRSszQ3NMcHE2NFFJbnBxa3NDCllJOEFrTkY0anN1QmhMSjlMcXE1eDlMdEE2a1NzVFA5M0lnaCtSR2M2UGkreUZEMzcxeDlqcFJBcHNZRDlUOHcKTHB0RTBGWmNjNWx3RGZzSTJUaTVodzI1enZGa3F6OHl4U0RSQVFERXk2Q280aG9zODJwdEFJTjhzZlNyT0lobQpINms9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.10.130:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: cluser-admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: cluster-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQzVENDQXNXZ0F3SUJBZ0lVR3liT3hCMFhqTmlWd09ma01OdkpqVnVoenBBd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEREQUtCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEU1TURjeE1EQTJOVGN3TUZvWERUSTVNRGN3TnpBMk5UY3dNRm93YXpFTE1Ba0cKQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFVcHBibWN4RURBT0JnTlZCQWNUQjBKbGFVcHBibWN4RnpBVgpCZ05WQkFvVERuTjVjM1JsYlRwdFlYTjBaWEp6TVE4d0RRWURWUVFMRXdaVGVYTjBaVzB4RGpBTUJnTlZCQU1UCkJXRmtiV2x1TUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUFydzgvQ1c3dzVuUVEKdTFrYkRqVkVEMCtSaWtaT29lTjY3WTd3Q0xucmxENVFGekV4UmlkbmxHajdzRklJQ2Y0VDIxVWNKQzhkWUNFOQpHcDdPYytmL0RselpIMi8zYW5QSFNlU0pjSEZNT05pL2U5c0ZnY0dXVkd1dFVYL1ZnTVV1WkV1VGRzRUNDSUxwCk54YnZ6WEhLWjZoSG4yTjhHSU1OR3k1WGdiNnFYUVhxYnZScjBsRGp5SXc4V29FMEJmejFndnRWMy9UbkZKQUUKTUE3cVVjYTVyQnVqZi9acHNJRklGdVFqYVFjNVgyNkhUNFA1NVJmSXY4K1BFanBwY2JoU29yajBTOVhTdm0vZAp6UlUxTnBCbnQ2MXlSdmRMalZhZWpab0ZjeFlFb2NFbUtuNzEzN2hDODlnbDVhZWlNZXNFWTk0SE1hUy9QMEhkCnE1V1FBREk1cndJREFRQUJvMzh3ZlRBT0JnTlZIUThCQWY4RUJBTUNCYUF3SFFZRFZSMGxCQll3RkFZSUt3WUIKQlFVSEF3RUdDQ3NHQVFVRkJ3TUNNQXdHQTFVZEV3RUIvd1FDTUFBd0hRWURWUjBPQkJZRUZCNTF3QkFESzhJZAo1MWt5N01xM0tHMWJVVHl3TUI4R0ExVWRJd1FZTUJhQUZQcUJ0cGoyNUdTOTY4UXJhUnUvdFZzYzBSRnhNQTBHCkNTcUdTSWIzRFFFQkN3VUFBNElCQVFCbmpHc0Iyd3RubnZFVjIzZ0h2SnpQcUoxNXk3b3dLK1lSbjJtV2wydnYKV2RQWHdhNnRvOGdNV0RVK2hpeTdkVHFtTFloOUlpWG5PUlNMWDRXZHFVQVNMUjFaVUNYekRyc0xFRlZQVjFaawpYR1d5bGVUVG1meWw3Z0ZoSmx2WWw0SjVDenpucDNNVGFDeDJRUzZqWW1qNUczSlRNWUNnNm9hZzNDVlVNZzg2CjNUOFFQZ0V6Tm1rb1AvNzd1RW1Jb1hCMm94U1UvTVdLY1Nyb0JiN1VDblBvVEQ5dll3b2UybmJhMlFUYkxRRWwKTGsva2hCT1E3d1FvZFVpY2tyV1BaQ1hVcFJjMGdobzZzR2hwYmdHam5HRmVITGxPeGl4d1poeXlqZUpXMzdhaQo0Z1Azc2wycWt4bFV0T1loNlRaTys0SlRpZzNtTW9JZ1NubHdiOEFTa3JWRgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBcnc4L0NXN3c1blFRdTFrYkRqVkVEMCtSaWtaT29lTjY3WTd3Q0xucmxENVFGekV4ClJpZG5sR2o3c0ZJSUNmNFQyMVVjSkM4ZFlDRTlHcDdPYytmL0RselpIMi8zYW5QSFNlU0pjSEZNT05pL2U5c0YKZ2NHV1ZHdXRVWC9WZ01VdVpFdVRkc0VDQ0lMcE54YnZ6WEhLWjZoSG4yTjhHSU1OR3k1WGdiNnFYUVhxYnZScgowbERqeUl3OFdvRTBCZnoxZ3Z0VjMvVG5GSkFFTUE3cVVjYTVyQnVqZi9acHNJRklGdVFqYVFjNVgyNkhUNFA1CjVSZkl2OCtQRWpwcGNiaFNvcmowUzlYU3ZtL2R6UlUxTnBCbnQ2MXlSdmRMalZhZWpab0ZjeFlFb2NFbUtuNzEKMzdoQzg5Z2w1YWVpTWVzRVk5NEhNYVMvUDBIZHE1V1FBREk1cndJREFRQUJBb0lCQUJCSk9kTVYyQ0dJY0xvTgpPeUFpUW5ldUxsc1AyV2JrTTk1LzZzTFZFUjZVZ1h6MjNaK3FNTSsweUoySnRDZkIxSFVXUU96NDJTSEZWZHJ4CkpVSFJOb0JPa1FDRXVSN1ZNSmdtUThjTE0wMGlsUVhmeFc1aDVTdHJiUTlrOWlicHNUd3hiOEdmaVNIamsvREYKR0lBamN2SWJ6TFgrV21BcGFRRzdXUGJBRnpkYUd3dlhjaFFHaGZtVjdyRFlxekNIeTBic2RRTzkyaUkzSXRQbwpWS3BoSWwzMzI2czVvdWFYbkJxV3VwMFZJRURYNHBQSm9YM0FzY0hIODl6ODlGUVVMTVZDNCtlbkNFZ21NbkFHCmhwSStBajc0dzI3cnVkd0RpSitTVmFTRTY1dWg2UTR2MFR0UnBmM2FOSXBCRVNoMkdmbVd4UFp6QlhRdzlsUDMKbEszTFZBa0NnWUVBNDY2MDRLQzFpTE9iWUEyOXlKOFRpdHBFdmNTSGN1d1JrTmcvRXdoeEVIZ054Ym5uMzZ1Vwp1ZHBTT1p1QWQ4VlhEYkdTSGRNWWRKcFh5aUx1Um91bWhPc09MRkRybExYVkJrQldpMTNQNDRUS01sWTdnMUpuClkvL3VIbGQrdll5b1B3cW1nQWpjbE9Pa09hNU9nNU15OXVrS240RmV1TCs3dmVWd0tiZlZvWVVDZ1lFQXhOVU4KdFI2djQwa3RlcnEyNTloOVdSUDJIOWgyWndOZGg3OU9LSXVPMFN2bUtUanNPdElNS0o0ZFkzTVljS2FPYUxXbwp2MEFpWGY1WTdGSmxlOVBlSDNNU3ZIUGllakhSRzQrUE8rNmxOQ3UxRm8yNExqQUxDamlRV1lmbFhxZ0lkT25RCkFOQTZwTGlRM2d3SWZGQ1JrczNqZHBYSHBWOE13ZVJobFVCWGVxTUNnWUVBdFgyaFAyRzc4MEZBZkp2WGlhR00KZ1dXbDRDTlYyVHptYjdDQTd0b096cEwwWDRYbW1Mdjl4UjZMNXRIVzRTSlVWMTBSM1daVkd6V2cvMGRDNnNjTgpNT3p4K2s5eXlyTDdJU1dPRnovcnBEQkl3VUZONVV0OWtSQUVydmtOMVdqWEFKR3IwV20rODR4V2I0aExtOFJ0Cm5yWjdPbFIwdmc1UVNIb3BJNGdmNmNVQ2dZQXJHRzY0NGpBbWRtWXp3ZCs4SVdWSWRKdGwyNUlJK2U2bmd4Wk0Kd0VtVHVLWGJEckNDTEcwbkUzOWh2OWh4Q2JhU2JIdTI3QWJhUjQ4V3B1KzdUZWNMUWJtdmN6djUveUJHaFljWgoyeVZtcDg4dFVmZ3FmTEJlRzRaWFkrNnZhK0QySUI4L25sZklxdlJrK1lOK0hISFRENnNtMHFKMHJidndVOTJkCnZRbXFPd0tCZ1FDTjR4aXJpZXcwdVltZitpbFNVUVgrTEdsTFJFQjIyZ1U3SW9pSGFKWTJXMmNDcG1raVdKT3AKSktFVHgwdmNUSi9NandpbzV3MVJxeXNqeDFyTlFkczM3cGZLMDVSc29vLzlrZlpzS2dXU3FrOEJhc3RReE5zTQp0TWhvelpBNFBlTTExTmlPZTAxOUdGUStiRmc1WHBSbWw3L0hnaU92T2VQREF1SjZYandvSXc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

3. Error

[dev] [root@k8s-node1 ~]# ansible-playbook  k8s.yml
[WARNING]:  * Failed to parse /etc/ansible/hosts with k8s plugin: Syntax Error while loading YAML.   expected
'<document start>', but found '<scalar>'  The error appears to be in '/etc/ansible/hosts': line 2, column 1, but may
be elsewhere in the file depending on the exact syntax problem.  The offending line appears to be:  [test]
192.168.10.130 ^ here
[WARNING]: Unable to parse /etc/ansible/hosts as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not
match 'all'

PLAY [localhost] ****************************************************************************************************

TASK [Ensure the myapp Namespace exists.] ***************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: argument of type 'NoneType' is not iterable
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n  File \"/tmp/ansible_k8s_payload__9mlod7c/ansible_k8s_payload.zip/ansible_collections/community/kubernetes/plugins/module_utils/common.py\", line 193, in get_api_client\n  File \"/data/apps/python3/lib/python3.6/site-packages/kubernetes/config/incluster_config.py\", line 96, in load_incluster_config\n    cert_filename=SERVICE_CERT_FILENAME).load_and_set()\n  File \"/data/apps/python3/lib/python3.6/site-packages/kubernetes/config/incluster_config.py\", line 47, in load_and_set\n    self._load_config()\n  File \"/data/apps/python3/lib/python3.6/site-packages/kubernetes/config/incluster_config.py\", line 53, in _load_config\n    raise ConfigException(\"Service host/port is not set.\")\nkubernetes.config.config_exception.ConfigException: Service host/port is not set.\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File \"/root/.ansible/tmp/ansible-tmp-1586849534.6105824-263930027939916/AnsiballZ_k8s.py\", line 102, in <module>\n    _ansiballz_main()\n  File \"/root/.ansible/tmp/ansible-tmp-1586849534.6105824-263930027939916/AnsiballZ_k8s.py\", line 94, in _ansiballz_main\n    invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n  File \"/root/.ansible/tmp/ansible-tmp-1586849534.6105824-263930027939916/AnsiballZ_k8s.py\", line 40, in invoke_module\n    runpy.run_module(mod_name='ansible_collections.community.kubernetes.plugins.modules.k8s', init_globals=None, run_name='__main__', alter_sys=True)\n  File \"/data/apps/python3/lib/python3.6/runpy.py\", line 205, in run_module\n    return _run_module_code(code, init_globals, run_name, mod_spec)\n  File \"/data/apps/python3/lib/python3.6/runpy.py\", line 96, in _run_module_code\n    mod_name, mod_spec, pkg_name, script_name)\n  File \"/data/apps/python3/lib/python3.6/runpy.py\", line 85, in _run_code\n    exec(code, run_globals)\n  File \"/tmp/ansible_k8s_payload__9mlod7c/ansible_k8s_payload.zip/ansible_collections/community/kubernetes/plugins/modules/k8s.py\", line 273, in <module>\n  File \"/tmp/ansible_k8s_payload__9mlod7c/ansible_k8s_payload.zip/ansible_collections/community/kubernetes/plugins/modules/k8s.py\", line 269, in main\n  File \"/tmp/ansible_k8s_payload__9mlod7c/ansible_k8s_payload.zip/ansible_collections/community/kubernetes/plugins/module_utils/raw.py\", line 174, in execute_module\n  File \"/tmp/ansible_k8s_payload__9mlod7c/ansible_k8s_payload.zip/ansible_collections/community/kubernetes/plugins/module_utils/common.py\", line 195, in get_api_client\n  File \"/data/apps/python3/lib/python3.6/site-packages/kubernetes/config/kube_config.py\", line 743, in load_kube_config\n    loader.load_and_set(config)\n  File \"/data/apps/python3/lib/python3.6/site-packages/kubernetes/config/kube_config.py\", line 551, in load_and_set\n    self._load_cluster_info()\n  File \"/data/apps/python3/lib/python3.6/site-packages/kubernetes/config/kube_config.py\", line 517, in _load_cluster_info\n    file_base_path=base_path).as_file()\n  File \"/data/apps/python3/lib/python3.6/site-packages/kubernetes/config/kube_config.py\", line 100, in __init__\n    if data_key_name in obj:\nTypeError: argument of type 'NoneType' is not iterable\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

PLAY RECAP **********************************************************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   
