
ubuntu18-cis's Introduction

UBUNTU18 CIS

Configure an Ubuntu 18.04 machine to be CIS compliant




Looking for support?

Lockdown Enterprise

Ansible support

Community

Join us on our Discord Server to ask questions, discuss features, or just chat with other Ansible-Lockdown users.


Caution(s)

This role will make changes to the system which may have unintended consequences. This is not an auditing tool but rather a remediation tool to be used after an audit has been conducted.

Check Mode is not supported! The role will complete in check mode without errors, but it is not supported and should be used with caution. For compliance checking, the UBUNTU18-CIS-Audit role or a compliance scanner should be used instead of check mode.

This role was developed against a clean install of the Operating System. If you are applying it to an existing system, please review this role for any site-specific changes that are needed.

To use a release version, please point to the main branch and the relevant release tag for the CIS benchmark you wish to work with.


Matching a Security Level for CIS

It is possible to run only level 1 or level 2 controls for CIS. This is managed using tags:

  • level1_server
  • level1_workstation
  • level2_server
  • level2_workstation

The controls found in defaults/main.yml also need to reflect this, as they control the testing that takes place if you are using the audit component.

Coming from a previous release

Each CIS release contains changes, so it is highly recommended to review the new references and available variables. These have changed significantly since ansible-lockdown's initial release. The role is now compatible with python3 if it is found to be the default interpreter. This does come with prerequisites, which the role configures on the system accordingly.

Further details can be seen in the Changelog

Auditing (new)

This can be turned on or off within the defaults/main.yml file with the variable run_audit. The value is false by default, please refer to the wiki for more details. The defaults file also populates the goss checks to check only the controls that have been enabled in the ansible role.

This is a much quicker, very lightweight check of config compliance and live/running settings (where possible).

A new form of auditing has been developed using a small (12MB) Go binary called goss, along with the relevant configurations to check, without the need for extra infrastructure or other tooling. This audit not only checks that the config has the correct setting, but also aims to capture whether the system is running with that configuration, trying to remove false positives in the process.

Refer to UBUNTU18-CIS-Audit.

Example Audit Summary

This is based on a vagrant image with certain selections enabled (e.g. no GUI or firewall). Note: more tests are run during the audit, as we check both config and running state.

ok: [default] => {
    "msg": [
        "The pre remediation results are: ['Total Duration: 5.454s', 'Count: 338, Failed: 47, Skipped: 5'].",
        "The post remediation results are: ['Total Duration: 5.007s', 'Count: 338, Failed: 46, Skipped: 5'].",
        "Full breakdown can be found in /var/tmp",
        ""
    ]
}

PLAY RECAP *******************************************************************************************************************************************
default                    : ok=270  changed=23   unreachable=0    failed=0    skipped=140  rescued=0    ignored=0

Documentation

Requirements

General:

  • Basic knowledge of Ansible; below are some links to the Ansible documentation to help you get started if you are unfamiliar with Ansible

  • A functioning Ansible and/or Tower installation, configured and running. This includes all of the base Ansible/Tower configuration, needed packages installed, and infrastructure set up.

  • Please read through the tasks in this role to gain an understanding of what each control is doing. Some of the tasks are disruptive and can have unintended consequences on a live production system. Also familiarize yourself with the variables in the defaults/main.yml file.

Technical Dependencies:

  • Access to download or add the goss binary and content to the system if using auditing (other options are available for getting the content to the system)
  • Python3
  • Ansible 2.10.1+
  • python-def
  • libselinux-python

Role Variables

This role is designed so that the end user should not have to edit the tasks themselves. All customization should be done via the defaults/main.yml file or with extra vars within the project, job, workflow, etc.
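As a sketch of that approach, defaults can be overridden from the playbook without touching the role (ubtu18cis_rule_1_5_3 appears in this role's defaults; the values shown are illustrative):

```yaml
# Playbook sketch: override defaults/main.yml values via role/play vars
# instead of editing the role itself.
- hosts: all
  become: true
  vars:
      ubtu18cis_rule_1_5_3: false   # disable a single control for this run
  roles:
      - role: UBUNTU18-CIS
```

The same variables can equally be supplied as extra vars (`-e`) or in group_vars, so the role stays untouched across upgrades.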

Tags

There are many tags available for added control precision. Each control has its own set of tags noting the level, whether it is scored/not scored, what OS element it relates to, whether it is a patch or audit task, and the rule number.

Below is an example of the tag section from a control within this role. Using this example, if you set your run to skip all controls with the tag services, this task will be skipped. The opposite is also possible: running only controls tagged with services.

      tags:
      - level1-server
      - level1-workstation
      - scored
      - avahi
      - services
      - patch
      - rule_2.2.4

Community Contribution

We encourage you (the community) to contribute to this role. Please read the rules below.

  • Your work is done in your own individual branch. Make sure to sign off (Signed-off-by) and GPG-sign all commits you intend to merge.
  • All community pull requests are pulled into the devel branch.
  • Pull requests into devel will confirm that your commits have a GPG signature and Signed-off-by line, and pass a functional test, before being approved.
  • Once your changes are merged and a more detailed review is complete, an authorized member will merge your changes into the main branch for a new release.

Known Issues

cloud-init - due to a bug, this will stop working if noexec is added to /var (ubtu18cis_rule_1_1_3_3).

bug 1839899

Pipeline Testing

uses:

  • ansible-core 2.16
  • ansible collections - pulls in the latest version based on requirements file
  • runs the audit using the devel branch
  • This is an automated test that occurs on pull requests into devel

Added Extras

  • pre-commit can be tested and run from within the repository directory:
pre-commit run

ubuntu18-cis's People

Contributors

carnells, cawamata, dderemiah, georgenalen, hankszeto, mrsteve81, pre-commit-ci[bot], uk-bolly


ubuntu18-cis's Issues

`5.6 | Ensure access to the su command is restricted` tasks do not account for lines commented out in pam.d

The tasks for this CIS criterion first grep the /etc/pam.d/su file and do not take into account a line that may be commented out, so the tasks get skipped, e.g.:
# auth required pam_wheel.so

      - name: "SCORED | 5.6 | PATCH | Ensure access to the su command is restricted | Check for pam_wheel.so module"
        command: grep 'auth.*required.*pam_wheel' /etc/pam.d/su
        changed_when: false
        failed_when: false
        register: ubtu18cis_5_6_pam_wheel_status

      - name: "SCORED | 5.6 | PATCH | Ensure access to the su command is restricted | Set pam_wheel if does not exist"
        lineinfile:
            path: /etc/pam.d/su
            line: 'auth       required   pam_wheel.so use_uid group={{ ubtu18cis_su_group }}'
            create: yes
        when: ubtu18cis_5_6_pam_wheel_status.stdout == ""

It seems an update is needed to check whether the line is commented out and to uncomment it, so that the control is applied properly.
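One possible approach (a sketch, not the role's actual fix) is to first uncomment any existing pam_wheel line, so the subsequent grep-based check sees it; the regexp and task name here are illustrative:

```yaml
# Sketch: uncomment a commented-out pam_wheel line in /etc/pam.d/su
# before the existing check runs. Regexp is illustrative.
- name: "5.6 | Uncomment existing pam_wheel line if present"
  replace:
      path: /etc/pam.d/su
      regexp: '^#\s*(auth\s+required\s+pam_wheel\.so.*)$'
      replace: '\1'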

Task for CIS 4.1.14 checks the wrong variable in when

Looks like the task for CIS 4.1.14 has a when clause referencing the ubtu18cis_rule_4_1_13 variable instead of the ubtu18cis_rule_4_1_14 variable:

- name: "SCORED | 4.1.14 | PATCH | Ensure changes to system administration scope (sudoers) is collected"
  template:
      src: audit/ubtu18cis_4_1_14_scope.rules.j2
      dest: /etc/audit/rules.d/scope.rules
      owner: root
      group: root
      mode: 0600
  notify: restart auditd
  when:
      - ubtu18cis_rule_4_1_13
  tags:
      - level2-server
      - level2-workstation
      - scored
      - patch
      - rule_4.1.14
      - auditd

audit.rules not generating properly in Ubuntu 18.04

Not sure if this is an Ubuntu 18.04 or augenrules bug in auditd 2.8.2, but for some reason, when the different /etc/audit/rules.d/ files from this Ansible role are processed into /etc/audit/audit.rules, some of the lines get concatenated together.

AFAICT it seems to only affect the /etc/audit/rules.d/ files that do not have an empty line at the end of the file.

Here are some examples of different files with rules that are not processing correctly into /etc/audit/audit.rules:

# /etc/audit/rules.d/identity.rules
-w /etc/group -p wa -k identity
-w /etc/passwd -p wa -k identity
-w /etc/gshadow -p wa -k identity
-w /etc/shadow -p wa -k identity
-w /etc/security/opasswd -p wa -k identity

# /etc/audit/rules.d/logins.rules
-w /var/log/faillog -p wa -k logins
-w /var/log/lastlog -p wa -k logins
-w /var/log/tallylog -p wa -k logins

# /etc/audit/audit.rules
-w /etc/group -p wa -k identity
-w /etc/passwd -p wa -k identity
-w /etc/gshadow -p wa -k identity
-w /etc/shadow -p wa -k identity
-w /etc/security/opasswd -p wa -k identity-w /var/log/faillog -p wa -k logins
-w /var/log/lastlog -p wa -k logins
-w /var/log/tallylog -p wa -k logins-w /sbin/insmod -p x -k modules
-w /sbin/rmmod -p x -k modules

A potential fix/workaround may be to add a blank line to the end of each of the audit rules templates to ensure they are processed correctly into /etc/audit/audit.rules.

Best way to update a dictionary value of a default variable?

The default values in defaults/main.yml are correct for 95% of the options we want, but there are a few that we'd like to change.

It's easy to override the default setting for true/false variables, like ubtu18cis_rule_1_5_3: false.

However, there doesn't seem to be an obvious way to change a setting that's stored in a dictionary, like ubtu18cis_sshd.

To change the ubtu18cis_sshd.allow_users setting in a playbook, it seems that my options are either:

  1. Manually change the setting in default/main.yml
  2. Copy the entire ubtu18cis_sshd dictionary into the playbook and edit the value.

The combine filter looked promising, but my attempt created a recursion error:

   - role: UBUNTU18-CIS
     ubtu18cis_sshd: '{{ ubtu18cis_sshd | combine({"allow_users":"user1 user2"}) }}'

Is there a way to avoid duplicating code and only modify the value I want to change?

Update:

It appears it's possible to update a value by using set_fact to avoid the recursion.

vars:
  sshd_overrides:
    allow_users: "user1 user2"

pre_tasks:
  - name: Update sshd configuration dictionary
    set_fact:
      ubtu18cis_sshd: "{{ ubtu18cis_sshd|combine(sshd_overrides) }}"
  
roles:
  - role: UBUNTU18-CIS

Is there a better way or should I keep using this? If this is the best, I can document it in the readme since there are a number of dictionaries that users may want to modify.

`5.4.4 | Ensure default user umask is 027 or more restrictive` task assumes umask already present in bash.rc

The task for this CIS criterion uses replace, assuming that there is already a umask setting in /etc/bash.bashrc and /etc/profile, but does not add the umask setting if it's missing (it seems to be missing from /etc/bash.bashrc on a fresh Ubuntu 18.04 install), e.g.:

- name: "SCORED | 5.4.4 | PATCH | Ensure default user umask is 027 or more restrictive"
  replace:
      path: "{{ item }}"
      regexp: '(^\s+umask) 002'
      replace: '\1 027'
  with_items:
      - /etc/bash.bashrc
      - /etc/profile

An update is needed to ensure that the proper umask setting is added to these files if one does not already exist.
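A lineinfile-based task (sketched here, not the role's actual code) would both fix an existing umask line and add one when it is absent; the regexp is illustrative:

```yaml
# Sketch: ensure a umask 027 line exists in each file, replacing any
# existing umask setting rather than only matching "umask 002".
- name: "5.4.4 | Ensure default user umask is 027 or more restrictive"
  lineinfile:
      path: "{{ item }}"
      regexp: '^\s*umask\s+\d+'
      line: 'umask 027'
  with_items:
      - /etc/bash.bashrc
      - /etc/profile
```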

Update layout to match RHEL7 and 8 CIS

We need to update the layout to match the RHEL7 and 8 layout, where controls are broken out into files that are subsections. For example, control 2.1.1 will be in tasks/section_1/cis_2.1.x.yml. This will make edits/updates to controls a bit easier by separating them out.

MTA services should be removed if ubtu18cis_mail_server: false like other server vars

Feature Request or Enhancement

  • Feature []
  • Enhancement [x]

Summary of Request
Following up from #48: the Ansible role currently does not remove MTA services if ubtu18cis_mail_server is false.

The ubtu18cis_mail_server var is confusing, as the other server vars will remove their corresponding services if false, but the ubtu18cis_mail_server var does not.

There's also a misleading comment in the defaults file where the vars are specified:
# Service configuration variables, set to true to keep service

For instance if ubtu18cis_dhcp_server is false then there is a task to ensure the service is removed:

- name: "AUTOMATED | 2.1.5 | PATCH | Ensure DHCP Server is not installed"
  apt:
      name: isc-dhcp-server
      state: absent
  when:
      - ubtu18cis_rule_2_1_5
      - not ubtu18cis_dhcp_server
  tags:
      - level1-server
      - level1-workstation
      - automated
      - patch
      - rule_2.2.5
      - dhcp
      - services

Ideally the Ansible role would handle the ubtu18cis_mail_server var just like the other server vars.
ie. if the ubtu18cis_mail_server variable is set to false then exim4 and postfix should be absent/removed

Describe alternatives you've considered
For now we have an additional role/task that runs after the lockdown role to remove MTA services

Suggested Code
As noted in #48 there is no specific CIS criteria for removing/uninstalling MTA services, so not entirely sure on the task naming here:

- block:
      - name: "AUTOMATED | 2.1.15 | PATCH | Ensure mail transfer agent is configured for local-only mode | remove exim4"
        apt:
            name: exim4
            state: absent

      - name: "AUTOMATED | 2.1.15 | PATCH | Ensure mail transfer agent is configured for local-only mode | remove postfix"
        apt:
            name: postfix
            state: absent
  when:
      - ubtu18cis_rule_2_1_15
      - not ubtu18cis_mail_server
  tags:
      - level1-server
      - level1-workstation
      - automated
      - patch
      - rule_2.2.15
      - postfix
      - services

Ensure Root Path Integrity

Control 6.2.7 - Ensure root PATH Integrity

We need to figure out the right Jinja2 filter to use. I pull the paths from $PATH. The control wants all of those paths to exist, so I run a stat on each, which creates a nested dictionary. I can get the role to do what I need, but I want to message out the list of paths that don't exist. I can list the "exists" value or the "item" value with map(attribute=...), but I can't list the "item" value only where "exists" is false. This is the gist of what I'm trying to do: {{ ubtu18cis_6_2_7_path_stat.results | selectattr('stat.exists','equalto','false') | map(attribute='item') | list }}. "item" is the path (even if the file doesn't exist) and lives at ubtu18cis_6_2_7_path_stat.results.item, while "exists" lives at ubtu18cis_6_2_7_path_stat.results.stat.exists. With that, I'm trying to get just the dictionary entries where "exists" is false and list their "item" value. Hopefully that makes sense; I've done this before with single-level dictionaries, but I can't figure it out with the nested ones.
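One likely culprit (an observation, not a confirmed fix): 'false' in quotes is the string "false", which never equals the boolean false that stat returns, so the selectattr match comes back empty. Comparing against the boolean, or using rejectattr to drop the truthy entries, should work; the debug task below is an illustrative sketch using the variable name from the issue:

```yaml
# Sketch: list the paths whose stat came back with exists == false.
# rejectattr keeps the entries where 'stat.exists' is falsy.
- debug:
      msg: "{{ ubtu18cis_6_2_7_path_stat.results
               | rejectattr('stat.exists')
               | map(attribute='item')
               | list }}"
```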

Set Bootloader PW

Control 1.5.2 - Ensure bootloader password is set

You need to set the boot loader password for grub. There is a custom module for RHEL that is used for this task; I was not able to get it working with Ubuntu, and I am not sure why. We can use shell/command with the grub-mkpasswd-pbkdf2 command; however, that command requires the password to be entered twice, and I haven't found a good way to handle that on the Ansible side. The custom module creates a randomized password to use and hides that random password, which is great and what I hope to achieve for this.
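grub-mkpasswd-pbkdf2 reads the password (twice) from stdin, so one way to script it (a sketch, untested against this role; the variable name ubtu18cis_bootloader_password is hypothetical) is to pipe the same value in twice and extract the hash:

```yaml
# Sketch: generate a grub PBKDF2 hash non-interactively by feeding the
# password to grub-mkpasswd-pbkdf2 on stdin, then parsing the hash out.
- name: "1.5.2 | Generate grub PBKDF2 password hash"
  shell: >
      set -o pipefail;
      printf '%s\n%s\n' '{{ ubtu18cis_bootloader_password }}' '{{ ubtu18cis_bootloader_password }}'
      | grub-mkpasswd-pbkdf2
      | awk '/grub\.pbkdf2/ { print $NF }'
  args:
      executable: /bin/bash
  register: ubtu18cis_grub_pw_hash
  changed_when: false
  no_log: true
```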

Ubuntu 18.04 Pre-audit failing

TASK [/Users/chetan/Documents/UBUNTU18-CIS : capture data /var/tmp/ubuntu-VirtualBox_pre_scan_1663251929.json] ************************************************************************************
ok: [192.168.1.48]

TASK [/Users/chetan/Documents/UBUNTU18-CIS : Pre Audit | Capture pre-audit result] ****************************************************************************************************************
fatal: [192.168.1.48]: FAILED! => {"msg": "the field 'args' has an invalid value ({'pre_audit_summary': '{{ pre_audit.stdout | from_json |json_query(summary) }}'}), and could not be converted to an dict.The error was: Expecting value: line 1 column 1 (char 0)\n\nThe error appears to be in '/Users/chetan/Documents/UBUNTU18-CIS/tasks/pre_remediation_audit.yml': line 99, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n  - name: Pre Audit | Capture pre-audit result\n    ^ here\n"}

PLAY RECAP ****************************************************************************************************************************************************************************************
192.168.1.48               : ok=14   changed=2    unreachable=0    failed=1    skipped=11   rescued=0    ignored=0

It worked initially but failed on the second attempt.

Ensure Lockout for Failed PW Attempts

Control 5.3.2 - Ensure lockout for failed password attempts is configured

When I add the account requisite pam_deny.so module, it locks out all users. I can't log in with root or vagrant locally on my vagrant image, or over SSH. I have added the allow users/groups items for pam, but that doesn't seem to help. There is a note about a bug in the pam_tally2.so module, but I don't think that is related; I have the workaround for that set up on the control.

`wheel` group being used for limiting su access which may not be empty

CIS 5.7 requires that access to the su command is restricted, and that an empty user group is used to ensure no users have access to run su directly:

Ensure access to the su command is restricted (Automated)

Remediation:
Create an empty group that will be specified for use of the su command. The group should
be named according to site policy.

Example:
# groupadd sugroup

Add the following line to the /etc/pam.d/su file, specifying the empty group:

Example:
auth required pam_wheel.so use_uid group=sugroup

Currently the UBUNTU18-CIS role adds the necessary line to /etc/pam.d/su but specifies the wheel group, which may or may not be empty:
auth required pam_wheel.so use_uid group=wheel

pam_pwhistory (rule 5.3.3) is added at the wrong place.

Currently, pam_pwhistory is inserted after ^# end of pam-auth-update config, which isn't the right approach and causes chpasswd to always output Password has been already used..

Instead, it should probably be added before pam_unix (see the corresponding man page).
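A lineinfile-based sketch of that placement (illustrative, not the role's actual task; line content and remember value are assumptions):

```yaml
# Sketch: insert pam_pwhistory before the first pam_unix password line
# in common-password, rather than after the pam-auth-update marker.
- name: "5.3.3 | Insert pam_pwhistory before pam_unix"
  lineinfile:
      path: /etc/pam.d/common-password
      line: 'password    required    pam_pwhistory.so remember=5'
      insertbefore: '^password\s+.*pam_unix\.so'
      firstmatch: true
```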

Thanks.

`Ensure core dumps are restricted` task (1.6.4) missing changes needed in `/etc/security/limits.conf`

It seems the SCORED | 1.6.4 | PATCH | Ensure core dumps are restricted task is only partially complete: it sets fs.suid_dumpable via sysctl but misses the changes needed in /etc/security/limits.conf.

These are the full remediation steps per the CIS benchmark for core dumps:

Remediation:
Add the following line to /etc/security/limits.conf or a /etc/security/limits.d/* file:
* hard core 0

Set the following parameter in /etc/sysctl.conf or a /etc/sysctl.d/* file:
fs.suid_dumpable = 0

Run the following command to set the active kernel parameter:
# sysctl -w fs.suid_dumpable=0

If systemd-coredump is installed:
edit /etc/systemd/coredump.conf and add/modify the following lines:

Storage=none
ProcessSizeMax=0

Run the command:
systemctl daemon-reload
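The missing pieces of the remediation could be sketched as additional tasks (illustrative only; task names are made up and the notify handler is hypothetical):

```yaml
# Sketch of the remaining 1.6.4 remediation steps.
- name: "1.6.4 | Add hard core limit to /etc/security/limits.conf"
  lineinfile:
      path: /etc/security/limits.conf
      regexp: '^\*\s+hard\s+core\s+'
      line: '* hard core 0'

- name: "1.6.4 | Restrict systemd-coredump storage"
  lineinfile:
      path: /etc/systemd/coredump.conf
      regexp: '^#?\s*{{ item.key }}='
      line: '{{ item.key }}={{ item.value }}'
  loop:
      - { key: Storage, value: none }
      - { key: ProcessSizeMax, value: '0' }
  notify: systemd daemon reload   # hypothetical handler name
```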

Incorrect permissions with `5.1.8 | Ensure at/cron is restricted to authorized users` task

The task for this CIS criterion sets the cron file permissions to 0640 when they should be 0600 instead:

      - name: "SCORED | 5.1.8 | PATCH | Ensure at/cron is restricted to authorized users | Create allow files"
        file:
            path: "{{ item }}"
            owner: root
            group: root
            mode: 0640
            state: touch
        with_items:
            - /etc/cron.allow
            - /etc/at.allow

Audit:
Run the following command and verify Uid and Gid are both 0/root and Access does not grant permissions to group or other for both /etc/cron.allow and /etc/at.allow :

stat /etc/cron.allow

Access: (0600/-rw-------) Uid: ( 0/ root) Gid: ( 0/ root)

stat /etc/at.allow

Access: (0600/-rw-------) Uid: ( 0/ root) Gid: ( 0/ root)
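The fix is simply to set mode 0600; a corrected version of the task above might look like:

```yaml
- name: "SCORED | 5.1.8 | PATCH | Ensure at/cron is restricted to authorized users | Create allow files"
  file:
      path: "{{ item }}"
      owner: root
      group: root
      mode: 0600        # was 0640; CIS expects no group/other access
      state: touch
  with_items:
      - /etc/cron.allow
      - /etc/at.allow
```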

When enabling ufw some sysctl settings get overridden causing CIS failures

When ufw is enabled, it includes its own sysctl settings, which override some of the settings needed for the CIS benchmark, e.g.:

  • log_martians for SCORED | 3.2.4 | PATCH | Ensure suspicious packets are logged

https://serverfault.com/questions/745995/enabling-ufw-disables-some-of-the-settings-in-sysctl-conf

If ufw is enabled then it seems the Ansible role will need to also update any relevant sysctl settings in /etc/ufw/sysctl.conf
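A sketch of that update (illustrative, covering only the log_martians keys for rule 3.2.4; note that /etc/ufw/sysctl.conf uses slash-separated key names):

```yaml
# Sketch: mirror the CIS sysctl values into ufw's own sysctl file so they
# are not overridden when ufw starts.
- name: "3.2.4 | Keep log_martians set when ufw is enabled"
  lineinfile:
      path: /etc/ufw/sysctl.conf
      regexp: '^#?\s*{{ item }}\s*='
      line: '{{ item }}=1'
  loop:
      - net/ipv4/conf/all/log_martians
      - net/ipv4/conf/default/log_martians
```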

SSH connections break after homedir checks

Describe the Issue
SSH connections stop working during step 6.2.5, "Ensure users own their home directories".

Expected Behavior
Control 6.2.5 passes; ssh continues working.

Actual Behavior

TASK [UBUNTU18-CIS : AUTOMATED | 6.2.5 | PATCH | Ensure users own their home directories] **********************************************************************
task path: /ansible/roles/UBUNTU18-CIS/tasks/section_6/cis_6.2.x.yml:130
[...]
ok: [target] => (item=landscape: /var/lib/landscape) => {"ansible_loop_var": "item", "changed": false, "gid": 112, "group": "landscape", "item": {"dir": "/var/lib/landscape", "gecos": "", "gid": 112, "id": "landscape", "password": "x", "shell": "/usr/sbin/nologin", "uid": 108}, "mode": "0755", "owner": "landscape", "path": "/var/lib/landscape", "size": 4096, "state": "directory", "uid": 108}
fatal: [target]: FAILED! => {"msg": "Failed to connect to the host via ssh: kex_exchange_identification: Connection closed by remote host"}

(The next user in /etc/passwd is sshd, with a homedir of /run/sshd. When the owner of /run/sshd is changed to the sshd user, new ssh connections immediately stop working. Changing the user back to root:root restores functionality.)

more output
TASK [UBUNTU18-CIS : AUTOMATED | 6.2.5 | PATCH | Ensure users own their home directories] **********************************************************************
task path: /ansible/roles/UBUNTU18-CIS/tasks/section_6/cis_6.2.x.yml:130
ok: [target] => (item=root: /root) => {"ansible_loop_var": "item", "changed": false, "gid": 0, "group": "root", "item": {"dir": "/root", "gecos": "root", "gid": 0, "id": "root", "password": "x", "shell": "/bin/bash", "uid": 0}, "mode": "0700", "owner": "root", "path": "/root", "size": 4096, "state": "directory", "uid": 0}
changed: [target] => (item=daemon: /usr/sbin) => {"ansible_loop_var": "item", "changed": true, "gid": 0, "group": "root", "item": {"dir": "/usr/sbin", "gecos": "daemon", "gid": 1, "id": "daemon", "password": "x", "shell": "/usr/sbin/nologin", "uid": 1}, "mode": "0755", "owner": "daemon", "path": "/usr/sbin", "size": 12288, "state": "directory", "uid": 1}
changed: [target] => (item=bin: /bin) => {"ansible_loop_var": "item", "changed": true, "gid": 0, "group": "root", "item": {"dir": "/bin", "gecos": "bin", "gid": 2, "id": "bin", "password": "x", "shell": "/usr/sbin/nologin", "uid": 2}, "mode": "0755", "owner": "bin", "path": "/bin", "size": 4096, "state": "directory", "uid": 2}
changed: [target] => (item=sys: /dev) => {"ansible_loop_var": "item", "changed": true, "gid": 0, "group": "root", "item": {"dir": "/dev", "gecos": "sys", "gid": 3, "id": "sys", "password": "x", "shell": "/usr/sbin/nologin", "uid": 3}, "mode": "0755", "owner": "sys", "path": "/dev", "size": 4060, "state": "directory", "uid": 3}
changed: [target] => (item=sync: /bin) => {"ansible_loop_var": "item", "changed": true, "gid": 0, "group": "root", "item": {"dir": "/bin", "gecos": "sync", "gid": 65534, "id": "sync", "password": "x", "shell": "/bin/sync", "uid": 4}, "mode": "0755", "owner": "sync", "path": "/bin", "size": 4096, "state": "directory", "uid": 4}
changed: [target] => (item=games: /usr/games) => {"ansible_loop_var": "item", "changed": true, "gid": 0, "group": "root", "item": {"dir": "/usr/games", "gecos": "games", "gid": 60, "id": "games", "password": "x", "shell": "/usr/sbin/nologin", "uid": 5}, "mode": "0755", "owner": "games", "path": "/usr/games", "size": 4096, "state": "directory", "uid": 5}
ok: [target] => (item=man: /var/cache/man) => {"ansible_loop_var": "item", "changed": false, "gid": 12, "group": "man", "item": {"dir": "/var/cache/man", "gecos": "man", "gid": 12, "id": "man", "password": "x", "shell": "/usr/sbin/nologin", "uid": 6}, "mode": "0755", "owner": "man", "path": "/var/cache/man", "size": 4096, "state": "directory", "uid": 6}
ok: [target] => (item=lp: /var/spool/lpd) => {"ansible_loop_var": "item", "changed": false, "gid": 7, "group": "lp", "item": {"dir": "/var/spool/lpd", "gecos": "lp", "gid": 7, "id": "lp", "password": "x", "shell": "/usr/sbin/nologin", "uid": 7}, "mode": "0755", "owner": "lp", "path": "/var/spool/lpd", "size": 4096, "state": "directory", "uid": 7}
changed: [target] => (item=mail: /var/mail) => {"ansible_loop_var": "item", "changed": true, "gid": 8, "group": "mail", "item": {"dir": "/var/mail", "gecos": "mail", "gid": 8, "id": "mail", "password": "x", "shell": "/usr/sbin/nologin", "uid": 8}, "mode": "02775", "owner": "mail", "path": "/var/mail", "size": 4096, "state": "directory", "uid": 8}
ok: [target] => (item=news: /var/spool/news) => {"ansible_loop_var": "item", "changed": false, "gid": 9, "group": "news", "item": {"dir": "/var/spool/news", "gecos": "news", "gid": 9, "id": "news", "password": "x", "shell": "/usr/sbin/nologin", "uid": 9}, "mode": "0755", "owner": "news", "path": "/var/spool/news", "size": 4096, "state": "directory", "uid": 9}
ok: [target] => (item=uucp: /var/spool/uucp) => {"ansible_loop_var": "item", "changed": false, "gid": 10, "group": "uucp", "item": {"dir": "/var/spool/uucp", "gecos": "uucp", "gid": 10, "id": "uucp", "password": "x", "shell": "/usr/sbin/nologin", "uid": 10}, "mode": "0755", "owner": "uucp", "path": "/var/spool/uucp", "size": 4096, "state": "directory", "uid": 10}
changed: [target] => (item=proxy: /bin) => {"ansible_loop_var": "item", "changed": true, "gid": 0, "group": "root", "item": {"dir": "/bin", "gecos": "proxy", "gid": 13, "id": "proxy", "password": "x", "shell": "/usr/sbin/nologin", "uid": 13}, "mode": "0755", "owner": "proxy", "path": "/bin", "size": 4096, "state": "directory", "uid": 13}
ok: [target] => (item=www-data: /var/www) => {"ansible_loop_var": "item", "changed": false, "gid": 33, "group": "www-data", "item": {"dir": "/var/www", "gecos": "www-data", "gid": 33, "id": "www-data", "password": "x", "shell": "/usr/sbin/nologin", "uid": 33}, "mode": "0755", "owner": "www-data", "path": "/var/www", "size": 4096, "state": "directory", "uid": 33}
changed: [target] => (item=backup: /var/backups) => {"ansible_loop_var": "item", "changed": true, "gid": 0, "group": "root", "item": {"dir": "/var/backups", "gecos": "backup", "gid": 34, "id": "backup", "password": "x", "shell": "/usr/sbin/nologin", "uid": 34}, "mode": "0755", "owner": "backup", "path": "/var/backups", "size": 4096, "state": "directory", "uid": 34}
ok: [target] => (item=list: /var/list) => {"ansible_loop_var": "item", "changed": false, "gid": 38, "group": "list", "item": {"dir": "/var/list", "gecos": "Mailing List Manager", "gid": 38, "id": "list", "password": "x", "shell": "/usr/sbin/nologin", "uid": 38}, "mode": "0755", "owner": "list", "path": "/var/list", "size": 4096, "state": "directory", "uid": 38}
ok: [target] => (item=irc: /var/run/ircd) => {"ansible_loop_var": "item", "changed": false, "gid": 39, "group": "irc", "item": {"dir": "/var/run/ircd", "gecos": "ircd", "gid": 39, "id": "irc", "password": "x", "shell": "/usr/sbin/nologin", "uid": 39}, "mode": "0755", "owner": "irc", "path": "/var/run/ircd", "size": 100, "state": "directory", "uid": 39}
ok: [target] => (item=gnats: /var/lib/gnats) => {"ansible_loop_var": "item", "changed": false, "gid": 41, "group": "gnats", "item": {"dir": "/var/lib/gnats", "gecos": "Gnats Bug-Reporting System (admin)", "gid": 41, "id": "gnats", "password": "x", "shell": "/usr/sbin/nologin", "uid": 41}, "mode": "0755", "owner": "gnats", "path": "/var/lib/gnats", "size": 4096, "state": "directory", "uid": 41}
changed: [target] => (item=nobody: /nonexistent) => {"ansible_loop_var": "item", "changed": true, "gid": 107, "group": "messagebus", "item": {"dir": "/nonexistent", "gecos": "nobody", "gid": 65534, "id": "nobody", "password": "x", "shell": "/usr/sbin/nologin", "uid": 65534}, "mode": "0755", "owner": "nobody", "path": "/nonexistent", "size": 4096, "state": "directory", "uid": 65534}
ok: [target] => (item=systemd-network: /run/systemd/netif) => {"ansible_loop_var": "item", "changed": false, "gid": 102, "group": "systemd-network", "item": {"dir": "/run/systemd/netif", "gecos": "systemd Network Management,,,", "gid": 102, "id": "systemd-network", "password": "x", "shell": "/usr/sbin/nologin", "uid": 100}, "mode": "0755", "owner": "systemd-network", "path": "/run/systemd/netif", "size": 120, "state": "directory", "uid": 100}
ok: [target] => (item=systemd-resolve: /run/systemd/resolve) => {"ansible_loop_var": "item", "changed": false, "gid": 103, "group": "systemd-resolve", "item": {"dir": "/run/systemd/resolve", "gecos": "systemd Resolver,,,", "gid": 103, "id": "systemd-resolve", "password": "x", "shell": "/usr/sbin/nologin", "uid": 101}, "mode": "0755", "owner": "systemd-resolve", "path": "/run/systemd/resolve", "size": 80, "state": "directory", "uid": 101}
ok: [target] => (item=syslog: /home/syslog) => {"ansible_loop_var": "item", "changed": false, "gid": 106, "group": "syslog", "item": {"dir": "/home/syslog", "gecos": "", "gid": 106, "id": "syslog", "password": "x", "shell": "/usr/sbin/nologin", "uid": 102}, "mode": "0755", "owner": "syslog", "path": "/home/syslog", "size": 4096, "state": "directory", "uid": 102}
changed: [target] => (item=messagebus: /nonexistent) => {"ansible_loop_var": "item", "changed": true, "gid": 107, "group": "messagebus", "item": {"dir": "/nonexistent", "gecos": "", "gid": 107, "id": "messagebus", "password": "x", "shell": "/usr/sbin/nologin", "uid": 103}, "mode": "0755", "owner": "messagebus", "path": "/nonexistent", "size": 4096, "state": "directory", "uid": 103}
changed: [target] => (item=_apt: /nonexistent) => {"ansible_loop_var": "item", "changed": true, "gid": 107, "group": "messagebus", "item": {"dir": "/nonexistent", "gecos": "", "gid": 65534, "id": "_apt", "password": "x", "shell": "/usr/sbin/nologin", "uid": 104}, "mode": "0755", "owner": "_apt", "path": "/nonexistent", "size": 4096, "state": "directory", "uid": 104}
ok: [target] => (item=lxd: /var/lib/lxd/) => {"ansible_loop_var": "item", "changed": false, "gid": 65534, "group": "nogroup", "item": {"dir": "/var/lib/lxd/", "gecos": "", "gid": 65534, "id": "lxd", "password": "x", "shell": "/bin/false", "uid": 105}, "mode": "0755", "owner": "lxd", "path": "/var/lib/lxd/", "size": 4096, "state": "directory", "uid": 105}
changed: [target] => (item=uuidd: /run/uuidd) => {"ansible_loop_var": "item", "changed": true, "gid": 0, "group": "root", "item": {"dir": "/run/uuidd", "gecos": "", "gid": 110, "id": "uuidd", "password": "x", "shell": "/usr/sbin/nologin", "uid": 106}, "mode": "0755", "owner": "uuidd", "path": "/run/uuidd", "size": 60, "state": "directory", "uid": 106}
changed: [target] => (item=dnsmasq: /var/lib/misc) => {"ansible_loop_var": "item", "changed": true, "gid": 0, "group": "root", "item": {"dir": "/var/lib/misc", "gecos": "dnsmasq,,,", "gid": 65534, "id": "dnsmasq", "password": "x", "shell": "/usr/sbin/nologin", "uid": 107}, "mode": "0755", "owner": "dnsmasq", "path": "/var/lib/misc", "size": 4096, "state": "directory", "uid": 107}
ok: [target] => (item=landscape: /var/lib/landscape) => {"ansible_loop_var": "item", "changed": false, "gid": 112, "group": "landscape", "item": {"dir": "/var/lib/landscape", "gecos": "", "gid": 112, "id": "landscape", "password": "x", "shell": "/usr/sbin/nologin", "uid": 108}, "mode": "0755", "owner": "landscape", "path": "/var/lib/landscape", "size": 4096, "state": "directory", "uid": 108}
fatal: [target]: FAILED! => {"msg": "Failed to connect to the host via ssh: kex_exchange_identification: Connection closed by remote host"}

Control(s) Affected
Anything run after 6.2.5

Environment (please complete the following information):

  • Host Python Version: 3.6.8
ansible-playbook 2.10.9.post0
  config file = /ansible/ansible.cfg
  configured module search path = ['/home/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
  executable location = /usr/local/bin/ansible-playbook
  python version = 3.8.3 (default, Aug 31 2020, 16:03:14) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]

Possible Solution
This seems to have been fixed elsewhere by only changing the home directories of interactive users (UID >= 1000), which skips system accounts such as messagebus, _apt, and uuidd seen in the output above.
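The suggested fix could be sketched as an Ansible task that restricts the home-directory change to interactive users. This is an illustrative sketch only, not the role's actual task: the variable names (`user_list`, `min_int_uid`) are assumptions modeled on the loop items in the log above.

```yaml
# Hypothetical sketch of the 6.2.5 remediation limited to interactive users.
# user_list and min_int_uid are illustrative names, not the role's variables.
- name: "6.2.5 | Ensure interactive users' home directories exist and are restrictive"
  ansible.builtin.file:
    path: "{{ item.dir }}"
    state: directory
    mode: "0750"
    owner: "{{ item.id }}"
  loop: "{{ user_list | selectattr('uid', '>=', min_int_uid) | list }}"
  vars:
    min_int_uid: 1000  # system accounts (messagebus, _apt, uuidd, ...) fall below this
  loop_control:
    label: "{{ item.id }}: {{ item.dir }}"
```

Filtering with `selectattr` before the loop means system accounts never reach the `file` module, so their directories (e.g. `/nonexistent`, `/run/uuidd`) are left untouched.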

Update to Version Tagging

Hello,
I wanted to give an update on a tagging change that will take place in the next release, scheduled for some point in May. Ansible Galaxy requires version numbers in Semantic Versioning format without a preceding “v” (for example, 1.2.1 rather than v1.2.1). Without realizing this, we have been using tags with the preceding v, which has prevented our Galaxy space from updating with our latest releases.

Going forward, we will adjust the version number format starting with each repo's first release in May. If you rely on release tags to track the latest versions, please note that the numbering format will change: version numbers will continue to progress as they have been, but the preceding “v” will be dropped from the tag.
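After the change, a requirements.yml pinning this role by tag would reference the bare version number. The role name and version below are illustrative, not a confirmed Galaxy identifier or release:

```yaml
# requirements.yml -- illustrative; check Ansible Galaxy for the exact role name
roles:
  - name: ansible-lockdown.ubuntu18_cis
    version: "1.2.1"  # previously this release would have been tagged v1.2.1
```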

George
