
wazuh-ansible's Introduction

Wazuh-Ansible


These playbooks install and configure the Wazuh agent, manager, indexer, and dashboard.

Branches

  • The master branch contains the latest code; be aware of possible bugs on this branch.
  • The stable branch corresponds to the latest stable Wazuh version.

Compatibility Matrix

Wazuh version   Elastic   ODFE
v5.0.0          -         -
v4.9.0          -         -
v4.8.1          -         -
v4.8.0          -         -
v4.7.5          -         -
v4.7.4          -         -
v4.7.3          -         -
v4.7.2          -         -
v4.7.1          -         -
v4.7.0          -         -
v4.6.0          -         -
v4.5.4          -         -
v4.5.3          -         -
v4.5.2          -         -
v4.5.1          -         -
v4.5.0          -         -
v4.4.5          -         -
v4.4.4          -         -
v4.4.3          -         -
v4.4.2          -         -
v4.4.1          -         -
v4.4.0          -         -
v4.3.11         -         -
v4.3.10         -         -
v4.3.9          -         -
v4.3.8          -         -
v4.3.7          -         -
v4.3.6          -         -
v4.3.5          -         -
v4.3.4          -         -
v4.3.3          -         -
v4.3.2          -         -
v4.3.1          -         -
v4.3.0          -         -
v4.2.6          7.10.2    1.13.2
v4.2.5          7.10.2    1.13.2
v4.2.4          7.10.2    1.13.2
v4.2.3          7.10.2    1.13.2
v4.2.2          7.10.2    1.13.2
v4.2.1          7.10.2    1.13.2
v4.2.0          7.10.2    1.13.2
v4.1.5          7.10.2    1.13.2
v4.1.4          7.10.0    1.12.0
v4.1.3          7.10.0    1.12.0
v4.1.2          7.10.0    1.12.0
v4.1.1          7.10.0    1.12.0

Note: from v4.3.0 onward, Wazuh ships its own indexer and dashboard, so the Elastic and ODFE columns no longer apply.

Documentation

Directory structure

├── wazuh-ansible
│   ├── roles
│   │   ├── wazuh
│   │   │   ├── ansible-filebeat-oss
│   │   │   ├── ansible-wazuh-manager
│   │   │   ├── ansible-wazuh-agent
│   │   │   ├── wazuh-dashboard
│   │   │   └── wazuh-indexer
│   │   └── ansible-galaxy
│   │       └── meta
│   ├── playbooks
│   │   ├── wazuh-agent.yml
│   │   ├── wazuh-dashboard.yml
│   │   ├── wazuh-indexer.yml
│   │   ├── wazuh-manager-oss.yml
│   │   ├── wazuh-production-ready
│   │   └── wazuh-single.yml
│   ├── README.md
│   ├── VERSION
│   └── CHANGELOG.md

Example: production-ready distributed environment

Playbook

The following example playbook uses the wazuh-ansible roles to provision a production-ready Wazuh environment. The architecture includes two Wazuh manager nodes (a master and a worker), three Wazuh indexer nodes, and a Wazuh dashboard node.

---
# Certificates generation
    - hosts: wi1
      roles:
        - role: ../roles/wazuh/wazuh-indexer
          indexer_network_host: "{{ private_ip }}"
          indexer_cluster_nodes:
            - "{{ hostvars.wi1.private_ip }}"
            - "{{ hostvars.wi2.private_ip }}"
            - "{{ hostvars.wi3.private_ip }}"
          indexer_discovery_nodes:
            - "{{ hostvars.wi1.private_ip }}"
            - "{{ hostvars.wi2.private_ip }}"
            - "{{ hostvars.wi3.private_ip }}"
          perform_installation: false
      become: no
      vars:
        indexer_node_master: true
        instances:
          node1:
            name: node-1       # Important: must be equal to indexer_node_name.
            ip: "{{ hostvars.wi1.private_ip }}"   # When unzipping, the node will search for its node name folder to get the cert.
            role: indexer
          node2:
            name: node-2
            ip: "{{ hostvars.wi2.private_ip }}"
            role: indexer
          node3:
            name: node-3
            ip: "{{ hostvars.wi3.private_ip }}"
            role: indexer
          node4:
            name: node-4
            ip: "{{ hostvars.manager.private_ip }}"
            role: wazuh
            node_type: master
          node5:
            name: node-5
            ip: "{{ hostvars.worker.private_ip }}"
            role: wazuh
            node_type: worker
          node6:
            name: node-6
            ip: "{{ hostvars.dashboard.private_ip }}"
            role: dashboard
      tags:
        - generate-certs

# Wazuh indexer cluster
    - hosts: wi_cluster
      strategy: free
      roles:
        - role: ../roles/wazuh/wazuh-indexer
          indexer_network_host: "{{ private_ip }}"
      become: yes
      become_user: root
      vars:
        indexer_cluster_nodes:
          - "{{ hostvars.wi1.private_ip }}"
          - "{{ hostvars.wi2.private_ip }}"
          - "{{ hostvars.wi3.private_ip }}"
        indexer_discovery_nodes:
          - "{{ hostvars.wi1.private_ip }}"
          - "{{ hostvars.wi2.private_ip }}"
          - "{{ hostvars.wi3.private_ip }}"
        indexer_node_master: true
        instances:
          node1:
            name: node-1       # Important: must be equal to indexer_node_name.
            ip: "{{ hostvars.wi1.private_ip }}"   # When unzipping, the node will search for its node name folder to get the cert.
            role: indexer
          node2:
            name: node-2
            ip: "{{ hostvars.wi2.private_ip }}"
            role: indexer
          node3:
            name: node-3
            ip: "{{ hostvars.wi3.private_ip }}"
            role: indexer
          node4:
            name: node-4
            ip: "{{ hostvars.manager.private_ip }}"
            role: wazuh
            node_type: master
          node5:
            name: node-5
            ip: "{{ hostvars.worker.private_ip }}"
            role: wazuh
            node_type: worker
          node6:
            name: node-6
            ip: "{{ hostvars.dashboard.private_ip }}"
            role: dashboard

# Wazuh cluster
    - hosts: manager
      roles:
        - role: "../roles/wazuh/ansible-wazuh-manager"
        - role: "../roles/wazuh/ansible-filebeat-oss"
          filebeat_node_name: node-4
      become: yes
      become_user: root
      vars:
        wazuh_manager_config:
          connection:
              - type: 'secure'
                port: '1514'
                protocol: 'tcp'
                queue_size: 131072
          api:
              https: 'yes'
          cluster:
              disable: 'no'
              node_name: 'master'
              node_type: 'master'
              key: 'c98b62a9b6169ac5f67dae55ae4a9088'
              nodes:
                  - "{{ hostvars.manager.private_ip }}"
              hidden: 'no'
        wazuh_api_users:
          - username: custom-user
            password: SecretPassword1!
        filebeat_output_indexer_hosts:
                - "{{ hostvars.wi1.private_ip }}"
                - "{{ hostvars.wi2.private_ip }}"
                - "{{ hostvars.wi3.private_ip }}"

    - hosts: worker
      roles:
        - role: "../roles/wazuh/ansible-wazuh-manager"
        - role: "../roles/wazuh/ansible-filebeat-oss"
          filebeat_node_name: node-5
      become: yes
      become_user: root
      vars:
        wazuh_manager_config:
          connection:
              - type: 'secure'
                port: '1514'
                protocol: 'tcp'
                queue_size: 131072
          api:
              https: 'yes'
          cluster:
              disable: 'no'
              node_name: 'worker_01'
              node_type: 'worker'
              key: 'c98b62a9b6169ac5f67dae55ae4a9088'
              nodes:
                  - "{{ hostvars.manager.private_ip }}"
              hidden: 'no'
        filebeat_output_indexer_hosts:
                - "{{ hostvars.wi1.private_ip }}"
                - "{{ hostvars.wi2.private_ip }}"
                - "{{ hostvars.wi3.private_ip }}"

# Wazuh dashboard node
    - hosts: dashboard
      roles:
        - role: "../roles/wazuh/wazuh-dashboard"
      become: yes
      become_user: root
      vars:
        indexer_network_host: "{{ hostvars.wi1.private_ip }}"
        dashboard_node_name: node-6
        wazuh_api_credentials:
          - id: default
            url: https://{{ hostvars.manager.private_ip }}
            port: 55000
            username: custom-user
            password: SecretPassword1!
        ansible_shell_allow_world_readable_temp: true

Inventory file

  • The ansible_host variable should contain the address/FQDN used to gather facts and provision each node.
  • The private_ip variable should contain the address/FQDN used for internal cluster communications.
  • If the environment is located in a local subnet, the ansible_host and private_ip variables should match.
  • The SSH credentials used by Ansible during provisioning can also be specified in this file, or directly in the playbook.
wi1 ansible_host=<wi1_ec2_public_ip> private_ip=<wi1_ec2_private_ip> indexer_node_name=node-1
wi2 ansible_host=<wi2_ec2_public_ip> private_ip=<wi2_ec2_private_ip> indexer_node_name=node-2
wi3 ansible_host=<wi3_ec2_public_ip> private_ip=<wi3_ec2_private_ip> indexer_node_name=node-3
dashboard  ansible_host=<dashboard_node_public_ip> private_ip=<dashboard_ec2_private_ip>
manager ansible_host=<manager_node_public_ip> private_ip=<manager_ec2_private_ip>
worker  ansible_host=<worker_node_public_ip> private_ip=<worker_ec2_private_ip>

[wi_cluster]
wi1
wi2
wi3

[all:vars]
ansible_ssh_user=vagrant
ansible_ssh_private_key_file=/path/to/ssh/key.pem
ansible_ssh_extra_args='-o StrictHostKeyChecking=no'

Launching the playbook

sudo ansible-playbook wazuh-production-ready.yml -i inventory

After the playbook execution, the Wazuh UI should be reachable at https://<dashboard_host>.

Example: single-host environment

Playbook

The following example playbook uses the wazuh-ansible roles to provision a single-host Wazuh environment. This architecture includes all the Wazuh and OpenSearch components on a single node.

---
# Certificates generation
  - hosts: aio
    roles:
      - role: ../roles/wazuh/wazuh-indexer
        perform_installation: false
    become: no
    #become_user: root
    vars:
      indexer_node_master: true
      instances:
        node1:
          name: node-1       # Important: must be equal to indexer_node_name.
          ip: 127.0.0.1
          role: indexer
    tags:
      - generate-certs
# Single node
  - hosts: aio
    become: yes
    become_user: root
    roles:
      - role: ../roles/wazuh/wazuh-indexer
      - role: ../roles/wazuh/ansible-wazuh-manager
      - role: ../roles/wazuh/ansible-filebeat-oss
      - role: ../roles/wazuh/wazuh-dashboard
    vars:
      single_node: true
      minimum_master_nodes: 1
      indexer_node_master: true
      indexer_network_host: 127.0.0.1
      filebeat_node_name: node-1
      filebeat_output_indexer_hosts:
      - 127.0.0.1
      instances:
        node1:
          name: node-1       # Important: must be equal to indexer_node_name.
          ip: 127.0.0.1
          role: indexer
      ansible_shell_allow_world_readable_temp: true

Inventory file

[aio]
<your server host>

[all:vars]
ansible_ssh_user=vagrant
ansible_ssh_private_key_file=/path/to/ssh/key.pem
ansible_ssh_extra_args='-o StrictHostKeyChecking=no'

Launching the playbook

sudo ansible-playbook wazuh-single.yml -i inventory

After the playbook execution, the Wazuh UI should be reachable at https://<your server host>.

Example: Wazuh server cluster (without Filebeat)

Playbook

The following example playbook uses the wazuh-ansible role to provision a Wazuh server cluster without Filebeat. This architecture includes two Wazuh servers (a master and a worker) distributed across two nodes.

---
# Wazuh cluster without Filebeat
    - hosts: manager
      roles:
        - role: "../roles/wazuh/ansible-wazuh-manager"
      become: yes
      become_user: root
      vars:
        wazuh_manager_config:
          connection:
              - type: 'secure'
                port: '1514'
                protocol: 'tcp'
                queue_size: 131072
          api:
              https: 'yes'
          cluster:
              disable: 'no'
              node_name: 'master'
              node_type: 'master'
              key: 'c98b62a9b6169ac5f67dae55ae4a9088'
              nodes:
                  - "{{ hostvars.manager.private_ip }}"
              hidden: 'no'
        wazuh_api_users:
          - username: custom-user
            password: SecretPassword1!

    - hosts: worker01
      roles:
        - role: "../roles/wazuh/ansible-wazuh-manager"
      become: yes
      become_user: root
      vars:
        wazuh_manager_config:
          connection:
              - type: 'secure'
                port: '1514'
                protocol: 'tcp'
                queue_size: 131072
          api:
              https: 'yes'
          cluster:
              disable: 'no'
              node_name: 'worker_01'
              node_type: 'worker'
              key: 'c98b62a9b6169ac5f67dae55ae4a9088'
              nodes:
                  - "{{ hostvars.manager.private_ip }}"
              hidden: 'no'

Inventory file

[manager]
<your manager master server host>

[worker01]
<your manager worker01 server host>

[all:vars]
ansible_ssh_user=vagrant
ansible_ssh_private_key_file=/path/to/ssh/key.pem
ansible_ssh_extra_args='-o StrictHostKeyChecking=no'

Adding additional workers

Add the following block at the end of the playbook:

    - hosts: worker02
      roles:
        - role: "../roles/wazuh/ansible-wazuh-manager"
      become: yes
      become_user: root
      vars:
        wazuh_manager_config:
          connection:
              - type: 'secure'
                port: '1514'
                protocol: 'tcp'
                queue_size: 131072
          api:
              https: 'yes'
          cluster:
              disable: 'no'
              node_name: 'worker_02'
              node_type: 'worker'
              key: 'c98b62a9b6169ac5f67dae55ae4a9088'
              nodes:
                  - "{{ hostvars.manager.private_ip }}"
              hidden: 'no'

NOTE: hosts and wazuh_manager_config.cluster.node_name are the only parameters that differ from the worker01 configuration.

Add the following lines to the inventory file:

[worker02]
<your manager worker02 server host>

Launching the playbook

sudo ansible-playbook wazuh-manager-oss-cluster.yml -i inventory

Contribute

If you want to contribute to our repository, please fork our GitHub repository and submit a pull request.

If you are not familiar with GitHub, you can also share your contributions through our users mailing list, to which you can subscribe by sending an email to [email protected].

Modified by Wazuh

The playbooks have been modified by Wazuh, adding specific requirements, templates, and configuration to improve integration with the Wazuh ecosystem.

Credits and Thank you

Based on previous work from dj-wasabi.

https://github.com/dj-wasabi/ansible-ossec-server

License and copyright

WAZUH Copyright (C) 2016, Wazuh Inc. (License GPLv2)


wazuh-ansible's Issues

Release 3.9.0

Wazuh version: 3.9.0
Elastic version: 6.7.1

  • Adapt to new versions (3.9.0 - 6.7.1)
  • Update changelog
  • Tests
  • Tag: v3.9.0
  • Draft release

Playbooks default configuration

Hello all,
it looks like the Wazuh manager and agent configuration is built on tags and behavior assumptions from older Wazuh versions.

We should review how this works and decide what we want to include as a default configuration, updating it to the current Wazuh release.

Regards.

Release 3.8.0

Wazuh version: 3.8.0
Elastic version: 6.5.4

  • Adapt to new versions (3.8.0 - 6.5.4)
  • Update changelog
  • Tests
  • Tag: v3.8.0
  • Draft release

Multiple remote connection should be possible

Hello,

If you set up your manager with multiple connections, you get a block like this in ossec.conf:

<remote>
  <connection>syslog</connection>
  <port>514</port>
  <protocol>udp</protocol>
  <allowed-ips>192.168.1.0/24</allowed-ips>
  <local_ip>192.168.1.5</local_ip>
  <connection>secure</connection>
  <port>1514</port>
  <protocol>udp</protocol>
  <queue_size>16384</queue_size>
</remote>

This can't work.

Expected configuration:

<remote>
  <connection>syslog</connection>
  <port>514</port>
  <protocol>udp</protocol>
  <allowed-ips>192.168.1.0/24</allowed-ips>
  <local_ip>192.168.1.5</local_ip>
</remote>

<remote>
  <connection>secure</connection>
  <port>1514</port>
  <protocol>udp</protocol>
  <queue_size>16384</queue_size>
</remote>
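
Since the playbooks in this README already model wazuh_manager_config.connection as a list, one way the role's ossec.conf template could render a separate <remote> block per entry is a Jinja2 loop along these lines. This is a sketch, not the role's current template; the optional keys follow the connection entries shown elsewhere in this document:

{# One <remote> block per configured connection #}
{% for conn in wazuh_manager_config.connection %}
<remote>
  <connection>{{ conn.type }}</connection>
  <port>{{ conn.port }}</port>
  <protocol>{{ conn.protocol }}</protocol>
  {% if conn.allowed_ips is defined %}<allowed-ips>{{ conn.allowed_ips }}</allowed-ips>{% endif %}
  {% if conn.local_ip is defined %}<local_ip>{{ conn.local_ip }}</local_ip>{% endif %}
  {% if conn.queue_size is defined %}<queue_size>{{ conn.queue_size }}</queue_size>{% endif %}
</remote>
{% endfor %}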

Create configuration templates based on Operating Systems

Hello team,

Referring to issue #77,

To complement the quick-response solution with more detailed work, we will create different agent configuration templates depending on the operating system the agent is installed on, to avoid errors from monitoring non-existent directories and to avoid losing events by not monitoring files such as /var/log/auth.log.

Therefore, we should identify the main cases, such as RPM, Debian, and Windows (where we install agents via Ansible), and create a specific configuration for each, adapting it to specific versions if necessary. This work will be useful in the future when we share configurations from the manager using groups.

Additionally, we should replicate what we do in a simple installation and keep the configuration shared from a clean manager.

Regards,

Alfonso

Adding Molecule tests

It would be convenient to have Molecule scenarios to verify the roles work as expected. Such scenarios could run on every pull request via Travis CI and reduce the work required from reviewers.

Upgrade Wazuh minor version tests

Testing: Upgrade Wazuh minor version

Version       Revision   Branch
3.8.0_6.5.4   3800       3.8.0_6.5.4

Basic tests

  • Deployment of the Wazuh-manager in different environments.

    • Ubuntu 18.04.
    • Ubuntu 14.04.
    • CentOS 7.
    • Amazon Linux.
  • Deployment of the Wazuh-agent in different environments.

    • Ubuntu 18.04.
      • Agent registration.
      • Check the flow of the alerts.
    • Ubuntu 14.04.
      • Agent registration.
      • Check the flow of the alerts.
    • CentOS 7.
      • Agent registration.
      • Check the flow of the alerts.
    • Amazon Linux.
      • Agent registration.
      • Check the flow of the alerts.
    • Windows 7.
      • Agent registration.
      • Check the flow of the alerts.
  • Upgrade of the Wazuh-manager in different environments.

    • Ubuntu 18.04.
    • Ubuntu 14.04.
    • CentOS 7.
    • Amazon Linux.
  • Upgrade of the Wazuh-agent in different environments.

    • Ubuntu 18.04.
      • Agent registration.
      • Check the flow of the alerts.
    • Ubuntu 14.04.
      • Agent registration.
      • Check the flow of the alerts.
    • CentOS 7.
      • Agent registration.
      • Check the flow of the alerts.
    • Amazon Linux.
      • Agent registration.
      • Check the flow of the alerts.
    • Windows 7.
      • Agent registration.
      • Check the flow of the alerts.
  • Deployment of the Wazuh app

Need the ability to install an agent, but not register it

Sometimes it's necessary to install the Wazuh agent, but not register it with a manager. For example, when working with Amazon Machine Images (AMIs), you want to reduce the time it takes to install the agent by baking the installation into the image; however, you don't want duplicate client.keys files from registering at the time of image creation. Having the ability to filter these tasks would be useful, perhaps with tags (see the sketch below).
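
A minimal sketch of the tag-based approach, assuming a hypothetical registration tag on the role's registration tasks (agent-auth shown as a stand-in for whatever registration mechanism the role actually uses):

- name: Linux | Register agent with the manager
  command: /var/ossec/bin/agent-auth -m {{ wazuh_manager_ip }}
  tags:
    - registration

An image build could then run ansible-playbook wazuh-agent.yml --skip-tags registration to bake the installation without creating client.keys.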

Re-register an agent

Hello team,

When we register an agent and then deregister it, Ansible does not have the ability to re-register it due to the conditions of the registration tasks.

check_keys.stat.exists == false or check_keys.stat.size == 0

One option could be to query the Wazuh API for the agent in question and, if it is not present, enable an additional condition for the registration tasks, as sketched below.
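
A hedged sketch of that idea using the uri module; the endpoint, scheme, port, and credential variables here are illustrative placeholders, not the role's existing variables:

- name: Check whether the manager already knows this agent
  uri:
    url: "http://{{ wazuh_manager_ip }}:55000/agents?name={{ inventory_hostname }}"   # scheme/port depend on your API setup
    user: "{{ wazuh_api_user }}"
    password: "{{ wazuh_api_password }}"
    force_basic_auth: yes
    return_content: yes
  register: api_agent_lookup
  delegate_to: localhost

The registration tasks could then also run when the lookup reports no matching agent, in addition to the existing client.keys checks.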

We can find related information and ideas on how to proceed on this mailing list:

https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/wazuh/4-UhadhubEI/z9CLjt0lAQAJ

Regards,

Alfonso

Failed to find required executable systemctl in paths

The role README files state support for Ubuntu; however, they don't state which versions of Ubuntu are supported. That being said, I've come across an issue on Ubuntu 14.04 servers where the following error message occurs:

msg: 'Failed to find required executable systemctl in paths: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin'

This is caused by the following task:

- name: Reload systemd
  systemd: daemon_reload=yes
  ignore_errors: yes
  when: not (ansible_distribution == "Amazon" and ansible_distribution_major_version == "NA")

According to this reference:

https://en.wikipedia.org/wiki/Systemd

systemd wasn't Ubuntu's default service manager until 15.04. You'll probably want to account for that in your conditional logic.
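
A sketch of a more direct guard, using the ansible_service_mgr fact instead of distribution-based logic:

- name: Reload systemd
  systemd:
    daemon_reload: yes
  when: ansible_service_mgr == "systemd"   # skip hosts still on upstart/sysvinit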

Support multiple Elasticsearch output in ansible-logstash-role

The parameter "elasticsearch_network_host" does not support multiple hosts. It would be nice to support this to benefit from the "loadbalance" feature of Elasticsearch outputs.

The loadbalance option is available for the Redis, Logstash, and Elasticsearch outputs. The Kafka output handles load balancing internally.

Please see Logstash documentation:
https://www.elastic.co/guide/en/beats/packetbeat/current/logstash-output.html#loadbalance
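
For reference, the Beats-side configuration this maps to looks roughly like the following (hostnames are placeholders); the Ansible role would need to template a list of hosts rather than a single address:

output.logstash:
  hosts: ["logstash1.example.com:5044", "logstash2.example.com:5044"]
  loadbalance: true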

Provide a means to select the installed version number

There should be a means to select the installed version. For example, in ansible-wazuh-manager and ansible-wazuh-agent, "latest" is always used.

- name: Install wazuh-manager, wazuh-api and expect
  package: pkg={{ item }} state=latest
  with_items:
    - wazuh-manager
    - wazuh-api
    - expect
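
A hedged sketch of version selection; wazuh_manager_version is a hypothetical variable, and the exact pinning syntax depends on the package manager (yum-style shown here):

- name: Install wazuh-manager, wazuh-api and expect
  yum:
    name:
      - "wazuh-manager-{{ wazuh_manager_version }}"
      - "wazuh-api-{{ wazuh_manager_version }}"
      - expect
    state: present

Using state: present instead of latest also prevents unintended upgrades on subsequent runs.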

A couple of minor warnings with 3.8

Hi,

I noticed just a couple of minor warnings in the ossec.log file when starting a manager based upon the defaults, using the 3.8 branch of this repo and the ansible-wazuh-manager role.

2019/01/20 23:02:56 ossec-analysisd: WARNING: Detected a deprecated configuration for cluster. Interval option is not longer available.
2019/01/20 23:02:56 ossec-remoted: WARNING: Detected a deprecated configuration for cluster. Interval option is not longer available.
2019/01/20 23:02:56 wazuh-modulesd: WARNING: The specific definition of the Red Hat feeds is deprecated. Use only redhat instead.
2019/01/20 23:02:56 ossec-testrule: WARNING: Detected a deprecated configuration for cluster. Interval option is not longer available.
2019/01/20 23:02:56 ossec-authd: WARNING: Detected a deprecated configuration for cluster. Interval option is not longer available.

I can send a PR to resolve this based on my understanding; if you have an alternative fix, that is fine too.

Cheers.

Incorrect Exit Statuses

A new installation of the ansible-wazuh-manager role does not, by default, enable the following daemons:

  • ossec-maild
  • wazuh-clusterd

As a result, service wazuh-manager status returns an exit code of 1, which is customarily reserved for failures.

This behavior is caused by the following code, found in /var/ossec/bin/ossec-control:

DAEMONS="wazuh-modulesd ossec-monitord ossec-logcollector ossec-remoted ossec-syscheckd ossec-analysisd ossec-maild ossec-execd wazuh-db ${DB_DAEMON} ${CSYSLOG_DAEMON} ${AGENTLESS_DAEMON} ${INTEGRATOR_DAEMON} ${AUTH_DAEMON}"

...

if ! is_rhel_le_5
then
    DAEMONS="wazuh-clusterd $DAEMONS"
fi

...

    for i in ${DAEMONS}; do
        if [ $USE_JSON = true ] && [ $first = false ]; then
            echo -n ','
        else
            first=false
        fi
        pstatus ${i};
        if [ $? = 0 ]; then
            if [ $USE_JSON = true ]; then
                echo -n '{"daemon":"'${i}'","status":"stopped"}'
            else
                echo "${i} not running..."
            fi
            RETVAL=1
        else
            if [ $USE_JSON = true ]; then
                echo -n '{"daemon":"'${i}'","status":"running"}'
            else
                echo "${i} is running..."
            fi
        fi
    done

Given that a user might not want clustering or email notifications, having those services disabled is not necessarily indicative of a failure. A return code of 1 causes misreporting of playbook executions.

A better test might be checking the configuration to see whether the daemons are configured to be enabled but fail to start successfully.

Or, less ideally IMO, configuring a one-node cluster with email support by default.

Add wazuh-agent /var/ossec/etc/internal_options.conf template

In order to use centralized configuration it would be nice to have a template for the internal_options.conf file.

To use centralized configuration this is a requirement. From the doc:

When setting up remote commands in the shared agent configuration, you must enable remote commands for Agent Modules. This is enabled by adding the following line to the file etc/local_internal_options.conf in the agent:

wazuh_command.remote_commands=1

This would allow setting the logcollector.remote_commands value. With this value set to 1, we could apply the centralized configuration during the installation process; a sketch of such a task follows.
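
A minimal sketch of a task that could ship this setting (the handler name is assumed, mirroring the restart wazuh-manager handler used elsewhere in these roles):

- name: Enable remote commands for agent modules
  lineinfile:
    path: /var/ossec/etc/local_internal_options.conf
    line: "wazuh_command.remote_commands=1"
  notify: restart wazuh-agent   # assumed handler name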

For additional information see the doc
https://documentation.wazuh.com/3.x/user-manual/reference/internal-options.html?highlight=logcollector%20remote_commands#logcollector
https://documentation.wazuh.com/3.x/user-manual/reference/centralized-configuration.html?highlight=centralized%20configuration

Need Easier Way to Override Select Portions of the Configuration Settings

It would be ideal if the wazuh_manager_config hash was either restructured or the code was modified to allow for easier overriding of smaller chunks of settings:

---
wazuh_manager_config:
  active_responses:
    - command: host-deny
      level: 6
      location: local
      timeout: 600
    - command: restart-ossec
      location: local
      rules_id: '100002'
    - command: win_restart-ossec
      location: local
      rules_id: '100003'
  alerts_log: 'yes'
  api:
    basic_auth: 'yes'
    behind_proxy_server: 'no'
    bind_addr: 0.0.0.0
    ciphers: ''
    drop_privileges: 'true'
    experimental_features: 'false'
    honor_cipher_order: 'true'
    https: 'no'
    https_ca: ''
    https_cert: /var/ossec/etc/sslmanager.cert
    https_key: /var/ossec/etc/sslmanager.key
    https_use_ca: 'no'
    port: 55000
    secure_protocol: TLSv1_2_method
    use_only_authd: 'false'
  authd:
    enable: true
    force_insert: 'yes'
    force_time: 0
    port: 1515
    purge: 'no'
    ssl_agent_ca: null
    ssl_auto_negotiate: 'no'
    ssl_manager_cert: /var/ossec/etc/sslmanager.cert
    ssl_manager_key: /var/ossec/etc/sslmanager.key
    ssl_verify_host: 'no'
    use_password: 'no'
    use_source_ip: 'yes'

...etc. etc. etc.

The reason is that it's not easy to override individual portions of this config (e.g. cluster settings); it's all or nothing.
In other words, if I want to have most of these variables defined in group_vars/wazuh_managers.yml and override a couple of settings using something like host_vars/manager01.yml, that's not easily possible without modifying the tasks.

Currently, each of my hosts has a complete copy of this hash in its host variable file with one or two settings changed. That doesn't seem DRY. It would be nice if most of the common settings were in a group variable file and the one or two host-specific settings were in a host variable file.

Maybe there's a way to leverage the combine filter to allow for easier overrides of portions of the bigger config:

e.g.
{{ {'a':{'foo':1, 'bar':2}, 'b':2} | combine({'a':{'bar':3, 'baz':4}}, recursive=True) }}

Output:
{'a':{'foo':1, 'bar':3, 'baz':4}, 'b':2}

(Source: https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#combining-hashes-dictionaries)
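
Applied to this role, a sketch of that approach could look like the following; the _base and _override variable names are illustrative, not existing role variables:

# group_vars/wazuh_managers.yml -- shared baseline
wazuh_manager_config_base:
  alerts_log: 'yes'
  api:
    https: 'no'

# host_vars/manager01.yml -- only what this host changes
wazuh_manager_config_override:
  api:
    https: 'yes'

# group_vars/wazuh_managers.yml -- merged view consumed by the role
wazuh_manager_config: "{{ wazuh_manager_config_base | combine(wazuh_manager_config_override | default({}), recursive=True) }}"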

Wrong installation directory for Wazuh agent

Hi team,

The Ansible playbook for deploying a Wazuh agent is not installing it where the documentation currently indicates. According to the official documentation:

By default, all agent files will be found in: C:\Program Files (x86)\ossec-agent.

Instead, it's being installed in the root of C:\.

Thanks

CentOS 7 Wazuh-agent installation fails to start the Wazuh agent service: ERROR: (1210): Queue

I am trying to use this Ansible playbook to install the wazuh-agent on a CentOS 7 minimal virtual machine.

The playbook gives the following error:

TASK [ansible-wazuh-agent : Linux | Ensure Wazuh Agent service is started and enabled] ***************************************************************************************************************************************
fatal: [wazuh-client01.ipa.home.lab]: FAILED! => {"changed": false, "msg": "Unable to start service wazuh-agent: Job for wazuh-agent.service failed because the control process exited with error code. See \"systemctl status wazuh-agent.service\" and \"journalctl -xe\" for details.\n"}

Feb 10 16:17:37 wazuh-client01.ipa.home.lab systemd[1]: Starting SYSV: Starts and stops Wazuh (Host Intrusion Detection System)...
Feb 10 16:17:37 wazuh-client01.ipa.home.lab wazuh-agent[2229]: Starting Wazuh-agent: 2018/02/10 16:17:37 ossec-agentd: INFO: Using notify time: 10 and max time to reconnect: 60
Feb 10 16:17:40 wazuh-client01.ipa.home.lab wazuh-agent[2229]: 2018/02/10 16:17:40 ossec-syscheckd: ERROR: (1210): Queue '/var/ossec/queue/ossec/queue' not accessible: 'Connection refused'.
Feb 10 16:17:40 wazuh-client01.ipa.home.lab wazuh-agent[2229]: 2018/02/10 16:17:40 rootcheck: ERROR: (1210): Queue '/var/ossec/queue/ossec/queue' not accessible: 'Connection refused'.
Feb 10 16:17:48 wazuh-client01.ipa.home.lab wazuh-agent[2229]: 2018/02/10 16:17:48 ossec-syscheckd: ERROR: (1210): Queue '/var/ossec/queue/ossec/queue' not accessible: 'Connection refused'.
Feb 10 16:17:48 wazuh-client01.ipa.home.lab wazuh-agent[2229]: 2018/02/10 16:17:48 rootcheck: ERROR: (1210): Queue '/var/ossec/queue/ossec/queue' not accessible: 'Connection refused'.
Feb 10 16:18:01 wazuh-client01.ipa.home.lab wazuh-agent[2229]: 2018/02/10 16:18:01 ossec-syscheckd: ERROR: (1210): Queue '/var/ossec/queue/ossec/queue' not accessible: 'Connection refused'.
Feb 10 16:18:01 wazuh-client01.ipa.home.lab wazuh-agent[2229]: 2018/02/10 16:18:01 rootcheck: CRITICAL: (1211): Unable to access queue: '/var/ossec/queue/ossec/queue'. Giving up..
Feb 10 16:18:01 wazuh-client01.ipa.home.lab wazuh-agent[2229]: [FAILED]
Feb 10 16:18:01 wazuh-client01.ipa.home.lab systemd[1]: wazuh-agent.service: control process exited, code=exited status=1
Feb 10 16:18:01 wazuh-client01.ipa.home.lab systemd[1]: Failed to start SYSV: Starts and stops Wazuh (Host Intrusion Detection System).
Feb 10 16:18:01 wazuh-client01.ipa.home.lab systemd[1]: Unit wazuh-agent.service entered failed state.
Feb 10 16:18:01 wazuh-client01.ipa.home.lab systemd[1]: wazuh-agent.service failed.
Feb 10 17:01:45 wazuh-client01.ipa.home.lab systemd[1]: Starting SYSV: Starts and stops Wazuh (Host Intrusion Detection System)...
Feb 10 17:01:47 wazuh-client01.ipa.home.lab wazuh-agent[2530]: Starting Wazuh-agent: 2018/02/10 17:01:47 ossec-agentd: INFO: Using notify time: 10 and max time to reconnect: 60
Feb 10 17:01:50 wazuh-client01.ipa.home.lab wazuh-agent[2530]: 2018/02/10 17:01:50 ossec-syscheckd: ERROR: (1210): Queue '/var/ossec/queue/ossec/queue' not accessible: 'Connection refused'.
Feb 10 17:01:50 wazuh-client01.ipa.home.lab wazuh-agent[2530]: 2018/02/10 17:01:50 rootcheck: ERROR: (1210): Queue '/var/ossec/queue/ossec/queue' not accessible: 'Connection refused'.
Feb 10 17:01:58 wazuh-client01.ipa.home.lab wazuh-agent[2530]: 2018/02/10 17:01:58 ossec-syscheckd: ERROR: (1210): Queue '/var/ossec/queue/ossec/queue' not accessible: 'Connection refused'.
Feb 10 17:01:58 wazuh-client01.ipa.home.lab wazuh-agent[2530]: 2018/02/10 17:01:58 rootcheck: ERROR: (1210): Queue '/var/ossec/queue/ossec/queue' not accessible: 'Connection refused'.
Feb 10 17:02:11 wazuh-client01.ipa.home.lab wazuh-agent[2530]: 2018/02/10 17:02:11 ossec-syscheckd: ERROR: (1210): Queue '/var/ossec/queue/ossec/queue' not accessible: 'Connection refused'.
Feb 10 17:02:11 wazuh-client01.ipa.home.lab wazuh-agent[2530]: 2018/02/10 17:02:11 rootcheck: CRITICAL: (1211): Unable to access queue: '/var/ossec/queue/ossec/queue'. Giving up..
Feb 10 17:02:11 wazuh-client01.ipa.home.lab wazuh-agent[2530]: [FAILED]
Feb 10 17:02:11 wazuh-client01.ipa.home.lab systemd[1]: wazuh-agent.service: control process exited, code=exited status=1
Feb 10 17:02:11 wazuh-client01.ipa.home.lab systemd[1]: Failed to start SYSV: Starts and stops Wazuh (Host Intrusion Detection System).
Feb 10 17:02:11 wazuh-client01.ipa.home.lab systemd[1]: Unit wazuh-agent.service entered failed state.
Feb 10 17:02:11 wazuh-client01.ipa.home.lab systemd[1]: wazuh-agent.service failed.

This is the playbook I am using:

- hosts: wazuh-clients
  roles:
    - { role: ansible-wazuh-agent, wazuh_manager_ip: 10.0.5.20, wazuh_register_client: true, wazuh_authd_port: 1515 }

I used the wazuh-elastic_search-single.yml included with this repo.

Interval option is no longer available.

Current wazuh code issues a deprecation warning when the cluster.interval option is set:

Detected a deprecated configuration for cluster. Interval option is not longer available.

However, wazuh-ansible code requires cluster.interval and fails with the following error if it is missing:

AnsibleUndefinedVariable: 'dict object' has no attribute 'interval'

At a minimum, the template should be modified to skip the cluster.interval option when it is not defined, for example:
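
A minimal template-side sketch of that guard:

{# Render the cluster interval only when it is explicitly configured #}
{% if wazuh_manager_config.cluster.interval is defined %}
<interval>{{ wazuh_manager_config.cluster.interval }}</interval>
{% endif %}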

Error with Ansible wazuh-agent on CentOS 7

Hello everyone, I have a question. I'm trying to use Ansible to install agents, but when I run the scripts I get the following error:


TASK [ansible-wazuh-agent : RedHat/CentOS 5 | Install Wazuh repo] *********************************
fatal: [192.168.2.207]: FAILED! => {"msg": "The conditional check 'ansible_distribution_major_version|int = 5' failed. The error was: template error while templating string: expected token 'end of statement block', got '='. String: {% if ansible_distribution_major_version|int = 5 %} True {% else %} False {% endif %}\n\nThe error appears to have been in '/etc/ansible/roles/ansible-wazuh-agent/tasks/RedHat.yml': line 12, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: RedHat/CentOS 5 | Install Wazuh repo\n ^ here\n"}
to retry, use: --limit @/etc/ansible/roles/wazuh-agent.retry

PLAY RECAP ****************************************************************************************
192.168.2.200 : ok=0 changed=0 unreachable=1 failed=0
192.168.2.207 : ok=2 changed=0 unreachable=0 failed=1

[root@localhost roles]#

My configuration:

hosts file (192.168.2.200 is my Wazuh server; 192.168.2.207 is where I want to install the agent):


# This is the default ansible 'hosts' file.
#
# It should live in /etc/ansible/hosts
#
#   - Comments begin with the '#' character
#   - Blank lines are ignored
#   - Groups of hosts are delimited by [header] elements
#   - You can enter hostnames or ip addresses
#   - A hostname/ip can be a member of multiple groups

# Ex 1: Ungrouped hosts, specify before any group headers.

192.168.2.200
192.168.2.207

# green.example.com
# blue.example.com
# 192.168.100.1
# 192.168.100.10

# Ex 2: A collection of hosts belonging to the 'webservers' group

# [webservers]
# alpha.example.org
# beta.example.org
# 192.168.1.100
# 192.168.1.110


wazuh-agent.yml


- hosts: all:!wazuh-manager
  roles:
    - ansible-wazuh-agent
  vars:
    wazuh_managers:
      - address: 192.168.2.200
        port: 1514
        protocol: udp
        api_port: 55000
        api_proto: 'http'
        api_user: ansible
    wazuh_agent_authd:
      enable: true
      port: 1515
      ssl_agent_ca: null
      ssl_auto_negotiate: 'no'

Wazuh-agent install not working

Set up per https://documentation.wazuh.com/current/deploying-with-ansible/reference.html#wazuh-agent
Agent to be installed on Ubuntu 14.04

Received the following error when running ansible-playbook wazuh-agent.yml:

The error appears to have been in '/etc/ansible/roles/ansible-wazuh-agent/tasks/main.yml': line 2, column 3, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- import_tasks: "Windows.yml"
  ^ here


`elasticsearch` and `logstash` roles: condition and parameters to install java

The JRE is being installed automatically by the elasticsearch and logstash roles.

Is there a plan to review these two roles to behave like the wazuh_agent and wazuh_manager roles as far as the Java install is concerned? These two roles use the condition wazuh_xxx_config.cis_cat.install_java to install Java, and both of them install the OpenJDK JRE instead of the Oracle JRE.

Syscheck configuration for C:\wazuh-agent consumes all 256 Inotify watchers on Windows

Hi,

I am having an issue with one of the templates on Windows. I have not tested if the same thing is happening on Linux.

Issue details

In the var-ossec-etc-ossec-agent.conf template, the following line is added for Windows machines:
<directories check_all="yes" realtime="yes" restrict="^C:\wazuh-agent/shared/agent.conf$">C:\wazuh-agent</directories>

Having the realtime="yes" attribute seems fine, but when registering Inotify watchers it ignores the restrict statement. Therefore, all 256 Inotify watchers are consumed; 256 seems to be the maximum on Windows. I am not sure what the intended behaviour was here, but if the intent was to provide real-time monitoring for C:\wazuh-agent/shared/agent.conf, I suggest the following modification.

<directories check_all="yes" realtime="yes" restrict="^C:\wazuh-agent/shared/agent.conf$">C:\wazuh-agent/shared</directories>

Logs:

2018/07/20 12:00:45 ossec-agent: ERROR: Unable to add directory to real time monitoring: 'C:\wazuh-agent/queue/diff/local/..blankedpath../2018041311243011.tox' - Maximum size permitted.
2018/07/20 12:00:45 ossec-agent: ERROR: Unable to add directory to real time monitoring: 'C:\wazuh-agent/queue/diff/local/..blankedpath../201804131124302.tox' - Maximum size permitted.
2018/07/20 12:00:45 ossec-agent: ERROR: Unable to add directory to real time monitoring: 'C:\wazuh-agent/queue/diff/local/..blankedpath../201804131124303.tox' - Maximum size permitted.
2018/07/20 12:00:45 ossec-agent: ERROR: Unable to add directory to real time monitoring: 'C:\wazuh-agent/queue/diff/local/..blankedpath../201804131124304.tox' - Maximum size permitted.
2018/07/20 12:00:45 ossec-agent: ERROR: Unable to add directory to real time monitoring: 'C:\wazuh-agent/queue/diff/local/..blankedpath../201804131124305.tox' - Maximum size permitted.
2018/07/20 12:00:45 ossec-agent: ERROR: Unable to add directory to real time monitoring: 'C:\wazuh-agent/queue/diff/local/..blankedpath../201804131124306.tox' - Maximum size permitted.
2018/07/20 12:00:45 ossec-agent: ERROR: Unable to add directory to real time monitoring: 'C:\wazuh-agent/queue/diff/local/..blankedpath../201804131124307.tox' - Maximum size permitted.
2018/07/20 12:00:45 ossec-agent: ERROR: Unable to add directory to real time monitoring: 'C:\wazuh-agent/queue/diff/local/..blankedpath../201804131124308.tox' - Maximum size permitted.
2018/07/20 12:00:45 ossec-agent: ERROR: Unable to add directory to real time monitoring: 'C:\wazuh-agent/queue/diff/local/..blankedpath../201804131124309.tox' - Maximum size permitted.
2018/07/20 12:00:45 ossec-agent: ERROR: Unable to add directory to real time monitoring: 'C:\wazuh-agent/queue/diff/local/..blankedpath../201804131124311.tox' - Maximum size permitted.
2018/07/20 12:00:45 ossec-agent: ERROR: Unable to add directory to real time monitoring: 'C:\wazuh-agent/queue/diff/local/..blankedpath../201804131124312.tox' - Maximum size permitted.
2018/07/20 12:00:45 ossec-agent: ERROR: Unable to add directory to real time monitoring: 'C:\wazuh-agent/queue/diff/local/..blankedpath../201804131124313.tox' - Maximum size permitted.

Path to var-ossec-etc-ossec-agent.conf

wazuh-ansible\ansible-wazuh-agent\templates\var-ossec-etc-ossec-agent.conf.j2

The lines causing the issue

    {% if ansible_os_family == "Windows" %}
    <directories check_all="yes" realtime="yes" restrict="^C:\wazuh-agent/shared/agent.conf$">C:\wazuh-agent</directories>
    {% endif %}

    {% if ansible_system == "Linux" %}
    <directories check_all="yes" realtime="yes" restrict="^/var/ossec/etc/shared/agent.conf$">/var/ossec/etc/shared</directories>
    {% endif %}

** Not tested on Linux.

Add timeout on yum install

Hello,

On CentOS, I have run into yum install wazuh-agent taking forever because it was waiting for a yum.lock held by another process. This was also causing SSH disconnections, with zombie Ansible yum processes left on the host.

I think the role would be more robust by using async and poll:

- name: Linux | Install wazuh-agent
  package: name=wazuh-agent state=latest
  async: 60
  poll: 10
  tags:
    - init

With these settings, when yum does not complete within 60 seconds, the playbook fails as below:

TASK [ansible-wazuh-agent : Linux | Install wazuh-agent] *******************

fatal: [ps-majlnx.example.local]: FAILED! => {
    "changed": false
}

MSG:

async task did not complete within the requested time

Maybe the timeout value could be a variable. I can submit a PR if agreed.

Packages should be installed concurrently

When installing packages, it's encouraged to provide a list of packages to install, as opposed to iterating over a list one by one. Most modern package managers can install multiple packages concurrently. When you use a loop, you're doing one transaction at a time, consecutively, so it's less performant and it generates warnings such as:

[DEPRECATION WARNING]: Invoking "apt" only once while using a loop via squash_actions is deprecated. Instead of using a loop to supply multiple items and specifying 
`name: {{ item }}`, please use `name: ['apt-transport-https', 'ca-certificates']` and remove the loop. This feature will be removed in version 2.11. Deprecation 
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
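
Using the package names from the warning above, the list form looks like this:

- name: Install prerequisites in a single transaction
  apt:
    name:
      - apt-transport-https
      - ca-certificates
    state: present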

Update Elastic Stack

The version of the Wazuh app is outdated: the current version is 3.7, but the roles reference 3.6. We should update the main.yml file of each Elastic Stack role:

  • wazuh-ansible/roles/elastic-stack/ansible-kibana/defaults/main.yml
  • wazuh-ansible/roles/elastic-stack/ansible-elasticsearch/defaults/main.yml
  • wazuh-ansible/roles/elastic-stack/ansible-logstash/defaults/main.yml

Auth.log not monitored on Debian managers

When we install a manager using Ansible, if the operating system is Debian-based we aren't monitoring the auth.log file.

Because of this, all SSH-related alerts are missing.

Install agents with Ansible on different servers and systems

A question: can I install agents on Solaris machines using Ansible?

Could you guide me in some way to install agents on servers, firewalls, or systems that do not have an already defined role, using Ansible?
Regards
Felipe

Installing wazuh manager based on documentation example fails

Hi,

I am following the Ansible deployment guide from the documentation, and the installation of wazuh-manager fails.
It explicitly expects configuration of at least the following:

cis_cat:
  disable: 'no'
  install_java: 'yes'
openscap:
  disable: 'no'

I assume some defaults can be defined.
Then it started failing on the email configuration, and finally the playbook failed on the task "Configure ossec.conf".

Playbook Tasks Should Be Idempotent

By this I mean: if you run a playbook twice, the first run should bring about the desired state while the second run reports zero changes. This allows for better reporting of what is actually changing in your environment; otherwise, it looks like your environment is constantly being updated in reporting consoles. I did a quick scan of the code. A few things that would need to be updated are state checks using execs (you can add parameters to them so they report correctly based on some test, as sketched below) as well as repos (if you pin versions, there's no harm in having the repos enabled rather than constantly disabled and re-enabled). It's also usually cheaper when tasks are not unnecessarily run every time.
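
As a concrete sketch of the exec-check pattern (the task and paths are illustrative): a creates guard, or a changed_when test, keeps a command task from reporting a change on every run:

- name: Register agent only if no key material exists yet
  command: /var/ossec/bin/agent-auth -m {{ wazuh_manager_ip }}
  args:
    creates: /var/ossec/etc/client.keys   # skipped once the file exists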

How to provide local_rules.xml?

The changelog indicates "allow providing own local_rules.xml template with var ossec_server_… #5 (By pull request: recunius (Thanks!))". But this does not seem to be implemented anywhere. Is this changelog entry left over from the dj-wasabi version?

- name: Installing the local_rules.xml (default local_rules.xml)
  template: src=var-ossec-rules-local_rules.xml.j2
            dest=/var/ossec/etc/rules/local_rules.xml
            owner=root
            group=ossec
            mode=0640
  notify: restart wazuh-manager
  tags:
    - init
    - config
    - rules
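
If it is not implemented, a sketch of how the task could honor a user-supplied template; the variable name below is illustrative, since the changelog entry is truncated:

- name: Installing the local_rules.xml (custom template if provided)
  template:
    src: "{{ wazuh_manager_local_rules_template | default('var-ossec-rules-local_rules.xml.j2') }}"
    dest: /var/ossec/etc/rules/local_rules.xml
    owner: root
    group: ossec
    mode: 0640
  notify: restart wazuh-manager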

Conflicts of configuration and monitoring of non-existent directories

Hello team,

We are going to proceed with the creation of two related issues.

The problems detected are the following:

There are events that we are not collecting, such as those belonging to the /var/log/auth.log file. Additionally, there are duplicate directories in the shared agent.conf.

As a quick fix, we need to delete the duplicate directories monitored by syscheck, removing them from the shared agent.conf.

It is also necessary to remove duplicate localfile entries in the shared configuration, as well as add entries to monitor events such as those stored in /var/log/auth.log.

Regards,

Alfonso
