ansible-power-hmc's Introduction

IBM Power Systems HMC Collection

Scope

The IBM Power Systems HMC collection provides modules for managing the configuration of the Power Hardware Management Console (HMC) and of the Power systems it manages. The collection helps bring the HMC into an enterprise automation strategy through the Ansible ecosystem.

The IBM Power Systems HMC collection is included as an upstream collection under the Ansible Content for IBM Power Systems umbrella of community content.

Usage

This repository also contains some example best practices for open source repositories.

Requirements

Platforms

  • HMC V9R1 or later
  • HMC V8R8.7.0

Ansible

  • Requires Ansible 2.14.0 or newer
  • For help installing Ansible, refer to the Installing Ansible section of the Ansible Documentation
  • For help installing the ibm.power_hmc collection, refer to the install page of this project
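
For reference, the collection can typically be installed straight from Ansible Galaxy with a single command (the project's install page remains the authoritative reference):

ansible-galaxy collection install ibm.power_hmc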

Python

  • Requires Python 3
  • lxml
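
For example, the lxml dependency can usually be installed with pip for the Python interpreter that Ansible uses:

pip3 install lxml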

Resources

Documentation of modules is generated on GitHub Pages.

Question, Issue or Contribute

If you have any questions or issues, you can open a new issue here.

Pull requests are very welcome! Make sure your patches are well tested. Ideally create a topic branch for every separate change you make. For example:

  1. Fork the repo
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Added some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create a new Pull Request

License & Authors

If you would like to see the detailed LICENSE, click here.

Copyright:: 2020- IBM, Inc

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program.  If not, see <https://www.gnu.org/licenses/>.

Authors:

ansible-power-hmc's People

Contributors

anilvijayan, hari-g-m, imgbotapp, kavyaneelapala950, mariomds, naveenkandakur, neikei, pbfinley1911, sreenidhis1, stevemar, thedoubl3j, torinreilly

ansible-power-hmc's Issues

Cannot unmount only one remote NFS directory

Trying to use the umount option to unmount one remote NFS-mounted directory: it only allows me to unmount all remote directories via the mount_all: remote option.

How can I unmount just one remote directory?

Thanks

This is my playbook:

- name: Unmount remote filesystems
  ibm.power_aix.mount:
    state: umount
    mount_dir: /mnt
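
One possible approach (not from the original post; whether these parameters select a single NFS mount is an assumption to verify against the ibm.power_aix.mount documentation) is to target the specific local mount point instead of all remote mounts:

- name: Unmount a single NFS mount by its local mount point (sketch)
  ibm.power_aix.mount:
    state: umount
    # mount_over_dir is assumed here to identify the local directory the
    # remote filesystem is mounted over; check the module docs for exact usage
    mount_over_dir: /mnt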

powervm_lpar_instance generates an "EPOW_SUS_CHRP" environmental error in the error log and fails to shut the LPAR down

To Reproduce

Using the YAML playbook below, the LPAR fails to shut down and the HMC displays a D200A200 reference code. On invocation of the playbook, the following error is logged on the partition and the PowerOff job eventually times out. Shutting the LPAR down from the HMC GUI works without any issues, and no applications are running on the test LPAR.

Error log:

LABEL: EPOW_SUS_CHRP
IDENTIFIER: BE0A03E5

Date/Time: Wed Apr 19 11:22:12 2023
Sequence Number: 8331
Machine Id: 00FADCA74C00
Node Id: goldaix72
Class: H
Type: PERM
WPAR: Global
Resource Name: sysplanar0
Resource Class: planar
Resource Type: sysplanar_rspc
Location:

Description
ENVIRONMENTAL PROBLEM

Probable Causes
Power Turned Off Without a Shutdown
POWER OR FAN COMPONENT

    Recommended Actions
    RUN SYSTEM DIAGNOSTICS.
    PERFORM PROBLEM DETERMINATION PROCEDURES

Detail Data
POWER STATUS REGISTER
0000 0003
PROBLEM DATA
0624 0040 0000 00BC 8600 8E00 0000 0000 0000 0000 4942 4D00 5048 0030 0100 A207
2023 0419 0922 1200 0000 0000 0000 0000 4C00 0004 0000 0000 0000 0000 0000 0000
83B8 CD1A 0000 0000 5548 0018 0100 A207 8303 0001 0000 0000 0000 0000 0000 0000
4548 0050 0100 A207 3832 3836 2D34 3241 3231 4443 4137 5700 0000 0000 5356 3836
305F 3234 3300 0000 0000 0000 5046 3231 3131 3034 2E70 6677 3836 3000 0000 0000
0000 0000 0000 0000 0000 0004 0000 0000 4550 0014 0200 A207 0301 0100 0420 0000
0000 0000

Diagnostic Analysis
Diagnostic Log sequence number: 31
Resource tested: sysplanar0
Menu Number: 651303
Description:

The following informational event was reported by Platform Firmware.

Platform Firmware Miscellaneous, Information Only.

Normal system shutdown with no additional delay.

Playbook:

---
- name: HMC LPAR poweroff
  hosts: HMC
  collections:
    - ibm.power_hmc
  connection: local
  environment:
    http_proxy: ''
    https_proxy: ''

  vars:
    curr_hmc_auth:
      username: hscroot
      password: !vault |
        $ANSIBLE_VAULT;1.1;AES256
        66363164363561646239316636653832373263316437323132643436383835376631626161303166
        3463306561313462663765616162396236623539373233320a636631663935616537656635396432
        39643461313763386134303439643461396666386431323565633464663265393861363337323265
        6339653837366361350a363535373933333230306633306630333464646637633637346662323631
        3532

  tasks:
    - name: list managed system details
      power_system:
        hmc_host: '{{ inventory_hostname }}'
        hmc_auth: "{{ curr_hmc_auth }}"
        system_name: SAPPRD
        state: facts
      register: testout

    - name: Stdout the managed system details
      debug:
        msg: '{{ testout }}'

    - name: Shutdown a logical partition.
      powervm_lpar_instance:
        hmc_host: '{{ inventory_hostname }}'
        hmc_auth: "{{ curr_hmc_auth }}"
        system_name: SAPPRD
        vm_name: goldaix72
        action: shutdown

Verbose output (ansible-playbook -vvvv):

ansible-playbook [core 2.13.3]
  config file = /home/ansibleaix/.ansible.cfg
  configured module search path = ['/usr/share/ansible/fos/library']
  ansible python module location = /usr/lib/python3.9/site-packages/ansible
  ansible collection location = /home/ansibleaix/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/bin/ansible-playbook
  python version = 3.9.13 (main, Nov 9 2022, 13:16:24) [GCC 8.5.0 20210514 (Red Hat 8.5.0-15)]
  jinja version = 3.1.2
  libyaml = True
Using /home/ansibleaix/.ansible.cfg as config file
Vault password:
host_list declined parsing /home/ansibleaix/etc/hosts as it did not pass its verify_file() method
script declined parsing /home/ansibleaix/etc/hosts as it did not pass its verify_file() method
auto declined parsing /home/ansibleaix/etc/hosts as it did not pass its verify_file() method
Parsed /home/ansibleaix/etc/hosts inventory source with ini plugin
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: testoff.yml ************************************************************************************************************************************************************************
1 plays in testoff.yml

PLAY [HMC poweroff] **************************************************************************************************************************************************************************

TASK [Gathering Facts] ***********************************************************************************************************************************************************************
task path: /home/ansibleaix/Power/AIX/HOHAFAIL/roles/testoff.yml:2
ESTABLISH LOCAL CONNECTION FOR USER: ansibleaix
EXEC /bin/sh -c 'echo ~ansibleaix && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /home/ansibleaix/.ansible/tmp"&& mkdir "echo /home/ansibleaix/.ansible/tmp/ansible-tmp-1681896125.1607473-1410988-279106458273201" && echo ansible-tmp-1681896125.1607473-1410988-279106458273201="echo /home/ansibleaix/.ansible/tmp/ansible-tmp-1681896125.1607473-1410988-279106458273201" ) && sleep 0'
Attempting python interpreter discovery
EXEC /bin/sh -c 'echo PLATFORM; uname; echo FOUND; command -v '"'"'python3.10'"'"'; command -v '"'"'python3.9'"'"'; command -v '"'"'python3.8'"'"'; command -v '"'"'python3.7'"'"'; command -v '"'"'python3.6'"'"'; command -v '"'"'python3.5'"'"'; command -v '"'"'/usr/bin/python3'"'"'; command -v '"'"'/usr/libexec/platform-python'"'"'; command -v '"'"'python2.7'"'"'; command -v '"'"'/usr/bin/python'"'"'; command -v '"'"'python'"'"'; echo ENDFOUND && sleep 0'
EXEC /bin/sh -c '/usr/bin/python3.9 && sleep 0'
Using module file /usr/lib/python3.9/site-packages/ansible/modules/setup.py
PUT /home/ansibleaix/.ansible/tmp/ansible-local-1410953e0mq67ia/tmp047lclkk TO /home/ansibleaix/.ansible/tmp/ansible-tmp-1681896125.1607473-1410988-279106458273201/AnsiballZ_setup.py
EXEC /bin/sh -c 'chmod u+x /home/ansibleaix/.ansible/tmp/ansible-tmp-1681896125.1607473-1410988-279106458273201/ /home/ansibleaix/.ansible/tmp/ansible-tmp-1681896125.1607473-1410988-279106458273201/AnsiballZ_setup.py && sleep 0'
EXEC /bin/sh -c 'http_proxy='"'"''"'"' https_proxy='"'"''"'"' /usr/libexec/platform-python /home/ansibleaix/.ansible/tmp/ansible-tmp-1681896125.1607473-1410988-279106458273201/AnsiballZ_setup.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /home/ansibleaix/.ansible/tmp/ansible-tmp-1681896125.1607473-1410988-279106458273201/ > /dev/null 2>&1 && sleep 0'
ok: [hmc-hodc1]
META: ran handlers

TASK [list managed system details] ***********************************************************************************************************************************************************
task path: /home/ansibleaix/Power/AIX/HOHAFAIL/roles/testoff.yml:23
ESTABLISH LOCAL CONNECTION FOR USER: ansibleaix
EXEC /bin/sh -c 'echo ~ansibleaix && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /home/ansibleaix/.ansible/tmp"&& mkdir "echo /home/ansibleaix/.ansible/tmp/ansible-tmp-1681896126.2477672-1411102-100846569923751" && echo ansible-tmp-1681896126.2477672-1411102-100846569923751="echo /home/ansibleaix/.ansible/tmp/ansible-tmp-1681896126.2477672-1411102-100846569923751" ) && sleep 0'
Using module file /home/ansibleaix/.ansible/collections/ansible_collections/ibm/power_hmc/plugins/modules/power_system.py
PUT /home/ansibleaix/.ansible/tmp/ansible-local-1410953e0mq67ia/tmp51dvmydf TO /home/ansibleaix/.ansible/tmp/ansible-tmp-1681896126.2477672-1411102-100846569923751/AnsiballZ_power_system.py
EXEC /bin/sh -c 'chmod u+x /home/ansibleaix/.ansible/tmp/ansible-tmp-1681896126.2477672-1411102-100846569923751/ /home/ansibleaix/.ansible/tmp/ansible-tmp-1681896126.2477672-1411102-100846569923751/AnsiballZ_power_system.py && sleep 0'
EXEC /bin/sh -c 'http_proxy='"'"''"'"' https_proxy='"'"''"'"' /usr/libexec/platform-python /home/ansibleaix/.ansible/tmp/ansible-tmp-1681896126.2477672-1411102-100846569923751/AnsiballZ_power_system.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /home/ansibleaix/.ansible/tmp/ansible-tmp-1681896126.2477672-1411102-100846569923751/ > /dev/null 2>&1 && sleep 0'
ok: [hmc-hodc1] => {
"changed": false,
"invocation": {
"module_args": {
"action": null,
"hmc_auth": {
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
},
"hmc_host": "hmc-hodc1",
"mem_mirroring_mode": null,
"new_name": null,
"pend_mem_region_size": null,
"power_off_policy": null,
"power_on_lpar_start_policy": null,
"requested_num_sys_huge_pages": null,
"state": "facts",
"system_name": "SAPPRD"
}
},
"system_info": {
"ActivatedLevel": "243",
"ActivatedServicePackNameAndLevel": "FW860.B1 (243)",
"BMCVersion": null,
"CapacityOnDemandMemoryCapable": "true",
"CapacityOnDemandProcessorCapable": "true",
"ConfigurableSystemMemory": 524288,
"ConfigurableSystemProcessorUnits": 24,
"CurrentAvailableSystemMemory": 484096,
"CurrentAvailableSystemProcessorUnits": 18.6,
"DeferredLevel": null,
"DeferredServicePackNameAndLevel": null,
"Description": null,
"IPAddress": "10.128.128.8",
"InstalledSystemMemory": 524288,
"InstalledSystemProcessorUnits": 24,
"IsClassicHMCManagement": "true",
"IsNotPowerVMManagementMaster": "false",
"IsPowerVMManagementMaster": "false",
"MTMS": "8286-42A*21DCA7W",
"ManufacturingDefaultConfigurationEnabled": "false",
"MaximumPartitions": 480,
"MemoryDefragmentationState": "Not_In_Progress",
"MergedReferenceCode": " ",
"MeteredPoolID": null,
"PNORVersion": null,
"PermanentSystemMemory": 524288,
"PermanentSystemProcessors": 24,
"PhysicalSystemAttentionLEDState": "false",
"ProcessorThrottling": "false",
"ReferenceCode": " ",
"ServiceProcessorVersion": "0008000C",
"State": "operating",
"SystemFirmware": "SV860_FW860.B1 (243)",
"SystemLocation": null,
"SystemName": "SAPPRD",
"SystemType": "fsp"
}
}

TASK [Stdout the managed system details] *****************************************************************************************************************************************************
task path: /home/ansibleaix/Power/AIX/HOHAFAIL/roles/testoff.yml:31
ok: [hmc-hodc1] => {
"msg": {
"changed": false,
"failed": false,
"system_info": {
"ActivatedLevel": "243",
"ActivatedServicePackNameAndLevel": "FW860.B1 (243)",
"BMCVersion": null,
"CapacityOnDemandMemoryCapable": "true",
"CapacityOnDemandProcessorCapable": "true",
"ConfigurableSystemMemory": 524288,
"ConfigurableSystemProcessorUnits": 24,
"CurrentAvailableSystemMemory": 484096,
"CurrentAvailableSystemProcessorUnits": 18.6,
"DeferredLevel": null,
"DeferredServicePackNameAndLevel": null,
"Description": null,
"IPAddress": "10.128.128.8",
"InstalledSystemMemory": 524288,
"InstalledSystemProcessorUnits": 24,
"IsClassicHMCManagement": "true",
"IsNotPowerVMManagementMaster": "false",
"IsPowerVMManagementMaster": "false",
"MTMS": "8286-42A*21DCA7W",
"ManufacturingDefaultConfigurationEnabled": "false",
"MaximumPartitions": 480,
"MemoryDefragmentationState": "Not_In_Progress",
"MergedReferenceCode": " ",
"MeteredPoolID": null,
"PNORVersion": null,
"PermanentSystemMemory": 524288,
"PermanentSystemProcessors": 24,
"PhysicalSystemAttentionLEDState": "false",
"ProcessorThrottling": "false",
"ReferenceCode": " ",
"ServiceProcessorVersion": "0008000C",
"State": "operating",
"SystemFirmware": "SV860_FW860.B1 (243)",
"SystemLocation": null,
"SystemName": "ansible-playbook [core 2.13.3]
"SystemType": "fsp"
}
}
}

TASK [Shutdown a logical partition.] *********************************************************************************************************************************************************
task path: /home/ansibleaix/Power/AIX/HOHAFAIL/roles/testoff.yml:36
ESTABLISH LOCAL CONNECTION FOR USER: ansibleaix
EXEC /bin/sh -c 'echo ~ansibleaix && sleep 0'
EXEC /bin/sh -c '( umask 77 && mkdir -p "echo /home/ansibleaix/.ansible/tmp"&& mkdir "echo /home/ansibleaix/.ansible/tmp/ansible-tmp-1681896128.8775172-1411121-60842275623879" && echo ansible-tmp-1681896128.8775172-1411121-60842275623879="echo /home/ansibleaix/.ansible/tmp/ansible-tmp-1681896128.8775172-1411121-60842275623879" ) && sleep 0'
Using module file /home/ansibleaix/.ansible/collections/ansible_collections/ibm/power_hmc/plugins/modules/powervm_lpar_instance.py
PUT /home/ansibleaix/.ansible/tmp/ansible-local-1410953e0mq67ia/tmpvp15qeun TO /home/ansibleaix/.ansible/tmp/ansible-tmp-1681896128.8775172-1411121-60842275623879/AnsiballZ_powervm_lpar_instance.py
EXEC /bin/sh -c 'chmod u+x /home/ansibleaix/.ansible/tmp/ansible-tmp-1681896128.8775172-1411121-60842275623879/ /home/ansibleaix/.ansible/tmp/ansible-tmp-1681896128.8775172-1411121-60842275623879/AnsiballZ_powervm_lpar_instance.py && sleep 0'
EXEC /bin/sh -c 'http_proxy='"'"''"'"' https_proxy='"'"''"'"' /usr/libexec/platform-python /home/ansibleaix/.ansible/tmp/ansible-tmp-1681896128.8775172-1411121-60842275623879/AnsiballZ_powervm_lpar_instance.py && sleep 0'
EXEC /bin/sh -c 'rm -f -r /home/ansibleaix/.ansible/tmp/ansible-tmp-1681896128.8775172-1411121-60842275623879/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
File "/tmp/ansible_powervm_lpar_instance_payload_l83vtwl4/ansible_powervm_lpar_instance_payload.zip/ansible_collections/ibm/power_hmc/plugins/modules/powervm_lpar_instance.py", line 1357, in poweroff_partition
File "/tmp/ansible_powervm_lpar_instance_payload_l83vtwl4/ansible_powervm_lpar_instance_payload.zip/ansible_collections/ibm/power_hmc/plugins/module_utils/hmc_rest_client.py", line 870, in poweroffPartition
return self.fetchJobStatus(jobID, timeout_in_min=10)
File "/tmp/ansible_powervm_lpar_instance_payload_l83vtwl4/ansible_powervm_lpar_instance_payload.zip/ansible_collections/ibm/power_hmc/plugins/module_utils/hmc_rest_client.py", line 335, in fetchJobStatus
raise HmcError("Job: {0} timed out!!".format(job_name))
fatal: [hmc-hodc1]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"action": "shutdown",
"advanced_info": null,
"all_resources": null,
"delete_vdisks": null,
"hmc_auth": {
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
},
"hmc_host": "hmc-hodc1",
"iIPLsource": null,
"install_settings": null,
"keylock": null,
"max_mem": null,
"max_proc": null,
"max_proc_unit": null,
"max_virtual_slots": null,
"mem": null,
"min_mem": null,
"min_proc": null,
"min_proc_unit": null,
"npiv_config": null,
"os_type": null,
"physical_io": null,
"proc": null,
"proc_compatibility_mode": null,
"proc_mode": null,
"proc_unit": null,
"prof_name": null,
"restart_option": null,
"retain_vios_cfg": null,
"shared_proc_pool": null,
"shutdown_option": null,
"state": null,
"system_name": "SAPPRD
"virt_network_config": null,
"vm_id": null,
"vm_name": "goldaix72",
"vnic_config": null,
"volume_config": null,
"weight": null
}
},
"msg": "HmcError: b'Job: PowerOff timed out!!'"
}
HMC Release

"version= Version: 9 Release: 2 Service Pack: 953 HMC Build level 2301130234 MH01874 - HMC V9R2 M951 MH01893 - iFix for HMC V9R2 M951 MH01905 - HMC V9R2 M952 MH01913 - iFix for HMC V9R2 M952 MH01925 - iFix for HMC V9R2 M952 MH01934 - HMC V9R2 M953 MH01949 - iFix for HMC V9R2 M953 ","base_version=V9R2 "
Ansible 2.13.3
Python 3.6.8

Expected behavior
The LPAR should shut down cleanly, as it does from the HMC GUI, without logging an EPOW_SUS_CHRP error or timing out the PowerOff job.

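Not part of the original report, but since the GUI-driven shutdown works, one hedged workaround is to issue the same OS shutdown through the HMC CLI using the collection's hmc_command module (assuming it is available in the installed collection version):

- name: Shut down the LPAR through the HMC CLI instead of the PowerOff job
  ibm.power_hmc.hmc_command:
    hmc_host: '{{ inventory_hostname }}'
    hmc_auth: "{{ curr_hmc_auth }}"
    # osshutdown asks the operating system to shut down, mirroring the GUI behaviour
    cmd: chsysstate -m SAPPRD -r lpar -n goldaix72 -o osshutdown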

Get all facts from every managed system / lpar

Is your feature request related to a problem? Please describe.

I would like to get an "HMC Scanner"-like report in JSON (facts) from the HMC.

Describe the solution you'd like

Unless I am missing something, it seems you cannot:

  • list all managed systems
  • list all LPARs of a managed system

If this were possible, I would be able to (see the sketch below):

loop over every managed system
  get its LPARs
    loop over each LPAR
      get facts

Describe alternatives you've considered
Read the HMC Scanner XLS output with a Python script.
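
A workaround sketch for the loop above (not from the original request): it assumes the collection's hmc_command module is available and that its registered result exposes the CLI output, so the exact return field should be verified before use. LPAR names per system could be listed the same way with lssyscfg -r lpar -m <system> -F name. The hmcs group and curr_hmc_auth variable follow the patterns used elsewhere on this page.

- name: Collect facts from every managed system (sketch)
  hosts: hmcs          # hypothetical inventory group of HMCs
  connection: local
  collections:
    - ibm.power_hmc
  tasks:
    - name: List managed system names via the HMC CLI
      hmc_command:
        hmc_host: "{{ inventory_hostname }}"
        hmc_auth: "{{ curr_hmc_auth }}"
        cmd: lssyscfg -r sys -F name
      register: sys_list

    - name: Gather facts for each managed system
      power_system:
        hmc_host: "{{ inventory_hostname }}"
        hmc_auth: "{{ curr_hmc_auth }}"
        system_name: "{{ item }}"
        state: facts
      # the key holding the command output is an assumption; inspect sys_list first
      loop: "{{ sys_list.command_output | default([]) }}"
      register: system_facts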

Allow login into HMC with certificate instead of password

Is your feature request related to a problem? Please describe.
In our organization we strive to replace passwords with certificate files. The UNIX engineering team responsible for the IBM Power infrastructure was not very pleased when I asked for a login with username/password to be able to use the powervm_lpar_instance module.

Describe the solution you'd like
Instead of providing the username and password parameters for hmc_auth, it should be possible to provide the username and the path to a certificate file to be used to log in to the HMC.
Example to shut down an LPAR:

- name: Shutdown a logical partition
  powervm_lpar_instance:
      hmc_host: '{{ inventory_hostname }}'
      hmc_auth:
         username: '{{ ansible_user }}'
         certificate: '{{ path_to_certfile }}'
      system_name: <system_name>
      vm_name: <vm_name>
      action: shutdown

Describe alternatives you've considered
Until login with a certificate file is possible, we don't have much of an alternative to logging in with username/password.

Manage HMC users and their SSH-keys via Ansible module

Is your feature request related to a problem? Please describe.
As an admin I have to create, update and delete users on the HMC and manage their SSH-keys.

Describe the solution you'd like
Add an Ansible module to manage users and their keys.

Describe alternatives you've considered
Use the shell module and the following commands to manage users and keys, but this is not preferred (a sketch using the collection's hmc_command module follows the list).

  • chhmcusr
  • lshmcusr
  • mkhmcusr
  • rmhmcusr
  • mkauthkeys
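
Until a dedicated module exists, the shell-based alternative can at least be driven through hmc_command. A minimal sketch, assuming placeholder credentials, a placeholder key path, and that the mkhmcusr/mkauthkeys flags and the hmcoperator role shown are valid on your HMC level (verify against the HMC man pages before use):

# Sketch only: flags, role name and paths are assumptions - verify on your HMC.
- name: Create an HMC user
  ibm.power_hmc.hmc_command:
    hmc_host: '{{ inventory_hostname }}'
    hmc_auth:
      username: '{{ hmc_user }}'
      password: '{{ hmc_pass }}'
    cmd: mkhmcusr -u ansible_svc -a hmcoperator -d "Ansible service user" --passwd {{ new_user_password }}

# mkauthkeys manages keys for the authenticated user, so this task authenticates
# as the newly created user whose key is being added.
- name: Register an SSH public key for the new user
  ibm.power_hmc.hmc_command:
    hmc_host: '{{ inventory_hostname }}'
    hmc_auth:
      username: ansible_svc
      password: '{{ new_user_password }}'
    cmd: mkauthkeys -a "{{ lookup('file', 'files/ansible_svc_id_rsa.pub') }}"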

Additional context
None

HMC_update_upgrade module _exit_ error during playbook execution.

HMC_update_upgrade module exit error during playbook execution.
We are getting error exit in case module hmc_update_upgrade is used. During update or upgrade.

To Reproduce
Steps to reproduce the behavior:
simple playbook.

- name: Update the HMC to the V9R2M952 build level from sftp location
  hmc_update_upgrade:
    hmc_host: '{{ inventory_hostname }}'
    hmc_auth: '{{ curr_hmc_auth }}'
    build_config:
      location_type: sftp
      hostname: xxxxxxx
      userid: root
      passwd: xxx
      build_file: /export/hmc/V910/vMH01857_x86.iso
    state: updated
  connection: local
  register: updated_hmc_build_info

See error:

2022-07-27 10:22:38,007 p=7995678 u=ansau n=ansible | <vhmc_ansible> EXEC /bin/sh -c 'sudo -H -S -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-dqjezmhkbcpddkholxqnijtnmpyctcgh ; /usr/bin/python'"'"' && sleep 0'
2022-07-27 10:22:41,758 p=21954856 u=ansau n=ansible | fatal: [vhmc_ansible]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "build_config": {
                "build_file": "/export/hmc/V910/vMH01857_x86.iso",
                "hostname": "xxxxxxxx",
                "location_type": "sftp",
                "mount_location": null,
                "passwd": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                "sshkey_file": null,
                "userid": "root"
            },
            "hmc_auth": {
                "password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
                "username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
            },
            "hmc_host": "vhmc_ansible",
            "state": "updated"
        }
    },
    "msg": "AttributeError('exit',)"
}
2022-07-27 10:22:41,759 p=21954856 u=ansau n=ansible | ...ignoring

Expected behavior
The module should not exit with an error while the HMC is rebooting; it should wait until the HMC is back up and continue with its checks.

Environment (please complete the following information):

  • HMC: tested with several versions of HMC code [V9R952, V9R1M910]
  • Python 3.7.12
  • OpenSSH_8.1p1, OpenSSL 1.0.2u

vios module - fails with lpar_env=vioserver in settings

Yes I understand that it is implicit in the vios command module that the lpar_env should be "vioserver".
But I don't think this should be a failure condition.

However the documentation states:

settings
        To configure various supported attributes of VIOS partition.
        Supports all the attributes available for creation of VIOS on the mksyscfg command.

I would prefer this does not fail in my playbook if it is defined in settings.

Or the documentation should be changed?

example:

ansible localhost -m ibm.power_hmc.vios -a 'hmc_host=myhmc hmc_auth={"username":"hscroot","password":"abcd1234"} system_name=kurtkP8 name=myvios state=present settings={"lpar_env":"vioserver"}'
localhost | FAILED! => {
    "changed": false,
    "msg": "ParameterError: Invalid parameters: lpar_env"
}

from vios.py:

# Collection of attributes not supported by vios partition
not_support_settings = ['lpar_env', ...

powervm_lpar_instance incorrectly assigns I/O slots

Describe the bug
The first partition on any system defined with powervm_lpar_instance claims all I/O slots as required.

To Reproduce

---
- hosts: system_x1
  gather_facts: no
  serial: 1
  collections:
    - ibm.power_hmc
  tasks:
    - name: Create Logical Partition
      powervm_lpar_instance:
        hmc_host: "{{ hostvars['hmc'].ansible_host }}"
        hmc_auth:
          username: "{{ hostvars['hmc'].ansible_user }}"
          password: "{{ hostvars['hmc'].ansible_password }}"
        system_name: "{{ managed_system_name }}"
        vm_name: "{{ inventory_hostname_short }}"
        proc: "{{ proc }}"
        mem: "{{ mem }}"
        os_type: aix
        state: present
      delegate_to: localhost

system_x1 is a group with the hosts as1 and xsp in it. Prior to the playbook run:

hscroot@hmc:~> lssyscfg -r prof -m Server-9040-MR9-SNxxxxxxx -F lpar_name,io_slots
No results were found.

After the playbook run:

hscroot@hmc:~> lssyscfg -r prof -m Server-9040-MR9-SNxxxxxxx -F lpar_name,io_slots
xsp,"21010021/none/1,21010020/none/1,21010040/none/1,21010023/none/1,21020024/none/1,21030025/none/1,21010028/none/1,21010048/none/1,21010011/none/1,21010010/none/1,21010030/none/1,21010013/none/1,21020014/none/1,21030015/none/1,21010018/none/1,21010038/none/1"
as1,none

Expected behavior
Only resources defined in the task should be allocated to the LPAR. There is no way to tell the module which slots to assign, so it should not assign them. I realize that the HMC is "helping" to do this, but the module should either prevent that or allow slot assignments.

NPIV can't be read on fact

Hello, this is a role that reads facts for an LPAR that already exists, but no NPIV information is displayed.

- name: "loop on lpar inventory"
  collections:
    - ibm.power_hmc
  powervm_lpar_instance:
    hmc_host: '{{ inventory_hostname }}'
    hmc_auth:
      username: '{{hmc_user}}'
      password: '{{hmc_pass}}'
    system_name: '{{item.frame}}'
    vm_name: '{{item.name}}'
    state: facts
    advanced_info: true
  register: powervm_lpar_instances
  loop: '{{lpar}}'

- name: Debug
  debug:
    var: powervm_lpar_instances

the output display this info
ok: [x.x.x.x] => (item={'name': 'tusdpsxs01v', 'frame': 'Server-9040-MR9-SNXXXXXXX', 'virtualcpu': 2, 'physicalcpu': 0.6, 'memory': 10240, 'netname': 'VLAN1167', 'virtual_switch': 'ETHERNET0', 'virtual_slot': 5, 'netname1': 'VLAN1166', 'virtual_slot1': 6, 'netname2': 'VLAN1168', 'virtual_slot2': 7, 'viosname': 'shvsdpvio1b', 'fcport': 'fcs0', 'fcport1': 'fcs1', 'viosname1': 'shvsdpvio2b', 'fcport2': 'fcs2', 'fcport3': 'fcs3', 'max_virtual_slots': 200, 'os_type': 'aix'})

no WWN displayed

Unable to run hmc_command in restricted hscroot shell

Describe the bug
Ansible returns unavailable with msg:

msg: 'Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in "/tmp", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p "echo ~./ansible/tmp"&& mkdir "echo ~./ansible/tmp/ansible-tmp-1652984846.243839-82432-233859373566647" && echo ansible-tmp-1652984846.243839-82432-233859373566647="echo ~./ansible/tmp/ansible-tmp-1652984846.243839-82432-233859373566647" ), exited with result 127'

Upon -vvv error observation, the error seems to point to not being able to run sh -c, due to sh not existing in /hmcrbin or /usr/hmcrbin. Symlinking corrects this issue, but will be overridden on the next HMC update.

-vvv output:

<hmc11> ESTABLISH SSH CONNECTION FOR USER: hscroot
<hmc11> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="hscroot"' -o ConnectTimeout=10 -o 'ControlPath=~/.ansible/tmp/ansible-ssh-%h-%p-%r' hmc11 '/bin/sh -c '"'"'echo ~. && sleep 0'"'"''
<hmc11> (1, b'', b"/bin/bash: /bin/sh: restricted: cannot specify `/' in command names\n")
<hmc11> Failed to connect to the host via ssh: /bin/bash: /bin/sh: restricted: cannot specify `/' in command names
<hmc11> ESTABLISH SSH CONNECTION FOR USER: hscroot
<hmc11> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="hscroot"' -o ConnectTimeout=10 -o 'ControlPath=~/.ansible/tmp/ansible-ssh-%h-%p-%r' hmc11 '/bin/sh -c '"'"'echo "`pwd`" && sleep 0'"'"''
<hmc11> (1, b'', b"/bin/bash: /bin/sh: restricted: cannot specify `/' in command names\n")
<hmc11> Failed to connect to the host via ssh: /bin/bash: /bin/sh: restricted: cannot specify `/' in command names
<hmc11> ESTABLISH SSH CONNECTION FOR USER: hscroot
<hmc11> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="hscroot"' -o ConnectTimeout=10 -o 'ControlPath=~/.ansible/tmp/ansible-ssh-%h-%p-%r' hmc11 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo ~./ansible/tmp `"&& mkdir "` echo ~./ansible/tmp/ansible-tmp-1652978341.5662153-14090-51854304798782 `" && echo ansible-tmp-1652978341.5662153-14090-51854304798782="` echo ~./ansible/tmp/ansible-tmp-1652978341.5662153-14090-51854304798782 `" ) && sleep 0'"'"''
<hmc11> (1, b'', b"/bin/bash: /bin/sh: restricted: cannot specify `/' in command names\n")
<hmc11> Failed to connect to the host via ssh: /bin/bash: /bin/sh: restricted: cannot specify `/' in command names
fatal: [hmc11]: UNREACHABLE! => changed=false 
  msg: 'Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in "/tmp", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p "` echo ~./ansible/tmp `"&& mkdir "` echo ~./ansible/tmp/ansible-tmp-1652978341.5662153-14090-51854304798782 `" && echo ansible-tmp-1652978341.5662153-14090-51854304798782="` echo ~./ansible/tmp/ansible-tmp-1652978341.5662153-14090-51854304798782 `" ), exited with result 1'
  unreachable: true

To Reproduce
Command run:

ansible-playbook -vvv -i hosts-hmc hmc-command-tst.yml -l hmc11

Playbook:

- name: Ansible playbook for testing hmc_command
  hosts: hosts-hmc
  order: sorted
  gather_facts: no
  connection: local
  tasks:
     - name: SSH To HMC and run uname
       ibm.power_hmc.hmc_command:
         hmc_host: '{{inventory_hostname}}'
         hmc_auth: 
            username: '{{ansible_user}}'
         cmd: uname -a

Expected behavior
normal stdout

Environment (please complete the following information):

  • HMC:
    hscroot@hmc11:~> lshmc -V
    "version= Version: 10
    Release: 1
    Service Pack: 1011
    HMC Build level 2202161537
    MF69180 - HMC V10R1 M1011
    MF69262 - iFix for HMC V10R1 M1011
    MF69288 - iFix for HMC V10R1 M1011
    ","base_version=V10R1
    "

  • Python Version
    [root@hmc11] # python3 -V Python 3.6.8

  • OpenSSH Version
    hscroot@hmc11:~> ssh -V OpenSSH_8.0p1, OpenSSL 1.1.1g FIPS 21 Apr 2020

Additional context
Not sure if the platform-python will execute successfully either.

Question:

I've downloaded and install the 1.4.0 collection to try some of the new features.

When I try and run a playbook using the hmc_command module I get a message back saying:

FAILED! => {"changed": false, "msg": "HmcError: Host public key is unknown. sshpass exits without confirming the new key."}

If I attempt to ssh in to the HMC using password or RSA key it works fine.

Any suggestions on how to fix this?

Thanks
Glenn
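
One common workaround is to pre-trust the HMC host key on the Ansible controller so the underlying sshpass call never hits the unknown-key prompt. A minimal sketch using standard Ansible modules, with a placeholder hostname (whether this is the root cause in your setup is not confirmed):

# Sketch only: 'myhmc' is a placeholder for the HMC hostname.
- name: Add the HMC host key to known_hosts on the controller
  ansible.builtin.known_hosts:
    name: myhmc
    key: "{{ lookup('pipe', 'ssh-keyscan -t rsa myhmc') }}"
    state: present
  delegate_to: localhost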

inventory settings - python interpreter

Do we need to specify a python interpreter version or any other parameters in the inventory hosts file? I am receiving the following error when executing a playbook using the hmc_command module:

TASK [Update customer information ] ***************************************************************************************************************************************
fatal: [server]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "msg": "HmcError: /bin/sh: -c: line 0: syntax error near unexpected token )'/bin/sh: -c: line 0: sshpass -p ******** ssh ********@server 'chsacfg -t custinfo -o set -i "admin_company_name=company,admin_name=name,admin_email=email,admin_phone=number,admin_addr=address,admin_city=,admin_country=US,admin_state=state,admin_postal_code=zip,sys_use_admin_info=1"''"}
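
For reference, the interpreter and connection behaviour can be pinned per host in the inventory rather than relying on discovery; a minimal sketch with placeholder names (whether this resolves the quoting error above is not confirmed):

# Sketch only: hostname and interpreter path are placeholders.
[hmcs]
myhmc ansible_connection=local ansible_python_interpreter=/usr/bin/python3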

IBM HMC groups

I have created groups in the IBM HMC, i.e. a group for PROD servers, a group for DR servers, and a group for UAT servers.
UAT and PROD servers are in the same frames, but I want to differentiate them in the dynamic inventory. How can I pull/import/filter/group those HMC groups into the dynamic inventory file?
I have tried to define them like other filters (PartitionState or PartitionType), but that did not work, as nothing is pulled/read from my IBM HMC. I have also tried defining them like other groups (AIX_72 or AIX_71), but that did not help either.
Does this feature already exist? If it does, how can I use it?

power_lpar_instance HMC fails to log in

When running the module "power_lpar_instance" to power down a lpar, I get a HMC login failure. We run ssh keys and have a valid password. Neither allow access to the HMC.
This behaviour does not occur on the "power_lpar_migation".
The ansible server can log into our HMC without any issue. I tested this across 4 HMCs and all four have experienced the same issue.
Unless the documentation is wrong, there appears to be an issue with the authentication section.

fatal: [hmcdc1]: FAILED! => { "changed": false, "invocation": { "module_args": { "action": "shutdown", "advanced_info": null, "all_resources": null, "delete_vdisks": null, "hmc_auth": { "password": null, "username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER" }, "hmc_host": "hmcdc1", "iIPLsource": null, "keylock": null, "max_mem": null, "max_proc": null, "max_proc_unit": null, "max_virtual_slots": null, "mem": null, "min_mem": null, "min_proc": null, "min_proc_unit": null, "npiv_config": null, "os_type": null, "physical_io": null, "proc": null, "proc_compatibility_mode": null, "proc_mode": null, "proc_unit": null, "prof_name": null, "retain_vios_cfg": null, "shared_proc_pool": null, "state": null, "system_name": "MYMANAGED_SERVER", "virt_network_config": null, "vm_name": "aixgold", "volume_config": null, "weight": null } }, "msg": "Logon to HMC failed" }

---
- name: HMC poweroff
  hosts: HMC
  collections:
      - ibm.power_hmc
  connection: local

  tasks:

  - name: Shutdown a logical partition.
    powervm_lpar_instance:
        hmc_host: hmcdc1
        hmc_auth:
           username: '{{ ansible_user }}'
           password: *********
        system_name: MYMANAGED_SERVER
        vm_name: aixgold
        action: shutdown

Missing meta/runtime.yml

Describe the bug
this collection is missing a meta/runtime.yml file which will be used by Automation Hub/ansible to determine which version(s) of ansible this collection supports.
To Reproduce
navigate to top level of collection, no meta directory found.
Expected behavior

runtime.yml file present for housing versions of ansible supported

Screenshots
ex: requires_ansible: ">=2.9,<2.11"
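
A minimal sketch of the requested file, reusing the example constraint above (the actual supported range is the maintainers' call):

# meta/runtime.yml - sketch only; the version range is taken from the example above.
requires_ansible: ">=2.9,<2.11"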

Environment (please complete the following information):
n/a

Additional context
docs for meta directory https://docs.ansible.com/ansible/latest/dev_guide/developing_collections.html#meta-directory

Ansible PowerPC HMC: LPAR creation failed

  • LPAR creation using Ansible fails with error "urllib.error.HTTPError: HTTP Error 500"
  • Ansible Collection: ibm.power_hmc
  • Ansible Module: powervm_lpar_instance
  • create_lpar.yml:
---

- name: Create an AIX/Linux lpar instance and install OS
  hosts: '{{ myhost }}'
  collections:
    - ibm.power_hmc
  connection: local

  tasks:
    - name: Create lpar instance 
      powervm_lpar_instance:
        hmc_host: '{{ inventory_hostname }}'
        hmc_auth:
          username: 'user'
          password: 'pass'
        system_name: 'xx'
        vm_name: 'test001'
        virt_network_config:
          - network_name: 'xx'
        npiv_config:
          - vios_name: 'xx'
            fc_port: 'xx'
        os_type: aix_linux
        state: present

ansible.cfg:

[defaults]
host_key_checking = False
remote_tmp = /tmp/test/

[galaxy]
server_list = automation_hub

[galaxy_server.automation_hub]
url=https://cloud.redhat.com/api/automation-hub/
auth_url=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token

token=''

ansible --version

ansible [core 2.12.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/mahmab/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /home/mahmab/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.12 (default, Sep 16 2021, 10:52:37) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
jinja version = 2.10.3
libyaml = True

cat /etc/system-release

Red Hat Enterprise Linux release 8.6 (Ootpa)

Partition Mobility support please!!!

Hi!!

I think it is a very good and necessary initiative. Seems that is in very early stage... lots of encouragement!!!!. I dream of being able to run partition mobility tasks and include it in our automated ansible workflow validation for our disaster recovery plan.

Best regards.

Can you provide NPIV details as facts?

It will be very useful to retrieve WWPN of NPIV as facts - for the new created LPAR it will facilitate the provisioning. The WWPN can be then used to generate the host in the storage system and the zones in FC switch. It will result in a full automation of provisioning process.

Add capped uncapped mode and uncap_weight

It would be good to add capped/uncapped mode and uncapped weight so a more complete profile can be created; another issue, already submitted by another user, covers max memory and processors.
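
For reference, the powervm_lpar_instance argument dump near the top of this page lists proc_mode, weight and shared_proc_pool parameters in newer collection versions. A hedged sketch, assuming 'shared' is an accepted proc_mode value and that weight is the uncapped weight (verify against the module documentation):

# Sketch only: parameter names come from the argument dump earlier on this page;
# the values 'shared' and 128 are assumptions.
- name: Create a shared, uncapped LPAR
  ibm.power_hmc.powervm_lpar_instance:
    hmc_host: '{{ inventory_hostname }}'
    hmc_auth:
      username: '{{ hmc_user }}'
      password: '{{ hmc_pass }}'
    system_name: myframe
    vm_name: mylpar
    proc: 2
    proc_unit: 0.5
    proc_mode: shared
    weight: 128
    mem: 4096
    os_type: aix_linux
    state: present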

ibm.power_hmc.powervm_lpar_instance allow to add/remove new hdisks

Is your feature request related to a problem? Please describe.
Adding new disks through the HMC GUI is a pain in the ass when you have thousands of LUNs in each VIO server, as you can't filter or search inside the GUI to add disks.

Describe the solution you'd like
If the module had a "modify" option, we could add disks, network adapters, etc. to an existing LPAR.

Describe alternatives you've considered
Using a lot of raw commands on VIO servers, HMC and the lpars :(

Additional context
Adding a disk still sucks on HMC version 10, and I think even in 10 years it will be the same :(


Thanks,
Carsten

Key authentication doesn't work

Using version 1.6.0 of the collection, I found that for module powervm_lpar_instance authentication with SSH public/private key does not work.
The hscroot user can login to the HMC remotely using keys.

The issue
The respective part in the playbook looks like this:
...
tasks:
  - name: create lpar
    powervm_lpar_instance:
      hmc_host: '{{ inventory_hostname }}'
      hmc_auth:
        username: '{{ ansible_user }}'
...

Running the playbook, I got this error:
...
The full traceback is:
File "/tmp/ansible_powervm_lpar_instance_payload_g9343xdc/ansible_powervm_lpar_instance_payload.zip/ansible_collections/ibm/power_hmc/plugins/modules/powervm_lpar_instance.py", line 987, in create_partition
File "/tmp/ansible_powervm_lpar_instance_payload_g9343xdc/ansible_powervm_lpar_instance_payload.zip/ansible_collections/ibm/power_hmc/plugins/module_utils/hmc_rest_client.py", line 239, in init
self.session = self.logon()
File "/tmp/ansible_powervm_lpar_instance_payload_g9343xdc/ansible_powervm_lpar_instance_payload.zip/ansible_collections/ibm/power_hmc/plugins/module_utils/hmc_rest_client.py", line 253, in logon
timeout=300)
File "/tmp/ansible_powervm_lpar_instance_payload_g9343xdc/ansible_powervm_lpar_instance_payload.zip/ansible/module_utils/urls.py", line 1390, in open_url
unredirected_headers=unredirected_headers)
File "/tmp/ansible_powervm_lpar_instance_payload_g9343xdc/ansible_powervm_lpar_instance_payload.zip/ansible/module_utils/urls.py", line 1294, in open
r = urllib_request.urlopen(*urlopen_args)
File "/opt/freeware/lib64/python3.7/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/opt/freeware/lib64/python3.7/urllib/request.py", line 531, in open
response = meth(req, response)
File "/opt/freeware/lib64/python3.7/urllib/request.py", line 641, in http_response
'http', request, response, code, msg, hdrs)
File "/opt/freeware/lib64/python3.7/urllib/request.py", line 569, in error
return self._call_chain(*args)
File "/opt/freeware/lib64/python3.7/urllib/request.py", line 503, in _call_chain
result = func(*args)
File "/opt/freeware/lib64/python3.7/urllib/request.py", line 649, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
fatal: [XXXXXXXX]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"action": null,
"advanced_info": null,
"all_resources": null,
"delete_vdisks": null,
"hmc_auth": {
"password": null,
"username": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"
},
...
"msg": "PMCSS007: The authorization filter did not detect a valid session. Access has been denied. Check the allow remote access via the web setting in the console user properties, or check the request for a valid session id. "

Expected behavior
Playbook is being executed on HMC. The same authentication scheme is working without problems using module hmc_command.

Environment

  • HMC: V9R2 M953
  • Python Version: 3.7.12
  • OpenSSH Version: 8.1
  • Ansible control host: AIX 7.2TL5SP4

powervm_lpar_instance install_os action is stuck with HMC code 0608

Describe the bug
OS installation using powervm_lpar_instance module is stuck with code 0608 on HMC. Then installation fails with timeout error.

"msg": "AIX Installation failed even after waiting for 30 mins and the reference code is 0608"

To Reproduce
Command :
ansible-playbook lpar_install.yml

Playbook :

- hosts: localhost
  become: yes
  gather_facts: no

  tasks:

  - name: Launch AIX installation
    ibm.power_hmc.powervm_lpar_instance:
      hmc_host: "{{ hmc_host }}"
      hmc_auth:
        username: "{{ vault.hmc.username }}"
        password: "{{ vault.hmc.password }}"
      action: install_os
      system_name: "{{ client_mngsyst }}"
      vm_name: "{{ nim_client }}"
      install_settings:
        vm_ip: "{{ nim_client_ip }}"
        nim_ip: "{{ nim_server_ip }}"
        nim_gateway: "{{ nim_client_gw }}"
        nim_subnetmask: "{{ nim_client_nm }}"
        nim_vlan_id: "{{ vlan_number }}"
        timeout: 30

Expected behavior
AIX LPAR is started and installed.

Environment (please complete the following information):

  • HMC:
 hmc# lshmc -V
"version= Version: 9
 Release: 2
 Service Pack: 952
HMC Build level 2201042318
MH01874 - HMC V9R2 M951
MH01884 - iFix for HMC V9R2 M950
MH01890 - iFix for HMC V9R2 M951
MH01905 - HMC V9R2 M952
MH01917 - iFix for HMC V9R2 M952
","base_version=V9R2
"
  • Python Version : Python 3.6.8

  • OpenSSH Version : 1.0.2k

Additional context
Installation is working fine when we use ibm.power_hmc.hmc_command module with lpar_netboot :

- name: Launch AIX installation
  ibm.power_hmc.hmc_command:
    hmc_host: "{{ hmc_host }}"
    hmc_auth:
      username: "{{ vault.hmc.username }}"
      password: "{{ vault.hmc.password }}"
    cmd: "lpar_netboot -t ent -s auto -d auto -S {{ nim_server_ip }} -G {{ nim_client_gw }} -K {{ nim_client_nm }} -C {{ nim_client_ip }} {{ nim_client }} {{ hmc_boot_profile }} {{ client_mngsyst }}"

Incorrect check for available core count in powervm_lpar_instance

Hi, when using powervm_lpar_instance modules to build a new LPAR profile, the proc_units value is incorrectly checked.

For example on a frame with 1.4 cores available and proc_units: 0.5 and proc: 2 requested in the module, it fails saying please select 1 or less for proc value. If you lower to 1 virtual proc it works.

ibm.power_hmc.powervm_lpar_instance module: Allow Virtual Fibre slot numbers to be specified

ibm.power_hmc.powervm_lpar_instance module: Allow Virtual Fibre slot numbers to be specified
When creating virtual Fibre adapters for NPIV, the powervm_lpar_instance module uses the next available slots.
A client has requested that they would like to specify the slot numbers to conform with their existing standards.

Desired solution

  1. Allow selection of virtual fibre client adapter slot numbers to a given value.
  2. Allow selection of virtual fibre server adapter slot number to either
    a) a given value
    or
    b) the next available number after a starting value.

Client standard procedure dictates that the server slots created would be in a sequential block from a given starting value.
Such that if 3 adapters are requested per VIOS, then the next available block of 3 consecutive numbers would be used after the starting value.

Workaround

Currently the client has created additional virtual fibre adapters with the module to ensure the client slot numbers match their standards, and then created a playbook to remove the unwanted adapters.
Whilst the client slot numbering can conform to their standards with this method, the server slot numbers are in the wrong range and could interfere with their numbering convention for virtual network adapters.

As the selection of a free consecuative block may be tricky to engineer, providing a given slot number is probably a simpler solution.

Power an lpar with network boot

Hello,
It would be very useful if we could set the bootlist of an LPAR/VM before powering it on, with something similar to:

chsyscfg -r lpar -m {{ powervm }} -i "name={{ lparname }},boot_string=\"{{network[0].of_path_name | default ('/vdevice/l-lan@3000000c')}}:speed=auto,duplex=auto,bootp,{{kickstart_server.ip}},,{{ network[0].ip }},{{ network[0].gateway }}\""

Or, may be allow lpar_netboot :

lpar_netboot -f -T off -t ent -s auto -d auto -m mac_address -S serverIP -G gw_IP -C clientIP lpar_name profile mnged_node

ibm.power_hmc.hmc_command doesn't provide command_output

Describe the bug
When I run the ibm.power_hmc.hmc_command in a playbook and register to a variable I am unable to call that output further in the playbook.

To Reproduce
Steps to reproduce the behavior:
Run the following playbook, in AWX, with a custom EE which includes the ibm.power_hmc collection.

---
- hosts: "{{ node }}"
  gather_facts: true
  vars:
    hmc: 192.168.51.33
    ansible_user: ansible
    hmc_password: xxxxxx

  tasks:
  - name: Get the lpar_name from the node value
    shell: |
      lparstat -i | 
      grep "Partition Name" | 
      awk -F ": " '{print $2}'
    register: lpar_name

  - name: Get the system_name from the hmc
    ibm.power_hmc.hmc_command:
      hmc_host: "{{ hmc }}"
      hmc_auth:
        username: '{{ ansible_user }}'
        password: '{{ hmc_password }}'
      cmd: "for sys in $(lssyscfg -r sys -Fname); do echo ${sys}:$(lssyscfg -r lpar -m $sys -Fname,lpar_id); done | grep {{ lpar_name.stdout }} | cut -d : -f 1"
    register: frame
    delegate_to: 127.0.0.1

  - name : Print frame
    debug:
      msg: "Got {{ frame.command_output }}"

The following error is seen:

{
  "msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'command_output'\n\nThe error appears to be in '/runner/project/ansible/playbooks/powerppc_snap_lun_all.yml': line 51, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n  - name : Print frame\n    ^ here\n",
  "_ansible_no_log": false
}

Expected behavior
Expecting frame.command_output to show the system name for the specified node (lpar name).

Environment (please complete the following information):

  • HMC: V10R2
  • Python Version Python 3.9.2
  • OpenSSH Version OpenSSH_8.0p1, OpenSSL 1.1.1g

Update the hyperlinks in the documentation

Hi,

Looks like you've copied over the paths (which contain dev-collection) from the other doc sources found within ansible-power; you'll need to carefully review all source links and modify/confirm that they match branches within the HMC collection.

Max/min support for proc and memory

Hello,

Could you add parameter for min/max proc unit and memory to module powervm_lpar_instance
for example :
proc: 1
proc_unit: 0.5
mem: 2048
min_mem: 1024
max_mem: 8196
min_proc: 1
max_proc: 4
min_proc_unit: 0.1
max_proc_unit: 2.0

Regards
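
For reference, the powervm_lpar_instance argument dump at the top of this page already lists these parameter names (min_mem, max_mem, min_proc, max_proc, min_proc_unit, max_proc_unit). A hedged sketch of how they might be passed, assuming a collection version that accepts all of them together:

# Sketch only: parameter names are taken from the argument dump earlier on this
# page; system/LPAR names and credentials are placeholders.
- name: Create an LPAR with explicit min/desired/max CPU and memory
  ibm.power_hmc.powervm_lpar_instance:
    hmc_host: '{{ inventory_hostname }}'
    hmc_auth:
      username: '{{ hmc_user }}'
      password: '{{ hmc_pass }}'
    system_name: myframe
    vm_name: mylpar
    proc: 1
    min_proc: 1
    max_proc: 4
    proc_unit: 0.5
    min_proc_unit: 0.1
    max_proc_unit: 2.0
    mem: 2048
    min_mem: 1024
    max_mem: 8196
    os_type: aix_linux
    state: present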

Shutdown parameter

Hello

I'd like to know how to specify the type of shutdown I want to initiate :

  • os shutdown
  • os shutdown immediate
  • immediate
  • ...

I'm not able to find that parameter in the documentation.

Thank you
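
For reference, the powervm_lpar_instance argument dump at the top of this page shows shutdown_option and restart_option parameters in newer collection versions. A hedged sketch, assuming a value such as Immediate is accepted (check the module documentation for the real choices):

# Sketch only: shutdown_option appears in the argument dump earlier on this page;
# the value 'Immediate' is an assumption - verify the accepted choices.
- name: Shut down a logical partition immediately
  ibm.power_hmc.powervm_lpar_instance:
    hmc_host: '{{ inventory_hostname }}'
    hmc_auth:
      username: '{{ hmc_user }}'
      password: '{{ hmc_pass }}'
    system_name: myframe
    vm_name: mylpar
    action: shutdown
    shutdown_option: Immediate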

ibm.power_hmc.powervm_lpar_instance module - Shared Processor Pool

Hi there.
Shared Processor Pool is an important attribute from a licensing point of view for IBM i LPARs.
Please can you let me know if this feature is currently available in this module? At a glance I could not see anything related to it.

This would be good to have if it could also be included in the next content release.

Kind Regards
Sasi

HMC update/upgrade from latest at ibmwebsite

From the HMC web GUI there is an option to load the version from the IBM website and install it.
The module ibm.power_hmc.firmware_update also has this feature.
Would it be possible to add this to the HMC update module as well?

ibm.power_hmc.powervm_lpar_instance module: CurrentAvailableSystemProcessorUnits check does not work for Shared CPU LPARs (see issue #32)

Module: ibm.power_hmc.powervm_lpar_instance
Version: 1.3.0

Issue described in #32 still exists as the current code performs a check that is only valid when creating Dedicated CPU Partitions.

When creating Dedicated CPU LPARs the function "validate_proc_mem" checks the CurrentAvailableSystemProcessorUnits and exits if the number of required processors "proc" is higher.
However, for Shared Processor partitions "proc" is the number of required virtual processors and "proc_unit" is the number of required processing units, but the function still performs the same check (i.e. proc > CurrentAvailableSystemProcessorUnits).

e.g:
Available CPUs = 0.3
proc = 1
proc_unit = 0.1

Module will error:

fatal: [hmc10]: FAILED! => changed=false
  msg: 'HmcError: b''Available system proc units is not enough. Provide value on or below 0'''

Looking at the module I see a couple of issues exist.

1. The "validate_proc_mem" function defines a new variable "proc_units" that is never set
Line 371:
def validate_proc_mem(system_dom, proc, mem, proc_units=None):
proc_units is only used in this function and is not set when function is invoked.

Line 750:
validate_proc_mem(server_dom, int(proc), int(mem))
should be something like:
validate_proc_mem(server_dom, int(proc), int(mem), proc_unit)

2. The "validate_proc_mem" function always performs check for sufficient dedicated CPUs
Lines 387-388:

    if proc > int_avail_proc:
        raise HmcError("Available system proc units is not enough. Provide value on or below {0}".format(str(int_avail_proc)))

This test is only valid when creating Dedicated CPU LPARs - not Shared CPU LPARs

Original function:

    371 def validate_proc_mem(system_dom, proc, mem, proc_units=None):
    372
    373     curr_avail_proc_units = system_dom.xpath('//CurrentAvailableSystemProcessorUnits')[0].text
    374     int_avail_proc = int(float(curr_avail_proc_units))
    375
    376     if proc_units:
    377         min_proc_unit_per_virtproc = system_dom.xpath('//MinimumProcessorUnitsPerVirtualProcessor')[0].text
    378         float_min_proc_unit_per_virtproc = float(min_proc_unit_per_virtproc)
    379         if round(float(proc_units) % float_min_proc_unit_per_virtproc, 2) != float_min_proc_unit_per_virtproc:
    380             raise HmcError("Input processor units: {0} must be a multiple of {1}".format(proc_units, min_proc_unit_per_virtproc))
    381
    382     curr_avail_mem = system_dom.xpath('//CurrentAvailableSystemMemory')[0].text
    383     int_avail_mem = int(curr_avail_mem)
    384     curr_avail_lmb = system_dom.xpath('//CurrentLogicalMemoryBlockSize')[0].text
    385     lmb = int(curr_avail_lmb)
    386
    387     if proc > int_avail_proc:
    388         raise HmcError("Available system proc units is not enough. Provide value on or below {0}".format(str(int_avail_proc)))
    389
    390     if mem % lmb > 0:
    391         raise HmcError("Requested mem value not in mutiple of block size:{0}".format(curr_avail_lmb))
    392
    393     if mem > int_avail_mem:
    394         raise HmcError("Available system memory is not enough. Provide value on or below {0}".format(curr_avail_mem))
    395

My solution is as follows:

    371 def validate_proc_mem(system_dom, proc, mem, proc_unit=None):
    372
    373     curr_avail_proc_units = system_dom.xpath('//CurrentAvailableSystemProcessorUnits')[0].text
    374     curr_avail_procs = float(curr_avail_proc_units)
    375     int_avail_proc = int(curr_avail_procs)
    376
    377     if proc_unit:
    378         min_proc_unit_per_virtproc = system_dom.xpath('//MinimumProcessorUnitsPerVirtualProcessor')[0].text
    379         float_min_proc_unit_per_virtproc = float(min_proc_unit_per_virtproc)
    380         if round(float(proc_unit) % float_min_proc_unit_per_virtproc, 2) != float_min_proc_unit_per_virtproc:
    381             raise HmcError("Input processor units: {0} must be a multiple of {1}".format(proc_unit, min_proc_unit_per_virtproc))
    382
    383         if proc_unit > curr_avail_procs:
    384             raise HmcError("{0} Available system proc units is not enough for {1} shared CPUs. Provide value on or below {0}".format(str(curr_avail_procs),str(proc_unit)))
    385
    386     else:
    387         if proc > curr_avail_procs:
    388             raise HmcError("{2} Available system proc units is not enough for {1} dedicated CPUs. Provide value on or below {0} CPUs".format(str(int_avail_proc),str(proc),str(curr_avail_procs)))
    389
    390     curr_avail_mem = system_dom.xpath('//CurrentAvailableSystemMemory')[0].text
    391     int_avail_mem = int(curr_avail_mem)
    392     curr_avail_lmb = system_dom.xpath('//CurrentLogicalMemoryBlockSize')[0].text
    393     lmb = int(curr_avail_lmb)
    394
    395     if mem % lmb > 0:
    396         raise HmcError("Requested mem value not in mutiple of block size:{0}".format(curr_avail_lmb))
    397
    398     if mem > int_avail_mem:
    399         raise HmcError("Available system memory is not enough. Provide value on or below {0}".format(curr_avail_mem))
    400

In my code I have also changed "proc_units" to "proc_unit" for consistency with the rest of the module.
I have also added more detail to the messages to explain the failure.

Example 1. Shared CPU:
Available CPUs = 0.3
proc = 1
proc_unit = 0.5

TASK [Create logical partition] ******************************************************************************************************************************************************************************
fatal: [hmc10]: FAILED! => changed=false
  msg: 'HmcError: b''0.3 Available system proc units is not enough for 0.5 shared CPUs. Provide value on or below 0.3'''

Example 2. Dedicated CPU:
Available CPUs = 0.3
proc = 1

TASK [Create logical partition] ******************************************************************************************************************************************************************************
fatal: [hmc10]: FAILED! => changed=false
  msg: 'HmcError: b''0.3 Available system proc units is not enough for 1 dedicated CPUs. Provide value on or below 0 CPUs'''

Vios facts should contain cpu/memory just like powervm_lpar_instance

Is your feature request related to a problem? Please describe.
When gathering facts from a VIOS, it does not show CPU/memory stats, for example:

"vio01": {
        "changed": false,
        "failed": false,
        "vios_info": {
            "affinity_group_id": "none",
            "allow_perf_collection": "0",
            "auto_start": "0",
            "boot_mode": "norm",
            "curr_lpar_proc_compat_mode": "POWER8",
            "curr_profile": "xxxx",
            "default_profile": "xxxx",
            "desired_lpar_proc_compat_mode": "default",
            "logical_serial_num": "xxxxxx",
            "lpar_avail_priority": "191",
            "lpar_env": "vioserver",
            "lpar_id": "2",
            "lpar_keylock": "norm",
            "migr_storage_vios_data_status": "unavailable",
            "migr_storage_vios_data_timestamp": "unavailable",
            "msp": "1",
            "name": "xxxxx",
            "os": "vios",
            "os_version": "VIOS 3.1.1.21",
            "power_ctrl_lpar_ids": "none",
            "powervm_mgmt_capable": "0",
            "redundant_err_path_reporting": "0",
            "resource_config": "1",
            "rmc_ipaddr": "xxx.xxx.xxx.xxx",
            "rmc_state": "active",
            "shared_proc_pool_util_auth": "0",
            "state": "Running",
            "sync_curr_profile": "0",
            "time_ref": "0",
            "vtpm_enabled": "0",
            "vtpm_encryption": "null",
            "vtpm_version": "null",
            "work_group_id": "none"
        }
    }

Describe the solution you'd like

add stuff like

"CurrentMemory": 30720.0,
            "CurrentProcessingUnits": 0.5,
            "CurrentProcessors": 2,
            "CurrentSharedProcessorPoolID": "4",
            "CurrentUncappedWeight": 10,
            "DedicatedVirtualNICs": [],
            "Description": "",
            "HasDedicatedProcessors": "false",
            "HasPhysicalIO": "false",
            "IsVirtualServiceAttentionLEDOn": "false",
            "LastActivatedProfile": "xxxxx",
            "MaximumMemory": "30720",
            "MaximumProcessingUnits": "10",
            "MaximumVirtualProcessors": "10",
            "MemoryMode": "Dedicated",
            "MigrationState": "Not_Migrating",
            "MinimumMemory": "3072",
            "MinimumProcessingUnits": "0.1",
            "MinimumVirtualProcessors": "1",

Describe alternatives you've considered
Using powervm_lpar_instance on a VIOS, but that does not work.

Additional context
Add any other context or screenshots about the feature request here.

When reporting CPU using facts gathered from the HMC, we miss the VIOS CPU/memory settings, resulting in the belief that there is CPU/memory available for new LPARs while those resources are actually allocated to the VIOS.

power_hmc.powervm_lpar_instance may assign a physical_io location that is a superstring of the requested one

Describe the bug
Adding the following list as the parameter for physical_io:
- P1-C12
- P1-C4
- P1-C3
- P1-C9
- P1-T4

Results in these "slots" being added to the LPAR:
- P1-C12
- P1-C49
- P1-C3
- P1-C9
- P1-T4

FWIW, C4 is the FC HBA I want to use, and C49 is an internal disk controller (with no attached drives). This is probably just a broken match that isn't anchored at the end of the comparison properly.

Additional context
The managed system is a Power S922 (A-model). A different model of system may not have any locations that are a proper substring of another one.

Allow multiple virt_networks and volumes on lpar creation

Is your feature request related to a problem? Please describe.
Most LPARs I have to create have a mirrored rootvg and at least 2 network interfaces in different VLANs. It would be wonderful if it were possible to add them at creation.

Describe the solution you'd like
For the virtual network interfaces something like a list of dictionaries to configure them

      virt_network_config:
        interface0:
          network_name: VLAN1233-ETHERNET0
          slot_number: 49
        interface1:
          network_name: VLAN1234-ETHERNET0
          slot_number: 50

and same for volume configuration (with name or size)

      volume_config:
        volume0:
          vios_name: zepp31-vios1
          volume_name: hdisk4
        volume1:
          vios_name: zepp31-vios1
          volume_name: hdisk5

Describe alternatives you've considered
Create LPARs with one interface and 1 volume and then use the HMC UI to add other interface + volume.
Doing this manually takes a lot of time and is slow :(

Alias from /etc/host not working in powervm_inventory

Aliases defined in /etc/hosts are not recognized when used in a dynamic inventory source file.

To Reproduce
Create an dynamic inventory source file, with the hostname set to an alias from /etc/hosts. Run ansible-inventory -i <inv_file.power_hmc.yml> --list

WARNING]: Unable to connect to HMC host hmc_prod4: Unknown http error
[WARNING]: * Failed to parse /home/e6094159gx/tio_opensystems_aix/inventory_hmc.power_hmc.yml with auto plugin: There are no systems defined to any valid HMCs provided or no valid
connections were established.

Expected behavior
Aliases should be followed and FQDN should not be required.

Ansible/System info

ansible 2.9.6
python version = 3.6.8

I'll take a look at this..

ibm.power_hmc.powervm_inventory SystemName name missing of "identify_unknown_by" lpars

Describe the bug
When gathering facts of an HMC I have added the not running lpars. I assumed that even if the lpar is powered off the SystemName of where the lpar is would be added

plugin: ibm.power_hmc.powervm_inventory
hmc_hosts:
  - hmc: "xxx"
    user: "xxx"
    password: "xxx"
identify_unknown_by: PartitionName
compose:
  server_name: SystemName
keyed_groups:
  - prefix: "server_group"
    key: SystemName
  - prefix: "partition_type"
    key: PartitionType


Expected behavior
SystemName should be available even if the lpar is powered off

Additional context

For reporting purposes I am gathering facts about the LPARs via the HMC. I would expect that gathering facts for a powered-off system should work, yet it needs the SystemName, and that is not inserted into the inventory.

Please allow physical I/O slot assignments when creating LPAR

Is your feature request related to a problem? Please describe.
powervm_lpar_instance is so very close to being what I need.

Describe the solution you'd like
It would be extremely helpful if the module would allow assigning I/O slots when defining a partition. It would be great if they could be specified with full or partial (but unique) location codes. For example, if there is only one drawer in the system, we could avoid specifying the part of the location before the first '-', which identifies the drawer by serial.

Please do the DRC_index lookups in the module, as most mortals don't use them regularly.

If I still have to do the lookups for the first part of the DRC_slot_name it would still be much better than having to assign the slot myself later.

Describe alternatives you've considered
I am currently interacting with the HMC via the raw module, doing the text processing necessary to convert locations to DRC_index values with Jinja2, and then updating the partition profile to contain only the slots I need with raw again. This is a ball of ugly: more than half of the playbook is dealing with slot assignments, and the Ansible facilities available for error checking with raw are less than elegant.

Create Lpar with vNIC network adapter

Is your feature request related to a problem? Please describe.
Hello, the new method to deploy LPARs is vNIC.

Describe the solution you'd like
I would like to have a vNIC Lpar with multiple backing adapters.

Describe alternatives you've considered

Additional context

lpar_netboot unable to find bootable adapter

Running the module ibm.power_hmc.powervm_lpar_instance with action: install_os is not able to get a pingable virtual ethernet adapter.
A manual attempt from the SMS menu does ping the NIM server.

The issue is, that in fetchIODetailsForNetboot in plugins/module_utils/hmc_resource.py that calls lpar_netboot, the -K netmask parameter is missing and an attempt to ping the nim server fails.
In order to resolve it I added
self.OPT['LPAR_NETBOOT']['-K'] + submask +
in the function, expanded input parameters of the functions to include netmask and fixed calling of this function in power_hmc/plugins/modules/powervm_lpar_instance.py to include netmask

To Reproduce
ibm.power_hmc.powervm_lpar_instance:
  hmc_auth:
    username: xxx
    password: yyy
  hmc_host: hmc
  action: install_os
  install_settings:
    vm_ip: 192.168.0.64
    nim_ip: 192.168.0.9
    nim_gateway: 192.168.0.1
    nim_subnetmask: 255.255.255.0
  vm_name: "{{lparname}}"
  system_name: "{{servername}}"

Expected behavior
installing OS from nim

Environment (please complete the following information):
ibm.power_hmc 1.6.0
nim oslevel: 7300-00-02-2220
hmc version: HMC V9R1 M942
python: 3.7.9

Unable to install latest fix for HMC, aka MF71299

Describe the bug
Unable to install latest fix for HMC, aka MF71299

To Reproduce
Execute that playbook :

- hosts: all
  connection: local
  gather_facts: false

  tasks:

    - name: update hmc
      ibm.power_hmc.hmc_update_upgrade:
        hmc_host: "{{ inventory_hostname }}"
        hmc_auth:
          username: XXXXXXXXX
          password: XXXXXXXXX
        build_config:
          location_type: disk
          build_file: /ansible/dev/rpzbg569/hmc/MF71299
        state: updated
      register: results

Error :
TASK [update hmc] ***************************************************************************************************************************************************************************************************************************
fatal: [labhmc342]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "msg": "Error: copy of image to hmc is incomplete. Necessary files are missing"}

Expected behavior
Fix should be installed successfully
FYI, with previous fix, aka MF71191, I was able to install it without any error.

Screenshots

Environment (please complete the following information):

  • HMC: V2R10M1041
  • Python Version : 3.6.8
  • OpenSSH Version :openssh-8.0p1-19.el8_8.x86_64

Additional context
I had a quick look in the code of the module hmc_update_upgrade. After the copy of all files in the folder, it launches a "ls" command to check if all files have been copied.
Then it checks in the output if the first file is an ISO file.

For MF71299, the first file is not an ISO file, and so, the module is failing=>
ls -l
total 2003276
-rw-rw-r--. 1 rpzbg569 rpzbg569 3980 Oct 11 08:08 vMF71299.dd.xml
-rw-rw-r--. 1 rpzbg569 rpzbg569 2531 Oct 11 08:08 vMF71299.pd.sdd
-rw-rw-r--. 1 rpzbg569 rpzbg569 2051323904 Oct 11 08:14 vMF71299_ppc.iso
-rw-rw-r--. 1 rpzbg569 rpzbg569 10460 Oct 11 08:08 vMF71299.readme.html
-rw-rw-r--. 1 rpzbg569 rpzbg569 5335 Oct 11 08:08 vMF71299.txt

For MF71191, the first file is an ISO file, and the module is not failing =>
ls -al
total 9845424
-rw-r--r-x 1 rpzbg569 suadm 5040805888 Aug 30 07:09 HMC_Update_V10R2M1041_ppc.iso
-rw-r--r-x 1 rpzbg569 suadm 4435 Aug 30 07:09 MF71191.dd.xml
-rw-r--r-x 1 rpzbg569 suadm 2861 Aug 30 07:09 MF71191.pd.sdd
-rw-r--r-x 1 rpzbg569 suadm 16150 Aug 30 07:09 MF71191.readme.html
-rw-r--r-x 1 rpzbg569 suadm 9855 Aug 30 07:09 MF71191.txt

Could you please fix this module?

Thank you

powervm_dlpar shows "changed" as result even if nothing was changed

Describe the bug
ibm.power_hmc.powervm_dlpar shows "changed" as result even if nothing was changed.

changed: [hmc0001] => changed=true
  partition_info:
    mem: '2048'
    pool_id: '0'
    proc: '2'
    proc_unit: '0.2'
    sharing_mode: uncapped
    uncapped_weight: '128'
    vm_name: lpar0001

To Reproduce
Steps to reproduce the behavior:

  1. Add a task "ibm.power_hmc.powervm_dlpar" to your playbook
  2. Run the playbook
  3. Run the playbook again
  4. See "changed" as result, but there was nothing changed

Expected behavior
The result should show "changed" only when changes are made. Otherwise it would be great to get "ok" as result.

Environment (please complete the following information):

  • HMC: V10R2.1041.0
  • Python Version: 3.9.16
  • Ansible Collection: 1.8.1

Logon to HMC failed

Hi,

I'm using the following playbook :

  • hosts: "{{ groups['hmc'][0] }}"
    tasks:
    • name: reboot lpar
      ibm.power_hmc.powervm_lpar_instance:
      hmc_host: "{{ inventory_hostname }}"
      hmc_auth:
      username: "ansusr"
      password: "{{ acompw }}"
      system_name: uxauray01
      vm_name: admin-xt02-lp
      action: restart

"ansusr" is our Ansible user and has been created on the HMCs. I verified and can login with keys (passwordless) or with password from our Ansible controller to all HMCs by command line without issues. Yet the collection (Version 1.4) is returning "Logon to HMC failed" when i run the playbook... What can be causing this ? What am i missing here ? lxml has been installed by pip on the controller and Python version currently is at 3.7.11.
Debug output is :
The full traceback is:
File "/tmp/ansible_ibm.power_hmc.powervm_lpar_instance_payload_379kz9wn/ansible_ibm.power_hmc.powervm_lpar_instance_payload.zip/ansible_collections/ibm/power_hmc/plugins/modules/powervm_lpar_instance.py", line 982, in poweroff_partition
File "/tmp/ansible_ibm.power_hmc.powervm_lpar_instance_payload_379kz9wn/ansible_ibm.power_hmc.powervm_lpar_instance_payload.zip/ansible_collections/ibm/power_hmc/plugins/module_utils/hmc_rest_client.py", line 237, in init
self.session = self.logon()
File "/tmp/ansible_ibm.power_hmc.powervm_lpar_instance_payload_379kz9wn/ansible_ibm.power_hmc.powervm_lpar_instance_payload.zip/ansible_collections/ibm/power_hmc/plugins/module_utils/hmc_rest_client.py", line 251, in logon
timeout=300)
File "/tmp/ansible_ibm.power_hmc.powervm_lpar_instance_payload_379kz9wn/ansible_ibm.power_hmc.powervm_lpar_instance_payload.zip/ansible/module_utils/urls.py", line 1390, in open_url
unredirected_headers=unredirected_headers)
File "/tmp/ansible_ibm.power_hmc.powervm_lpar_instance_payload_379kz9wn/ansible_ibm.power_hmc.powervm_lpar_instance_payload.zip/ansible/module_utils/urls.py", line 1294, in open
r = urllib_request.urlopen(*urlopen_args)
File "/opt/freeware/lib64/python3.7/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/opt/freeware/lib64/python3.7/urllib/request.py", line 525, in open
response = self._open(req, data)
File "/opt/freeware/lib64/python3.7/urllib/request.py", line 543, in _open
'_open', req)
File "/opt/freeware/lib64/python3.7/urllib/request.py", line 503, in _call_chain
result = func(*args)
File "/tmp/ansible_ibm.power_hmc.powervm_lpar_instance_payload_379kz9wn/ansible_ibm.power_hmc.powervm_lpar_instance_payload.zip/ansible/module_utils/urls.py", line 467, in https_open
return self.do_open(self._build_https_connection, req)
File "/opt/freeware/lib64/python3.7/urllib/request.py", line 1352, in do_open
raise URLError(err)
fatal: [uxhmc01c1]: FAILED! => changed=false
invocation:
module_args:
action: restart
advanced_info: null
all_resources: null
delete_vdisks: null
hmc_auth:
password: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
username: VALUE_SPECIFIED_IN_NO_LOG_PARAMETER
hmc_host: uxhmc01c1
iIPLsource: null
keylock: null
max_virtual_slots: null
mem: null
npiv_config: null
os_type: null
physical_io: null
proc: null
proc_unit: null
prof_name: null
retain_vios_cfg: null
state: null
system_name: uxauray01
virt_network_config: null
vm_name: admin-xt02-lp
volume_config: null
msg: Logon to HMC failed

Pass ad-hoc HMC commands via SSH

I'd like to pass HMC commands via SSH through ansible out to multiple HMCs. Can we have a module/plugin created for hmc_cli

Example:
Run 'chsacfg -t custinfo -o set' to update customer information

VIOS Update

Upgrade/Update ioslevel on an IBM POWER VIO server

An ansible module to upgrade the VIO Server version

Using localised scripts, commands, mostly a manual process to perform the VIOS upgrade/update

While there is an Ansible module available as part of this POWER_HMC collection to install VIOS, I cannot find a module to upgrade/update an existing VIO server. Upgrading existing VIO servers is a more frequent requirement than installing fresh VIOS instances, so kindly look into this requirement and provide an Ansible module to accomplish the VIOS upgrade.

HMC_update_upgrade module finish with FAILED: Hmc not responding after reboot

Describe the bug
HMC_update_upgrade module finish with FAILED: Hmc not responding after reboot

TASK [debug] **********************************************************************************************************************
Tuesday 23 August 2022 07:51:03 EDT (0:00:00.092) 0:00:02.326 **********
ok: [vhmc_ansible] =>
missing_ifixes:

  • MH01857

TASK [Installing missing ifixes] **************************************************************************************************
Tuesday 23 August 2022 07:51:03 EDT (0:00:00.067) 0:00:02.394 **********
included: /home/ansau/project/fiserv-ansible/playbooks/hmc_update.yml for vhmc_ansible

TASK [Update the HMC to the V9R2M952 build level from sftp location] **************************************************************
Tuesday 23 August 2022 07:51:03 EDT (0:00:00.105) 0:00:02.499 **********
fatal: [vhmc_ansible]: FAILED! => changed=false
msg: 'FAILED: Hmc not responding after reboot'
...ignoring

TASK [pause] **********************************************************************************************************************
Tuesday 23 August 2022 08:53:52 EDT (1:02:48.995) 1:02:51.495 **********
[pause]

hscroot@vhmcansible:~> who -b
         system boot  Aug 23 11:53
hscroot@vhmcansible:~> lshmc -V
"version= Version: 9
Release: 1
Service Pack: 942
HMC Build level 2011270432
MH01759 - HMC V9R1 M920 [x86_64]
MH01787 - Required fix for HMC V9R1 M920 [x86_64]
MH01789 - HMC V9R1 Service Pack 1 Release (M921) [x86_64]
MH01800 - iFix for HMC V9R1 M921
MH01808 - iFix for HMC V9R1 M921
MH01810 - HMC V9R1 M930
MH01820 - iFix for HMC V9R1 M910+
MH01825 - iFix for HMC V9R1 M930
MH01857 - Save upgrade fix for HMC V9R1 M910+
MH01876 - HMC V9R1 M942
","base_version=V9R1
"

hscroot@vhmcansible:~>
Expected behavior
reconnect to rebooted HMC and check versions

Screenshots
If applicable, add screenshots to help explain your problem.

Environment (please complete the following information):
HMC: tested with several versions of HMC code [V9R952, V9R1M910]
Python 3.7.12
OpenSSH_8.1p1, OpenSSL 1.0.2u

Additional context

Question: Processor compatibility mode on a lpar definition

Can I use the lpar module to change the default profile of an lpar? For example when updating an AIX server I would like to change the Processor compatibility mode and make sure it is at POWER8. Thank you very much for your contribution to Ansible :)
