
dataverse-ansible's People

Contributors

atc0005, bencomp, danschmidt5189, dheles, djbrooke, donsizemore, janvanmansum, kcondon, landreev, pallinger, pdurbin


dataverse-ansible's Issues

Install Script - Default credentials are different

The default credentials differ depending on which install script you use. As a first-time deployer of the Ansible version on AWS, I had to ask others with more experience to learn that the default dataverseAdmin password is different.

Running ansible inside EC2 instance?

As part of this issue in the Dataverse repo, we are looking into ways to spin up branches for better testing and design use. The goal is to script the creation of AWS EC2 instances on which we can then run Dataverse.

I have a script working that creates the CentOS 7 AWS EC2 instance and drops in the name of a branch to be deployed (into a text file). My understanding from talking to @pdurbin (who is away at a conference) is that I should be able to leverage this Ansible repo to deploy the resources needed by Dataverse. I am a bit confused reading through this repo on how to move forward, as a lot of this work seems to be for using Vagrant alongside Ansible. I'm also confused because the usage line in the README.md points to a dataverse.pb file that seems to have been pulled from the repo during a refactor.

I am admittedly a complete Ansible noob, but any guidance would be a big help! Thanks!

makecache exception

I'm using https://github.com/IQSS/dataverse/blob/1a9808beb317a1092711e4b379af0eefca7f9c4d/scripts/installer/ec2-create-instance.sh and d1dcc4c and getting this error:

Using /etc/ansible/ansible.cfg as config file

PLAY [Install Dataverse] *******************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost] => {"ansible_facts": {"ansible_all_ipv4_addresses": ["172.31.84.86"], "ansible_all_ipv6_addresses": ["fe80::1033:8dff:fe55:9890"], "ansible_apparmor": {"status": "disabled"}, "ansible_architecture": "x86_64", "ansible_bios_date": "08/24/2006", "ansible_bios_version": "4.2.amazon", "ansible_cmdline": {"BOOT_IMAGE": "/boot/vmlinuz-3.10.0-862.3.2.el7.x86_64", "LANG": "en_US.UTF-8", "console": "ttyS0,115200", "crashkernel": "auto", "ro": true, "root": "UUID=8c1540fa-e2b4-407d-bcd1-59848a73e463"}, "ansible_date_time": {"date": "2019-05-29", "day": "29", "epoch": "1559126448", "hour": "10", "iso8601": "2019-05-29T10:40:48Z", "iso8601_basic": "20190529T104048866438", "iso8601_basic_short": "20190529T104048", "iso8601_micro": "2019-05-29T10:40:48.866524Z", "minute": "40", "month": "05", "second": "48", "time": "10:40:48", "tz": "UTC", "tz_offset": "+0000", "weekday": "Wednesday", "weekday_number": "3", "weeknumber": "21", "year": "2019"}, "ansible_default_ipv4": {"address": "172.31.84.86", "alias": "eth0", "broadcast": "172.31.95.255", "gateway": "172.31.80.1", "interface": "eth0", "macaddress": "12:33:8d:55:98:90", "mtu": 9001, "netmask": "255.255.240.0", "network": "172.31.80.0", "type": "ether"}, "ansible_default_ipv6": {}, "ansible_device_links": {"ids": {}, "labels": {}, "masters": {}, "uuids": {"xvda1": ["8c1540fa-e2b4-407d-bcd1-59848a73e463"]}}, "ansible_devices": {"xvda": {"holders": [], "host": "", "links": {"ids": [], "labels": [], "masters": [], "uuids": []}, "model": null, "partitions": {"xvda1": {"holders": [], "links": {"ids": [], "labels": [], "masters": [], "uuids": ["8c1540fa-e2b4-407d-bcd1-59848a73e463"]}, "sectors": "16775168", "sectorsize": 512, "size": "8.00 GB", "start": "2048", "uuid": "8c1540fa-e2b4-407d-bcd1-59848a73e463"}}, "removable": "0", "rotational": "0", "sas_address": null, "sas_device_handle": null, "scheduler_mode": "deadline", "sectors": "16777216", "sectorsize": "512", "size": "8.00 GB", "support_discard": "0", "vendor": 
null, "virtual": 1}}, "ansible_distribution": "CentOS", "ansible_distribution_file_parsed": true, "ansible_distribution_file_path": "/etc/redhat-release", "ansible_distribution_file_variety": "RedHat", "ansible_distribution_major_version": "7", "ansible_distribution_release": "Core", "ansible_distribution_version": "7", "ansible_dns": {"nameservers": ["172.31.0.2"], "search": ["ec2.internal"]}, "ansible_domain": "ec2.internal", "ansible_effective_group_id": 0, "ansible_effective_user_id": 0, "ansible_env": {"HISTSIZE": "1000", "HOME": "/root", "HOSTNAME": "ip-172-31-84-86.ec2.internal", "LANG": "en_US.UTF-8", "LOGNAME": "root", "MAIL": "/var/spool/mail/centos", "PATH": "/sbin:/bin:/usr/sbin:/usr/bin", "PWD": "/home/centos/dataverse", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/sh -c echo BECOME-SUCCESS-ihkygujwppxnogfkjndfeemlnhtnrtck ; /usr/bin/python /home/centos/.ansible/tmp/ansible-tmp-1559126448.0-137737360116954/AnsiballZ_setup.py", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "centos", "TERM": "unknown", "USER": "root", "USERNAME": "root", "_": "/usr/bin/python"}, "ansible_eth0": {"active": true, "device": "eth0", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "off [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": "off [fixed]", "netns_local": "off [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "off [fixed]", "tx_checksum_ipv4": "on [fixed]", 
"tx_checksum_ipv6": "off [requested on]", "tx_checksum_sctp": "off [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "on [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "off [fixed]", "tx_nocache_copy": "off", "tx_scatter_gather": "on", "tx_scatter_gather_fraglist": "off [fixed]", "tx_sctp_segmentation": "off [fixed]", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "off [requested on]", "tx_tcp_ecn_segmentation": "off [fixed]", "tx_tcp_mangleid_segmentation": "off", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "off [fixed]", "vlan_challenged": "off [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "172.31.84.86", "broadcast": "172.31.95.255", "netmask": "255.255.240.0", "network": "172.31.80.0"}, "ipv6": [{"address": "fe80::1033:8dff:fe55:9890", "prefix": "64", "scope": "link"}], "macaddress": "12:33:8d:55:98:90", "module": "xen_netfront", "mtu": 9001, "pciid": "vif-0", "promisc": false, "timestamping": ["rx_software", "software"], "type": "ether"}, "ansible_fibre_channel_wwn": [], "ansible_fips": false, "ansible_form_factor": "Other", "ansible_fqdn": "ip-172-31-84-86.ec2.internal", "ansible_hostname": "ip-172-31-84-86", "ansible_hostnqn": "", "ansible_interfaces": ["lo", "eth0"], "ansible_is_chroot": false, "ansible_iscsi_iqn": "", "ansible_kernel": "3.10.0-862.3.2.el7.x86_64", "ansible_lo": {"active": true, "device": "lo", "features": {"busy_poll": "off [fixed]", "fcoe_mtu": "off [fixed]", "generic_receive_offload": "on", "generic_segmentation_offload": "on", "highdma": "on [fixed]", "hw_tc_offload": "off [fixed]", "l2_fwd_offload": "off [fixed]", "large_receive_offload": "off [fixed]", "loopback": 
"on [fixed]", "netns_local": "on [fixed]", "ntuple_filters": "off [fixed]", "receive_hashing": "off [fixed]", "rx_all": "off [fixed]", "rx_checksumming": "on [fixed]", "rx_fcs": "off [fixed]", "rx_udp_tunnel_port_offload": "off [fixed]", "rx_vlan_filter": "off [fixed]", "rx_vlan_offload": "off [fixed]", "rx_vlan_stag_filter": "off [fixed]", "rx_vlan_stag_hw_parse": "off [fixed]", "scatter_gather": "on", "tcp_segmentation_offload": "on", "tx_checksum_fcoe_crc": "off [fixed]", "tx_checksum_ip_generic": "on [fixed]", "tx_checksum_ipv4": "off [fixed]", "tx_checksum_ipv6": "off [fixed]", "tx_checksum_sctp": "on [fixed]", "tx_checksumming": "on", "tx_fcoe_segmentation": "off [fixed]", "tx_gre_csum_segmentation": "off [fixed]", "tx_gre_segmentation": "off [fixed]", "tx_gso_partial": "off [fixed]", "tx_gso_robust": "off [fixed]", "tx_ipip_segmentation": "off [fixed]", "tx_lockless": "on [fixed]", "tx_nocache_copy": "off [fixed]", "tx_scatter_gather": "on [fixed]", "tx_scatter_gather_fraglist": "on [fixed]", "tx_sctp_segmentation": "on", "tx_sit_segmentation": "off [fixed]", "tx_tcp6_segmentation": "on", "tx_tcp_ecn_segmentation": "on", "tx_tcp_mangleid_segmentation": "on", "tx_tcp_segmentation": "on", "tx_udp_tnl_csum_segmentation": "off [fixed]", "tx_udp_tnl_segmentation": "off [fixed]", "tx_vlan_offload": "off [fixed]", "tx_vlan_stag_hw_insert": "off [fixed]", "udp_fragmentation_offload": "on", "vlan_challenged": "on [fixed]"}, "hw_timestamp_filters": [], "ipv4": {"address": "127.0.0.1", "broadcast": "host", "netmask": "255.0.0.0", "network": "127.0.0.0"}, "ipv6": [{"address": "::1", "prefix": "128", "scope": "host"}], "mtu": 65536, "promisc": false, "timestamping": ["rx_software", "software"], "type": "loopback"}, "ansible_local": {}, "ansible_lsb": {}, "ansible_machine": "x86_64", "ansible_machine_id": "b30d0f2110ac3807b210c19ede3ce88f", "ansible_memfree_mb": 2972, "ansible_memory_mb": {"nocache": {"free": 3500, "used": 288}, "real": {"free": 2972, "total": 3788, 
"used": 816}, "swap": {"cached": 0, "free": 0, "total": 0, "used": 0}}, "ansible_memtotal_mb": 3788, "ansible_mounts": [{"block_available": 1731437, "block_size": 4096, "block_total": 2094336, "block_used": 362899, "device": "/dev/xvda1", "fstype": "xfs", "inode_available": 4154609, "inode_total": 4193792, "inode_used": 39183, "mount": "/", "options": "rw,seclabel,relatime,attr2,inode64,noquota", "size_available": 7091965952, "size_total": 8578400256, "uuid": "8c1540fa-e2b4-407d-bcd1-59848a73e463"}], "ansible_nodename": "ip-172-31-84-86.ec2.internal", "ansible_os_family": "RedHat", "ansible_pkg_mgr": "yum", "ansible_proc_cmdline": {"BOOT_IMAGE": "/boot/vmlinuz-3.10.0-862.3.2.el7.x86_64", "LANG": "en_US.UTF-8", "console": ["tty0", "ttyS0,115200n8", "ttyS0,115200"], "crashkernel": "auto", "ro": true, "root": "UUID=8c1540fa-e2b4-407d-bcd1-59848a73e463"}, "ansible_processor": ["0", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz", "1", "GenuineIntel", "Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz"], "ansible_processor_cores": 2, "ansible_processor_count": 1, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 2, "ansible_product_name": "HVM domU", "ansible_product_serial": "ec26cfe9-50a5-8150-63f3-b2d353884ceb", "ansible_product_uuid": "EC26CFE9-50A5-8150-63F3-B2D353884CEB", "ansible_product_version": "4.2.amazon", "ansible_python": {"executable": "/usr/bin/python", "has_sslcontext": true, "type": "CPython", "version": {"major": 2, "micro": 5, "minor": 7, "releaselevel": "final", "serial": 0}, "version_info": [2, 7, 5, "final", 0]}, "ansible_python_version": "2.7.5", "ansible_real_group_id": 0, "ansible_real_user_id": 0, "ansible_selinux": {"config_mode": "enforcing", "mode": "enforcing", "policyvers": 31, "status": "enabled", "type": "targeted"}, "ansible_selinux_python_present": true, "ansible_service_mgr": "systemd", "ansible_ssh_host_key_ecdsa_public": 
"AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEkr34+Lttr0p87H4S4cOUUxGjCV4RC93VWjNjxyS15vMTjWWtPPKs9ZtxAPtyHAqoCDFpya3EGRprUHgBTTTBY=", "ansible_ssh_host_key_ed25519_public": "AAAAC3NzaC1lZDI1NTE5AAAAICNKwjbY6hKqgYHinuMTMPPyxQjaH6pNVM5zpwb0MqeJ", "ansible_ssh_host_key_rsa_public": "AAAAB3NzaC1yc2EAAAADAQABAAABAQCpoo9EyTlz2prnMwo9rVe9qDvv+o4hvS69e4V6JofVAVX9UrL4FBwp/jdYqF32NiIJ09/FzHgigPPQbvp6/gLEC7482DDIcsyEYYEIv5z7Nu6Dc9Rkv3JeBGOliYGzQwHZ6F0awxsp9i1IFwHgaa1nS0RX3OcaQCzrhdIfS+TmmyCQiEr6TEqkbXRLp4Xv9qka6mzPV8mTzdvPjo8F/80J4ob93/EJqgstsqtMSN51ADWJki3kPXD1TiCTWUu18j9JvXAUusVew+8wQTFdRJrj/6ClzghQG/exR94/r5WpEmkqYxSFDWKiLaJd0mkWW0bnjvLJQndVagscxl/Ls+eZ", "ansible_swapfree_mb": 0, "ansible_swaptotal_mb": 0, "ansible_system": "Linux", "ansible_system_capabilities": ["cap_chown", "cap_dac_override", "cap_dac_read_search", "cap_fowner", "cap_fsetid", "cap_kill", "cap_setgid", "cap_setuid", "cap_setpcap", "cap_linux_immutable", "cap_net_bind_service", "cap_net_broadcast", "cap_net_admin", "cap_net_raw", "cap_ipc_lock", "cap_ipc_owner", "cap_sys_module", "cap_sys_rawio", "cap_sys_chroot", "cap_sys_ptrace", "cap_sys_pacct", "cap_sys_admin", "cap_sys_boot", "cap_sys_nice", "cap_sys_resource", "cap_sys_time", "cap_sys_tty_config", "cap_mknod", "cap_lease", "cap_audit_write", "cap_audit_control", "cap_setfcap", "cap_mac_override", "cap_mac_admin", "cap_syslog", "35", "36+ep"], "ansible_system_capabilities_enforced": "True", "ansible_system_vendor": "Xen", "ansible_uptime_seconds": 56, "ansible_user_dir": "/root", "ansible_user_gecos": "root", "ansible_user_gid": 0, "ansible_user_id": "root", "ansible_user_shell": "/bin/bash", "ansible_user_uid": 0, "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "xen", "discovered_interpreter_python": "/usr/bin/python", "gather_subset": ["all"], "module_setup": true}, "changed": false}

TASK [dataverse : ensure EPEL repository for RedHat7/CentOS7] ******************
ok: [localhost] => {"changed": false, "changes": {"installed": [], "updated": []}, "msg": "", "obsoletes": {"grub2": {"dist": "x86_64", "repo": "installed", "version": "1:2.02-0.65.el7.centos.2"}, "grub2-tools": {"dist": "x86_64", "repo": "installed", "version": "1:2.02-0.65.el7.centos.2"}}, "rc": 0, "results": ["All packages providing epel-release are up to date", ""]}

TASK [dataverse : let's use the closest centos mirror] *************************
changed: [localhost] => {"changed": true, "path": "/var/cache/yum/x86_64/7/timedhosts.txt", "state": "absent"}

TASK [dataverse : makecache] ***************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: AttributeError: 'YumModule' object has no attribute 'yum_basecmd'
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "Traceback (most recent call last):\n File "/home/centos/.ansible/tmp/ansible-tmp-1559126451.09-77356773344508/AnsiballZ_yum.py", line 114, in \n _ansiballz_main()\n File "/home/centos/.ansible/tmp/ansible-tmp-1559126451.09-77356773344508/AnsiballZ_yum.py", line 106, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File "/home/centos/.ansible/tmp/ansible-tmp-1559126451.09-77356773344508/AnsiballZ_yum.py", line 49, in invoke_module\n imp.load_module('main', mod, module, MOD_DESC)\n File "/tmp/ansible_yum_payload_ovYKnb/main.py", line 1608, in \n File "/tmp/ansible_yum_payload_ovYKnb/main.py", line 1604, in main\n File "/tmp/ansible_yum_payload_ovYKnb/main.py", line 1498, in run\nAttributeError: 'YumModule' object has no attribute 'yum_basecmd'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

PLAY RECAP *********************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0

Googling for the error I wonder if ansible/ansible#56638 is related.

Glassfish download with Vagrant fails

I am running the ansible playbook with vagrant and get this error:

==> default: TASK: [dataverse | download and unzip glassfish4] *****************************
==> default: failed: [dataverse] => {"failed": true}
==> default: msg: Source 'http://dlc-cdn.sun.com/glassfish/4.1/release/glassfish-4.1.zip' does not exist
==> default:
==> default: FATAL: all hosts have already failed -- aborting
==> default:
==> default: PLAY RECAP ********************************************************************
==> default: to retry, use: --limit @/root/dataverse.pb.retry
==> default:
==> default: dataverse : ok=22 changed=21 unreachable=0 failed=1

It says it cannot find the Glassfish zip file, but when I open the link in a browser the download starts. Any ideas on how to fix this?

Configure CI

We should integrate CI so that changes are automatically tested. At minimum, I would want to verify that a first run completes successfully using Ansible 1.9 against a CentOS 7 target. Afterwards, we could add more Ansible versions (e.g. 2.4, which is what my team uses).

Does anyone have a CI preference? Travis CI is free for open-source projects, easy to set up, and happens to be what ansible-galaxy scaffolds by default.

Note that this will require admin rights on the repository (to configure the webhook) as well as an account on the relevant CI platform. I'm happy to demo these on my fork for now (danschmidt5189/dataverse-ansible), but think it would be better to integrate them into the main repository.
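As a rough sketch of what the Travis option could look like (the file name, Ansible version matrix, and test entry point below are assumptions, not settled decisions):

```yaml
# .travis.yml -- illustrative sketch only; the Ansible versions and the
# tests/test.yml entry point are assumptions about how we'd wire this up.
language: python
python: "2.7"
env:
  - ANSIBLE_VERSION=1.9
  - ANSIBLE_VERSION=2.4
install:
  - pip install "ansible==${ANSIBLE_VERSION}.*"
before_script:
  - echo "localhost ansible_connection=local" > inventory
script:
  - ansible-playbook -i inventory --syntax-check tests/test.yml
```

A syntax check is the cheapest first gate; a full converge against a CentOS 7 container could be layered on later.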

from Vagrant, can't create dataverses or datasets due to absence of Dataverse-specific Solr schema

After running vagrant up and logging in at http://localhost:8080, I'm getting Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://localhost:8983/solr/collection1: undefined field parentId in /usr/local/glassfish4/glassfish/domains/domain1/logs/server.log when I try to create a dataset or dataverse.

A quick look at http://localhost:8983/solr/collection1/schema/fields confirms that the Dataverse-specific list of Solr fields is not present.

Screenshot:

[screenshot, 2019-01-11 10:49 AM]

error

I am getting the following error when I'm trying to install from windows using vagrant.
Does anyone have a suggestion?

==> default: TASK: [dataverse | start glassfish with asadmin so subsequent Ansible-initiated restarts succeed on RedHat/CentOS] ***
==> default: <127.0.0.1> REMOTE_MODULE command nohup /usr/local/glassfish4/bin/asadmin start-domain #USE_SHELL
==> default: changed: [dataverse] => {"changed": true, "cmd": "nohup /usr/local/glassfish4/bin/asadmin start-domain", "delta": "0:00:16.131758", "end": "2016-11-11 20:20:36.225497", "rc": 0, "start": "2016-11-11 20:20:20.093739", "stderr": "nohup: ignoring input", "stdout": "Waiting for domain1 to start .............\nSuccessfully started the domain : domain1\ndomain Location: /usr/local/glassfish4/glassfish/domains/domain1\nLog File: /usr/local/glassfish4/glassfish/domains/domain1/logs/server.log\nAdmin Port: 4848\nCommand start-domain executed successfully.", "warnings": []}
==> default:
==> default: TASK: [dataverse | start glassfish on Debian/Ubuntu] **************************
==> default: skipping: [dataverse]
==> default:
==> default: TASK: [dataverse | download solr. unarchive urls supported in 2.0.] ***********
==> default: <127.0.0.1> REMOTE_MODULE get_url url=https://archive.apache.org/dist/lucene/solr/4.6.0/solr-4.6.0.tgz dest=/tmp mode=0644
==> default: failed: [dataverse] => {"failed": true}
==> default: msg: failed to create temporary content file: The read operation timed out
==> default:
==> default: FATAL: all hosts have already failed -- aborting
==> default:
==> default: PLAY RECAP ********************************************************************
==> default: to retry, use: --limit @/root/dataverse.yaml.retry
==> default:
==> default: dataverse : ok=27 changed=26 unreachable=0 failed=1
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.

support for dataverse-metrics

dataverse-metrics is a new reporting tool that will appear in a future version of http://guides.dataverse.org/en/4.11/admin/reporting-tools.html

@donsizemore and I were talking at http://irclog.iq.harvard.edu/dataverse/2019-03-26#i_89120 about how it might be nice to allow dataverse-metrics to be installed on the same host as Dataverse. One could even configure :MetricsUrl (http://guides.dataverse.org/en/4.11/installation/config.html#metricsurl) to point at the URL where metrics are installed.
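For that last step, a sketch of how the role could wire :MetricsUrl through the settings API once metrics are co-hosted (the metrics URL below is a placeholder for wherever dataverse-metrics ends up being served):

```shell
# Sketch: point :MetricsUrl at a co-hosted dataverse-metrics deployment.
# "http://localhost/metrics" is a placeholder, not an agreed-upon path.
curl -X PUT -d "http://localhost/metrics" \
  http://localhost:8080/api/admin/settings/:MetricsUrl
```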

IQSS/dataverse-metrics#5 hasn't been merged yet but once it is I'm planning on cutting a new release (v0.1.0 doesn't contain the plots, which are coming in that pull request).

403 error with yum install shibboleth

Installation of the shibboleth rpm packages fails:

==> default: msg: http://download.opensuse.org/repositories/security%3A/shibboleth/CentOS_7/x86_64/libcurl-openssl-7.51.0-2.2.x86_64.rpm: [Errno 14] HTTP Error 403 - Forbidden
==> default: Trying other mirror.
==> default: To address this issue please refer to the below knowledge base article
==> default: 
==> default: https://access.redhat.com/solutions/69319
==> default: 
==> default: If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/
==> default: 
==> default: http://download.opensuse.org/repositories/security%3A/shibboleth/CentOS_7/x86_64/liblog4shib1-1.0.9-3.1.x86_64.rpm: [Errno 14] HTTP Error 403 - Forbidden
==> default: Trying other mirror.
==> default: http://download.opensuse.org/repositories/security%3A/shibboleth/CentOS_7/x86_64/libsaml9-2.6.0-1.1.x86_64.rpm: [Errno 14] HTTP Error 403 - Forbidden
==> default: Trying other mirror.
==> default: http://download.opensuse.org/repositories/security%3A/shibboleth/CentOS_7/x86_64/libxml-security-c17-1.7.3-3.1.x86_64.rpm: [Errno 14] HTTP Error 403 - Forbidden
==> default: Trying other mirror.
==> default: http://download.opensuse.org/repositories/security%3A/shibboleth/CentOS_7/x86_64/libxmltooling7-1.6.0-1.1.x86_64.rpm: [Errno 14] HTTP Error 403 - Forbidden
==> default: Trying other mirror.
==> default: http://download.opensuse.org/repositories/security%3A/shibboleth/CentOS_7/x86_64/opensaml-schemas-2.6.0-1.1.x86_64.rpm: [Errno 14] HTTP Error 403 - Forbidden
==> default: Trying other mirror.
==> default: http://download.opensuse.org/repositories/security%3A/shibboleth/CentOS_7/x86_64/shibboleth-2.6.0-2.1.x86_64.rpm: [Errno 14] HTTP Error 403 - Forbidden
==> default: Trying other mirror.
==> default: http://download.opensuse.org/repositories/security%3A/shibboleth/CentOS_7/noarch/shibboleth-embedded-ds-1.2.0-4.2.noarch.rpm: [Errno 14] HTTP Error 403 - Forbidden
==> default: Trying other mirror.
==> default: http://download.opensuse.org/repositories/security%3A/shibboleth/CentOS_7/x86_64/xmltooling-schemas-1.6.0-1.1.x86_64.rpm: [Errno 14] HTTP Error 403 - Forbidden
==> default: Trying other mirror.
==> default: 
==> default: 
==> default: Error downloading packages:
==> default:   libcurl-openssl-7.51.0-2.2.x86_64: [Errno 256] No more mirrors to try.
==> default:   libsaml9-2.6.0-1.1.x86_64: [Errno 256] No more mirrors to try.
==> default:   shibboleth-2.6.0-2.1.x86_64: [Errno 256] No more mirrors to try.
==> default:   opensaml-schemas-2.6.0-1.1.x86_64: [Errno 256] No more mirrors to try.
==> default:   libxmltooling7-1.6.0-1.1.x86_64: [Errno 256] No more mirrors to try.
==> default:   liblog4shib1-1.0.9-3.1.x86_64: [Errno 256] No more mirrors to try.
==> default:   libxml-security-c17-1.7.3-3.1.x86_64: [Errno 256] No more mirrors to try.
==> default:   shibboleth-embedded-ds-1.2.0-4.2.noarch: [Errno 256] No more mirrors to try.
==> default:   xmltooling-schemas-1.6.0-1.1.x86_64: [Errno 256] No more mirrors to try.

Also, when I log in to the VM with ssh [email protected] and run sudo yum install shibboleth, I get the same message. I can ping the internet, so it is not a network issue. Would this be a general openSUSE problem, or something specific to Ansible?

Provide usage example in README

The README should explain how to use this role in context, for example in an ansible-playbook command applied to a single machine.
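Something along these lines would do (a sketch only: the playbook name, host address, and remote user are assumptions, since the repo layout has been shifting):

```shell
# Illustrative usage only -- the playbook name (dataverse.pb), host
# address, and remote user below are assumptions.

# 1. Create a one-host inventory:
echo "dvhost ansible_host=203.0.113.10 ansible_user=centos" > inventory

# 2. Run the playbook against that single machine with privilege escalation:
ansible-playbook -i inventory -b dataverse.pb
```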

dataverse.siteUrl JVM option required for many features

As discussed in IQSS/dataverse#4517 the JVM option dataverse.siteUrl is very important for a properly functioning installation of Dataverse.

Yesterday I spun up an EC2 instance and forgot to set it before asking @jggautier to try export. He got to http://ec2-100-27-31-230.compute-1.amazonaws.com:8080/dataset.xhtml?persistentId=doi:10.5072/FK2/G251YB ok but when he clicked export he got https://dataverse.yourinstitution.edu/api/datasets/export?exporter=oai_datacite&persistentId=doi%3A10.5072/FK2/G251YB which won't work, of course.

The challenge (the fun!) of this is that on EC2 we tell people to start on URLs like http://ec2-100-27-31-230.compute-1.amazonaws.com:8080 and we can't know in advance what the hostname will be. So I can't put http://ec2-100-27-31-230.compute-1.amazonaws.com:8080 into my main.yml file. Instead, we have to update domain.xml afterwards. I've been doing this manually (and coached @mheppler through it, who took notes) but it would be great to automate this in some way.
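One way to script that post-boot step is to ask the EC2 instance metadata service for the public hostname and feed it to asadmin (a sketch; the asadmin path matches the install locations seen elsewhere in these issues, and the option-value colon escaping is what asadmin expects):

```shell
# Sketch: set dataverse.siteUrl once the instance hostname is known.
# 169.254.169.254 is the standard EC2 instance metadata endpoint.
HOSTNAME=$(curl -s http://169.254.169.254/latest/meta-data/public-hostname)

# asadmin requires colons in option values to be backslash-escaped.
/usr/local/glassfish4/bin/asadmin create-jvm-options \
  "-Ddataverse.siteUrl=http\\://${HOSTNAME}\\:8080"
```

Glassfish would still need a restart afterwards for the JVM option to take effect.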

Role does not conform to ansible-galaxy structure

Handled by #13.

We integrate all third-party roles by way of an ansible-galaxy requirements.yml file, e.g.:

- src: git@github.com:IQSS/dataverse-ansible.git
  scm: git
  version: v1.2.3

That's not possible given the current structure of the role:

  • The actual role is nested under ./roles
  • There's no meta/main.yml
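Fixing the second point would mean adding a meta/main.yml at the role root; a minimal sketch (the author, license, and platform values below are placeholders):

```yaml
# meta/main.yml -- minimal ansible-galaxy role metadata.
# Field values below are placeholders, not decided values.
galaxy_info:
  author: IQSS
  description: Installs the Dataverse software stack
  license: MIT
  min_ansible_version: 1.9
  platforms:
    - name: EL
      versions:
        - 7
dependencies: []
```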

Failed to get patched weld jar

I get this error:

==> default: TASK: [dataverse | get patched weld jar] ************************************** 
==> default: <127.0.0.1> REMOTE_MODULE get_url url=http://central.maven.org/maven2/org/jboss/weld/weld-osgi-bundle/2.2.10.SP1/weld-osgi-bundle-2.2.10.SP1-glassfish4.jar dest=/usr/local/glassfish4/glassfish/modules owner=root group=root mode=0644
==> default: failed: [dataverse] => {"dest": "/usr/local/glassfish4/glassfish/modules", "failed": true, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "response": "Request failed: <urlopen error [Errno -2] Name or service not known>", "size": 12288, "state": "directory", "status_code": -1, "uid": 0, "url": "http://central.maven.org/maven2/org/jboss/weld/weld-osgi-bundle/2.2.10.SP1/weld-osgi-bundle-2.2.10.SP1-glassfish4.jar"}
==> default: msg: Request failed
==> default: 
==> default: FATAL: all hosts have already failed -- aborting
==> default: 

The jar file (http://central.maven.org/maven2/org/jboss/weld/weld-osgi-bundle/2.2.10.SP1/weld-osgi-bundle-2.2.10.SP1-glassfish4.jar) is accessible. @donsizemore: Would this be a problem with the download command?

Playbook assumes that sudo is present

While using the playbooks for the first time within LXD CentOS 7 containers, we encountered an error about a missing sudo command:

TASK [dataverse : start glassfish with asadmin so subsequent Ansible-initiated restarts succeed on RedHat/CentOS] *******************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: sudo: command not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 127}
	to retry, use: --limit @/root/dataverse.retry

PLAY RECAP **************************************************************************************************************************************************************************************************
localhost                  : ok=37   changed=32   unreachable=0    failed=1   

Installing the sudo package resolves the issue. Please let me know if you'd like a PR for this.
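If a PR is wanted, one common pattern (sketched here, untested against this playbook) is a raw bootstrap task that runs before anything needing privilege escalation:

```yaml
# Sketch: install sudo on minimal images (e.g. LXD containers) before any
# task that relies on it. "raw" runs over plain SSH, so it works even
# before Python-side module support is in place.
- hosts: all
  gather_facts: false
  become: false
  pre_tasks:
    - name: ensure sudo is present
      raw: yum install -y sudo
      changed_when: false
```

This assumes the connection user can already install packages (root, as is typical in fresh containers).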

Does this role support running Dataverse in a suburi?

For example, we're attempting to run Dataverse in a container, with Apache on a different host reverse-proxying connections to port 8080 exposed on the container host. From what I can recall, the application receiving connections has to be told about the sub-URI in order to generate URLs that reference it properly; otherwise, sub-URI references resolve against the base '/' path instead.

I didn't spot this support in the role defaults/main.yml file, but wanted to double-check and make sure I wasn't overlooking anything.

Thanks.

ansible install question - seems to be failing on last glassfish4 step

I've been able to use the Vagrant install with no trouble, but running the Ansible script I get a fatal error on the last Glassfish step.

fatal: [dataverse]: FAILED! => {"changed": true, "cmd": "unzip -d /tmp /tmp/glassfish-4.1.zip", "delta": "0:00:00.109161", "end": "2017-10-31 23:23:59.418016", "failed": true, "rc": 1, "start": "2017-10-31 23:23:59.308855", "stderr": "replace /tmp/glassfish4/bin/asadmin? [y]es, [n]o, [A]ll, [N]one, [r]ename: NULL\n(EOF or read error, treating as "[N]one" ...)", "stdout": "Archive: /tmp/glassfish-4.1.zip\n creating: /tmp/glassfish4/.org.opensolaris,pkg/download/\n creating: /tmp/glassfish4/.org.opensolaris,pkg/file/", "stdout_lines": ["Archive: /tmp/glassfish-4.1.zip", " creating: /tmp/glassfish4/.org.opensolaris,pkg/download/", " creating: /tmp/glassfish4/.org.opensolaris,pkg/file/"], "warnings": ["Consider using unarchive module rather than running unzip"]}

I can log in and check all the services in the readme document but am unable to log into the dataverse app (as I did with the vagrant install on my laptop).
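The warning at the end of that log already hints at a fix: unlike a shelled-out unzip, the unarchive module overwrites existing files instead of prompting. A sketch of the replacement task, with paths copied from the log (remote_src assumes a reasonably recent Ansible; older releases used "copy: no" for the same effect):

```yaml
# Sketch: replace the interactive "unzip" shell call with unarchive,
# which overwrites existing files rather than waiting on a y/n prompt.
- name: unpack glassfish
  unarchive:
    src: /tmp/glassfish-4.1.zip
    dest: /tmp
    remote_src: yes
```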

Thanks,
Jamie Jamison
UCLA Social Science Data Archive
[email protected]

Vagrantfile does not support interactive development

Also addressed by #10.

The current Vagrantfile does a few things that make it a bit hard to work with:

  • Instead of syncing the repository into the VM, the code git-clones it into place. Thus, you can't develop interactively.
  • It wraps Ansible in a shell provisioner rather than using one of Vagrant's built-in ansible provisioners.
  • Port forwarding doesn't take advantage of Vagrant's auto_correct feature, which handles host port collision.
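Taken together, the three points above might look something like this in a revised Vagrantfile (a sketch only: the box name, sync path, and playbook name are assumptions):

```ruby
# Sketch of a Vagrantfile addressing the points above; box name,
# sync path, and playbook name are assumptions.
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"

  # 1. Sync the working tree instead of git-cloning inside the VM,
  #    so edits on the host are visible to the provisioner.
  config.vm.synced_folder ".", "/dataverse-ansible"

  # 3. Let Vagrant resolve host port collisions automatically.
  config.vm.network "forwarded_port", guest: 8080, host: 8080,
    auto_correct: true

  # 2. Use the built-in ansible_local provisioner instead of wrapping
  #    ansible-playbook in a shell provisioner.
  config.vm.provision "ansible_local" do |ansible|
    ansible.provisioning_path = "/dataverse-ansible"
    ansible.playbook = "dataverse.pb"
  end
end
```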

the Perl script at roles/dataverse/templates/install.de.j2 is unused and should be deleted

I'm looking at the Perl script at https://github.com/IQSS/dataverse-ansible/blob/master/roles/dataverse/templates/install.de.j2 and wondering if it's used anywhere. I can only assume it's there for a reason but I can't tell where it gets called.

If that install.de.j2 file is used, I'd imagine that one would always be looking upstream to see if the Perl script changed when Dataverse releases a new version, https://github.com/IQSS/dataverse/blob/v4.6.2/scripts/installer/install for example.

I'm asking because in IQSS/dataverse#3937 (comments welcome!) we're talking about switching from Perl to something else. I'm wondering if the Dataverse developers should adopt Ansible. I haven't played with this repo much but I use parts of https://github.com/pdurbin/dataverse-osx-playbook for my dev environment (see also "pick your poison" at http://bl.ocks.org/pdurbin/raw/7847a0642f8bd6601a07c3619b4a35f6/#2 ).

PostgreSQL 10

At standup we just talked about IQSS/dataverse#5809 and the note from @donsizemore that he's been playing around with running Dataverse on PostgreSQL 10 and 11. This is exactly the sort of innovation we have come to love from @donsizemore, such as how he forged the way toward Shibboleth 3. He also added support for changing Java versions in this dataverse-ansible repo so that we can experiment with Java 11 some day. In that spirit, it would be great if dataverse-ansible supported switching between PostgreSQL 9 and 10 (and 11, I guess, but it isn't yet supported by Flyway).

I found this in the code:

  postgres:
    reporpm: https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-7-x86_64/pgdg-centos96-9.6-3.noarch.rpm
    version: 9.6

I assume the reporpm needs to change to the "latest" one @donsizemore found in the other issue? And probably other stuff. Drivers maybe. I'm not sure.
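For reference, a PostgreSQL 10 version of that block might look like the following — the exact RPM filename is an assumption and should be verified against the upstream pgdg repository before use:

```yaml
# Illustrative only; confirm the reporpm URL at download.postgresql.org.
postgres:
  reporpm: https://download.postgresql.org/pub/repos/yum/10/redhat/rhel-7-x86_64/pgdg-centos10-10-2.noarch.rpm
  version: 10
```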

In short, we'd like to use the ec2 spin up scripts to experiment with versions of PostgreSQL beyond version 9 some day. Thanks!

Upgrade from Solr 7.3.0 to 7.3.1

Over at IQSS/dataverse#5442 we are upgrading Solr from 7.3.0 to 7.3.1, so we should update the Ansible code as well. From a quick look there seem to be two places to make the change so that the value passed around as "dataverse.solr.version" is the newer version:

  • defaults/main.yml
  • tests/group_vars/vagrant.yml
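If both files carry the literal version string, the bump can be a one-line substitution. The snippet below demonstrates the pattern on a scratch file so it is safe to try anywhere; against the repo you would point the same sed at defaults/main.yml and tests/group_vars/vagrant.yml:

```shell
# Demonstrated on a temp file; substitute the two repo files in practice.
tmp=$(mktemp)
echo 'dataverse.solr.version: 7.3.0' > "$tmp"
sed -i 's/7\.3\.0/7.3.1/' "$tmp"
cat "$tmp"   # prints: dataverse.solr.version: 7.3.1
```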

The dataset could not be created: ArrayIndexOutOfBoundsException edu.ucsb.nceas.ezid.EZIDService.getMetadata(EZIDService.java:268)

Tested on IQSS/dataverse@635b208

I can't create a dataset.

Screenshot:

screen shot 2018-10-01 at 8 59 21 pm

Stack trace:

Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
at edu.ucsb.nceas.ezid.EZIDService.getMetadata(EZIDService.java:268)
at edu.harvard.iq.dataverse.DOIEZIdServiceBean.alreadyExists(DOIEZIdServiceBean.java:56)

We probably need a fix similar to IQSS/dataverse#5124 to put the EZID username and password back in the config. It was removed in 4.9.3. Longer term we need to figure out what to do when EZID goes away for good. See IQSS/dataverse#5024 (comment) for the possibility of programmatic access to test DataCite credentials. For now, the documented process for getting these credentials is manual: http://guides.dataverse.org/en/4.9.3/installation/config.html#persistent-identifiers-and-publishing-datasets
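As a stopgap, the credentials can be put back by hand with asadmin; the values below are the apitest defaults quoted in other issues here, not real credentials, and colons must be escaped in create-jvm-options:

```shell
# Run as the user that owns the Glassfish domain; restart the domain afterward.
asadmin create-jvm-options "-Ddoi.username=apitest"
asadmin create-jvm-options "-Ddoi.password=apitest"
asadmin create-jvm-options "-Ddoi.baseurlstring=https\://ezid.cdlib.org"
```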

screen shot 2018-10-01 at 9 05 36 pm

unable to upload files

I've run the install on AWS and in Vagrant. Now that I have the AWS machine size corrected, the install runs to completion.

One problem I am having both with vagrant and aws is that I'm unable to upload files. Out-of-the-box the directory is /usr/local/dvn/data.

These are the jvm options:
-Ddataverse.files.directory=/usr/local/dvn/data
-Ddataverse.rserve.host=rserve.ec2-54-219-54-246.us-west-1.compute.amazonaws.com
-Ddataverse.rserve.port=6311
-Ddataverse.rserve.user=rserve
-Ddataverse.rserve.password=rserve
-Ddataverse.fqdn=ec2-54-219-54-246.us-west-1.compute.amazonaws.com
-Ddataverse.siteUrl=http://ec2-54-219-54-246.us-west-1.compute.amazonaws.com:8080
-Ddataverse.auth.password-reset-timeout-in-minutes=60
-Djavax.xml.parsers.SAXParserFactory=com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl
-Ddoi.password=apitest
-Ddoi.username=apitest
-Ddoi.baseurlstring=https://ezid.cdlib.org
-Ddataverse.timerServer=true

I have to check UCLA's EZID setup but otherwise I'm not sure what to look for.
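One common cause of failed uploads is that the account Glassfish runs as cannot write to the files directory. The check below shows the pattern against a temp directory so it is safe to paste anywhere; on the server, substitute /usr/local/dvn/data and run it as (or with sudo -u for) the Glassfish service user:

```shell
# Safe demo against a temp dir; on a real install use the actual data dir.
DATA_DIR=$(mktemp -d)
if [ -w "$DATA_DIR" ] && [ -x "$DATA_DIR" ]; then
  echo "writable"
else
  echo "not writable"
fi
```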

Right now the test site is at: http://ec2-54-219-54-246.us-west-1.compute.amazonaws.com:8080/

Sorry to keep coming up with these beginner questions.

question about restricted file permissions

Not sure if this should be posted here or at the dataverse section.

I have a test site at http://ec2-54-219-54-246.us-west-1.compute.amazonaws.com:8080/. I'm trying out restricted file settings. As a non-admin user I request access; the email generated to the admin account includes a link to the Restricted File Permissions page. From the link I get a 404 error (permissions-manage-files.xhtml not found), but in the admin account I can navigate to the permissions page from my notifications.

Obviously I've configured something wrong. I'm going through the documentation but haven't found a solution yet.

support API test suite

is it really as simple as

mvn test -Dtest=DataversesIT,DatasetsIT,SwordIT,AdminIT,BuiltinUsersIT,UsersIT,UtilIT,ConfirmEmailIT,FileMetadataIT,FilesIT,SearchIT,InReviewWorkflowIT,HarvestingServerIT,MoveIT,MakeDataCountApiIT -Ddataverse.test.baseurl=

glassfish.service:12] Unknown lvalue 'DefaultTimeoutStartSec' in section 'Service'

I mentioned earlier on #45 that my friend successfully ran the playbook in a LXD container, but that I repeatedly ran into issues when attempting to do the same thing.

Details:

  • My test system: Ubuntu 16.04 VM with LXD v3.0.3 (local VMware Workstation, modern hardware)
  • His system: Ubuntu 18.04 VM with v3.0.3 (ESXi 6.0, MUCH older hardware)

Guessing that GlassFish was failing to start due to a timeout issue, I modified a local copy of the GlassFish unit file to increase the timeout. When I ran systemctl daemon-reload, systemd complained about the directive I just modified (found via systemctl status glassfish or journalctl -u glassfish):

Mar 15 14:13:00 java-test systemd[1]: [/usr/lib/systemd/system/glassfish.service:12] Unknown lvalue 'DefaultTimeoutStartSec' in section 'Service'

I changed the directive to TimeoutStartSec instead and that appeared to resolve the syntax error and it allowed the playbook to complete on my Ubuntu 16.04 LTS system.
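For reference, the relevant portion of the corrected unit file — DefaultTimeoutStartSec is only valid in systemd's global configuration (system.conf), while per-unit overrides use TimeoutStartSec:

```ini
# /usr/lib/systemd/system/glassfish.service (excerpt; timeout value illustrative)
[Service]
TimeoutStartSec=300
```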

Going to test with original playbook (+ sudo package install) on a local Ubuntu 18.04 box to see if it's something to do with my hardware.

Will submit a PR for this soon.

CC: @auadamw

quickstart for newbies

I'm an Ansible newbie and was having trouble getting the playbook to run with the README as of 1eb8a4e

This version seems to be working for me:

git clone https://github.com/IQSS/dataverse-ansible.git dataverse

export ANSIBLE_ROLES_PATH=.

ansible-playbook --connection=local -v -i dataverse/inventory dataverse/dataverse.pb -e dataverse/defaults/main.yml

Maybe there's more stuff in here than I need. I have spun up a CentOS 7 VM on EC2 and ran the commands above as root.
