
Puppet module for ProxySQL


Table of Contents

  1. Overview
  2. Setup - The basics of getting started with proxysql
  3. Usage - Configuration options and additional functionality
  4. Reference - An under-the-hood peek at what the module is doing and how
  5. Limitations - OS compatibility, etc.
  6. Development - Guide for contributing to the module

Overview

The proxysql module installs, configures and manages the ProxySQL service.

This module installs ProxySQL and manages its configuration. It also extends Puppet with types to manage mysql_users, mysql_servers, mysql_replication_hostgroups, mysql_galera_hostgroups, mysql_query_rules, proxysql_servers, schedulers and global_variables.

Setup

Setup Requirements

The module requires Puppet 5.5.8 or newer. It also depends on several other modules.

For up-to-date details on external dependencies, please see metadata.json or, for released versions, the Puppet Forge page.

puppet/selinux is an optional soft dependency and is deliberately not listed in metadata.json. No operating system requires this module to be present, but if it is, it will be used to install SELinux rules that allow logrotate to work. See the manage_selinux parameter.
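If puppet/selinux is on your modulepath, opting in might look like the following sketch (assuming manage_selinux accepts a Boolean, as the parameter name suggests):

```puppet
# Sketch: explicitly enable the SELinux logrotate rules.
# Requires the optional puppet/selinux module to be installed.
class { 'proxysql':
  manage_selinux => true,
}
```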

Beginning with proxysql

To install the ProxySQL software with all the default options:

include proxysql

By default, packages come from the official upstream package repositories, which the module will configure. On new installations the 2.0.x repository is configured by default. If ProxySQL is already installed, the repository matching the currently installed version is used.

To force the use of 1.4.x packages, use the version parameter. (Note: the example below does not force the installation of 1.4.16; it only ensures the correct repository is configured and that ProxySQL will be configured as if the installed version were 1.4.16.)

class { 'proxysql':
  version => '1.4.16',
}

To use your operating system's own packages, set manage_repo => false.

class { 'proxysql':
  manage_repo => false,
}

Or you can configure your own repository by declaring, for example, your own yumrepo, pulp_rpmbind or rhn_channel resource and setting manage_repo => false.

For example, using katello/pulp:

pulp_rpmbind { 'my_proxysql_repo':}

class { 'proxysql':
  manage_repo => false,
  require     => Pulp_rpmbind['my_proxysql_repo'],
}
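On a RedHat-family system, a plain yumrepo resource works the same way. A sketch, where the baseurl is a placeholder for your own mirror, not an official repository URL:

```puppet
# Hypothetical internal mirror; adjust baseurl and GPG settings for your environment.
yumrepo { 'my_proxysql_repo':
  ensure   => present,
  baseurl  => 'https://mirror.example.com/proxysql/el7/',
  enabled  => 1,
  gpgcheck => 0,
}

class { 'proxysql':
  manage_repo => false,
  require     => Yumrepo['my_proxysql_repo'],
}
```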

Alternatively, you can specify a package_source and associated options. This mimics the old (pre-version-4) behaviour of this module.

class { 'proxysql':
  package_source         => 'https://github.com/sysown/proxysql/releases/download/v1.4.11/proxysql_1.4.11-debian9_amd64.deb',
  package_checksum_value => '65a3c2b98eefa42946ee59eef18ba18534c2a39d',
  package_checksum_type  => 'sha1',
}

You can customize options such as (but not limited to) listen_port, admin_password and monitor_password:

  class { 'proxysql':
    listen_port              => 3306,
    admin_password           => Sensitive('654321'),
    monitor_password         => Sensitive('123456'),
    override_config_settings => $override_settings,
  }

You can configure users, hostgroups, rules and schedulers using class parameters:

  class { 'proxysql':
     mysql_servers    => [
       {
         'db1' => {
           'port' => 3306,
           'hostgroup_id' => 1,
         }
       },
       {
         'db2' => {
           'hostgroup_id' => 2,
         }
       },
     ],
     mysql_users      => [
       {
         'app' => {
           'password' => '*92C74DFBDA5D60ABD41EFD7EB0DAE389F4646ABB',
           'default_hostgroup' => 1,
         }
       },
       {
         'ro'  => {
           'password' => mysql_password('MyReadOnlyUserPassword'),
           'default_hostgroup' => 2,
         }
       },
     ],
     mysql_hostgroups => [
       {
         '1-2' => {
           'writer' => 1,
           'reader' => 2,
         }
       },
     ],
     mysql_group_replication_hostgroups => [
       {
         'hostgroup 2' => {
           'reader'  => 10,
           'writer'  => 5,
           'backup'  => 2,
           'offline' => 11,
         }
       },
     ],
     mysql_galera_hostgroups => [
       {
         'galera hostgroup 1' => {
           'writer'  => 1,
           'backup'  => 2,
           'reader'  => 3,
           'offline' => 4,
         }
       },
     ],
     mysql_rules      => [
       {
         'mysql_query_rule-1' => {
           'rule_id'         => 1,
           'match_pattern'   => 'testtable',
           'replace_pattern' => 'test.newtable',
           'apply'           => 1,
           'active'          => 1,
         }
       },
     ],
     schedulers       => [
       {
         'scheduler-1' => {
           'scheduler_id'  => 1,
           'active'        => 0,
           'filename'      => '/usr/bin/whoami',
         }
       },
     ],
  }

Or by using individual resources:

  class { 'proxysql':
    listen_port    => 3306,
    admin_password => Sensitive('SuperSecretPassword'),
  }

  proxy_mysql_server { '192.168.33.31:3306-31':
    hostname     => '192.168.33.31',
    port         => 3306,
    hostgroup_id => 31,
  }

  proxy_mysql_server { '192.168.33.32:3306-31':
    hostname     => '192.168.33.32',
    port         => 3306,
    hostgroup_id => 31,
  }

  proxy_mysql_server { '192.168.33.33:3306-31':
    hostname     => '192.168.33.33',
    port         => 3306,
    hostgroup_id => 31,
  }

  proxy_mysql_server { '192.168.33.34:3306-31':
    hostname     => '192.168.33.34',
    port         => 3306,
    hostgroup_id => 31,
  }

  proxy_mysql_server { '192.168.33.35:3306-31':
    hostname     => '192.168.33.35',
    port         => 3306,
    hostgroup_id => 31,
  }

  proxy_mysql_replication_hostgroup { '30-31':
    writer_hostgroup => 30,
    reader_hostgroup => 31,
    comment          => 'Replication Group 1',
  }

  proxy_mysql_replication_hostgroup { '20-21':
    writer_hostgroup => 20,
    reader_hostgroup => 21,
    comment          => 'Replication Group 2',
  }

  proxy_mysql_group_replication_hostgroup { '5-2-10-11':
    reader_hostgroup        => 10,
    writer_hostgroup        => 5,
    backup_writer_hostgroup => 2,
    offline_hostgroup       => 11,
  }

  proxy_mysql_galera_hostgroup { '1-2-3-4':
    writer_hostgroup        => 1,
    backup_writer_hostgroup => 2,
    reader_hostgroup        => 3,
    offline_hostgroup       => 4,
  }

  proxy_mysql_user { 'tester':
    password          => 'testerpwd',
    default_hostgroup => 30,
  }

  proxy_mysql_query_rule { 'mysql_query_rule-1':
    rule_id               => 1,
    match_pattern         => '^SELECT',
    apply                 => 1,
    active                => 1,
    destination_hostgroup => 31,
  }

  proxy_scheduler { 'scheduler-1':
    scheduler_id => 1,
    active       => 0,
    filename     => '/usr/bin/whoami',
  }

  proxy_scheduler { 'scheduler-2':
    scheduler_id => 2,
    active       => 0,
    interval_ms  => 1000,
    filename     => '/usr/bin/id',
  }

Usage

Configuration is done by the proxysql class.

Customize config settings

You can override any configuration setting by using the override_config_settings hash. This hash resembles the structure of the proxysql.cnf file:

{
    admin_variables => {
      refresh_interval => 2000,
      ...
    },
    mysql_variables => {
      monitor_writer_is_also_reader => false,
      ...
    },
    mysql_servers => {
      '127.0.0.1:33061-1' => {
         'address'      => '127.0.0.1',
         'port'         => 33061,
         'hostgroup_id' => 1,
       },
      '127.0.0.1:33062-1' => {
         'address'      => '127.0.0.1',
         'port'         => 33062,
         'hostgroup_id' => 1,
       },
      ...
    },
    mysql_users => { ... },
    mysql_query_rules => { ... },
    scheduler => { ... },
    mysql_replication_hostgroups => { ... },
    mysql_galera_hostgroups => { ... },

}
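Tying this together, a minimal sketch of passing such a hash to the class (the variable name and values are illustrative):

```puppet
# Sketch: a small override hash mirroring the proxysql.cnf structure.
$overrides = {
  'admin_variables' => {
    'refresh_interval' => 2000,
  },
  'mysql_variables' => {
    'monitor_writer_is_also_reader' => false,
  },
}

class { 'proxysql':
  override_config_settings => $overrides,
}
```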

Reference

See REFERENCE.md.

proxysql_runtime fact

This module ships a fact that you may find useful in your profiles. It is not used by the module itself. The proxysql_runtime fact queries ProxySQL and returns a hash containing the contents of several of the runtime tables.

The fact will only return data if the mysql2 library is installed in your puppet agent's gem environment.

For systems using official Puppet packages (All-In-One packages), the following code can be used to install this gem and make the fact available.

$dev_packages = ['mariadb-devel','make','gcc']
ensure_packages($dev_packages)

package { 'mysql2 gem':
  ensure   => present,
  name     => 'mysql2',
  provider => 'puppet_gem',
  require  => Package[$dev_packages],
}
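Once the gem is available, the fact can be consumed like any other structured fact. A sketch of reading it from a profile; the exact key layout depends on your ProxySQL version, so guard the lookup:

```puppet
# Hypothetical profile consuming the structured proxysql_runtime fact.
$runtime = $facts['proxysql_runtime']

if $runtime {
  # Keys correspond to ProxySQL runtime tables; names vary by version.
  notify { 'proxysql_runtime_present':
    message => "ProxySQL runtime tables collected: ${runtime.keys.join(', ')}",
  }
}
```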

Limitations

The module requires Puppet 5.5.8 or newer. The proxysql_runtime fact only works when using the default value for mycnf_file_name.

Development

This module was originally developed by Matthias Crauwels for use at Ghent University, Belgium, and is published under the Apache 2.0 license. It is now maintained by Vox Pupuli.

We are open to feature requests, bug reports, contributions, etc.

Contributors

Original author: Matthias Crauwels


puppet-proxysql's Issues

Please, could you add a release/tag for new RPM features?


puppet 5 compatibility (bump dependencies for puppetlabs-apt to something more modern)

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: 5.4.0
  • Ruby: 2.5.1.p57
  • Distribution: Ubuntu 18.04
  • Module version: 1.1.2 (I see master has an updated dependency for puppetlabs-apt in metadata.json) ... does this just need a new release?

How to reproduce (e.g Puppet code you use)

This isn't specifically a problem with this module per se; rather, the module sets a maximum version for puppetlabs-apt that is much lower than most other modern Puppet modules, resulting in unresolvable dependency trees for librarian-puppet.

[Librarian]   Module puppet-proxysql found versions: 1.1.2, 1.1.1, 1.1.0
[Librarian]     Checking puppet-proxysql/1.1.2 <https://forgeapi.puppetlabs.com>
[Librarian]       Resolved puppet-proxysql (= 1.1.2) <https://forgeapi.puppetlabs.com> at puppet-proxysql/1.1.2 <https://forgeapi.puppetlabs.com>
[Librarian]   Resolved puppet-proxysql (= 1.1.2) <https://forgeapi.puppetlabs.com>
[Librarian] Conflict between puppetlabs-apt (< 5.0.0, >= 4.4.0) <https://forgeapi.puppetlabs.com> and puppetlabs-apt (< 3.0.0, >= 2.1.0) <https://forgeapi.puppetlabs.com>

The output above was produced by running:

librarian-puppet install -v

Debian 10 Buster support

At the moment ProxySQL packages for Buster have not been released:

sysown/proxysql#2172

However it would be useful to support this distribution in time, so I'm creating this issue as a placeholder.

Proxysql on Debian should now run under the proxysql user account

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: 4.10
  • Ruby: 2.3.3
  • Distribution: Debian
  • Module version: 1.1.2

How to reproduce (e.g Puppet code you use)

  1. Configure the Percona repository
  2. Apply the proxysql class
  3. Install version 1.4.6 of proxysql

What are you seeing

The configuration files are changed from proxysql user and group ownership to root.

The service fails to start with permissions errors.

What behaviour did you expect instead

Proxysql service starting

Output log


Mar 28 09:37:08 staging-test1 proxysql[28009]: Starting ProxySQL: 2018-03-28 09:37:08 main.cpp:317:ProxySQL_Main_process_global_variables(): [WARNING] Unable to open config file /etc/proxysql.cnf
Mar 28 09:37:08 staging-test1 proxysql[28009]: Could not chdir into datadir: /var/lib/proxysql . Error: Permission denied

Any additional information you'd like to impart

It looks like the user and group that proxysql runs under on Debian has changed since this module was last updated.

Problem with escaped Strings on query rules

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: 4.8
  • Ruby: 2.5.1p273
  • Distribution: Debian
  • Module version: 2.0.0

How to reproduce (e.g Puppet code you use)

  - "mysql_query_rule-2":
      rule_id: 2
      active: 1
      apply: 1
      match_pattern: "^SELECT `account`\\.\\* FROM `account` WHERE \\(\\(\\(`account`.`account_id` = .*\\)\\)\\)"
      cache_ttl: 10000
      destination_hostgroup: 20

What are you seeing

Notice: /Stage[main]/Test::Services::Proxysql/Proxy_mysql_query_rule[mysql_query_rule-2]/match_pattern: match_pattern changed '^SELECT account\\.\\* FROM account WHERE \\(\\(\\(account.account_id = .*\\)\\)\\)' to '^SELECT account\.\* FROM account WHERE \(\(\(account.account_id = .*\)\)\)'

What behaviour did you expect instead

Nothing changed

Output log

Any additional information you'd like to impart

After the initial import of escaped query rules (the YAML needs double backslashes), on every Puppet run I get this notice telling me that the rule changed, because the comparison is incorrect; the rule stored in the proxy is correct with single backslashes.

Stop mysql_servers loading to runtime

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: 7.3.0
  • Ruby: 2.5.1p57
  • Distribution: Ubuntu 18.04.5
  • Module version: 5.1.0

How to reproduce (e.g Puppet code you use)


proxysql::mysql_servers:
  - "db-1.%{lookup('base::domain::internal')}":
      port: 3306
      hostgroup_id: 101
      max_connections: 2000
      weight: 1000
      load_to_runtime: false
  - "db-2.%{lookup('base::domain::internal')}":
      port: 3306
      hostgroup_id: 101
      max_connections: 2000
      weight: 1000
      load_to_runtime: false
  - "db-3.%{lookup('base::domain::internal')}":
      port: 3306
      hostgroup_id: 101
      max_connections: 2000
      weight: 1000
      load_to_runtime: false

What are you seeing

Error: Evaluation Error: Error while evaluating a Resource Statement, undefined method `set_optional' for #<Puppet::Pops::Types::ExtraneousKey:0x00007f82d90fd670> at /var/folders/wd/5ty8823s09s4f6h9r1_tkb240000gp/T/ocd-ipc-20220111-22542-1ok330z/ocd-builddir-20220111-22547-12gqe2m/environments/us_perftest/modules/fd_profiles/manifests/mysql/proxysql.pp:25:5

Line 25 is just calling the module

    class { '::proxysql':
      admin_password         => Sensitive($admin_password),
      errorlog_file          => $errorlog_file,
      errorlog_file_mode     => $errorlog_file_mode,
      errorlog_file_owner    => $errorlog_file_owner,
      errorlog_file_group    => $errorlog_file_group,
      monitor_password       => Sensitive($monitor_password),
      package_source         => $package_source,
      package_checksum_value => $package_checksum_value,
      package_checksum_type  => $package_checksum_type,
    }

What behaviour did you expect instead

load_to_runtime to be set to false for each mysql server in the hash.

Output log

Any additional information you'd like to impart

It looks to be missing in https://github.com/voxpupuli/puppet-proxysql/blob/35865f2cecc747da6d310fe2d8918d1dddcbcb83/types/server.pp but present in the provider's flush method:
def flush
  update_server(@property_flush) if @property_flush
  @property_hash.clear
  load_to_runtime = @resource[:load_to_runtime]
  mysql([defaults_file, '-NBe', 'LOAD MYSQL SERVERS TO RUNTIME'].compact) if load_to_runtime == :true
  save_to_disk = @resource[:save_to_disk]
  mysql([defaults_file, '-NBe', 'SAVE MYSQL SERVERS TO DISK'].compact) if save_to_disk == :true
end

This could be a case of us misunderstanding the README, but the end goal is for us not to automatically load just the mysql_servers into runtime via the hiera hash, because our scheduler handles that for us.

It does work when calling proxy_mysql_server directly, though:

+ Proxy_mysql_server[db-1.blah] =>
   parameters =>
     "hostgroup_id": 100,
     "hostname": "db-1.blah",
     "load_to_runtime": false,
     "port": 3306

ProxySQL errors after 2.0.9 release

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: 6.x
  • Ruby: 2.x
  • Distribution: Ubuntu 18.04
  • Module version: Current master

How to reproduce (e.g Puppet code you use)

Run acceptance tests

What are you seeing

[screenshot: acceptance test failure output]

What behaviour did you expect instead

Output log

$ PUPPET_INSTALL_TYPE=agent BEAKER_PUPPET_COLLECTION=puppet6 BEAKER_debug=true BEAKER_setfile=ubuntu1804-64 BEAKER_HYPERVISOR=docker CHECK=beaker bundle exec rake beaker
TEST_TIERS env variable not defined. Defaulting to run all tests.
/Users/mbaur/.rvm/rubies/ruby-2.6.3/bin/ruby -I/Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/rspec-core-3.9.1/lib:/Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/rspec-support-3.9.2/lib /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/rspec-core-3.9.1/exe/rspec spec/acceptance
/Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-rspec-6.2.4/lib/beaker-rspec/helpers/serverspec.rb:43: warning: already initialized constant Module::VALID_OPTIONS_KEYS
/Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/specinfra-2.82.10/lib/specinfra/configuration.rb:4: warning: previous definition of VALID_OPTIONS_KEYS was here

Hosts file 'ubuntu1804-64' does not exist.
Trying as beaker-hostgenerator input.

Hypervisor for ubuntu1804-64-1 is docker
Beaker::Hypervisor, found some docker boxes to create
get
/v1.16/version
{}

Provisioning docker
provisioning ubuntu1804-64-1
Creating image
Dockerfile is         FROM ubuntu:18.04
        ENV container docker
          RUN apt-get update
          RUN apt-get install -y openssh-server openssh-client curl ntpdate lsb-release
        RUN mkdir -p /var/run/sshd
        RUN echo root:root | chpasswd
        RUN sed -ri 's/^#?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
        RUN sed -ri 's/^#?PasswordAuthentication .*/PasswordAuthentication yes/' /etc/ssh/sshd_config
        RUN sed -ri 's/^#?UseDNS .*/UseDNS no/' /etc/ssh/sshd_config
RUN cp /bin/true /sbin/agetty
RUN apt-get install -y net-tools wget locales apt-transport-https iproute2 gnupg
RUN locale-gen en_US.UTF-8
RUN echo LANG=en_US.UTF-8 > /etc/default/locale
        EXPOSE 22
        CMD ["/sbin/init"]
Docker build buildargs: {}
post
/v1.16/build
{:rm=>true, :buildargs=>"{}"}
Dockerfile0000640000000000000000000000133713624712027013314 0ustar00wheelwheel00000000000000        FROM ubuntu:18.04
        ENV container docker
          RUN apt-get update
          RUN apt-get install -y openssh-server openssh-client curl ntpdate lsb-release
        RUN mkdir -p /var/run/sshd
        RUN echo root:root | chpasswd
        RUN sed -ri 's/^#?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
        RUN sed -ri 's/^#?PasswordAuthentication .*/PasswordAuthentication yes/' /etc/ssh/sshd_config
        RUN sed -ri 's/^#?UseDNS .*/UseDNS no/' /etc/ssh/sshd_config
RUN cp /bin/true /sbin/agetty
RUN apt-get install -y net-tools wget locales apt-transport-https iproute2 gnupg
RUN locale-gen en_US.UTF-8
RUN echo LANG=en_US.UTF-8 > /etc/default/locale
        EXPOSE 22
        CMD ["/sbin/init"]

Creating container from image 7116c6f73d2d
post
/v1.16/containers/create
{}
{"Image":"7116c6f73d2d","Hostname":"ubuntu1804-64-1","HostConfig":{"PortBindings":{"22/tcp":[{"HostPort":"4582","HostIp":"0.0.0.0"}]},"PublishAllPorts":true,"Privileged":true,"RestartPolicy":{"Name":"always"}}}
Starting container 2a895a52f14286b602a8ca833b5bee3c90c9b3fb1d07a1760a472cb2f9fe2f2c
post
/v1.16/containers/2a895a52f14286b602a8ca833b5bee3c90c9b3fb1d07a1760a472cb2f9fe2f2c/start
{}
{}
get
/v1.16/containers/2a895a52f14286b602a8ca833b5bee3c90c9b3fb1d07a1760a472cb2f9fe2f2c/json
{}

Using docker server at 0.0.0.0
get
/v1.16/containers/2a895a52f14286b602a8ca833b5bee3c90c9b3fb1d07a1760a472cb2f9fe2f2c/json
{}

node available as  ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no [email protected] -p 4582
get
/v1.16/containers/2a895a52f14286b602a8ca833b5bee3c90c9b3fb1d07a1760a472cb2f9fe2f2c/json
{}


ubuntu1804-64-1 10:15:03$ cat /etc/resolv.conf
  Attempting ssh connection to 0.0.0.0, user: root, opts: {:password=>"root", :port=>"4582", :forward_agent=>false, :auth_methods=>["password", "publickey", "hostbased", "keyboard-interactive"]}
  Warning: Try 1 -- Host 0.0.0.0 unreachable: Net::SSH::Disconnect - connection closed by remote host
  Warning: Trying again in 3 seconds
  Attempting ssh connection to 0.0.0.0, user: root, opts: {:password=>"root", :port=>"4582", :forward_agent=>false, :auth_methods=>["password", "publickey", "hostbased", "keyboard-interactive"], :logger=>#<Logger:0x00007fd073d36d78 @level=4, @progname=nil, @default_formatter=#<Logger::Formatter:0x00007fd073d36d28 @datetime_format=nil>, @formatter=nil, @logdev=#<Logger::LogDevice:0x00007fd073d36cd8 @shift_period_suffix=nil, @shift_size=nil, @shift_age=nil, @filename=nil, @dev=#<IO:<STDERR>>, @mon_mutex=#<Thread::Mutex:0x00007fd073d36c60>, @mon_mutex_owner_object_id=70266636580460, @mon_owner=nil, @mon_count=0>>, :password_prompt=>#<Net::SSH::Prompt:0x00007fd073d36c38>, :user=>"root"}
  # This file is included on the metadata iso
  nameserver 192.168.65.1

ubuntu1804-64-1 executed in 3.25 seconds

ubuntu1804-64-1 10:15:07$ echo '127.0.0.1	localhost localhost.localdomain
172.17.0.3	ubuntu1804-64-1. ubuntu1804-64-1
' >> /etc/hosts

ubuntu1804-64-1 executed in 0.01 seconds

ubuntu1804-64-1 10:15:07$ dpkg -s curl
  Package: curl
  Status: install ok installed
  Priority: optional
  Section: web
  Installed-Size: 387
  Maintainer: Ubuntu Developers <[email protected]>
  Architecture: amd64
  Multi-Arch: foreign
  Version: 7.58.0-2ubuntu3.8
  Depends: libc6 (>= 2.17), libcurl4 (= 7.58.0-2ubuntu3.8), zlib1g (>= 1:1.1.4)
  Description: command line tool for transferring data with URL syntax
   curl is a command line tool for transferring data with URL syntax, supporting
   DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3,
   POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, TELNET and TFTP.
   .
   curl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form
   based upload, proxies, cookies, user+password authentication (Basic, Digest,
   NTLM, Negotiate, kerberos...), file transfer resume, proxy tunneling and a
   busload of other useful tricks.
  Homepage: http://curl.haxx.se
  Original-Maintainer: Alessandro Ghedini <[email protected]>

ubuntu1804-64-1 executed in 0.04 seconds

ubuntu1804-64-1 10:15:07$ dpkg -s ntpdate
  Package: ntpdate
  Status: install ok installed
  Priority: optional
  Section: net
  Installed-Size: 179
  Maintainer: Ubuntu Developers <[email protected]>
  Architecture: amd64
  Source: ntp
  Version: 1:4.2.8p10+dfsg-5ubuntu7.1
  Depends: netbase, libc6 (>= 2.17), libssl1.1 (>= 1.1.0)
  Pre-Depends: dpkg (>= 1.15.7.2)
  Breaks: dhcp3-client (<< 4.1.0-1)
  Conffiles:
   /etc/default/ntpdate 39415ec9778476795fdbb832adc43b9b
   /etc/dhcp/dhclient-exit-hooks.d/ntpdate cb47fd9d3e21a204fb3ba4ca3fc8ab46
   /etc/logcheck/ignore.d.server/ntpdate 68d4df7cceb0e97bde87126c3a56b219
   /etc/network/if-up.d/ntpdate ed06d5e57fe9ae0b17fe1f9cea1c0d57
  Description: client for setting system time from NTP servers
   NTP, the Network Time Protocol, is used to keep computer clocks
   accurate by synchronizing them over the Internet or a local network,
   or by following an accurate hardware receiver that interprets GPS,
   DCF-77, NIST or similar time signals.
   .
   ntpdate is a simple NTP client that sets a system's clock to match
   the time obtained by communicating with one or more NTP servers.  It
   is not sufficient, however, for maintaining an accurate clock in the
   long run.  ntpdate by itself is useful for occasionally setting the
   time on machines that do not have full-time network access, such as
   laptops.
   .
   If the full NTP daemon from the package "ntp" is installed, then
   ntpdate is not necessary.
  Homepage: http://support.ntp.org/
  Original-Maintainer: Debian NTP Team <[email protected]>

ubuntu1804-64-1 executed in 0.03 seconds
setting local environment on ubuntu1804-64-1

ubuntu1804-64-1 10:15:07$ mktemp -dt .XXXXXX
  /tmp/.QyvIKN

ubuntu1804-64-1 executed in 0.02 seconds

ubuntu1804-64-1 10:15:07$ echo 'PermitUserEnvironment yes' | cat - /etc/ssh/sshd_config > /tmp/.QyvIKN/sshd_config.permit

ubuntu1804-64-1 executed in 0.01 seconds

ubuntu1804-64-1 10:15:07$ mv /tmp/.QyvIKN/sshd_config.permit /etc/ssh/sshd_config

ubuntu1804-64-1 executed in 0.01 seconds

ubuntu1804-64-1 10:15:07$ service ssh restart

ubuntu1804-64-1 executed in 0.07 seconds

ubuntu1804-64-1 10:15:07$ mkdir -p ~/.ssh

ubuntu1804-64-1 executed in 0.02 seconds

ubuntu1804-64-1 10:15:07$ chmod 0600 ~/.ssh

ubuntu1804-64-1 executed in 0.01 seconds

ubuntu1804-64-1 10:15:07$ touch ~/.ssh/environment

ubuntu1804-64-1 executed in 0.02 seconds

ubuntu1804-64-1 10:15:07$ grep ^PATH=.*\$PATH ~/.ssh/environment

ubuntu1804-64-1 executed in 0.03 seconds
Exited: 1

ubuntu1804-64-1 10:15:07$ grep ^PATH= ~/.ssh/environment

ubuntu1804-64-1 executed in 0.02 seconds
Exited: 1

ubuntu1804-64-1 10:15:07$ echo "PATH=$PATH" >> ~/.ssh/environment

ubuntu1804-64-1 executed in 0.02 seconds
will not mirror environment to /etc/profile.d on non-sles platform host
ssh connection to ubuntu1804-64-1 has been terminated

ubuntu1804-64-1 10:15:07$ cat ~/.ssh/environment
  Attempting ssh connection to 0.0.0.0, user: root, opts: {:password=>"root", :port=>"4582", :forward_agent=>false, :auth_methods=>["password", "publickey", "hostbased", "keyboard-interactive"], :logger=>#<Logger:0x00007fd073d36d78 @level=4, @progname=nil, @default_formatter=#<Logger::Formatter:0x00007fd073d36d28 @datetime_format=nil>, @formatter=nil, @logdev=#<Logger::LogDevice:0x00007fd073d36cd8 @shift_period_suffix=nil, @shift_size=nil, @shift_age=nil, @filename=nil, @dev=#<IO:<STDERR>>, @mon_mutex=#<Thread::Mutex:0x00007fd073d36c60>, @mon_mutex_owner_object_id=70266636580460, @mon_owner=nil, @mon_count=0>>, :password_prompt=>#<Net::SSH::Prompt:0x00007fd073d36c38>, :user=>"root"}
  PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games

ubuntu1804-64-1 executed in 0.18 seconds
Disabling updates.puppetlabs.com by modifying hosts file to resolve updates to 127.0.0.1 on ubuntu1804-64-1

ubuntu1804-64-1 10:15:07$ echo '127.0.0.1	updates.puppetlabs.com
' >> /etc/hosts

ubuntu1804-64-1 executed in 0.01 seconds

ubuntu1804-64-1 10:15:07$ wget -O /tmp/puppet.deb http://apt.puppet.com/puppet6-release-bionic.deb
  --2020-02-24 09:15:07--  http://apt.puppet.com/puppet6-release-bionic.deb
  Resolving apt.puppet.com (apt.puppet.com)...   143.204.202.113, 143.204.202.46, 143.204.202.17, ...
  Connecting to apt.puppet.com (apt.puppet.com)|143.204.202.113|:80...   connected.
  HTTP request sent, awaiting response...   200 OK
  Length:   11736   (11K)   [application/x-debian-package]
  Saving to: ‘/tmp/puppet.deb’

       0K     .  .  .  .  .  .  .  .  .  .     .                                                            100% 3.57M=0.003s

  2020-02-24 09:15:07 (3.57 MB/s) - ‘/tmp/puppet.deb’ saved [11736/11736]

ubuntu1804-64-1 executed in 0.20 seconds

ubuntu1804-64-1 10:15:07$ dpkg -i --force-all /tmp/puppet.deb
  Selecting previously unselected package puppet6-release.
  (Reading database ... 11168 files and directories currently installed.)
  Preparing to unpack /tmp/puppet.deb ...
  Unpacking puppet6-release (6.0.0-5bionic) ...
  Setting up puppet6-release (6.0.0-5bionic) ...

  Configuration file '/etc/apt/sources.list.d/puppet6.list', does not exist on system.
  Installing new config file as you requested.

  Configuration file '/etc/apt/trusted.gpg.d/puppet6-keyring.gpg', does not exist on system.
  Installing new config file as you requested.

ubuntu1804-64-1 executed in 0.21 seconds

ubuntu1804-64-1 10:15:08$ apt-get update
  Hit:1 http://archive.ubuntu.com/ubuntu bionic InRelease
  Get:2 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]
  Get:3 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
  Get:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]
  Get:5 http://apt.puppetlabs.com bionic InRelease [85.3 kB]
  Get:6 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [11.4 kB]
  Get:7 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [1,127 kB]
  Get:8 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [44.7 kB]
  Get:9 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [1,350 kB]
  Get:10 http://archive.ubuntu.com/ubuntu bionic-backports/universe amd64 Packages [4,247 B]
  Get:11 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [7,348 B]
  Get:12 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [823 kB]
  Get:13 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [835 kB]
  Get:14 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [31.0 kB]
  Get:15 http://apt.puppetlabs.com bionic/puppet6 amd64 Packages [41.0 kB]
  Get:16 http://apt.puppetlabs.com bionic/puppet6 all Packages [17.0 kB]
  Fetched 4,630 kB in 1s (3,517 kB/s)
  Reading package lists...

ubuntu1804-64-1 executed in 2.54 seconds

ubuntu1804-64-1 10:15:10$ echo "/opt/puppetlabs/bin"
  /opt/puppetlabs/bin

ubuntu1804-64-1 executed in 0.01 seconds

ubuntu1804-64-1 10:15:10$ echo "/opt/puppetlabs/puppet/bin"
  /opt/puppetlabs/puppet/bin

ubuntu1804-64-1 executed in 0.01 seconds

ubuntu1804-64-1 10:15:10$ grep ^PATH=.*\/opt\/puppetlabs\/bin:\/opt\/puppetlabs\/puppet\/bin ~/.ssh/environment

ubuntu1804-64-1 executed in 0.01 seconds
Exited: 1

ubuntu1804-64-1 10:15:10$ grep ^PATH= ~/.ssh/environment
  PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games

ubuntu1804-64-1 executed in 0.01 seconds

ubuntu1804-64-1 10:15:10$ sed -i -e "s/^PATH=/PATH=\/opt\/puppetlabs\/bin:\/opt\/puppetlabs\/puppet\/bin:/" ~/.ssh/environment

ubuntu1804-64-1 executed in 0.01 seconds
will not mirror environment to /etc/profile.d on non-sles platform host

ubuntu1804-64-1 10:15:10$ grep ^PATH=.*PATH ~/.ssh/environment

ubuntu1804-64-1 executed in 0.01 seconds
Exited: 1

ubuntu1804-64-1 10:15:10$ grep ^PATH= ~/.ssh/environment
  PATH=/opt/puppetlabs/bin:/opt/puppetlabs/puppet/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games

ubuntu1804-64-1 executed in 0.01 seconds

ubuntu1804-64-1 10:15:10$ sed -i -e "s/^PATH=/PATH=PATH:/" ~/.ssh/environment

ubuntu1804-64-1 executed in 0.01 seconds
will not mirror environment to /etc/profile.d on non-sles platform host

ubuntu1804-64-1 10:15:10$ apt-get update
  Hit:1 http://security.ubuntu.com/ubuntu bionic-security InRelease
  Hit:2 http://archive.ubuntu.com/ubuntu bionic InRelease
  Hit:3 http://archive.ubuntu.com/ubuntu bionic-updates InRelease
  Hit:4 http://archive.ubuntu.com/ubuntu bionic-backports InRelease
  Hit:5 http://apt.puppetlabs.com bionic InRelease
  Reading package lists...

ubuntu1804-64-1 executed in 1.81 seconds

ubuntu1804-64-1 10:15:12$ apt-get install --force-yes  -y puppet-agent
  Reading package lists...
  Building dependency tree...
  Reading state information...
  The following NEW packages will be installed:
    puppet-agent
  0 upgraded, 1 newly installed, 0 to remove and 17 not upgraded.
  Need to get 22.7 MB of archives.
  After this operation, 133 MB of additional disk space will be used.
  Get:1 http://apt.puppetlabs.com bionic/puppet6 amd64 puppet-agent amd64 6.13.0-1bionic [22.7 MB]
  debconf: delaying package configuration, since apt-utils is not installed
  Fetched 22.7 MB in 1s (22.2 MB/s)
  Selecting previously unselected package puppet-agent.
  (Reading database ... 11173 files and directories currently installed.)
  Preparing to unpack .../puppet-agent_6.13.0-1bionic_amd64.deb ...
  Unpacking puppet-agent (6.13.0-1bionic) ...
  Setting up puppet-agent (6.13.0-1bionic) ...
  Created symlink /etc/systemd/system/multi-user.target.wants/puppet.service → /lib/systemd/system/puppet.service.
  Created symlink /etc/systemd/system/multi-user.target.wants/pxp-agent.service → /lib/systemd/system/pxp-agent.service.
  Removed /etc/systemd/system/multi-user.target.wants/pxp-agent.service.
  Processing triggers for libc-bin (2.27-3ubuntu1) ...
  W: --force-yes is deprecated, use one of the options starting with --allow instead.

ubuntu1804-64-1 executed in 8.03 seconds

ubuntu1804-64-1 10:15:20$ echo "/opt/puppetlabs/bin"
  /opt/puppetlabs/bin

ubuntu1804-64-1 executed in 0.01 seconds

ubuntu1804-64-1 10:15:20$ echo "/opt/puppetlabs/puppet/bin"
  /opt/puppetlabs/puppet/bin

ubuntu1804-64-1 executed in 0.01 seconds

ubuntu1804-64-1 10:15:20$ grep ^PATH=.*\/opt\/puppetlabs\/bin:\/opt\/puppetlabs\/puppet\/bin ~/.ssh/environment
  PATH=PATH:/opt/puppetlabs/bin:/opt/puppetlabs/puppet/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games

ubuntu1804-64-1 executed in 0.01 seconds

ubuntu1804-64-1 10:15:20$ grep ^PATH=.*PATH ~/.ssh/environment
  PATH=PATH:/opt/puppetlabs/bin:/opt/puppetlabs/puppet/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games

ubuntu1804-64-1 executed in 0.01 seconds

ubuntu1804-64-1 10:15:20$ facter --json "os.release.major"
  {
    "os.release.major": "18.04"
  }

ubuntu1804-64-1 executed in 0.26 seconds

ubuntu1804-64-1 10:15:20$ facter --json "proxysql_version"
  {
    "proxysql_version": ""
  }

ubuntu1804-64-1 executed in 0.18 seconds

ubuntu1804-64-1 10:15:21$ echo /etc/puppetlabs/code/modules
  /etc/puppetlabs/code/modules

ubuntu1804-64-1 executed in 0.02 seconds
Using scp to transfer /Users/mbaur/Sources/puppet-proxysql to /etc/puppetlabs/code/modules/proxysql
localhost $ scp /Users/mbaur/Sources/puppet-proxysql ubuntu1804-64-1:/etc/puppetlabs/code/modules {:ignore => [".bundle", ".git", ".idea", ".vagrant", ".vendor", "vendor", "acceptance", "bundle", "spec", "tests", "log", ".svn", "junit", "pkg", "example", "tmp", ".", ".."]}
going to ignore (?-mix:((\/|\A)\.bundle(\/|\z))|((\/|\A)\.git(\/|\z))|((\/|\A)\.idea(\/|\z))|((\/|\A)\.vagrant(\/|\z))|((\/|\A)\.vendor(\/|\z))|((\/|\A)vendor(\/|\z))|((\/|\A)acceptance(\/|\z))|((\/|\A)bundle(\/|\z))|((\/|\A)spec(\/|\z))|((\/|\A)tests(\/|\z))|((\/|\A)log(\/|\z))|((\/|\A)\.svn(\/|\z))|((\/|\A)junit(\/|\z))|((\/|\A)pkg(\/|\z))|((\/|\A)example(\/|\z))|((\/|\A)tmp(\/|\z))|((\/|\A)\.(\/|\z))|((\/|\A)\.\.(\/|\z)))

ubuntu1804-64-1 10:15:22$ rm -rf /etc/puppetlabs/code/modules/proxysql

ubuntu1804-64-1 executed in 0.03 seconds

ubuntu1804-64-1 10:15:22$ mv /etc/puppetlabs/code/modules/puppet-proxysql /etc/puppetlabs/code/modules/proxysql

ubuntu1804-64-1 executed in 0.01 seconds

ubuntu1804-64-1 10:15:26$ puppet module install puppetlabs-mysql -v 10.3.0
  Notice: Preparing to install into /etc/puppetlabs/code/environments/production/modules ...
  Notice: Downloading from https://forgeapi.puppet.com ...
  Notice: Installing -- do not interrupt ...
  /etc/puppetlabs/code/environments/production/modules
  └─┬ puppetlabs-mysql (v10.3.0)
    ├── puppetlabs-stdlib (v6.2.0)
    └── puppetlabs-translate (v2.1.0)

ubuntu1804-64-1 executed in 7.29 seconds

ubuntu1804-64-1 10:15:34$ puppet module install puppetlabs-apt -v 7.3.0
  Notice: Preparing to install into /etc/puppetlabs/code/environments/production/modules ...
  Notice: Downloading from https://forgeapi.puppet.com ...
  Notice: Installing -- do not interrupt ...
  /etc/puppetlabs/code/environments/production/modules
  └─┬ puppetlabs-apt (v7.3.0)
    ├── puppetlabs-stdlib (v6.2.0)
    └── puppetlabs-translate (v2.1.0)

ubuntu1804-64-1 executed in 5.32 seconds

ubuntu1804-64-1 10:15:39$ puppet module install puppetlabs-stdlib -v 6.2.0
  Notice: Preparing to install into /etc/puppetlabs/code/environments/production/modules ...
  Notice: Module puppetlabs-stdlib 6.2.0 is already installed.

ubuntu1804-64-1 executed in 1.02 seconds

ubuntu1804-64-1 10:15:40$ puppet module install puppet-extlib -v 4.2.0
  Notice: Preparing to install into /etc/puppetlabs/code/environments/production/modules ...
  Notice: Downloading from https://forgeapi.puppet.com ...
  Notice: Installing -- do not interrupt ...
  /etc/puppetlabs/code/environments/production/modules
  └─┬ puppet-extlib (v4.2.0)
    └── puppetlabs-stdlib (v6.2.0)

ubuntu1804-64-1 executed in 4.05 seconds

ubuntu1804-64-1 10:15:44$ puppet module install camptocamp-systemd -v 2.8.0
  Notice: Preparing to install into /etc/puppetlabs/code/environments/production/modules ...
  Notice: Downloading from https://forgeapi.puppet.com ...
  Notice: Installing -- do not interrupt ...
  /etc/puppetlabs/code/environments/production/modules
  └─┬ camptocamp-systemd (v2.8.0)
    ├─┬ puppetlabs-inifile (v4.1.0)
    │ └── puppetlabs-translate (v2.1.0)
    └── puppetlabs-stdlib (v6.2.0)

ubuntu1804-64-1 executed in 5.53 seconds

ubuntu1804-64-1 10:15:50$ puppet module install puppet/selinux
  Notice: Preparing to install into /etc/puppetlabs/code/environments/production/modules ...
  Notice: Downloading from https://forgeapi.puppet.com ...
  Notice: Installing -- do not interrupt ...
  /etc/puppetlabs/code/environments/production/modules
  └─┬ puppet-selinux (v3.1.0)
    └── puppetlabs-stdlib (v6.2.0)

ubuntu1804-64-1 executed in 4.60 seconds

proxysql class
  Upgrading to version 2.0

ubuntu1804-64-1 10:15:54$ mktemp -t apply_manifest.pp.XXXXXX
  /tmp/apply_manifest.pp.NIpLPR

ubuntu1804-64-1 executed in 0.01 seconds
localhost $ scp /var/folders/rv/6kdq37x167bblm3k3n2tcz940000gn/T/beaker20200224-67522-1hvkp75 ubuntu1804-64-1:/tmp/apply_manifest.pp.NIpLPR {:ignore => }

ubuntu1804-64-1 10:15:54$ puppet apply --verbose --detailed-exitcodes /tmp/apply_manifest.pp.NIpLPR
  Info: Loading facts
  Info: Loading facts
  Info: Loading facts
  Info: Loading facts
  Info: Loading facts
  Info: Loading facts
  Info: Loading facts
  Notice: Compiled catalog for ubuntu1804-64-1 in environment production in 0.51 seconds
  Info: Applying configuration version '1582535765'
  Notice: /Stage[main]/Proxysql::Prerequisites/Group[proxysql]/ensure: created
  Notice: /Stage[main]/Proxysql::Prerequisites/User[proxysql]/ensure: created
  Notice: /Stage[main]/Mysql::Client::Install/Package[mysql_client]/ensure: created
  Notice: /Stage[main]/Apt/File[preferences]/ensure: created
  Info: /Stage[main]/Apt/File[preferences]: Scheduling refresh of Class[Apt::Update]
  Notice: /Stage[main]/Apt/Apt::Setting[conf-update-stamp]/File[/etc/apt/apt.conf.d/15update-stamp]/ensure: defined content as '{md5}0962d70c4ec78bbfa6f3544ae0c41974'
  Info: /Stage[main]/Apt/Apt::Setting[conf-update-stamp]/File[/etc/apt/apt.conf.d/15update-stamp]: Scheduling refresh of Class[Apt::Update]
  Notice: /Stage[main]/Proxysql::Repo/Apt::Source[proxysql_repo]/Apt::Key[Add key: 1448BF693CA600C799EB935804A562FB79953B49 from Apt::Source proxysql_repo]/Apt_key[Add key: 1448BF693CA600C799EB935804A562FB79953B49 from Apt::Source proxysql_repo]/ensure: created
  Notice: /Stage[main]/Proxysql::Repo/Apt::Source[proxysql_repo]/Apt::Setting[list-proxysql_repo]/File[/etc/apt/sources.list.d/proxysql_repo.list]/ensure: defined content as '{md5}d78ce01e2392a51db2f99f9e0cdcb3e6'
  Info: /Stage[main]/Proxysql::Repo/Apt::Source[proxysql_repo]/Apt::Setting[list-proxysql_repo]/File[/etc/apt/sources.list.d/proxysql_repo.list]: Scheduling refresh of Class[Apt::Update]
  Info: Class[Apt::Update]: Scheduling refresh of Exec[apt_update]
  Notice: /Stage[main]/Apt::Update/Exec[apt_update]: Triggered 'refresh' from 1 event
  Notice: /Stage[main]/Proxysql::Install/Package[proxysql]/ensure: created
  Notice: /Stage[main]/Proxysql::Install/File[proxysql-datadir]/mode: mode changed '0755' to '0700'
  Info: Class[Proxysql::Install]: Scheduling refresh of Class[Proxysql::Service]
  Info: Computing checksum on file /etc/proxysql.cnf
  Info: /Stage[main]/Proxysql::Config/File[proxysql-config-file]: Filebucketed /etc/proxysql.cnf to puppet with sum 7135f93878621028270c117834e13980
  Notice: /Stage[main]/Proxysql::Config/File[proxysql-config-file]/content: content changed '{md5}7135f93878621028270c117834e13980' to '{md5}857962d7be4409c7e655ce9c3f4c0161'
  Notice: /Stage[main]/Proxysql::Config/File[proxysql-config-file]/owner: owner changed 'root' to 'proxysql'
  Info: /Stage[main]/Proxysql::Config/File[proxysql-config-file]: Scheduling refresh of Exec[reload-config]
  Info: /Stage[main]/Proxysql::Config/File[proxysql-config-file]: Scheduling refresh of Exec[reload-config]
  Info: Class[Proxysql::Service]: Scheduling refresh of Systemd::Dropin_file[proxysql ExecStart override]
  Info: Class[Proxysql::Service]: Scheduling refresh of Service[proxysql]
  Info: Class[Proxysql::Service]: Scheduling refresh of Exec[wait_for_admin_socket_to_open]
  Notice: /Stage[main]/Proxysql::Service/Service[proxysql]/ensure: ensure changed 'stopped' to 'running'
  Info: /Stage[main]/Proxysql::Service/Service[proxysql]: Unscheduling refresh on Service[proxysql]
  Notice: /Stage[main]/Proxysql::Service/Exec[wait_for_admin_socket_to_open]/returns: executed successfully
  Notice: /Stage[main]/Proxysql::Service/Exec[wait_for_admin_socket_to_open]: Triggered 'refresh' from 1 event
  Notice: /Stage[main]/Proxysql::Admin_credentials/File[root-mycnf-file]/ensure: defined content as '{md5}a971c3f50a522ff6004b359ab15a8903'
  Notice: /Stage[main]/Proxysql::Reload_config/Exec[reload-config]/returns: ERROR 2013 (HY000) at line 2: Lost connection to MySQL server during query
  Info: Class[Proxysql]: Unscheduling all events on Class[Proxysql]
  Info: Stage[main]: Unscheduling all events on Stage[main]
  Error: /Stage[main]/Proxysql::Reload_config/Exec[reload-config]: Failed to call refresh: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --execute="
            LOAD ADMIN VARIABLES FROM CONFIG;           LOAD ADMIN VARIABLES TO RUNTIME;           SAVE ADMIN VARIABLES TO DISK;           LOAD MYSQL VARIABLES FROM CONFIG;           LOAD MYSQL VARIABLES TO RUNTIME;           SAVE MYSQL VARIABLES TO DISK; "
          ' returned 1 instead of one of [0]
  Error: /Stage[main]/Proxysql::Reload_config/Exec[reload-config]: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --execute="
            LOAD ADMIN VARIABLES FROM CONFIG;           LOAD ADMIN VARIABLES TO RUNTIME;           SAVE ADMIN VARIABLES TO DISK;           LOAD MYSQL VARIABLES FROM CONFIG;           LOAD MYSQL VARIABLES TO RUNTIME;           SAVE MYSQL VARIABLES TO DISK; "
          ' returned 1 instead of one of [0]
  Info: Creating state file /opt/puppetlabs/puppet/cache/state/state.yaml
  Notice: Applied catalog in 25.51 seconds

ubuntu1804-64-1 executed in 37.04 seconds
Exited: 6
    works idempotently with no errors (FAILED - 1)
    Package "proxysql"

ubuntu1804-64-1 10:16:31$ /bin/sh -c ls\ /etc/gentoo-release
  ls: cannot access '/etc/gentoo-release': No such file or directory

ubuntu1804-64-1 executed in 0.02 seconds
Exited: 2

ubuntu1804-64-1 10:16:31$ /bin/sh -c uname\ -sr
  Linux 4.19.76-linuxkit

ubuntu1804-64-1 executed in 0.01 seconds

ubuntu1804-64-1 10:16:31$ /bin/sh -c ls\ /etc/Eos-release
  ls: cannot access '/etc/Eos-release': No such file or directory

ubuntu1804-64-1 executed in 0.01 seconds
Exited: 2

ubuntu1804-64-1 10:16:31$ /bin/sh -c uname\ -sr
  Linux 4.19.76-linuxkit

ubuntu1804-64-1 executed in 0.02 seconds

ubuntu1804-64-1 10:16:31$ /bin/sh -c ls\ /var/run/current-system/sw
  ls: cannot access '/var/run/current-system/sw': No such file or directory

ubuntu1804-64-1 executed in 0.02 seconds
Exited: 2

ubuntu1804-64-1 10:16:31$ /bin/sh -c ls\ /etc/alpine-release
  ls: cannot access '/etc/alpine-release': No such file or directory

ubuntu1804-64-1 executed in 0.02 seconds
Exited: 2

ubuntu1804-64-1 10:16:31$ /bin/sh -c uname\ -s
  Linux

ubuntu1804-64-1 executed in 0.01 seconds

ubuntu1804-64-1 10:16:31$ /bin/sh -c swupd\ info
  /bin/sh: 1: swupd: not found

ubuntu1804-64-1 executed in 0.01 seconds
Exited: 127

ubuntu1804-64-1 10:16:31$ /bin/sh -c ls\ /etc/arch-release
  ls: cannot access '/etc/arch-release': No such file or directory

ubuntu1804-64-1 executed in 0.02 seconds
Exited: 2

ubuntu1804-64-1 10:16:31$ /bin/sh -c uname\ -sr
  Linux 4.19.76-linuxkit

ubuntu1804-64-1 executed in 0.01 seconds

ubuntu1804-64-1 10:16:31$ /bin/sh -c ls\ /etc/coreos/update.conf
  ls: cannot access '/etc/coreos/update.conf': No such file or directory

ubuntu1804-64-1 executed in 0.02 seconds
Exited: 2

ubuntu1804-64-1 10:16:31$ /bin/sh -c cat\ /etc/debian_version
  buster/sid

ubuntu1804-64-1 executed in 0.02 seconds

ubuntu1804-64-1 10:16:31$ /bin/sh -c lsb_release\ -ir
  Distributor ID:	Ubuntu
  Release:	18.04

ubuntu1804-64-1 executed in 0.07 seconds

ubuntu1804-64-1 10:16:32$ /bin/sh -c uname\ -m
  x86_64

ubuntu1804-64-1 executed in 0.02 seconds

ubuntu1804-64-1 10:16:32$ /bin/sh -c dpkg-query\ -f\ \'\$\{Status\}\'\ -W\ proxysql\ \|\ grep\ -E\ \'\^\(install\|hold\)\ ok\ installed\$\'
  install ok installed

ubuntu1804-64-1 executed in 0.03 seconds
      is expected to be installed
    Service "proxysql"

ubuntu1804-64-1 10:16:32$ /bin/sh -c systemctl\ --quiet\ is-enabled\ proxysql

ubuntu1804-64-1 executed in 0.02 seconds
      is expected to be enabled

ubuntu1804-64-1 10:16:32$ /bin/sh -c systemctl\ is-active\ proxysql
  active

ubuntu1804-64-1 executed in 0.02 seconds
      is expected to be running
    Command "proxysql --version"
      exit_status

ubuntu1804-64-1 10:16:32$ /bin/sh -c proxysql\ --version
  ProxySQL version 2.0.9-209-g60923e24, codename Truls

ubuntu1804-64-1 executed in 0.03 seconds
        is expected to eq 0
      stdout
        is expected to match /^ProxySQL version 2\.0\./
  extended testing

ubuntu1804-64-1 10:16:32$ mktemp -t apply_manifest.pp.XXXXXX
  /tmp/apply_manifest.pp.teP6Yu

ubuntu1804-64-1 executed in 0.02 seconds
localhost $ scp /var/folders/rv/6kdq37x167bblm3k3n2tcz940000gn/T/beaker20200224-67522-khofab ubuntu1804-64-1:/tmp/apply_manifest.pp.teP6Yu {:ignore => }

ubuntu1804-64-1 10:16:32$ puppet apply --verbose --detailed-exitcodes /tmp/apply_manifest.pp.teP6Yu
  Info: Loading facts
  Info: Loading facts
  Info: Loading facts
  Info: Loading facts
  Info: Loading facts
  Info: Loading facts
  Info: Loading facts
  Notice: Compiled catalog for ubuntu1804-64-1 in environment production in 0.71 seconds
  Info: Applying configuration version '1582535802'
  Info: Computing checksum on file /etc/proxysql.cnf
  Info: /Stage[main]/Proxysql::Config/File[proxysql-config-file]: Filebucketed /etc/proxysql.cnf to puppet with sum 857962d7be4409c7e655ce9c3f4c0161
  Notice: /Stage[main]/Proxysql::Config/File[proxysql-config-file]/content: content changed '{md5}857962d7be4409c7e655ce9c3f4c0161' to '{md5}2dc9f6f882f06d2340947e637d674db2'
  Info: /Stage[main]/Proxysql::Config/File[proxysql-config-file]: Scheduling refresh of Exec[reload-config]
  Notice: /Stage[main]/Proxysql::Admin_credentials/Exec[proxysql-admin-credentials]/returns: executed successfully
  Info: Computing checksum on file /root/.my.cnf
  Info: /Stage[main]/Proxysql::Admin_credentials/File[root-mycnf-file]: Filebucketed /root/.my.cnf to puppet with sum a971c3f50a522ff6004b359ab15a8903
  Notice: /Stage[main]/Proxysql::Admin_credentials/File[root-mycnf-file]/content: content changed '{md5}a971c3f50a522ff6004b359ab15a8903' to '{md5}8178e7e905c966b685f274e4e387bc8a'
  Notice: /Stage[main]/Proxysql::Reload_config/Exec[reload-config]/returns: ERROR 2013 (HY000) at line 2: Lost connection to MySQL server during query
  Info: Class[Proxysql]: Unscheduling all events on Class[Proxysql]
  Error: /Stage[main]/Proxysql::Reload_config/Exec[reload-config]: Failed to call refresh: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --execute="
            LOAD ADMIN VARIABLES FROM CONFIG;           LOAD ADMIN VARIABLES TO RUNTIME;           SAVE ADMIN VARIABLES TO DISK;           LOAD MYSQL VARIABLES FROM CONFIG;           LOAD MYSQL VARIABLES TO RUNTIME;           SAVE MYSQL VARIABLES TO DISK; "
          ' returned 1 instead of one of [0]
  Error: /Stage[main]/Proxysql::Reload_config/Exec[reload-config]: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --execute="
            LOAD ADMIN VARIABLES FROM CONFIG;           LOAD ADMIN VARIABLES TO RUNTIME;           SAVE ADMIN VARIABLES TO DISK;           LOAD MYSQL VARIABLES FROM CONFIG;           LOAD MYSQL VARIABLES TO RUNTIME;           SAVE MYSQL VARIABLES TO DISK; "
          ' returned 1 instead of one of [0]
  Error: Could not prefetch proxy_mysql_replication_hostgroup provider 'proxysql': Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT `writer_hostgroup`, `reader_hostgroup`, `comment` FROM `mysql_replication_hostgroups`' returned 1: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/proxysql/proxysql_admin.sock' (111)
  Error: Failed to apply catalog: Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT `writer_hostgroup`, `reader_hostgroup`, `comment` FROM `mysql_replication_hostgroups`' returned 1: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/proxysql/proxysql_admin.sock' (111)

ubuntu1804-64-1 executed in 11.42 seconds
Exited: 1
    works idempotently with no errors (FAILED - 2)
    Package "proxysql"

ubuntu1804-64-1 10:16:43$ /bin/sh -c dpkg-query\ -f\ \'\$\{Status\}\'\ -W\ proxysql\ \|\ grep\ -E\ \'\^\(install\|hold\)\ ok\ installed\$\'
  install ok installed

ubuntu1804-64-1 executed in 0.03 seconds
      is expected to be installed
    Service "proxysql"

ubuntu1804-64-1 10:16:43$ /bin/sh -c systemctl\ --quiet\ is-enabled\ proxysql

ubuntu1804-64-1 executed in 0.03 seconds
      is expected to be enabled

ubuntu1804-64-1 10:16:43$ /bin/sh -c systemctl\ is-active\ proxysql
  active

ubuntu1804-64-1 executed in 0.02 seconds
      is expected to be running
    Command "mysql -NB -e "SELECT comment FROM mysql_servers WHERE hostname = '127.0.0.1' AND port = 3307 AND hostgroup_id = 1;""
      exit_status

ubuntu1804-64-1 10:16:43$ /bin/sh -c mysql\ -NB\ -e\ \"SELECT\ comment\ FROM\ mysql_servers\ WHERE\ hostname\ \=\ \'127.0.0.1\'\ AND\ port\ \=\ 3307\ AND\ hostgroup_id\ \=\ 1\;\"
  ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/proxysql/proxysql_admin.sock' (111)

ubuntu1804-64-1 executed in 0.03 seconds
Exited: 1
        is expected to eq 0 (FAILED - 3)
      stdout
        is expected to match "^localhost:3307-1$" (FAILED - 4)
    Command "mysql -NB -e "SELECT comment FROM mysql_servers WHERE hostname = '127.0.0.1' AND port = 3307 AND hostgroup_id = 2;""
      exit_status

ubuntu1804-64-1 10:16:43$ /bin/sh -c mysql\ -NB\ -e\ \"SELECT\ comment\ FROM\ mysql_servers\ WHERE\ hostname\ \=\ \'127.0.0.1\'\ AND\ port\ \=\ 3307\ AND\ hostgroup_id\ \=\ 2\;\"
  ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/proxysql/proxysql_admin.sock' (111)

ubuntu1804-64-1 executed in 0.03 seconds
Exited: 1
        is expected to eq 0 (FAILED - 5)
      stdout
        is expected to match "^localhost:3307-2$" (FAILED - 6)
    Command "mysql -NB -e 'SELECT comment FROM mysql_replication_hostgroups WHERE writer_hostgroup = 10 AND reader_hostgroup = 20;'"
      exit_status

ubuntu1804-64-1 10:16:43$ /bin/sh -c mysql\ -NB\ -e\ \'SELECT\ comment\ FROM\ mysql_replication_hostgroups\ WHERE\ writer_hostgroup\ \=\ 10\ AND\ reader_hostgroup\ \=\ 20\;\'
  ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/proxysql/proxysql_admin.sock' (111)

ubuntu1804-64-1 executed in 0.02 seconds
Exited: 1
        is expected to eq 0 (FAILED - 7)
      stdout
        is expected to match "^Test MySQL Cluster 10-20$" (FAILED - 8)
    Command "mysql -NB -e 'SELECT comment FROM mysql_replication_hostgroups WHERE writer_hostgroup = 10 AND reader_hostgroup = 30;'"
      exit_status

ubuntu1804-64-1 10:16:43$ /bin/sh -c mysql\ -NB\ -e\ \'SELECT\ comment\ FROM\ mysql_replication_hostgroups\ WHERE\ writer_hostgroup\ \=\ 10\ AND\ reader_hostgroup\ \=\ 30\;\'
  ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/proxysql/proxysql_admin.sock' (111)

ubuntu1804-64-1 executed in 0.03 seconds
Exited: 1
        is expected to eq 0 (FAILED - 9)
      stdout
        is expected to eq ""
    Command "mysql -NB -e 'SELECT comment FROM mysql_group_replication_hostgroups WHERE writer_hostgroup = 5 AND backup_writer_hostgroup = 2 AND reader_hostgroup = 10 AND offline_hostgroup = 11;'"
      exit_status

ubuntu1804-64-1 10:16:43$ /bin/sh -c mysql\ -NB\ -e\ \'SELECT\ comment\ FROM\ mysql_group_replication_hostgroups\ WHERE\ writer_hostgroup\ \=\ 5\ AND\ backup_writer_hostgroup\ \=\ 2\ AND\ reader_hostgroup\ \=\ 10\ AND\ offline_hostgroup\ \=\ 11\;\'
  ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/proxysql/proxysql_admin.sock' (111)

ubuntu1804-64-1 executed in 0.03 seconds
Exited: 1
        is expected to eq 0 (FAILED - 10)
      stdout
        is expected to match "^Test MySQL GR Cluster 5-2-10-11$" (FAILED - 11)
    Command "mysql -NB -e 'SELECT comment FROM mysql_group_replication_hostgroups WHERE writer_hostgroup = 6 AND backup_writer_hostgroup = 3 AND reader_hostgroup = 20 AND offline_hostgroup = 30;'"
      exit_status

ubuntu1804-64-1 10:16:43$ /bin/sh -c mysql\ -NB\ -e\ \'SELECT\ comment\ FROM\ mysql_group_replication_hostgroups\ WHERE\ writer_hostgroup\ \=\ 6\ AND\ backup_writer_hostgroup\ \=\ 3\ AND\ reader_hostgroup\ \=\ 20\ AND\ offline_hostgroup\ \=\ 30\;\'
  ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/proxysql/proxysql_admin.sock' (111)

ubuntu1804-64-1 executed in 0.02 seconds
Exited: 1
        is expected to eq 0 (FAILED - 12)
      stdout
        is expected to eq ""
    Command "mysql -NB -e "SELECT username FROM mysql_users WHERE username = 'tester';""
      exit_status

ubuntu1804-64-1 10:16:43$ /bin/sh -c mysql\ -NB\ -e\ \"SELECT\ username\ FROM\ mysql_users\ WHERE\ username\ \=\ \'tester\'\;\"
  ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/proxysql/proxysql_admin.sock' (111)

ubuntu1804-64-1 executed in 0.02 seconds
Exited: 1
        is expected to eq 0 (FAILED - 13)
      stdout
        is expected to eq ""
    Command "mysql -NB -e 'SELECT username FROM mysql_users;'"
      exit_status

ubuntu1804-64-1 10:16:44$ /bin/sh -c mysql\ -NB\ -e\ \'SELECT\ username\ FROM\ mysql_users\;\'
  ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/proxysql/proxysql_admin.sock' (111)

ubuntu1804-64-1 executed in 0.02 seconds
Exited: 1
        is expected to eq 0 (FAILED - 14)
      stdout
        is expected to match "tester1" (FAILED - 15)
    Command "mysql -NB -e "SELECT default_schema FROM mysql_users WHERE username = 'tester1';""
      exit_status

ubuntu1804-64-1 10:16:44$ /bin/sh -c mysql\ -NB\ -e\ \"SELECT\ default_schema\ FROM\ mysql_users\ WHERE\ username\ \=\ \'tester1\'\;\"

ubuntu1804-64-1 executed in 0.02 seconds
        is expected to eq 0
      stdout
        is expected to match "^test1$" (FAILED - 16)
    Command "mysql -NB -e "SELECT default_hostgroup FROM mysql_users WHERE username = 'tester1';""
      exit_status

ubuntu1804-64-1 10:16:44$ /bin/sh -c mysql\ -NB\ -e\ \"SELECT\ default_hostgroup\ FROM\ mysql_users\ WHERE\ username\ \=\ \'tester1\'\;\"

ubuntu1804-64-1 executed in 0.02 seconds
        is expected to eq 0
      stdout
        is expected to match "^1$" (FAILED - 17)
    Command "mysql -NB -e "SELECT default_schema FROM mysql_users WHERE username = 'tester2';""
      exit_status

ubuntu1804-64-1 10:16:44$ /bin/sh -c mysql\ -NB\ -e\ \"SELECT\ default_schema\ FROM\ mysql_users\ WHERE\ username\ \=\ \'tester2\'\;\"

ubuntu1804-64-1 executed in 0.02 seconds
        is expected to eq 0
      stdout
        is expected to match "^test2$" (FAILED - 18)
    Command "mysql -NB -e "SELECT default_hostgroup FROM mysql_users WHERE username = 'tester2';""
      exit_status

ubuntu1804-64-1 10:16:44$ /bin/sh -c mysql\ -NB\ -e\ \"SELECT\ default_hostgroup\ FROM\ mysql_users\ WHERE\ username\ \=\ \'tester2\'\;\"

ubuntu1804-64-1 executed in 0.02 seconds
        is expected to eq 0
      stdout
        is expected to match "^2$" (FAILED - 19)
    Command "mysql -NB -e 'SELECT username FROM runtime_mysql_users;'"
      exit_status

ubuntu1804-64-1 10:16:44$ /bin/sh -c mysql\ -NB\ -e\ \'SELECT\ username\ FROM\ runtime_mysql_users\;\'

ubuntu1804-64-1 executed in 0.02 seconds
        is expected to eq 0
      stdout
        is expected to match "^tester1$" (FAILED - 20)
    Command "mysql -NB -e "SELECT username FROM runtime_mysql_users WHERE username = 'tester2';""
      exit_status

ubuntu1804-64-1 10:16:44$ /bin/sh -c mysql\ -NB\ -e\ \"SELECT\ username\ FROM\ runtime_mysql_users\ WHERE\ username\ \=\ \'tester2\'\;\"

ubuntu1804-64-1 executed in 0.02 seconds
        is expected to eq 0
      stdout
        is expected to eq ""
    Command "mysql -NB -e 'SELECT username FROM mysql_query_rules WHERE rule_id = 1;'"
      exit_status

ubuntu1804-64-1 10:16:44$ /bin/sh -c mysql\ -NB\ -e\ \'SELECT\ username\ FROM\ mysql_query_rules\ WHERE\ rule_id\ \=\ 1\;\'

ubuntu1804-64-1 executed in 0.02 seconds
        is expected to eq 0
      stdout
        is expected to match "^tester1$" (FAILED - 21)
    Command "mysql -NB -e 'SELECT match_pattern FROM mysql_query_rules WHERE rule_id = 1;'"
      exit_status

ubuntu1804-64-1 10:16:44$ /bin/sh -c mysql\ -NB\ -e\ \'SELECT\ match_pattern\ FROM\ mysql_query_rules\ WHERE\ rule_id\ \=\ 1\;\'

ubuntu1804-64-1 executed in 0.02 seconds
        is expected to eq 0
      stdout
        is expected to match "^\\^SELECT$" (FAILED - 22)
  with restart => true

ubuntu1804-64-1 10:16:44$ mktemp -t apply_manifest.pp.XXXXXX
  /tmp/apply_manifest.pp.qymlpb

ubuntu1804-64-1 executed in 0.02 seconds
localhost $ scp /var/folders/rv/6kdq37x167bblm3k3n2tcz940000gn/T/beaker20200224-67522-132lirp ubuntu1804-64-1:/tmp/apply_manifest.pp.qymlpb {:ignore => }

ubuntu1804-64-1 10:16:44$ puppet apply --verbose --detailed-exitcodes /tmp/apply_manifest.pp.qymlpb
  Info: Loading facts
  Info: Loading facts
  Info: Loading facts
  Info: Loading facts
  Info: Loading facts
  Info: Loading facts
  Info: Loading facts
  Notice: Compiled catalog for ubuntu1804-64-1 in environment production in 0.56 seconds
  Info: Applying configuration version '1582535816'
  Info: Computing checksum on file /etc/proxysql.cnf
  Info: /Stage[main]/Proxysql::Config/File[proxysql-config-file]: Filebucketed /etc/proxysql.cnf to puppet with sum 2dc9f6f882f06d2340947e637d674db2
  Notice: /Stage[main]/Proxysql::Config/File[proxysql-config-file]/content: content changed '{md5}2dc9f6f882f06d2340947e637d674db2' to '{md5}1873d6fd93d5d97d99237358b9b9eb66'
  Info: /Stage[main]/Proxysql::Config/File[proxysql-config-file]: Scheduling refresh of Exec[reload-config]
  Info: Class[Proxysql::Config]: Scheduling refresh of Class[Proxysql::Service]
  Info: Class[Proxysql::Service]: Scheduling refresh of Systemd::Dropin_file[proxysql ExecStart override]
  Info: Class[Proxysql::Service]: Scheduling refresh of Service[proxysql]
  Info: Class[Proxysql::Service]: Scheduling refresh of Exec[wait_for_admin_socket_to_open]
  Notice: /Stage[main]/Proxysql::Service/Systemd::Dropin_file[proxysql ExecStart override]/File[/etc/systemd/system/proxysql.service.d]/ensure: created
  Notice: /Stage[main]/Proxysql::Service/Systemd::Dropin_file[proxysql ExecStart override]/File[/etc/systemd/system/proxysql.service.d/puppet.conf]/ensure: defined content as '{md5}6b2d2d6ee3a1fa6cfa4225e9d201734c'
  Info: /Stage[main]/Proxysql::Service/Systemd::Dropin_file[proxysql ExecStart override]/File[/etc/systemd/system/proxysql.service.d/puppet.conf]: Scheduling refresh of Class[Systemd::Systemctl::Daemon_reload]
  Info: Systemd::Dropin_file[proxysql ExecStart override]: Scheduling refresh of Service[proxysql]
  Notice: /Stage[main]/Proxysql::Service/Service[proxysql]: Triggered 'refresh' from 2 events
  Notice: /Stage[main]/Proxysql::Service/Exec[wait_for_admin_socket_to_open]/returns: executed successfully
  Notice: /Stage[main]/Proxysql::Service/Exec[wait_for_admin_socket_to_open]: Triggered 'refresh' from 1 event
  Notice: /Stage[main]/Proxysql::Reload_config/Exec[reload-config]/returns: ERROR 2013 (HY000) at line 2: Lost connection to MySQL server during query
  Info: Class[Proxysql]: Unscheduling all events on Class[Proxysql]
  Info: Class[Systemd::Systemctl::Daemon_reload]: Scheduling refresh of Exec[systemctl-daemon-reload]
  Error: /Stage[main]/Proxysql::Reload_config/Exec[reload-config]: Failed to call refresh: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --execute="
            LOAD ADMIN VARIABLES FROM CONFIG;           LOAD ADMIN VARIABLES TO RUNTIME;           SAVE ADMIN VARIABLES TO DISK;           LOAD MYSQL VARIABLES FROM CONFIG;           LOAD MYSQL VARIABLES TO RUNTIME;           SAVE MYSQL VARIABLES TO DISK; "
          ' returned 1 instead of one of [0]
  Error: /Stage[main]/Proxysql::Reload_config/Exec[reload-config]: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --execute="
            LOAD ADMIN VARIABLES FROM CONFIG;           LOAD ADMIN VARIABLES TO RUNTIME;           SAVE ADMIN VARIABLES TO DISK;           LOAD MYSQL VARIABLES FROM CONFIG;           LOAD MYSQL VARIABLES TO RUNTIME;           SAVE MYSQL VARIABLES TO DISK; "
          ' returned 1 instead of one of [0]
  Notice: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Triggered 'refresh' from 1 event
  Info: Stage[main]: Unscheduling all events on Stage[main]
  Notice: Applied catalog in 11.57 seconds

ubuntu1804-64-1 executed in 24.16 seconds
Exited: 6
    works idempotently with no errors (FAILED - 23)
    Service "proxysql"

ubuntu1804-64-1 10:17:08$ /bin/sh -c systemctl\ is-active\ proxysql
  active

ubuntu1804-64-1 executed in 0.05 seconds
      is expected to be running
ssh connection to ubuntu1804-64-1 has been terminated
Cleaning up docker
get
/v1.16/containers/json
{}

Looking for an existing container with ID 2a895a52f14286b602a8ca833b5bee3c90c9b3fb1d07a1760a472cb2f9fe2f2c
stop container 2a895a52f14286b602a8ca833b5bee3c90c9b3fb1d07a1760a472cb2f9fe2f2c
post
/v1.16/containers/2a895a52f14286b602a8ca833b5bee3c90c9b3fb1d07a1760a472cb2f9fe2f2c/kill
{}

delete container 2a895a52f14286b602a8ca833b5bee3c90c9b3fb1d07a1760a472cb2f9fe2f2c
delete
/v1.16/containers/2a895a52f14286b602a8ca833b5bee3c90c9b3fb1d07a1760a472cb2f9fe2f2c
{}

deleting image 7116c6f73d2d
delete
/v1.16/images/7116c6f73d2d
{}

Warning: deletion of image 7116c6f73d2d caused internal Docker error: conflict: unable to delete 7116c6f73d2d (must be forced) - image is being used by stopped container fec71b6100f9

Failures:

  1) proxysql class Upgrading to version 2.0 works idempotently with no errors
     Failure/Error: apply_manifest(pp, catch_failures: true)
     Beaker::Host::CommandFailure:
       Host 'ubuntu1804-64-1' exited with 6 running:
        puppet apply --verbose --detailed-exitcodes /tmp/apply_manifest.pp.NIpLPR
       Last 10 lines of output were:
       	Info: Class[Proxysql]: Unscheduling all events on Class[Proxysql]
       	Info: Stage[main]: Unscheduling all events on Stage[main]
       	Error: /Stage[main]/Proxysql::Reload_config/Exec[reload-config]: Failed to call refresh: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --execute="
       	          LOAD ADMIN VARIABLES FROM CONFIG;           LOAD ADMIN VARIABLES TO RUNTIME;           SAVE ADMIN VARIABLES TO DISK;           LOAD MYSQL VARIABLES FROM CONFIG;           LOAD MYSQL VARIABLES TO RUNTIME;           SAVE MYSQL VARIABLES TO DISK; "
       	        ' returned 1 instead of one of [0]
       	Error: /Stage[main]/Proxysql::Reload_config/Exec[reload-config]: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --execute="
       	          LOAD ADMIN VARIABLES FROM CONFIG;           LOAD ADMIN VARIABLES TO RUNTIME;           SAVE ADMIN VARIABLES TO DISK;           LOAD MYSQL VARIABLES FROM CONFIG;           LOAD MYSQL VARIABLES TO RUNTIME;           SAVE MYSQL VARIABLES TO DISK; "
       	        ' returned 1 instead of one of [0]
       	Info: Creating state file /opt/puppetlabs/puppet/cache/state/state.yaml
       	Notice: Applied catalog in 25.51 seconds

     # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-4.17.0/lib/beaker/host.rb:394:in `exec'
     # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-4.17.0/lib/beaker/dsl/helpers/host_helpers.rb:83:in `block in on'
     # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-4.17.0/lib/beaker/shared/host_manager.rb:130:in `run_block_on'
     # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-4.17.0/lib/beaker/dsl/patterns.rb:37:in `block_on'
     # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-4.17.0/lib/beaker/dsl/helpers/host_helpers.rb:63:in `on'
     # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-puppet-1.18.14/lib/beaker-puppet/helpers/puppet_helpers.rb:529:in `block in apply_manifest_on'
     # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-4.17.0/lib/beaker/shared/host_manager.rb:130:in `run_block_on'
     # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-4.17.0/lib/beaker/dsl/patterns.rb:37:in `block_on'
     # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-puppet-1.18.14/lib/beaker-puppet/helpers/puppet_helpers.rb:457:in `apply_manifest_on'
     # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-puppet-1.18.14/lib/beaker-puppet/helpers/puppet_helpers.rb:536:in `apply_manifest'
     # ./spec/acceptance/class_spec.rb:45:in `block (3 levels) in <top (required)>'

  2) proxysql class extended testing works idempotently with no errors
     Failure/Error: apply_manifest(pp, catch_failures: true)
     Beaker::Host::CommandFailure:
       Host 'ubuntu1804-64-1' exited with 1 running:
        puppet apply --verbose --detailed-exitcodes /tmp/apply_manifest.pp.teP6Yu
       Last 10 lines of output were:
       	Notice: /Stage[main]/Proxysql::Reload_config/Exec[reload-config]/returns: ERROR 2013 (HY000) at line 2: Lost connection to MySQL server during query
       	Info: Class[Proxysql]: Unscheduling all events on Class[Proxysql]
       	Error: /Stage[main]/Proxysql::Reload_config/Exec[reload-config]: Failed to call refresh: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --execute="
       	          LOAD ADMIN VARIABLES FROM CONFIG;           LOAD ADMIN VARIABLES TO RUNTIME;           SAVE ADMIN VARIABLES TO DISK;           LOAD MYSQL VARIABLES FROM CONFIG;           LOAD MYSQL VARIABLES TO RUNTIME;           SAVE MYSQL VARIABLES TO DISK; "
       	        ' returned 1 instead of one of [0]
       	Error: /Stage[main]/Proxysql::Reload_config/Exec[reload-config]: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --execute="
       	          LOAD ADMIN VARIABLES FROM CONFIG;           LOAD ADMIN VARIABLES TO RUNTIME;           SAVE ADMIN VARIABLES TO DISK;           LOAD MYSQL VARIABLES FROM CONFIG;           LOAD MYSQL VARIABLES TO RUNTIME;           SAVE MYSQL VARIABLES TO DISK; "
       	        ' returned 1 instead of one of [0]
       	Error: Could not prefetch proxy_mysql_replication_hostgroup provider 'proxysql': Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT `writer_hostgroup`, `reader_hostgroup`, `comment` FROM `mysql_replication_hostgroups`' returned 1: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/proxysql/proxysql_admin.sock' (111)
       	Error: Failed to apply catalog: Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT `writer_hostgroup`, `reader_hostgroup`, `comment` FROM `mysql_replication_hostgroups`' returned 1: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/proxysql/proxysql_admin.sock' (111)

     # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-4.17.0/lib/beaker/host.rb:394:in `exec'
     # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-4.17.0/lib/beaker/dsl/helpers/host_helpers.rb:83:in `block in on'
     # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-4.17.0/lib/beaker/shared/host_manager.rb:130:in `run_block_on'
     # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-4.17.0/lib/beaker/dsl/patterns.rb:37:in `block_on'
     # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-4.17.0/lib/beaker/dsl/helpers/host_helpers.rb:63:in `on'
     # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-puppet-1.18.14/lib/beaker-puppet/helpers/puppet_helpers.rb:529:in `block in apply_manifest_on'
     # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-4.17.0/lib/beaker/shared/host_manager.rb:130:in `run_block_on'
     # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-4.17.0/lib/beaker/dsl/patterns.rb:37:in `block_on'
     # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-puppet-1.18.14/lib/beaker-puppet/helpers/puppet_helpers.rb:457:in `apply_manifest_on'
     # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-puppet-1.18.14/lib/beaker-puppet/helpers/puppet_helpers.rb:536:in `apply_manifest'
     # ./spec/acceptance/class_spec.rb:206:in `block (3 levels) in <top (required)>'

  3) proxysql class extended testing Command "mysql -NB -e "SELECT comment FROM mysql_servers WHERE hostname = '127.0.0.1' AND port = 3307 AND hostgroup_id = 1;"" exit_status is expected to eq 0
     Failure/Error: its(:exit_status) { is_expected.to eq 0 }

       expected: 0
            got: 1

       (compared using ==)

     # ./spec/acceptance/class_spec.rb:220:in `block (4 levels) in <top (required)>'

  4) proxysql class extended testing Command "mysql -NB -e "SELECT comment FROM mysql_servers WHERE hostname = '127.0.0.1' AND port = 3307 AND hostgroup_id = 1;"" stdout is expected to match "^localhost:3307-1$"
     Failure/Error: its(:stdout) { is_expected.to match '^localhost:3307-1$' }
       expected "" to match "^localhost:3307-1$"

     # ./spec/acceptance/class_spec.rb:221:in `block (4 levels) in <top (required)>'

  5) proxysql class extended testing Command "mysql -NB -e "SELECT comment FROM mysql_servers WHERE hostname = '127.0.0.1' AND port = 3307 AND hostgroup_id = 2;"" exit_status is expected to eq 0
     Failure/Error: its(:exit_status) { is_expected.to eq 0 }

       expected: 0
            got: 1

       (compared using ==)

     # ./spec/acceptance/class_spec.rb:225:in `block (4 levels) in <top (required)>'

  6) proxysql class extended testing Command "mysql -NB -e "SELECT comment FROM mysql_servers WHERE hostname = '127.0.0.1' AND port = 3307 AND hostgroup_id = 2;"" stdout is expected to match "^localhost:3307-2$"
     Failure/Error: its(:stdout) { is_expected.to match '^localhost:3307-2$' }
       expected "" to match "^localhost:3307-2$"

     # ./spec/acceptance/class_spec.rb:226:in `block (4 levels) in <top (required)>'

  7) proxysql class extended testing Command "mysql -NB -e 'SELECT comment FROM mysql_replication_hostgroups WHERE writer_hostgroup = 10 AND reader_hostgroup = 20;'" exit_status is expected to eq 0
     Failure/Error: its(:exit_status) { is_expected.to eq 0 }

       expected: 0
            got: 1

       (compared using ==)

     # ./spec/acceptance/class_spec.rb:230:in `block (4 levels) in <top (required)>'

  8) proxysql class extended testing Command "mysql -NB -e 'SELECT comment FROM mysql_replication_hostgroups WHERE writer_hostgroup = 10 AND reader_hostgroup = 20;'" stdout is expected to match "^Test MySQL Cluster 10-20$"
     Failure/Error: its(:stdout) { is_expected.to match '^Test MySQL Cluster 10-20$' }
       expected "" to match "^Test MySQL Cluster 10-20$"

     # ./spec/acceptance/class_spec.rb:231:in `block (4 levels) in <top (required)>'

  9) proxysql class extended testing Command "mysql -NB -e 'SELECT comment FROM mysql_replication_hostgroups WHERE writer_hostgroup = 10 AND reader_hostgroup = 30;'" exit_status is expected to eq 0
     Failure/Error: its(:exit_status) { is_expected.to eq 0 }

       expected: 0
            got: 1

       (compared using ==)

     # ./spec/acceptance/class_spec.rb:235:in `block (4 levels) in <top (required)>'

  10) proxysql class extended testing Command "mysql -NB -e 'SELECT comment FROM mysql_group_replication_hostgroups WHERE writer_hostgroup = 5 AND backup_writer_hostgroup = 2 AND reader_hostgroup = 10 AND offline_hostgroup = 11;'" exit_status is expected to eq 0
      Failure/Error: its(:exit_status) { is_expected.to eq 0 }

        expected: 0
             got: 1

        (compared using ==)

      # ./spec/acceptance/class_spec.rb:262:in `block (4 levels) in <top (required)>'

  11) proxysql class extended testing Command "mysql -NB -e 'SELECT comment FROM mysql_group_replication_hostgroups WHERE writer_hostgroup = 5 AND backup_writer_hostgroup = 2 AND reader_hostgroup = 10 AND offline_hostgroup = 11;'" stdout is expected to match "^Test MySQL GR Cluster 5-2-10-11$"
      Failure/Error: its(:stdout) { is_expected.to match '^Test MySQL GR Cluster 5-2-10-11$' }
        expected "" to match "^Test MySQL GR Cluster 5-2-10-11$"

      # ./spec/acceptance/class_spec.rb:263:in `block (4 levels) in <top (required)>'

  12) proxysql class extended testing Command "mysql -NB -e 'SELECT comment FROM mysql_group_replication_hostgroups WHERE writer_hostgroup = 6 AND backup_writer_hostgroup = 3 AND reader_hostgroup = 20 AND offline_hostgroup = 30;'" exit_status is expected to eq 0
      Failure/Error: its(:exit_status) { is_expected.to eq 0 }

        expected: 0
             got: 1

        (compared using ==)

      # ./spec/acceptance/class_spec.rb:267:in `block (4 levels) in <top (required)>'

  13) proxysql class extended testing Command "mysql -NB -e "SELECT username FROM mysql_users WHERE username = 'tester';"" exit_status is expected to eq 0
      Failure/Error: its(:exit_status) { is_expected.to eq 0 }

        expected: 0
             got: 1

        (compared using ==)

      # ./spec/acceptance/class_spec.rb:272:in `block (4 levels) in <top (required)>'

  14) proxysql class extended testing Command "mysql -NB -e 'SELECT username FROM mysql_users;'" exit_status is expected to eq 0
      Failure/Error: its(:exit_status) { is_expected.to eq 0 }

        expected: 0
             got: 1

        (compared using ==)

      # ./spec/acceptance/class_spec.rb:277:in `block (4 levels) in <top (required)>'

  15) proxysql class extended testing Command "mysql -NB -e 'SELECT username FROM mysql_users;'" stdout is expected to match "tester1"
      Failure/Error: is_expected.to match 'tester1'
        expected "" to match "tester1"

      # ./spec/acceptance/class_spec.rb:279:in `block (4 levels) in <top (required)>'

  16) proxysql class extended testing Command "mysql -NB -e "SELECT default_schema FROM mysql_users WHERE username = 'tester1';"" stdout is expected to match "^test1$"
      Failure/Error: its(:stdout) { is_expected.to match '^test1$' }
        expected "" to match "^test1$"

      # ./spec/acceptance/class_spec.rb:286:in `block (4 levels) in <top (required)>'

  17) proxysql class extended testing Command "mysql -NB -e "SELECT default_hostgroup FROM mysql_users WHERE username = 'tester1';"" stdout is expected to match "^1$"
      Failure/Error: its(:stdout) { is_expected.to match '^1$' }
        expected "" to match "^1$"

      # ./spec/acceptance/class_spec.rb:291:in `block (4 levels) in <top (required)>'

  18) proxysql class extended testing Command "mysql -NB -e "SELECT default_schema FROM mysql_users WHERE username = 'tester2';"" stdout is expected to match "^test2$"
      Failure/Error: its(:stdout) { is_expected.to match '^test2$' }
        expected "" to match "^test2$"

      # ./spec/acceptance/class_spec.rb:296:in `block (4 levels) in <top (required)>'

  19) proxysql class extended testing Command "mysql -NB -e "SELECT default_hostgroup FROM mysql_users WHERE username = 'tester2';"" stdout is expected to match "^2$"
      Failure/Error: its(:stdout) { is_expected.to match '^2$' }
        expected "" to match "^2$"

      # ./spec/acceptance/class_spec.rb:301:in `block (4 levels) in <top (required)>'

  20) proxysql class extended testing Command "mysql -NB -e 'SELECT username FROM runtime_mysql_users;'" stdout is expected to match "^tester1$"
      Failure/Error: its(:stdout) { is_expected.to match '^tester1$' }
        expected "" to match "^tester1$"

      # ./spec/acceptance/class_spec.rb:306:in `block (4 levels) in <top (required)>'

  21) proxysql class extended testing Command "mysql -NB -e 'SELECT username FROM mysql_query_rules WHERE rule_id = 1;'" stdout is expected to match "^tester1$"
      Failure/Error: its(:stdout) { is_expected.to match '^tester1$' }
        expected "" to match "^tester1$"

      # ./spec/acceptance/class_spec.rb:316:in `block (4 levels) in <top (required)>'

  22) proxysql class extended testing Command "mysql -NB -e 'SELECT match_pattern FROM mysql_query_rules WHERE rule_id = 1;'" stdout is expected to match "^\\^SELECT$"
      Failure/Error: its(:stdout) { is_expected.to match '^\^SELECT$' }
        expected "" to match "^\\^SELECT$"

      # ./spec/acceptance/class_spec.rb:321:in `block (4 levels) in <top (required)>'

  23) proxysql class with restart => true works idempotently with no errors
      Failure/Error: apply_manifest(pp, catch_failures: true)
      Beaker::Host::CommandFailure:
        Host 'ubuntu1804-64-1' exited with 6 running:
         puppet apply --verbose --detailed-exitcodes /tmp/apply_manifest.pp.qymlpb
        Last 10 lines of output were:
        	Info: Class[Systemd::Systemctl::Daemon_reload]: Scheduling refresh of Exec[systemctl-daemon-reload]
        	Error: /Stage[main]/Proxysql::Reload_config/Exec[reload-config]: Failed to call refresh: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --execute="
        	          LOAD ADMIN VARIABLES FROM CONFIG;           LOAD ADMIN VARIABLES TO RUNTIME;           SAVE ADMIN VARIABLES TO DISK;           LOAD MYSQL VARIABLES FROM CONFIG;           LOAD MYSQL VARIABLES TO RUNTIME;           SAVE MYSQL VARIABLES TO DISK; "
        	        ' returned 1 instead of one of [0]
        	Error: /Stage[main]/Proxysql::Reload_config/Exec[reload-config]: '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --execute="
        	          LOAD ADMIN VARIABLES FROM CONFIG;           LOAD ADMIN VARIABLES TO RUNTIME;           SAVE ADMIN VARIABLES TO DISK;           LOAD MYSQL VARIABLES FROM CONFIG;           LOAD MYSQL VARIABLES TO RUNTIME;           SAVE MYSQL VARIABLES TO DISK; "
        	        ' returned 1 instead of one of [0]
        	Notice: /Stage[main]/Systemd::Systemctl::Daemon_reload/Exec[systemctl-daemon-reload]: Triggered 'refresh' from 1 event
        	Info: Stage[main]: Unscheduling all events on Stage[main]
        	Notice: Applied catalog in 11.57 seconds

      # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-4.17.0/lib/beaker/host.rb:394:in `exec'
      # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-4.17.0/lib/beaker/dsl/helpers/host_helpers.rb:83:in `block in on'
      # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-4.17.0/lib/beaker/shared/host_manager.rb:130:in `run_block_on'
      # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-4.17.0/lib/beaker/dsl/patterns.rb:37:in `block_on'
      # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-4.17.0/lib/beaker/dsl/helpers/host_helpers.rb:63:in `on'
      # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-puppet-1.18.14/lib/beaker-puppet/helpers/puppet_helpers.rb:529:in `block in apply_manifest_on'
      # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-4.17.0/lib/beaker/shared/host_manager.rb:130:in `run_block_on'
      # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-4.17.0/lib/beaker/dsl/patterns.rb:37:in `block_on'
      # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-puppet-1.18.14/lib/beaker-puppet/helpers/puppet_helpers.rb:457:in `apply_manifest_on'
      # /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/beaker-puppet-1.18.14/lib/beaker-puppet/helpers/puppet_helpers.rb:536:in `apply_manifest'
      # ./spec/acceptance/class_spec.rb:341:in `block (3 levels) in <top (required)>'

Finished in 1 minute 51.21 seconds (files took 19.84 seconds to load)
44 examples, 23 failures

Failed examples:

rspec ./spec/acceptance/class_spec.rb:36 # proxysql class Upgrading to version 2.0 works idempotently with no errors
rspec ./spec/acceptance/class_spec.rb:69 # proxysql class extended testing works idempotently with no errors
rspec ./spec/acceptance/class_spec.rb:220 # proxysql class extended testing Command "mysql -NB -e "SELECT comment FROM mysql_servers WHERE hostname = '127.0.0.1' AND port = 3307 AND hostgroup_id = 1;"" exit_status is expected to eq 0
rspec ./spec/acceptance/class_spec.rb:221 # proxysql class extended testing Command "mysql -NB -e "SELECT comment FROM mysql_servers WHERE hostname = '127.0.0.1' AND port = 3307 AND hostgroup_id = 1;"" stdout is expected to match "^localhost:3307-1$"
rspec ./spec/acceptance/class_spec.rb:225 # proxysql class extended testing Command "mysql -NB -e "SELECT comment FROM mysql_servers WHERE hostname = '127.0.0.1' AND port = 3307 AND hostgroup_id = 2;"" exit_status is expected to eq 0
rspec ./spec/acceptance/class_spec.rb:226 # proxysql class extended testing Command "mysql -NB -e "SELECT comment FROM mysql_servers WHERE hostname = '127.0.0.1' AND port = 3307 AND hostgroup_id = 2;"" stdout is expected to match "^localhost:3307-2$"
rspec ./spec/acceptance/class_spec.rb:230 # proxysql class extended testing Command "mysql -NB -e 'SELECT comment FROM mysql_replication_hostgroups WHERE writer_hostgroup = 10 AND reader_hostgroup = 20;'" exit_status is expected to eq 0
rspec ./spec/acceptance/class_spec.rb:231 # proxysql class extended testing Command "mysql -NB -e 'SELECT comment FROM mysql_replication_hostgroups WHERE writer_hostgroup = 10 AND reader_hostgroup = 20;'" stdout is expected to match "^Test MySQL Cluster 10-20$"
rspec ./spec/acceptance/class_spec.rb:235 # proxysql class extended testing Command "mysql -NB -e 'SELECT comment FROM mysql_replication_hostgroups WHERE writer_hostgroup = 10 AND reader_hostgroup = 30;'" exit_status is expected to eq 0
rspec ./spec/acceptance/class_spec.rb:262 # proxysql class extended testing Command "mysql -NB -e 'SELECT comment FROM mysql_group_replication_hostgroups WHERE writer_hostgroup = 5 AND backup_writer_hostgroup = 2 AND reader_hostgroup = 10 AND offline_hostgroup = 11;'" exit_status is expected to eq 0
rspec ./spec/acceptance/class_spec.rb:263 # proxysql class extended testing Command "mysql -NB -e 'SELECT comment FROM mysql_group_replication_hostgroups WHERE writer_hostgroup = 5 AND backup_writer_hostgroup = 2 AND reader_hostgroup = 10 AND offline_hostgroup = 11;'" stdout is expected to match "^Test MySQL GR Cluster 5-2-10-11$"
rspec ./spec/acceptance/class_spec.rb:267 # proxysql class extended testing Command "mysql -NB -e 'SELECT comment FROM mysql_group_replication_hostgroups WHERE writer_hostgroup = 6 AND backup_writer_hostgroup = 3 AND reader_hostgroup = 20 AND offline_hostgroup = 30;'" exit_status is expected to eq 0
rspec ./spec/acceptance/class_spec.rb:272 # proxysql class extended testing Command "mysql -NB -e "SELECT username FROM mysql_users WHERE username = 'tester';"" exit_status is expected to eq 0
rspec ./spec/acceptance/class_spec.rb:277 # proxysql class extended testing Command "mysql -NB -e 'SELECT username FROM mysql_users;'" exit_status is expected to eq 0
rspec ./spec/acceptance/class_spec.rb:278 # proxysql class extended testing Command "mysql -NB -e 'SELECT username FROM mysql_users;'" stdout is expected to match "tester1"
rspec ./spec/acceptance/class_spec.rb:286 # proxysql class extended testing Command "mysql -NB -e "SELECT default_schema FROM mysql_users WHERE username = 'tester1';"" stdout is expected to match "^test1$"
rspec ./spec/acceptance/class_spec.rb:291 # proxysql class extended testing Command "mysql -NB -e "SELECT default_hostgroup FROM mysql_users WHERE username = 'tester1';"" stdout is expected to match "^1$"
rspec ./spec/acceptance/class_spec.rb:296 # proxysql class extended testing Command "mysql -NB -e "SELECT default_schema FROM mysql_users WHERE username = 'tester2';"" stdout is expected to match "^test2$"
rspec ./spec/acceptance/class_spec.rb:301 # proxysql class extended testing Command "mysql -NB -e "SELECT default_hostgroup FROM mysql_users WHERE username = 'tester2';"" stdout is expected to match "^2$"
rspec ./spec/acceptance/class_spec.rb:306 # proxysql class extended testing Command "mysql -NB -e 'SELECT username FROM runtime_mysql_users;'" stdout is expected to match "^tester1$"
rspec ./spec/acceptance/class_spec.rb:316 # proxysql class extended testing Command "mysql -NB -e 'SELECT username FROM mysql_query_rules WHERE rule_id = 1;'" stdout is expected to match "^tester1$"
rspec ./spec/acceptance/class_spec.rb:321 # proxysql class extended testing Command "mysql -NB -e 'SELECT match_pattern FROM mysql_query_rules WHERE rule_id = 1;'" stdout is expected to match "^\\^SELECT$"
rspec ./spec/acceptance/class_spec.rb:325 # proxysql class with restart => true works idempotently with no errors

/Users/mbaur/.rvm/rubies/ruby-2.6.3/bin/ruby -I/Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/rspec-core-3.9.1/lib:/Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/rspec-support-3.9.2/lib /Users/mbaur/.rvm/gems/ruby-2.6.3@puppet-proxysql/gems/rspec-core-3.9.1/exe/rspec spec/acceptance failed

Any additional information you'd like to impart

Mysql query rule changed

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: 4.8.1
  • Ruby: 2.0.0p648
  • Distribution: Centos7
  • Module version: 1.1.2

How to reproduce (e.g Puppet code you use)

We use the galera check script (https://github.com/Tusamarco/proxy_sql_tools/blob/master/galera_check.pl) in combination with ProxySQL and query rules. Puppet runs now emit a notice on every run but don't actually change the data. Only the rule containing the "FOR" keyword, "SELECT .* FOR UPDATE", is affected; the rule without it shows no changes.

Hiera data:

'mysql_query_rule-1':
  'rule_id': '1'
  'active': '1'
  'destination_hostgroup': '1'
  'match_pattern': '^SELECT .*'
  'apply': '0'
'mysql_query_rule-2':
  'rule_id': '2'
  'active': '1'
  'destination_hostgroup': '2'
  'match_pattern': '^SELECT .* FOR UPDATE'
  'apply': '1'

What are you seeing

Info: Applying configuration version '1490088162'
Notice: /Stage[main]/Dbproxy::Queryrule/Proxy_mysql_query_rule[mysql_query_rule-2]/match_pattern: match_pattern changed '^SELECT' to '^SELECT .* FOR UPDATE'
Notice: /Stage[main]/Dbproxy::Queryrule/Proxy_mysql_query_rule[mysql_query_rule-2]/negate_match_pattern: negate_match_pattern changed '.*' to '0'
Notice: Applied catalog in 7.99 seconds

Before and after the Puppet run (select rule_id, match_pattern from mysql_query_rules):

| rule_id | match_pattern         |
|---------|-----------------------|
| 1       | ^SELECT.*             |
| 2       | ^SELECT .* FOR UPDATE |

Module push 'OFFLINE_SOFT' back online

Affected Puppet, Ruby, OS and module versions/distributions

Puppet: 4.8.1
Ruby: 2.0.0p648
Distribution: Centos7
Module version: 1.1.2

How to reproduce (e.g Puppet code you use)

We use the galaera check script (https://github.com/Tusamarco/proxy_sql_tools/blob/master/galera_check.pl) in combination with proxysql. After each run puppet push the "OFFLINE_SOFT" servers back online.

What are you seeing

/Stage[main]/Dbproxy::Mysqlserver/Proxy_mysql_server[*.*.*.64:3306-1]/status: defined 'status' as 'ONLINE'

select hostgroup_id, hostname, status from mysql_servers;

+--------------+----------+--------------+
| hostgroup_id | hostname | status       |
+--------------+----------+--------------+
| 1            | *.*.*.63 | ONLINE       |
| 1            | *.*.*.64 | ONLINE       |
| 1            | *.*.*.65 | ONLINE       |
| 1            | *.*.*.66 | ONLINE       |
| 2            | *.*.*.63 | ONLINE       |
| 2            | *.*.*.64 | OFFLINE_SOFT |
| 2            | *.*.*.65 | OFFLINE_SOFT |
| 2            | *.*.*.66 | OFFLINE_SOFT |
+--------------+----------+--------------+

What behaviour did you expect instead

Keep the status "OFFLINE_SOFT".

ProxySQL 2.4.x not supported

How to reproduce (e.g Puppet code you use)

  $override_settings = {}

  $admin_password   = Sensitive('something_secret')
  $monitor_password = Sensitive('not_quite_as_secret')

  class { '::proxysql':
    listen_port              => 3306,
    admin_password           => $admin_password,
    monitor_password         => $monitor_password,
    override_config_settings => $override_settings,
    version                  => '2.4.5',
  }

What are you seeing

No repository available

What behaviour did you expect instead

Ability to install > 2.3.x versions of ProxySQL

Output log

Evaluation Error: Error while evaluating a Function Call, Unsupported `proxysql::version` 2.4.5 
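A possible stopgap, sketched below, is to pin to a version series the module's validation already accepts (new installs default to the 2.0.x repository). This is a minimal sketch only; '2.0.18' is an assumed example release, not a tested recommendation:

```puppet
# Sketch: pin proxysql::version to a series the module's version check
# accepts until > 2.3.x support lands. '2.0.18' is an assumed example.
class { 'proxysql':
  listen_port      => 3306,
  admin_password   => Sensitive('something_secret'),
  monitor_password => Sensitive('not_quite_as_secret'),
  version          => '2.0.18',
}
```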

Disabling Proxysql Conf Generation makes module fail

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet:5.5.22
  • Ruby:
  • Distribution:
  • Module version: 5.0.0

How to reproduce (e.g Puppet code you use)

proxysql::manage_config_file: false

What are you seeing

After setting this value to false, I expect the module not to manage my conf file.
But the proxysql::reload_config class still references the resource

File['proxysql-config-file']
@
https://github.com/voxpupuli/puppet-proxysql/blob/master/manifests/reload_config.pp#L5-L7

That resource is only created @
https://github.com/voxpupuli/puppet-proxysql/blob/master/manifests/config.pp#L10-L18
which in turn requires proxysql::manage_config_file to be true.

If I interpret this correctly, reload_config subscribes to a resource that only exists when manage_config_file is true, so the catalog can never compile with it set to false.

What behaviour did you expect instead

If I set proxysql::manage_config_file: false,
I think the subscribe should be omitted, or the class should be able to look up the file set via $proxysql::mycnf_file_name instead:
https://github.com/voxpupuli/puppet-proxysql/blob/master/manifests/reload_config.pp#L10

Output log

Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Could not find resource 'File[proxysql-config-file]' in parameter 'subscribe' (file: /etc/puppetlabs/code/environments/tets/modules/proxysql/manifests/reload_config.pp, line: 22) on node proxysql-test01

Any additional information you'd like to impart

I am not sure this is the correct solution, but the subscribe could be applied only when
proxysql::manage_config_file is set to true and avoided completely otherwise.
Alternatively, a lookup of the file set via $proxysql::mycnf_file_name would help,
or the whole reload_config class could be disabled when proxysql::manage_config_file is false (which could be a dirty solution).

Let me know if I am pursuing this incorrectly and need to set some other variables to achieve this.

Thanks a lot in advance.
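A minimal Puppet sketch of the first suggestion, guarding the subscribe on proxysql::manage_config_file; the variable name and exec body here are illustrative, not the module's actual reload_config code:

```puppet
# Hypothetical sketch: only wire up the subscribe when this module
# manages the config file, so File['proxysql-config-file'] is always
# resolvable when referenced.
$config_subscribe = $proxysql::manage_config_file ? {
  true    => File['proxysql-config-file'],
  default => undef,
}

exec { 'reload-config':
  command     => '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf --execute="LOAD ADMIN VARIABLES FROM CONFIG;"',
  refreshonly => true,
  subscribe   => $config_subscribe,
}
```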

Escape issue in .my.cnf when the password contains special characters (my.cnf.erb template)

Affected Puppet, Ruby, OS and module versions/distributions

On puppetm :

  • Puppet: 6.22.1
  • Ruby: ruby 2.5.1
  • Distribution: p57
  • Module version: 6.0.0

How to reproduce (e.g Puppet code you use)

My wrapper uses an ENC to replace $hello_proxysql_admin_password with a string containing the final password.
I'm passing it as Sensitive because that's what proxysql::init expects:

admin_password                     => Sensitive($hello_proxysql_admin_password),

The value is supposed to be rendered into this template and end up on the machine, to be used via the /usr/bin/mysql --defaults-extra-file option.

[mysql]
prompt = 'Admin> '

[client]
<% if scope.lookupvar('::proxysql::admin_listen_socket') != '' -%>
socket = <%= scope.lookupvar('::proxysql::admin_listen_socket') %>
<% else -%>
host = <%= scope.lookupvar('::proxysql::admin_listen_ip') %>
port = <%= scope.lookupvar('::proxysql::admin_listen_port') %>
<% end -%>
user = <%= scope.lookupvar('::proxysql::admin_username') %>
password = <%= scope.lookupvar('::proxysql::admin_password').unwrap %>

But when the password contains [ or ], the rendered file breaks catalog application.

What are you seeing

The catalog run is blocked at the step that sends MySQL requests to create the mysql_servers entries in ProxySQL.

What behaviour did you expect instead

The subsequent MySQL statements being sent to the ProxySQL installation.

Output log

Error: Failed to apply catalog: Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT hostname, port, hostgroup_id FROM mysql_servers' returned 1: ERROR 1045 (28000): ProxySQL Error: Access denied for user 'admin'@'localhost' (using password: YES)

Any additional information you'd like to impart

I'm pretty new to Puppet, so this is the quick fix I did for the moment:

Replaced
password = <%= scope.lookupvar('::proxysql::admin_password').unwrap %>
by
password = '<%= scope.lookupvar('::proxysql::admin_password').unwrap %>'

Regards,

New versions of proxysql

How to evolve this project

Hi,
I would like to get some feedback, or just open a discussion, about how we can keep up with new ProxySQL versions: they will introduce new fields in tables, so some compatibility could be broken.

I also see another issue candidate: we use automatic failover techniques, so making this module compatible with our design means telling the provider not to look at some specific fields, such as hostgroups or status. Otherwise, when a failover happens, Puppet would move the promoted slave back to the original (readers) hostgroup set by Puppet.

I have work done regarding this second point.

Thoughts?

Cheers

Types redesign

Hello @bastelfreak and @mcrauwel!

I am now looking at the types design and I think it wasn't such a good idea to wrap everything in arrays, because right now all configuration files look a bit ugly.
Let me show my thoughts using examples.
Right now, in order to configure multiple users in Hiera, we need to write an array of hashes, like:

"proxysql::mysql_users": [
    { "user1": {
        "password": "*92C74DFBDA5D60ABD41EFD7EB0DAE389F4646ABB",
        "default_hostgroup": 1,
        "transaction_persistent": false }
    },
    { "user2": {
        "password": "*86935F2843252CFAAC4CE713C0D5FF80CF444F3B",
        "default_hostgroup": 2 }
    },
    { "user3": {
        "password": "*86935F2843252CFAAC4CE713C0D5FF80CF444F3B",
        "default_hostgroup": 2 }
    },
    { "user4": {
        "password": "*86935F2843252CFAAC4CE713C0D5FF80CF444F3B",
        "default_hostgroup": 2 }
    }
 ]

but these array brackets just make reading harder when there are many users.
So my proposal is to get rid of the arrays and use plain hashes (more the Ruby way).
The configuration parameter would then look like:

"proxysql::mysql_users": {
    "user1": {
        "password": "*92C74DFBDA5D60ABD41EFD7EB0DAE389F4646ABB",
        "default_hostgroup": 1,
        "transaction_persistent": false 
    },
    "user2": {
        "password": "*86935F2843252CFAAC4CE713C0D5FF80CF444F3B",
        "default_hostgroup": 2 
    },
    "user3": {
        "password": "*86935F2843252CFAAC4CE713C0D5FF80CF444F3B",
        "default_hostgroup": 2 
    },
    "user4": {
        "password": "*86935F2843252CFAAC4CE713C0D5FF80CF444F3B",
        "default_hostgroup": 2 
    }
}

And the same thing for the other types. As you can see, this would be a backward-incompatible change, but it would make the code and configuration cleaner.
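Migrating existing data would be mechanical; a hypothetical one-liner could flatten the current array-of-hashes form into the proposed plain hash:

```ruby
# Current form: an array of single-key hashes, one per user.
legacy = [
  { 'user1' => { 'default_hostgroup' => 1 } },
  { 'user2' => { 'default_hostgroup' => 2 } },
]

# Proposed form: one hash mapping each username to its settings.
proposed = legacy.reduce({}) { |acc, entry| acc.merge(entry) }
# proposed now maps each username directly to its settings hash
```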

Next stable Release ?

Thank you for the module; I use the stable version and it helps a lot.

Do you have a schedule for when the next version of this module will be released?
I just took a look, and the stable version is 4 years old and 239 commits behind master.

Do you need any help to get the next release published?

ProxySQL 2 gets restarted as root

On EL6/7, with versions of ProxySQL that use init.d, and when using restart => true, ProxySQL gets started with /usr/bin/proxysql --reload as root as part of a service resource.

Back when this was written, ProxySQL 2 hadn't been released, and ProxySQL 1 was normally run as root, so this wasn't a problem. ProxySQL 2 should run as the proxysql user, though.

Fortunately, in proxysql 2, the init.d script comes with a reload option, so this should be easy to fix.

(This issue is not applicable to EL7 from proxysql 2.0.7 onwards which switched to using systemd)

New release?

Any chance for a new release? #141 would be nice to have in a release as it stops puppet from creating spurious files in every directory you're running puppet in.

Ubuntu support

Hi,

We'd like to be able to take advantage of this project for our environment. I was hoping someone could point me in the right direction as to what it would take to make this project compatible with Ubuntu Xenial in hopes of submitting a PR.

Thanks!

Resources added to the configuration file are neither being loaded to runtime nor saved to disk

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: 4.10.9 (PC1)
  • Ruby: 2.1.5p273
  • Distribution: Debian Jessie (amd64)
  • Module version: 2.0.0

How to reproduce (e.g Puppet code you use)

Apply the module via profile, with resources defined in hiera.

class profile::proxysql (

  $proxysql_admin_password = lookup('proxysql::admin_password'),
  $proxysql_monitor_password = lookup('proxysql::monitor_password')
)
{

  class { '::proxysql':
    admin_password   => Sensitive($proxysql_admin_password),
    monitor_password => Sensitive($proxysql_monitor_password),
  }
}

Sample of hiera data:

proxysql::admin_password: ReallySecretPassword
proxysql::monitor_password: MumsTheWord
proxysql::override_config_settings:
  mysql_servers:
    sql1:
      address: 10.0.0.1
      port: 3306
      hostgroup: 0
    sql2:
      address: 10.0.0.2
      port: 3306
      hostgroup: 0
proxysql::package_ensure:  1.4.10

I have not specified the proxysql::load_to_runtime parameter in hiera, because it defaults to true.

What are you seeing

The config file /etc/proxysql.cnf is correctly written.

The in-memory database is not updated.

The runtime database is not updated.

In order to effect any change, I have to execute the following in the ProxySQL admin console:

Admin> load mysql servers from config;
Admin> load mysql servers to runtime;

The observation is the same with other resources such as the scheduler, mysql users, mysql query rules and so on.

What behaviour did you expect instead

I expected to see the resources being saved to the configuration file and then immediately loaded to runtime during the puppet run.

Impossible to define a host in 2 different hostgroups

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: 5.0
  • Ruby: 2.1.5p273
  • Distribution: Debian
  • Module version: 2.0.0

How to reproduce (e.g Puppet code you use)

yaml:

test::services::proxysql::servers:
  - "10.60.6.21:3306-5":
      hostname: 10.60.6.21
      port: 3306
      hostgroup_id: 5
  - "10.60.6.21:3306-10":
      hostname: 10.60.6.21
      port: 3306
      hostgroup_id: 10
  - "10.60.6.31:3306-15":
      hostname: 10.60.6.31
      port: 3306
      hostgroup_id: 15
  - "10.60.6.31:3306-20":
      hostname: 10.60.6.31
      port: 3306
      hostgroup_id: 20

then, in the manifest:
create_resources('proxy_mysql_server', $servers)

What are you seeing

Notice: /Stage[main]/Test::Services::Proxysql/Proxy_mysql_server[10.60.6.21:3306-5]/ensure: created
Notice: /Stage[main]/Test::Services::Proxysql/Proxy_mysql_server[10.60.6.21:3306-10]/ensure: created
Notice: /Stage[main]/Test::Services::Proxysql/Proxy_mysql_server[10.60.6.31:3306-15]/ensure: created
Notice: /Stage[main]/Test::Services::Proxysql/Proxy_mysql_server[10.60.6.31:3306-20]/ensure: created
Admin> SELECT * FROM mysql_servers;
+--------------+------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname   | port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 20           | 10.60.6.31 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 10           | 10.60.6.21 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+

What behaviour did you expect instead

Admin> SELECT * FROM mysql_servers;
+--------------+------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname   | port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 15           | 10.60.6.31 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 20           | 10.60.6.31 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 5            | 10.60.6.21 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
| 10           | 10.60.6.21 | 3306 | ONLINE | 1      | 0           | 1000            | 0                   | 0       | 0              |         |
+--------------+------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+

Any additional information you'd like to impart

As the template splits each entry into hostname and port only, it seems impossible to define a host in multiple hostgroups at this time.
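One way out could be to let the resource title carry all three key components, so that (hostname, port, hostgroup_id) identifies a unique row; parse_title and the "host:port-hostgroup" convention below are assumptions for illustration, not the module's current behaviour:

```ruby
# Parse a composite title like "10.60.6.21:3306-5" into its three
# key components: hostname, port and hostgroup_id.
def parse_title(title)
  host, rest = title.split(':', 2)
  port, hostgroup = rest.split('-', 2)
  { hostname: host, port: Integer(port), hostgroup_id: Integer(hostgroup) }
end

parse_title('10.60.6.21:3306-5')
# => { hostname: "10.60.6.21", port: 3306, hostgroup_id: 5 }
```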

Service management broken with ProxySQL 2 when `restart => true`

ProxySQL 2.0.7 has been released. The CentOS 7 packages have switched over to using systemd for service management. This breaks this module as we currently use start => /usr/bin/proxysql --reload in the Service resource. This means that the PidFile that the systemd service assumes is not written and as far as systemd is concerned, the service isn't started.

changing the admin-admin_credentials results in failing puppet runs

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: 4.5.1
  • Distribution: Any
  • Module version: master

How to reproduce (e.g Puppet code you use)

class { '::proxysql':
    listen_port              => 3306,
    admin_username           => 'admin',
    admin_password           => '987654321',
    ...
}

What are you seeing

It changes the config files but not the running config, so the 'admin' interface doesn't work anymore.

What behaviour did you expect instead

The running config should also be updated.

Output log

Notice: /Stage[main]/Proxysql::Config/File[proxysql-config-file]/content: content changed '{md5}1aa0b0ff694a7637ab19d056f2570f3b' to '{md5}3f789d32711f70a0de1bbb1020c43f27'
Notice: /Stage[main]/Proxysql::Config/File[root-mycnf-file]/content: content changed '{md5}a0022951a26962843d3b6145315614b1' to '{md5}373119578d26cb27679f5e28355c5c89'
Error: Failed to apply catalog: Execution of '/usr/bin/mysql --defaults-extra-file=/root/.my.cnf -NBe SELECT username FROM mysql_users' returned 1: ERROR 1045 (28000): ProxySQL Error: Access denied for user 'admin'@'' (using password: YES)

Installation failure on deb 9.5

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: 4.8.2
  • Ruby: 2.3.3p222
  • Distribution: Debian 9.5
  • Module version: 2.0.0

How to reproduce (e.g Puppet code you use)

When using the ::proxysql class in my environment, the installation fails because apt-get update is not run after adding the repository and key.

What are you seeing

After the first apply of the proxysql class, I have to manually run apt-get update and then puppet agent -t to get the manifests working.

What behaviour did you expect instead

Output log

Notice: /Stage[main]/Proxysql::Repo/Apt::Source[proxysql_repo]/Apt::Key[Add key: 1448BF693CA600C799EB935804A562FB79953B49 from Apt::Source proxysql_repo]/Apt_key[Add key: 1448BF693CA600C799EB935804A562FB79953B49 from Apt::Source proxysql_repo]/ensure: created
Notice: /Stage[main]/Proxysql::Repo/Apt::Source[proxysql_repo]/Apt::Setting[list-proxysql_repo]/File[/etc/apt/sources.list.d/proxysql_repo.list]/ensure: defined content as '{md5}301a45a0513dfaa4a245df009c1b565a'
Info: /Stage[main]/Proxysql::Repo/Apt::Source[proxysql_repo]/Apt::Setting[list-proxysql_repo]/File[/etc/apt/sources.list.d/proxysql_repo.list]: Scheduling refresh of Class[Apt::Update]
Error: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install proxysql' returned 100: Reading package lists... Building dependency tree... Reading state information... E: Unable to locate package proxysql
Error: /Stage[main]/Proxysql::Install/Package[proxysql]/ensure: change from purged to present failed: Execution of '/usr/bin/apt-get -q -y -o DPkg::Options::=--force-confold install proxysql' returned 100: Reading package lists... Building dependency tree... Reading state information... E: Unable to locate package proxysql
Notice: /Stage[main]/Proxysql::Install/File[proxysql-datadir]/ensure: created
Info: Class[Proxysql::Install]: Unscheduling all events on Class[Proxysql::Install]

Any additional information you'd like to impart

Maybe it's a problem with the puppet-apt module used. I'm using version 4.5.0.

Amazon Linux support

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: 4.10.12
  • Ruby: 1.8.7
  • Distribution: Amazon Linux AMI release 2018.03
  • Module version: 2.0.0

How to reproduce (e.g Puppet code you use)

class { '::proxysql':
  listen_port => 3306,
}

What are you seeing

Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install proxysql' returned 1: One of the configured repositories failed (ProxySQL YUM repository),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Disable the repository, so yum won't use it by default. Yum will then
        just ignore the repository until you permanently enable it again or use
        --enablerepo for temporary usage:

            yum-config-manager --disable proxysql_repo

     4. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=proxysql_repo.skip_if_unavailable=true

failure: repodata/repomd.xml from proxysql_repo: [Errno 256] No more mirrors to try.
http://repo.proxysql.com/ProxySQL/proxysql-1.4.x/centos/latest/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found

However with this change:

class { '::proxysql':
  listen_port => 3306,
  manage_repo => false,
}

The module works as expected.

I created upstream issue sysown/proxysql#1694 to request releasever = latest; once that is approved, the module defaults can claim Amazon Linux 2018.03 support.

Default weight 0 causes issue

Hi,

In ProxySQL, weight 0 means the server is disabled. So (IMHO) you should use weight 1 as the default, because the current default causes issues for people using this module without prior knowledge of ProxySQL.

Thanks

proxy_mysql_user + ensure absent

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet:
  • Ruby:
  • Distribution:
  • Module version:

How to reproduce (e.g Puppet code you use)

I changed a user definition to 'ensure: absent' and started getting errors.

What are you seeing

Line 16 of proxy_mysql_user.rb emitted an "Error: Failed to apply catalog: undefined method `start_with?' for nil:NilClass" error.

What behaviour did you expect instead

Output log

Any additional information you'd like to impart

I created a bogus password for the deleted user, and then it seemed to work fine.
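The crash pattern is easy to reproduce: with ensure => absent no password is supplied, so an unconditional password.start_with?('*') raises NoMethodError on nil. A nil-safe guard like the following avoids it (encrypted? is an illustrative name, not the module's actual method):

```ruby
# Return true only when a password is present and in MySQL's hashed
# format (leading '*'); nil (e.g. for ensure => absent) yields false.
def encrypted?(password)
  !password.nil? && password.start_with?('*')
end

encrypted?(nil)               # => false, instead of raising NoMethodError
encrypted?('*92C74DFBDA5D60') # => true
```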

Missing configuration directives

  • Puppet: 5
  • Module version: v2

The module doesn't support all the configuration directives the upstream ProxySQL package does; as such, you can't template all the needed functionality with this class.

What are you seeing

[root@testbox temp-puppet-rhel7]# puppet apply proxysql.pp
Error: no parameter named 'writer_is_also_reader' (file: /root/temp-puppet-rhel7/proxysql.pp, line: 120) on Proxy_mysql_replication_hostgroup[30-31] (file: /root/temp-puppet-rhel7/proxysql.pp, line: 120) on node testbox

proxy_mysql_replication_hostgroup { '30-31':
  writer_hostgroup      => 30,
  reader_hostgroup      => 31,
  writer_is_also_reader => 0,
  max_writers           => 1,
  comment               => 'Zabbix cluster: ensure all read and write traffic goes to a single active master',
}

What behaviour did you expect instead

The parameters should map to the configuration field values, so that the hostgroups can be configured in a way that matches the ProxySQL v4 manual.
