puppet-gluster's Introduction

Gluster module for Puppet

Moved to Vox Pupuli

This module has been moved to the Vox Pupuli organization. Please update all bookmarks and Puppetfile references.

Table of Contents

  1. Overview
  2. Custom Facts
  3. Classes
  4. Resources
  5. Examples
  6. Contributing
  7. Copyright

Overview

This module installs and configures servers to participate in a Gluster Trusted Storage Pool, creates or modifies one or more Gluster volumes, and mounts Gluster volumes.

Also provided with this module are a number of custom Gluster-related facts.

Custom Facts

  • gluster_binary: the full pathname of the Gluster CLI command
  • gluster_peer_count: the number of peers to which this server is connected in the pool
  • gluster_peer_list: a comma-separated list of peer hostnames
  • gluster_volume_list: a comma-separated list of volumes being served by this server
  • gluster_volume_#{vol}_bricks: a comma-separated list of bricks in each volume being served by this server
  • gluster_volume_#{vol}_options: a comma-separated list of options enabled on each volume
  • gluster_volume_#{vol}_ports: a comma-separated list of ports used by the bricks in the specified volume

The gluster_binary fact will look for an external fact named gluster_custom_binary. If this fact is defined, gluster_binary will use that value. Otherwise the path will be searched until the gluster command is found.
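
If you need to supply that external fact from Puppet itself, a minimal sketch follows, assuming the standard Puppet agent facts.d directory; the binary path shown is only an illustrative value:

# Hedged sketch: write the gluster_custom_binary external fact as a key=value
# file; the path to the gluster binary below is only an example.
file { '/etc/puppetlabs/facter/facts.d/gluster_custom_binary.txt':
  ensure  => file,
  content => "gluster_custom_binary=/opt/glusterfs/sbin/gluster\n",
}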

Classes

params.pp

This class establishes a number of default values used by the other classes.

You should not need to include or reference this class directly.

repo.pp

This class enables the GlusterFS repository: the upstream Gluster.org repository for APT-based systems, or the CentOS-managed repository for YUM-based Enterprise Linux (EL) systems.

Fedora users can get GlusterFS packages directly from Fedora's repository. Red Hat users with a Gluster Storage subscription should set the appropriate subscription/repo for their OS. Therefore for both Fedora and Red Hat Gluster Storage users, the default upstream community repo should be off:

class { 'gluster':
  repo => false,
}

For Debian, the latest packages of the latest release will be installed by default. Otherwise, specify a version:

class { 'gluster::repo':
  version => '3.5.2',
}

For Ubuntu, the Gluster PPA repositories only contain the latest version of each release. Therefore specify the release to install:

class { 'gluster::repo':
  release => '10',
}

For systems using YUM, the latest package from the 3.8 release branch will be installed. You can specify a specific version and release:

class { 'gluster::repo':
  release => '3.7',
}
class { 'gluster':
  version => '3.7.12',
}

Package priorities are supported, but not activated by default.

Yum: If a priority parameter is passed to this class, the yum-plugin-priorities package will be installed, and a priority will be set on the Gluster repository.

Apt: If a priority parameter is passed to this class, it will be passed as-is to the Apt::Source resource. See the puppetlabs/apt module for details.

This is useful in the event that you want to install a version from the upstream repos that is older than that provided by your distribution's repositories.
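
For example, a hedged sketch of pinning the upstream repo on a YUM-based system (the priority value 50 is only illustrative):

class { 'gluster::repo':
  release  => '3.7',
  priority => 50,
}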

install.pp

This class handles the installation of the Gluster packages (both server and client).

If the upstream Gluster repo is enabled (default), this class will install packages from there. Otherwise it will attempt to use native OS packages.

Currently, RHEL 6, RHEL 7, Debian 8, Raspbian and Ubuntu provide native Gluster packages (at least client).

class { 'gluster::install':
  server  => true,
  client  => true,
  repo    => true,
  version => '3.5.1-1.el6',
}

Note that on Red Hat (and derivative) systems, the version parameter should match the version number used by yum for the RPM package. Beware that Red Hat provides its own build of the GlusterFS FUSE client on RHEL but its minor version doesn't match the upstream. Therefore if you run a community GlusterFS server, you should try to match the version on your RHEL clients by running the community FUSE client. On Debian-based systems, only the first two version places are significant ("x.y"). The latest minor version from that release will be installed unless the "priority" parameter is used.
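
For example, on a Debian-based system the following hedged sketch would track the newest packages from the 3.5 release:

class { 'gluster::install':
  server  => true,
  client  => true,
  repo    => true,
  version => '3.5',
}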

client.pp

This class installs only the Gluster client package(s). If you need to install both the server and client, please use the install.pp (or init.pp) classes.

class { 'gluster::client':
  repo    => true,
  version => '3.5.2',
}

Using gluster::client together with either gluster::install or the main gluster class (init.pp) is not supported.

service.pp

This class manages the glusterd service.

class { 'gluster::service':
  ensure => running,
}

init.pp

This class implements a basic Gluster server.

In the default configuration, this class exports a gluster::peer defined type for itself, and then collects any other exported gluster::peer resources for the same pool for instantiation.

This default configuration makes it easy to implement a Gluster storage pool by simply assigning the gluster class to your Gluster servers: they'll each export their gluster::peer resources, and then instantiate the other servers' gluster::peer resources.

The use of exported resources assumes you're using PuppetDB or some other backing mechanism that supports exported resources.

If a volumes parameter is passed, the defined Gluster volume(s) can be created at the same time as the storage pool. See the volume defined type below for more details.

class { 'gluster':
  repo    => true,
  client  => false,
  pool    => 'production',
  version => '3.5',
  volumes => {
    'data1' => {
      replica => 2,
      bricks  => [
        'srv1.local:/export/brick1/brick',
        'srv2.local:/export/brick1/brick',
        'srv1.local:/export/brick2/brick',
        'srv2.local:/export/brick2/brick',
      ],
      options => [
        'server.allow-insecure: on',
        'nfs.disable: true',
      ],
    },
  },
}

Resources

gluster::peer

This defined type creates a Gluster peering relationship. The name of the resource should be the fully-qualified domain name of a peer to which to connect. An optional pool parameter permits you to configure different storage pools built from different hosts.

With the exported resource implementation in init.pp, the first server to be defined in the pool will find no peers, and therefore not do anything. The second server to execute this module will collect the first server's exported resource and initiate the gluster peer probe, thus creating the storage pool.

Note that the server being probed does not perform any DNS resolution on the server doing the probing. This means that the probed server will report only the IP address of the probing server. The next time the probed server runs this module, it will execute a gluster peer probe against the originally-probing server, thereby updating its list of peers to use the FQDN of the other server.

See this mailing list post for more information.

gluster::peer { 'srv1.domain':
  pool => 'production',
}

gluster::volume

This defined type creates a Gluster volume. You can specify a stripe count, a replica count, the transport type, a list of bricks to use, and an optional set of volume options to enforce.

Note that creating brick filesystems is up to you. May I recommend the Puppet Labs LVM module?

If using arbiter volumes, you must use a replica count that works with them; at the time of writing, Gluster 3.12 only supports a configuration of replica => 3 with arbiter => 1.
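
A hedged sketch of such a volume, assuming the replica and arbiter parameters named above (hostnames and brick paths are illustrative):

gluster::volume { 'data2':
  replica => 3,
  arbiter => 1,
  bricks  => [
    'srv1.local:/export/brick1/brick',
    'srv2.local:/export/brick1/brick',
    'srv3.local:/export/arbiter1/brick',
  ],
}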

When creating a new volume, this defined type will ensure that all of the servers hosting bricks in the volume are members of the storage pool. In this way, you can define the volume at the time you create servers, and once the last peer joins the pool the volume will be created.

Any volume options defined will be applied after the volume is created but before the volume is started.

In the event that the list of volume options active on a volume does not match the list of options passed to this defined type, no options will be removed by default. You must set the $remove_options parameter to true in order for this defined type to remove options.

Note that adding or removing options does not (currently) restart the volume.

gluster::volume { 'data1':
  replica => 2,
  bricks  => [
    'srv1.local:/export/brick1/brick',
    'srv2.local:/export/brick1/brick',
    'srv1.local:/export/brick2/brick',
    'srv2.local:/export/brick2/brick',
  ],
  options => [
    'server.allow-insecure: on',
    'nfs.ports-insecure: on',
  ],
}
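
To have this defined type prune unlisted options, a hedged variation of the example above enables remove_options:

gluster::volume { 'data1':
  replica        => 2,
  bricks         => [
    'srv1.local:/export/brick1/brick',
    'srv2.local:/export/brick1/brick',
    'srv1.local:/export/brick2/brick',
    'srv2.local:/export/brick2/brick',
  ],
  options        => [
    'server.allow-insecure: on',
    'nfs.ports-insecure: on',
  ],
  remove_options => true,
}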

gluster::volume::option

This defined type applies Gluster options to a volume.

In order to ensure uniqueness across multiple volumes, the title of each gluster::volume::option must include the name of the volume to which it applies. The format for these titles is volume:option_name:

gluster::volume::option { 'g0:nfs.disable':
  value => 'on',
}

To remove an option, set the ensure parameter to absent:

gluster::volume::option { 'g0:server.allow-insecure':
  ensure => absent,
}

gluster::mount

This defined type mounts a Gluster volume. Most of the parameters to this defined type match either the gluster FUSE options or the Puppet mount options.

gluster::mount { '/gluster/data1':
  ensure    => present,
  volume    => 'srv1.local:/data1',
  transport => 'tcp',
  atboot    => true,
  dump      => 0,
  pass      => 0,
}

Examples

Please see the examples directory.

Contributing

Pull requests are warmly welcomed!

Copyright

Copyright 2014 CoverMyMeds and released under the terms of the MIT License.

puppet-gluster's People

Contributors

alexjfisher, bastelfreak, bbriggs, bsauvajon, chuman, dhollinger, dhoppe, ekohl, glorpen, jacksgt, juniorsysadmin, jyaworski, kenyon, llowder, martialblog, maxadamo, niteman, pioto, rauchrob, rnelson0, runejuhl, rwaffen, sacres, sammcj, shieldwed, skpy, smortex, tparkercbn, vinzent, zilchms

puppet-gluster's Issues

Name of server service is different in Debian

Hello,

the name of the glusterfs server service is different in Debian. It's not glusterd, but glusterfs-server. I'll pitch two solutions, although I don't know the preferred (i.e. the correct) way of doing this in Puppet.

The service name can come from a variable, so why not put it there? Its value should be determined by LSB. So far so good. Now, where should this logic live?

  1. Either in params.pp
  2. or as a case statement in service.pp

Systemd start for glusterfs-server failed!

I keep getting the error below when I run puppet agent -t but when I check the gluster service, the service is already running. What do I need to do to fix this error that I keep getting?

Error: Systemd start for glusterfs-server failed!
journalctl log for glusterfs-server:
-- No entries --

Error: /Stage[main]/Gluster::Service/Service[glusterfs-server]/ensure: change from stopped to running failed: Systemd start for glusterfs-server failed!
journalctl log for glusterfs-server:
-- No entries --

Mounts can be defined but mount point will not be created

When defining a mount, like so:

gluster_mounts:
  /somemountpoint:
    volume: host:/somevolume
    ensure: mounted
    options: defaults

The mount point (in this case /somemountpoint) will not be created.

Is that expected behaviour? (I was hoping it would also ensure the mount point is a directory.)
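
As a hedged workaround sketch (using the paths from the example above), the mount point directory can be managed explicitly and ordered before the mount:

# Hedged workaround, not module behaviour: create the mount point first.
file { '/somemountpoint':
  ensure => directory,
}
-> gluster::mount { '/somemountpoint':
  ensure  => mounted,
  volume  => 'host:/somevolume',
  options => 'defaults',
}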

Heketi Support

Hello,

It would be fantastic if Heketi support could be added to install and manage the Heketi package / service and basic config (authentication etc...) for whatever service is going to interact with it (i.e. kubes etc...)

Thoughts?

Support yum priorities

With Red Hat now shipping Gluster in the official RHEL channels, there can be version problems when using the upstream Gluster repos.

One suggested fix is to use yum priorities.

We can easily add a priority element to the yumrepo resource, but this raises some interesting questions:

Should we install the yum-plugin-priorities package if a priority is defined?
I'm leaning toward 'yes'.

Should we expose a new repo_version parameter?
I have working code not yet committed that will parse the full package version (X.Y.Z-1.el6) to create the appropriate yumrepo resource, but such a version hard-codes the RHEL major release, which makes a single Hiera definition for the Gluster version hard for environments running both RHEL6 and RHEL7.

A version of '3.5.2' will work for the repo creation, but will fail to actually install any packages:

Error: Could not update: Failed to update to version 3.5.2, got version 3.5.2-1.el6 instead
Error: /Stage[main]/Gluster::Install/Package[glusterfs-fuse]/ensure: change from absent to 3.5.2 failed: Could not update: Failed to update to version 3.5.2, got version 3.5.2-1.el6 instead

We could keep the current implementation whereby the declaration of the repo controls the version that will be applied. This does seem to work as expected with priorities: if the Gluster repo has priority=50, then a simple yum install glusterfs-server will find the version specified in the Gluster repo, not the RHEL repo.
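
A hedged sketch of the yumrepo idea discussed above (the baseurl and priority values are illustrative only, not the module's actual repo definition):

yumrepo { 'glusterfs-upstream':
  descr    => 'Upstream GlusterFS packages',
  baseurl  => 'https://download.gluster.org/pub/gluster/glusterfs/3.5/LATEST/EPEL.repo/epel-$releasever/$basearch/',
  enabled  => 1,
  gpgcheck => 0,
  priority => 50,
}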

Gluster 3.10 support

Good day

Gluster version 3.10 is a long term support version.

Would you please consider upgrading your module to support 3.10?

Many thanks

Regards
Brent Clark

Volume name case issue

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: 4.8.2 on client, 5.5 on server
  • Ruby: 2.3.3p222 on client
  • Distribution: Proxmox 5.2 (which is Debian GNU/Linux 9 (stretch)) on client, Ubuntu 18.04 on server
  • Module version: v4.0.0

How to reproduce (e.g Puppet code you use)

Hiera:

gluster::repo: false
gluster::server: true
gluster::client: false
gluster::use_exported_resources: true

gluster::pool: 'proxmox'

gluster::volumes:
  'proxmoxVMs':
    replica: 2
    bricks:
      - 'xs1.93.lan:/data/glusterfs/proxmoxVMs/brick1/brick'
      - 'xs2.93.lan:/data/glusterfs/proxmoxVMs/brick1/brick'
    options:
      - 'nfs.disable: on'

gluster::service::service_name: glusterfs-server
gluster::service::ensure: running

What are you seeing

When running puppet agent -t after the volume is created I get the following error message:

Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Error while evaluating a Function Call, 'split' parameter 'str' expects a String value, got Undef (file: /etc/puppetlabs/code/modules/gluster/manifests/volume.pp, line: 180, column: 21) (file: /etc/puppetlabs/code/modules/gluster/manifests/init.pp, line: 83) on node xs1.93.lan

What behaviour did you expect instead

No errors. =)

Any additional information you'd like to impart

When I run facter -p on my Proxmox server I can see that it is returning the gluster volume bricks value name in all lower case, but the volume has two uppercase letters in it:

gluster_volume_proxmoxvms_bricks => ...

The line of code mentioned in the error message (line 180 in my version of volume.pp, line 160 in the current version) is using the original name:

$vol_bricks = split( getvar( "::gluster_volume_${title}_bricks" ), ',')

...and it appears to fail to find the fact because it uses the actual volume name, which has uppercase letters:

gluster_volume_proxmoxVMs_bricks

I have worked around this in my copy of the module by changing that line to this:

$title_downcase = downcase( $title )
$vol_bricks = split( getvar( "::gluster_volume_${title_downcase}_bricks" ), ',')

I would submit this code change, but I have a feeling this is not the right solution. ...I find it hard to imagine I'm the first person to have some uppercase letters in their volume name.

missing tag on forge.puppetlabs.com

Hi,
when I search for "glusterfs" on the Forge, this module is missing.
It's a shame :)

Can you add the tag "glusterfs"? It would be very useful for novices :)

Duplicate declaration

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: 5.0.2
  • Ruby: 2.3.1
  • Distribution: Ubuntu Xenial
  • Module version: v.3.0.0

How to reproduce (e.g Puppet code you use)

  class { ::gluster:
    server  => true,
    client  => false,
    repo    => false,
    volumes => {
      'backups' => {
        replica => 2,
        bricks  => [
          'sid-gfs-01.i.i.com:/mnt/backups',
          'sid-gfs-02.i.i.com:/mnt/backups',
        ],
        options => [
          'server.allow-insecure: on',
          'nfs.disable: true',
        ],
      }
    }
  }

  gluster::peer { [ 'sid-gfs-01.i.i.com' ]:
    pool    => 'gfs-backup',
    require => Class[::gluster::service],
  }

What are you seeing

Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Duplicate declaration: Gluster::Peer[sid-gfs-01.i.i.com] is already declared in file modules/gluster/manifests/init.pp:73

This config comes from the examples and does not work.

Unsorted arrays can cause issues with gluster volume commands

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: 4.9.4
  • Ruby: 2.1.9p490 (2016-03-30 revision 54437)
  • Distribution: RHEL 7
  • Module version: current master and all previous releases

How to reproduce (e.g Puppet code you use)

Change the order but not the membership of bricks in a volume.

What are you seeing

gluster volume commands fail because there is no real change to the underlying subsystem. The commands run but return an error.

What behaviour did you expect instead

I expect nothing to happen unless the brick membership changes

Output log

Sorry, I lost this when I was debugging.

Any additional information you'd like to impart

I have a simple patch I will submit against this issue as a pull request.

refreshes to gluster::mount fail to remount the volume

if a refresh is triggered on a gluster::mount, puppet attempts to remount the filesystem with mount -o remount. This results in an error like:

Failed to call refresh: Execution of '/bin/mount -o remount /mnt/foo' returned 1: Invalid option remount

...because gluster FUSE mounts don't support that option.

I propose passing the remounts => false option to the mount directive in the gluster::mount class to avoid this behaviour.
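
A hedged sketch of what that could look like inside gluster::mount (not the module's actual code; $title and $volume stand in for the defined type's own parameters, and unrelated mount parameters are omitted):

mount { $title:
  ensure   => mounted,
  device   => $volume,
  fstype   => 'glusterfs',
  remounts => false,  # unmount/mount instead of running mount -o remount
}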

Add support for setting log level

To change the glusterd global log level:

glusterd --log-level WARNING

This can also be set in the config file, so perhaps the module could just accept options in the form of an array similar to the volume level options that would then populate the config file?

 In the /etc/sysconfig/glusterd file, locate the LOG_LEVEL parameter and set its value to WARNING.

## Set custom log file and log level (below are defaults)
#LOG_FILE='/var/log/glusterfs/glusterd.log'
LOG_LEVEL='WARNING'

As per: https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/Configuring_the_Log_Level.html
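
Until something like that lands, a hedged workaround sketch that manages the value shown above, assuming the puppetlabs/stdlib file_line resource is available:

file_line { 'glusterd log level':
  path  => '/etc/sysconfig/glusterd',
  line  => "LOG_LEVEL='WARNING'",
  match => '^#?LOG_LEVEL=',
}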

Tries to create duplicate volumes if there's only one gluster node

Currently this module tries to create volumes again, which obviously fails because they already exist. This happens when there are no peers (so there is only one node in the pool), because in that case no volume facts are created and thus the volumes are not seen by the creating code.

This seems to stem from the two checks at Gluster fact provider line 28 and line 58.

I'm not sure what the best fix for this situation is. Maybe the volume facts should be provided in this case too, but then the peer count check is not needed.

Update Test Cases for RedHat/YUM

After CentOS took over the responsibility to build and host the RPM (#69 #74), the test cases related to installation on RedHat based systems with YUM should be updated.

Volume creation has race condition with fact gluster_volume_list

My attempt to create a volume with two nodes seems stuck at the volume creation.

Evaluation Error: Error while evaluating a Function Call, function 'split' called with mis-matched arguments

Then it points to volume.pp line 105. The split function there depends on the fact 'gluster_volume_list', except that this fact is undef at this stage because it is only populated once the cluster pool has a volume.

Or am I missing something?

module slow when setting options on volumes

At present, options are very slow to apply to clusters using this module as it iterates (slowly) over each volume and takes a long time to set each setting.

    pool_volumes:
      vol1:
        replica: 2
        brick_mountpoint: '/mnt/intpool01'
        options:
          - 'nfs.disable: true'
          - 'cluster.lookup-optimize: true'
          - 'cluster.readdir-optimize: true'
          - 'performance.cache-size: 768MB'
          - 'performance.io-cache: true'
          - 'performance.io-thread-count: 32'
          - 'performance.write-behind-window-size: 2MB'
          - 'performance.cache-refresh-timeout: 4'
          - 'server.event-threads: 8'
          - 'client.event-threads: 8'
          - 'diagnostics.brick-log-level: WARNING'
          - 'diagnostics.client-log-level: WARNING'

      vol2:
        replica: 2
        brick_mountpoint: '/mnt/intpool01'
        options:
          - 'nfs.disable: true'
          - 'cluster.lookup-optimize: true'
          - 'cluster.readdir-optimize: true'
          - 'performance.cache-size: 768MB'
          - 'performance.io-cache: true'
          - 'performance.io-thread-count: 32'
          - 'performance.write-behind-window-size: 2MB'
          - 'performance.cache-refresh-timeout: 4'
          - 'server.event-threads: 8'
          - 'client.event-threads: 8'
          - 'diagnostics.brick-log-level: WARNING'
          - 'diagnostics.client-log-level: WARNING'

Add support for Thin Arbiter volumes

Very useful recent feature, should be easy to add - I'm about to go overseas though so won't get a chance to do a MR for it for a few weeks at least.

https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Thin-Arbiter-Volumes/

Thin Arbiter volumes in Gluster

Thin Arbiter is a new type of quorum node where the granularity of what is good and what is bad data is less than with the traditional arbiter brick. In this type of volume, quorum is taken into account at a brick level rather than on a per-file basis. If there is even one file that is marked bad (i.e. needs healing) on a data brick, that brick is considered bad for all files as a whole. So, even for a different file, if the write fails on the other data brick but succeeds on this 'bad' brick, we will return failure for the write.

Why Thin Arbiter?

This is a solution for handling stretch-cluster kinds of workloads, but it can be used for regular workloads as well if users are satisfied with this kind of quorum in comparison to arbiter/3-way replication. The thin arbiter node can be placed outside of the trusted storage pool, i.e. the thin arbiter is the "stretched" node in the cluster. This node can be placed in the cloud or anywhere, even if that connection has high latency. As this node takes part only in case of failure (or when a brick is down) and to decide the quorum, it will not impact performance in normal cases. The cost of performing any file operation would be lower than with arbiter if everything is fine. I/O only goes to the data bricks, and goes to the thin arbiter only in the case of a first failure, until heal completes.

Setting UP Thin Arbiter Volume

The command to run the thin-arbiter process on a node:

/usr/local/sbin/glusterfsd -N --volfile-id ta-vol -f /var/lib/glusterd/vols/thin-arbiter.vol --brick-port 24007 --xlator-option ta-vol-server.transport.socket.listen-port=24007

Creating a thin arbiter replica 2 volume:

glustercli volume create <volname> --replica 2 <host1>:<brick1> <host2>:<brick2> --thin-arbiter <quorum-host>:<path-to-store-replica-id-file>

For example:

glustercli volume create testvol --replica 2 server{1..2}:/bricks/brick-{1..2} --thin-arbiter server-3:/bricks/brick_ta --force
volume create: testvol: success: please start the volume to access data

globbing in apt.pp fails with gluster release 3.10 and newer

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: 5
  • Ruby: 2.4.2
  • Distribution: Archlinux
  • Module version: v3.0.0

How to reproduce (e.g Puppet code you use)

class { ::gluster::repo:
  release => '3.12',
}

What are you seeing

The module configures the repo for 3.12, but with the signing key for release 3.1.

What behaviour did you expect instead

The module configures the repo for 3.12 with the signing key for release 3.12.

Output log

Any additional information you'd like to impart

module mount not working

I am trying to mount a gluster volume called storage-test on my webserver with the manifest syntax below:

gluster::mount { '/glusterfs':
   ensure    => present,
   volume    => "storage:/storage-test",
   options   => 'defaults',
   transport => 'tcp',
   atboot    => true,
   dump      => 0,
   pass      => 0,
}

When I run puppet agent -t everything is processed fine with no issues. I then check /etc/fstab and I see this entry: storage:/storage-test /glusterfs glusterfs defaults,transport=tcp 0 0 but when I type mount I don't see any entry regarding the mount as defined above there and when I also type df -h, I don't see any entry there as well. After I check all of this, I reboot the webserver. After the webserver comes back up, when I type mount I do see the mount point and the same applies to when I type df -h, I see the mount entry there as well. After about a minute, when I perform the same checks, the mount entry that was showing after the reboot is no longer in mount or df -h.

I have also tried setting ensure => mounted but when I run puppet, it says the status has changed from unmounted to mounted and then it just hangs, to the point where I have to reboot the server to recover; even then, when I type df -h it hangs again. What am I doing wrong? Why isn't the volume mounting? Any help would be greatly appreciated.

I am running glusterfs 3.12.3 for both server and client. Module version is the latest version

Ability to set global (volume) options

Global volume options for the cluster are set using the volume parameter of 'all'.

For example, to enable multiplexing:

gluster volume set all cluster.brick-multiplex on

However, this module does not seem to have any way to set options on the volume 'all', as it requires brick paths etc... and thus results in the following error:

 Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Gluster::Volume[all]: expects a value for parameter 'bricks'
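
It is untested whether the existing gluster::volume::option defined type accepts the pseudo-volume 'all'; a hedged sketch of what that might look like, following the volume:option title format the module documents:

gluster::volume::option { 'all:cluster.brick-multiplex':
  value => 'on',
}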

"gluster peer probe" getting called on each puppet run

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: 3.8.7
  • Ruby: 2.3.1p112
  • Distribution: Ubuntu 16.04
  • Module version: 3.0.0
  • Gluster version: 3.10.3

How to reproduce (e.g Puppet code you use)

  class { ::gluster:
    repo    => false,
    server  => true,
    client  => false,
  }

Exported resources enabled.

What are you seeing

The first gluster node (test-gluster1.local in this example) comes up fine. The second node (test-gluster2.local) comes up and probes test-gluster1.local as expected. test-gluster1.local then completes the handshaking by probing test-gluster2.local.

However, test-gluster2.local seems to believe it needs to continue probing test-gluster1.local on each puppet run:

2017-06-27 15:22:19 +0100 /Stage[main]/Gluster/Gluster::Peer[test-gluster1.local]/Exec[gluster peer probe test-gluster1.local]/returns (notice): executed successfully

On test-gluster2.local, if you run

facter --puppet gluster_peer_list

It shows the IP address of test-gluster1.local rather than the hostname. I believe this is what causes the peer probe to be called each time, as it does not match the gluster::peer resource title in PuppetDB.

Looking at the code for the gluster_peer_list fact, it determines the peers from the value of Hostname: in the gluster peer status command. If you run this command though, the correct hostname appears under "Other names:"

gluster peer status
Number of Peers: 1

Hostname: 192.168.1.2
Uuid: 36ab6e63-8d4a-4ac9-a7a8-15734b76213a
State: Peer in Cluster (Connected)
Other names:
test-gluster1.local

What behaviour did you expect instead

Puppet run without any changes.

Custom facts provided by module slow and thrash glusterd

The custom facts for the volumes / bricks that this module provides cause gluster volume info and gluster volume status to be called every time the facts are calculated; this may be the cause of #86.

Both the gluster volume status and gluster volume info commands are not designed to be run frequently as they query every node in the cluster along with volume metadata and do create quite a bit of load on the cluster.

On a small cluster with 120 volumes, this puppet module takes over 2.5 minutes just loading facts.

As such, if you have a three-node cluster with each node running puppet every 10 minutes, it means that every 10 minutes you're spending over 7 minutes just in facter.

I don't have a solution for this, but I don't think people should be constantly running these commands across their clusters. I did just find the gluster get-state command, which dumps the state of the cluster to a local file and is very fast; perhaps this could be used instead?

root@int-gluster-01:~ # time facter -p
real	2m8.956s
user	0m8.099s
sys	0m0.603s

vs

root@int-gluster-01:~ # time gluster get-state
glusterd state dumped to /var/run/gluster/glusterd_state_20170913_160143

real	0m0.471s
user	0m0.010s
sys	0m0.004s

Broken facts on gluster3.2

Please check the following items before submitting an issue -- thank you!

Note that this project is released with a Contributor Code of Conduct.
By participating in this project you agree to abide by its terms.

  • There is no existing issue or PR that addresses this problem

Optional, but makes our lives much easier:

  • The issue affects the latest release of this module at the time of
    submission

Affected Puppet, Ruby, OS and module versions/distributions

Puppet 3.8.7, Ruby 1.8.3, CentOS 6

What are you seeing

# facter -p
sh: -c: line 0: syntax error near unexpected token `('
sh: -c: line 0: `/usr/sbin/gluster volume info (position'
sh: -c: line 0: syntax error near unexpected token `)'
sh: -c: line 0: `/usr/sbin/gluster volume info 1)'

What behaviour did you expect instead

seeing cluster facts

How did this behaviour get triggered

fact expected nonexistent output to parse

Output log

Any additional information you'd like to impart

The fact runs gluster volume list. This command isn't available on old Gluster versions (3.2 in my case). The output is:

# gluster volume list
unrecognized word: list (position 1)

Module doesn't work with Puppet 4 due to undef variable passing

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: 4.8.x Enterprise
  • Distribution: CentOS 7
  • Module version: 2.2.2

How to reproduce (e.g Puppet code you use)

Try using the module under any version of Puppet 4.x (assuming you have strict_variables enabled, which you should, as it will be enforced shortly).

What are you seeing

Compilation fails when accessing variables from facts that don't yet exist as the module hasn't created them.

It looks like it's using the old Puppet 2/3 style if param == undef, when it should be using if defined(param).

See: https://docs.puppet.com/puppet/latest/reference/lang_facts_and_builtin_vars.html#compiler-variables

Available and should be used as of Puppet 4.x:

strict_variables = true (Puppet master/apply only): This makes uninitialized variables cause parse errors, which can help squash difficult bugs by failing early instead of carrying undef values into places that don't expect them.

What behaviour did you expect instead

Facts to be loaded, then usable.

Output log

Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Unknown variable: '::gluster_peer_list'. at /etc/puppetlabs/code/environments/samm_gluster_poc/modules/gluster/manifests/peer.pp:59:10 at /etc/puppetlabs/code/environments/samm_gluster_poc/site/profiles/manifests/services/gluster/host.pp:13 on node int-gluster-01.fqdn
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run

Although the peers are set in hiera and are being called from the profile.

cc/ @ross-w

Quietly handle manually created snapshots

Today when using the Gluster snapshot feature, a Puppet run will generate a notify and potentially a rather long one because of the path change for bricks.

notify{ "unable to resolve brick changes for Gluster volume ${title}!\nDefined: ${_bricks}\nCurrent: ${vol_bricks}": }

Gluster, by default, uses a known path for snapshots (/run/gluster/snaps/), which will include the name of the snapshot or a UID, plus the name of the volume. Something like:

/run/gluster/snaps/123456789abcdef/storage_gv

By adding the common part of the path used by snapshots as a class parameter in volume.pp, and using ${title}, we could have a quiet Puppet run without long and confusing Puppet events.

And maybe down the line it will open the door for managing snapshots fully from the module.

Unable to resolve brick changes if the brick order changes

Affected Puppet, Ruby, OS and module versions/distributions

Docker image: puppetserver 2.7.2, puppetexplorer 2.0.0, pupperboard 0.2.0, puppetdb 4.3.0, puppet-postgres 0.1.0

  • Module version: 3.0.0

How to reproduce (e.g Puppet code you use)

Doing some stress testing of gluster I killed and rebuilt a node. Before doing this the gluster volume info reported the bricks in the order:

Brick1: server1:/data/voltest1/brick
Brick2: server2:/data/voltest1/brick
Brick3: server3:/data/voltest1/brick

After the rebuild the order is

Brick1: server2:/data/voltest1/brick
Brick2: server3:/data/voltest1/brick
Brick3: server1:/data/voltest1/brick

What are you seeing

Subsequent runs of puppet agent report

Notice: unable to resolve brick changes for Gluster volume voltest1!
Defined: server1:/data/voltest1/brick server2:/data/voltest1/brick server3:/data/voltest1/brick
Current: [server2:/data/voltest1/brick, server3:/data/voltest1/brick, server1:/data/voltest1/brick]

What behaviour did you expect instead

For it to not complain as the only change is the order of reporting. :)

Any additional information you'd like to impart

"server1", "server2" and "server3" above are not the real names of the servers.

The removal of the brick from the rebuilt server, the peer detach, and the peer probe were done manually, as the puppet module said it doesn't support removal of bricks from a volume.

Gluster Volume Error

I am getting this error when I add a gluster volume and run puppet agent -t:

Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Error while evaluating a Function Call, 'split' parameter 'str' expects a String value, got Undef at /etc/puppetlabs/code/environments/test/modules/gluster/manifests/volume.pp:180:21 at /etc/puppetlabs/code/environments/test/manifests/classes/gluster/test_volume.pp:3 on node gluster1.example.com
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run

Below is what I have for test_volume.pp:

    gluster::volume { 'Test':
      replica => 4,
      bricks  => [
        'gluster1.example.com:/data/test/brick',
        'gluster2.example.com:/data/test/brick',
        'gluster3.example.com:/data/test/brick',
        'gluster4.example.com:/data/test/brick',
        'gluster5.example.com:/data/test/brick',
        'gluster6.example.com:/data/test/brick',
        'gluster7.example.com:/data/test/brick',
        'gluster8.example.com:/data/test/brick',
      ],
      require => [
        File['/data/test'],
      ],
    }
What am I doing wrong? I can't seem to figure out why it's complaining about those lines.

Support commas in volume options

Currently, volume options are collected as a comma-separated list using the gluster_volume_${title}_options fact. However, some volume options may have values containing commas as well (for example, nfs.rpc-auth-allow).

In order to make this work, I see two approaches:

  • Use another separating character in the list of volume options provided by the gluster_volume_${title}_options fact (this should ideally be a character which would never occur in volume option names or values). For example, gluster_volume_${title}_options could be something like option1: 'value1', option2: 'value2,value3'
  • Use Facter arrays/hashes (which are, AFAIK, guaranteed to be available only for Puppet >= 4.0)

Add support for volume tiering

It looks like the module doesn't support tiering in GlusterFS volumes.
Even though the hot tier implementation in GlusterFS is based on distributed replication, it does require a special "tier" command and therefore matching support in this module.

Creating volume without replication

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: puppet-3.8.7-1
  • Ruby: ruby-2.0.0
  • Distribution: RHEL 7
  • Module version: Latest

How to reproduce (e.g Puppet code you use)

What are you seeing

If I don't provide the replica option, puppet does not create the volume.

What behaviour did you expect instead

I changed volume.pp and then it worked:

if $replica {
  if ! is_integer( $replica ) {
    fail("Replica value ${replica} is not an integer")
  } else {
    $_replica = "replica ${replica}"
  }
} else { # This else part was added
  $_replica = ''
}

Output log

Any additional information you'd like to impart

Combination of `add-brick` and `replica` and `stripe` seems to be invalid

Hello,

it seems to me that the combination of add-brick, replica and stripe is invalid.

/usr/sbin/gluster volume add-brick logs stripe 2 replica 2 <new_brick1> <new_brick2>
Wrong brick type: 2, use <HOSTNAME>:<export-dir-abs-path>
Usage: volume add-brick <VOLNAME> [<stripe|replica> <COUNT>] <NEW-BRICK> ... [force]

However, the same command without replica 2

/usr/sbin/gluster volume add-brick logs stripe 2 <new_brick1> <new_brick2>
Changing the 'stripe count' of the volume is not a supported feature. In some cases it may result in data loss on the volume. Also there may be issues with regular filesystem operations on the volume after the change. Do you really want to continue with 'stripe' count option ?  (y/n) n

Now, how this came to be: I set up glusterfs with two servers and a replica factor of 2 first.

  gluster::peer { $gluster_peers:
    pool    => 'xxx',
  }
  gluster::volume { 'xxx':
    replica => 2,
    bricks  => $gluster_bricks,
    require => Gluster::Peer[$gluster_peers],
  }

Then I decided to throw in two more servers and add stripe => 2, because that was the original plan and I wanted to test it. You know the rest.

I'm not sure how to fix this. I guess don't add replica when adding a new brick? stripe, and the fact that you're unable to change it, is probably another story :)

A stopped volume aborts a Puppet run

If a Gluster volume is stopped, a Puppet run will fail:

# puppet agent -t
Info: Retrieving plugin
Volume uploads is not started
Error: Could not retrieve local facts: undefined method `scan' for nil:NilClass
Error: Failed to apply catalog: Could not retrieve local facts: undefined method `scan' for nil:NilClass

This appears to be the offending line:
https://github.com/covermymeds/puppet-gluster/blob/master/lib/facter/gluster.rb#L44

We should check if the volume is started, rather than assuming it is.

Volume mapping values for auth.allow with multiple ips

Hi,

I keep getting the error below when I try to set multiple IPs for "auth.allow" on a volume, and I can't seem to figure out why or what I might be doing wrong.

Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Error while evaluating a Function Call, (<unknown>): mapping values are not allowed here at line 2 column 25 at /etc/puppetlabs/code/environments/test/modules/gluster/manifests/volume.pp:276:21 at /etc/puppetlabs/code/environments/test/manifests/classes/test/gluster/glusterd_volume.pp:52 on node example.com
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run

Could someone help me figure out what I might be doing wrong?

Do you still want to continue? Prompt

  • Puppet: 1.10.4-1jessie (I.e. Puppet4)
  • Distribution: Debian - Jessie
  • Module version: v3.0.1-rc0 (From Github)

What are you seeing

Running gluster 3.11.1-2, it would now appear Puppet can't create the volume, because gluster is waiting for a "y" to go ahead.
For example:
root@REMOVED-web01 ~ # /usr/sbin/gluster volume create storage replica 2 transport tcp REMOVEDSERVER1:/export/shared/glusterfs REMOVEDSERVER2:/export/shared/glusterfs
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y

gluster::volume doesn't create volumes from hosts included in the volume

Hey, Mr. Merrill - thanks for the great module!

In an environment like Vagrant that is provisioned with puppet apply, the Gluster facts will not run, and as a result, ::gluster::volume will never work, since it thinks that peers are missing. I'm going to try to see if there's an easy way to fix this, but I wonder if you might have some thoughts.

Cannot reassign variable r at volume.pp:264

Hello,

you really shouldn't reuse variable names, because once a variable is declared/assigned in Puppet, it can't be changed. You have used the variable $r for replica, therefore you can't use it later to store the options which won't be removed.

            $r = join( keys($remove), ', ' )
            notice("NOT REMOVING the following options for volume ${title}: ${r}.")

The fix is fairly easy: just use a different name for the variable in question, as in the sketch below.
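
A hedged sketch of that fix (the replacement variable name $kept is arbitrary):

$kept = join( keys($remove), ', ' )
notice("NOT REMOVING the following options for volume ${title}: ${kept}.")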

Apt public key for gluster.org changed address

Tried to use puppet-gluster (with 3.9) on Debian 8 and noticed that the apt repository installation failed because the public key is no longer at: https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub

For 3.9 it's at:
https://download.gluster.org/pub/gluster/glusterfs/3.9/rsa.pub

For other versions it's at:
https://download.gluster.org/pub/gluster/glusterfs/VERSION/LATEST/rsa.pub

I'll have a look at the module and (hopefully) provide a fix soon...

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: 4.6.0
  • Module version: 2.3.0

How to reproduce (e.g Puppet code you use)

class { 'gluster::install':
  server => true,
  repo   => true,
}

Output log

Error: /Stage[main]/Gluster::Repo::Apt/Apt::Source[glusterfs-LATEST]/Apt::Key[Add key: A4703C37D3F4DE7F1819E980FE79BB52D5DC52DC from Apt::Source glusterfs-LATEST]/Apt_key[Add key: A4703C37D3F4DE7F1819E980FE79BB52D5DC52DC from Apt::Source glusterfs-LATEST]/ensure: change from absent to present failed: 404 Not Found for https://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub

GlusterFS peers are not defined as prerequisite for a GlusterFS volume

Affected Puppet, Ruby, OS and module versions/distributions

  • Puppet: Puppet Enterprise 2018.1.4, Puppet Agent 5.5.6
  • Ruby: 2.0.0p648
  • Distribution: RHEL 7.5 3.10.0-862.9.1.el7.x86_64
  • Application: GlusterFS 3.7.9-10.el7rhgs
  • Module version: 4.1.0

How to reproduce (e.g Puppet code you use)

  $glusterversion = '3.7.9-10.el7rhgs'
  $poolname = 'production'
  $dirname = 'share'
  $replica_no = 3
  $bricks = ['gluster01:/srv/glfs/share', 'gluster02:/srv/glfs/share', 'gluster03:/srv/glfs/share']
  
  class { ::gluster:
    repo    => false,
    version => $glusterversion,
    server  => true,
    client  => false,
    pool    => $poolname,
    volumes => {
      $dirname => {
        replica   => $replica_no,
        bricks    => $bricks,
        transport => 'tcp',
        options   => ['nfs.disable: true'],
      }
    }
  }

What are you seeing

The Gluster module for puppet 4.1.0 with Puppet Enterprise 2018.1.4 for RHEL 7.5 does not define GlusterFS peers as a prerequisite for the GlusterFS volume.

What behaviour did you expect instead

The Gluster module for puppet should define all peers and the volume in only one puppet run.

Output log

Puppet Agent Run 1 (Peer nodes are ready)

puppet agent log:

Notice: /Stage[main]/Gluster/Gluster::Volume[share]/Exec[gluster create volume share]/returns: volume create: share: failed: Host gluster02 is not in ' Peer in Cluster' state
Error: '/sbin/gluster volume create share replica 3 transport tcp gluster01:/srv/glfs/share gluster02:/srv/glfs/share gluster03:/srv/glfs/share' returned 1 instead of one of [0]
Error: /Stage[main]/Gluster/Gluster::Volume[share]/Exec[gluster create volume share]/returns: change from 'notrun' to ['0'] failed: '/sbin/gluster volume create share replica 3 transport tcp gluster01:/srv/glfs/share gluster02:/srv/glfs/share gluster03:/srv/glfs/share' returned 1 instead of one of [0]
Notice: /Stage[main]/Gluster/Gluster::Volume[share]/Gluster::Volume::Option[share:nfs.disable]/Exec[gluster option share nfs.disable true]: Dependency Exec[gluster create volume share] has failures: true

check peers manually afterwards:

# gluster peer status
Number of Peers: 0

Puppet Agent Run 2 (Peer nodes are ready)

puppet agent log:

Notice: /Stage[main]/Gluster/Gluster::Volume[share]/Exec[gluster create volume share]/returns: volume create: share: failed: /srv/glfs/share is already part of a volume
Error: '/sbin/gluster volume create share replica 3 transport tcp gluster01:/srv/glfs/share gluster02:/srv/glfs/share gluster03:/srv/glfs/share' returned 1 instead of one of [0]
Error: /Stage[main]/Gluster/Gluster::Volume[share]/Exec[gluster create volume share]/returns: change from 'notrun' to ['0'] failed: '/sbin/gluster volume create share replica 3 transport tcp gluster01:/srv/glfs/share gluster02:/srv/glfs/share gluster03:/srv/glfs/share' returned 1 instead of one of [0]
Notice: /Stage[main]/Gluster/Gluster::Volume[share]/Gluster::Volume::Option[share:nfs.disable]/Exec[gluster option share nfs.disable true]: Dependency Exec[gluster create volume share] has failures: true

check peers and facts (set by the gluster module) manually afterwards:

# gluster peer status
Number of Peers: 0

# puppet facts | grep gluster_
    "gluster_binary": "/sbin/gluster",
    "gluster_peer_count": 0,
    "gluster_peer_list": "",

manual peer probes after the two puppet runs are successful:

# gluster peer probe gluster02
peer probe: success. 

# gluster peer probe gluster03
peer probe: success. 

# gluster peer status
Number of Peers: 2

Hostname: gluster02
Uuid: ********-****-****-****-************
State: Peer in Cluster (Connected)

Hostname: gluster03
Uuid: ********-****-****-****-************
State: Peer in Cluster (Connected)

Any additional information you'd like to impart

I think the problem is that the peers are not defined. Tests with the gluster::peer class also did not define peers although the GlusterFS software is installed successfully and glusterd is started successfully on all peers.

The installed GlusterFS software and daemon work fine, because defining the peers and the volume with this installed software can be done manually without any problems.

Weak regex for volume port Fact

The regex used to feed the volume port fact can't always completely parse the output of the 'gluster volume status' command (gluster.rb, line 48):

volume_ports[vol] = status.scan(/^Brick [^\t]+\t+(\d+)/).flatten.uniq.sort

I suspect that it might happen when the brick name is long enough to be formatted into two lines.

The end result is an empty Fact in Puppet
