
docker-volume-plugins's Introduction

Docker Managed Volume Plugins

This project has been forked to https://github.com/marcelo-ochoa/docker-volume-plugins; please submit PRs or bug reports there.

This project provides managed volume plugins for Docker to connect to CIFS, GlusterFS, and NFS.

It also includes a generic CentOS Mounted Volume Plugin that allows arbitrary packages to be installed and used by the mount.

There are two key labels:

  • dev: an unstable version used primarily for development testing; do not use it in production.
  • latest: the latest version that was built, which should be ready for use in production systems.

There is no robust error handling, so expect garbage in, garbage out.
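For example, to install a plugin pinned to the production label explicitly (a minimal sketch using the GlusterFS plugin; the same pattern applies to the CIFS and NFS plugins):

docker plugin install trajano/glusterfs-volume-plugin:latest --grant-all-permissions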

docker-volume-plugins's People

Contributors: mahmoudfarid, marcelo-ochoa, trajano

docker-volume-plugins's Issues

docs not clear

In the following commands, it's not clear what values store1,store2 can take:

docker plugin install --alias PLUGINALIAS trajano/glusterfs-volume-plugin --grant-all-permissions --disable
docker plugin set PLUGINALIAS SERVERS=store1,store2
docker plugin enable PLUGINALIAS

Is that the IP of a node? Is it a volume name? Can you give a real-life example?
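For what it's worth, other reports in this document set SERVERS to the hostnames or IP addresses of the GlusterFS server nodes, not volume names, e.g.:

docker plugin set PLUGINALIAS SERVERS=ds01,ds02,ds03
docker plugin set PLUGINALIAS SERVERS=10.131.52.75,10.131.54.189,10.131.55.104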

--name option to name the volume is missing for cifs-volume-plugin

If I run:

docker volume create -d trajano/cifs-volume-plugin --opt cifsopts=uid=1000 192.168.8.4/Downloads

then a volume with the name "192.168.8.4/Downloads" will be created (the username, password, and domain come from /root/credentials/default):

docker volume ls
DRIVER                              VOLUME NAME
trajano/cifs-volume-plugin:latest   192.168.8.4/Downloads

To show that this is not an allowed name, I delete the volume and then try to mount the now non-existent volume (this prints out the allowed characters for volume names):

docker run -d --name helloshare -v 192.168.8.4/Downloads:/testvolumesmb strm/helloworld-http
docker: Error response from daemon: create 192.168.8.4/Downloads: "192.168.8.4/Downloads" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
See 'docker run --help'.

This means "/" is not allowed. It would be nice if a better volume name were generated, with support for a "--name" option to define a custom volume name.
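For comparison, an nfs-volume-plugin invocation quoted later in this document keeps the volume name separate from the share by passing the share as a device option; a similar pattern for the CIFS plugin (hypothetical, since the CIFS plugin may not support a device option) would sidestep the invalid-name problem:

docker volume create -d trajano/nfs-volume-plugin --name test-data -o device=192.168.200.231:/volume1/nfs-data/docker/test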

Failed to chmod: operation not permitted

I'm trying to mount my NAS just as I mounted it via the ContainX/docker-volume-netshare plugin, but I'm running into an error. The error occurs because I have "squash root to admin" set on the NAS share.

# docker -v
Docker version 18.06.0-ce, build 0ffa825
# docker volume create -d trajano/nfs-volume-plugin --name test-data -o device=192.168.200.231:/volume1/nfs-data/docker/test
test-data
# docker run -it -v test-data:/mnt alpine
docker: Error response from daemon: failed to chmod on /var/lib/docker/plugins/80f7d48ff836ead9673bf21f108c684cd5ab867d601a7fac2d0e734bec0e49d2/propagated-mount/bd74903967b14b83bde505cadaf170e44df6587c1fa1df6487b9eb0f29616352: chmod /var/lib/docker/plugins/80f7d48ff836ead9673bf21f108c684cd5ab867d601a7fac2d0e734bec0e49d2/propagated-mount/bd74903967b14b83bde505cadaf170e44df6587c1fa1df6487b9eb0f29616352: operation not permitted.
See 'docker run --help'.

Is this a problem in docker? I don't see anywhere in the code where a chmod is taking place.

glusterfs volume name

Hi, does the GlusterFS volume name have to be "gfs"?
Can I use a different name, and how do I specify it in the options?
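Judging from mount arguments logged in later reports in this document, the plugin passes the Docker volume name straight through as --volfile-id, so the name does not have to be "gfs"; e.g. a volume named gv2 is mounted with --volfile-id=gv2:

docker volume create -d glusterfs gv2
docker run -it -v gv2:/mnt alpine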

subfolder option does not work

I tried all three of the methods for mounting GlusterFS volumes. None of them was able to mount a subdirectory in the container.

The mount point seems correct:
gluster-node1:volume/uploads 400G 475M 400G 1% /data/www/web

But the mount did not contain the subdirectory, only the volume itself:
root@c3bcdd1ea4cb:/data/www/web/# ls
uploads
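For reference, a docker volume inspect quoted later in this document shows how the plugin interprets a slash in the volume name: the part before the slash becomes the volfile ID and the part after becomes a subdirectory mount, e.g. for a volume named "gfs/vol1":

"--volfile-id=gfs",
"--subdir-mount=/vol1"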

Docker Compose, driver name error on running compose down and then compose up

After running compose down and then compose up, without restarting Docker, I received the following error:

ERROR: Configuration for volume user-db-logs specifies driver centos-nfs, but a volume with the same name uses a different driver (centos-nfs:latest). If you wish to use the new configuration, please remove the existing volume "jistdeploy_user-db-logs" first:

It appears that something is appending the :latest suffix to the driver even if it is not specified or not appending it appropriately when checking if it already exists.
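A hedged workaround, assuming Compose accepts a fully tagged driver reference: declare the driver exactly as the engine records it, so the names compare equal:

volumes:
  user-db-logs:
    driver: "centos-nfs:latest"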

Transport endpoint is not connected

Versions

  • glusterfs 3.13.2 on Ubuntu 18.04
  • Docker version 19.03.7, build 7141c199a2
  • glusterfs:latest "GlusterFS plugin for Docker"

Hosts
Gluster hosts 10.131.52.75,10.131.54.189,10.131.55.104
Docker Swarm hosts 10.131.121.134,10.131.121.220,10.131.119.235

gluster volume set gfs auth.allow 10.131.52.75,10.131.54.189,10.131.55.104,10.131.121.134,10.131.121.220,10.131.119.235 has been set on the Gluster volume.

UFW firewall rules are set as follows on all Gluster hosts.
All traffic from the LAN is allowed.

22/tcp                     ALLOW       Anywhere
Anywhere                   ALLOW       10.131.52.75
Anywhere                   ALLOW       10.131.54.189
Anywhere                   ALLOW       10.131.55.104
Anywhere                   ALLOW       10.131.121.134
Anywhere                   ALLOW       10.131.121.220
Anywhere                   ALLOW       10.131.119.235
Anywhere                   ALLOW       10.131.52.75/udp
Anywhere                   ALLOW       10.131.54.189/udp
Anywhere                   ALLOW       10.131.55.104/udp
Anywhere                   ALLOW       10.131.121.134/udp
Anywhere                   ALLOW       10.131.121.220/udp
Anywhere                   ALLOW       10.131.119.235/udp
22/tcp (v6)                ALLOW       Anywhere (v6)

When deploying the Docker stack below, the following error is encountered:

wbhn2xvfpm79bd7615sa6pt7v   test_foo.1          alpine:latest@sha256:ab00606a42621fb68f2ed6ad3c88be54397f981a7b70a79db3d1172b11c4367d   docker1             Ready               Rejected 1 second ago     "open /var/lib/docker/plugins/31ec62ae30657bcbaadf9435f2731b72610eb81d3c3c3276e1a9632a86fd351f/propagated-mount/f7eca8725e52c250df3cb88a102c04bee71f95b8f99aa1cfc778eb43e2b67b75: transport endpoint is not connected"
648vue9e87dmhsl5lfor1bqn0    \_ test_foo.1      alpine:latest@sha256:ab00606a42621fb68f2ed6ad3c88be54397f981a7b70a79db3d1172b11c4367d   docker2             Shutdown            Rejected 5 seconds ago    "open /var/lib/docker/plugins/c46bd9fc2f5d9980398d594289fca4fc535f057d8ae7fe069e4381416cc940a8/propagated-mount/e1077d958f3233f0de63efabc218c14477c00090615b71320c9895dc0b3c7f06: transport endpoint is not connected"
version: "3.4"

services:
  foo:
    image: alpine
    command: ping localhost
    networks:
      - net
    volumes:
      - vol1:/tmp

networks:
  net:
    driver: overlay

volumes:
  vol1:
    driver: glusterfs
    driver_opts:
      servers: 10.131.52.75,10.131.54.189,10.131.55.104
    name: "gfs/vol1"

The vol1 subdirectory has already been created on the Gluster volume. The managed plugin has been installed and enabled on all 3 Swarm hosts, and the IPs of the Gluster hosts have also been set.

Excerpt from docker plugin inspect glusterfs:latest

            "Env": [
                "SERVERS=10.131.52.75,10.131.54.189,10.131.55.104"
            ],

Any help would be appreciated. Thanks in advance!

Subdir is not created and therefore doesn't seem to work, need to create subdir first

Hi,

I wonder if a fix is coming, or could we update the README to mention this potential bug?

I had a container set up to work via Gluster, i.e.:

volumes:
  registry_data:
    driver: glusterfs
    name: "gfs/docker-registry"

The container started and everything appeared to be working, but the disks were not getting the updates. So I brought down my stack and created the directory manually, i.e.:

mkdir docker-registry

on one of the Gluster nodes, and now all is working again after bringing the stack back up.

I see the files appearing on the disk.

Thanks.

VolumeDriver.Mount: error mounting <volume>: exit status 1

I have 3 CentOS 7 nodes with Gluster installed:

[root@ds01 ~]# gluster volume info
Volume Name: gv0
Type: Replicate
Volume ID: 98f1c6e3-8fa0-4a2b-93d5-95b64435eb04
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ds01:/data/brick1/gv0
Brick2: ds02:/data/brick1/gv0
Brick3: ds03:/data/brick1/gv0
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off

The gv0 Gluster volume is working fine. When I mount it and create files, they are replicated to ds01, ds02, and ds03.

But when I create a Docker volume with the glusterfs plugin and try to mount it in a container, I get:
VolumeDriver.Mount: error mounting : exit status 1

All help appreciated!

versions used:
docker version 19.03.12
glusterfs 7.6
centos 7
Linux ds01 3.10.0-1127.13.1.el7.x86_64 #1 SMP Tue Jun 23 15:46:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

[root@ds01 ~]# docker plugin install --alias glusterfs trajano/glusterfs-volume-plugin --grant-all-permissions --disable
latest: Pulling from trajano/glusterfs-volume-plugin
a392984758fa: Download complete
Digest: sha256:f054d37e71aca82426c12c0d4a555861e1225969df2df6c45641303b4cf18a95
Status: Downloaded newer image for trajano/glusterfs-volume-plugin:latest
Installed plugin trajano/glusterfs-volume-plugin
[root@ds01 ~]# docker plugin set glusterfs SERVERS=ds01,ds02,ds03
[root@ds01 ~]# docker plugin enable glusterfs
glusterfs
[root@ds01 ~]# docker plugin list
ID NAME DESCRIPTION ENABLED
e6ee819ceeca glusterfs:latest GlusterFS plugin for Docker true
[root@ds01 ~]# docker plugin inspect glusterfs
[
    {
        "Config": {
            "Args": {
                "Description": "",
                "Name": "",
                "Settable": null,
                "Value": null
            },
            "Description": "GlusterFS plugin for Docker",
            "DockerVersion": "18.03.0-ce",
            "Documentation": "https://github.com/trajano/docker-volume-plugins/",
            "Entrypoint": [
                "/glusterfs-volume-plugin"
            ],
            "Env": [
                {
                    "Description": "",
                    "Name": "SERVERS",
                    "Settable": [
                        "value"
                    ],
                    "Value": ""
                }
            ],
            "Interface": {
                "Socket": "gfs.sock",
                "Types": [
                    "docker.volumedriver/1.0"
                ]
            },
            "IpcHost": false,
            "Linux": {
                "AllowAllDevices": false,
                "Capabilities": [
                    "CAP_SYS_ADMIN"
                ],
                "Devices": [
                    {
                        "Description": "",
                        "Name": "",
                        "Path": "/dev/fuse",
                        "Settable": null
                    }
                ]
            },
            "Mounts": null,
            "Network": {
                "Type": "host"
            },
            "PidHost": false,
            "PropagatedMount": "/var/lib/docker-volumes",
            "User": {},
            "WorkDir": "",
            "rootfs": {
                "diff_ids": [
                    "sha256:9e8240f5b99231266ccb3260422fa333d6f72b3d7f6c77042d775ffc0e89b9ba"
                ],
                "type": "layers"
            }
        },
        "Enabled": true,
        "Id": "e6ee819ceeca73104da53ae7cc9b16150ddef6536f0dfbf7bcc282f8915b6887",
        "Name": "glusterfs:latest",
        "PluginReference": "docker.io/trajano/glusterfs-volume-plugin:latest",
        "Settings": {
            "Args": [],
            "Devices": [
                {
                    "Description": "",
                    "Name": "",
                    "Path": "/dev/fuse",
                    "Settable": null
                }
            ],
            "Env": [
                "SERVERS=ds01,ds02,ds03"
            ],
            "Mounts": []
        }
    }
]
[root@ds01 ~]# docker volume create -d glusterfs gv2
gv2
[root@ds01 ~]# docker run -it -v gv2:/mnt alpine
docker: Error response from daemon: VolumeDriver.Mount: error mounting gv2: exit status 1.
See 'docker run --help'.

Jul 10 14:57:29 ds01 dockerd: time="2020-07-10T14:57:28+02:00" level=error msg="Entering go-plugins-helpers capabilitiesPath" plugin=e6ee819ceeca73104da53ae7cc9b16150ddef6536f0dfbf7bcc282f8915b6887
Jul 10 14:57:29 ds01 dockerd: time="2020-07-10T14:57:29+02:00" level=error msg="Entering go-plugins-helpers getPath" plugin=e6ee819ceeca73104da53ae7cc9b16150ddef6536f0dfbf7bcc282f8915b6887
Jul 10 14:57:29 ds01 dockerd: time="2020-07-10T14:57:29+02:00" level=error msg="Entering go-plugins-helpers capabilitiesPath" plugin=e6ee819ceeca73104da53ae7cc9b16150ddef6536f0dfbf7bcc282f8915b6887
Jul 10 14:57:29 ds01 dockerd: time="2020-07-10T14:57:29+02:00" level=error msg="Entering go-plugins-helpers getPath" plugin=e6ee819ceeca73104da53ae7cc9b16150ddef6536f0dfbf7bcc282f8915b6887
Jul 10 14:57:29 ds01 dockerd: time="2020-07-10T14:57:29+02:00" level=error msg="Entering go-plugins-helpers capabilitiesPath" plugin=e6ee819ceeca73104da53ae7cc9b16150ddef6536f0dfbf7bcc282f8915b6887
Jul 10 14:57:29 ds01 dockerd: time="2020-07-10T14:57:29+02:00" level=error msg="Entering go-plugins-helpers mountPath" plugin=e6ee819ceeca73104da53ae7cc9b16150ddef6536f0dfbf7bcc282f8915b6887
Jul 10 14:57:29 ds01 dockerd: time="2020-07-10T14:57:29+02:00" level=error msg="[-s ds01 -s ds02 -s ds03 --volfile-id=gv2 /var/lib/docker-volumes/de3b2d20f02f69bf4fd0112157e8ac2a259e820a06610b906d20693bb091241b]" plugin=e6ee819ceeca73104da53ae7cc9b16150ddef6536f0dfbf7bcc282f8915b6887
Jul 10 14:57:29 ds01 dockerd: time="2020-07-10T14:57:29+02:00" level=info msg="Command output: " plugin=e6ee819ceeca73104da53ae7cc9b16150ddef6536f0dfbf7bcc282f8915b6887
Jul 10 14:57:29 ds01 dockerd: time="2020-07-10T14:57:29.059610710+02:00" level=error msg="Handler for POST /v1.40/containers/create returned error: VolumeDriver.Mount: error mounting gv2: exit status 1"

Volumes not persisted between restarts

I am using this to deploy some applications via docker compose. However, containers marked as restart: always fail to start because the volumes error out as non-existent.

I installed with the following commands:

sudo docker plugin install --alias centos-nfs trajano/centos-mounted-volume-plugin --grant-all-permissions --disable
sudo docker plugin set centos-nfs PACKAGES=nfs-utils
sudo docker plugin set centos-nfs MOUNT_TYPE=nfs
sudo docker plugin set centos-nfs MOUNT_OPTIONS=hard,proto=tcp,nfsvers=4,intr
sudo docker plugin enable centos-nfs

My volume declaration looks like this:

  webapp-logs:
    driver: centos-nfs
    driver_opts:
      device: host:logs/webapp

My docker compose command is like so:

sudo /usr/local/bin/docker-compose -f compose.yml up -d --build --force-recreate --remove-orphans

The resultant mount from docker inspect looks like so:

        "Mounts": [
            {
                "Type": "volume",
                "Name": "jistdeploy_webapp-logs",
                "Source": "/var/lib/docker/plugins/180d32f4982687ecfb6df714d95941749c0fc85c140d3e0180c9396775fa87cc/propagated-mount/13f96f9f26655ba3baddfc8e64d4bf812e54c1c5c68e24cd05081bbad0cba227",
                "Destination": "/var/log/webapps",
                "Driver": "centos-nfs:latest",
                "Mode": "rw",
                "RW": true,
                "Propagation": ""
            }
       ]

On restart of docker or the host I get the following error:

dockerd[1762]: time="2018-04-24T17:51:43.106196883Z" level=error msg="Failed to start container 3de3cad008484136b1c690c26fd46d17a7db80c01fc1187f4e9c2a9fac80b09d: get jistdeploy_webapp-logs: VolumeDriver.Get: volume jistdeploy_webapp-logs does not exist"

When stopping docker the "Source" location is removed.

Is there a way to make this persistent or have the plugin reconnect to the share on startup?

name Additional property name is not allowed

This is my first time using this, so I may be doing something wrong, but it doesn't feel like it.
I have installed GlusterFS in a 3 node swarm.
It works well, and the services can use the volumes as bind volumes.
I have installed the plugin, following the instructions, but when I try to build with compose or deploy to the stack I get:

docker stack deploy -c docker-compose.yml whatever
name Additional property name is not allowed

The docker-compose.yml used:

version: 3

services:
  postgres:
    image: "mdillon/postgis"
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
        delay: 5s
    container_name: postgres
    ports:
      - "4321:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
    driver: glusterfs
    name: "swarmVolume"
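A likely cause, assuming the file above is verbatim: the top-level name key for volumes was only added in Compose file format 3.4, so a file declaring version: 3 fails schema validation on it. A minimal sketch of the fix:

version: "3.4"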
And this is the output of volume info for swarmVolume:

Volume Name: swarmVolume
Type: Replicate
Volume ID: 76275e72-6e85-48c2-a8dd-21e8dfce8ed4
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: Swarm-Master:/gluster/volume1
Brick2: Swarm-Manager-1:/gluster/volume1
Brick3: Swarm-Manager-2:/gluster/volume1
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off

docker info:

Client:
Debug Mode: false

Server:
Containers: 4
Running: 3
Paused: 0
Stopped: 1
Images: 29
Server Version: 19.03.12
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: active
NodeID: 9vnjz1h7r1agru5g132skjk
Is Manager: true
ClusterID: p0wpqg412gkeln21tmvddom
Managers: 3
Nodes: 3
Default Address Pool: 1.0.0.0/8
SubnetSize: 24
Data Path Port: 4789
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 10
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 1.0.0.1
Manager Addresses:
1.0.0.1:2377
1.0.0.2:2377
1.0.0.3:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 7ad184331fa3e55e52890ea95e65ba581ae3429
runc version: dc9208a3303feef5b3839f4323d9be36df0a9dd
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.15.0-109-generic
Operating System: Ubuntu 18.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.852GiB
Name: Swarm-Master
ID: ZHPJ:UJW:JDDO:L4WN:LUDX:SGI:ZLLH:INPF:F5UY:J35Z:MLOH:4PP
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

Also, is it possible to use this plugin from within gitlab-runner? Then one could integrate it with CI/CD, build images aware of GlusterFS, and deploy to production.

VolumeDriver.Mount: error mounting data: exit status 1

I have a working Gluster cluster; sync also works.

# gluster volume info
 
Volume Name: gluster-fs
Type: Replicate
Volume ID: be633a3e-555f-44d0-8ec8-07e77a440f47
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster0:/gluster/brick
Brick2: gluster1:/gluster/brick
Brick3: gluster2:/gluster/brick
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off

And I have a simple docker-compose file:

version: "3.4"
services:

  mysql:
    image: mysql
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
    ports:
      - "3306:3306"
    networks:
      - default
    volumes:
      - data:/var/lib/mysql

volumes:
  data:
    driver: glusterfs
    name: "data"

I can't run it because I always get the same error:

VolumeDriver.Mount: error mounting data: exit status 1

And this:

[2020-01-16 10:40:27.671991] I [MSGID: 114057] [client-handshake.c:1376:select_server_supported_programs] 0-gluster-fs-client-2: Using Program GlusterFS 4.x v1, Num (1298437), Version (400)
[2020-01-16 10:40:27.672188] I [MSGID: 114057] [client-handshake.c:1376:select_server_supported_programs] 0-gluster-fs-client-1: Using Program GlusterFS 4.x v1, Num (1298437), Version (400)
[2020-01-16 10:40:27.672754] I [MSGID: 114046] [client-handshake.c:1106:client_setvolume_cbk] 0-gluster-fs-client-2: Connected to gluster-fs-client-2, attached to remote volume '/gluster/brick'.
[2020-01-16 10:40:27.672778] I [MSGID: 108002] [afr-common.c:5648:afr_notify] 0-gluster-fs-replicate-0: Client-quorum is met
[2020-01-16 10:40:27.673116] I [MSGID: 114046] [client-handshake.c:1106:client_setvolume_cbk] 0-gluster-fs-client-1: Connected to gluster-fs-client-1, attached to remote volume '/gluster/brick'.
[2020-01-16 10:40:27.675667] I [fuse-bridge.c:5166:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.27
[2020-01-16 10:40:27.675689] I [fuse-bridge.c:5777:fuse_graph_sync] 0-fuse: switched to graph 0
[2020-01-16 10:40:27.677743] I [MSGID: 108031] [afr-common.c:2581:afr_local_discovery_cbk] 0-gluster-fs-replicate-0: selecting local read_child gluster-fs-client-0
[2020-01-16 10:44:32.157093] E [fuse-bridge.c:227:check_and_dump_fuse_W] (--> /lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x12f)[0x7f3055f6ac9f] (--> /usr/lib/x86_64-linux-gnu/glusterfs/7.1/xlator/mount/fuse.so(+0x8e32)[0x7f3054522e32] (--> /usr/lib/x86_64-linux-gnu/glusterfs/7.1/xlator/mount/fuse.so(+0x9fe8)[0x7f3054523fe8] (--> /lib/x86_64-linux-gnu/libpthread.so.0(+0x7fa3)[0x7f3055adcfa3] (--> /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f30557244cf] ))))) 0-glusterfs-fuse: writing to fuse device failed: No such file or directory
[2020-01-16 10:44:34.118949] E [fuse-bridge.c:227:check_and_dump_fuse_W] (--> /lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x12f)[0x7f3055f6ac9f] (--> /usr/lib/x86_64-linux-gnu/glusterfs/7.1/xlator/mount/fuse.so(+0x8e32)[0x7f3054522e32] (--> /usr/lib/x86_64-linux-gnu/glusterfs/7.1/xlator/mount/fuse.so(+0x9fe8)[0x7f3054523fe8] (--> /lib/x86_64-linux-gnu/libpthread.so.0(+0x7fa3)[0x7f3055adcfa3] (--> /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f30557244cf] ))))) 0-glusterfs-fuse: writing to fuse device failed: No such file or directory

I tried using
name: "gfs/data"
name: "gluster-fs/data"

I also tried recreating the Gluster volume with other names like gfs, and manually creating the folder "data". No luck. Does it work at all?

mount options

Is there a way to pass mount options, specifically acl?
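Elsewhere in this document, arbitrary client flags are passed through the glusteropts driver option; a hedged sketch for ACLs, assuming the bundled glusterfs client accepts --acl (server name and volfile ID illustrative):

volumes:
  myvol:
    driver: glusterfs
    driver_opts:
      glusteropts: "-s server1 --volfile-id=gfs --acl"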

docker stop command creates <defunct> processes

Every time a container with a GlusterFS Docker volume mounted is stopped, a daemon process marked <defunct> is left behind.

docker run --rm -it -v my-glusterfs-volume:/work alpine

After a Ctrl-C, ps aux shows a defunct process. This is what ps shows after six starts and stops:

root 24438 0.0 0.0 0 0 ? Zs 11:04 0:00 [glusterfs] <defunct>
root 24560 0.0 0.0 0 0 ? Zs 11:04 0:00 [glusterfs] <defunct>
root 24801 0.0 0.0 0 0 ? Zs 11:05 0:00 [glusterfs] <defunct>
root 25263 0.0 0.0 0 0 ? Zs 11:07 0:00 [glusterfs] <defunct>
root 25381 0.0 0.0 0 0 ? Zs 11:08 0:00 [glusterfs] <defunct>
root 25529 0.0 0.0 0 0 ? Zs 11:08 0:00 [glusterfs] <defunct>

Regards,

Failure to create directory with GlusterFS 4.1

I tried the current master branch (aeecbb6) on a simple Docker host with GlusterFS 4.1. It appears to be working EXCEPT when a non-existent subdirectory is specified. Did something change between GlusterFS 3.x and 4.1?

Volume persistence WORKS with a bare Gluster volume:

docker run -it --rm --volume-driver gluster1 -v gfs1:/data alpine

df /data
Filesystem           1K-blocks      Used Available Use% Mounted on
srv1:gfs1        14624768   2320512  12304256  16% /data

Using a subdir is SILENTLY BROKEN - all appears well until the container terminates and your "persistent" data is vaporized:

docker run -it --rm --volume-driver gluster1 -v gfs1/dir1:/data alpine

Evidence of a problem can be seen in "df" output: it returns an unexpected device (the filesystem of /var/lib/docker) instead of the GlusterFS device:

df /data
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg1-lv_docker
                     314394636    403180 313991456   0% /data

Specifying a subdir WORKS if it has already been created within the GlusterFS volume:

docker run -it --rm --volume-driver gluster1 -v gfs1/n:/data alpine

df /data
Filesystem           1K-blocks      Used Available Use% Mounted on
srv1:gfs1/n      14624768   2320512  12304256  16% /data
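A hedged workaround consistent with other reports in this document: pre-create the subdirectory on the Gluster volume before using it, e.g. from any Gluster client host (mount point illustrative):

mount -t glusterfs srv1:gfs1 /mnt/gfs1
mkdir /mnt/gfs1/dir1
umount /mnt/gfs1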

Install/version info

For this testing, the Docker host served as one of the two peers in the GlusterFS cluster.

docker plugin install --alias gluster1   trajano/glusterfs-volume-plugin \
  --grant-all-permissions --disable
docker plugin set gluster1 SERVERS=srv1,srv2
docker plugin enable gluster1

rpm -qa centos-release centos-release-gluster41 docker-ce glusterfs | sort
centos-release-7-6.1810.2.el7.centos.x86_64
centos-release-gluster41-1.0-3.el7.centos.noarch
docker-ce-18.09.6-3.el7.x86_64
glusterfs-4.1.8-1.el7.x86_64

sudo gluster volume list
gfs1

failed to set file label ... operation not supported

Hi all,

Any attempt to mount a volume (via a compose file or the CLI) returns this error:

docker: Error response from daemon: failed to set file label on /var/lib/docker/plugins/e3534cc8a654f232b07bd53094198ccd9b8039c17fc14964f2078d3e601a6c15/propagated-mount/6bfc1351bcbc76e90518cb94a70fad08a5b3a9f5a477a0a2eca86696e2ace1ce: operation not supported.

I'm using
Docker v19.03.8 in swarm mode
GlusterFS v4.1.9
on CentOS 7 with SELinux in Enforcing mode

Plugin configuration
docker plugin install --alias glusterfs trajano/glusterfs-volume-plugin --grant-all-permissions --disable
docker plugin set glusterfs SERVERS=swarm-master1,swarm-master2,swarm-master3,swarm-worker1,swarm-worker2,swarm-worker3
docker plugin enable glusterfs

Any ideas?
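Since SELinux is Enforcing, one hedged first step is to check whether the failure is label-related by temporarily switching to permissive mode and retrying the mount:

getenforce
sudo setenforce 0    # Permissive for testing; revert with: sudo setenforce 1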

Enabling plugin throws "no such file or directory" error on boot2docker vm

I am attempting to use the nfs-volume-plugin on a Docker machine created using Docker Toolbox and the vSphere driver, so the underlying machine image is a boot2docker VM.

When attempting to enable the plugin I am receiving an error:

Error response from daemon: dial unix /run/docker/plugins/bf1e4c06ddc7d75060a52846b09e8efb573bb49ae6f65147b541e53c66a5e204/nfs.sock: connect: no such file or directory

It seems that it might be some kind of permission error, but I cannot tell whether it is caused by file permissions in boot2docker or by the way the plugin creates the files in question. I can create the file manually, but then I get a different error and the file is deleted afterwards, so I lean towards this being an issue with the plugin rather than file permissions.

Boot2Docker version 18.06.0-ce, build HEAD : 1f40eb2 - Thu Jul 19 18:48:09 UTC 2018
docker version:
Client:
Version: 18.06.0-ce
API version: 1.38
Go version: go1.10.3
Git commit: 0ffa825
Built: Wed Jul 18 19:04:39 2018
OS/Arch: linux/amd64
Experimental: false

Server:
Engine:
Version: 18.06.0-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.3
Git commit: 0ffa825
Built: Wed Jul 18 19:13:39 2018
OS/Arch: linux/amd64
Experimental: false

Gluster mount drive vs this plugin ?

Hi,

Sorry for the silly question.

I have my gluster enabled on both my docker swarm machines. I basically write to /mnt/...... and gluster actually takes care of syncing the data.

What does this plugin allow me to do apart from that?

Yep, sorry, a silly question - I know :-)

But it's not apparent to me what installing the plugin gives me when GlusterFS takes care of the syncing anyway - is that not right?

Can anybody confirm?

Thanks

When mounting a CIFS volume, the service does not get replicated

In this case, nothing works. No logs or anything.

Running the following w/ cifs-utils on host machine works (CentOS 7):

 mount -t cifs   -o vers=3.02,mfsymlinks,file_mode=0666,dir_mode=0777,credentials=/opt/apps/photos/secret.txt //photobank/photos /opt/apps/photos/testcifs

However, when using the plugin the service just gets stuck. No replication, no logs. Nothing is happening:

version: '3.7'
services:
  emby:   # hypothetical service name; this key is missing in the snippet as reported
    image: emby/embyserver
    networks:
      - public-net
    volumes:
      - type: volume
        source: emby-config
        target: /config
      - photobank:/mnt/photobank
    secrets:
      - source: photobank.credentials
        target: /root/credentials/photobank@photos
    deploy:
       ... deploy options here
secrets:
  photobank.credentials:
    external: true
networks:
  public-net:
    external: true
volumes:
  emby-config:
  photobank:
    driver: trajano/cifs-volume-plugin
    driver_opts:
      cifsopts: vers=3.02,mfsymlinks,file_mode=0666,dir_mode=0777
    name: "photobank/photos"

(The photobank.credentials secret stores the contents of "secret.txt", which is used by the mount command above.)

Using normal volumes works. However, using the plugin volume does nothing; it just gets stuck.

Docker version:

Client: Docker Engine - Community
 Version:           19.03.3
 API version:       1.40
 Go version:        go1.12.10
 Git commit:        a872fc2f86
 Built:             Tue Oct  8 00:58:10 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.3
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.10
  Git commit:       a872fc2f86
  Built:            Tue Oct  8 00:56:46 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

What am I doing wrong here?

Error response from daemon: manifest for lukics/glusterfs-volume-plugin:latest not found

I managed to build the image on my Raspberry Pi.

Dockerfile:

FROM centos:7
RUN yum install -q -y wget git glusterfs glusterfs-fuse attr
RUN wget https://storage.googleapis.com/golang/go1.9.linux-armv6l.tar.gz && \
    tar -C /usr/local -xzf go1.9.linux-armv6l.tar.gz && \
    export PATH=$PATH:/usr/local/go/bin # put into ~/.profile
RUN /usr/local/go/bin/go get github.com/trajano/docker-volume-plugins/glusterfs-volume-plugin && \
    mv $HOME/go/bin/glusterfs-volume-plugin / && \
    rm -rf $HOME/go && \
    yum remove -q -y go git gcc && \
    yum autoremove -q -y && \
    yum clean all && \
    rm -rf /var/cache/yum /var/log/anaconda /var/cache/yum /etc/mtab && \
    rm /var/log/lastlog /var/log/tallylog

I built it with:
docker build -t lukics/glusterfs-volume-plugin .

When I try to install it I get:

$ docker plugin install lukics/glusterfs-volume-plugin --grant-all-permissions
Error response from daemon: manifest for lukics/glusterfs-volume-plugin:latest not found

What did I do wrong?
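A hedged explanation: docker build produces an ordinary image, not a managed plugin, so docker plugin install goes to the registry looking for a plugin manifest and fails. Managed plugins are created locally with docker plugin create from a directory containing a config.json and a rootfs (directory name illustrative):

docker plugin create lukics/glusterfs-volume-plugin ./plugin-build-dir
docker plugin enable lukics/glusterfs-volume-plugin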

Question: which version of glusterfs does this plugin work with?

glusterfs -V
glusterfs 7.1
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

What glusterfs version is required for this plugin to work fully?

#    volumes:
#      - type: bind
#        source: /mnt/glusterfs/vote/mysql
#        target: /var/lib/mysql
    volumes:
      - votedata:/var/lib/mysql

volumes:
  votedata:
    driver: glusterfs
    name: "gfs/votedata"

I just added the volume plugin, and the service always fails.

Do I need to create the folder "/vol1" on the volume manually or not?

Hello!

I have a swarm/gluster cluster (three nodes):

  • Ubuntu 18.04
  • Docker version 19.03.5, build 633a0ea838
  • glusterfs 3.13.2

I've made and started a volume:
gluster volume create gfs replica 3 swarm0:/gluster/brick swarm1:/gluster/brick swarm2:/gluster/brick force

gluster volume info

Volume Name: gfs
Type: Replicate
Volume ID: a4460b88-2b08-4ea3-b0c5-183deb87949f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: swarm0:/gluster/brick
Brick2: swarm1:/gluster/brick
Brick3: swarm2:/gluster/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

I've mounted the volume on all nodes:

sudo umount /mnt
sudo echo 'localhost:/gfs /mnt glusterfs defaults,_netdev,backupvolfile-server=localhost 0 0' >> /etc/fstab
sudo mount.glusterfs localhost:/gfs /mnt
sudo chown -R root:docker /mnt

I've installed and enabled the plugin on all nodes:

docker plugin install --alias glusterfs trajano/glusterfs-volume-plugin --grant-all-permissions --disable
docker plugin set glusterfs SERVERS=swarm0,swarm1,swarm2
docker plugin enable glusterfs

I've tried to deploy a service:

---
version: "3.4"

services:
  foo:
    image: alpine
    command: ping localhost
    networks:
      - net
    volumes:
      - vol1:/tmp

networks:
  net:
    driver: overlay

volumes:
  vol1:
    driver: glusterfs
    name: "gfs/vol1"

docker stack deploy -c gfs-test.yml gfs-test

I got these errors:

7eifltph80eo09u69uv06d0lg \_ gfs-test_foo.1 alpine:latest@sha256:2171658620155679240babee0a7714f6509fae66898db422ad803b951257db78 swarm2 Shutdown Rejected 8 minutes ago "VolumeDriver.Mount: error mounting gfs/vol1: exit status 1"

The docker volume exists:

docker volume inspect gfs/vol1
[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "glusterfs:latest",
        "Labels": {
            "com.docker.stack.namespace": "gfs-test"
        },
        "Mountpoint": "",
        "Name": "gfs/vol1",
        "Options": {},
        "Scope": "global",
        "Status": {
            "args": [
                "-s",
                "swarm0",
                "-s",
                "swarm1",
                "-s",
                "swarm2",
                "--volfile-id=gfs",
                "--subdir-mount=/vol1"
            ],
            "mounted": false
        }
    }
]

If I create a folder "vol1" in the volume folder on one of the nodes, it works fine:

docker stack deploy -c gfs-test.yml gfs-test
Creating network gfs-test_net
Creating service gfs-test_foo

7b3cm1fnuy3x8xudcl87mr788 gfs-test_foo.1 alpine:latest@sha256:2171658620155679240babee0a7714f6509fae66898db422ad803b951257db78 swarm1 Running Running about a minute ago

docker service inspect gfs-test_foo

...
                    "Mounts": [
                        {
                            "Type": "volume",
                            "Source": "gfs/vol1",
                            "Target": "/tmp",
                            "VolumeOptions": {
                                "Labels": {
                                    "com.docker.stack.namespace": "gfs-test"
                                },
                                "DriverConfig": {
                                    "Name": "glusterfs"
                                }
                            }
                        }
                    ],
...

Do I need to create the folder inside the volume beforehand, or is this a problem?

name Additional property name is not allowed

In my docker-compose.yml I have:

volumes:
  vol1:
    driver: glusterfs
    name: "gfs"

I'm using docker swarm.
docker stack deploy -c docker-compose.yml vol1-demo
I get this error:
name Additional property name is not allowed

Can anybody help me?
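As with the similar report earlier in this document, a likely cause is the Compose file format version: the top-level name key for volumes requires file format 3.4 or later, so a hedged fix is to declare:

version: "3.4"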

Passing special mount options through glusteropts option ( enable-ino32 )

Hi,

I'm setting up a CockroachDB cluster and am using this plugin for GlusterFS access. Everything worked great until I discovered that the DB won't start due to a missing mount option.

I have verified that this will work:

glusterfs -s 192.168.128.107 -s 192.168.128.240 -s 192.168.128.97 --enable-ino32=on   --volfile-id=db1 --volfile-server=192.168.128.107 /mnt

And then manually start a container with the folder attached as a volume:

docker run -ti -v /mnt:/mnt roach:v2

However, when I try this using docker-compose and this plugin it won't work.

This is how the volume is specified in the compose file:

volumes:
  codb-1:
     driver: glusterfs
     driver_opts:
            glusteropts: "-s 192.168.128.107 -s 192.168.128.240 -s 192.168.128.97 --enable-ino32=on   --volfile-id=db1 --volfile-server=192.168.128.107 "

The log messages I get on the host where this starts are:

Mar 28 11:13:05 imt3003worker-3-2 dockerd[9247]: time="2019-03-28T11:13:05Z" level=error msg="Entering go-plugins-helpers mountPath" plugin=c268b167dd63d1515e32b0020743eb2614414266c7aa4e5e8505cfda6b703885
Mar 28 11:13:05 imt3003worker-3-2 dockerd[9247]: time="2019-03-28T11:13:05Z" level=error msg="[-s 192.168.128.107 -s 192.168.128.240 -s 192.168.128.97 --enable-ino32=on   --volfile-id=db1 --volfile-server=192.168.128.107  /var/lib/docker-volumes/800abb5a25f11b755e27d019c3ad781644fca0ce14916efee4aee77a1f11ae13]" plugin=c268b167dd63d1515e32b0020743eb2614414266c7aa4e5e8505cfda6b703885
Mar 28 11:13:05 imt3003worker-3-2 dockerd[9247]: time="2019-03-28T11:13:05Z" level=info msg="Command output: Usage: glusterfs [OPTION...] --volfile-server=SERVER [MOUNT-POINT]" plugin=c268b167dd63d1515e32b0020743eb2614414266c7aa4e5e8505cfda6b703885
Mar 28 11:13:05 imt3003worker-3-2 dockerd[9247]: time="2019-03-28T11:13:05Z" level=info msg="  or:  glusterfs [OPTION...] --volfile=VOLFILE [MOUNT-POINT]" plugin=c268b167dd63d1515e32b0020743eb2614414266c7aa4e5e8505cfda6b703885
Mar 28 11:13:05 imt3003worker-3-2 dockerd[9247]: time="2019-03-28T11:13:05Z" level=info msg="Try `glusterfs --help' or `glusterfs --usage' for more information." plugin=c268b167dd63d1515e32b0020743eb2614414266c7aa4e5e8505cfda6b703885
Mar 28 11:13:05 imt3003worker-3-2 dockerd[9247]: time="2019-03-28T11:13:05Z" level=info plugin=c268b167dd63d1515e32b0020743eb2614414266c7aa4e5e8505cfda6b703885
Mar 28 11:13:05 imt3003worker-3-2 dockerd[9247]: time="2019-03-28T11:13:05.570778395Z" level=error msg="fatal task error" error="VolumeDriver.Mount: error mounting roach_codb-1: exit status 64" module=node/agent/taskmanager node.id=ovme1tww944ctcvltyv3eu4ab service.id=09gkviz967ljg4atnzadl4brk task.id=pgy4fel3e9kmv9zrr10iexxay

Also, the README uses glusterfsopts, but the code seems to refer to glusteropts, which appears to be working for me.

I understand that the glusterfs client used in the plugin (CentOS) is not the same as the one I'm using locally (Ubuntu). Any advice on how to proceed would be greatly appreciated.

Automated Builds

It would be really great if this was an automated build. I have done this many times with my projects and can assist in getting the build set up on Docker Hub.

GlusterFS Server Volume Plugin

The glusterfs-volume-plugin wraps a GlusterFS-FUSE client to connect to a GlusterFS server cluster. Instead of having the managed plugin just be a client, use it as the actual GlusterFS server.

This issue tracks the concept and high-level architecture, to assess the feasibility of such an endeavor.

The objective of the plugin is to abstract away brick and volume management while inside the swarm, so there will be limitations to simplify usage:

  • There is only one glusterfs pool for the swarm
  • There is only one gluster volume
  • All the bricks that are allocated will be given to the GlusterFS volume.
  • There is only one instance of the plugin per node, and it will use host networking, exposing the GlusterFS ports to the rest of the network.

Can't mount volume

I tried the following on a vanilla CentOS 7:

# remove any old version
yum -y remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine

# install it
yum install -y yum-utils   device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum -y install docker-ce docker-ce-cli containerd.io
systemctl enable --now docker

# enable plugin
plugin_alias='glusterfs-plugin'
servers='glusterfs0,glusterfs1,glusterfs2'

docker plugin install --alias $plugin_alias trajano/glusterfs-volume-plugin --grant-all-permissions --disable
docker plugin set $plugin_alias SERVERS="$servers"
docker plugin enable $plugin_alias

# create volume
docker volume create -d glusterfs-plugin:latest test

# try it
docker run -it -v test:/mnt alpine

It fails like this:

[root@glusterfs0 ~]# docker run -it -v test:/mnt alpine
docker: Error response from daemon: VolumeDriver.Mount: error mounting test: exit status 1.

And the logs say:

Apr 11 01:36:16 glusterfs0.ditas.mia.cloudsigma.com dockerd[19413]: time="2019-04-11T01:36:16Z" level=error msg="Entering go-plugins-helpers capabilitiesPath" plugin=8d4372842780942f912a8fe61ea2d31c2fc253eeba7a1f4f86968bba9a59063b
Apr 11 01:36:16 glusterfs0.ditas.mia.cloudsigma.com dockerd[19413]: time="2019-04-11T01:36:16Z" level=error msg="Entering go-plugins-helpers getPath" plugin=8d4372842780942f912a8fe61ea2d31c2fc253eeba7a1f4f86968bba9a59063b
Apr 11 01:36:16 glusterfs0.ditas.mia.cloudsigma.com dockerd[19413]: time="2019-04-11T01:36:16Z" level=error msg="Entering go-plugins-helpers capabilitiesPath" plugin=8d4372842780942f912a8fe61ea2d31c2fc253eeba7a1f4f86968bba9a59063b
Apr 11 01:36:16 glusterfs0.ditas.mia.cloudsigma.com dockerd[19413]: time="2019-04-11T01:36:16Z" level=error msg="Entering go-plugins-helpers getPath" plugin=8d4372842780942f912a8fe61ea2d31c2fc253eeba7a1f4f86968bba9a59063b
Apr 11 01:36:16 glusterfs0.ditas.mia.cloudsigma.com dockerd[19413]: time="2019-04-11T01:36:16Z" level=error msg="Entering go-plugins-helpers capabilitiesPath" plugin=8d4372842780942f912a8fe61ea2d31c2fc253eeba7a1f4f86968bba9a59063b
Apr 11 01:36:16 glusterfs0.ditas.mia.cloudsigma.com dockerd[19413]: time="2019-04-11T01:36:16Z" level=error msg="Entering go-plugins-helpers mountPath" plugin=8d4372842780942f912a8fe61ea2d31c2fc253eeba7a1f4f86968bba9a59063b
Apr 11 01:36:16 glusterfs0.ditas.mia.cloudsigma.com dockerd[19413]: time="2019-04-11T01:36:16Z" level=error msg="[-s glusterfs0 -s glusterfs1 -s glusterfs2 --volfile-id=test /var/lib/docker-volumes/d4781c258094fd2c306d11a9922bf0410b7b6d151bf50cf80f6949a50c97b044]" plugin=8d4372842780942f912a8fe61ea2d31c2fc253eeba7a1f4f86968bba9a59063b
Apr 11 01:36:16 glusterfs0.ditas.mia.cloudsigma.com dockerd[19413]: time="2019-04-11T01:36:16Z" level=info msg="Command output: " plugin=8d4372842780942f912a8fe61ea2d31c2fc253eeba7a1f4f86968bba9a59063b
Apr 11 01:36:16 glusterfs0.ditas.mia.cloudsigma.com dockerd[19413]: time="2019-04-11T01:36:16.141200212Z" level=error msg="Handler for POST /v1.39/containers/create returned error: VolumeDriver.Mount: error mounting test: exit status 1"
Apr 11 01:36:25 glusterfs0.ditas.mia.cloudsigma.com dhclient[3334]: DHCPREQUEST on eth0 to 10.20.241.23 port 67 (xid=0xc9099cc)

Also, check this out:

[root@glusterfs0 ~]# docker volume list
DRIVER                    VOLUME NAME
glusterfs-plugin:latest   test

[root@glusterfs0 ~]# docker volume inspect test
[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "glusterfs-plugin:latest",
        "Labels": {},
        "Mountpoint": "",
        "Name": "test",
        "Options": {},
        "Scope": "global",
        "Status": {
            "args": [
                "-s",
                "glusterfs0",
                "-s",
                "glusterfs1",
                "-s",
                "glusterfs2",
                "--volfile-id=test"
            ],
            "mounted": false
        }
    }
]

[root@glusterfs0 ~]# gluster volume list
No volumes present in cluster

[root@glusterfs0 ~]# gluster peer status
Number of Peers: 2

Hostname: glusterfs2
Uuid: ab2cdee4-7257-4238-8aa7-a9a3068f2398
State: Peer in Cluster (Connected)

Hostname: glusterfs1
Uuid: fb556123-992c-4021-a169-2eafe0a7ec00
State: Peer in Cluster (Connected)
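Note what the last two commands suggest: the plugin mounts with --volfile-id=test, but gluster volume list reports no volumes, so there is no Gluster volume named test to mount. A hedged fix (brick paths illustrative) is to create and start one first:

gluster volume create test replica 3 glusterfs0:/data/brick1/test glusterfs1:/data/brick1/test glusterfs2:/data/brick1/test
gluster volume start test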

error creating mount for propagated mount: no such file or directory

$ docker plugin install --alias trajano \
>   trajano/glusterfs-volume-plugin \
>   --grant-all-permissions --disable
latest: Pulling from trajano/glusterfs-volume-plugin
505f03dea96b: Download complete
Digest: sha256:1e9b3049ea727d32f7e4a4f205dc5cade46c3431ebb6ba8e83b571ffc3232323
Status: Downloaded newer image for trajano/glusterfs-volume-plugin:latest
Installed plugin trajano/glusterfs-volume-plugin
$ docker plugin set trajano SERVERS=gluster1
$ docker plugin enable trajano
Error response from daemon: error creating mount for propagated mount: no such file or directory

NFS Managed Volume Plugin

Although the centos-mounted-volume-plugin can do NFS, it would be nice if it didn't need as much configuration and came ready to go.

Cannot mount CIFS shares: permission denied

I have an Ubuntu machine with Docker and a NAS I'm trying to use for the storage, in this case for Plex. Here is my docker-compose.yml:

version: '3.4'
services:
  plex:
    container_name: plex
    image: plexinc/pms-docker:latest
    restart: unless-stopped
    ports:
      - '32400:32400/tcp'
      - '32469:32469/tcp'
      - '1900:1900/udp'
      - '32410:32410/udp'
      - '32412:32412/udp'
      - '32413:32413/udp'
      - '32414:32414/udp'
    environment:
      - TZ=America/New_York
      - PLEX_CLAIM=$PLEX_CLAIM
    volumes:
      - 'plex_config:/config'
      - 'plex_media:/media'
    tmpfs: /transcode

volumes:
   plex_config:
   plex_media:
      driver: cifs
      name: "192.168.2.9/media"

I have a credentials file for the share placed on the docker host at /root/credentials/192.168.2.9@media that looks like this:

username=plex
password=<password>
domain=WORKGROUP # I don't use a domain, so I assume this is what to use? or can it just be left blank in this case?

However, when I run docker-compose up I get the following error:

ERROR: for plex  Cannot start service plex: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/docker/5000.5000/plugins/39cedb6cd9c9cbdc785ce4340bc31a0d85bba919fb5add30a08bf3bd8ab6b80f/propagated-mount/cd368dad7818f77a4c07bf22e59898a62eaa293416b03897cb6491aaa7579393\\\" to rootfs \\\"/var/lib/docker/5000.5000/overlay2/51889ee2bcb56a491ad370b245b871799bac66fa390b4443ad7cd7ab33e86e9b/merged\\\" at \\\"/media\\\" caused \\\"stat /var/lib/docker/5000.5000/plugins/39cedb6cd9c9cbdc785ce4340bc31a0d85bba919fb5add30a08bf3bd8ab6b80f/propagated-mount/cd368dad7818f77a4c07bf22e59898a62eaa293416b03897cb6491aaa7579393: permission denied\\\"\"": unknown
ERROR: Encountered errors while bringing up the project.

I can confirm that the credentials in the file have the appropriate permissions, as I tested logging into the share from my Windows laptop with them and can read/write/etc. I am using user namespace remapping, with a uid and gid of 5000, as seen in the error message.

Could anyone possibly help me out with this?
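One hedged avenue, given the uid/gid 5000 remapping: an earlier report in this document passes ownership through cifsopts (cifsopts=uid=1000), and uid=/gid= are standard cifs mount options, so forcing the mount ownership to the remapped IDs might help (values illustrative):

volumes:
   plex_media:
      driver: cifs
      driver_opts:
         cifsopts: uid=5000,gid=5000
      name: "192.168.2.9/media"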
