
docker-volume-gluster's Introduction

docker-volume-gluster

Use GlusterFS as a backend for Docker volumes.

Status: proof of concept (working)

The GlusterFS CLI runs inside the plugin container, so the plugin's only host dependency is FUSE.

Docker plugin (new & easy method)

docker plugin install sapk/plugin-gluster
docker volume create --driver sapk/plugin-gluster --opt voluri="<volumeserver>:<volumename>" --name test
docker run -v test:/mnt --rm -ti ubuntu

Create and Mount volume

docker volume create --driver sapk/plugin-gluster --opt voluri="<volumeserver>,<otherserver>,<otheroptionalserver>:<volumename>" --name test
docker run -v test:/mnt --rm -ti ubuntu

Docker-compose

volumes:
  some_vol:
    driver: sapk/plugin-gluster
    driver_opts:
      voluri: "<volumeserver>:<volumename>"

Additional docker plugin config

docker plugin disable sapk/plugin-gluster

docker plugin set sapk/plugin-gluster DEBUG=1 #Activate --verbose
docker plugin set sapk/plugin-gluster MOUNT_UNIQ=1 #Activate --mount-uniq

docker plugin enable sapk/plugin-gluster

Legacy plugin installation

Docker versions 1.12 and below do not support the managed plugin system; the same applies when the plugin is not installed via docker plugin install. Docker's new plugin system is the preferred way to add drivers and plugins: the plugin is just an image pulled from a registry that contains the executable and the needed configuration files. On Docker versions above 1.12 you can run both legacy and new plugins, but be aware that legacy plugins will not show up in docker plugin ls. They are listed instead under Plugins in docker info.
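A quick way to confirm that a legacy plugin is registered is to list the volume drivers the daemon reports (the gluster driver should appear next to local):

docker info --format '{{.Plugins.Volume}}'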

That way, the driver's name will be just gluster (in both the CLI and Compose environments):

Build

make

Start daemon

./docker-volume-gluster daemon
OR in a docker container
docker run -d --device=/dev/fuse:/dev/fuse --cap-add=SYS_ADMIN --cap-add=MKNOD  -v /run/docker/plugins:/run/docker/plugins -v /var/lib/docker-volumes/gluster:/var/lib/docker-volumes/gluster:shared sapk/docker-volume-gluster

For more advanced parameters: ./docker-volume-gluster --help OR ./docker-volume-gluster daemon --help

Run the volume driver daemon to listen for mount requests

Usage:
  docker-volume-gluster daemon [flags]

Flags:
  -h, --help         help for daemon
      --mount-uniq   Set mountpoint based on definition and not the name of volume

Global Flags:
  -b, --basedir string   Mounted volume base directory (default "/var/lib/docker-volumes/gluster")
  -v, --verbose          Turns on verbose logging

Create and Mount volume

docker volume create --driver gluster --opt voluri="<volumeserver>:<volumename>" --name test
docker run -v test:/mnt --rm -ti ubuntu

Performance

As tested here, this plugin provides the same performance as a GlusterFS volume mounted on the host via a Docker bind mount.

Inspired by:

docker-volume-gluster's People

Contributors

delissonjunio, nicodmf, sapk


docker-volume-gluster's Issues

Performance

I quickly did a performance test:

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=container-ssd-centos --filename=test3 --directory=/mnt/ --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
container-ssd-centos: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.2.10
Starting 1 process
container-ssd-centos: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m(1)] [100.0% done] [5920KB/2020KB/0KB /s] [1480/505/0 iops] [eta 00m:00s]
container-ssd-centos: (groupid=0, jobs=1): err= 0: pid=573: Wed Dec  6 23:34:37 2017
  read : io=3071.7MB, bw=5191.6KB/s, iops=1297, runt=605871msec
  write: io=1024.4MB, bw=1731.3KB/s, iops=432, runt=605871msec
  cpu          : usr=1.31%, sys=4.32%, ctx=1049638, majf=0, minf=8
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=786347/w=262229/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: io=3071.7MB, aggrb=5191KB/s, minb=5191KB/s, maxb=5191KB/s, mint=605871msec, maxt=605871msec
  WRITE: io=1024.4MB, aggrb=1731KB/s, minb=1731KB/s, maxb=1731KB/s, mint=605871msec, maxt=605871msec

For reference, I ran the same test, this time against a GlusterFS volume mounted locally on the host.

docker run -ti --rm -v /mnt/gv0/:/mnt/gv0 ubuntu bash
apt-get update && apt-get install -y fio

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=container-ssd-centos --filename=test3 --directory=/mnt/ --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
container-ssd-centos: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.2.10
Starting 1 process
container-ssd-centos: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [m(1)] [100.0% done] [65504KB/21732KB/0KB /s] [16.4K/5433/0 iops] [eta 00m:00s]
container-ssd-centos: (groupid=0, jobs=1): err= 0: pid=556: Wed Dec  6 23:39:35 2017
  read : io=3071.7MB, bw=68224KB/s, iops=17055, runt= 46104msec
  write: io=1024.4MB, bw=22751KB/s, iops=5687, runt= 46104msec
  cpu          : usr=10.48%, sys=46.19%, ctx=1073291, majf=0, minf=9
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=786347/w=262229/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: io=3071.7MB, aggrb=68223KB/s, minb=68223KB/s, maxb=68223KB/s, mint=46104msec, maxt=46104msec
  WRITE: io=1024.4MB, aggrb=22751KB/s, minb=22751KB/s, maxb=22751KB/s, mint=46104msec, maxt=46104msec

The difference is 68 MB/s versus 5 MB/s, so roughly a factor of 10 between the two results.

Do you have an idea if and how the driver can be improved so it is on par with a FUSE-mounted GlusterFS volume?

Is it supposed to automatically create the specified subdir?

I was trying this tutorial: https://sysadmins.co.za/container-persistent-storage-for-docker-swarm-using-a-glusterfs-volume-plugin/

My GlusterFS version is 3.13.2. I'm not sure if it is just a permissions issue, but when I try this in my compose file:

volumes:
  vol1:
    driver: glusterfs
    name: "gfs/vol1"

The container doesn't have the gluster mount, but if I do the following:

volumes:
  vol1:
    driver: glusterfs
    name: "gfs"

It works fine, but the data is put into the root of the volume. The first option also works if I create the "vol1" directory before running the stack, which is why I think it may just be a permissions issue. What should the permissions be at the filesystem level on the gluster volume mount?
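If the plugin does not create the sub-directory itself, a hedged workaround (assuming a host with the glusterfs client installed; gfs and vol1 are the names from the compose files above) is to mount the volume once by hand and create the directory with the ownership the containers expect:

mount -t glusterfs <volumeserver>:gfs /mnt/gfs
mkdir -p /mnt/gfs/vol1
chown 1000:1000 /mnt/gfs/vol1   # adjust to the UID/GID your container runs as
umount /mnt/gfs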

docker volume rm fails on an unmounted volume

  • Plugin version (or commit ref) : v1.01
  • Docker version : 18.03
  • Plugin type : legacy
  • Operating system: Linux

Description

  1. Create the volume
  2. Create a container that uses the volume
  3. stop the container
  4. umount the glusterfs mount
  5. docker volume rm fails with error code 32 in the logs because the mount point does not exist.

The scenario happens as well when you reboot the cluster.

I think a quick fix would be to check whether the path is actually mounted before calling umount during an rm (see the sketch below).

docker volume rm -f still works, though.
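A minimal sketch of that quick fix in shell form, assuming the default basedir (the "test" path below is hypothetical):

MNT=/var/lib/docker-volumes/gluster/test   # hypothetical mountpoint
if mountpoint -q "$MNT"; then
    umount "$MNT"                          # only unmount when actually mounted
fi
rmdir "$MNT" 2>/dev/null || true           # then drop the now-empty directory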

Add choice on naming of mountpoint

Allow using the volume definition (e.g. voluri="<volumeserver>:<volumename>") instead of the volume name for the mountpoint. This would limit mounts to a single mountpoint when different docker-compose projects (which prefix volumes with the service name) mount the same volume.

containers/create

  • Plugin version (or commit ref) :sapk/plugin-gluster:latest
  • Docker version :19.03.2
  • Plugin type : managed
  • Operating system: centos7 64bit

Description

[root@docker-node2 data]# docker volume create --driver sapk/plugin-gluster --opt voluri="docker-node2:data_share" --name mysql_data
mysql_data
[root@docker-node2 data]# docker run -v mysql_data:/mnt -ti nginx
docker: Error response from daemon: VolumeDriver.Mount: exit status 1.
[root@docker-node2 data]# mount -t glusterfs docker-node2:data_share /data/client
[root@docker-node2 data]#

Mounting via docker run fails with exit status 1, while mounting manually with the system command succeeds: mount -t glusterfs docker-node2:data_share /data/client

Logs

Sep 23 07:42:42 docker-node2 kernel: XFS (dm-3): Mounting V4 Filesystem
Sep 23 07:42:42 docker-node2 kernel: XFS (dm-3): Ending clean mount
Sep 23 07:42:42 docker-node2 kernel: XFS (dm-3): Unmounting Filesystem
Sep 23 07:42:42 docker-node2 dockerd: time="2019-09-23T07:42:42+01:00" level=error msg="2019/09/23 06:42:42 Entering go-plugins-helpers capabilitiesPath" plugin=4aeb7d50a3ea0e89b9155a7527f2dd3637ff810bf7bab0cf6f8ecb9da1c47128
Sep 23 07:42:42 docker-node2 dockerd: time="2019-09-23T07:42:42+01:00" level=error msg="2019/09/23 06:42:42 Entering go-plugins-helpers getPath" plugin=4aeb7d50a3ea0e89b9155a7527f2dd3637ff810bf7bab0cf6f8ecb9da1c47128
Sep 23 07:42:43 docker-node2 kernel: XFS (dm-3): Mounting V4 Filesystem
Sep 23 07:42:43 docker-node2 kernel: XFS (dm-3): Ending clean mount
Sep 23 07:42:43 docker-node2 dockerd: time="2019-09-23T07:42:43+01:00" level=error msg="2019/09/23 06:42:43 Entering go-plugins-helpers capabilitiesPath" plugin=4aeb7d50a3ea0e89b9155a7527f2dd3637ff810bf7bab0cf6f8ecb9da1c47128
Sep 23 07:42:43 docker-node2 dockerd: time="2019-09-23T07:42:43+01:00" level=error msg="2019/09/23 06:42:43 Entering go-plugins-helpers getPath" plugin=4aeb7d50a3ea0e89b9155a7527f2dd3637ff810bf7bab0cf6f8ecb9da1c47128
Sep 23 07:42:43 docker-node2 dockerd: time="2019-09-23T07:42:43+01:00" level=error msg="2019/09/23 06:42:43 Entering go-plugins-helpers capabilitiesPath" plugin=4aeb7d50a3ea0e89b9155a7527f2dd3637ff810bf7bab0cf6f8ecb9da1c47128
Sep 23 07:42:43 docker-node2 dockerd: time="2019-09-23T07:42:43+01:00" level=error msg="2019/09/23 06:42:43 Entering go-plugins-helpers mountPath" plugin=4aeb7d50a3ea0e89b9155a7527f2dd3637ff810bf7bab0cf6f8ecb9da1c47128
Sep 23 07:42:43 docker-node2 kernel: XFS (dm-3): Unmounting Filesystem
Sep 23 07:42:43 docker-node2 dockerd: time="2019-09-23T07:42:43.907368280+01:00" level=error msg="Handler for POST /v1.40/containers/create returned error: VolumeDriver.Mount: exit status 1"

I am unable to remove the volume even with -f

  • Plugin version (or commit ref) :latest
  • Docker version :18.03
  • Plugin type : managed
  • Operating system:

Description

I am unable to remove a volume, even with the -f flag.

Logs

Here is what happens

trajano@noriko MINGW64 /d/p/trajano.net/jenkins (master)
$ docker volume rm -f test
test

trajano@noriko MINGW64 /d/p/trajano.net/jenkins (master)
$ docker volume ls
DRIVER                       VOLUME NAME
local                        722a2fa4d4e0d5bff53257b822e0eb175d092b1270c90b8b6c00215a5a1dcce1
local                        957185ba07393749e458f79bc6cada77f6447b1cabed1993ce95a8e877e984b5
local                        ddf5beffd17c3ed35dc4d2bfae51d0ef2c89ddfdd009890b73e8ada21827042b
sapk/plugin-gluster:latest   jenkins_jenkins_home
sapk/plugin-gluster:latest   jenkins_nexus-data
sapk/plugin-gluster:latest   test

mount error

In my gluster server:

[root@d71ef829f16e /]#  gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: 008f9aea-2d1b-4a1b-8c4a-ea430d84bac3
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 10.1.139.34:/gluster/brick2
Brick2: 10.1.139.35:/gluster/brick2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@d71ef829f16e /]#

and in my docker node:

docker plugin install sapk/plugin-gluster
docker volume create --driver gluster --opt voluri="10.1.139.35:008f9aea-2d1b-4a1b-8c4a-ea430d84bac3" --name test
test
docker run -v test:/mnt --rm -ti ubuntu

But it could not mount when running docker run -v test:/mnt --rm -ti ubuntu:

root@osd1:~# docker run -v test:/mnt --rm -ti ubuntu
docker: Error response from daemon: VolumeDriver.Mount: exit status 1.
See 'docker run --help'.
root@osd1:~#
root@docker0:~# docker plugin ls
ID                  NAME                         DESCRIPTION                   ENABLED
bed64a4a4914        sapk/plugin-gluster:latest   GlusterFS plugin for Docker   true
root@docker0:~# docker volume ls
DRIVER                       VOLUME NAME
local                        07cea292c645234f0f96293899e135450771e9056d133302197ffb58c3dc8d0a
sapk/plugin-gluster:latest   test

Is there anything wrong?

Can't remove volume

I can't remove a volume. It says it is used by a container, but this is not true: the container is removed and docker ps -a returns an empty list.

docker volume rm portainer_portainer-data 
Error response from daemon: unable to remove volume: remove portainer_portainer-data: VolumeDriver.Remove: volume portainer_portainer-data is currently used by a container

This is the result from the logs.

Dez 10 13:20:54 cluser-node-1 dockerd[15464]: time="2017-12-10T13:20:54.131124932Z" level=error msg="plugin not found"
Dez 10 13:20:55 cluser-node-1 dockerd[15464]: time="2017-12-10T13:20:55.705197900Z" level=error msg="Handler for DELETE /v1.32/volumes/portainer_portainer-data returned error: unable to remove volume: remove portainer_portainer-data: VolumeDriver.Rem...used by a container"

This is how I added the plugin

docker plugin install --grant-all-permissions --alias glusterfs sapk/plugin-gluster

Plugin not appearing for Docker

Hello! I am trying to run the latest release on my docker installation, with no luck.
I always get the following error:

ERROR: Volume oracle-data specifies nonexistent driver sapk/plugin-gluster

Some maybe useful data:

# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 18
Server Version: 17.10.0-ce
Storage Driver: overlay
 Backing Filesystem: extfs
 Supports d_type: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: cifs gluster local
 Network: bridge host ipvlan macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: vq3vsc93yhaah3pvctwxi6kl6
 Is Manager: true
 ClusterID: pbwe7pxb0pg5xk6lcwbkdsqmz
 Managers: 3
 Nodes: 3
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Autolock Managers: false
 Root Rotation In Progress: false
 Node Address: 169.60.192.165
 Manager Addresses:
  xxxxx
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: 0351df1c5a66838d0c392b4ac4cf9450de844e2d
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-693.5.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.51GiB
Name: xxxxx
ID: CX76:BIYY:JAQK:LPDX:RVXZ:MF5W:XZWW:KEWT:HTAE:7NVR:EDCG:FHIW
Docker Root Dir: /opt/docker-disk
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 41
 Goroutines: 177
 System Time: 2017-11-13T19:31:35.357063958-06:00
 EventsListeners: 0
Registry: https://index.docker.io/v1/
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
# docker plugin ls
ID                  NAME                DESCRIPTION         ENABLED
# journalctl -u docker-volume-glusterfs
Nov 13 19:23:44 vm1 systemd[1]: Starting Docker GlusterFS Plugin...
Nov 13 19:23:45 vm1 docker-volume-gluster[32198]: time="2017-11-13T19:23:45-06:00" level=warning msg="No persistence file found, I will start with a empty list of volume.Config File "persistence" Not Found in "[/etc/docker-volumes/gluster]""
Nov 13 19:23:45 vm1 docker-volume-gluster[32198]: time="2017-11-13T19:23:45-06:00" level=debug msg="&{{{0 0} 0 0 0 0} /var/lib/docker-volumes/gluster  false 0xc4200ca780 map[] map[]}"
Nov 13 19:23:45 vm1 docker-volume-gluster[32198]: time="2017-11-13T19:23:45-06:00" level=debug msg="&{0xc42005a540 {0xc420108120}}"
Nov 13 19:24:09 vm1 docker-volume-gluster[32198]: time="2017-11-13T19:24:09-06:00" level=debug msg="Entering Get: name: svc1_oracle-data"

mount gluster volume in docker service

  • Plugin version (or commit ref) : latest
    docker plugin ls
    ID NAME DESCRIPTION ENABLED
    96d24f765378 sapk/plugin-gluster:latest GlusterFS plugin for Docker true
    (Digest: sha256:776c684bad6b9d8600444c21ad40dd3e8d9987008f5a6ddc21f3a7379cf4248d)

  • Docker version : 17.12.0-ce
    docker info
    Containers: 10
    Running: 1
    Paused: 0
    Stopped: 9
    Images: 12
    Server Version: 17.12.0-ce
    Storage Driver: overlay2
    Backing Filesystem: xfs
    Supports d_type: true
    Native Overlay Diff: true
    Logging Driver: json-file
    Cgroup Driver: cgroupfs
    Plugins:
    Volume: local
    Network: bridge host macvlan null overlay
    Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
    Swarm: active
    NodeID: 78d8gy48o8d2oa8r16f4oxlsl
    Is Manager: true
    ClusterID: z85ihxm7afsllcxaiitkr1ne8
    Managers: 1
    Nodes: 1
    Orchestration:
    Task History Retention Limit: 5
    Raft:
    Snapshot Interval: 10000
    Number of Old Snapshots to Retain: 0
    Heartbeat Tick: 1
    Election Tick: 3
    Dispatcher:
    Heartbeat Period: 5 seconds
    CA Configuration:
    Expiry Duration: 3 months
    Force Rotate: 0
    Autolock Managers: false
    Root Rotation In Progress: false
    Node Address: 192.168.0.45
    Manager Addresses:
    192.168.0.45:2377
    Runtimes: runc
    Default Runtime: runc
    Init Binary: docker-init
    containerd version: 89623f28b87a6004d4b785663257362d1658a729
    runc version: b2567b37d7b75eb4cf325b77297b140ea686ce8f
    init version: 949e6fa
    Security Options:
    seccomp
    Profile: default
    Kernel Version: 3.10.0-693.11.1.el7.x86_64
    Operating System: CentOS Linux 7 (Core)
    OSType: linux
    Architecture: x86_64
    CPUs: 2
    Total Memory: 992.3MiB
    Name: lease-01.dc01.adsolutions
    ID: IHRB:OXLB:FH7Q:KAYH:QVP6:73N5:SVWP:S34P:V3CT:WMSY:3UG2:NBEP
    Docker Root Dir: /var/lib/docker
    Debug Mode (client): false
    Debug Mode (server): false
    Registry: https://index.docker.io/v1/
    Labels:
    Experimental: false
    Insecure Registries:
    127.0.0.0/8
    Live Restore Enabled: false

WARNING: bridge-nf-call-ip6tables is disabled

  • Plugin type : managed
  • Operating system: CentOS 7 3.10.0-693.11.1.el7.x86_64

Description

I'm trying to run the plugin as a service:
docker service create --replicas 1 --name testing --mount type=volume,volume-opt="voluri=10.0.0.1:/docker",volume-driver=sapk/plugin-gluster,dst=/testing registry
(As a side note: adding multiple hosts, as in voluri=10.0.0.1,10.0.0.2:/docker, makes the docker CLI fail to parse the comma; see the escaping sketch below.)

Running the same command locally as a plain docker container works fine:
docker volume create --driver sapk/plugin-gluster --opt voluri="10.0.0.1,10.0.0.2:/docker" --name test
docker run -v test:/testing --rm -ti registry

I've also tried accessing the local volume from the service with:
docker service create --replicas 1 --name testing --mount type=volume,src=test,volume-driver=sapk/plugin-gluster,dst=/testing registry
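The comma problem has a documented workaround: docker parses --mount as CSV, so a field that itself contains commas can be wrapped in double quotes inside a single-quoted argument. A hedged sketch for the multi-server case above:

docker service create --replicas 1 --name testing \
  --mount 'type=volume,dst=/testing,volume-driver=sapk/plugin-gluster,"volume-opt=voluri=10.0.0.1,10.0.0.2:/docker"' \
  registry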

Logs

The logs for creating the local docker container and the service look about the same:
Entering go-plugins-helpers mountPath" plugin=35970f8dcba65e81c467a5ded9daff8998c501cc042317676d4e81517e1357a2
"level=debug msg="Entering Mount: &{docker 4d7acdf887dea92f3e0290d6c5832018eaa2c9a70a958cff1626fa1a8f23e97f}"" plugin=35970f8dcba65e81c467a5ded9daff8998c501cc042317676d4e81517e1357a2
"level=debug msg="Entering MountExist: name: docker"" plugin=35970f8dcba65e81c467a5ded9daff8998c501cc042317676d4e81517e1357a2
"level=debug msg="Volume found: &{ docker %!s(int=0)}"" plugin=35970f8dcba65e81c467a5ded9daff8998c501cc042317676d4e81517e1357a2
"level=debug msg="Mount found: &{/var/lib/docker-volumes/gluster/docker %!s(int=0)}"" plugin=35970f8dcba65e81c467a5ded9daff8998c501cc042317676d4e81517e1357a2

Any clue why services refuse to mount?

Fix tests

Needs a DNS name for the docker containers running the gluster nodes.

Gluster Volume per Container or Shared Gluster Volume

I manually created a gluster volume gv0. I had the expectation that, when using the plugin, I could use this single gluster volume gv0 for multiple containers.

I guess it is not intended to be used this way (or I can't find an alternative), because the data is stored in the root folder of the gluster volume gv0/. I was actually expecting it in gv0/portainer-data/...

So is it intended that each container should have its own gluster volume?

volumes:
  portainer-data:
    driver: glusterfs
    driver_opts:
      voluri: "localhost:gv0"

Feature: creation of sub directory into glusterfs volume

@sapk
Is it possible to add a feature (e.g. an additional parameter to volume create) to allow creating a sub-directory inside a GlusterFS volume, so we can use one volume for multiple containers (lots of sub-folders)?

That would be better than creating lots of volumes (one per container) in dense environments.

The Gluster CLI already allows managing directory quotas, even for folders that are not yet created; a sketch follows.
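For reference, a hedged sketch of the Gluster quota commands this refers to (gv0 and /containerA are placeholders):

gluster volume quota gv0 enable
gluster volume quota gv0 limit-usage /containerA 10GB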

Create volume doesn't work ... ?

I may be misunderstanding what happens here
I installed the plugin and ran the following

docker volume create --driver sapk/plugin-gluster --opt voluri="cluster-server-1-data:myvol1" --name test1

This creates a volume called test1:

# docker volume ls
DRIVER                       VOLUME NAME
sapk/plugin-gluster:latest   test1

However, on the gluster server:

# gluster volume status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick cluster-server-1-data:/glusterstore/b
rick1/vol1                                  49152     0          Y       523  
Brick cluster-server-2-data:/glusterstore/b
rick1/vol1                                  49152     0          Y       616  
Brick cluster-server-3-data:/glusterstore/b
rick1/vol1                                  49152     0          Y       493  
Self-heal Daemon on localhost               N/A       N/A        Y       544  
Self-heal Daemon on cluster-server-3-data   N/A       N/A        Y       514  
Self-heal Daemon on cluster-server-2-data   N/A       N/A        Y       637  
 
Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks

where gv0 is a pre-existing volume. I expected to see a new volume.

Of course, attempting to run a container with the mounted volume causes an error.

Creating a docker volume against gv0 works as expected and mounts successfully.

Am I misunderstanding something or is there an error somewhere?

Also, I set DEBUG=1 but can't work out how to view the logging (I'm not familiar with Go).
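For a managed plugin, the daemon forwards the plugin's output to its own logs, tagged with the plugin ID (as the log excerpts elsewhere in this tracker show), so one way to follow the DEBUG output, assuming a systemd host, is:

PLUGIN_ID=$(docker plugin inspect -f '{{.Id}}' sapk/plugin-gluster)
journalctl -u docker.service -f | grep "$PLUGIN_ID"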

Fail fast on errors during mount

We have a hyper-converged system that runs Gluster and Swarm on the same node.
so we are mounting the volumes like this "voluri": "localhost:gv-portainer"

When there is an error in the gluster cluster, e.g. the Gluster service isn't started, it is still possible to spin up a container that mounts a gluster volume. It is impossible to know whether a volume was really mounted or not, because in both cases the application runs. One only sees in the application itself that the data is missing.

It would be a good idea to fail fast if a volume can't be mounted, so the container never comes up and can be addressed accordingly.

I am not sure if that is related to #18.
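Until the driver fails fast, a possible stopgap (a hedged sketch, assuming the image ships grep and a /proc filesystem, and that the app mounts the volume at /data; the image and volume names are hypothetical) is to make the container report unhealthy when the target path is not a real mountpoint:

docker run -d --name app -v gv-portainer:/data \
  --health-cmd "grep -q ' /data ' /proc/mounts" \
  --health-interval 30s \
  myimage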

Volume missing after reboot

I created a volume on both Docker hosts that connects to a gluster cluster hosted on 2 other servers with the following command:

docker volume create --driver sapk/plugin-gluster --opt voluri="gluster.mydomain.com:volume-app1" --name volume-app1

I had a Docker host lock up yesterday and had to hard-reboot the VM. When it came back up, docker volume ls showed the volume with the local driver instead of sapk/plugin-gluster:latest as on the other host, so I removed the volume from the troubled host and recreated it. It then connected to gluster again, shows the correct data, and all containers that rely on the volume work correctly.

How do I make the docker volume persist across reboots?
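As a first check after a reboot, you can confirm which driver the volume is bound to before any container tries to use it:

docker volume inspect --format '{{.Driver}}' volume-app1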

Cannot "docker volume rm" volumes created with sapk/docker-volume-gluster

  • Plugin version (or commit ref) : v1.0.7-3
  • Docker version : Docker version 17.06.2-ee-8, build 4e8ed51
  • Plugin type : legacy/managed
  • Operating system: Linux testhost 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux

I created a docker volume using the docker gluster plugin. After volume creation, the volume could not be deleted. Disabling, removing, and reinstalling the plugin ultimately removed the volume. The commands I ran to demonstrate the problem follow:

$ docker volume ls
DRIVER VOLUME NAME
$ docker volume create --driver sapk/plugin-gluster --opt voluri=10.10.10.3:testvol,10.10.10.4:testvol,10.10.10.5:testvol --name testvol
testvol
$ docker volume ls
DRIVER VOLUME NAME
sapk/plugin-gluster:latest testvol
$ docker volume rm testvol
testvol
$ docker volume ls
DRIVER VOLUME NAME
sapk/plugin-gluster:latest testvol
$ docker volume prune --force
Total reclaimed space: 0B
$ docker volume ls
DRIVER VOLUME NAME
sapk/plugin-gluster:latest testvol
$ docker plugin disable sapk/plugin-gluster
sapk/plugin-gluster
$ docker volume ls
DRIVER VOLUME NAME
$ docker plugin enable sapk/plugin-gluster
sapk/plugin-gluster
$ docker volume ls
DRIVER VOLUME NAME
sapk/plugin-gluster:latest testvol
$ docker plugin disable sapk/plugin-gluster
sapk/plugin-gluster
$ docker plugin rm sapk/plugin-gluster
sapk/plugin-gluster
$ docker volume ls
DRIVER VOLUME NAME
$ docker plugin install sapk/plugin-gluster
Plugin "sapk/plugin-gluster" is requesting the following privileges:
 - network: [host]
 - device: [/dev/fuse]
 - capabilities: [CAP_SYS_ADMIN]
Do you grant the above permissions? [y/N] y
latest: Pulling from sapk/plugin-gluster
Digest: sha256:d19c1a17316a00f14bc81b6396afe10d0d43aa5f21a38d08819bb17b9d93e96c
Status: Downloaded newer image for sapk/plugin-gluster:latest
Installed plugin sapk/plugin-gluster
$ docker volume ls
DRIVER VOLUME NAME
$ uname -a
Linux hostname 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
$ docker --version
Docker version 17.06.2-ee-8, build 4e8ed51
$ docker plugin inspect sapk/plugin-gluster
[
    {
        "Config": {
            "Args": {
                "Description": "Arguments to be passed to the plugin",
                "Name": "args",
                "Settable": [
                    "value"
                ],
                "Value": []
            },
            "Description": "GlusterFS plugin for Docker",
            "DockerVersion": "18.03.0-ce",
            "Documentation": "https://docs.docker.com/engine/extend/plugins/",
            "Entrypoint": [
                "/usr/bin/docker-volume-gluster",
                "daemon"
            ],
            "Env": [
                {
                    "Description": "",
                    "Name": "DEBUG",
                    "Settable": [
                        "value"
                    ],
                    "Value": "0"
                },
                {
                    "Description": "",
                    "Name": "MOUNT_UNIQ",
                    "Settable": [
                        "value"
                    ],
                    "Value": "0"
                }
            ],
            "Interface": {
                "Socket": "gluster.sock",
                "Types": [
                    "docker.volumedriver/1.0"
                ]
            },
            "IpcHost": false,
            "Linux": {
                "AllowAllDevices": false,
                "Capabilities": [
                    "CAP_SYS_ADMIN"
                ],
                "Devices": [
                    {
                        "Description": "",
                        "Name": "",
                        "Path": "/dev/fuse",
                        "Settable": null
                    }
                ]
            },
            "Mounts": null,
            "Network": {
                "Type": "host"
            },
            "PidHost": false,
            "PropagatedMount": "/var/lib/docker-volumes/gluster",
            "User": {},
            "WorkDir": "",
            "rootfs": {
                "diff_ids": [
                    "sha256:afd8ae1982e66fda975ab2a3389d596878bb7ffa16e56ac85427be1387078068"
                ],
                "type": "layers"
            }
        },
        "Enabled": true,
        "Id": "054913b9a430435062b942e60344f29f5ac6af943a5519e2de40ef80ea6e174c",
        "Name": "sapk/plugin-gluster:latest",
        "PluginReference": "docker.io/sapk/plugin-gluster:latest",
        "Settings": {
            "Args": [],
            "Devices": [
                {
                    "Description": "",
                    "Name": "",
                    "Path": "/dev/fuse",
                    "Settable": null
                }
            ],
            "Env": [
                "DEBUG=0",
                "MOUNT_UNIQ=0"
            ],
            "Mounts": []
        }
    }
]
$

Logs
Plugin daemon log:
/var/lib/docker/plugins/feb488a43c8777d985a2ab8bcf3afa5bdbc8515da1b4581aab0888bb4368bb3a/rootfs/usr/bin/docker-volume-gluster daemon --verbose
DEBU[0000] Debug mode on
DEBU[0000] Init gluster driver at /var/lib/docker-volumes/gluster, UniqName: false
WARN[0000] No persistence file found, I will start with a empty list of volume.Config File "persistence" Not Found in "[/etc/docker-volumes/gluster]"
DEBU[0000] &{{{0 0} 0 0 0 0} /var/lib/docker-volumes/gluster false 0xc42012c780 map[] map[]}
DEBU[0000] &{0xc4200a48c0 {0xc42001fcb0}}
2018/06/07 11:32:34 Entering go-plugins-helpers listPath
DEBU[0063] Entering List
2018/06/07 11:33:57 Entering go-plugins-helpers listPath
DEBU[0146] Entering List
2018/06/07 11:37:01 Entering go-plugins-helpers listPath
DEBU[0330] Entering List
2018/06/07 11:37:15 Entering go-plugins-helpers listPath
DEBU[0343] Entering List
2018/06/07 11:37:35 Entering go-plugins-helpers listPath
DEBU[0364] Entering List
2018/06/07 11:37:57 Entering go-plugins-helpers listPath
DEBU[0386] Entering List
2018/06/07 11:38:22 Entering go-plugins-helpers listPath
DEBU[0411] Entering List
2018/06/07 11:40:46 Entering go-plugins-helpers listPath
DEBU[0555] Entering List

VolumeDriver.Mount: exit status 1

  • Plugin version (or commit ref) : Latest
  • Docker version : 17.12.0-ce
  • Plugin type : Glusterfs Plugin
  • Operating system: Ubuntu 16.04.3 LTS

Description

docker volume inspect logs
[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "sapk/plugin-gluster:latest",
        "Labels": {},
        "Mountpoint": "/var/lib/docker-volumes/gluster/logs",
        "Name": "logs",
        "Options": {
            "voluri": "node1:dev_logs"
        },
        "Scope": "local",
        "Status": {
            "TODO": "List"
        }
    }
]
 docker run -v logs:/mnt -it ubuntu
docker: Error response from daemon: VolumeDriver.Mount: exit status 1.
...

Logs


Jan 10 18:34:09 node1 dockerd[1076]: time="2018-01-10T18:34:09.589431701Z" level=error msg="fatal task error" error="starting container failed: error while mounting volume '/var/lib/docker/plugins/4bc1d7f4f641c2479f60f850f5b0fbaa3b293e67bb65e0c7c4ba5c66d8780ecc/rootfs': VolumeDriver.Mount: exit status 1" module=node/agent/taskmanager node.id=pbjcny9n4k1h2dlestuscupmh service.id=iq4n61hgp4djn3b8nfau0u330 task.id=feiuf2eevzvar0l84hvw50zph
Jan 10 18:34:09 node1 dockerd[1076]: time="2018-01-10T18:34:09Z" level=error msg="2018/01/10 18:34:09 Entering go-plugins-helpers capabilitiesPath" plugin=4bc1d7f4f641c2479f60f850f5b0fbaa3b293e67bb65e0c7c4ba5c66d8780ecc
Jan 10 18:34:09 node1 dockerd[1076]: time="2018-01-10T18:34:09Z" level=error msg="2018/01/10 18:34:09 Entering go-plugins-helpers capabilitiesPath" plugin=4bc1d7f4f641c2479f60f850f5b0fbaa3b293e67bb65e0c7c4ba5c66d8780ecc

raspberry pi build

I am trying to build the plugin for Raspberry Pi (armv7l).

When running make on the cloned repository I get the following error:
# github.com/sapk/docker-volume-gluster/gluster/driver
gluster/driver/tools.go:79: undefined: url.PathEscape
gluster/driver/tools.go:81: undefined: url.PathEscape
Makefile:77: recipe for target 'compile' failed
make: *** [compile] Error 2
I am looking for a way to get a managed GlusterFS plugin on Raspberry Pi.
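url.PathEscape was added to net/url in Go 1.8, so the error above suggests the Go toolchain on the Pi predates 1.8. A quick check before retrying the build:

go version   # building this plugin from source needs go1.8 or newer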

List multiple servers in voluri

Is it possible to add multiple volume servers (peers)? For example:

volumes:
  some_vol:
    driver: sapk/plugin-gluster
    driver_opts:
      voluri: "gluster1.com:glustervol1,gluster2.com:glustervol1"

VolumeDriver.Mount: exit status 107

Version

sapk/plugin-gluster:latest

My OS

CentOS 7.4

When I run this command:
docker run -d --volume-driver sapk/plugin-gluster:latest --volume huihigh:/mnt centos echo "Hello zhouqy from a container" >> /mnt/hello-from-container.txt

Error log

/usr/bin/docker-current: Error response from daemon: error while mounting volume '/var/lib/docker/plugins/4cb99a7e4a34224c8003186ed067dce1c9c02c55a4ab57f35c613a83bc60b580/rootfs': VolumeDriver.Mount: exit status 107.

Add more testing

It needs more unit tests, but also integration tests that set up a gluster ring.

VolumeDriver.Mount: exit status 1

  • Plugin version (or commit ref) : latest
  • Docker version : 17.09.1
  • Plugin type : managed
  • Operating system: Ubuntu 16.04

Description

Six node swarm. Three are Swarm managers / Gluster servers, the other three are workers. They can all resolve each other and ping each other, all behind the same network switch.

We currently have the original glusterfs volume plugin installed on the workers. We want to migrate to your plugin so I have executed sudo docker plugin install sapk/plugin-gluster on each worker.

I created a new Gluster volume then a new docker volume using it and your plugin without incident.

I tried launching a stack using a docker-compose.yaml file:

version: '3.3'

services:

  test-server:
    image: ubuntu:latest
    hostname: ubuntu
    volumes:
      - jg-test_test:/data
    deploy:
      placement:
        constraints:
          - node.role != manager

This "landed" on virt-b, logs below. An earlier attempt, based on a top-level volumes definition and before using docker volume create ... also failed with the exact same error.

In short, neither creating the docker volume in advance nor allowing docker swarm to auto-create it worked for me. The worker can ping the declared gluster node(s) just fine.

It is unclear how to debug this. I tried adding a debug: 1 to the volume definition without observing any change.

Could the fact that these workers already use Gluster volumes via the original glusterfs plugin be preventing your plugin's use? I am at a loss to explain it otherwise, since we have it working fine with the original plugin.

Logs

Jan 11 16:03:52 virt-b dockerd[1051]: time="2018-01-11T16:03:52.069898516Z" level=error msg="fatal task error" error="starting container failed: error while mounting volume '/var/lib/docker/plugins/ff58396a87804b347745e384c8021758fdb97d41d398767c7bf84fb3fabd1974/rootfs': VolumeDriver.Mount: exit status 1" module="node/agent/taskmanager" node.id=t72pzcfvekpi7zayyi1su185y service.id=cshb1o1btdblbjwko0xup5fno task.id=svlw41f8r26bck88233qxbdnb
Jan 11 16:03:52 virt-b dockerd[1051]: time="2018-01-11T16:03:52Z" level=info msg="2018/01/11 16:03:52 Entering go-plugins-helpers capabilitiesPath" plugin=ff58396a87804b347745e384c8021758fdb97d41d398767c7bf84fb3fabd1974
Jan 11 16:03:52 virt-b dockerd[1051]: time="2018-01-11T16:03:52Z" level=info msg="2018/01/11 16:03:52 Entering go-plugins-helpers capabilitiesPath" plugin=ff58396a87804b347745e384c8021758fdb97d41d398767c7bf84fb3fabd1974
Jan 11 16:03:52 virt-b dockerd[1051]: time="2018-01-11T16:03:52Z" level=info msg="2018/01/11 16:03:52 Entering go-plugins-helpers getPath" plugin=ff58396a87804b347745e384c8021758fdb97d41d398767c7bf84fb3fabd1974
Jan 11 16:03:52 virt-b kernel: [1354011.732285] aufs au_opts_verify:1597:dockerd[4919]: dirperm1 breaks the protection by the permission bits on the lower branch
Jan 11 16:03:52 virt-b dockerd[1051]: time="2018-01-11T16:03:52Z" level=info msg="2018/01/11 16:03:52 Entering go-plugins-helpers capabilitiesPath" plugin=ff58396a87804b347745e384c8021758fdb97d41d398767c7bf84fb3fabd1974
Jan 11 16:03:52 virt-b dockerd[1051]: time="2018-01-11T16:03:52Z" level=info msg="2018/01/11 16:03:52 Entering go-plugins-helpers capabilitiesPath" plugin=ff58396a87804b347745e384c8021758fdb97d41d398767c7bf84fb3fabd1974
Jan 11 16:03:52 virt-b dockerd[1051]: time="2018-01-11T16:03:52Z" level=info msg="2018/01/11 16:03:52 Entering go-plugins-helpers getPath" plugin=ff58396a87804b347745e384c8021758fdb97d41d398767c7bf84fb3fabd1974
Jan 11 16:03:52 virt-b kernel: [1354011.759236] aufs au_opts_verify:1597:dockerd[4919]: dirperm1 breaks the protection by the permission bits on the lower branch

simple getting started guide

For example, I assume that the glusterfs pool has to be up and running before using the driver, but this is not stated anywhere. I also don't understand whether the docker host has to participate in the gluster pool, whether it needs fuse installed, etc. Overall, documenting some prerequisites, with pointers to the gluster documentation where applicable, would be a really nice addition. I can contribute if you give me some directions.
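A minimal sketch of what those prerequisites appear to be, based on this README (assumptions: a running Gluster pool hosted on other machines, which the docker host apparently does not need to join since the gluster CLI ships inside the plugin):

modprobe fuse                              # the plugin mounts through FUSE on the host
docker plugin install sapk/plugin-gluster
ping -c1 <volumeserver>                    # the gluster servers must be reachable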

VolumeDriver.Mount: exit status 1

I currently have 2 gluster servers also running docker with app1 in a container using this plugin.

I recently set up a 3rd docker host in the same datacenter as the other two. They are essentially on a public LAN with 1ms ping between them. The only difference is the two gluster servers/docker hosts are running Debian linux and the 3rd host is running RancherOS. I temporarily disabled the firewall on both gluster servers for testing and re-enabled when finished.

Volume info from one Gluster server.

gluster volume info

Volume Name: docker-app1
Type: Replicate
Volume ID: 4bf3fb99-0ee7-4345-9f98-fc3039204749
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster1.mydomain.com:/docker
Brick2: gluster2.mydomain.com:/docker
Options Reconfigured:
auth.allow: all
transport.address-family: inet
nfs.disable: on

on the RancherOS host:

[rancher@rancher ~]$ docker volume create --driver sapk/plugin-gluster --opt voluri="gluster.mydomain.com:docker-app1" --name docker-app1                  
docker-app1

[rancher@rancher ~]$ docker volume ls
DRIVER                       VOLUME NAME
local                        f9a1fe0d3dbaa575b0c3e17753a6931f727eb7e610f0173b574b7cac42419044
local                        fa0909ccc4200a6af2fe52d47e460514ccd0e80c06a23c4caca49215a073ae61
local                        test
sapk/plugin-gluster:latest   docker-app1

[rancher@rancher ~]$ docker plugin ls
ID                  NAME                         DESCRIPTION                   ENABLED
d1e1965d1ebe        sapk/plugin-gluster:latest   GlusterFS plugin for Docker   true

[rancher@rancher ~]$ docker run -v docker-app1:/mnt --rm -ti ubuntu
docker: Error response from daemon: VolumeDriver.Mount: exit status 1.
See 'docker run --help'.

plugin is not listed in docker info

The gluster plugin is not listed in docker info (only the local driver shows).

root@virt-e:~# docker plugin ls
ID                  NAME                         DESCRIPTION                   ENABLED
b6a4dfbbd698        sapk/plugin-gluster:latest   GlusterFS plugin for Docker   true

root@virt-e:~# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 1
Server Version: 17.09.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active

mounting sub dirs

Is it somehow possible to mount volume subdirs?

There is a statement in the README about this for volume creation:
docker volume create --driver sapk/plugin-gluster --opt voluri="<volumeserver>,<otherserver>,<otheroptionalserver>:<volumename></optional/sub/dir>" --name test

Is there also an option to select subdirs of the gluster volume when running the container:

docker run -v test:/mnt --rm -ti ubuntu

e.g.

docker run -v <test>/<subdir>:/mnt --rm -ti ubuntu
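Based on the README's voluri syntax quoted above, the sub-directory is selected when the volume is created, not on docker run; a hedged example:

docker volume create --driver sapk/plugin-gluster \
  --opt voluri="<volumeserver>:<volumename>/<subdir>" --name test-sub
docker run -v test-sub:/mnt --rm -ti ubuntu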

docker: Error response from daemon: VolumeDriver.Mount: EOF.

  • Plugin version (or commit ref) :from docker hub
  • Docker version :Version: 18.09.0
  • Plugin type : legacy
  • Operating system:ubuntu1804

Description

root@ocp11:~# docker run -v test:/mnt --rm -ti alpine
docker: Error response from daemon: VolumeDriver.Mount: EOF.
See 'docker run --help'.

root@ocp11:~# docker volume inspect test
[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "registry.tsingj.local/plugin-gluster:latest",
        "Labels": {},
        "Mountpoint": "/var/lib/docker-volumes/gluster/test",
        "Name": "test",
        "Options": {
            "voluri": "192.168.0.7,192.168.0.8,192.168.0.15,192.168.0.18:gfs"
        },
        "Scope": "global",
        "Status": {
            "TODO": "List"
        }
    }
]

root@ocp7:/mnt# gluster vol info

Volume Name: gfs
Type: Distributed-Replicate
Volume ID: e508cfb9-e426-4031-875c-95fc71b4664a
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: ocp7:/mnt/gluster
Brick2: ocp8:/mnt/gluster
Brick3: ocp15:/mnt/gluster
Brick4: ocp18:/mnt/gluster
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
cluster.server-quorum-type: server
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.server-quorum-ratio: 51%

Logs

Aug 8 18:56:51 localhost dockerd[31935]: time="2019-08-08T18:56:51+08:00" level=error msg="2019/08/08 10:56:51 http: panic serving @: runtime error: index out of range" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:51 localhost dockerd[31935]: time="2019-08-08T18:56:51+08:00" level=error msg="goroutine 163 [running]:" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:51 localhost dockerd[31935]: time="2019-08-08T18:56:51+08:00" level=error msg="net/http.(*conn).serve.func1(0xc00023c320)" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:51 localhost dockerd[31935]: time="2019-08-08T18:56:51+08:00" level=error msg="\t/usr/local/go/src/net/http/server.go:1746 +0xd0" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:51 localhost dockerd[31935]: time="2019-08-08T18:56:51+08:00" level=error msg="panic(0x83ff20, 0xc2cb30)" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:51 localhost dockerd[31935]: time="2019-08-08T18:56:51+08:00" level=error msg="\t/usr/local/go/src/runtime/panic.go:513 +0x1b9" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:51 localhost dockerd[31935]: time="2019-08-08T18:56:51+08:00" level=error msg="github.com/sapk/docker-volume-gluster/gluster/driver.parseVolURI(0x0, 0x0, 0x0, 0x4)" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:51 localhost dockerd[31935]: time="2019-08-08T18:56:51+08:00" level=error msg="\t/go/src/app/.gopath/src/github.com/sapk/docker-volume-gluster/gluster/driver/tools.go:74 +0x1c6" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:51 localhost dockerd[31935]: time="2019-08-08T18:56:51+08:00" level=error msg="github.com/sapk/docker-volume-gluster/gluster/driver.(*GlusterDriver).Mount(0xc0000f8820, 0xc0001f1c80, 0x0, 0x0, 0x0)" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:51 localhost dockerd[31935]: time="2019-08-08T18:56:51+08:00" level=error msg="\t/go/src/app/.gopath/src/github.com/sapk/docker-volume-gluster/gluster/driver/driver.go:239 +0x19c" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:51 localhost dockerd[31935]: time="2019-08-08T18:56:51+08:00" level=error msg="github.com/sapk/docker-volume-gluster/vendor/github.com/docker/go-plugins-helpers/volume.(*Handler).initMux.func3(0x932f80, 0xc000336620, 0xc0002dab00)" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:51 localhost dockerd[31935]: time="2019-08-08T18:56:51+08:00" level=error msg="\t/go/src/app/.gopath/src/github.com/sapk/docker-volume-gluster/vendor/github.com/docker/go-plugins-helpers/volume/api.go:166 +0xe9" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:51 localhost dockerd[31935]: time="2019-08-08T18:56:51+08:00" level=error msg="net/http.HandlerFunc.ServeHTTP(0xc0000b3780, 0x932f80, 0xc000336620, 0xc0002dab00)" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:51 localhost dockerd[31935]: time="2019-08-08T18:56:51+08:00" level=error msg="\t/usr/local/go/src/net/http/server.go:1964 +0x44" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:51 localhost dockerd[31935]: time="2019-08-08T18:56:51+08:00" level=error msg="net/http.(*ServeMux).ServeHTTP(0xc0000e9c50, 0x932f80, 0xc000336620, 0xc0002dab00)" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:51 localhost dockerd[31935]: time="2019-08-08T18:56:51+08:00" level=error msg="\t/usr/local/go/src/net/http/server.go:2361 +0x127" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:51 localhost dockerd[31935]: time="2019-08-08T18:56:51+08:00" level=error msg="net/http.serverHandler.ServeHTTP(0xc0000eb6c0, 0x932f80, 0xc000336620, 0xc0002dab00)" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:51 localhost dockerd[31935]: time="2019-08-08T18:56:51+08:00" level=error msg="\t/usr/local/go/src/net/http/server.go:2741 +0xab" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:51 localhost dockerd[31935]: time="2019-08-08T18:56:51+08:00" level=error msg="net/http.(*conn).serve(0xc00023c320, 0x933300, 0xc000334540)" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:51 localhost dockerd[31935]: time="2019-08-08T18:56:51+08:00" level=error msg="\t/usr/local/go/src/net/http/server.go:1847 +0x646" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:51 localhost dockerd[31935]: time="2019-08-08T18:56:51+08:00" level=error msg="created by net/http.(*Server).Serve" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:51 localhost dockerd[31935]: time="2019-08-08T18:56:51+08:00" level=error msg="\t/usr/local/go/src/net/http/server.go:2851 +0x2f5" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:52 localhost dockerd[31935]: time="2019-08-08T18:56:52+08:00" level=error msg="2019/08/08 10:56:52 Entering go-plugins-helpers mountPath" plugin=fab75fb4d1204d6750d6b14668f3d003c95da0c2963919b9fa7e5e1f7f12c96b
Aug 8 18:56:53 localhost dockerd[31935]: time="2019-08-08T18:56:53.310795637+08:00" level=error msg="Handler for POST /v1.39/containers/create returned error: VolumeDriver.Mount: EOF\n"
