
gluster-csi-driver's Introduction

gluster-csi-driver


This repo contains the CSI driver for Gluster. The Container Storage Interface (CSI) is a proposed industry standard for cluster-wide volume plugins. CSI enables storage providers (SP) to develop a plugin once and have it work across a number of container orchestration (CO) systems.

Demo of the GlusterFS CSI driver creating and deleting volumes on a GD2 cluster

GlusterFS CSI driver Demo

Building GlusterFS CSI driver

This repository contains the source and a Dockerfile to build the GlusterFS CSI driver. The driver is built as a multi-stage container build. This requires a relatively recent version of Docker or Buildah.

Docker packages can be obtained for CentOS, Fedora or other distributions.

To build, ensure docker is installed, and run:

  1. Change into the repository directory
[root@localhost]# cd gluster-csi-driver
  2. Build the glusterfs-csi-driver container
[root@localhost]# ./build.sh

Testing GlusterFS CSI driver

Deploy a Kubernetes cluster

Deploy a GD2 Gluster cluster

Create a glusterfs storage class (RWX)

[root@localhost]# cat storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-csi
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: org.gluster.glusterfs
[root@localhost]# kubectl create -f storage-class.yaml
storageclass.storage.k8s.io/glusterfs-csi created

Verify glusterfs storage class (RWX)

[root@localhost]# kubectl get storageclass
NAME                      PROVISIONER             AGE
glusterfs-csi (default)   org.gluster.glusterfs   105s
[root@localhost]# kubectl describe storageclass/glusterfs-csi
Name:                  glusterfs-csi
IsDefaultClass:        Yes
Annotations:           storageclass.kubernetes.io/is-default-class=true
Provisioner:           org.gluster.glusterfs
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

Create RWX PersistentVolumeClaim

[root@localhost]# cat pvc.yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-csi-pv
spec:
  storageClassName: glusterfs-csi
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

[root@localhost]# kubectl create -f pvc.yaml
persistentvolumeclaim/glusterfs-csi-pv created

Validate the RWX claim creation

[root@localhost]# kubectl get pvc
NAME      STATUS    VOLUME                                                        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
glusterfs-csi-pv   Bound     pvc-953d21f5a51311e8   5Gi        RWX            glusterfs-csi   3s
[root@localhost]# kubectl describe pvc
Name:          glusterfs-csi-pv
Namespace:     default
StorageClass:  glusterfs-csi
Status:        Bound
Volume:        pvc-953d21f5a51311e8
Labels:        <none>
Annotations:   control-plane.alpha.kubernetes.io/leader={"holderIdentity":"874a6cc9-a511-11e8-bae2-0a580af40202","leaseDurationSeconds":15,"acquireTime":"2018-08-21T07:26:58Z","renewTime":"2018-08-21T07:27:00Z","lea...
               pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
               storageClassName=glusterfs-csi
               volume.kubernetes.io/storage-provisioner=org.gluster.glusterfs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      5Gi
Access Modes:  RWX
Events:
  Type    Reason                 Age                From                                                                                          Message
  ----    ------                 ----               ----                                                                                          -------
  Normal  ExternalProvisioning   30s (x2 over 30s)  persistentvolume-controller                                                                   waiting for a volume to be created, either by external provisioner "org.gluster.glusterfs" or manually created by system administrator
  Normal  Provisioning           30s                org.gluster.glusterfs csi-provisioner-glusterfsplugin-0 874a6cc9-a511-11e8-bae2-0a580af40202  External provisioner is provisioning volume for claim "default/glusterfs-csi-pv"
  Normal  ProvisioningSucceeded  29s                org.gluster.glusterfs csi-provisioner-glusterfsplugin-0 874a6cc9-a511-11e8-bae2-0a580af40202  Successfully provisioned volume pvc-953d21f5a51311e8

Verify PV details:

[root@localhost]# kubectl describe pv
Name:            pvc-953d21f5a51311e8
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by=org.gluster.glusterfs
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    glusterfs-csi
Status:          Bound
Claim:           default/glusterfs-csi-pv
Reclaim Policy:  Delete
Access Modes:    RWX
Capacity:        5Gi
Node Affinity:   <none>
Message:
Source:
    Type:          CSI (a Container Storage Interface (CSI) volume source)
    Driver:        org.gluster.glusterfs
    VolumeHandle:  pvc-953d21f5a51311e8
    ReadOnly:      false
Events:            <none>

Create a pod with RWX pvc claim

[root@localhost]# cat app.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: gluster
  labels:
    name: gluster
spec:
  containers:
  - name: gluster
    image: redis
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - mountPath: "/mnt/gluster"
      name: glustercsivol
  volumes:
  - name: glustercsivol
    persistentVolumeClaim:
      claimName: glusterfs-csi-pv

[root@localhost]# kubectl create -f app.yaml

Check mount output and validate.

[root@localhost]# mount |grep glusterfs
192.168.121.158:pvc-953d21f5a51311e8 on /var/lib/kubelet/pods/2a563343-a514-11e8-a324-525400a04cb4/volumes/kubernetes.io~csi/pvc-953d21f5a51311e8/mount type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

[root@localhost]# kubectl delete pod gluster
pod "gluster" deleted
[root@localhost]# mount |grep glusterfs
[root@localhost]#

Support for Snapshot

Kubernetes v1.12 introduces alpha support for volume snapshotting. This feature allows creating/deleting volume snapshots, and the ability to create new volumes from a snapshot natively using the Kubernetes API.

To verify that the clone functionality works as intended, let's start by writing some data into the already-created application pod that uses the PVC.

[root@localhost]# kubectl exec -it redis /bin/bash
root@redis:/data# cd /mnt/gluster/
root@redis:/mnt/gluster# echo "glusterfs csi clone test" > clone_data

Create a snapshot class

[root@localhost]# cat snapshot-class.yaml
---
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
  name: glusterfs-csi-snap
snapshotter: org.gluster.glusterfs
[root@localhost]# kubectl create -f snapshot-class.yaml
volumesnapshotclass.snapshot.storage.k8s.io/glusterfs-csi-snap created

Verify snapshot class

[root@localhost]# kubectl get volumesnapshotclass
NAME               AGE
glusterfs-csi-snap   1h
[root@localhost]# kubectl describe volumesnapshotclass/glusterfs-csi-snap
Name:         glusterfs-csi-snap
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  snapshot.storage.k8s.io/v1alpha1
Kind:         VolumeSnapshotClass
Metadata:
  Creation Timestamp:  2018-10-24T04:57:34Z
  Generation:          1
  Resource Version:    3215
  Self Link:           /apis/snapshot.storage.k8s.io/v1alpha1/volumesnapshotclasses/glusterfs-csi-snap
  UID:                 51de83df-d749-11e8-892a-525400d84c47
Snapshotter:           org.gluster.glusterfs
Events:                <none>

Create a snapshot from RWX pvc

[root@localhost]# cat volume-snapshot.yaml
---
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: glusterfs-csi-ss
spec:
  snapshotClassName: glusterfs-csi-snap
  source:
    name: glusterfs-csi-pv
    kind: PersistentVolumeClaim

[root@localhost]# kubectl create -f volume-snapshot.yaml
volumesnapshot.snapshot.storage.k8s.io/glusterfs-csi-ss created

Verify volume snapshot

[root@localhost]# kubectl get volumesnapshot
NAME               AGE
glusterfs-csi-ss   13s
[root@localhost]# kubectl describe volumesnapshot/glusterfs-csi-ss
Name:         glusterfs-csi-ss
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  snapshot.storage.k8s.io/v1alpha1
Kind:         VolumeSnapshot
Metadata:
  Creation Timestamp:  2018-10-24T06:39:35Z
  Generation:          1
  Resource Version:    12567
  Self Link:           /apis/snapshot.storage.k8s.io/v1alpha1/namespaces/default/volumesnapshots/glusterfs-csi-ss
  UID:                 929722b7-d757-11e8-892a-525400d84c47
Spec:
  Snapshot Class Name:    glusterfs-csi-snap
  Snapshot Content Name:  snapcontent-929722b7-d757-11e8-892a-525400d84c47
  Source:
    Kind:  PersistentVolumeClaim
    Name:  glusterfs-csi-pv
Status:
  Creation Time:  1970-01-01T00:00:01Z
  Ready:          true
  Restore Size:   <nil>
Events:           <none>

Provision new volume from snapshot

[root@localhost]# cat pvc-restore.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-pv-restore
spec:
  storageClassName: glusterfs-csi
  dataSource:
    name: glusterfs-csi-ss
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
[root@localhost]# kubectl create -f pvc-restore.yaml
persistentvolumeclaim/glusterfs-pv-restore created

Verify newly created claim

[root@localhost]# kubectl get pvc
NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
glusterfs-csi-pv       Bound    pvc-712278b0-d749-11e8-892a-525400d84c47   5Gi        RWX            glusterfs-csi   103m
glusterfs-pv-restore   Bound    pvc-dfcc36f0-d757-11e8-892a-525400d84c47   5Gi        RWO            glusterfs-csi   14s
[root@localhost]# kubectl describe pvc/glusterfs-pv-restore
Name:          glusterfs-pv-restore
Namespace:     default
StorageClass:  glusterfs-csi
Status:        Bound
Volume:        pvc-dfcc36f0-d757-11e8-892a-525400d84c47
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.kubernetes.io/storage-provisioner: org.gluster.glusterfs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      5Gi
Access Modes:  RWO
Events:
  Type       Reason                 Age   From                                                                                          Message
  ----       ------                 ----  ----                                                                                          -------
  Normal     ExternalProvisioning   41s   persistentvolume-controller                                                                   waiting for a volume to be created, either by external provisioner "org.gluster.glusterfs" or manually created by system administrator
  Normal     Provisioning           41s   org.gluster.glusterfs_csi-provisioner-glusterfsplugin-0_1e7821cb-d749-11e8-9935-0a580af40303  External provisioner is provisioning volume for claim "default/glusterfs-pv-restore"
  Normal     ProvisioningSucceeded  41s   org.gluster.glusterfs_csi-provisioner-glusterfsplugin-0_1e7821cb-d749-11e8-9935-0a580af40303  Successfully provisioned volume pvc-dfcc36f0-d757-11e8-892a-525400d84c47
Mounted By:  <none>

Create an app with the new claim

[root@localhost]# cat app-with-clone.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: redis-pvc-restore
  labels:
    name: redis-pvc-restore
spec:
  containers:
    - name: redis-pvc-restore
      image: redis:latest
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - mountPath: "/mnt/gluster"
          name: glusterfscsivol
  volumes:
    - name: glusterfscsivol
      persistentVolumeClaim:
        claimName: glusterfs-pv-restore
[root@localhost]# kubectl create -f app-with-clone.yaml
pod/redis-pvc-restore created

Verify cloned data is present in newly created application

[root@localhost]# kubectl get po
NAME                                   READY   STATUS    RESTARTS   AGE
csi-attacher-glusterfsplugin-0         2/2     Running   0          112m
csi-nodeplugin-glusterfsplugin-dl7pp   2/2     Running   0          112m
csi-nodeplugin-glusterfsplugin-khrtd   2/2     Running   0          112m
csi-nodeplugin-glusterfsplugin-kqcsw   2/2     Running   0          112m
csi-provisioner-glusterfsplugin-0      3/3     Running   0          112m
glusterfs-55v7v                        1/1     Running   0          128m
glusterfs-qbvgv                        1/1     Running   0          128m
glusterfs-vclr4                        1/1     Running   0          128m
redis                                  1/1     Running   0          109m
redis-pvc-restore                      1/1     Running   0          26s
[root@localhost]# kubectl exec -it redis-pvc-restore /bin/bash
root@redis-pvc-restore:/data# cd /mnt/gluster/
root@redis-pvc-restore:/mnt/gluster# ls
clone_data
root@redis-pvc-restore:/mnt/gluster# cat clone_data
glusterfs csi clone test

Create a glusterfs lite storage class to use loopback bricks (RWX)

[root@localhost]# cat glusterfs-lite-storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-lite-csi
provisioner: org.gluster.glusterfs
parameters:
  brickType: "loop"
[root@localhost]# kubectl create -f glusterfs-lite-storage-class.yaml
storageclass.storage.k8s.io/glusterfs-lite-csi created

Verify glusterfs storage class (RWX)

[root@localhost]# kubectl get storageclass
NAME                      PROVISIONER             AGE
glusterfs-lite-csi        org.gluster.glusterfs   105s

Create RWX PersistentVolumeClaim using glusterfs-lite-csi storage class

[root@localhost]# cat pvc.yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-lite-csi-pv
spec:
  storageClassName: glusterfs-lite-csi
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

[root@localhost]# kubectl create -f pvc.yaml
persistentvolumeclaim/glusterfs-lite-csi-pv created

Validate the RWX claim creation

[root@localhost]# kubectl get pvc
NAME                    STATUS    VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS         AGE
glusterfs-lite-csi-pv   Bound     pvc-943d21f5a51312e7   5Gi        RWX            glusterfs-lite-csi   5s

Create PVC with thin arbiter support

Follow the thin arbiter guide to set up a thin arbiter node before proceeding.

Create Thin-Arbiter storage class

$ cat thin-arbiter-storageclass.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-csi-thin-arbiter
provisioner: org.gluster.glusterfs
parameters:
  arbiterType: "thin"
  arbiterPath: "192.168.10.90:24007/mnt/arbiter-path"
$ kubectl create -f thin-arbiter-storageclass.yaml
storageclass.storage.k8s.io/glusterfs-csi-thin-arbiter created

Create Thin-Arbiter PersistentVolumeClaim

$ cat thin-arbiter-pvc.yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-csi-thin-pv
spec:
  storageClassName: glusterfs-csi-thin-arbiter
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
$ kubectl create -f thin-arbiter-pvc.yaml
persistentvolumeclaim/glusterfs-csi-thin-pv created

Verify PVC is in Bound state

$ kubectl get pvc
NAME                     STATUS        VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS            AGE
glusterfs-csi-thin-pv    Bound         pvc-86b3b70b-1fa0-11e9-9232-525400ea010d   5Gi        RWX            glusterfs-csi-arbiter   13m

Create an app with claim

$ cat thin-arbiter-pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: ta-redis
  labels:
    name: redis
spec:
  containers:
    - name: redis
      image: redis
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - mountPath: "/mnt/gluster"
          name: glusterfscsivol
  volumes:
    - name: glusterfscsivol
      persistentVolumeClaim:
        claimName: glusterfs-csi-thin-pv
$ kubectl create -f thin-arbiter-pod.yaml
pod/ta-redis created

Verify app is in running state

$ kubectl get po
NAME        READY   STATUS        RESTARTS   AGE
ta-redis    1/1     Running       0          6m54s

gluster-csi-driver's People

Contributors

aravindavk, dependabot[bot], humblec, jarrpa, joejulian, johnstrunk, kotreshhr, kshlm, madhu-1, nixpanic

gluster-csi-driver's Issues

Print version information

Describe the feature you'd like to have.
At startup, the CSI pod should output sufficient version information to tell what version of the software is running.

What is the value to the end user? (why is it a priority?)
In order to facilitate bug reports, we should have a clear display of relevant package and repo versions such that issues can be reproduced.

How will we know we have a good solution? (acceptance criteria)

  • At container startup, the following should be displayed:
    • RPM versions for important packages (at least rpm -qa | grep gluster)
    • The git hash from the CSI repo used to build the driver (probably something like: git describe --dirty --always --tags | sed 's/-/./2' | sed 's/-/./2')

Additional context
I envision this as a shell script that execs the CSI driver after printing the relevant information (see the sketch below).
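
A minimal sketch of such a wrapper, assuming the version string is injected at build time via a DRIVER_VERSION environment variable and the binary path shown here (both names are illustrative, not the repo's actual layout):

#!/bin/sh
# Print package and build information, then hand control to the real driver.
echo "=== gluster-related packages ==="
rpm -qa | grep gluster
echo "=== driver build ==="
echo "git version: ${DRIVER_VERSION:-unknown}"
exec /usr/local/bin/glusterfs-csi-driver "$@"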

Monitoring fuse mounts

Describe the feature you'd like to have.
The CSI pod should be able to monitor the fuse mount processes and report on abnormal conditions.

What is the value to the end user? (why is it a priority?)
The CSI driver will spawn a FUSE process as a part of each volume mount. Currently, these extra processes are just "fire and forget." However, it is possible for one of these fuse processes to crash. Even with #3 this may not be detectable. A daemon within the pod should watch the fuse processes and ensure that if one crashes, a suitable error is logged so that an admin can easily diagnose why a pod has lost access to storage.

How will we know we have a good solution? (acceptance criteria)

  • An error message is logged to stdout (and visible via kubectl logs <container>) when a fuse process abnormally exits
  • No error is logged if the fuse process exits due to an unmount

Additional context

  • Bonus points for detecting and logging loss of connection to the server (that may reconnect in the future). <== I suspect this will be visible with #3
  • It would be nice if there was a way to automatically remedy the crash, but I'm currently at a loss.
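
A rough sketch of what such a watcher could look like inside the pod, assuming the fuse mounts show up as glusterfs processes (illustrative only; it does not yet distinguish a clean unmount from a crash, which the acceptance criteria require):

#!/bin/sh
# Periodically compare the set of glusterfs fuse PIDs and log any that disappear.
prev=""
while true; do
  cur=$(pgrep -x glusterfs | sort)
  for pid in $prev; do
    if ! echo "$cur" | grep -qx "$pid"; then
      echo "WARNING: glusterfs fuse process $pid exited" >&2
    fi
  done
  prev=$cur
  sleep 10
done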

Failed to create volume

Describe the bug
If the volume is not in the started state, we won't be able to get the volume status.

Steps to reproduce
Steps to reproduce the behavior:

  • Create 30 PVCs in Kubernetes

Actual results
Some PVCs fail to be created in Kubernetes

Expected behavior
Creation of PVC should be successful
Logs:

E1025 10:32:01.249272       1 controllerserver.go:236] failed to fetch volume : volume not started
E1025 10:32:01.249325       1 controllerserver.go:106] error checking for pre-existing volume: rpc error: code = Internal desc = error in fetching volume details volume not started
E1025 10:32:01.249357       1 utils.go:100] GRPC error: rpc error: code = Internal desc = error in fetching volume details volume not started

Auto-built containers don't have git version info

Describe the bug
The current method used to build in docker cloud doesn't set the build args properly, so some are left blank. Of particular importance are the version and build-date labels.

Steps to reproduce

$ skopeo inspect docker://docker.io/gluster/gluster-csi-driver
{
    "Name": "docker.io/gluster/gluster-csi-driver",
    "Digest": "sha256:7c3c4b360c1341912837c736173ff874b13536abdd53eaaa35febce761363e26",
    "RepoTags": [
        "latest"
    ],
    "Created": "2018-11-13T21:10:45.810160207Z",
    "DockerVersion": "18.03.1-ee-3",
    "Labels": {
        "Summary": "FUSE-based CSI driver for Gluster file access",
        "build-date": "(unknown)",
        "io.k8s.description": "FUSE-based CSI driver for Gluster file access",
        "name": "glusterfs-csi-driver",
        "org.label-schema.schema-version": "= 1.0     org.label-schema.name=CentOS Base Image     org.label-schema.vendor=CentOS     org.label-schema.license=GPLv2     org.label-schema.build-date=20180531",
        "vcs-type": "git",
        "vcs-url": "https://github.com/gluster/gluster-csi-driver",
        "vendor": "gluster.org",
        "version": "(unknown)"
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Layers": [
        "sha256:7dc0dca2b1516961d6b3200564049db0a6e0410b370bb2189e2efae0d368616f",
        "sha256:207a2a561b4feedb244c56d01de2b541772983e55c6fd6c5cc9f13f853dd7ade",
        "sha256:8a27c9866f70d2455c7259a5218ac79e2c6f00b1412a5b71c6ee5c73381591b2"
    ]
}

Additional context
This should be able to be resolved by adding a build hook. See: https://docs.docker.com/docker-cloud/builds/advanced/#custom-build-phase-hooks
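
One way to do that is a hooks/build script in the repo that forwards git metadata as build args (a sketch only; Docker Hub exports IMAGE_NAME to hooks, and the ARG names below must match whatever the Dockerfile actually declares):

#!/bin/bash
# hooks/build -- custom build hook run by the Docker Hub automated build.
# Pass version and build date into the image so the labels aren't "(unknown)".
docker build \
  --build-arg version="$(git describe --dirty --always --tags)" \
  --build-arg builddate="$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  -t "$IMAGE_NAME" .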

Create CSI driver for gluster block

This issue tracks the requirement for a gluster-block CSI driver capable of performing the actions below in its first version.

*) Create block volume
*) Delete block volume
*) Mount
*) Unmount
*) Documentation about how to use/test this driver.

This attempt has to be started with a design PR which describes the driver in detail.

  • Which APIs (heketi, GD2, or both) to use to provision block volumes
  • Volume create API
  • Volume delete API
  • Mount
  • Unmount

Few PVCs fail to create while creating 100 PVCs using a script

Describe the bug
When creating 100 PVCs using a script with a 30-second gap between them, a few PVCs fail to create.

Steps to reproduce
-> Create 100 PVCs using a script, then observe the behavior
Actual results

Error from server (InternalError): error when creating "pvc.yaml": Internal error occurred: resource quota evaluates timeout
persistentvolumeclaim/gcs-pvc44 created
persistentvolumeclaim/gcs-pvc45 created
persistentvolumeclaim/gcs-pvc46 created
Error from server (InternalError): error when creating "pvc.yaml": Internal error occurred: resource quota evaluates timeout
Unable to connect to the server: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
persistentvolumeclaim/gcs-pvc49 created
persistentvolumeclaim/gcs-pvc50 created
persistentvolumeclaim/gcs-pvc51 created
persistentvolumeclaim/gcs-pvc52 created
Error from server (Timeout): error when creating "pvc.yaml": the server was unable to return a response in the time allotted, but may still be processing the request (post persistentvolumeclaims)
persistentvolumeclaim/gcs-pvc54 created

Expected behavior
All PVCs should be created and bound successfully.

Tried to create 100 PVCs using a script; 81 PVCs are in the Pending state

Describe the bug

Tried to create 100 PVCs using a script. Roughly the first 18 PVCs, plus one in between, were bound; all remaining PVCs went to the Pending state.

Steps to reproduce

  1. Create 100 PVCs using a script
  2. Around the first 18 PVCs (and one in between) are bound; the remaining 81 PVCs are in the Pending state.
  3. After some time, all etcd pods went to the Completed state.
kubectl -n gcs get all
NAME                                       READY   STATUS             RESTARTS   AGE
pod/csi-attacher-glusterfsplugin-0         2/2     Running            0          7h49m
pod/csi-nodeplugin-glusterfsplugin-6fgq8   2/2     Running            0          7h49m
pod/csi-nodeplugin-glusterfsplugin-hlb95   2/2     Running            0          7h49m
pod/csi-nodeplugin-glusterfsplugin-pmhvd   2/2     Running            0          7h49m
pod/csi-provisioner-glusterfsplugin-0      2/2     Running            0          7h49m
pod/etcd-bq58l8bfkz                        0/1     Completed          0          7h52m
pod/etcd-operator-7cb5bd459b-97hxq         1/1     Running            0          7h52m
pod/etcd-phfpxjxg6n                        0/1     Completed          0          7h50m
pod/etcd-wfkrgtvhwb                        0/1     Completed          0          7h51m
pod/gluster-kube1-0                        0/1     CrashLoopBackOff   39         7h50m
pod/gluster-kube2-0                        1/1     Running            39         7h50m
pod/gluster-kube3-0                        1/1     Running            0          7h50m

Actual results

[vagrant@kube1 ~]$ kubectl get pvc
NAME        STATUS    VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS    AGE
gcs-pvc1    Bound     pvc-81a6ba81d08111e8   1Gi        RWX            glusterfs-csi   3h48m
gcs-pvc10   Bound     pvc-9b9e0860d08111e8   1Gi        RWX            glusterfs-csi   3h48m
gcs-pvc11   Bound     pvc-9bd2d6e4d08111e8   1Gi        RWX            glusterfs-csi   3h48m
gcs-pvc12   Bound     pvc-9c0b870ed08111e8   1Gi        RWX            glusterfs-csi   3h48m
gcs-pvc13   Bound     pvc-9c47638ad08111e8   1Gi        RWX            glusterfs-csi   3h48m
gcs-pvc14   Bound     pvc-9c8c2571d08111e8   1Gi        RWX            glusterfs-csi   3h47m
gcs-pvc15   Bound     pvc-9ccc5abbd08111e8   1Gi        RWX            glusterfs-csi   3h47m
gcs-pvc16   Pending                                                    glusterfs-csi   3h47m
gcs-pvc17   Bound     pvc-9d45447bd08111e8   1Gi        RWX            glusterfs-csi   3h47m
gcs-pvc18   Bound     pvc-9d867c62d08111e8   1Gi        RWX            glusterfs-csi   3h47m
gcs-pvc19   Pending                                                    glusterfs-csi   3h47m
gcs-pvc2    Bound     pvc-81d695ead08111e8   1Gi        RWX            glusterfs-csi   3h48m
gcs-pvc20   Pending                                                    glusterfs-csi   3h47m
gcs-pvc21   Pending                                                    glusterfs-csi   3h47m
gcs-pvc22   Pending                                                    glusterfs-csi   3h47m
gcs-pvc23   Pending                                                    glusterfs-csi   3h47m
gcs-pvc24   Pending                                                    glusterfs-csi   3h47m
gcs-pvc25   Pending                                                    glusterfs-csi   3h47m
gcs-pvc26   Pending                                                    glusterfs-csi   3h47m
gcs-pvc27   Pending                                                    glusterfs-csi   3h47m
gcs-pvc28   Pending                                                    glusterfs-csi   3h47m
gcs-pvc29   Pending                                                    glusterfs-csi   3h47m
gcs-pvc3    Bound     pvc-99c94050d08111e8   1Gi        RWX            glusterfs-csi   3h48m
gcs-pvc30   Pending                                                    glusterfs-csi   3h47m
gcs-pvc31   Pending                                                    glusterfs-csi   3h47m
gcs-pvc32   Pending                                                    glusterfs-csi   3h47m
gcs-pvc33   Pending                                                    glusterfs-csi   3h47m
gcs-pvc34   Pending                                                    glusterfs-csi   3h47m
gcs-pvc35   Pending                                                    glusterfs-csi   3h47m
gcs-pvc36   Pending                                                    glusterfs-csi   3h47m
gcs-pvc37   Pending                                                    glusterfs-csi   3h47m
gcs-pvc38   Pending                                                    glusterfs-csi   3h47m
gcs-pvc39   Pending                                                    glusterfs-csi   3h47m
gcs-pvc4    Bound     pvc-9a00a42ad08111e8   1Gi        RWX            glusterfs-csi   3h48m
gcs-pvc40   Pending                                                    glusterfs-csi   3h47m
gcs-pvc41   Bound     pvc-a2874d39d08111e8   1Gi        RWX            glusterfs-csi   3h47m
gcs-pvc42   Pending                                                    glusterfs-csi   3h47m
gcs-pvc43   Pending                                                    glusterfs-csi   3h47m
gcs-pvc44   Pending                                                    glusterfs-csi   3h47m
gcs-pvc45   Pending                                                    glusterfs-csi   3h47m
gcs-pvc46   Pending                                                    glusterfs-csi   3h47m
gcs-pvc47   Pending                                                    glusterfs-csi   3h47m
gcs-pvc48   Pending                                                    glusterfs-csi   3h47m
gcs-pvc49   Pending                                                    glusterfs-csi   3h47m
gcs-pvc5    Bound     pvc-9a41b36dd08111e8   1Gi        RWX            glusterfs-csi   3h48m
gcs-pvc50   Pending                                                    glusterfs-csi   3h47m
gcs-pvc51   Pending                                                    glusterfs-csi   3h47m
gcs-pvc52   Pending                                                    glusterfs-csi   3h47m
gcs-pvc53   Pending                                                    glusterfs-csi   3h47m
gcs-pvc54   Pending                                                    glusterfs-csi   3h47m
gcs-pvc55   Pending                                                    glusterfs-csi   3h47m
gcs-pvc56   Pending                                                    glusterfs-csi   3h47m
gcs-pvc57   Pending                                                    glusterfs-csi   3h47m
gcs-pvc58   Pending                                                    glusterfs-csi   3h47m
gcs-pvc59   Pending                                                    glusterfs-csi   3h47m
gcs-pvc6    Bound     pvc-9a873c66d08111e8   1Gi        RWX            glusterfs-csi   3h48m
gcs-pvc60   Pending                                                    glusterfs-csi   3h47m
gcs-pvc61   Pending                                                    glusterfs-csi   3h47m
gcs-pvc62   Pending                                                    glusterfs-csi   3h47m
gcs-pvc63   Pending                                                    glusterfs-csi   3h47m
gcs-pvc64   Pending                                                    glusterfs-csi   3h47m
gcs-pvc65   Pending                                                    glusterfs-csi   3h47m
gcs-pvc66   Pending                                                    glusterfs-csi   3h47m
gcs-pvc67   Pending                                                    glusterfs-csi   3h47m
gcs-pvc68   Pending                                                    glusterfs-csi   3h47m
gcs-pvc69   Pending                                                    glusterfs-csi   3h47m
gcs-pvc7    Bound     pvc-9ac3f85dd08111e8   1Gi        RWX            glusterfs-csi   3h48m
gcs-pvc70   Pending                                                    glusterfs-csi   3h47m
gcs-pvc71   Pending                                                    glusterfs-csi   3h47m
gcs-pvc72   Pending                                                    glusterfs-csi   3h47m
gcs-pvc73   Pending                                                    glusterfs-csi   3h47m
gcs-pvc74   Pending                                                    glusterfs-csi   3h47m
gcs-pvc75   Pending                                                    glusterfs-csi   3h47m
gcs-pvc76   Pending                                                    glusterfs-csi   3h47m
gcs-pvc77   Pending                                                    glusterfs-csi   3h47m
gcs-pvc78   Pending                                                    glusterfs-csi   3h47m
gcs-pvc79   Pending                                                    glusterfs-csi   3h47m
gcs-pvc8    Bound     pvc-9b17b401d08111e8   1Gi        RWX            glusterfs-csi   3h48m
gcs-pvc80   Pending                                                    glusterfs-csi   3h47m
gcs-pvc81   Pending                                                    glusterfs-csi   3h47m
gcs-pvc82   Pending                                                    glusterfs-csi   3h47m
gcs-pvc83   Pending                                                    glusterfs-csi   3h47m
gcs-pvc84   Pending                                                    glusterfs-csi   3h47m
gcs-pvc85   Pending                                                    glusterfs-csi   3h47m
gcs-pvc86   Pending                                                    glusterfs-csi   3h47m
gcs-pvc87   Pending                                                    glusterfs-csi   3h47m
gcs-pvc88   Pending                                                    glusterfs-csi   3h47m
gcs-pvc89   Pending                                                    glusterfs-csi   3h47m
gcs-pvc9    Bound     pvc-9b50b296d08111e8   1Gi        RWX            glusterfs-csi   3h48m
gcs-pvc90   Pending                                                    glusterfs-csi   3h47m
gcs-pvc91   Pending                                                    glusterfs-csi   3h47m
gcs-pvc92   Pending                                                    glusterfs-csi   3h47m
gcs-pvc93   Pending                                                    glusterfs-csi   3h47m
gcs-pvc94   Pending                                                    glusterfs-csi   3h47m
gcs-pvc95   Pending                                                    glusterfs-csi   3h47m
gcs-pvc96   Pending                                                    glusterfs-csi   3h47m
gcs-pvc97   Pending                                                    glusterfs-csi   3h47m
gcs-pvc98   Pending                                                    glusterfs-csi   3h47m
gcs-pvc99   Pending                                                    glusterfs-csi   3h47m

Expected behavior
All 100 PVCs should be bound, and all GD2, etcd, and CSI driver pods should be in the Running state.

Move Dockerfile to root & investigate multi-stage build

Describe the feature you'd like to have.
Buildah wants the Dockerfile to be at the root, so in order for the CSI driver to support buildah in addition to docker for container builds, we need to move the Dockerfile. As a part of this, we should also look at introducing multi-stage container builds.

What is the value to the end user? (why is it a priority?)
Supporting both docker & buildah enables users to select their desired build tool, and particularly w/ buildah, builds can be done w/o root privileges.
Additionally, by moving to a fully containerized build process, users & devs don't need to worry about the specifics of configuring their build environment (dep and other utilities). Only a working docker/buildah would be necessary. This also means the container repositories would be able to build minimal containers from source as opposed to having to choose between (1) end containers w/ the build tools in them or (2) having a CI job push a pre-built container.

How will we know we have a good solution? (acceptance criteria)

  • Verify buildah requirements (Dockerfile placement)
  • Verify we can have multiple Dockerfiles in root (for addl csi drivers)
  • Dockerfile moved
  • Multi-stage build that incorporates the current make and docker build steps
  • Preserve method for generating container w/ a driver and dependencies from local machine (to support dev workflow)
  • Docs updated for build process

Additional context
Originates from discussion started here: #33 (comment)

New PVCs stay in Pending once a glusterd2 pod restarts

Steps to reproduce:

[vagrant@kube1 ~]$ 
[vagrant@kube1 ~]$ kubectl get pods -n gcs
NAME                                   READY     STATUS    RESTARTS   AGE
csi-attacher-glusterfsplugin-0         2/2       Running   0          8m
csi-nodeplugin-glusterfsplugin-k29qv   2/2       Running   0          8m
csi-nodeplugin-glusterfsplugin-lk2ph   2/2       Running   0          8m
csi-nodeplugin-glusterfsplugin-pg87p   2/2       Running   0          8m
csi-provisioner-glusterfsplugin-0      2/2       Running   0          8m
etcd-9cjrsjmqwl                        1/1       Running   0          10m
etcd-cdb4lkvrnz                        1/1       Running   0          9m
etcd-operator-989bf8569-b8tp9          1/1       Running   0          11m
etcd-wx6nxptwdw                        1/1       Running   0          10m
glusterd2-cluster-448fp                1/1       Running   0          9m
glusterd2-cluster-4dw62                1/1       Running   0          9m
glusterd2-cluster-dmdxt                1/1       Running   0          9m
[vagrant@kube1 ~]$ 
[vagrant@kube1 ~]$ kubectl delete pod glusterd2-cluster-448fp --grace-period=0 -n gcs
pod "glusterd2-cluster-448fp" deleted
[vagrant@kube1 ~]$ 
[vagrant@kube1 ~]$ kubectl get pods -n gcs
NAME                                   READY     STATUS              RESTARTS   AGE
csi-attacher-glusterfsplugin-0         2/2       Running             0          10m
csi-nodeplugin-glusterfsplugin-k29qv   2/2       Running             0          10m
csi-nodeplugin-glusterfsplugin-lk2ph   2/2       Running             0          10m
csi-nodeplugin-glusterfsplugin-pg87p   2/2       Running             0          10m
csi-provisioner-glusterfsplugin-0      2/2       Running             0          10m
etcd-9cjrsjmqwl                        1/1       Running             0          12m
etcd-cdb4lkvrnz                        1/1       Running             0          11m
etcd-operator-989bf8569-b8tp9          1/1       Running             0          13m
etcd-wx6nxptwdw                        1/1       Running             0          11m
glusterd2-cluster-4dw62                1/1       Running             0          10m
glusterd2-cluster-dmdxt                1/1       Running             0          10m
glusterd2-cluster-qv7l5                0/1       ContainerCreating   0          8s
[vagrant@kube1 ~]$ kubectl get pods -n gcs
NAME                                   READY     STATUS    RESTARTS   AGE
csi-attacher-glusterfsplugin-0         2/2       Running   0          10m
csi-nodeplugin-glusterfsplugin-k29qv   2/2       Running   0          10m
csi-nodeplugin-glusterfsplugin-lk2ph   2/2       Running   0          10m
csi-nodeplugin-glusterfsplugin-pg87p   2/2       Running   0          10m
csi-provisioner-glusterfsplugin-0      2/2       Running   0          10m
etcd-9cjrsjmqwl                        1/1       Running   0          12m
etcd-cdb4lkvrnz                        1/1       Running   0          11m
etcd-operator-989bf8569-b8tp9          1/1       Running   0          13m
etcd-wx6nxptwdw                        1/1       Running   0          12m
glusterd2-cluster-4dw62                1/1       Running   0          10m
glusterd2-cluster-dmdxt                1/1       Running   0          10m
glusterd2-cluster-qv7l5                1/1       Running   0          16s
[vagrant@kube1 ~]$ 
[vagrant@kube1 ~]$ vi pvc.yaml 
[vagrant@kube1 ~]$ kubectl create -f pvc.yaml 
persistentvolumeclaim/gcs-pvc2 created
[vagrant@kube1 ~]$ 
[vagrant@kube1 ~]$ kubectl get pvc
NAME       STATUS    VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS    AGE
gcs-pvc1   Bound     pvc-16423ba3c7f111e8   2Gi        RWX            glusterfs-csi   4m
gcs-pvc2   Pending                                                    glusterfs-csi   10s
[vagrant@kube1 ~]$ 

Actual results
New PVCs created after the restart go into the Pending state.

Expected behavior
New PVCs shouldn't remain in the Pending state.

Additional context

Add efficient volume cloning support

Describe the feature you'd like to have.
Support for the new CLONE_VOLUME operation, which can take a VolumeSource (not snapshot) as origin to clone. This will be used by kubevirt/cdi.

What is the value to the end user? (why is it a priority?)
Efficient cloning is an important feature for KubeVirt, and we would like Gluster to be one of the commonly used storage backends.

How will we know we have a good solution? (acceptance criteria)
Standard CSI CLONE_VOLUME requests should be handled correctly.

Reduce complexity of CreateVolume and NodePublishVolume

Describe the bug
CreateVolume and NodePublishVolume have a high complexity according to gocyclo. They should be simplified.

Steps to reproduce
Steps to reproduce the behavior:

  1. Run gocyclo on them using gometalinter

Actual results

pkg/glusterfs/controllerserver.go:33::warning: cyclomatic complexity 12 of function (*ControllerServer).CreateVolume() is high (> 10) (gocyclo)
pkg/glusterfs/nodeserver.go:35::warning: cyclomatic complexity 13 of function (*NodeServer).NodePublishVolume() is high (> 10) (gocyclo)

Expected behavior
These functions should no longer trigger gocyclo warnings

Additional context
The warnings on these functions have been silenced (via code comments) to permit Travis-ci to pass.
Depends on #53

Implement volume options

Describe the feature you'd like to have.
Implement the ability to define the set of volume options as specified in the StorageClass: parameters.volumeOptions.

What is the value to the end user? (why is it a priority?)
Depending on the workload, it is desirable to have different options set on the Gluster volumes. Some examples include setting custom cache sizes or enabling/disabling translators for the volume.

How will we know we have a good solution? (acceptance criteria)

  • At volume create, the driver will provision the volume and ensure all options passed in via parameters.volumeOptions.[*] are set on the newly created volume.

Additional context
Unless we can come up w/ a good reason, I suggest a straight translation of key=value into the equivalent of gluster vol set volname key value without trying to otherwise manipulate or constrain the list.
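
For illustration only (this parameters.volumeOptions interface is the proposal, not current driver behavior, and the option values are just examples), such a StorageClass might look like:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-csi-tuned
provisioner: org.gluster.glusterfs
parameters:
  volumeOptions.performance.cache-size: "256MB"
  volumeOptions.features.shard: "on"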

Documentation: request flow

Describe the feature you'd like to have.
The repo should have a document describing:

  • the request flow for all CSI calls
  • how API endpoints are located for volume ops and mounting

What is the value to the end user? (why is it a priority?)
This is primarily for the benefit of new developers or those not familiar with the code. It will allow them to understand the motivation for the various Services used and how they permit the gluster servers to be located. This will be helpful for extending the existing driver, adding new drivers, and for troubleshooting.

How will we know we have a good solution? (acceptance criteria)

  • Each CSI call should be enumerated with the other entities that are contacted and the general request flow for servicing that type of request.
  • All Services, Deployments, and other kube objects that are recommended when deploying the CSI driver should be described with their purpose.

Be able to configure disperse (ec) volumes

Describe the feature you'd like to have.
In addition to having replicated volumes, it should be possible to provision erasure coded volumes via parameters.volumeType.type and parameters.volumeType.disperse.*

What is the value to the end user? (why is it a priority?)
Users may prefer to make the capacity/performance trade-off that is available with disperse volumes.

How will we know we have a good solution? (acceptance criteria)

  • When parameters.volumeType.type has a value of disperse, an EC volume will be created
  • parameters.volumeType.disperse.data can be used to change the number of data fragments, defaulting to 4
  • parameters.volumeType.disperse.redundancy can be used to change the number of checksum fragments, defaulting to 2.

Additional context

  • parameters.volumeType.type should default to replicate, not disperse

Ref: #30
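
A sketch of what such a StorageClass could look like under the proposed parameter names (these keys are the proposal from #30, not implemented behavior):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-csi-disperse
provisioner: org.gluster.glusterfs
parameters:
  volumeType.type: "disperse"
  volumeType.disperse.data: "4"
  volumeType.disperse.redundancy: "2"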

Add georep fields to SC

Describe the feature you'd like to have.
The StorageClass design proposed in #30 lacks a way of specifying that the volume type should be georeplicated. Decide what fields should be configurable and how they should be exposed.

What is the value to the end user? (why is it a priority?)
Admins would like to be able to replicate Gluster-backed PVs to other Gluster clusters. The target cluster could be GCS converged, GCS independent, or standalone Gluster.

How will we know we have a good solution? (acceptance criteria)

  • The fields exposed in the SC aren't redundant w/ other configuration
  • The georep configuration for the volume can be fully expressed in the SC (nothing needs to be done to the PVC/PV or actual volume once created)
  • Fields in the SC should assume that a cluster-level configuration is handled elsewhere

Additional context
The georep relationship (source cluster to target cluster) is at the cluster-level, so it should not be exposed or configured here. Instead, we should only refer to the peering relationship at the level of the SC:

  • At the cluster level, there would be configuration: ThisCluster can replicate to ThatCluster (at addr X on port Y)
  • The SC should just refer to "ThatCluster", ensuring that address and credential changes can be handled exactly once for the whole cluster.
  • The cluster-level configuration is out of scope for this item.

Depends on #30

Any available proposal?

Hi, it's been a while since the Kubernetes CSI spec was released; I just want to know the progress of GlusterFS CSI support. A few questions:

  1. Will the gluster CSI driver continue to use heketi as the API server?
    In fact, after using the in-tree plugin, we found heketi has some problems for production environments, such as HA, a single point of failure in its DB, a two-step deployment, no snapshot support, etc.
    And from an architectural perspective, introducing third-party controls makes the structure no longer simple.
  2. If we want to contribute, where should contributors discuss problems such as design proposals or feature ideas?

Default storage classes

Describe the feature you'd like to have.
There should be a well-considered set of default storage classes that can optionally be installed with the CSI driver.

What is the value to the end user? (why is it a priority?)
By providing a default set of StorageClass objects, a new user can get up-and-running more easily. The SCs serve as documentation of the current best practice for various workloads, and they also provide working examples that the user can customize.

How will we know we have a good solution? (acceptance criteria)

  • We have a vetted set of StorageClass objects for a couple common workloads
  • The SCs can be easily added to a cluster at install time
  • One of them can optionally be made the default for the cluster

Additional context
This item will have interactions with the operator and the install method that is chosen.

Gluster volume naming based on PVC name

Describe the feature you'd like to have.
It should be possible to propagate storage application context down to gluster. OCS 3.10 added the ability to influence the name of the underlying gluster volume based on the PVC that was used to provision it. This happens by specifying a "custom prefix" in the StorageClass, and based upon that, the underlying gluster volume that backs the PVC takes the form of <prefix>-<pvc_namespace>-<pvc_name>-UUID.

What is the value to the end user? (why is it a priority?)
This feature is required to assist administrators that are trying to use traditional backup software with kubernetes-based storage. They need a way, from outside the kube cluster, to locate the data that a containerized application uses. By having custom naming, glustercli volume list could be used to locate the volumes and mount them with a legacy backup application.

How will we know we have a good solution? (acceptance criteria)

  • Given information about an application running in the kubernetes environment, it should be possible to locate the Gluster volume(s) that it uses.
  • The backup host shouldn't need to directly query the kube API
  • The solution should work efficiently even with 1000s of volumes
  • It needs to be able to recognize scenarios where a PVC is deleted and recreated with the same name/ns.
  • These capabilities should be applicable to restore as well.

Additional context

  • Custom naming of the volume is one option for tying application context to the volume. It is also possible to use volume-level tags in GD2.
  • Getting access to the PVC name/namespace may require an upcall by the CSI driver to kube, breaking the multi-orchestrator nature.
  • Depending on where the PV name gets created, it may be possible to do this with modified kube/csi sidecar containers
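
As a purely hypothetical illustration of the custom-prefix approach (the volumeNamePrefix parameter below does not exist in this driver; it merely mirrors the OCS 3.10 behavior described above):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-csi-named
provisioner: org.gluster.glusterfs
parameters:
  volumeNamePrefix: "acct"
# A PVC "db-data" in namespace "billing" would then back a gluster volume
# named along the lines of acct-billing-db-data-<uuid>.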

RFC: Driver and image versioning

The CSI driver(s) will be versioned via proper "releases". I'd like to generate consensus about that process and the corresponding container images. This discussion is triggered by a number of issues:

  • The changelog file that is staged in #11
  • The operator will need to grab well-defined container versions
  • We need to have auto-built container images that can easily be correlated with the git commit from which they originate

Proposal

  • Use Semantic versioning for versioning releases as appropriate
  • Releases are handled via branch and tag in github
  • Container images are tagged as follows:
    • Current github master is built to image:latest
    • Tagged releases x.y.z are build to image:x.y.z
    • Additional moving image tags: image:x and image:x.y should also be provided
  • Container image builds should be auto triggered via git hooks to quay and/or docker hub
  • Image builds should use Docker and Buildah compatible multi-stage builds to minimize the image size
    • A Travis job should be able to ensure we remain compatible with both docker build and buildah bud
  • Release notes should clearly state the version(s) of CSI and GD2 it is compatible with

Implications for downstream

  • FROM scratch isn't currently supported, but we can easily s// the 2nd stage FROM line to be a minimal image
  • I don't see a reason to provide RPMs, so direct image build should be sufficient
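
As a sketch of the moving-tag part of the proposal (assuming releases are tagged plainly as x.y.z), the push step could derive all three image tags from a single release tag:

VERSION="$(git describe --tags --abbrev=0)"   # e.g. 1.2.3
MINOR="${VERSION%.*}"                         # 1.2
MAJOR="${VERSION%%.*}"                        # 1
for tag in "$VERSION" "$MINOR" "$MAJOR"; do
  docker tag gluster-csi-driver:build "gluster/gluster-csi-driver:$tag"
  docker push "gluster/gluster-csi-driver:$tag"
done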

mount flag to log to stdout to be provided

By default, the current gluster plugin in Kubernetes and OpenShift logs under /var/log/openshift, etc. We should provide the capability to change the log file and/or a way to show the logs on stdout.
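
For reference, the glusterfs fuse client already accepts log-related mount options, so the driver could pass something along these lines when it mounts a volume (the exact flags are still to be decided; the server:volume pair and target path are copied from the earlier mount example for illustration):

mount -t glusterfs -o log-level=INFO,log-file=/dev/stdout 192.168.121.158:pvc-953d21f5a51311e8 /target/path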

Remove server & backup servers from volume response

Access to GD2 should be via a Service (potentially headless) that redirects to the available GD2 servers in the cluster. This means that REST calls for provisioning can go to that service name, as can mount requests.

This means that neither the server nor the backup server list is necessary in the volume response RPCs, as the CSI driver knows the DNS name of the Service when it is started.

Additional benefit is that as the cluster members change, the service membership gets updated, and everything stays valid.
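
A headless Service along these lines would give both the provisioner and the mount path a stable DNS name (the label selector, namespace, and port are assumptions about the GD2 deployment, not verified values):

apiVersion: v1
kind: Service
metadata:
  name: glusterd2
  namespace: gcs
spec:
  clusterIP: None
  selector:
    app.kubernetes.io/name: glusterd2
  ports:
  - name: rest
    port: 24007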

Properly document exported functions

Describe the bug
Some golang exported functions are not properly documented, causing golint to fail.

Steps to reproduce
Steps to reproduce the behavior:

  1. run gometalinter -j4 --sort=path --sort=line --sort=column --deadline 1h --vendor --disable-all --enable golint ./...

Actual results
golint returns errors:

pkg/glusterfs/controllerserver.go:27:6:warning: exported type ControllerServer should have comment or be unexported (golint)
pkg/glusterfs/controllerserver.go:377:1:warning: comment on exported method ControllerServer.CreateSnapshot should be of the form "CreateSnapshot ..." (golint)
pkg/glusterfs/controllerserver.go:382:1:warning: comment on exported method ControllerServer.DeleteSnapshot should be of the form "DeleteSnapshot ..." (golint)
pkg/glusterfs/controllerserver.go:387:1:warning: comment on exported method ControllerServer.ListSnapshots should be of the form "ListSnapshots ..." (golint)
pkg/glusterfs/driver.go:16:1:warning: comment on exported type GfDriver should be of the form "GfDriver ..." (with optional leading article) (golint)
pkg/glusterfs/driver.go:45:1:warning: exported function NewControllerServer should have comment or be unexported (golint)
pkg/glusterfs/driver.go:51:1:warning: exported function NewNodeServer should have comment or be unexported (golint)
pkg/glusterfs/driver.go:57:1:warning: exported function NewidentityServer should have comment or be unexported (golint)
pkg/glusterfs/driver.go:63:1:warning: exported method GfDriver.Run should have comment or be unexported (golint)
pkg/glusterfs/identityserver.go:10:6:warning: exported type IdentityServer should have comment or be unexported (golint)
pkg/glusterfs/nodeserver.go:17:6:warning: exported type NodeServer should have comment or be unexported (golint)
pkg/glusterfs/nodeserver.go:131:1:warning: exported method NodeServer.NodeGetId should have comment or be unexported (golint)
pkg/glusterfs/nodeserver.go:131:23:warning: method NodeGetId should be NodeGetID (golint)

Expected behavior
golint should succeed, and all exported symbols should be properly documented or unexported. No exceptions.

Additional context
As a workaround, in PR #53 I have added stub comments to permit the CI job to pass.
Depends on #53

Automate lgtm and approve process in this repo via bots.

It would be awesome if we could enable a GitHub bot which supports actions like the ones below:

  • lgtm label auto-applied once we have a "lgtm" comment on the PR
  • approved label applied once we have an "approve" comment
  • auto-merge of the PR once it has both lgtm and approved labels
  • ...etc.

Add Travis-ci

Describe the feature you'd like to have.
Travis-ci should be enabled for this repo to handle pre-merge checks. Travis is much more lightweight (and commonly used) than centos-ci. I would like to have travis for linting and unit tests, reserving centos-ci for e2e testing.

What is the value to the end user? (why is it a priority?)
Robust linting and unit tests via travis provide an easily visible and configurable method for ensuring a base level of quality is maintained. Additionally, Travis tests are executed on all PRs w/o waiting for admin approval. This provides immediate feedback to contributors, allowing them to refine their code without committer involvement.

How will we know we have a good solution? (acceptance criteria)

  • Linting
    • Shell scripts
    • markdown
    • yaml
  • Compilation with stable and latest golang
  • gometalinter
    • This seems to include the reasonable set we would want to run
  • Test build of the container(s) with docker and buildah
  • Documentation of the Travis setup

Additional context
The above checklist is largely derived from the existing centos-ci tests
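
A starting point for the Travis configuration could look roughly like this (the Go version selectors and linter invocations are placeholders for whatever the repo standardizes on):

language: go
go:
  - 1.x
  - tip
services:
  - docker
script:
  - gometalinter -j4 --deadline 1h --vendor ./...
  - go test ./...
  - docker build -t gluster-csi-driver:test .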

PV and volume naming from PVC

Describe the feature you'd like to have.
The Gluster volume name that backs a PV/PVC should be easily recognizable, given the PVC name and namespace.

What is the value to the end user? (why is it a priority?)
Some organizations have defined IT processes wherein they need to be able to access data that resides within a PV from outside the kube cluster. One example of this is an IT department that uses a legacy (non-containerized) backup application for backing up containerized application data. In these cases, they need to be able to easily locate the Gluster volume that holds the data for an application.

How will we know we have a good solution? (acceptance criteria)

  • If an admin knows the name of the PVC and the Namespace within which it resides, they should be able to easily locate the correct Gluster volume by querying GD2 directly.
  • There must be a way to ensure there are no collisions in volume names. This is a concern primarily if the PVC name is used to set the volume name
  • These same capabilities should apply to snapshots (once implemented)

Additional context

  • It's not clear if this is possible today, given just the CSI interface.
  • One implementation may be to influence the volume name, but GD2 volume-level tags could also be used.
  • This feature currently exists for Heketi-based deployments

Handle volume already started/stopped error from glusterd2

Describe the bug
Take care of multiple volume start and stop API calls for the same volume, as listed in the GCS issue.

In the current code, a "volume already stopped/started" error from glusterd2 is treated as an error case: we do not go ahead and delete an already-stopped volume, nor do we send a success response back to Kubernetes from the CSI side for an already-started volume.

PVC deletion hangs (Terminating status) when the PVC is in a mounted state

Describe the bug
When trying to delete a PVC that is mounted to an app pod with I/O running on the mount point, the deletion does not return in the terminal. After pressing Ctrl+C and checking the PVC status, the PVC is in the Terminating state.

Steps to reproduce

-> Create a PVC (auto-provisioned volume).
-> Mount the PVC to some app pod.
-> Run I/O on the mount point.
-> Try to delete the PVC from the Kubernetes master using 'kubectl delete pvc/<pvc-name>'; it prints a message that the PVC was deleted, but the command does not return.

$ kubectl delete pvc/glusterfs-csi-pv1
persistentvolumeclaim "glusterfs-csi-pv1" deleted

ctrl+c

-> After pressing Ctrl+C, the PVC status shows Terminating.

$ kubectl get pvc
NAME                STATUS        VOLUME                 CAPACITY   ACCESS MODES   STORAGECLASS    AGE
glusterfs-csi-pv1   Terminating   pvc-fa591562bfe011e8   2Gi        RWX            glusterfs-csi   20h

-> Deleted the app pod where the PVC is mounted.

Actual results
Deleting a PVC that is mounted to an app pod with active I/O on the mount point does not return from the terminal; after Ctrl+C, the PVC status is Terminating.

Expected behavior
PVC deletion should report an error saying that the PVC is still in a mounted state.

Single PVC goes into Pending state after continuous creation and deletion

A PVC goes into Pending state after creating and deleting a single PVC in a loop (a reproduction sketch follows the output below).

[vagrant@kube1 ~]$ kubectl describe pvc 
Name:          gcs-pvc1
Namespace:     default
StorageClass:  glusterfs-csi
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: org.gluster.glusterfs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
Events:
  Type       Reason                Age               From                         Message
  ----       ------                ----              ----                         -------
  Normal     ExternalProvisioning  8s (x3 over 20s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "org.gluster.glusterfs" or manually created by system administrator
Mounted By:  <none>
[vagrant@kube1 ~]$ 
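
A minimal reproduction sketch of the create/delete loop, assuming a pvc.yaml like the one used elsewhere in this document; the iteration count is arbitrary:

[vagrant@kube1 ~]$ for i in $(seq 1 20); do kubectl create -f pvc.yaml; kubectl delete -f pvc.yaml; done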

PVCs are in Pending state

Describe the bug
Tried to create PVCs on my latest setup; the PVCs are not getting bound and all are in Pending state.

Steps to reproduce
-> Deploy gcs setup using vagrant
-> Create PVC
-> PVCs are in pending state

Actual results

[vagrant@kube2 ~]$ kubectl get pvc
NAME      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
gcs-pv1   Pending                                      glusterfs-csi   160m
gcs-pv2   Pending                                      glusterfs-csi   155m

Expected behavior
PVCs should be bound.

Additional context
CSI driver logs:

I1121 08:48:46.203230       1 csi-provisioner.go:82] Version: v0.4.0-rc.1-5-gf2ffed45
I1121 08:48:46.203703       1 csi-provisioner.go:96] Building kube configs for running in cluster...
I1121 08:49:20.280015       1 controller.go:572] Starting provisioner controller org.gluster.glusterfs_csi-provisioner-glusterfsplugin-0_562b4957-ed6a-11e8-b364-0a580ae94209!
I1121 08:49:20.380319       1 controller.go:621] Started provisioner controller org.gluster.glusterfs_csi-provisioner-glusterfsplugin-0_562b4957-ed6a-11e8-b364-0a580ae94209!
I1122 07:05:28.439306       1 controller.go:927] provision "default/gcs-pv1" class "glusterfs-csi": started
I1122 07:05:28.465292       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162308", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv1"
W1122 07:05:28.491696       1 controller.go:686] Retrying syncing claim "default/gcs-pv1" because failures 0 < threshold 15
E1122 07:05:28.491757       1 controller.go:701] error syncing claim "default/gcs-pv1": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:05:28.491813       1 controller.go:927] provision "default/gcs-pv1" class "glusterfs-csi": started
I1122 07:05:28.492031       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162308", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:05:28.497678       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv1"
W1122 07:05:28.510991       1 controller.go:686] Retrying syncing claim "default/gcs-pv1" because failures 1 < threshold 15
E1122 07:05:28.511030       1 controller.go:701] error syncing claim "default/gcs-pv1": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:05:28.511073       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:05:43.492032       1 controller.go:927] provision "default/gcs-pv1" class "glusterfs-csi": started
I1122 07:05:43.503528       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv1"
W1122 07:05:43.518185       1 controller.go:686] Retrying syncing claim "default/gcs-pv1" because failures 2 < threshold 15
E1122 07:05:43.518239       1 controller.go:701] error syncing claim "default/gcs-pv1": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:05:43.518854       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:06:43.518560       1 controller.go:927] provision "default/gcs-pv1" class "glusterfs-csi": started
I1122 07:06:43.528056       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv1"
W1122 07:06:43.540386       1 controller.go:686] Retrying syncing claim "default/gcs-pv1" because failures 3 < threshold 15
E1122 07:06:43.540429       1 controller.go:701] error syncing claim "default/gcs-pv1": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:06:43.540460       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:08:43.540728       1 controller.go:927] provision "default/gcs-pv1" class "glusterfs-csi": started
I1122 07:08:43.554139       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv1"
W1122 07:08:43.566193       1 controller.go:686] Retrying syncing claim "default/gcs-pv1" because failures 4 < threshold 15
E1122 07:08:43.566238       1 controller.go:701] error syncing claim "default/gcs-pv1": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:08:43.566267       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:10:34.595901       1 controller.go:927] provision "default/gcs-pv2" class "glusterfs-csi": started
I1122 07:10:34.605677       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv2"
W1122 07:10:34.617679       1 controller.go:686] Retrying syncing claim "default/gcs-pv2" because failures 0 < threshold 15
E1122 07:10:34.617722       1 controller.go:701] error syncing claim "default/gcs-pv2": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:10:34.617754       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:10:49.617991       1 controller.go:927] provision "default/gcs-pv2" class "glusterfs-csi": started
I1122 07:10:49.630975       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv2"
W1122 07:10:49.647387       1 controller.go:686] Retrying syncing claim "default/gcs-pv2" because failures 1 < threshold 15
E1122 07:10:49.647454       1 controller.go:701] error syncing claim "default/gcs-pv2": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:10:49.648081       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:11:19.647736       1 controller.go:927] provision "default/gcs-pv2" class "glusterfs-csi": started
I1122 07:11:19.659540       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv2"
W1122 07:11:19.671616       1 controller.go:686] Retrying syncing claim "default/gcs-pv2" because failures 2 < threshold 15
E1122 07:11:19.671658       1 controller.go:701] error syncing claim "default/gcs-pv2": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:11:19.671693       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:12:19.671984       1 controller.go:927] provision "default/gcs-pv2" class "glusterfs-csi": started
I1122 07:12:19.687932       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv2"
W1122 07:12:19.702296       1 controller.go:686] Retrying syncing claim "default/gcs-pv2" because failures 3 < threshold 15
E1122 07:12:19.702392       1 controller.go:701] error syncing claim "default/gcs-pv2": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:12:19.702443       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:12:43.566466       1 controller.go:927] provision "default/gcs-pv1" class "glusterfs-csi": started
I1122 07:12:43.577625       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv1"
W1122 07:12:43.590578       1 controller.go:686] Retrying syncing claim "default/gcs-pv1" because failures 5 < threshold 15
E1122 07:12:43.590633       1 controller.go:701] error syncing claim "default/gcs-pv1": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:12:43.590667       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:14:19.702694       1 controller.go:927] provision "default/gcs-pv2" class "glusterfs-csi": started
I1122 07:14:19.719779       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv2"
W1122 07:14:19.736027       1 controller.go:686] Retrying syncing claim "default/gcs-pv2" because failures 4 < threshold 15
E1122 07:14:19.736091       1 controller.go:701] error syncing claim "default/gcs-pv2": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:14:19.736142       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:18:19.737509       1 controller.go:927] provision "default/gcs-pv2" class "glusterfs-csi": started
I1122 07:18:19.764458       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv2"
W1122 07:18:19.785888       1 controller.go:686] Retrying syncing claim "default/gcs-pv2" because failures 5 < threshold 15
E1122 07:18:19.785934       1 controller.go:701] error syncing claim "default/gcs-pv2": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:18:19.785972       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:20:43.590986       1 controller.go:927] provision "default/gcs-pv1" class "glusterfs-csi": started
I1122 07:20:43.603922       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv1"
W1122 07:20:43.618852       1 controller.go:686] Retrying syncing claim "default/gcs-pv1" because failures 6 < threshold 15
E1122 07:20:43.618928       1 controller.go:701] error syncing claim "default/gcs-pv1": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:20:43.619642       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:26:19.786305       1 controller.go:927] provision "default/gcs-pv2" class "glusterfs-csi": started
I1122 07:26:19.800442       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv2"
W1122 07:26:19.813135       1 controller.go:686] Retrying syncing claim "default/gcs-pv2" because failures 6 < threshold 15
E1122 07:26:19.813251       1 controller.go:701] error syncing claim "default/gcs-pv2": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:26:19.813310       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:36:43.619325       1 controller.go:927] provision "default/gcs-pv1" class "glusterfs-csi": started
I1122 07:36:43.630453       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv1"
W1122 07:36:43.647623       1 controller.go:686] Retrying syncing claim "default/gcs-pv1" because failures 7 < threshold 15
E1122 07:36:43.647694       1 controller.go:701] error syncing claim "default/gcs-pv1": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:36:43.648615       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:42:19.813653       1 controller.go:927] provision "default/gcs-pv2" class "glusterfs-csi": started
I1122 07:42:19.825415       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv2"
W1122 07:42:19.836274       1 controller.go:686] Retrying syncing claim "default/gcs-pv2" because failures 7 < threshold 15
E1122 07:42:19.836355       1 controller.go:701] error syncing claim "default/gcs-pv2": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:42:19.836400       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:53:23.648494       1 controller.go:927] provision "default/gcs-pv1" class "glusterfs-csi": started
I1122 07:53:23.660620       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv1"
W1122 07:53:23.671897       1 controller.go:686] Retrying syncing claim "default/gcs-pv1" because failures 8 < threshold 15
E1122 07:53:23.671951       1 controller.go:701] error syncing claim "default/gcs-pv1": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:53:23.672004       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:58:59.836656       1 controller.go:927] provision "default/gcs-pv2" class "glusterfs-csi": started
I1122 07:58:59.846826       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv2"
W1122 07:58:59.861668       1 controller.go:686] Retrying syncing claim "default/gcs-pv2" because failures 8 < threshold 15
E1122 07:58:59.861712       1 controller.go:701] error syncing claim "default/gcs-pv2": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 07:58:59.861751       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 08:10:03.672635       1 controller.go:927] provision "default/gcs-pv1" class "glusterfs-csi": started
I1122 08:10:03.684551       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv1"
W1122 08:10:03.702617       1 controller.go:686] Retrying syncing claim "default/gcs-pv1" because failures 9 < threshold 15
E1122 08:10:03.702671       1 controller.go:701] error syncing claim "default/gcs-pv1": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 08:10:03.702769       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 08:15:39.862018       1 controller.go:927] provision "default/gcs-pv2" class "glusterfs-csi": started
I1122 08:15:39.875169       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv2"
W1122 08:15:39.889053       1 controller.go:686] Retrying syncing claim "default/gcs-pv2" because failures 9 < threshold 15
E1122 08:15:39.889098       1 controller.go:701] error syncing claim "default/gcs-pv2": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 08:15:39.889144       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 08:26:43.703058       1 controller.go:927] provision "default/gcs-pv1" class "glusterfs-csi": started
I1122 08:26:43.709771       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv1"
W1122 08:26:43.729329       1 controller.go:686] Retrying syncing claim "default/gcs-pv1" because failures 10 < threshold 15
E1122 08:26:43.729399       1 controller.go:701] error syncing claim "default/gcs-pv1": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 08:26:43.729460       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 08:32:19.889643       1 controller.go:927] provision "default/gcs-pv2" class "glusterfs-csi": started
I1122 08:32:19.901712       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv2"
W1122 08:32:19.916880       1 controller.go:686] Retrying syncing claim "default/gcs-pv2" because failures 10 < threshold 15
E1122 08:32:19.916933       1 controller.go:701] error syncing claim "default/gcs-pv2": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 08:32:19.916981       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 08:43:23.729656       1 controller.go:927] provision "default/gcs-pv1" class "glusterfs-csi": started
I1122 08:43:23.741646       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv1"
W1122 08:43:23.757154       1 controller.go:686] Retrying syncing claim "default/gcs-pv1" because failures 11 < threshold 15
E1122 08:43:23.757244       1 controller.go:701] error syncing claim "default/gcs-pv1": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 08:43:23.758271       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 08:48:59.917198       1 controller.go:927] provision "default/gcs-pv2" class "glusterfs-csi": started
I1122 08:48:59.934080       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv2"
W1122 08:48:59.952943       1 controller.go:686] Retrying syncing claim "default/gcs-pv2" because failures 11 < threshold 15
E1122 08:48:59.953001       1 controller.go:701] error syncing claim "default/gcs-pv2": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 08:48:59.953740       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 09:00:03.757494       1 controller.go:927] provision "default/gcs-pv1" class "glusterfs-csi": started
I1122 09:00:03.766514       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv1"
W1122 09:00:03.780014       1 controller.go:686] Retrying syncing claim "default/gcs-pv1" because failures 12 < threshold 15
E1122 09:00:03.780060       1 controller.go:701] error syncing claim "default/gcs-pv1": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 09:00:03.780105       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 09:05:39.953330       1 controller.go:927] provision "default/gcs-pv2" class "glusterfs-csi": started
I1122 09:05:39.964360       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv2"
W1122 09:05:39.979472       1 controller.go:686] Retrying syncing claim "default/gcs-pv2" because failures 12 < threshold 15
E1122 09:05:39.979524       1 controller.go:701] error syncing claim "default/gcs-pv2": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 09:05:39.980074       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv2", UID:"b261821c-ee25-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162957", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 09:16:43.780421       1 controller.go:927] provision "default/gcs-pv1" class "glusterfs-csi": started
I1122 09:16:43.794844       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/gcs-pv1"
W1122 09:16:43.811517       1 controller.go:686] Retrying syncing claim "default/gcs-pv1" because failures 13 < threshold 15
E1122 09:16:43.811563       1 controller.go:701] error syncing claim "default/gcs-pv1": failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
I1122 09:16:43.812094       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"gcs-pv1", UID:"fe1751c2-ee24-11e8-9630-525400c27fe7", APIVersion:"v1", ResourceVersion:"162309", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "glusterfs-csi": rpc error: code = Internal desc = failed to create volume: invalid Volume Size, Minimum size required is 20971520
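
The "Minimum size required is 20971520" message corresponds to 20 MiB (20971520 bytes = 20 × 1024 × 1024), so the requested capacity in these PVCs appears to have been below GD2's minimum. A sketch of a claim that stays above that limit (the file and claim names are illustrative):

[vagrant@kube1 ~]$ cat pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gcs-pv1
spec:
  storageClassName: glusterfs-csi
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi   # anything at or above 20Mi avoids the "invalid Volume Size" error above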

Documentation: driver design

Describe the feature you'd like to have.
The design & modularity of the codebase should be documented.

What is the value to the end user? (why is it a priority?)
Proper documentation will permit a developer to make maximum re-use of existing code for additional driver types while also using robust, approved APIs so as to keep modularity and driver types isolated from one another.

How will we know we have a good solution? (acceptance criteria)

  • The driver-internal APIs should be fully documented (godoc)
  • There should be a single document that provides an overview of these internal interfaces and how they should be used. Providing examples of how the modularity maximizes re-use between GD2 and Heketi glusterfs drivers is highly desirable.

Additional context
This item will need to wait for refactoring for Heketi to be completed.

Able to write data on mount points of PVCs that have access mode ROX (ReadOnlyMany)

Describe the bug
Created a PVC with access mode ROX (ReadOnlyMany), then mounted it to 3 replication controller app pods. I am able to write data on the mount point. If a PVC has ReadOnly access, the mount point should not allow any data to be written.

Steps to reproduce
Steps to reproduce the behavior:

  1. Create a PVC with access mode ROX (ReadOnlyMany).
  2. Mount the PVC to 3 replication controller app pods.
  3. Start writing data on the mount point.

Actual results

Able to write data on the mount point.

Expected behavior

If the PVC has ReadOnly access, the mount point should not allow any data to be written.
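
For context (this is not the driver-side fix the issue asks for), Kubernetes can also request a read-only mount at the pod level; whether the driver actually enforces read-only is exactly what this bug is about. Pod, image, and claim names below are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: rox-test-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: data
      mountPath: /mnt/rox
      readOnly: true          # request a read-only mount at the pod level
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: rox-pvc      # PVC created with accessModes: [ReadOnlyMany]
      readOnly: true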

update latest image in docker-hub

I see a new image of glusterfs-csi-driver with tag v0.0.9 in Docker Hub, but the latest image is 2 months old. Are we updating the latest tag each time we make a new release?

v0.0.9            95 MB         3 days ago
latest            94 MB         2 months ago

If we are not updating the latest image, then we need to update the GCS repo to point to the new glusterfs-csi-driver image.
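
If the intent is to keep latest in sync with releases, the retag-and-push step is small; the registry/repository path below is a placeholder since the exact Docker Hub repository is not stated here:

[root@localhost]# docker tag <registry>/glusterfs-csi-driver:v0.0.9 <registry>/glusterfs-csi-driver:latest
[root@localhost]# docker push <registry>/glusterfs-csi-driver:latest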

Heketi-based CSI driver

Describe the feature you'd like to have.
There should be a CSI driver that works with Heketi so that users who have not yet moved to GD2 can benefit from CSI.

What is the value to the end user? (why is it a priority?)
The current gluster-file CSI driver is targeted at GD2, but many users have not started using gd2 in their environments. By having a CSI driver that targets Heketi, users can get the benefits of the standard CSI interface without also needing to migrate to the new gluster management daemon.

How will we know we have a good solution? (acceptance criteria)

  • The driver will support at least basic CSI functionality of (de-)provisioning & (un-)mounting.
  • The driver will build as a separate executable in a separate container from existing drivers so that it can be installed, upgraded, and patched independent of other CSI drivers in this repo.
  • There should be maximal sharing of code between this and the gd2 driver to gain benefits for bug elimination and testing.
  • The resulting code should not contain conditionals based on gd2 or heketi. Proper modularization and the separate entry points of the drivers should be sufficient to distinguish code paths.

Additional context

  • This driver and associated refactoring will be handled in a dedicated branch to be merged back periodically.
  • For CI testing, this depends on #53

Build badge should report master status

Describe the bug
The "build:passing" badge in the main README.md points to the current centos-ci status for the driver. Unfortunately, it doesn't report just the status of the master branch; it reports on the last CI job. That could have been a PR test that failed, and by reporting it on the main page, it gives the impression that our repo's master is broken.

Steps to reproduce
Steps to reproduce the behavior:

  1. Submit a PR that fails centos-ci
  2. Take a look at the badge in the main README for the repo

Actual results
Badge will show failed when master is fine

Expected behavior
Badge status should reflect master status

Additional context
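If the Travis CI proposal from the earlier issue lands, one common approach is to pin the badge to the master branch; the owner path below is a placeholder:

[![Build Status](https://travis-ci.org/<owner>/gluster-csi-driver.svg?branch=master)](https://travis-ci.org/<owner>/gluster-csi-driver)

Whether the existing centos-ci badge can be filtered to master in a similar way would need to be checked.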

All PVCs going into Pending state during peer add operations

  1. Create 2 gd2 pods in the GCS cluster.
  2. Create a PVC (it should go into Pending state, since a replica-3 volume cannot be created with only 2 peers).
[vagrant@kube1 ~]$ kubectl create  -f pvc.yaml 
persistentvolumeclaim/gcs-pvc1 created
[vagrant@kube1 ~]$ kubectl get pvc
NAME       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
gcs-pvc1   Pending                                      glusterfs-csi   32s
  3. Add the 3rd gd2 pod to the GCS cluster.
[root@gluster-kube1-0 /]# glustercli peer add gluster-kube1-0.glusterd2.gcs:24008 --endpoints=http://10.233.64.10:24007
Peer add successful
+--------------------------------------+-----------------+-------------------------------------+-------------------------------------+
|                  ID                  |      NAME       |          CLIENT ADDRESSES           |           PEER ADDRESSES            |
+--------------------------------------+-----------------+-------------------------------------+-------------------------------------+
| b693721c-3f32-4c46-aa6e-e98ba18112cc | gluster-kube1-0 | gluster-kube1-0.glusterd2.gcs:24007 | gluster-kube1-0.glusterd2.gcs:24008 |
+--------------------------------------+-----------------+-------------------------------------+-------------------------------------+
[root@gluster-kube1-0 /]# 
[root@gluster-kube1-0 /]# glustercli peer status --endpoints=http://10.233.64.10:24007
+--------------------------------------+-----------------+-------------------------------------+-------------------------------------+--------+-----+
|                  ID                  |      NAME       |          CLIENT ADDRESSES           |           PEER ADDRESSES            | ONLINE | PID |
+--------------------------------------+-----------------+-------------------------------------+-------------------------------------+--------+-----+
| 0e524ecc-cb3f-4b71-a051-07ee0135119f | gluster-kube2-0 | gluster-kube2-0.glusterd2.gcs:24007 | gluster-kube2-0.glusterd2.gcs:24008 | yes    |  24 |
| b693721c-3f32-4c46-aa6e-e98ba18112cc | gluster-kube1-0 | gluster-kube1-0.glusterd2.gcs:24007 | gluster-kube1-0.glusterd2.gcs:24008 | yes    |  22 |
| f003d36a-f584-403b-96b7-66865791b51b | gluster-kube3-0 | gluster-kube3-0.glusterd2.gcs:24007 | gluster-kube3-0.glusterd2.gcs:24008 | yes    |  22 |
+--------------------------------------+-----------------+-------------------------------------+-------------------------------------+--------+-----+
[root@gluster-kube1-0 /]# 
  4. Create a new PVC. The new PVC also goes into Pending state.
[vagrant@kube1 ~]$ kubectl create  -f pvc.yaml 
persistentvolumeclaim/gcs-pvc2 created
[vagrant@kube1 ~]$ 
[vagrant@kube1 ~]$ kubectl get pvc
NAME       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
gcs-pvc1   Pending                                      glusterfs-csi   2m42s
gcs-pvc2   Pending                                      glusterfs-csi   4s
[vagrant@kube1 ~]$ 
[vagrant@kube1 ~]$ kubectl get pvc 
NAME       STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS    AGE
gcs-pvc1   Pending                                      glusterfs-csi   10m
gcs-pvc2   Pending                                      glusterfs-csi   8m2s

  5. Check the cluster status.
[vagrant@kube1 ~]$ 
[vagrant@kube1 ~]$ kubectl get pods -n gcs
NAME                                   READY   STATUS    RESTARTS   AGE
csi-attacher-glusterfsplugin-0         2/2     Running   0          27h
csi-nodeplugin-glusterfsplugin-lwzzq   2/2     Running   0          27h
csi-nodeplugin-glusterfsplugin-p8hgt   2/2     Running   0          27h
csi-nodeplugin-glusterfsplugin-wkv7r   2/2     Running   0          27h
csi-provisioner-glusterfsplugin-0      3/3     Running   0          27h
etcd-4cj2v42rq8                        1/1     Running   0          27h
etcd-6k2dp5dv8b                        1/1     Running   0          27h
etcd-965nvhqndk                        1/1     Running   0          27h
etcd-operator-7cb5bd459b-r7c2z         1/1     Running   0          27h
gluster-kube1-0                        1/1     Running   1          22h
gluster-kube2-0                        1/1     Running   1          22h
gluster-kube3-0                        1/1     Running   1          22h
[vagrant@kube1 ~]$ 

add support for snapshot

Snapshot support should cover the following APIs (a hedged manifest sketch follows the list):

  • Create snapshot
  • List snapshots (by snapshot ID or by volume ID)
  • Delete snapshot
  • Add support to create a volume from a snapshot
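
A hedged sketch of the Kubernetes-side objects this would enable, using the external-snapshotter alpha API of that era (snapshot.storage.k8s.io/v1alpha1); field names follow that alpha API and all object names are illustrative:

apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
  name: glusterfs-csi-snapclass
snapshotter: org.gluster.glusterfs
---
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: glusterfs-csi-snap1
spec:
  snapshotClassName: glusterfs-csi-snapclass
  source:
    kind: PersistentVolumeClaim
    name: glusterfs-csi-pv1
---
# restore: create a new volume from the snapshot
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-csi-pv1-restore
spec:
  storageClassName: glusterfs-csi
  dataSource:
    name: glusterfs-csi-snap1
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi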

CSI pods going into CrashLoopBackOff after creating the GCS setup

Describe the bug
After creating the GCS setup using Vagrant, the CSI pods go into CrashLoopBackOff.

Steps to reproduce

  1. Create a GCS setup using Vagrant.
  2. Log in to the cluster: vagrant ssh kube1
  3. Check the state of the cluster.
[vagrant@kube1 ~]$ kubectl get pods -n gcs
NAME                                   READY   STATUS             RESTARTS   AGE
csi-attacher-glusterfsplugin-0         1/2     CrashLoopBackOff   17         83m
csi-nodeplugin-glusterfsplugin-5gqbt   1/2     CrashLoopBackOff   20         83m
csi-nodeplugin-glusterfsplugin-jcb9v   1/2     CrashLoopBackOff   20         83m
csi-nodeplugin-glusterfsplugin-qt25s   1/2     CrashLoopBackOff   20         83m
csi-provisioner-glusterfsplugin-0      2/3     CrashLoopBackOff   20         83m
etcd-gp64thnjx8                        1/1     Running            0          89m
etcd-jtscxn5vgw                        1/1     Running            0          87m
etcd-k6424x6jlx                        1/1     Running            0          90m
etcd-operator-7cb5bd459b-w5zvd         1/1     Running            0          92m
gluster-kube1-0                        1/1     Running            0          87m
gluster-kube2-0                        1/1     Running            0          87m
gluster-kube3-0                        1/1     Running            1          87m
[vagrant@kube1 ~]$  kubectl get sts 
No resources found.
[vagrant@kube1 ~]$ 

Actual results
CSI pods go into CrashLoopBackOff.

Expected behavior
The pods should be in the Ready state.
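
For triage, the usual next step is to pull the previous logs of the crashing containers; the container name below is a placeholder since it is not shown in the report:

[vagrant@kube1 ~]$ kubectl describe pod csi-provisioner-glusterfsplugin-0 -n gcs
[vagrant@kube1 ~]$ kubectl logs csi-provisioner-glusterfsplugin-0 -n gcs -c <container-name> --previous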

Add unit tests for CSI driver functions

We have to add unit tests for the CSI driver functions and run them with each PR (a possible invocation of the sanity tool is sketched after this list).

  • The unit tests should be against CSI spec 0.3.
  • They should cover positive and negative test cases.
  • They should use the kubernetes-csi/csi-test framework.
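
A hedged example of running the sanity tool from kubernetes-csi/csi-test against a locally running driver; the socket path is an assumption and only the minimal flag is shown:

[root@localhost]# go get -u github.com/kubernetes-csi/csi-test/cmd/csi-sanity
[root@localhost]# csi-sanity --csi.endpoint=unix:///var/lib/kubelet/plugins/org.gluster.glusterfs/csi.sock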

Be able to configure replicate volume type

Describe the feature you'd like to have.
Today, the volume type is fixed as a replica 3 volume. It should be possible to use the parameters defined in #30 (parameters.volumeType.type and parameters.volumeType.replicate.*) to create the various types of replicated volumes. A hedged StorageClass sketch using these proposed parameters follows the acceptance criteria below.

What is the value to the end user? (why is it a priority?)
Not all users want 3-way replication. They can get significant space savings by using one of the arbiter types.

How will we know we have a good solution? (acceptance criteria)

  • parameters.volumeType.replicate.replicas should determine the number of replicas for the volume, with the default remaining at 3
  • parameters.volumeType.replicate.arbiterType should determine the type of arbiter used, with a default of none
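
A sketch of how a StorageClass might express this once the #30 parameters are implemented; the flattened key names below are an assumption about how the nested parameters.volumeType.* settings would map onto the flat StorageClass parameters map:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-csi-replica2
provisioner: org.gluster.glusterfs
parameters:
  volumeType.type: "replicate"              # proposed in #30
  volumeType.replicate.replicas: "2"        # default would remain 3
  volumeType.replicate.arbiterType: "none"  # default would be none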
