dory's People

Contributors

datamattsson, e4jet, raunakkumar

dory's Issues

Static provisioning (dory) & patch command result in PV in Failed status after removing PVC

With static provisioning, the PV shows Failed status after patching its reclaim policy and deleting the corresponding Pod & PVC.

Steps performed (sketched as shell commands below):

  1. Created the PV, PVC & Pod.
  2. Changed the default reclaim policy to Delete by executing "kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'".
  3. Deleted the Pod & PVC.
  4. Checked the status of the PV; it went into the Failed state.
  5. Checked the created volume; it was still present.
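
A minimal shell sketch of the above sequence, assuming the object names from the YAMLs further down (pv-importvol, pvc-import, pod-import); adjust the names to match your environment:

kubectl create -f PV-IMPORTVOL.yml -f pvc.yml -f pod.yml

# Change the reclaim policy of the statically provisioned PV.
kubectl patch pv pv-importvol -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'

# Delete the Pod and PVC, then check the PV status and the backing Docker volume.
kubectl delete pod pod-import
kubectl delete pvc pvc-import
kubectl get pv pv-importvol
docker volume ls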

Observations:

  1. The patch command to change the reclaim policy is not properly supported with static provisioning.
  2. After changing the reclaim policy via the patch command and deleting the Pod & PVC, the PV goes into the Failed state.
  3. If the patch command is not executed, the PV goes into the Released state after deleting the Pod & PVC.
  4. With the PV in the Released state, if I create the PVC again, the PVC gets stuck in Pending status.
  5. The volume still exists after removing the Pod, PVC & PV (even after changing the reclaim policy to Delete with static provisioning).
  6. Inspecting the corresponding volume shows a "No such volume" error, yet the volume entry is still visible in the "docker volume ls" output.

Note: no logs are generated for the dory service; I did not get any log output for the above use case.

YAMLs:

PV-IMPORTVOL.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-importvol
spec:
  capacity:
    storage: 35Gi
  accessModes:
  - ReadWriteOnce
  flexVolume:
    driver: hpe.com/hpe
    options:
      name: VOLUME1
      backend: 3PAR1

pvc.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-import
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 16Gi

pod.yml

apiVersion: v1
kind: Pod
metadata:
  name: pod-import
spec:
  containers:
  - name: minio
    image: minio/minio:latest
    args:
    - server
    - /export
    env:
    - name: MINIO_ACCESS_KEY
      value: minio
    - name: MINIO_SECRET_KEY
      value: doryspeakswhale
    ports:
    - containerPort: 9000
    volumeMounts:
    - name: export
      mountPath: /export
  volumes:
    - name: export
      persistentVolumeClaim:
        claimName: pvc-import

Note: I have checked the same setup with the old dory binary (Version=1.1.0-ba064b80) and was able to reproduce the same issue.

Can't mount a volume

Hello 😃 ,

I'm using the hpe plugin to dynamically create volumes on Nimble storage.

I followed this link https://developer.hpe.com/blog/doryd-a-dynamic-provisioner-for-docker-volume-plugins and ended up with these configuration files:

---
kind: Pod
apiVersion: v1
metadata:
  name: pod-test
spec:
  containers:
    - image: httpd:latest
      name: httpd
      volumeMounts:
        - name: httpd-persistent-storage
          mountPath: /data
  volumes:
    - name: httpd-persistent-storage
      persistentVolumeClaim:
        claimName: pvc-test
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: sc-test
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-test
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: dev.hpe.com/nimble
parameters:
  description: "Volume provisioned sc-test StorageClass"
  folder: "data-test"

The StorageClass and the PersistentVolumeClaim are working, as I can see the volume created in the folder data-test on the Nimble side.

However, I still get error 32 when the volume is bound to the Pod/container:

Normal Scheduled Successfully assigned default/pod-test to ste1ppoc3 14 minutes ago
Warning FailedMount MountVolume.SetUp failed for volume "sc-test-a5e937af-9c05-11e9-aa73-1c98ec2bb718" : mount command failed, status: Failure, reason: rc=32 a minute ago
Warning FailedMount Unable to mount volumes for pod "pod-test_default(4cc17710-9c0f-11e9-b6c3-1c98ec2bb718)": timeout expired waiting for volumes to attach or mount for pod "default"/"pod-test". list of unmounted volumes=[httpd-persistent-storage]. list of unattached volumes=[httpd-persistent-storage default-token-jvh54] a few seconds ago

An important thing to note: when I execute the following command, I get:

sudo curl -XPOST --unix-socket /run/docker/plugins/nimble.sock http:/Plugin.Activate
{"implements":["VolumeDriver"],"Err":""}
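
For what it's worth, rc=32 is the exit status mount(8) returns on a mount failure. A debugging sketch, assuming the default /var/log/dory.log location from the plugin's .json config and using the volume name from the FailedMount event above:

sudo tail -n 50 /var/log/dory.log

# Ask the Nimble plugin what Mountpoint it reports for the volume, and check
# whether that path actually exists on this host.
docker volume inspect --format '{{.Mountpoint}}' sc-test-a5e937af-9c05-11e9-aa73-1c98ec2bb718
ls -ld "$(docker volume inspect --format '{{.Mountpoint}}' sc-test-a5e937af-9c05-11e9-aa73-1c98ec2bb718)"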

Do you have any idea where this issue could come from?

Regards 👍

Unable to mount volumes for pod

Hello ,

I'm using the hpe plugin to dynamically create volumes on Nimble storage for a SQL Server 2019 deployment, and the Docker volume does not mount.
My configuration is as follows:
3 physical servers
1 master node + 2 nodes
Red Hat Enterprise Linux 7.6
Kubernetes release 1.15
Docker 18.09.7
Linux Nimble Storage Toolkit 2.5.1.110
1 Nimble Storage, OS release 5.0.7.200-612527
1 dedicated iSCSI VLAN

My problem is:
The iSCSI refresh is not automatic and the Docker volume is not mounted because data access is not configured on the Nimble storage, so the SQL Server pods cannot be deployed: it is impossible to mount the volume.

When I look at the log of the SQL container deployment, I see the following:

· Warning FailedMount 3m28s (x142 over 5h22m) kubelet, dl35 Unable to mount volumes for pod "mssql-kjflr_default(0d48ba77-4490-4b24-b74d-779a4e350f97)": timeout expired waiting for volumes to attach or mount for pod "default"/"mssql-kjflr". list of unmounted volumes=[mssql]. list of unattached volumes=[mssql default-token-nw7fd]
· Warning FailedMount (x128 over 5h24m) kubelet, dl35 MountVolume.SetUp failed for volume "database-ef0b7359-a206-4aae-9720-618985e6c124" : mount command failed, status: Failure, reason: Post http://unix/VolumeDriver.Mount: http: ContentLength=101 with Body length 0
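
The second event shows the VolumeDriver.Mount POST to the plugin socket itself failing rather than the plugin returning a mount error. A sketch for checking that the plugin socket answers at all (socket path taken from nimble.json below; the volume name is the one from the event above):

sudo curl -s -XPOST --unix-socket /run/docker/plugins/nimble.sock \
  -d '{"Name":"database-ef0b7359-a206-4aae-9720-618985e6c124"}' \
  http://unix/VolumeDriver.Get

A healthy plugin should answer with a JSON body describing the volume (or a populated Err field).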

What works:
volume creation on the Nimble storage
the Docker volume is created: the PVC and PV are OK

The procedure I followed is as follows:

· I manually ran the doryd process to trace the logs:
/usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~nimble/doryd /root/.kube/config hpe.com

· created a StorageClass
· created a PVC
· created the SQL Server pod

I hope my explanations are clear...

Do you have any idea of where to look for the problem? Did I forget a step?

thank you in advance for your help.

Thanks

Emmanuel

Config Files :

/usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~nimble/nimble.json
{
"dockerVolumePluginSocketPath": "/run/docker/plugins/nimble.sock",
"logFilePath": "/var/log/dory.log",
"logDebug": false,
"stripK8sFromOptions": true,
"createVolumes": true,
"enable1.6": false,
"listOfStorageResourceOptions": [ "size", "sizeInGiB" ],
"factorForConversion": 1073741824,
"defaultOptions":[ {"mountConflictDelay": 30}]
}

Storage class :

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: database
provisioner: hpe.com/nimble
parameters:
  description: "Volume provisioned by doryd : Nimble SQLServer DATA KUB"
  perfPolicy: "SQL Server"
  folder: "kub-SQLDATA"

Create PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kb-sqlserver-data01
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: database

Deployment SQL POD

apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: mssql
  labels:
    app: mssql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
      - name: mssql
        image: mcr.microsoft.com/mssql/rhel/server:2019-CTP3.1
        resources:
          requests:
            cpu: 4
            memory: 16Gi
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          value: "********"
        ports:
        - containerPort: 1433
        volumeMounts:
        - name: mssql
          mountPath: /var/opt/mssql
      volumes:
      - name: mssql
        persistentVolumeClaim:
          claimName: kb-sqlserver-data01

Bug: mountConflictDelay is passed by default to the underlying Docker volume plugin

doryd passes the mountConflictDelay map key with value 30 to the underlying volume plugin, even though the plugin might choose to ignore it or implement it via a config file instead.

From the logs

sudo /tmp/doryd /root/.kube/config dev.hpe.com

12:58:35 volume.go:125: storageclass option key:provisioning value:thin
12:58:35 volume.go:125: storageclass option key:name value:from-production-d610b5f4-1730-11e8-b771-ecb1d7a4b070
12:58:35 volume.go:138: storage class does not contain size key, overriding to claim size
12:58:35 provisioner.go:377: processing mountConflictDelay:30
12:58:35 provisioner.go:380: setting the docker option mountConflictDelay:30
12:58:35 provisioner.go:385: optionsMap map[provisioning:thin name:from-production-d610b5f4-1730-11e8-b771-ecb1d7a4b070 size:16 mountConflictDelay:30]
12:58:35 client.go:126: request: action=POST path=http://unix/VolumeDriver.Create payload={"Name":"from-production-d610b5f4-1730-11e8-b771-ecb1d7a4b070","Opts":{"mountConflictDelay":30,"provisioning":"thin","size":16}}
12:58:35 client.go:168: response: 200 OK, length=-1
12:58:35 dockervol.go:267: unable to create docker volume using from-production-d610b5f4-1730-11e8-b771-ecb1d7a4b070 & map[size:16 mountConflictDelay:30 provisioning:thin] - create volume failed, error is: mountConflictDelay is not a valid option. Valid options are: ['mount-volume', 'compression', 'size', 'provisioning', 'flash-cache', 'cloneOf', 'snapshotOf', 'expirationHours', 'retentionHours', 'promote', 'qos-name']
12:58:35 provisioner.go:520: failed to create docker volume, error = create volume failed, error is: mountConflictDelay is not a valid option. Valid options are: ['mount-volume', 'compression', 'size', 'provisioning', 'flash-cache', 'cloneOf', 'snapshotOf', 'expirationHours', 'retentionHours', 'promote', 'qos-name']
12:58:36 provisioner.

To overcome this, we need to pass an hpe.json in /usr/libexec/kubernetes/..../:

{
    "logFilePath": "/var/log/dory.log",
    "logDebug": false,
    "stripK8sFromOptions": true,
    "dockerVolumePluginSocketPath": "/run/docker/plugins/nimble.sock",
    "createVolumes": true,
    "enable1.6": false,
    "listOfStorageResourceOptions" :    ["size","sizeInGiB"],
    "factorForConversion": 1073741824,
    "defaultOptions": [{"size": 10}]
}

Can I set the reclaimPolicy in Storageclass??

Hi all. I am a TC at Korea HPE Pointnext.
I did a K8s with 3PAR PoC a few days ago.
However, I could not change the reclaim policy in the StorageClass.

Does dory not support this?

Below is my configuration.

thin_sc.yml


kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: thin
provisioner: dev.hpe.com/hpe
reclaimPolicy: "Retain"
parameters:
  provisioning: "thin"

pvc.yml


kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: thin-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 16Gi
  storageClassName: thin

Result

NAME PROVISIONER
storageclasses/thin dev.hpe.com/hpe

NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv/thin-0289be6d-72d7-11e8-a9fc-ecebb896add8 16Gi RWO Delete Bound default/thin-pvc thin 2s

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc/thin-pvc Bound thin-0289be6d-72d7-11e8-a9fc-ecebb896add8 16Gi RWO thin 2s
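
From the output above, the PV was created with reclaim policy Delete even though the StorageClass requests Retain. A possible workaround (a standard Kubernetes operation, using the PV name from the output above) is to patch the policy on the already-provisioned PV after the fact:

kubectl patch pv thin-0289be6d-72d7-11e8-a9fc-ecebb896add8 \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
kubectl get pv thin-0289be6d-72d7-11e8-a9fc-ecebb896add8 \
  -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'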

docker volume/system prune can cause persistent volumes to be unintentionally deleted

After playing with the beta of Docker's Kubernetes and speaking with @cpuguy83 it has become apparent that users of the Docker distro of Kubernetes could potentially shoot themselves in the foot with docker volume prune. Similarly, docker system prune --volumes could cause problems. The issue here is the classic 'multi-master problem'. The docker engine doesn't know anything about the volumes doryd has created to back Persistent Volumes. Because these volumes aren't referenced by any docker container, the docker engine sees them as unused and candidates for reclamation.

This hasn't been an issue to date because most k8s users don't interact with the docker command, they use kubectl or equivalent. In the case of the Docker distro though, the user is likely to be interacting with both. This creates a situation where a user of the docker command could disrupt consumers (Deployments or Pods) of Persistent Volumes.
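
A quick illustration of the exposure, using volume names in the style doryd creates elsewhere on this page (pv1-<uuid>); this is a sketch of the risk, not a recommendation to run prune on such a host:

docker volume ls                 # doryd-created volumes appear here, e.g. pv1-<uuid>
docker volume prune              # removes every volume not referenced by a container,
                                 # which includes volumes backing Bound PersistentVolumes
docker system prune --volumes    # carries the same risk as part of a broader cleanup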

CSI driver status

Are there any timelines for when a CSI driver might surface? CSI has been GA in Kubernetes since Jan. 2019, and the CSI spec has been at 1.0 since Nov. 2018.

Bug: doryd crashes if StorageClass doesn't have a parameters: section

I have a StorageClass with the parameters: section missing.

Like

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
 name: database
provisioner: dev.hpe.com/nimble

I see this error on the container running the doryd engine; the following exception is logged and the container restarts.

[root@csimbe13-b05 quick-start]# docker ps -a | grep nimble
76d85e347364        nimblestorage/doryd@sha256:ed0fc2cbd942f35d0efe549fd39365226250a0e5ab7e51927ba6d84f239cf0b1                                      "doryd /etc/kubernete"   About a minute ago   Exited (2) About a minute ago                       k8s_dory_doryd-t2n0k_default_5b45a0da-1258-11e8-acd9-ecb1d7a4aa90_151
[root@csimbe13-b05 quick-start]# docker logs 76d85e347364
09:48:30 provisioner.go:184: provisioner (prefix=dev.hpe.com) is being created with instance id 57acbbec-222f-4391-a0dd-745502bc5941.
09:48:30 claim.go:85: processAddedClaim: provisioner:dev.hpe.com/hpe pvc:example-claim1  class:transactionaldb
09:48:30 provisioner.go:106: addMessageChan: creating 484f3213-1559-11e8-acd9-ecb1d7a4aa90
09:48:30 claim.go:85: processAddedClaim: provisioner:dev.hpe.com/hpe pvc:example-claim  class:from-production
09:48:30 provisioner.go:106: addMessageChan: creating b05c02bb-1557-11e8-acd9-ecb1d7a4aa90
panic: assignment to entry in nil map
 
goroutine 129 [running]:
nimblestorage/pkg/k8s/provisioner.(*Provisioner).newPersistentVolume(0xc420508300, 0xc420902200, 0x34, 0xc4202800f0, 0xc4208cf800, 0xc420654600, 0x0, 0xc420913b30, 0x43105e)
        /opt/build/workspace/hostint_go_default/src/nimblestorage/pkg/k8s/provisioner/volume.go:161 +0x215
nimblestorage/pkg/k8s/provisioner.(*Provisioner).provisionVolume(0xc420508300, 0xc4208cf800, 0xc420654600)
        /opt/build/workspace/hostint_go_default/src/nimblestorage/pkg/k8s/provisioner/provisioner.go:305 +0x353
nimblestorage/pkg/k8s/provisioner.(*Provisioner).processAddedClaim(0xc420508300, 0xc4208cf800)
        /opt/build/workspace/hostint_go_default/src/nimblestorage/pkg/k8s/provisioner/claim.go:87 +0x578
created by nimblestorage/pkg/k8s/provisioner.(*Provisioner).addedClaim
        /opt/build/workspace/hostint_go_default/src/nimblestorage/pkg/k8s/provisioner/claim.go:63 +0x14d
[root@csimbe13-b05 quick-start]#
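
A minimal reproduction sketch, assuming the panic is triggered when doryd provisions a claim against the parameter-less class (the names here are illustrative):

kubectl apply -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: database
provisioner: dev.hpe.com/nimble
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: database
EOF

# Watch the doryd container; it panics with "assignment to entry in nil map" and restarts.
docker logs -f <doryd-container-id>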

Unmount failure not handled in dory

Steps:

  1. Create a PV, PVC and Pod for a specified backend, say "3PAR1".
  2. Remove the backend entry "3PAR1" from hpe.conf.
  3. Stop the container corresponding to the Pod created above.

Observation:

After stopping the container, the HPE plugin attempts to unmount the volume that was mounted within the container. Since the corresponding backend for the volume is not found, the HPE Docker plugin returns an error to dory, which does not seem to be handled. As a result, the container restarts successfully.

Expected Behaviour:

Upon receiving this error, dory is supposed to communicate it to Kubernetes so that the corresponding Pod can be marked as failed.
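
For reference, kubelet's FlexVolume interface invokes the driver binary as "<driver> unmount <mount dir>" and reads a JSON result from stdout, so surfacing the failure would look roughly like this (the paths and message are illustrative, based on the scenario above):

sudo /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~hpe/hpe \
  unmount /var/lib/kubelet/pods/<pod-uid>/volumes/hpe.com~hpe/<pv-name>

# Expected output when the backend is missing, instead of reporting success:
# {"status": "Failure", "message": "backend '3PAR1' not initialized, unmount failed"}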

For more details, please refer to hpe-storage/python-hpedockerplugin#502

Log showing HPE plugin returning appropriate error:

2019-04-12 06:04:02.858 21 INFO hpedockerplugin.backend_orchestrator [-] Getting details for volume : IMPORTED-VOLUME
2019-04-12 06:04:02.858 21 DEBUG etcd.client [-] Issuing read for key /volumes/ with args {'recursive': True} read /usr/lib/python3.6/site-packages/python_etcd-0.4.5-py3.6.egg/etcd/client.py:582
2019-04-12 06:04:02.860 21 INFO hpedockerplugin.etcdutil [-] Get volbyname: volname is IMPORTED-VOLUME
2019-04-12 06:04:02.862 21 DEBUG hpedockerplugin.backend_orchestrator [-]  Populating cache IMPORTED-VOLUME, 3PAR1  add_cache_entry /python-hpedockerplugin/hpedockerplugin/backend_orchestrator.py:131
2019-04-12 06:04:02.862 21 INFO hpedockerplugin.backend_orchestrator [-]  Operating on backend : 3PAR1 on volume IMPORTED-VOLUME
2019-04-12 06:04:02.862 21 INFO hpedockerplugin.backend_orchestrator [-]  Request get_volume_snap_details
2019-04-12 06:04:02.862 21 INFO hpedockerplugin.backend_orchestrator [-]  with  args (None, 'IMPORTED-VOLUME')
2019-04-12 06:04:02.863 21 INFO hpedockerplugin.backend_orchestrator [-]  with  kwargs is {}
2019-04-12 06:04:02.863 21 ERROR hpedockerplugin.backend_orchestrator [-] ERROR: Backend '3PAR1' was NOT initialized successfully. Please check hpe.conf for incorrect entries and rectify it.
2019-04-12T06:04:02+0000 [twisted.python.log#info] "-" - - [12/Apr/2019:06:04:01 +0000] "POST /VolumeDriver.Get HTTP/1.1" 200 127 "-" "Go-http-client/1.1"
2019-04-12T06:05:02+0000 [twisted.web.http.HTTPChannel#info] Timing out client: UNIXAddress(None)
2019-04-12T06:05:25+0000 [twisted.python.log#info] "-" - - [12/Apr/2019:06:05:25 +0000] "POST /VolumeDriver.Capabilities HTTP/1.1" 200 37 "-" "Go-http-client/1.1"
2019-04-12 06:05:48.044 21 INFO hpedockerplugin.backend_orchestrator [-] Getting details for volume : IMPORTED-VOLUME
2019-04-12 06:05:48.045 21 DEBUG hpedockerplugin.backend_orchestrator [-]  Returning the backend details from cache IMPORTED-VOLUME , 3PAR1 get_volume_backend_details /python-hpedockerplugin/hpedockerplugin/backend_orchestrator.py:116
2019-04-12 06:05:48.045 21 INFO hpedockerplugin.backend_orchestrator [-]  Operating on backend : 3PAR1 on volume IMPORTED-VOLUME
2019-04-12 06:05:48.045 21 INFO hpedockerplugin.backend_orchestrator [-]  Request get_volume_snap_details
2019-04-12 06:05:48.046 21 INFO hpedockerplugin.backend_orchestrator [-]  with  args (None, 'IMPORTED-VOLUME')
2019-04-12 06:05:48.046 21 INFO hpedockerplugin.backend_orchestrator [-]  with  kwargs is {}
2019-04-12 06:05:48.046 21 ERROR hpedockerplugin.backend_orchestrator [-] ERROR: Backend '3PAR1' was NOT initialized successfully. Please check hpe.conf for incorrect entries and rectify it.
2019-04-12T06:05:48+0000 [twisted.python.log#info] "-" - - [12/Apr/2019:06:05:47 +0000] "POST /VolumeDriver.Get HTTP/1.1" 200 127 "-" "Go-http-client/1.1"
2019-04-12T06:06:48+0000 [twisted.web.http.HTTPChannel#info] Timing out client: UNIXAddress(None)

Introduce accessMode enforcement in "driver.json"

It would be practical to introduce accessMode enforcement for a particular driver in the driver.json file, so that doryd rejects incompatible accessModes specified in the PVC. The user should get an appropriate error message if an incompatible accessMode has been submitted.

Bug: Mount of the POD using managed docker volume plugin fails

In a plain Kubernetes environment, the FlexVolume driver binaries are installed under:

[docker@cld13b4 ~]$ ls -l /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~hpe/
total 52360
-rwxr-xr-x. 1 docker docker 47046107 Apr 20 06:11 doryd
-rwxr-xr-x. 1 docker docker  6561963 Apr 20 06:11 hpe
-rw-r--r--. 1 docker docker      236 May 18 10:06 hpe.json

Config Details:

  • hpe.json is like
{
    "dockerVolumePluginSocketPath": "hpe:latest",
    "logDebug": true,
    "supportsCapabilities": true,
    "stripK8sFromOptions": true,
    "createVolumes": true,
    "listOfStorageResourceOptions": [ "size" ]
}
  • The HPE 3PAR managed plugin is installed with Docker 17.03
  • During a mount of the Pod, the following error is logged.

Contents from /var/log/dory.log

Info : 2018/05/18 08:18:23 dory.go:80: [3930] entry  : Driver=hpe Version=1.1.0-ba064b80 Socket=hpe:latest Overridden=true
Info : 2018/05/18 08:18:23 dory.go:82: [3930] request: mount [/var/lib/kubelet/pods/3cf80cd9-5a95-11e8-8f3e-ecb1d7a4af30/volumes/hpe.com~hpe/sc-comp3-3ced2d29-5a95-11e8-8f3e-ecb1d7a4af30 {"kubernetes.io/fsType":"","kubernetes.io/pod.name":"pod-comp3","kubernetes.io/pod.namespace":"default","kubernetes.io/pod.uid":"3cf80cd9-5a95-11e8-8f3e-ecb1d7a4af30","kubernetes.io/pvOrVolumeName":"sc-comp3-3ced2d29-5a95-11e8-8f3e-ecb1d7a4af30","kubernetes.io/readwrite":"rw","kubernetes.io/serviceAccount.name":"default","name":"sc-comp3-3ced2d29-5a95-11e8-8f3e-ecb1d7a4af30","size":"16"}]
Debug: 2018/05/18 08:18:23 client.go:126: request: action=GET path=http://unix/plugins payload=
Debug: 2018/05/18 08:18:23 client.go:168: response: 200 OK, length=-1
Debug: 2018/05/18 08:18:23 dockerlt.go:62: returning []dockerlt.Plugin{dockerlt.Plugin{ID:"bc1f8296a8742b04d4e406c898c6dd6677735faf5ebd04cfc735c64e5c3ab5f3", Name:"hpe:latest", Enabled:true, Config:dockerlt.PluginConfig{Interface:dockerlt.PluginInterface{Socket:"hpe.sock"}}}}
Debug: 2018/05/18 08:18:23 client.go:126: request: action=POST path=http://unix/VolumeDriver.Capabilities payload={}
Debug: 2018/05/18 08:18:23 client.go:168: response: 200 OK, length=-1
Debug: 2018/05/18 08:18:23 dockervol.go:190: returning &dockervol.CapResponse{Capabilities:dockervol.PluginCapabilities{Scope:"global"}}
Debug: 2018/05/18 08:18:23 flexvol.go:194: mount called with [/var/lib/kubelet/pods/3cf80cd9-5a95-11e8-8f3e-ecb1d7a4af30/volumes/hpe.com~hpe/sc-comp3-3ced2d29-5a95-11e8-8f3e-ecb1d7a4af30 {"kubernetes.io/fsType":"","kubernetes.io/pod.name":"pod-comp3","kubernetes.io/pod.namespace":"default","kubernetes.io/pod.uid":"3cf80cd9-5a95-11e8-8f3e-ecb1d7a4af30","kubernetes.io/pvOrVolumeName":"sc-comp3-3ced2d29-5a95-11e8-8f3e-ecb1d7a4af30","kubernetes.io/readwrite":"rw","kubernetes.io/serviceAccount.name":"default","name":"sc-comp3-3ced2d29-5a95-11e8-8f3e-ecb1d7a4af30","size":"16"}]
Debug: 2018/05/18 08:18:23 flexvol.go:344: findJSON(1) about to unmarshal {"kubernetes.io/fsType":"","kubernetes.io/pod.name":"pod-comp3","kubernetes.io/pod.namespace":"default","kubernetes.io/pod.uid":"3cf80cd9-5a95-11e8-8f3e-ecb1d7a4af30","kubernetes.io/pvOrVolumeName":"sc-comp3-3ced2d29-5a95-11e8-8f3e-ecb1d7a4af30","kubernetes.io/readwrite":"rw","kubernetes.io/serviceAccount.name":"default","name":"sc-comp3-3ced2d29-5a95-11e8-8f3e-ecb1d7a4af30","size":"16"}
Debug: 2018/05/18 08:18:23 flexvol.go:167: getOrCreate called with sc-comp3-3ced2d29-5a95-11e8-8f3e-ecb1d7a4af30 and {"kubernetes.io/fsType":"","kubernetes.io/pod.name":"pod-comp3","kubernetes.io/pod.namespace":"default","kubernetes.io/pod.uid":"3cf80cd9-5a95-11e8-8f3e-ecb1d7a4af30","kubernetes.io/pvOrVolumeName":"sc-comp3-3ced2d29-5a95-11e8-8f3e-ecb1d7a4af30","kubernetes.io/readwrite":"rw","kubernetes.io/serviceAccount.name":"default","name":"sc-comp3-3ced2d29-5a95-11e8-8f3e-ecb1d7a4af30","size":"16"}
Debug: 2018/05/18 08:18:23 client.go:126: request: action=POST path=http://unix/VolumeDriver.Get payload={"Name":"sc-comp3-3ced2d29-5a95-11e8-8f3e-ecb1d7a4af30"}
Debug: 2018/05/18 08:18:23 client.go:168: response: 200 OK, length=-1
Debug: 2018/05/18 08:18:23 dockervol.go:214: returning &dockervol.GetResponse{Volume:dockervol.DockerVolume{Name:"sc-comp3-3ced2d29-5a95-11e8-8f3e-ecb1d7a4af30", Mountpoint:"/opt/hpe/data/hpedocker-dm-uuid-mpath-360002ac00000000001012f2e00019d52", Status:map[string]interface {}{"volume_detail":map[string]interface {}{"mountConflictDelay":30, "flash_cache":interface {}(nil), "provisioning":"thin", "compression":interface {}(nil), "size":16}}}, Err:""}
Debug: 2018/05/18 08:18:23 flexvol.go:307: getMountID called with /var/lib/kubelet/pods/3cf80cd9-5a95-11e8-8f3e-ecb1d7a4af30/volumes/hpe.com~hpe/sc-comp3-3ced2d29-5a95-11e8-8f3e-ecb1d7a4af30
Debug: 2018/05/18 08:18:23 flexvol.go:313: getMountID returning "3cf80cd9-5a95-11e8-8f3e-ecb1d7a4af30"
Debug: 2018/05/18 08:18:23 client.go:126: request: action=POST path=http://unix/VolumeDriver.Mount payload={"Name":"sc-comp3-3ced2d29-5a95-11e8-8f3e-ecb1d7a4af30","ID":"3cf80cd9-5a95-11e8-8f3e-ecb1d7a4af30"}
Debug: 2018/05/18 08:18:23 client.go:168: response: 200 OK, length=-1
Debug: 2018/05/18 08:18:23 bmount.go:42: BindMount called with /opt/hpe/data/hpedocker-dm-uuid-mpath-360002ac00000000001012f2e00019d52 /var/lib/kubelet/pods/3cf80cd9-5a95-11e8-8f3e-ecb1d7a4af30/volumes/hpe.com~hpe/sc-comp3-3ced2d29-5a95-11e8-8f3e-ecb1d7a4af30 false
Debug: 2018/05/18 08:18:23 cmd.go:33: ExecCommandOutput called with mount[--bind /opt/hpe/data/hpedocker-dm-uuid-mpath-360002ac00000000001012f2e00019d52 /var/lib/kubelet/pods/3cf80cd9-5a95-11e8-8f3e-ecb1d7a4af30/volumes/hpe.com~hpe/sc-comp3-3ced2d29-5a95-11e8-8f3e-ecb1d7a4af30]
Debug: 2018/05/18 08:18:23 cmd.go:49: out :mount: special device /opt/hpe/data/hpedocker-dm-uuid-mpath-360002ac00000000001012f2e00019d52 does not exist
Debug: 2018/05/18 08:18:23 cmd.go:49: out :
Error: 2018/05/18 08:18:23 bmount.go:51: BindMount failed with 32.  It was called with /opt/hpe/data/hpedocker-dm-uuid-mpath-360002ac00000000001012f2e00019d52 /var/lib/kubelet/pods/3cf80cd9-5a95-11e8-8f3e-ecb1d7a4af30/volumes/hpe.com~hpe/sc-comp3-3ced2d29-5a95-11e8-8f3e-ecb1d7a4af30 false.  Output=mount: special device /opt/hpe/data/hpedocker-dm-uuid-mpath-360002ac00000000001012f2e00019d52 does not exist
.

Looking at the system output,
ls -l /opt/hpe/data/hpedocker-dm-uuid-mpath-360002ac00000000001012f2e00019d52
reveals the directory is not directly available on the Docker host, but rather under the plugin's rootfs, at something similar to this:

[docker@cld13b2 ~]$ sudo find / -name "hpedocker-dm-uuid-mpath-360002ac00000000001012f9e00019d52" -type d
/var/lib/docker/plugins/114534cb1fc8933961bd954bdaee237133ba64d768226bba39b804555271e316/rootfs/opt/hpe/data/hpedocker-dm-uuid-mpath-360002ac00000000001012f2e00019d52
/var/lib/docker/plugins/114534cb1fc8933961bd954bdaee237133ba64d768226bba39b804555271e316/propagated-mount/hpedocker-dm-uuid-mpath-360002ac00000000001012f2e00019d52

So, in short, the FlexVolume driver appears to do the following:

  1. docker volume inspect <vol>
  2. Get the Mountpoint from the command in step 1
  3. Bind mount that path to the Pod's mount point directory

As part of this, can I ask whether the FlexVolume driver could:

  1. Detect the managed plugin operating on the host
  2. Get the plugin ID
  3. Look for the mountpoint under the plugin's rootfs filesystem:
    /var/lib/docker/plugins/<plugin_id>/rootfs/

where "Mountpoint" is obtained from docker volume inspect <vol>?
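
A sketch of the lookup being asked for, assuming the managed plugin is installed as hpe:latest (as in hpe.json above) and using the volume name from the log:

PLUGIN_ID=$(docker plugin inspect -f '{{.Id}}' hpe:latest)
MOUNTPOINT=$(docker volume inspect -f '{{.Mountpoint}}' sc-comp3-3ced2d29-5a95-11e8-8f3e-ecb1d7a4af30)

# Path the FlexVolume driver would need to bind-mount instead of the raw Mountpoint:
ls -ld "/var/lib/docker/plugins/${PLUGIN_ID}/rootfs${MOUNTPOINT}"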

Doryd does not perform auto-cleanup of HPE 3PAR Docker volumes.

Steps performed are as follows:

  1. Created a storage class (pv1).
  2. Created a PVC.

Observation: The PV and Docker volume got created automatically. This was expected, since doryd provides dynamic provisioning.

  1. Deleted PVC.

Observation: The PV got deleted, but the Docker volume was still present.

[docker@csimbe06-b12 ~]$ docker volume ls
DRIVER              VOLUME NAME
hpe                 pv1-2cd62abf-2115-11e8-b71e-10604b98ece8

Doryd log snippet:

03:07:47 client.go:147: WILLIAM: before decode
03:07:47 dockervol.go:214: returning &dockervol.GetResponse{Volume:dockervol.DockerVolume{Name:"", Mountpoint:"", Status:map[string]interface {}(nil)}, Err:""}
03:07:47 volume.go:105: deletedVol event: cleaning up pv:pv1-2cd62abf-2115-11e8-b71e-10604b98ece8 phase:Released
03:07:47 provisioner.go:425: looking for /usr/libexec/kubernetes/kubelet-plugins/volume/exec/dev.hpe.com~hpe/hpe.json
03:07:47 provisioner.go:452: parsing defaultOptions [map[size:10]]
03:07:47 provisioner.go:458: key size value 10
03:07:47 provisioner.go:462: dockerOptions map[size:10]
03:07:47 provisioner.go:135: sendUpdate: pv:pv1-2cd62abf-2115-11e8-b71e-10604b98ece8 (74a98a4b-2115-11e8-b71e-10604b98ece8) phase:Released
03:07:47 provisioner.go:147: send: skipping 74a98a4b-2115-11e8-b71e-10604b98ece8, not in map
03:07:47 client.go:126: request: action=POST path=http://unix/VolumeDriver.Get payload={"Name":"pv1-2cd62abf-2115-11e8-b71e-10604b98ece8"}
03:07:47 client.go:173: response: 200 OK, length=-1
03:07:47 client.go:174: ================================================================
  1. Created a different PVC.
  2. Created a Pod mapped to the above PVC.
  3. Deleted the Pod.

Observation: The Pod and PV got deleted, but the PVC (in Lost state) and the Docker volume were still present.

[docker@csimbe06-b12 ~]$ sudo oc get pv,pvc
NAME            STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
pvc/pv-claim1   Lost      pv1-749cc402-2118-11e8-b71e-10604b98ece8   0                        pv1            7m

[docker@csimbe06-b12 ~]$ docker volume ls
DRIVER              VOLUME NAME
hpe                 pv1-2cd62abf-2115-11e8-b71e-10604b98ece8
hpe                 pv1-749cc402-2118-11e8-b71e-10604b98ece8

Doryd log snippet:

03:35:11 provisioner.go:237: statusLogger: provision chains=0, delete chains=0, parked chains=0, ids tracked=0, connection=valid
03:35:15 volume.go:92: deletedVol event: pv:pv1-749cc402-2118-11e8-b71e-10604b98ece8 phase:Bound (reclaim policy:Delete) - skipping
03:35:15 provisioner.go:135: sendUpdate: pv:pv1-749cc402-2118-11e8-b71e-10604b98ece8 (74e4fe16-2118-11e8-b71e-10604b98ece8) phase:Bound
03:35:15 provisioner.go:147: send: skipping 74e4fe16-2118-11e8-b71e-10604b98ece8, not in map
03:35:15 claim.go:96: updatedClaim: pvc pv-claim1 current phase=Lost
03:35:15 provisioner.go:129: sendUpdate: pvc:pv-claim1 (749cc402-2118-11e8-b71e-10604b98ece8) phase:Lost
03:35:15 provisioner.go:147: send: skipping 749cc402-2118-11e8-b71e-10604b98ece8, not in map
03:35:16 provisioner.go:237: statusLogger: provision chains=0, delete chains=0, parked chains=0, ids tracked=0, connection=valid
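
Until doryd handles the delete, a manual cleanup sketch for the leftover volumes listed above (verify that no PV still references the volume first):

kubectl get pv | grep pv1-2cd62abf-2115-11e8-b71e-10604b98ece8 || echo "no PV references this volume"
docker volume rm pv1-2cd62abf-2115-11e8-b71e-10604b98ece8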

dockOpts["size"] needs to be converted to a string

When a user specifies a storage resource size in a PVC, the value is treated as an integer internally and passed down to the Docker Volume API socket as an integer type in JSON.

Plugins that may be written using the official Go bindings from Docker will not work as the library only accepts strings in the value. https://github.com/docker/go-plugins-helpers/tree/master/volume (https://github.com/docker/go-plugins-helpers/blob/1e6269c305b8c75cfda1c8aa91349c38d7335814/volume/api.go#L26)

Since neither a StorageClass nor a PVC annotation accepts any data type other than strings, it would be more coherent to pass claimSizeInGiB as a string to the Docker Volume API socket.

I'm having trouble following the code that determines where size should be picked from (I tried setting "size" in the StorageClass, with no effect), so this crude patch solved my immediate problem:

diff --git a/common/k8s/provisioner/volume.go b/common/k8s/provisioner/volume.go
index 243c1b2..3041eac 100644
--- a/common/k8s/provisioner/volume.go
+++ b/common/k8s/provisioner/volume.go
@@ -28,6 +28,7 @@ import (
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/reference"
        "strings"
+       "strconv"
 )
 
 func (p *Provisioner) listAllVolumes(options meta_v1.ListOptions) (runtime.Object, error) {
@@ -152,7 +153,7 @@ func (p *Provisioner) getDockerOptions(params map[string]string, class *storage_
                        for _, option := range listOfOptions {
                                if key == option {
                                        util.LogInfo.Printf("storageclass option matched storage resource option:%s ,overriding the value to %d", key, claimSizeinGiB)
-                                       dockOpts[key] = claimSizeinGiB
+                                       dockOpts[key] = strconv.Itoa(claimSizeinGiB)
                                        break
                                }
                        }

Building doryd on resource constrained hosts fails

When building doryd on a slow host (like a tiny AWS instance), the build won't succeed in one pass.

First pass:

$ make doryd
» lint
»» lint ./cmd/
export GOPATH=/root/go PATH=$PATH:/root/go/bin GOOS=linux GOARCH=amd64 CGO_ENABLED=0; gometalinter --vendor --disable-all --enable=vet --enable=vetshadow --enable=golint --enable=ineffassign --enable=goconst --enable=deadcode --enable=dupl --enable=varcheck --enable=gocyclo --enable=misspell ./cmd/...
WARNING: deadline exceeded by linter vetshadow (try increasing --deadline)
WARNING: deadline exceeded by linter vet (try increasing --deadline)
make: *** [lint] Error 2

Running make again:

$ make doryd
» lint
»» lint ./cmd/
export GOPATH=/root/go PATH=$PATH:/root/go/bin GOOS=linux GOARCH=amd64 CGO_ENABLED=0; gometalinter --vendor --disable-all --enable=vet --enable=vetshadow --enable=golint --enable=ineffassign --enable=goconst --enable=deadcode --enable=dupl --enable=varcheck --enable=gocyclo --enable=misspell ./cmd/...
»» lint ./common/
export GOPATH=/root/go PATH=$PATH:/root/go/bin GOOS=linux GOARCH=amd64 CGO_ENABLED=0; gometalinter --vendor --disable-all --enable=vet --enable=vetshadow --enable=golint --enable=ineffassign --enable=goconst --enable=deadcode --enable=dupl --enable=varcheck --enable=gocyclo --enable=misspell ./common/...
» dory
»» build doryd
export GOPATH=/root/go PATH=$PATH:/root/go/bin GOOS=linux GOARCH=amd64 CGO_ENABLED=0 && go build -ldflags '-X main.Version=1.1.0 -X main.Commit=dcd3130d' ./cmd/doryd/doryd.go
»» sha256sum doryd
sha256sum  doryd > doryd.sha256sum
9136df5ca1abb23cd94a5640f3dd1940f7c02b60b68d4109da5c906e44fb42b6  doryd

Increasing the linter deadline would prevent these false-positive build failures.
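
A workaround sketch, following the linter's own hint: rerun the same gometalinter invocation with a longer --deadline, either directly or wherever the Makefile's lint target builds this command line:

export GOPATH=/root/go PATH=$PATH:/root/go/bin GOOS=linux GOARCH=amd64 CGO_ENABLED=0
gometalinter --vendor --deadline=300s --disable-all --enable=vet --enable=vetshadow \
  --enable=golint --enable=ineffassign --enable=goconst --enable=deadcode \
  --enable=dupl --enable=varcheck --enable=gocyclo --enable=misspell ./cmd/... ./common/...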
