
csi-driver-nfs's People

Contributors

abhijeetgauravm, alankan-finocomp, andyzhangx, boddumanohar, chakri-nelluri, dependabot[bot], farodin91, fengshunli, fungaren, humblec, jsafrane, k8s-ci-robot, lpabon, mathu97, mayankshah1607, msau42, navilg, pierreprinetti, pohly, prateekpandey14, rootfs, saad-ali, sachinkumarsingh092, sbezverk, songjiaxun, spiffxp, umagnus, woehrl01, wozniakjan, xing-yang

csi-driver-nfs's Issues

add imagePullSecrets in Helm chart

Is your feature request related to a problem?/Why is this needed

Describe the solution you'd like in detail
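A rough sketch of what this could look like (the values key and template excerpt below are hypothetical, since the actual chart layout may differ):

# values.yaml (hypothetical key)
imagePullSecrets:
  - name: my-registry-secret

# templates/csi-nfs-node.yaml (hypothetical excerpt of the pod spec)
spec:
  template:
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}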

Describe alternatives you've considered

Additional context

add subPath e2e test

Is your feature request related to a problem?/Why is this needed

Describe the solution you'd like in detail

Describe alternatives you've considered

Additional context

fix broken cloud build job for nfs driver

What happened:

FYI, we disabled the cloud build job for NFS because it has never passed:
https://github.com/kubernetes/test-infra/pull/19937/files#diff-6572b562b74e036959cc413b3e4b7eb275f82e89218a1b1eb36a3dfb136348cdR39

https://k8s-testgrid.appspot.com/sig-storage-image-build#post-csi-driver-nfs-push-images

Can you help fix the job?

What you expected to happen:

How to reproduce it:

Anything else we need to know?:

Environment:

  • CSI Driver version:
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

How to debug a 2-minute timeout when trying to attach the volume

Since the NFS example in
https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs
fails to resolve an in-cluster NFS server, I was directed to try CSI-based NFS, and here I am. I added the YAML from
https://github.com/kubernetes-csi/csi-driver-nfs/tree/master/deploy/kubernetes
without any modification to my Helm chart. The PV/PVC look like this, to let me switch back and forth:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteMany
  storageClassName: ""
{{- if .Values.use_csi }}
  csi:
    driver: nfs.csi.k8s.io
    # seems to be unused if volume is only used once
    volumeHandle: data
    volumeAttributes:
      server: nfs-server.app-data.svc.cluster.local
      share: /exports
{{- else }}
  nfs:
    path: /exports
    server: nfs-server.app-data.svc.cluster.local
{{- end }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 100Mi

Helm installs the whole chart into the app-data namespace. The NFS server comes up, and so does the CSI plugin. The PV and PVC show up as Bound, but the pods never start. After ~2 minutes I get the error below, and it keeps repeating:

Events:
  Type     Reason              Age    From                                       Message
  ----     ------              ----   ----                                       -------
  Normal   Scheduled           2m32s  default-scheduler                          Successfully assigned app-data/data-reader-bmv5z to master.local
  Warning  FailedAttachVolume  32s    attachdetach-controller                    AttachVolume.Attach failed for volume "data" : attachment timeout for volume data
  Warning  FailedMount         29s    kubelet, master.local  Unable to attach or mount volumes: unmounted volumes=[data], unattached volumes=[default-token-r2q5v data]: timed out waiting for the condition
  Warning  FailedAttachVolume  0s     attachdetach-controller                    AttachVolume.FindAttachablePluginBySpec failed for volume "data"

Looks like kubernetes/kubernetes#89173.

Nothing in the logs of the CSI plugin, though. The cluster is a 2-node kubeadm cluster (1.16).

Any idea what's going on?
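Not a confirmed diagnosis, but one thing worth checking is the CSIDriver object registered for nfs.csi.k8s.io: this driver reports no ControllerPublish/Unpublish capability (the attacher falls back to a trivial handler), so a CSIDriver object with attachRequired: false keeps the attach path out of the picture. A minimal sketch of such an object, assuming a 1.16 cluster (illustrative only, not the documented fix):

# Hypothetical CSIDriver registration; verify against the deployed manifests.
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: nfs.csi.k8s.io
spec:
  attachRequired: false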

create an nfs provisioner example

Is your feature request related to a problem?/Why is this needed

Describe the solution you'd like in detail

There is already an NFS server container here: https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/test/nfs-testdriver.go#L113

Could you follow this doc (https://github.com/kubernetes-csi/csi-driver-smb/tree/master/deploy/example/smb-provisioner) to set up a similar NFS server?

I think we need to set up an NFS server provisioner example first; that would make testing easy.
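A rough sketch of what such an example could contain, mirroring the SMB provisioner layout (the image name is an assumption; any containerized NFS server that exports a directory would do):

kind: Service
apiVersion: v1
metadata:
  name: nfs-server
  namespace: default
spec:
  selector:
    app: nfs-server
  ports:
    - name: nfs
      port: 2049
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-server
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-server
  template:
    metadata:
      labels:
        app: nfs-server
    spec:
      containers:
        - name: nfs-server
          image: itsthenetwork/nfs-server-alpine:latest   # assumption: any NFS server image works here
          env:
            - name: SHARED_DIRECTORY   # export path served by this particular image
              value: /exports
          securityContext:
            privileged: true           # NFS servers typically need elevated privileges
          ports:
            - containerPort: 2049
              name: nfs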

Describe alternatives you've considered

Additional context

Support automatic PV deletion and storage cleanup with reclaimPolicy: Delete

Is your feature request related to a problem?/Why is this needed
According to the example files, this CSI driver (which seems to be the only one available for NFS) only supports reclaimPolicy: Retain. If I understand correctly, this means that after deleting a PVC, its associated PV has to be removed manually, and even after removing the PV, the created files and directories are not deleted from the NFS target server.

Describe the solution you'd like in detail
I'd love it if this driver could implement reclaimPolicy: Delete and proper cleanup of PVs, so that when a PVC is deleted, the associated PV and the files/folders on the NFS server are deleted as well.
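For illustration, the requested end state would be a StorageClass roughly like the following (a sketch only; this is the feature being asked for, not something the driver supports today):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi-delete
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.default.svc.cluster.local   # assumption: in-cluster NFS service
  share: /exports
reclaimPolicy: Delete          # requested: delete the PV and its backing directory along with the PVC
volumeBindingMode: Immediate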

Describe alternatives you've considered
I've thought about somehow automating the process of regularly cleaning up unbound PVs and deleting the associated directories on the NFS server, but this doesn't seem like a great solution.

mixed clusters (windows+linux)

I am doing some tests with this driver and Helm on a mixed Windows+Linux cluster. However, the csi-nfs-node DaemonSet also gets scheduled on Windows nodes.

Could you hardcode a node selector for csi-nfs-node, or provide a way to set a nodeSelector via a Helm config override?
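A sketch of the kind of override being asked for (the Helm value path is hypothetical; the rendered nodeSelector uses the standard kubernetes.io/os label):

# Helm values override (hypothetical key)
nodeSelector:
  kubernetes.io/os: linux

# Rendered into the csi-nfs-node DaemonSet pod spec
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/os: linux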

example provisioner

It looks like this driver can be used separately from a dynamic provisioner. Some documentation on how to build a CSI provisioner that only handles the CreateVolume/DeleteVolume part and reuses the existing containers for attach/mount would be helpful.

Errors when creating a new volume

What happened:

In the Sanity test logs, I found a couple of errors:

Controller Service CreateVolume
  should return appropriate values SingleNodeWriter WithCapacity 1Gi Type:Mount
  /home/azureuser/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/controller.go:441
STEP: reusing connection to CSI driver at unix:///tmp/csi.sock
STEP: creating mount and staging directories
I1116 00:09:09.272069  130286 utils.go:47] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
I1116 00:09:09.272083  130286 utils.go:48] GRPC request: {}
I1116 00:09:09.272123  130286 controllerserver.go:175] Using default ControllerGetCapabilities
I1116 00:09:09.272130  130286 utils.go:53] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}}]}
STEP: creating a volume
I1116 00:09:09.272648  130286 utils.go:47] GRPC call: /csi.v1.Controller/CreateVolume
I1116 00:09:09.272659  130286 utils.go:48] GRPC request: {"capacity_range":{"required_bytes":10737418240},"name":"sanity-controller-create-single-with-capacity-14041F18-511E2738","parameters":{"server":"127.0.0.1","share":"/"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
I1116 00:09:09.272762  130286 controllerserver.go:243] internally mounting 127.0.0.1:/ at /tmp/sanity-controller-create-single-with-capacity-14041F18-511E2738
I1116 00:09:09.339489  130286 controllerserver.go:261] internally unmounting /tmp/sanity-controller-create-single-with-capacity-14041F18-511E2738
W1116 00:09:09.339520  130286 controllerserver.go:75] failed to unmount nfs server: rpc error: code = InvalidArgument desc = Volume ID missing in request
I1116 00:09:09.339529  130286 utils.go:53] GRPC response: {"volume":{"capacity_bytes":10737418240,"volume_context":{"server":"127.0.0.1","share":"/sanity-controller-create-single-with-capacity-14041F18-511E2738"},"volume_id":"127.0.0.1///sanity-controller-create-single-with-capacity-14041F18-511E2738"}}
STEP: cleaning up deleting the volume
I1116 00:09:09.340131  130286 utils.go:47] GRPC call: /csi.v1.Controller/DeleteVolume
I1116 00:09:09.340147  130286 utils.go:48] GRPC request: {"volume_id":"127.0.0.1///sanity-controller-create-single-with-capacity-14041F18-511E2738"}
I1116 00:09:09.340215  130286 controllerserver.go:115] failed to get nfs volume for volume id 127.0.0.1///sanity-controller-create-single-with-capacity-14041F18-511E2738 deletion: volume
id "127.0.0.1///sanity-controller-create-single-with-capacity-14041F18-511E2738" unexpected format: got 4 tokens
I1116 00:09:09.340230  130286 utils.go:53] GRPC response: {}
•SS

What you expected to happen:

How to reproduce it:

Anything else we need to know?:

Environment:

  • CSI Driver version:
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

provide more driver info in logs

Is your feature request related to a problem?/Why is this needed

Describe the solution you'd like in detail

could refer to https://github.com/kubernetes-csi/csi-driver-smb/blob/a9a2e47b1790b18e722e9d0b4d413c4e8a520507/pkg/smb/smb.go#L54-L58

[pod/csi-smb-controller-5dd89b9bf6-vbbzq/smb] DRIVER INFORMATION:
[pod/csi-smb-controller-5dd89b9bf6-vbbzq/smb] -------------------
[pod/csi-smb-controller-5dd89b9bf6-vbbzq/smb] Build Date: "2021-01-09T16:38:17Z"
[pod/csi-smb-controller-5dd89b9bf6-vbbzq/smb] Compiler: gc
[pod/csi-smb-controller-5dd89b9bf6-vbbzq/smb] Driver Name: smb.csi.k8s.io
[pod/csi-smb-controller-5dd89b9bf6-vbbzq/smb] Driver Version: e2e-c091d36a1d9ee8d6026cb06a165bbe49110afcb5
[pod/csi-smb-controller-5dd89b9bf6-vbbzq/smb] Git Commit: c091d36a1d9ee8d6026cb06a165bbe49110afcb5
[pod/csi-smb-controller-5dd89b9bf6-vbbzq/smb] Go Version: go1.15.5
[pod/csi-smb-controller-5dd89b9bf6-vbbzq/smb] Platform: linux/amd64

Describe alternatives you've considered

Additional context

enable spelling and boilerplate check

Is your feature request related to a problem?/Why is this needed

Describe the solution you'd like in detail

#127 disabled following two checks:

  • verify-spelling.sh
  • verify-boilerplate.sh

We need to upstream the above two scripts to https://github.com/kubernetes-csi/csi-release-tools.git and then re-enable them in this project.

Describe alternatives you've considered

Additional context

error: docs/csi-dev.md:38:12: "environmnet" is a misspelling of "environments"
error: docs/csi-dev.md:89:6: "environmnet" is a misspelling of "environments"
error: release-tools/SIDECAR_RELEASE_PROCESS.md:14:17: "maintainence" is a misspelling of "maintenance"
error: release-tools/prow.sh:76:31: "seperately" is a misspelling of "separately"
error: release-tools/prow.sh:296:37: "seperated" is a misspelling of "separated"
error: release-tools/prow.sh:1001:32: "succesful" is a misspelling of "successful"
Found spelling errors!
Cleaning up...
Makefile:46: recipe for target 'verify' failed
make: *** [verify] Error 1
Error: Process completed with exit code 2.

Docs: volumeAttributes clarification

What happened:

Here I see a volumeAttributes.source parameter, but here there is volumeAttributes.server instead.

What you expected to happen:

I guess volumeAttributes.server is correct and volumeAttributes.source is not.
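For reference, the deployment examples elsewhere on this page consistently use server and share under volumeAttributes; a minimal static PV along those lines (addresses illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: nfs.csi.k8s.io
    volumeHandle: pv-nfs                               # unique handle for this volume
    volumeAttributes:
      server: nfs-server.default.svc.cluster.local     # NFS server address
      share: /export                                   # exported path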

Add information to README for csi-driver-nfs

Add the following information:

  • What version of CSI does this driver support
  • What is the status of this driver (alpha).
  • Which of the following (optional) features does this driver support:
    • dynamic provisioning
    • online resize or offline resize
    • block
    • snapshot
  • Which AccessModes does this driver support?

/assign @prateekpandey14

NFS mount point becomes unresponsive after restarting csi node pod

What happened:
NFS mount point becomes unresponsive after restarting csi node pod

What you expected to happen:
The NFS mount point should continue to be accessible after the CSI node pod is restarted.

How to reproduce it:

  • Deploy the CSI driver and use the nginx.yaml example to deploy an application pod.
  • Check that the application pod is running and verify the NFS mount is accessible.
  • Delete the CSI node pod; after it restarts, check the NFS mount again.

Anything else we need to know?:

Environment:

Process killed because out of memory

What happened:
When I try to create multiple PVCs at once, the process inside the NFS controller is killed because the memory cgroup is out of memory:

[1160550.652403] Memory cgroup out of memory: Killed process 2801203 (mount.nfs) total-vm:156432kB, anon-rss:71132kB, file-rss:4296kB, shmem-rss:0kB, UID:0

What you expected to happen:
PVCs are created.
How to reproduce it:
I think it is sufficient to request more resources than this file specifies (around lines 44, 64, 90). It already happened and was resolved only by manually raising the limits the Deployment can request (I put in 10x more).
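A sketch of the kind of manual override described above (the container name and numbers are illustrative, not the project's recommended values):

# Excerpt from the controller Deployment, with the limits raised by hand
containers:
  - name: nfs
    resources:
      limits:
        cpu: 200m
        memory: 2000Mi   # illustrative; the reporter raised the shipped limits roughly 10x
      requests:
        cpu: 10m
        memory: 20Mi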
Anything else we need to know?:

Environment:

  • CSI Driver version: latest master
  • Kubernetes version (use kubectl version): v1.19.3
  • OS (e.g. from /etc/os-release): CentOS Linux 8
  • Kernel (e.g. uname -a): Linux 4.18.0-240.10.1.el8_3.x86_64
  • Install tools:
  • Others:

Create initial release of csi-driver-nfs

  1. Do some manual testing to ensure this NFS driver works as intended.
  2. Create a release-0.1 branch for kubernetes-csi/csi-driver-nfs
  3. Cut an official v0.1.0 release of this driver from the new branch.
  4. Push a csi-driver-nfs:v0.1.0 container (either to quay.io or gcr.io, whatever the Kubernetes CSI team decides).

CC @prateekpandey14 @msau42

flag enable-leader-election not supported any longer

We should update the charts to comply with the latest csi-provisioner image. A possible fix:

containers:
  - name: csi-provisioner
    image: "{{ .Values.image.csiProvisioner.repository }}:{{ .Values.image.csiProvisioner.tag }}"
    args:
      - "-v=5"
      - "--csi-address=$(ADDRESS)"
      - "--leader-election"
      # - "--leader-election-type=leases"

Support CreateVolume or DeleteVolume

Previously, I used nfs-client-provisioner from another repo. Except for the size handling, it works fine because it can provision and delete. Should csi-driver-nfs add this feature? If yes, I can create a PR or discuss it in more detail.

PVC not provisioned because file exists

What happened:
I deployed a PVC with a Deployment, and the PVC was not provisioned, although the directory on the NFS server was created.

What you expected to happen:
The PVC gets created

How to reproduce it:
It's unstable behaviour: sometimes it happens and sometimes it doesn't. Today I deployed 4 PVCs at once and one of them ended up in this state. Logs:

I0209 16:15:51.630511       1 mount_linux.go:146] Mounting cmd (mount) with arguments (-t nfs -o hard,nfsvers=3,nolock 147.251.6.50:/gpfs/vol1/nfs/wes /tmp/pvc-41b9c53e-a0fc-4131-a020-9db9de970342)
I0209 16:15:52.728172       1 controllerserver.go:267] internally unmounting /tmp/pvc-41b9c53e-a0fc-4131-a020-9db9de970342
I0209 16:15:52.728411       1 nodeserver.go:120] NodeUnpublishVolume: CleanupMountPoint /tmp/pvc-41b9c53e-a0fc-4131-a020-9db9de970342 on volumeID(147.251.6.50/gpfs/vol1/nfs/wes/pvc-41b9c53e-a0fc-4131-a020-9db9de970342)
I0209 16:15:52.728528       1 mount_helper_common.go:71] "/tmp/pvc-41b9c53e-a0fc-4131-a020-9db9de970342" is a mountpoint, unmounting
I0209 16:15:52.728585       1 mount_linux.go:238] Unmounting /tmp/pvc-41b9c53e-a0fc-4131-a020-9db9de970342
I0209 16:15:53.344914       1 mount_helper_common.go:85] "/tmp/pvc-41b9c53e-a0fc-4131-a020-9db9de970342" is unmounted, deleting the directory
E0209 16:15:53.345448       1 utils.go:89] GRPC error: rpc error: code = Internal desc = failed to make subdirectory: mkdir /tmp/pvc-41b9c53e-a0fc-4131-a020-9db9de970342/pvc-41b9c53e-a0fc-4131-a020-9db9de970342: file exists

Anything else we need to know?:

Environment:

  • CSI Driver version: latest master
  • Kubernetes version (use kubectl version): v1.19.3
  • OS (e.g. from /etc/os-release): CentOS Linux 8
  • Kernel (e.g. uname -a): Linux 4.18.0-240.10.1.el8_3.x86_64
  • Install tools:
  • Others:

allow specifying the NFS version, or is there any workaround?

csi-driver-nfs always tries an NFSv4 connection.

When trying to connect to an NFSv3-only server, it fails.

If I test from a Linux machine connecting to this NFS server:

root@ubuntu1804:/home/irispulse# mount -v -t nfs 192.168.123.1:/ /mnt
mount.nfs: timeout set for Thu Mar 18 07:18:43 2021
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.123.1,clientaddr=192.168.123.129'
mount.nfs: mount(2): Operation not supported
mount.nfs: requested NFS version or transport protocol is not supported

but if I pass options:

root@ubuntu1804:/home/irispulse# mount -v -t nfs -o nfsvers=3 192.168.123.1:/ /mnt
mount.nfs: timeout set for Thu Mar 18 07:19:49 2021
mount.nfs: trying text-based options 'nfsvers=3,addr=192.168.123.1'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.123.1 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.123.1 prog 100005 vers 3 prot UDP port 1058

it works!
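If and when mount options are honoured end to end (see the separate StorageClass mountOptions issue further down this page), forcing NFSv3 from a static PersistentVolume could look like this (a sketch, not a confirmed workaround):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs-v3
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - nfsvers=3
    - nolock
  csi:
    driver: nfs.csi.k8s.io
    volumeHandle: pv-nfs-v3
    volumeAttributes:
      server: 192.168.123.1   # the NFSv3-only server from the mount tests above
      share: /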

Use case:

I have several testing/dev environments, and the aim is to have lightweight dev environments on a Windows system. There I can use winnfsd (https://github.com/winnfsd/winnfsd) on the main Windows host and run k8s in a VM. Then I can run WinNFSd.exe -pathFile C:\path\to\your\pathfile and simulate the exports of a Linux NFS server. However, WinNFSd only supports NFSv3, and I cannot find any open-source NFSv4 alternative for this kind of lightweight Windows setup.

add e2e test for this driver

Is your feature request related to a problem?/Why is this needed

Describe the solution you'd like in detail

Option 1: re-enable https://github.com/kubernetes-csi/csi-driver-nfs#running-kubernetes-end-to-end-tests-on-an-nfs-driver

Option 2, as below:

1. Install nfs-server-provisioner helm chart

  • The following example provisions 100GB of storage (one data disk) on an agent node, serving as an NFSv3 server:
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm install stable/nfs-server-provisioner --generate-name --set=persistence.storageClass=default,persistence.enabled=true,persistence.size=100Gi

2. Get the NFS server address; in the following example it's 10.0.193.57:/export

# k get svc
NAME                                TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                                                                                                     AGE
nfs-server-provisioner-1599984974   ClusterIP   10.0.193.57   <none>        2049/TCP,2049/UDP,32803/TCP,32803/UDP,20048/TCP,20048/UDP,875/TCP,875/UDP,111/TCP,111/UDP,662/TCP,662/UDP   46m

3. set nfs address in PV:

  csi:
    driver: nfs.csi.k8s.io
    volumeHandle: data-id
    volumeAttributes:
      # The nfs server could be a K8s service
      # server: nfs-server.default.svc.cluster.local
      server: 10.0.193.57
      share: /export

4. create nginx example pod

# k exec -it nginx sh
# mount | grep nfs
10.0.193.57:/export on /var/www type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.244.0.145,local_lock=none,addr=10.0.193.57)

Refer to https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/examples/kubernetes/nginx.yaml (a rough sketch of that pod is shown below).
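The sketch (illustrative only; the nginx.yaml in the repo is authoritative):

kind: Pod
apiVersion: v1
metadata:
  name: nginx-nfs-example
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /var/www      # matches the mount shown in step 4
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-nfs         # assumption: a PVC bound to the PV from step 3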

Describe alternatives you've considered

Additional context

Creating PVCs still results in memory failures

What happened:
Hi,
in the end, doubling the memory limit didn't help (#155).

[16432189.299684] Memory cgroup out of memory: Killed process 3674331 (nfsplugin) total-vm:734676kB, anon-rss:8768kB, file-rss:18444kB, shmem-rss:0kB, UID:0

I wanted to create 4 PVCs, so I raised the limit to 1000Mi. It might be that the more PVCs you want to create at once, the more memory you need. This should be documented somewhere, since some deployments are burstable and will have to create multiple PVCs at once.

What you expected to happen:
create PVCs

How to reproduce it:

Anything else we need to know?:

Environment:

  • CSI Driver version:
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Add unit tests for the driver code

Is your feature request related to a problem?/Why is this needed

Describe the solution you'd like in detail

Describe alternatives you've considered

Additional context

Tests fail on NixOS

What happened:
Test case test/e2e/e2e_suite_test.go:157 fails when building with Nix.

What you expected to happen:
I expected the tests to succeed, as checking whether the project directory ends in csi-driver-nfs doesn't make sense.

How to reproduce it:
Try to build csi-driver-nfs on NixOS with pkgs.buildGoModule and you'll encounter this fun bug

Anything else we need to know?:

Environment:

  • CSI Driver version: N/A
  • Kubernetes version (use kubectl version): N/A
  • OS (e.g. from /etc/os-release): NixOS
  • Kernel (e.g. uname -a): N/A
  • Install tools: N/A
  • Others: N/A

Recommended Fix:
Delete that line

fix `post-csi-driver-nfs-push-images` job

#66 added symlinks for cloudbuild.yaml and cloudbuild.sh to run the post-csi-driver-nfs-push-images job. However, this job is failing, with logs as follows:

Running...
$ARTIFACTS is set, sending logs to /logs/artifacts
2020/10/18 14:48:22 Build directory: .
2020/10/18 14:48:22 Config directory: /home/prow/go/src/github.com/kubernetes-csi/csi-driver-nfs
2020/10/18 14:48:22 cd-ing to build directory: .
2020/10/18 14:48:22 Creating source tarball at /tmp/156290404...
2020/10/18 14:48:24 Uploading /tmp/156290404 to gs://k8s-staging-sig-storage-gcb/source/57e9b50b-34fe-476a-ae36-ccf6f84d13c9.tgz...
Copying file:///tmp/156290404 [Content-Type=application/octet-stream]...
/ [0 files][    0.0 B/  7.2 MiB]                                                
/ [1 files][  7.2 MiB/  7.2 MiB]                                                
Operation completed over 1 objects/7.2 MiB.                                      
2020/10/18 14:48:26 Running build jobs...
2020/10/18 14:48:26 No variants.yaml, starting single build job...
Created [https://cloudbuild.googleapis.com/v1/projects/k8s-staging-sig-storage/builds/79474838-8f9d-48ca-a599-14028ef741f2].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/79474838-8f9d-48ca-a599-14028ef741f2?project=272675062337].
----------------------------- REMOTE BUILD OUTPUT ------------------------------
ERROR: (gcloud.builds.submit) build 79474838-8f9d-48ca-a599-14028ef741f2 completed with status "FAILURE"
starting build "79474838-8f9d-48ca-a599-14028ef741f2"

FETCHSOURCE
Fetching storage object: gs://k8s-staging-sig-storage-gcb/source/1603032507.13-d259672d20a14a8ba8331fd9ae037c9b.tgz#1603032507335179
--------------------------------------------------------------------------------


2020/10/18 14:58:47 Failed to run some build jobs: [error running [gcloud builds submit --verbosity info --config /home/prow/go/src/github.com/kubernetes-csi/csi-driver-nfs/cloudbuild.yaml --substitutions _PULL_BASE_REF=master,_GIT_TAG=v20201018-v2.0.0-83-g108aef3 --project k8s-staging-sig-storage --gcs-log-dir gs://k8s-staging-sig-storage-gcb/logs --gcs-source-staging-dir gs://k8s-staging-sig-storage-gcb/source gs://k8s-staging-sig-storage-gcb/source/57e9b50b-34fe-476a-ae36-ccf6f84d13c9.tgz]: exit status 1]

exportfs: /exports does not support NFS export

What happened:
I took a fresh Ubuntu 20 VM running a local kind k8s cluster and installed the CSI driver using the command:

./deploy/install-driver.sh

After that I tried to install the NFS server by applying the YAML

deploy/example/nfs-provisioner/nfs-server.yaml

but the container doesn't start. I get this error in the pod logs:

Writing SHARED_DIRECTORY to /etc/exports file
The PERMITTED environment variable is unset or null, defaulting to '*'.
This means any client can mount.
The READ_ONLY environment variable is unset or null, defaulting to 'rw'.
Clients have read/write access.
The SYNC environment variable is unset or null, defaulting to 'async' mode.
Writes will not be immediately written to disk.
Displaying /etc/exports contents:
/exports *(rw,fsid=0,async,no_subtree_check,no_auth_nlm,insecure,no_root_squash)

Starting rpcbind...
Displaying rpcbind status...
   program version netid     address                service    owner
    100000    4    tcp6      ::.0.111               -          superuser
    100000    3    tcp6      ::.0.111               -          superuser
    100000    4    udp6      ::.0.111               -          superuser
    100000    3    udp6      ::.0.111               -          superuser
    100000    4    tcp       0.0.0.0.0.111          -          superuser
    100000    3    tcp       0.0.0.0.0.111          -          superuser
    100000    2    tcp       0.0.0.0.0.111          -          superuser
    100000    4    udp       0.0.0.0.0.111          -          superuser
    100000    3    udp       0.0.0.0.0.111          -          superuser
    100000    2    udp       0.0.0.0.0.111          -          superuser
    100000    4    local     /var/run/rpcbind.sock  -          superuser
    100000    3    local     /var/run/rpcbind.sock  -          superuser
Starting NFS in the background...
Exporting File System...
rpc.nfsd: knfsd is currently up
exporting *:/exports
exportfs: /exports does not support NFS export
Export validation failed, exiting...

What you expected to happen:
I expect the above YAML to work as is. I tried the same on macOS and got the same error there as well.

Environment:

  • CSI Driver version: latest master
  • Kubernetes version (use kubectl version): v.1.20
  • OS (e.g. from /etc/os-release): ubuntu 20
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

Image "quay.io/k8scsi/nfsplugin:v1.0.0" not found

I was trying to deploy the NFS driver, but two pods failed to come up because the nfsplugin image could not be found.

$ kubectl get pods
NAME READY STATUS RESTARTS AGE
csi-attacher-nfsplugin-0 1/2 ImagePullBackOff 2 4m52s
csi-nodeplugin-nfsplugin-7ldnq 0/2 ImagePullBackOff 3 4m52s

$ kubectl describe pod csi-attacher-nfsplugin-0
.....
.....

Events:
Type Reason Age From Message


Normal Pulling 4m59s kubelet, dsib1243 pulling image "quay.io/k8scsi/csi-attacher:v1.0.1"
Normal Scheduled 4m59s default-scheduler Successfully assigned default/csi-attacher-nfsplugin-0 to dsib1243
Normal Pulled 4m52s kubelet, dsib1243 Successfully pulled image "quay.io/k8scsi/csi-attacher:v1.0.1"
Normal Created 4m52s kubelet, dsib1243 Created container
Normal Started 4m52s kubelet, dsib1243 Started container
Normal Pulling 4m8s (x3 over 4m52s) kubelet, dsib1243 pulling image "quay.io/k8scsi/nfsplugin:v1.0.0"
Warning Failed 4m6s (x3 over 4m51s) kubelet, dsib1243 Failed to pull image "quay.io/k8scsi/nfsplugin:v1.0.0": rpc error: code = Unknown desc = Error response from daemon: manifest for quay.io/k8scsi/nfsplugin:v1.0.0 not found
Warning Failed 4m6s (x3 over 4m51s) kubelet, dsib1243 Error: ErrImagePull
Normal BackOff 3m28s (x6 over 4m51s) kubelet, dsib1243 Back-off pulling image "quay.io/k8scsi/nfsplugin:v1.0.0"
Warning Failed 3m28s (x6 over 4m51s) kubelet, dsib1243 Error: ImagePullBackOff

$ docker image pull quay.io/k8scsi/nfsplugin:v1.0.0
Error response from daemon: manifest for quay.io/k8scsi/nfsplugin:v1.0.0 not found

Sanity test is not idempotent

What happened:
If you run the sanity test twice, one test fails; it seems one volume is left mounted.
sudo umount -f /tmp/aa* helps.

------------------------------
Controller Service CreateVolume
  should not fail when creating volume with maximum-length name
  /home/azureuser/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/controller.go:639
STEP: reusing connection to CSI driver at unix:///tmp/csi.sock
STEP: creating mount and staging directories
I1116 00:09:09.340667  130286 utils.go:47] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
I1116 00:09:09.340688  130286 utils.go:48] GRPC request: {}
I1116 00:09:09.340731  130286 controllerserver.go:175] Using default ControllerGetCapabilities
I1116 00:09:09.340739  130286 utils.go:53] GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}}]}
STEP: creating a volume
I1116 00:09:09.341127  130286 utils.go:47] GRPC call: /csi.v1.Controller/CreateVolume
I1116 00:09:09.341142  130286 utils.go:48] GRPC request: {"capacity_range":{"required_bytes":10737418240},"name":"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa","parameters":{"server":"127.0.0.1","share":"/"},"volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":1}}]}
I1116 00:09:09.341232  130286 controllerserver.go:243] internally mounting 127.0.0.1:/ at /tmp/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
I1116 00:09:09.378808  130286 controllerserver.go:261] internally unmounting /tmp/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
W1116 00:09:09.378835  130286 controllerserver.go:75] failed to unmount nfs server: rpc error: code = InvalidArgument desc = Volume ID missing in request
E1116 00:09:09.378842  130286 utils.go:51] GRPC error: rpc error: code = Internal desc = failed to make subdirectory: mkdir /tmp/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa: file exists

• Failure [0.039 seconds]
Controller Service
/home/azureuser/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/tests.go:44
  CreateVolume
  /home/azureuser/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/controller.go:358
    should not fail when creating volume with maximum-length name [It]
    /home/azureuser/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/controller.go:639

    Unexpected error:
        <*status.statusError | 0xc0003e4840>: {
            state: {
                NoUnkeyedLiterals: {},
                DoNotCompare: [],
                DoNotCopy: [],
                atomicMessageInfo: nil,
            },
            sizeCache: 0,
            unknownFields: nil,
            Code: 13,
            Message: "failed to make subdirectory: mkdir /tmp/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa: file exists",
            Details: nil,
        }
        rpc error: code = Internal desc = failed to make subdirectory: mkdir /tmp/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa: file exists
    occurred

    /home/azureuser/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/controller.go:670
------------------------------

What you expected to happen:

How to reproduce it:

Anything else we need to know?:

Environment:

  • CSI Driver version:
  • Kubernetes version (use kubectl version):
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:

production ready and external NFS support

Hi,

I want to know whether there is any target for the NFS plugin to be supported on k8s for production environments.

I also want to know whether this plugin supports external NFSv4/NFSv3 servers that are not installed on the k8s cluster.

BR, Eliyahu

StorageClass mountOptions not being applied to mount arguments

What happened:
NFS mountOptions not applied, causing NFS mount to fail.

I1208 18:53:37.938380       1 utils.go:47] GRPC call: /csi.v1.Controller/CreateVolume
I1208 18:53:37.938406       1 utils.go:48] GRPC request: {"capacity_range":{"required_bytes":1073741824},"name":"pvc-85810e45-fc85-4dd7-8f63-a27b897104f9","parameters":{"server":"10.0.0.58","share":"/Users/user/dev/exports"},"volume_capabilities":[{"AccessType":{"Mount":{"mount_flags":["vers=3","nolock"]}},"access_mode":{"mode":5}}]}
I1208 18:53:37.941805       1 controllerserver.go:239] internally mounting 10.0.0.58:/Users/user/dev/exports at /tmp/pvc-85810e45-fc85-4dd7-8f63-a27b897104f9
E1208 18:53:48.362083       1 mount_linux.go:150] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 10.0.0.58:/Users/user/dev/exports /tmp/pvc-85810e45-fc85-4dd7-8f63-a27b897104f9
Output: mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: Protocol not supported

Requested options: {"mount_flags":["vers=3","nolock"]}
Resulting mounting arguments (no -o options applied): -t nfs 10.0.0.58:/Users/user/dev/exports /tmp/pvc-85810e45-fc85-4dd7-8f63-a27b897104f9

What you expected to happen:
NFS mount options to be applied to the mount command.

How to reproduce it:
The following StorageClass and PersistentVolumeClaims were used.

# StorageClass #

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.0.0.58
  share: /Users/user/dev/exports
reclaimPolicy: Retain # only retain is supported
volumeBindingMode: Immediate
mountOptions:
  - vers=3
  - nolock
# PVC #

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-csi

Anything else we need to know?:

Environment:

  • CSI Driver version:
    master
  • Kubernetes version (use kubectl version):
    v1.19.4
  • OS (e.g. from /etc/os-release):
    Fedora 32 (Server Edition)
  • Kernel (e.g. uname -a):
    Linux 5.9.11-100.fc32.x86_64
  • Install tools:
    kubectl remote install
  • Others:

Enable k8s e2e tests in CI

Once CreateVolume is supported, it will be easier to run the K8s external storage e2e test suite.

One more challenge, though, is getting the NFS driver to run on a kind cluster; there may be dependencies on kernel modules or system libraries.

csi-nfs-controller pods on the same node give a CrashLoopBackOff

What happened:

deployed the csi-driver-nfs on a cluster via kubectl apply -f ... and the static manifests.

What you expected to happen:

All pods are running in a healthy state.

How to reproduce it:

Deploy the manifest.

Anything else we need to know?:

Both pods are scheduled on the same node, even though this is a 5-node cluster with three control-plane nodes and two workers.

$ kubectl -n kube-system get pod -o wide -l app=csi-nfs-controller
NAME                                   READY   STATUS             RESTARTS    AGE   IP              NODE            NOMINATED NODE   READINESS GATES
csi-nfs-controller-84f58c6dcb-n6r6w    2/3     CrashLoopBackOff   201        14h   10.250.10.248    worker1   <none>           <none>
csi-nfs-controller-84f58c6dcb-szzsw    3/3     Running            19         14h   10.250.10.248    worker1   <none>           <none>
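One possible mitigation (a sketch, assuming the controller Deployment can be patched; not necessarily how the project will address it) is pod anti-affinity so that the replicas land on different nodes:

# Excerpt for the csi-nfs-controller Deployment pod template
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: csi-nfs-controller
        topologyKey: kubernetes.io/hostname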

Environment:

  • CSI Driver version: v2.1.0
  • Kubernetes version (use kubectl version):
serverVersion:
  buildDate: "2021-01-13T13:20:00Z"
  compiler: gc
  gitCommit: faecb196815e248d3ecfb03c680a4507229c2a56
  gitTreeState: clean
  gitVersion: v1.20.2
  goVersion: go1.15.5
  major: "1"
  minor: "20"
  platform: linux/amd64
  • OS (e.g. from /etc/os-release): 18.04.5 LTS (Bionic Beaver)

csi-attacher unable to connect to API server

I tried the plugin on Ubuntu 16.04 / Docker 18.06 / Kubernetes 1.13.4 and got the following error in the csi-attacher container. How do I debug this issue?

$ kubectl logs csi-attacher-nfsplugin-0 csi-attacher
I0413 14:55:39.129774 1 main.go:76] Version: v1.0.1-0-gb7dadac
I0413 14:55:39.131980 1 connection.go:89] Connecting to /csi/sockets/pluginproxy/csi.sock
I0413 14:55:39.136236 1 connection.go:116] Still trying, connection is CONNECTING
I0413 14:55:39.141327 1 connection.go:116] Still trying, connection is TRANSIENT_FAILURE
I0413 14:55:40.136586 1 connection.go:116] Still trying, connection is CONNECTING
I0413 14:55:40.137052 1 connection.go:113] Connected
I0413 14:55:40.137116 1 connection.go:242] GRPC call: /csi.v1.Identity/Probe
I0413 14:55:40.137133 1 connection.go:243] GRPC request: {}
I0413 14:55:40.141847 1 connection.go:245] GRPC response: {}
I0413 14:55:40.142573 1 connection.go:246] GRPC error:
I0413 14:55:40.142586 1 main.go:211] Probe succeeded
I0413 14:55:40.142616 1 connection.go:242] GRPC call: /csi.v1.Identity/GetPluginInfo
I0413 14:55:40.142638 1 connection.go:243] GRPC request: {}
I0413 14:55:40.143985 1 connection.go:245] GRPC response: {"name":"csi-nfsplugin","vendor_version":"1.0.0-rc2"}
I0413 14:55:40.144585 1 connection.go:246] GRPC error:
I0413 14:55:40.144595 1 main.go:128] CSI driver name: "csi-nfsplugin"
I0413 14:55:40.144606 1 connection.go:242] GRPC call: /csi.v1.Identity/GetPluginCapabilities
I0413 14:55:40.144612 1 connection.go:243] GRPC request: {}
I0413 14:55:40.145717 1 connection.go:245] GRPC response: {"capabilities":[{"Type":{"Service":{"type":1}}}]}
I0413 14:55:40.147553 1 connection.go:246] GRPC error:
I0413 14:55:40.147567 1 connection.go:242] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
I0413 14:55:40.147741 1 connection.go:243] GRPC request: {}
I0413 14:55:40.149565 1 connection.go:245] GRPC response: {"capabilities":[{"Type":{"Rpc":{}}}]}
I0413 14:55:40.154085 1 connection.go:246] GRPC error:
I0413 14:55:40.154097 1 main.go:155] CSI driver does not support ControllerPublishUnpublish, using trivial handler
I0413 14:55:40.154375 1 controller.go:111] Starting CSI attacher
E0413 14:56:10.155897 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: Get https://10.96.0.1:443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0413 14:56:10.155947 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.VolumeAttachment: Get https://10.96.0.1:443/apis/storage.k8s.io/v1beta1/volumeattachments?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0413 14:56:41.157868 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: Get https://10.96.0.1:443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0413 14:56:41.158958 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.VolumeAttachment: Get https://10.96.0.1:443/apis/storage.k8s.io/v1beta1/volumeattachments?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0413 14:57:12.158694 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: Get https://10.96.0.1:443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0413 14:57:12.159482 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1beta1.VolumeAttachment: Get https://10.96.0.1:443/apis/storage.k8s.io/v1beta1/volumeattachments?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0413 14:57:43.159441 1 reflector.go:134] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.PersistentVolume: Get https://10.96.0.1:443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

fix sanity test failures

  • make sanity-test
Summarizing 23 Failures:

[Fail] Controller Service [Controller Server] ControllerGetCapabilities [It] should return appropriate capabilities
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/controller.go:143

[Fail] Controller Service [Controller Server] ValidateVolumeCapabilities [It] should fail when no volume id is provided
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/controller.go:931

[Fail] Controller Service [Controller Server] ValidateVolumeCapabilities [It] should fail when no volume capabilities are provided
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/controller.go:958

[Fail] Controller Service [Controller Server] ValidateVolumeCapabilities [It] should return appropriate values (no optional values added)
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/controller.go:1013

[Fail] Controller Service [Controller Server] ValidateVolumeCapabilities [It] should fail when the requested volume does not exist
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/controller.go:1082

[Fail] Node Service NodePublishVolume [It] should fail when no volume id is provided
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/node.go:183

[Fail] Node Service NodePublishVolume [It] should fail when no target path is provided
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/node.go:198

[Fail] Node Service NodePublishVolume [It] should fail when no volume capability is provided
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/node.go:214

[Fail] Node Service [BeforeEach] NodeUnpublishVolume should fail when no volume id is provided
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/sanity.go:234

[Fail] Node Service [BeforeEach] NodeUnpublishVolume should fail when no target path is provided
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/sanity.go:234

[Fail] Node Service [BeforeEach] NodeStageVolume should fail when no volume id is provided
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/sanity.go:234

[Fail] Node Service [BeforeEach] NodeStageVolume should fail when no staging target path is provided
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/sanity.go:234

[Fail] Node Service [BeforeEach] NodeStageVolume should fail when no volume capability is provided
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/sanity.go:234

[Fail] Node Service [BeforeEach] NodeUnstageVolume should fail when no volume id is provided
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/sanity.go:234

[Fail] Node Service [BeforeEach] NodeUnstageVolume should fail when no staging target path is provided
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/sanity.go:234

[Fail] Node Service [BeforeEach] NodeGetVolumeStats should fail when no volume id is provided
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/sanity.go:234

[Fail] Node Service [BeforeEach] NodeGetVolumeStats should fail when no volume path is provided
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/sanity.go:234

[Fail] Node Service [BeforeEach] NodeGetVolumeStats should fail when volume is not found
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/sanity.go:234

[Fail] Node Service [BeforeEach] NodeGetVolumeStats should fail when volume does not exist on the specified path
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/sanity.go:234

[Fail] Node Service [BeforeEach] should work
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/sanity.go:234

[Fail] DeleteSnapshot [Controller Server] [BeforeEach] should fail when no snapshot id is provided
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/sanity.go:234

[Fail] DeleteSnapshot [Controller Server] [BeforeEach] should succeed when an invalid snapshot id is used
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/sanity.go:234

[Fail] DeleteSnapshot [Controller Server] [BeforeEach] should return appropriate values (no optional values added)
/root/go/pkg/mod/github.com/kubernetes-csi/[email protected]+incompatible/pkg/sanity/sanity.go:234

Ran 28 of 72 Specs in 0.168 seconds
FAIL! -- 5 Passed | 23 Failed | 0 Pending | 44 Skipped
--- FAIL: TestSanity (0.18s)
FAIL