
local-path-provisioner's People

Contributors

ajdexter, albanbedel, anothertobi, anthonyenr1quez, dchirikov, derekbit, ibuildthecloud, icefed, innobead, js185692, justusbunsi, kate-goldenring, kevinzwang, km4rcus, kmova, liupeng0518, mantissahz, meln5674, mgoltzsche, mmeinzer, mnorrsken, nicktming, nltimv, sbocinec, sergelogvinov, skyoo2003, tamalsaha, tgfree7, visokoo, yasker


local-path-provisioner's Issues

High CPU Usage

See #14

I am using k3s on an i5 and the local path provisioner (version v0.0.11) sits at 18% CPU:
[screenshot: CPU usage graph]

Steps to reproduce:

  1. Install k3s on x86 Hardware
  2. Use the local path provisioner

Interestingly enough, I am also running k3s on a cloud server where the local-path-provisioner is deployed but not in use, and it doesn't consume nearly as many resources there.

Path Provisioner deletes volume from disk

After rebooting our development server, it seems that the path provisioner completely deleted the MongoDB volume directory. We are using Rancher 2.2.8 with Kubernetes 1.14.6.

What could cause this? On our development server we use the MongoDB replica set Helm chart, but with only one instance at the moment.

Here is the output from the local-path-provisioner, which indeed shows that the volume was deleted:

time="2019-10-07T11:04:15Z" level=debug msg="Applied config: {\"nodePathMap\":[{\"node\":\"DEFAULT_PATH_FOR_NON_LISTED_NODES\",\"paths\":[\"/opt/local-path-provisioner\"]}]}" 
time="2019-10-07T11:04:15Z" level=debug msg="Provisioner started" 
time="2019-10-07T11:09:21Z" level=info msg="Deleting volume pvc-8316611a-de0f-11e9-856d-e23a50295529 at dev:/opt/local-path-provisioner/pvc-8316611a-de0f-11e9-856d-e23a50295529" 
time="2019-10-07T11:09:24Z" level=info msg="Volume pvc-8316611a-de0f-11e9-856d-e23a50295529 has been deleted on dev:/opt/local-path-provisioner/pvc-8316611a-de0f-11e9-856d-e23a50295529" ```

Helm chart deploy fails for k3s

Error: Chart requires kubernetesVersion: >=1.12.0 which is incompatible with Kubernetes v1.14.1-k3s.4
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-20T04:49:16Z", GoVersion:"go1.12.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1-k3s.4", GitCommit:"52f3b42401c93c36467f1fd6d294a3aba26c7def", GitTreeState:"clean", BuildDate:"2019-04-15T22:13+00:00Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}

Security context not respected

I'm trying to use local-path-provisioner with kind. While it seems to generally work with multi-node clusters, security contexts are not respected. Volumes are always mounted with root as group. Here's a simple example that demonstrates this:

apiVersion: v1
kind: Pod
metadata:
  name: local-path-test
  labels:
    app.kubernetes.io/name: local-path-test
spec:
  containers:
    - name: test
      image: busybox
      command:
        - /config/test.sh
      volumeMounts:
        - name: test
          mountPath: /test
        - name: config
          mountPath: /config
  securityContext:
    fsGroup: 1000
    runAsNonRoot: true
    runAsUser: 1000
  terminationGracePeriodSeconds: 0
  volumes:
    - name: test
      persistentVolumeClaim:
        claimName: local-path-test
    - name: config
      configMap:
        name: local-path-test
        defaultMode: 0555

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-path-test
  labels:
    app.kubernetes.io/name: local-path-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: "1Gi"
  storageClassName: local-path

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-path-test
  labels:
    app.kubernetes.io/name: local-path-test
data:
  test.sh: |
    #!/bin/sh

    ls -al /test

    echo 'Hello from local-path-test'
    cp /config/text.txt /test/test.txt
    touch /test/foo

  text.txt: |
    some test content

Here's the log from the container:

total 4
drwxr-xr-x    2 root     root            40 Feb 22 09:50 .
drwxr-xr-x    1 root     root          4096 Feb 22 09:50 ..
Hello from local-path-test
cp: can't create '/test/test.txt': Permission denied
touch: /test/foo: Permission denied

As can be seen, the mounted volume has root as its group instead of 1000 as specified by the security context. I also installed local-path-provisioner on Docker4Mac; the result is the same, so it is not a kind issue. Using the default storage class on Docker4Mac, it works as expected.
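A common workaround for hostPath-backed volumes (shown here only as a sketch, not an upstream fix) is to have a root initContainer chown the mount before the non-root main container starts. The pod below reuses the claim from the example above, but the container command and the 1000:1000 UID/GID are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: local-path-test-chown
spec:
  initContainers:
    # Runs as root (the busybox default) purely to fix ownership of the volume.
    - name: fix-permissions
      image: busybox
      command: ["sh", "-c", "chown -R 1000:1000 /test"]
      volumeMounts:
        - name: test
          mountPath: /test
  containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "touch /test/foo && ls -al /test && sleep 3600"]
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        runAsGroup: 1000
      volumeMounts:
        - name: test
          mountPath: /test
  volumes:
    - name: test
      persistentVolumeClaim:
        claimName: local-path-test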

Any way to persist data?

It seems the default option is "delete" on the Persistent Volume. I suspect that is by design, but I was curious whether there is a way to persist the data.
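One standard Kubernetes knob, independent of this particular provisioner, is the StorageClass reclaimPolicy: dynamically provisioned PVs inherit it, and Retain keeps the PV object and its data directory when the PVC is deleted. A minimal sketch (the class name here is made up):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path-retain            # illustrative name
provisioner: rancher.io/local-path   # provisioner name used by this project
reclaimPolicy: Retain                # keep PV and data after the PVC is deleted
volumeBindingMode: WaitForFirstConsumer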

Be able to control the image used for volume provisioning

kubectl -n local-path-storage-applog describe pod create-pvc-536367fb-8185-11e9-a1e6-eeeeeeeeeeee :

  Type     Reason   Age                 From                   Message
  ----     ------   ----                ----                   -------
  Normal   Pulling  30s (x4 over 119s)  kubelet, 10.246.89.91  pulling image "busybox"
  Warning  Failed   29s (x4 over 118s)  kubelet, 10.246.89.91  Failed to pull image "busybox": rpc error: code = Unknown desc = Get https://registry-1.docker.io/v2/: EOF
  Warning  Failed   29s (x4 over 118s)  kubelet, 10.246.89.91  Error: ErrImagePull
  Normal   BackOff  16s (x6 over 117s)  kubelet, 10.246.89.91  Back-off pulling image "busybox"
  Warning  Failed   4s (x7 over 117s)   kubelet, 10.246.89.91  Error: ImagePullBackOff

Would it be possible to pass args to control the full image URL (host/registry/tag)? We have to run through a proxy.

How to re-use existing PVC?

Hi,
I've got this working in my setup, where the PVC is created on the node, on mounted block storage located at /mnt/dev_primary_lon1.

However, the PVC is created in a UUID-based folder:
/mnt/dev_primary_lon1/pvc-xxxxxxx

If the node is somehow destroyed, the data should be safe, as it's on the block storage. But if I were to spin up a new node, is there any way to point it to the existing PVC?

Or simply to create a PVC with a defined name in the first place?

Regards,
Andy
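One way to re-attach existing data (a sketch, not a built-in feature of the provisioner) is to create a PV by hand that points at the old directory and a PVC that binds to it explicitly via volumeName, so dynamic provisioning is skipped. The PV/PVC names, the size and the node name are placeholders, and the pvc-xxxxxxx path stands for the existing data directory:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: recovered-data               # placeholder name
spec:
  capacity:
    storage: 10Gi                    # match the original claim size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-path
  hostPath:
    path: /mnt/dev_primary_lon1/pvc-xxxxxxx   # existing data directory
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - new-node-name      # placeholder: node where the disk is mounted
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: recovered-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  volumeName: recovered-data         # bind to the hand-made PV explicitly
  resources:
    requests:
      storage: 10Gi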

Docker URL in a corporate setup

How do I configure the Docker registry URL to point to a private registry? The provisioner looks for a busybox image from Docker Hub to create the folders. How do I set up the provisioner in an air-gapped environment?

Unknown Permission Issues

Hi guys

I've been using the lpp for some time, then I decided to try something new.

Use case: edge cluster with shared folders
Status: in vm simulation, using vbox shared folders

Configuration: lpp cm updated to point to shared folder /persistentvolume instead of /var/lib/rancher/k3s/storage

Issue: nginx can read files from the persistent volume configured on the shared folder without issues, but Prometheus can't write to the folder and crashes.

The only real debug output I can get is the Prometheus error log, which says only "I can't write, so I'm panicking".
Full debug here
https://rancher-users.slack.com/archives/CGGQEHPPW/p1583445098193200?thread_ts=1583445098.193200&cid=CGGQEHPPW

Suggestion: additional documentation, helpful info, logs, and configs about the permissions necessary for lpp to work.

Log records start with ERROR: logging before flag.Parse:

Kubernetes: 1.16.0
local-path-provisioner: v0.0.11

Now the log looks like this:

time="2019-10-02T14:23:12Z" level=debug msg="Applied config: {\"nodePathMap\":[{\"node\":\"DEFAULT_PATH_FOR_NON_LISTED_NODES\",\"paths\":[\"/opt/local-path-provisioner\"]}]}" 
time="2019-10-02T14:23:12Z" level=debug msg="Provisioner started" 
ERROR: logging before flag.Parse: I1002 14:23:12.817611       1 leaderelection.go:187] attempting to acquire leader lease  local-path-storage/rancher.io-local-path...
ERROR: logging before flag.Parse: I1002 14:23:12.894052       1 leaderelection.go:196] successfully acquired lease local-path-storage/rancher.io-local-path
ERROR: logging before flag.Parse: I1002 14:23:12.894163       1 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"local-path-storage", Name:"rancher.io-local-path", UID:"b8b35657-ea50-4a08-a1a2-e63087b44ffe", APIVersion:"v1", ResourceVersion:"663678", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' local-path-provisioner-56db8cbdb5-6lfgx_2a9d3a0c-e520-11e9-95fc-865370074a10 became leader
ERROR: logging before flag.Parse: I1002 14:23:12.894239       1 controller.go:572] Starting provisioner controller rancher.io/local-path_local-path-provisioner-56db8cbdb5-6lfgx_2a9d3a0c-e520-11e9-95fc-865370074a10!
ERROR: logging before flag.Parse: I1002 14:23:12.994573       1 controller.go:621] Started provisioner controller rancher.io/local-path_local-path-provisioner-56db8cbdb5-6lfgx_2a9d3a0c-e520-11e9-95fc-865370074a10!

[FEATURE REQUEST] More obvious folder names

This is a feature request but I would like to suggest a change to how the folders are named.

When browsing the file system, the names are non-obvious. Other provisioners, such as the NFS provisioner, include the namespace and claim name in the folder name along with the PV name.

So where you have https://github.com/rancher/local-path-provisioner/blob/master/provisioner.go#L183-L184

NFS provisioner has https://github.com/kubernetes-incubator/external-storage/blob/master/nfs-client/cmd/nfs-client-provisioner/provisioner.go#L62-L65

It looks like a simple change and I can do a PR if you agree

disabling pv affinity

I'm using local-path-provisioner with a directory backed by Gluster that is shared across all my nodes, but I can't have multiple pods sharing a PVC on multiple nodes because the PV has nodeAffinity.

Is it possible to disable the node affinity of the pv?
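Probably not through the provisioner itself, but for a directory that is mounted identically on every node (such as a Gluster mount) a hand-written hostPath PV can simply omit nodeAffinity. A sketch, with placeholder name, size, path and class:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-gluster-dir           # placeholder name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany                  # access modes are not enforced for hostPath
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual           # a class no dynamic provisioner serves
  hostPath:
    path: /mnt/gvol/shared-dir       # placeholder path on the shared mount
  # No nodeAffinity: any node can mount it, since the path exists everywhere.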

Why does the provisioner only support ReadWriteOnce?

Why does the provisioner only support ReadWriteOnce PVCs and not ReadOnlyMany/ReadWriteMany?

Since it's just a node-local directory, there's no problem with having multiple writers/readers as long as the application supports this.

"didn't find available persistent volumes to bind"

0/5 nodes are available: 1 node(s) were out of disk space, 2 node(s) were not ready, 3 node(s) didn't find available persistent volumes to bind.

Hey, I ran through the quick start here and ran into the above issue on our bare-metal RKE cluster. The two not-ready nodes are legit, but there's plenty of space on the nodes.

It doesn't look like the provisioner was called at all to create the PV?
time="2019-03-01T15:23:54Z" level=debug msg="Applied config: {\"nodePathMap\":[{\"node\":\"DEFAULT_PATH_FOR_NON_LISTED_NODES\",\"paths\":[\"/opt/local-path-provisioner\"]}]}"
time="2019-03-01T15:23:54Z" level=debug msg="Provisioner started"

PV always stores data on one node

I have two nodes: one is k3s-master and the other is k3s-node. When I installed the local-path-provisioner deployment it was scheduled to k3s-node, and I created a PVC with a pod to use it, but the PV always stores its data on k3s-master. How can I make it store data on k3s-node?

Error: attempting to acquire leader lease local-path-storage/rancher.io-local-path...

Issue

When I create a kind Kubernetes cluster that uses rancher/local-path as the default StorageClass, a PV or PVC cannot be created, and the log of the Rancher provisioning controller reports the following leader election error:

kc -n local-path-storage logs -f local-path-provisioner-7745554f7f-tm74b    
time="2020-04-21T15:00:23Z" level=debug msg="Applied config: {\"nodePathMap\":[{\"node\":\"DEFAULT_PATH_FOR_NON_LISTED_NODES\",\"paths\":[\"/var/local-path-provisioner\"]}]}" 
time="2020-04-21T15:00:23Z" level=debug msg="Provisioner started" 
ERROR: logging before flag.Parse: I0421 15:00:23.953416       1 leaderelection.go:187] attempting to acquire leader lease  local-path-storage/rancher.io-local-path...
ERROR: logging before flag.Parse: I0421 15:00:23.961715       1 leaderelection.go:196] successfully acquired lease local-path-storage/rancher.io-local-path
ERROR: logging before flag.Parse: I0421 15:00:23.962264       1 controller.go:572] Starting provisioner controller rancher.io/local-path_local-path-provisioner-7745554f7f-tm74b_d3eb09fa-83e0-11ea-aa04-9aae5601dfcc!
ERROR: logging before flag.Parse: I0421 15:00:23.962454       1 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"local-path-storage", Name:"rancher.io-local-path", UID:"c36aade7-becd-403c-a21f-ad8d43e2ecac", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' local-path-provisioner-7745554f7f-tm74b_d3eb09fa-83e0-11ea-aa04-9aae5601dfcc became leader
ERROR: logging before flag.Parse: I0421 15:00:24.062761       1 controller.go:621] Started provisioner controller rancher.io/local-path_local-path-provisioner-7745554f7f-tm74b_d3eb09fa-83e0-11ea-aa04-9aae5601dfcc!

What could be the issue?

Persistent Volume name is dynamic (e.g. pvc-5f9522a8-b900-11e9-b3d4-005056a04e8b)

Since k8s local storage doesn't support dynamic provisioning yet, I found the local-path-provisioner from Rancher, which seems to fill this gap. Deploying the examples mentioned in the documentation, I see PVs with random names generated:
Example:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-exciting-jellyfish-rabbitmq-ha-0 Bound pvc-5f9522a8-b900-11e9-b3d4-005056a04e8b 8Gi RWO standard 8m49s
data-exciting-jellyfish-rabbitmq-ha-1 Bound pvc-8b201f83-b900-11e9-b3d4-005056a04e8b 8Gi RWO standard 7m36s
data-exciting-jellyfish-rabbitmq-ha-2 Bound pvc-b6835ba6-b900-11e9-b3d4-005056a04e8b 8Gi RWO standard 6m23s

On the node, the folder /opt/local-path-provisioner/pvc-e8ade5da-b2bf-11e9-a8b0-d43d7eed0a97/ was generated. It is possible to get the link from PVC to PV, but this makes things more complicated, especially when there are many PVCs and manually looking into the volumes is required for debugging, or when exporting/importing backups.

I'd like a more human-readable volume name that I can specify in the PVC file, so that after the PV is created I know the directory name where my application will store its files.

Are these error messages?

I followed the instructions in the README.md. Everything seems to be working perfectly until it's time to write data to the volume (via the pod). A bunch of log lines are spit out, and at first glance it looks like nothing is written:

kubectl exec volume-test -- sh -c "echo local-path-test > /data/test"

I0225 10:36:49.840600   22359 log.go:172] (0xc0009d00b0) (0xc0007214a0) Create stream
I0225 10:36:49.840850   22359 log.go:172] (0xc0009d00b0) (0xc0007214a0) Stream added, broadcasting: 1
I0225 10:36:49.845332   22359 log.go:172] (0xc0009d00b0) Reply frame received for 1
I0225 10:36:49.845357   22359 log.go:172] (0xc0009d00b0) (0xc0006e7ae0) Create stream
I0225 10:36:49.845365   22359 log.go:172] (0xc0009d00b0) (0xc0006e7ae0) Stream added, broadcasting: 3
I0225 10:36:49.848761   22359 log.go:172] (0xc0009d00b0) Reply frame received for 3
I0225 10:36:49.848779   22359 log.go:172] (0xc0009d00b0) (0xc000966000) Create stream
I0225 10:36:49.848787   22359 log.go:172] (0xc0009d00b0) (0xc000966000) Stream added, broadcasting: 5
I0225 10:36:49.852192   22359 log.go:172] (0xc0009d00b0) Reply frame received for 5
I0225 10:36:49.977345   22359 log.go:172] (0xc0009d00b0) (0xc0006e7ae0) Stream removed, broadcasting: 3
I0225 10:36:49.977387   22359 log.go:172] (0xc0009d00b0) Data frame received for 1
I0225 10:36:49.977405   22359 log.go:172] (0xc0007214a0) (1) Data frame handling
I0225 10:36:49.977427   22359 log.go:172] (0xc0007214a0) (1) Data frame sent
I0225 10:36:49.977447   22359 log.go:172] (0xc0009d00b0) (0xc0007214a0) Stream removed, broadcasting: 1
I0225 10:36:49.977695   22359 log.go:172] (0xc0009d00b0) (0xc000966000) Stream removed, broadcasting: 5
I0225 10:36:49.977721   22359 log.go:172] (0xc0009d00b0) (0xc0007214a0) Stream removed, broadcasting: 1
I0225 10:36:49.977735   22359 log.go:172] (0xc0009d00b0) (0xc0006e7ae0) Stream removed, broadcasting: 3
I0225 10:36:49.977766   22359 log.go:172] (0xc0009d00b0) (0xc000966000) Stream removed, broadcasting: 5
I0225 10:36:49.977791   22359 log.go:172] (0xc0009d00b0) Go away received

They don't look like critical errors, and data is actually being written; if I continue with the instructions, everything happens as expected.

Local path provisioner is restarting with the following error

 F0907 17:44:36.445267       1 controller.go:647] leaderelection lost
goroutine 1 [running]:
github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog.stacks(0xc00046c300, 0xc000694500, 0x45, 0xf1)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog/glog.go:766 +0xb1
github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog.(*loggingT).output(0x2009ca0, 0xc000000003, 0xc0002269a0, 0x1f910d1, 0xd, 0x287, 0x0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog/glog.go:717 +0x303
github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog.(*loggingT).printf(0x2009ca0, 0x3, 0x12e0f79, 0x13, 0x0, 0x0, 0x0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog/glog.go:655 +0x14e
github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog.Fatalf(...)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog/glog.go:1145
github.com/rancher/local-path-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller.(*ProvisionController).Run.func2()
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:647 +0x5c
github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run.func1(0xc0000ce0c0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:148 +0x40
github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run(0xc0000ce0c0, 0x14c7de0, 0xc000171bc0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:157 +0x10f
github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection.RunOrDie(0x14c7e20, 0xc000046060, 0x14d4160, 0xc0000dcc60, 0x37e11d600, 0x2540be400, 0x77359400, 0xc00007f7c0, 0x1358910, 0x0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:166 +0x87
github.com/rancher/local-path-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller.(*ProvisionController).Run(0xc0000c2b60, 0xc00018e600)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:639 +0x36f
main.startDaemon(0xc00032db80, 0x5, 0x4)
	/go/src/github.com/rancher/local-path-provisioner/main.go:134 +0x793
main.StartCmd.func1(0xc00032db80)
	/go/src/github.com/rancher/local-path-provisioner/main.go:80 +0x2f
github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli.HandleAction(0x10fb200, 0x1359218, 0xc00032db80, 0xc00037e600, 0x0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli/app.go:487 +0x7c
github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli.Command.Run(0x12d6eca, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli/command.go:193 +0x925
github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli.(*App).Run(0xc00016d6c0, 0xc00003a0c0, 0x4, 0x4, 0x0, 0x0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli/app.go:250 +0x785
main.main()
	/go/src/github.com/rancher/local-path-provisioner/main.go:166 +0x2b

Provisioner doesn't work if pods have tolerations/nodeSelector

Hey, thanks for sharing provisioner. It's really cool.

Steps to reproduce:

  1. Setup some kubernetes nodes with taints and mark them with some label
  2. Give pods tolerations and a nodeSelector, so the pods will be scheduled only to the tainted nodes:

For example:

nodeSelector:
  node-role.kubernetes.io/sre: ""

tolerations:
  - key: "node-role.kubernetes.io/sre"
    operator: "Equal"
    effect: "NoSchedule"
  - key: "node-role.kubernetes.io/sre"
    operator: "Equal"
    effect: "NoExecute"

The provisioner doesn't work in this case because it uses helper pods to create the directories, and the helper pods can't be scheduled on the target node because of the node taints. To make this example work, the helper pod should copy the tolerations from the pod claiming the storage.

Advice Required

Our setup: the worker nodes share an NFS mount (fetched from LDAP), e.g. a folder like /nfsdata. There is already a huge amount of data, in the range of petabytes, in this storage.

Is local-path-provisioner designed for such a use case? I assume not, as it only supports ReadWriteOnce mode and node affinity is set. Do you think there is another module that can work for our purpose?

Provisioner does not follow the pod's node affinity

I have a Kubernetes cluster of two nodes and have deployed the local path provisioner on it.
I deploy a MariaDB with node affinity for node A, with a PV claim using the local path provisioner, but the provisioner wants to provision the PVC on node B instead of node A where the pod is launched (the pod has node A affinity), and it fails with a timeout waiting for the PVC.

How can I force the local path provisioner to follow the pod affinity?

Not working for subPath

Thanks for providing this provisioner. I am setting up MySQL HA using the Kubernetes StatefulSet example.

I have tried this provisioner; it creates the dynamic PV and mounts it, but not in the configured DEFAULT directory. Instead it shows up under /var/lib/kubelet/pods/82d24112-fc50-11e8-90e7-005056b146f6/volume-subpaths.

I found that this happens only if I use subPath; otherwise it works well.

Any idea, how can I resolve this issue?

Should the controller run as a StatefulSet instead of a Deployment?

Citing from here:

https://arslan.io/2018/06/21/how-to-write-a-container-storage-interface-csi-plugin/

For the Controller plugin, we could deploy it as a StatefulSet. Most people associate StatefulSet with a persistent storage. But it’s more powerful. A StatefulSet also comes with scaling guarantees. This means we could use the Controller plugin as a StatefulSet and set the replicas field to 1. This would give us a stable scaling, that means that if the pod dies or we do an update, it never creates a second pod before the first one is fully shutdown and terminated. This is very important, because this guarantees that only a single copy of the Controller plugin will run.

What do you think @yasker ?
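For reference, a minimal sketch of that idea: the controller as a StatefulSet with replicas: 1, so at most one copy exists across restarts and updates. This is illustrative only, not the project's actual manifest; the ConfigMap name and the headless Service implied by serviceName are assumptions, while the image and ServiceAccount names are taken from the deployment shown elsewhere on this page:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  serviceName: local-path-provisioner   # assumes a headless Service of this name
  replicas: 1                           # at-most-one guarantee during restarts/updates
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
        - name: local-path-provisioner
          image: rancher/local-path-provisioner:v0.0.12
          command: ["local-path-provisioner", "start", "--config", "/etc/config/config.json"]
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config/
      volumes:
        - name: config-volume
          configMap:
            name: local-path-config     # assumed ConfigMap name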

Provisioner falls over after etcd timeout

I start a helm chart on a kind installation using rancher. The chart provisions PVs for about 6 containers. This is running on pretty weak hardware, a circleci machine runner.

The chart doesn't come up. It's waiting for PV claims to be fulfilled (they're stuck in Pending). The pods in question, when they start, say that the pod to create the volume already exists [1], which it does. And the path for the volume has been created. But pod creation hangs forever.

Turns out a request to etcd timed out [2] and this took down the provisioner. When it came back, it didn't recover the operations that were in progress.

[1] Pod describe:

 Normal     Provisioning          52s (x5 over 4m39s)    rancher.io/local-path_local-path-provisioner-69fc9568b9-vmmhj_2f07fa18-8215-11e9-87ef-420caaa87133  External provisioner is provisioning volume for claim "test-6505f1a5/datadir-test-6505f1a5-zookeeper-0"
  Warning    ProvisioningFailed    52s (x5 over 4m38s)    rancher.io/local-path_local-path-provisioner-69fc9568b9-vmmhj_2f07fa18-8215-11e9-87ef-420caaa87133  failed to provision volume with StorageClass "local-path": failed to create volume pvc-ee4d2be8-8214-11e9-9d5f-0242ac110002: pods "create-pvc-ee4d2be8-8214-11e9-9d5f-0242ac110002" already exists

[2]

circleci@default-306a0f9a-e068-4f47-bf37-c13d6eb031f2:~$ kubectl -n local-path-storage logs -p local-path-provisioner-69fc9568b9-vmmhj
time="2019-05-29T13:21:13Z" level=debug msg="Applied config: {\"nodePathMap\":[{\"node\":\"DEFAULT_PATH_FOR_NON_LISTED_NODES\",\"paths\":[\"/opt/local-path-provisioner\"]}]}" 
time="2019-05-29T13:21:13Z" level=debug msg="Provisioner started" 
time="2019-05-29T13:23:21Z" level=debug msg="config doesn't contain node kind-worker, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead" 
time="2019-05-29T13:23:21Z" level=info msg="Creating volume pvc-ee13ad7e-8214-11e9-9d5f-0242ac110002 at kind-worker:/opt/local-path-provisioner/pvc-ee13ad7e-8214-11e9-9d5f-0242ac110002" 
time="2019-05-29T13:23:21Z" level=debug msg="config doesn't contain node kind-worker2, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead" 
time="2019-05-29T13:23:21Z" level=info msg="Creating volume pvc-ee186e49-8214-11e9-9d5f-0242ac110002 at kind-worker2:/opt/local-path-provisioner/pvc-ee186e49-8214-11e9-9d5f-0242ac110002" 
time="2019-05-29T13:23:22Z" level=debug msg="config doesn't contain node kind-worker, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead" 
time="2019-05-29T13:23:22Z" level=info msg="Creating volume pvc-ee331af8-8214-11e9-9d5f-0242ac110002 at kind-worker:/opt/local-path-provisioner/pvc-ee331af8-8214-11e9-9d5f-0242ac110002" 
time="2019-05-29T13:23:23Z" level=debug msg="config doesn't contain node kind-worker2, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead" 
time="2019-05-29T13:23:23Z" level=info msg="Creating volume pvc-ee4d2be8-8214-11e9-9d5f-0242ac110002 at kind-worker2:/opt/local-path-provisioner/pvc-ee4d2be8-8214-11e9-9d5f-0242ac110002" 
E0529 13:23:46.181575       1 leaderelection.go:286] Failed to update lock: etcdserver: request timed out
E0529 13:23:46.809377       1 event.go:259] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Subsets:[]v1.EndpointSubset(nil)}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Normal' 'LeaderElection' 'local-path-provisioner-69fc9568b9-vmmhj_a1902f4d-8214-11e9-a599-420caaa87133 stopped leading'
F0529 13:23:50.398931       1 controller.go:647] leaderelection lost
goroutine 1 [running]:
github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog.stacks(0xc0003ea500, 0xc000698000, 0x45, 0xb4)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog/glog.go:766 +0xb1
github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog.(*loggingT).output(0x2009ca0, 0xc000000003, 0xc000148930, 0x1f910d1, 0xd, 0x287, 0x0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog/glog.go:717 +0x303
github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog.(*loggingT).printf(0x2009ca0, 0x3, 0x12e0f79, 0x13, 0x0, 0x0, 0x0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog/glog.go:655 +0x14e
github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog.Fatalf(...)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/golang/glog/glog.go:1145
github.com/rancher/local-path-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller.(*ProvisionController).Run.func2()
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:647 +0x5c
github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run.func1(0xc000362540)
	/go/src/github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:148 +0x40
github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run(0xc000362540, 0x14c7de0, 0xc000079c00)
	/go/src/github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:157 +0x10f
github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection.RunOrDie(0x14c7e20, 0xc000044040, 0x14d4160, 0xc000317200, 0x37e11d600, 0x2540be400, 0x77359400, 0xc000370900, 0x1358910, 0x0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:166 +0x87
github.com/rancher/local-path-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller.(*ProvisionController).Run(0xc0002fc4e0, 0xc0000bc7e0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:639 +0x36f
main.startDaemon(0xc000314640, 0x5, 0x4)
	/go/src/github.com/rancher/local-path-provisioner/main.go:134 +0x793
main.StartCmd.func1(0xc000314640)
	/go/src/github.com/rancher/local-path-provisioner/main.go:80 +0x2f
github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli.HandleAction(0x10fb200, 0x1359218, 0xc000314640, 0xc000300d00, 0x0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli/app.go:487 +0x7c
github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli.Command.Run(0x12d6eca, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli/command.go:193 +0x925
github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli.(*App).Run(0xc0002fc340, 0xc00003a050, 0x5, 0x5, 0x0, 0x0)
	/go/src/github.com/rancher/local-path-provisioner/vendor/github.com/urfave/cli/app.go:250 +0x785
main.main()
	/go/src/github.com/rancher/local-path-provisioner/main.go:166 +0x2ba

Default Node Affinity doesn't match on node created by AWS (provider Amazon EC2)

Basic Info:
Rancher Version: 2.3.2
Kubernetes Version: v1.16.3-rancher1-1
Provider: Amazon EC2

Brief Description:
A pod using a PVC created by this module cannot be scheduled because the auto-generated PV's node affinity rule kubernetes.io/hostname = {{someNodeName}} doesn't match the actual value on the node.

How to reproduce

  • Spin up a cluster with provider is Amazon EC2
  • Install this module and try to run the example
  • Pod volume-test cannot be scheduled with this message:
0/4 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 3 node(s) had volume node affinity conflict.

More description
The Persistent Volume created by this module has a kubernetes.io/hostname value equal to the Private DNS of the instance it was provisioned for. But the node's kubernetes.io/hostname label actually holds the instance's name, and it cannot be edited.

Screenshots (not reproduced here): the AWS instance board, the node's labels, and the node affinity section of the PV.

Expand pvc

didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.

create process timeout after 120 seconds

I followed the tutorial without changing a single thing...

Got this in the logs:
create process timeout after 120 seconds

And it keeps trying and trying to create a pvc volume...

I can also see that no PV is created...

Do I need to create the folder /opt/local-path-provisioner manually? (even doing that it doesn't work)

Did anyone have the same issue?

Thanks!

High CPU usage

local-path-provisioner is one of the most CPU consuming processes in my cluster:

[screenshot: per-pod CPU usage]

It seemingly ate 20 minutes of CPU time in just 22 hours:

local-path-storage   local-path-provisioner-5fbd477b57-mpg4s    1/1     Running     5          22h

During that time, it only created 3 PV's:

❯ k logs -n local-path-storage local-path-provisioner-5fbd477b57-mpg4s
time="2019-04-22T03:00:46Z" level=debug msg="Applied config: {\"nodePathMap\":[{\"node\":\"DEFAULT_PATH_FOR_NON_LISTED_NODES\",\"paths\":[\"/opt/local-path-provisioner\"]}]}"
time="2019-04-22T03:00:46Z" level=debug msg="Provisioner started"
time="2019-04-22T05:13:19Z" level=debug msg="config doesn't contain node kube0, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead"
time="2019-04-22T05:13:19Z" level=info msg="Creating volume pvc-5782375f-64bd-11e9-a240-525400a0c459 at kube0:/opt/local-path-provisioner/pvc-5782375f-64bd-11e9-a240-525400a0c459"
time="2019-04-22T05:13:25Z" level=info msg="Volume pvc-5782375f-64bd-11e9-a240-525400a0c459 has been created on kube0:/opt/local-path-provisioner/pvc-5782375f-64bd-11e9-a240-525400a0c459"
time="2019-04-22T10:37:15Z" level=info msg="Deleting volume pvc-5782375f-64bd-11e9-a240-525400a0c459 at kube0:/opt/local-path-provisioner/pvc-5782375f-64bd-11e9-a240-525400a0c459"
time="2019-04-22T10:37:19Z" level=info msg="Volume pvc-5782375f-64bd-11e9-a240-525400a0c459 has been deleted on kube0:/opt/local-path-provisioner/pvc-5782375f-64bd-11e9-a240-525400a0c459"
time="2019-04-22T10:38:20Z" level=debug msg="config doesn't contain node kube0, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead"
time="2019-04-22T10:38:20Z" level=info msg="Creating volume pvc-bee28903-64ea-11e9-a240-525400a0c459 at kube0:/opt/local-path-provisioner/pvc-bee28903-64ea-11e9-a240-525400a0c459"
time="2019-04-22T10:38:24Z" level=info msg="Volume pvc-bee28903-64ea-11e9-a240-525400a0c459 has been created on kube0:/opt/local-path-provisioner/pvc-bee28903-64ea-11e9-a240-525400a0c459"
time="2019-04-22T11:25:29Z" level=info msg="Deleting volume pvc-bee28903-64ea-11e9-a240-525400a0c459 at kube0:/opt/local-path-provisioner/pvc-bee28903-64ea-11e9-a240-525400a0c459"
time="2019-04-22T11:25:33Z" level=info msg="Volume pvc-bee28903-64ea-11e9-a240-525400a0c459 has been deleted on kube0:/opt/local-path-provisioner/pvc-bee28903-64ea-11e9-a240-525400a0c459"
time="2019-04-22T11:26:18Z" level=debug msg="config doesn't contain node kube0, use DEFAULT_PATH_FOR_NON_LISTED_NODES instead"
time="2019-04-22T11:26:18Z" level=info msg="Creating volume pvc-72a9e446-64f1-11e9-a240-525400a0c459 at kube0:/opt/local-path-provisioner/pvc-72a9e446-64f1-11e9-a240-525400a0c459"
time="2019-04-22T11:26:23Z" level=info msg="Volume pvc-72a9e446-64f1-11e9-a240-525400a0c459 has been created on kube0:/opt/local-path-provisioner/pvc-72a9e446-64f1-11e9-a240-525400a0c459"

Any ideas what's the matter? Is it doing some kind of suboptimal high-frequency polling? It bugs me that my brand new, almost empty single-node Kubernetes setup already has a load average of 0.6 without any load coming towards it.

It was deployed with:

kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

without any modifications to the manifest.

Permission denied when try to create a pvc

I am following the steps to install/configure local-path-provisioner.

I have a local-path-provisioner running.

I have created a PVC and its status is Pending (waiting for the pod).

I am trying to create an nginx pod like the example, and I am facing this problem:

When I check the create-pvc-33e6692e.... pod I see this error:
mkdir: can't create directory '/data/pvc-33e6692e-a32d-11e9-85ac-42010a8e0036': Permission denied

My local path is already owned by root:root with 777 permissions.

Can anyone help me?

Btrfs subvolume

It would be great if the provisioner could create a btrfs subvolume with quota for each PVC.

This would also give us snapshot and backup functionality using btrfs tools.

multiple local-path-provisioner for different host volumes

Question:
A single local-path-provisioner is working fine for me.
I have two different host volumes on my nodes (fast local SSD on /mnt/sdd, slow GlusterFS on /mnt/gvol).
Might it be possible to set up two separate storage classes, each with its own local-path-provisioner, or is this out of scope?
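It is not an officially documented pattern as far as I know, but in principle you can run two provisioner Deployments, each with its own PROVISIONER_NAME (see the aws-ebs mock example further down this page) and its own config pointing at a different path, and expose them through two StorageClasses. A sketch with made-up class and provisioner names:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path-ssd
provisioner: rancher.io/local-path-ssd       # instance configured with /mnt/sdd
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path-gluster
provisioner: rancher.io/local-path-gluster   # instance configured with /mnt/gvol
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer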

automatically create pv for pending pvc

I tried to run this command, but the PV is not automatically created for the pending PVC like on GKE.
I deployed local-path-provisioner with its Helm chart:
helm install --name=local-path-provisioner ./deploy/chart/

Thanks in advance.

helm install --name=redis stable/redis

kubectl get pvc

NAME                                STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-redis-core-redis-ha-server-0   Pending                                                     2m30s

kubectl describe sc local-path

Name:                  local-path
IsDefaultClass:        No
Annotations:           <none>
Provisioner:           cluster.local/local-path-provisioner
Parameters:            <none>
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                <none>

kubectl get pod

local-path-provisioner-f856c74cf-jqv8v   1/1     Running   0          10m
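Two things stand out in the output above, though this is only a guess from the pasted output: the class is not marked as default (IsDefaultClass: No), so PVCs created by a chart without an explicit storageClassName will not use it, and VolumeBindingMode: WaitForFirstConsumer keeps a PVC Pending until a pod that mounts it is actually scheduled, which is expected rather than an error. A minimal PVC that targets the class explicitly (the claim name is illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data                 # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path       # target the class explicitly
  resources:
    requests:
      storage: 1Gi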

securityContext fsGroup has no effect

My setup:

kind version
v0.5.1
kind create cluster
export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

Test 1: emptyDir

cat << EOF | k apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: security-context-works
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false
EOF

it works (2000):

kubectl exec -it security-context-works -- ls -la /data/
total 12
drwxr-xr-x    3 root     root          4096 Aug 28 12:12 .
drwxr-xr-x    1 root     root          4096 Aug 28 12:12 ..
drwxrwsrwx    2 root     2000          4096 Aug 28 12:12 demo

Test 2: rancher.io/local-path

cat << EOF | k apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
EOF
cat << EOF | k apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: security-context-fails
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
  - name: data-test
    persistentVolumeClaim:
      claimName: data-test
  containers:
  - name: sec-ctx-demo
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - name: data-test
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false
EOF

It fails (root):

kubectl exec -it security-context-fails -- ls -la /data/
total 12
drwxr-xr-x    3 root     root          4096 Aug 28 12:20 .
drwxr-xr-x    1 root     root          4096 Aug 28 12:20 ..
drwxrwxrwx    2 root     root          4096 Aug 28 12:20 demo

Any idea what is causing that? I was expecting the group to be 2000 for the demo directory.

support PVC selector field

I'm trying to create a PVC to use a PV that's been released. I get this error when trying to use the selector field of the PVC to match the label foo=bar on my PV:

Warning  ProvisioningFailed    13s
rancher.io/local-path_local-path-provisioner-ccbdd96dc-s87rg_5ced8b65-eec7-11e9-bd79-ca353f592fca  failed to provision volume with StorageClass "local-path": claim.Spec.Selector is not supported

Customizing Provisioner Name (e.g. to mock kubernetes.io/aws-ebs plugin)

Hello -
We have a use case where we would like to mimic the existence of AWS volume plugin on k3s, by using something like local storage path.

I noticed in the source code that the provisioner name can be overridden via PROVISIONER_NAME.

I've deployed the following local-path-provisioner Deployment on k3s, and it seems to start up successfully; the pod logs show the correct name as well.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: aws-ebs-mock-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aws-ebs-mock-provisioner
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: aws-ebs-mock-provisioner
    spec:
      containers:
      - command:
        - local-path-provisioner
        - start
        - --config
        - /etc/config/config.json
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: PROVISIONER_NAME
          value: kubernetes.io/aws-ebs
        image: rancher/local-path-provisioner:v0.0.12
        imagePullPolicy: IfNotPresent
        name: local-path-provisioner
        volumeMounts:
        - mountPath: /etc/config/
          name: config-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      serviceAccount: local-path-provisioner-service-account
      serviceAccountName: local-path-provisioner-service-account
      volumes:
      - configMap:
          defaultMode: 420
          name: ebs-local-path-config
        name: config-volume

However, k3s doesn't seem to recognize the plugin. PVCs are still showing the following event:

  Warning  ProvisioningFailed    104s (x182 over 46m)  persistentvolume-controller  no volume plugin matched

I am not sure if my understanding of the PROVISIONER_NAME setting is correct, or if this is a bug, or whether I need to take additional steps to register the volume plugin with k3s under the name kubernetes.io/aws-ebs.

I would greatly appreciate any pointers!

Thank you

Feature request: create/delete/resize_cmd support

In order to have a reliable setup, it does not make sense to have one big volume with a lot of PVCs as folders, because any single PVC could fill the whole disk.

Therefore I would like to have some additional options like this:


        {
                "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
                "paths":["/opt/local-path-provisioner"],
                "create_cmd":"btrfs subvolume create ${PATH}",
                "delete_cmd":"btrfs subvolume delete ${PATH}"
        },
        {
                "node":"yasker-lp-dev1",
                "paths":["/opt/local-path-provisioner", "/data1"],
                "create_cmd":"lvm create blabla ${PATH}",
                "delete_cmd":"lvm delete blabla ${PATH}",
                "resize_cmd":"lvm resize",
                "available_cmd":"check_lvm_space.sh ${SIZE}"
        },
        {
                "node":"yasker-lp-dev3",
                "paths":[],
                "create_cmd":null,
                "delete_cmd":null
        }

It would be great to have examples for btrfs and LVM, which are the most commonly used. With variables for the PVC name, storage size and so on, we could also ensure that no single volume would max out a single node.

This solution is very flexible and would also support any other filesystem like ZFS.

provisioner doesn't like when nodes go away, VolumeFailedDelete

I'm running on some bare metal servers, and if one of them goes away (effectively permanently), PVs and PVCs don't get reaped, so the pods (created as a statefulset) can't recover.

This may be okay. Let me know if you'd like a reproducible example, or if it's a conceptual thing.

Here's an example I just ran across. It's an STS of Elasticsearch pods. Having persistent data is great, but if a server goes away, the pod just sits in purgatory.

$ kubectl describe pod esdata-4 -n kube-logging
...
Status:               Pending
...
    Mounts:
      /var/lib/elasticsearch from esdata-data (rw,path="esdata_data")
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  esdata-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  esdata-data-esdata-4
    ReadOnly:   false
...
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  3m43s (x45 over 39m)  default-scheduler  0/5 nodes are available: 5 node(s) had volume node affinity conflict.


$ kubectl describe pv pvc-3628fa90-9e11-11e9-83ca-d4bed9ad776a
...
Annotations:       pv.kubernetes.io/provisioned-by: rancher.io/local-path
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      local-path
Status:            Released
Claim:             kube-logging/esdata-data-esdata-4
Reclaim Policy:    Delete
Node Affinity:     
  Required Terms:  
    Term 0:        kubernetes.io/hostname in [DEADHOST]
...
Events:
  Type     Reason              Age                  From                                                                                               Message
  ----     ------              ----                 ----                                                                                               -------
  Warning  VolumeFailedDelete  37s (x4 over 2m57s)  rancher.io/local-path_local-path-provisioner-f7986dc46-cg8nl_7ff46af3-9e4f-11e9-a883-fa15f9dfdfe0  failed to delete volume pvc-3628fa90-9e11-11e9-83ca-d4bed9ad776a: failed to delete volume pvc-3628fa90-9e11-11e9-83ca-d4bed9ad776a: pods "delete-pvc-3628fa90-9e11-11e9-83ca-d4bed9ad776a" not found

I can delete it manually, just kubectl delete pv [pvid]. I then have to create the pv and pvc, also manually, before the pod is happy. I assumed there'd be a timeout reaping PVCs from dead nodes.

cc @tamsky in case he's come across this, as I see he's been around this repo.
