freenas-provisioner's People

Contributors

cruwe, davidasnider, davidwin93, elegant996, nmaupu, rkojedzinszky, travisghansen

freenas-provisioner's Issues

Problem Mounting PVC

When I try to use the test claim and test pod, I get this error:

MountVolume.SetUp failed for volume "pvc-c07c28ce-c985-4301-aac3-fe740acdb909" : mount failed: exit status 32 Mounting command: systemd-run Mounting arguments: --description=Kubernetes transient mount for /var/snap/microk8s/common/var/lib/kubelet/pods/e39c738f-9f84-4373-b01f-f9230bb731f5/volumes/kubernetes.io~nfs/pvc-c07c28ce-c985-4301-aac3-fe740acdb909 --scope -- mount -t nfs 192.168.1.222:/mnt/WD/storage/default/freenas-test-pvc /var/snap/microk8s/common/var/lib/kubelet/pods/e39c738f-9f84-4373-b01f-f9230bb731f5/volumes/kubernetes.io~nfs/pvc-c07c28ce-c985-4301-aac3-fe740acdb909 Output: Running scope as unit: run-r696e73d75a2e44f19756b9353ec38ea0.scope mount: /var/snap/microk8s/common/var/lib/kubelet/pods/e39c738f-9f84-4373-b01f-f9230bb731f5/volumes/kubernetes.io~nfs/pvc-c07c28ce-c985-4301-aac3-fe740acdb909: bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.

I am using microk8s on Ubuntu 18.04 LTS and FreeNAS-11.3-U3.2. The PV, the StorageClass, and the PVC seem to work (I see the new dataset on FreeNAS).

Should I specify something special for the mount options?
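
For what it's worth, the "/sbin/mount.<type> helper program" part of that error usually means the NFS client utilities (mount.nfs, the nfs-common package on Ubuntu) are missing on the node, rather than anything being wrong with the claim itself. If mount options do turn out to be needed, they belong on the StorageClass; a minimal sketch, assuming the class name from the sample manifests and an illustrative parent dataset:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: freenas-nfs
provisioner: freenas.org/nfs
mountOptions:
  - vers=3                          # illustrative NFS option; anything listed here is passed to the mount
parameters:
  datasetParentName: "WD/storage"   # illustrative; matches the path seen in the error above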

support api details in config file

I'm interested in allowing FreeNAS API details (host, protocol, username, password, etc.) to be stored in some sort of config file outside the cluster. The reasoning is that we have some pretty open clusters that I'd like to be able to deploy the storage class in without having to lock things down too much. The idea would be:

  1. run the provisioner on FreeNAS itself (or generally out of the cluster)
  2. support secrets out of cluster by passing a --secrets-file flag or similar

The structure of the secrets file would be something like the following, directly mirroring the in-cluster secrets:

---
<storage-class-name>:
  protocol:
  host:
  port:
  username:
  password:
  allowInsecure:
---
<storage-class-name>:
  protocol:
  ...

Interested in feedback on the idea.
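
A hypothetical filled-in entry, purely for illustration (the storage class name and all values here are made up):

---
freenas-nfs:
  protocol: http
  host: freenas.example.com
  port: 80
  username: root
  password: changeme
  allowInsecure: true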

scope of provisioner instance

I've been thinking about the scope of a single instance as it relates to storage classes. I effectively see 3 potential options:

  1. 1 provisioner instance to 1 server and 1 storage class (this is how it's currently structured)
  2. 1 provisioner instance to 1 server and 1+ storage classes
  3. 1 provisioner instance to 1+ servers and 1+ classes

I can't seem to wrap my brain around what the scope of the built-in provisioners in Kubernetes is. I see examples that lead me to believe there is no definitive answer, but I'm certainly seeing indications of provisioners that do #2 and #3. I'd like to retool this provisioner to support at least #2, and maybe #3.

The general idea for #2 would be to keep the general structure we have but to add FREENAS_POOL, FREENAS_MOUNTPOINT, and FREENAS_PARENT_DATASET (which I believe can all be unified into a single config value) to the per-class configuration (likely as a secret vs a parameter).

The general idea for #3 would be to only set the ID (if that's even needed) via an environment variable, everything else (items mentioned above + server details like creds, uri, etc) would go into the per-class configuration (again, likely as a secret).

I think approach #3 provides the greatest long-term flexibility. You could, of course, still provision multiple instances, but it would all but eliminate the need to.
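
A rough sketch of what the per-class configuration for #3 could look like; datasetParentName and the provisioner name already exist today, while secretName/secretNamespace are hypothetical keys for the per-class secret reference:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: freenas-nfs-primary
provisioner: freenas.org/nfs
parameters:
  datasetParentName: "tank/k8s/primary"   # pool, mountpoint, and parent dataset unified into one value
  secretName: "freenas-nfs-primary"       # hypothetical: per-class secret carrying creds and server details
  secretNamespace: "kube-system"          # hypothetical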

invalid memory address or nil pointer dereference

I recently upgraded Kubernetes to v1.23.4+k3s1. Now, when a new volume request comes in, I receive this error:

E0520 19:18:14.718931       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 145 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x15ce420, 0x22c5770)
	/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:48 +0x89
panic(0x15ce420, 0x22c5770)
	/home/travis/.gimme/versions/go1.15.12.linux.amd64/src/runtime/panic.go:969 +0x1b9
github.com/nmaupu/freenas-provisioner/provisioner.(*freenasProvisioner).Provision(0xc00000c4c0, 0x1983940, 0xc00011bc80, 0xc000678000, 0xc000042240, 0x28, 0xc00081a000, 0x0, 0x5d, 0x1, ...)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/provisioner/provisioner.go:245 +0x43
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).provisionClaimOperation(0xc0001b8f00, 0x1983940, 0xc00011bc80, 0xc00081a000, 0xc000042101, 0x0, 0x0, 0x0)
	/home/travis/gopath/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:1404 +0xd2e
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).syncClaim(0xc0001b8f00, 0x1983940, 0xc00011bc80, 0x1771fc0, 0xc00081a000, 0xc0004e4070, 0x1)
	/home/travis/gopath/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:1090 +0x10e
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).syncClaimHandler(0xc0001b8f00, 0x1983940, 0xc00011bc80, 0xc00076c120, 0x24, 0x10, 0x10)
	/home/travis/gopath/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:1058 +0xca
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextClaimWorkItem.func1(0xc0001b8f00, 0xc00054c618, 0x1567fe0, 0xc0004e4060, 0x0, 0x0)
	/home/travis/gopath/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:959 +0x125
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextClaimWorkItem(0xc0001b8f00, 0x1983940, 0xc00011bc80, 0x0)
	/home/travis/gopath/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:981 +0x71
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runClaimWorker(0xc0001b8f00, 0x1983940, 0xc00011bc80)
	/home/travis/gopath/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:927 +0x3f
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.2()
	/home/travis/gopath/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:883 +0x3c
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00033c0a0)
	/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00033c0a0, 0x1947f00, 0xc000542030, 0x1, 0xc00053c480)
	/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xad
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00033c0a0, 0x3b9aca00, 0x0, 0x1, 0xc00053c480)
	/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc00033c0a0, 0x3b9aca00, 0xc00053c480)
	/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90 +0x4d
created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
	/home/travis/gopath/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:883 +0x4af
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x145a9e3]
goroutine 145 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:55 +0x10c
panic(0x15ce420, 0x22c5770)
	/home/travis/.gimme/versions/go1.15.12.linux.amd64/src/runtime/panic.go:969 +0x1b9
github.com/nmaupu/freenas-provisioner/provisioner.(*freenasProvisioner).Provision(0xc00000c4c0, 0x1983940, 0xc00011bc80, 0xc000678000, 0xc000042240, 0x28, 0xc00081a000, 0x0, 0x5d, 0x1, ...)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/provisioner/provisioner.go:245 +0x43
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).provisionClaimOperation(0xc0001b8f00, 0x1983940, 0xc00011bc80, 0xc00081a000, 0xc000042101, 0x0, 0x0, 0x0)
	/home/travis/gopath/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:1404 +0xd2e
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).syncClaim(0xc0001b8f00, 0x1983940, 0xc00011bc80, 0x1771fc0, 0xc00081a000, 0xc0004e4070, 0x1)
	/home/travis/gopath/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:1090 +0x10e
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).syncClaimHandler(0xc0001b8f00, 0x1983940, 0xc00011bc80, 0xc00076c120, 0x24, 0x10, 0x10)
	/home/travis/gopath/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:1058 +0xca
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextClaimWorkItem.func1(0xc0001b8f00, 0xc00054c618, 0x1567fe0, 0xc0004e4060, 0x0, 0x0)
	/home/travis/gopath/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:959 +0x125
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextClaimWorkItem(0xc0001b8f00, 0x1983940, 0xc00011bc80, 0x0)
	/home/travis/gopath/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:981 +0x71
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runClaimWorker(0xc0001b8f00, 0x1983940, 0xc00011bc80)
	/home/travis/gopath/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:927 +0x3f
sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.2()
	/home/travis/gopath/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:883 +0x3c
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00033c0a0)
	/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00033c0a0, 0x1947f00, 0xc000542030, 0x1, 0xc00053c480)
	/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156 +0xad
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00033c0a0, 0x3b9aca00, 0x0, 0x1, 0xc00053c480)
	/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc00033c0a0, 0x3b9aca00, 0xc00053c480)
	/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:90 +0x4d
created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
	/home/travis/gopath/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/[email protected]/controller/controller.go:883 +0x4af

Environment:

  • Kubernetes: v1.23.4+k3s1
  • FreeNAS: 11.3-U5
  • freenas-provisioner: v2.7

A shame as this has been working great for a number of years.

deployment.yaml not automatically updated

If you look at the image being pulled, it's currently 2.4, while the last release was 2.6. I suggest either setting up an automatic update for it, or setting it to :latest and adding that tag.
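
For reference, pinning deployment.yaml to the release tag is a one-line change in the container spec (assuming the 2.6 tag is published to Docker Hub):

      containers:
        - name: freenas-nfs-provisioner
          image: docker.io/nmaupu/freenas-provisioner:2.6   # was :2.4; bump (or automate) on each release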

Thanks for putting this together! I appreciate it.

Fix: This option can only be used for datasets.

I'm wondering if a race condition is causing this. I've seen it a few times, and I suspect what's happening is that the file-mode call somehow creates a directory at the same path as the child dataset before the dataset is exported. I'm not sure of the best way to resolve it, but maybe it can be mitigated with a crude sleep before the chmod API call triggers.

Error creating NFS share for {Id:0 Alldirs:true Comment:freenas-provisioner (freenas-nfs-provisioner): tank/k8s/primary/r-39-trax-5212-equipment-to-recipient-bug-fix/standard Hosts: MapallUser: MapallGroup: MaprootUser:root MaprootGroup:wheel Network: Paths:[/mnt/tank/k8s/primary/r-39-trax-5212-equipment-to-recipient-bug-fix/standard] Security:[] Quiet:false ReadOnly:false} - {"nfs_alldirs":["This option can only be used for datasets."]}

provide force delete patches for FreeNAS API v1

After running in a semi-production setup for a while, things have been very smooth. We did, however, bump into some situations where stale NFS file handles left the dataset busy and made the delete API request fail.

I created a crude patch to unconditionally force-delete with the v1 API, and FreeNAS has implemented this in v2; however, requiring v2 rules out quite a few installations. We may want to consider documenting the patches for older installations.

https://redmine.ixsystems.com/issues/53039

fails creation of dataset

What version of FreeNAS are you on? I'm currently running 11.1-U1, and attempting to POST to create a child dataset fails with a 400 error (which doesn't print to the console, by the way) and the following body:

{"name": ["Dataset names must begin with an alphanumeric character and may only contain \"-\", \"_\", \":\", \" \" and \".\"."]}

It seemingly doesn't like the fact that the name has a / in it. Not sure how this is working for you unless something differs between server versions.

Broken in FreeNAS 11.3-RELEASE

It seems FreeNAS 11.3-RELEASE changed capacity values to require IEC suffixes. New PVCs yield this error:

I0212 04:58:09.995962       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"test", Name:"test-config", UID:"9f22bf96-256f-405f-adc4-9f6bb5ca373f", APIVersion:"v1", ResourceVersion:"1431038", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "freenas-nfs": Error creating dataset "tank/Kubernetes/test/test-config" - message: {"refquota":["Specify the value with IEC suffixes, e.g. 10 GiB"]}, status: 400

This happens whether spec.resources.requests.storage is set to nG or nGi.

Does this need to suffix the value with B?

Failing to authenticate to FreeNAS

I have a new FreeNAS 11.3-U3.2 setup with provisioner 2.6 installed, and I haven't been able to get any NFS volumes provisioned.

I went into the provisioner and verified that my freenas.local box does resolve properly, so I dug in with tcpdump. curl works fine, but the provisioner does not. Here's the curl I used from inside the provisioner's bash shell, along with the tcpdump capture of it:

curl --user root:mypassword http://freenas.local/api/v1.0/storage/dataset/metalgods/k8s/
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
17:30:52.752752 IP freenas-nfs-provisioner-cffbb8f44-dpvp7.57348 > freenas.local.80: Flags [P.], seq 487283753:487283908, ack 2211867418, win 219, options [nop,nop,TS val 1957179417 ecr 4132469310], length 155: HTTP: GET /api/v1.0/storage/dataset/metalgods/k8s/ HTTP/1.1
E....~@.@...
bp.
......P..\)..k......u.....
t.0..Pz>GET /api/v1.0/storage/dataset/metalgods/k8s/ HTTP/1.1
Host: freenas.local
Authorization: Basic cm9vdDpteXBhc3N3b3JkCg==
User-Agent: curl/7.52.1
Accept: */*


17:30:52.809679 IP freenas.local.80 > freenas-nfs-provisioner-cffbb8f44-dpvp7.57348: Flags [P.], seq 1:809, ack 155, win 1028, options [nop,nop,TS val 4132469366 ecr 1957179417], length 808: HTTP: HTTP/1.1 200 OK
E..\..@.?...
...
bp..P....k...\............
.Pzvt.0.HTTP/1.1 200 OK
Server: nginx
Date: Tue, 23 Jun 2020 17:30:52 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept, Accept-Language, Cookie
Cache-Control: no-cache
Content-Language: en
Strict-Transport-Security: max-age=0
X-Content-Type-Options: nosniff
X-XSS-Protection: 1

1cf
{"atime": "on", "avail": 1529620118136, "comments": "Used by the cluster as remote PV storage", "compression": "lz4", "dedup": "off", "exec": "on", "inherit_props": ["compression", "aclinherit", "org.freebsd.ioc:active"], "mountpoint": "/mnt/metalgods/k8s", "name": "metalgods/k8s", "pool": "metalgods", "quota": 0, "readonly": "off", "recordsize": 131072, "refer": 253704, "refquota": 0, "refreservation": 0, "reservation": 0, "sync": "standard", "used": 253704}
0

Unfortunately, this is what I see when snooping the provisioner:

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
17:19:33.667899 IP freenas-nfs-provisioner-cffbb8f44-dpvp7.56334 > freenas.local.80: Flags [P.], seq 2892095849:2892096082, ack 4282234469, win 219, options [nop,nop,TS val 1956500347 ecr 1027750863], length 233: HTTP: GET /api/v1.0/storage/dataset/metalgods/k8s/ HTTP/1.1
E....C@.@...
bp.
......P.a.i.=.e...........
t..{=B;.GET /api/v1.0/storage/dataset/metalgods/k8s/ HTTP/1.1
Host: freenas.local:80
User-Agent: Go-http-client/1.1
Accept: application/json
Authorization: Basic cm9vdDpteXBhc3N3b3JkCg==
Content-Type: application/json
Accept-Encoding: gzip


17:19:33.704049 IP freenas.local.80 > freenas-nfs-provisioner-cffbb8f44-dpvp7.56334: Flags [P.], seq 1:281, ack 233, win 1028, options [nop,nop,TS val 1027750899 ecr 1956500347], length 280: HTTP: HTTP/1.1 401 Unauthorized
E..L..@.?...
...
bp..P...=.e.a.R...........
=B;.t..{HTTP/1.1 401 Unauthorized
Server: nginx
Date: Tue, 23 Jun 2020 17:19:33 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
WWW-Authenticate: Basic Realm="django-tastypie"
Vary: Accept-Language, Cookie
Content-Language: en

0


17:19:33.819787 IP freenas-nfs-provisioner-cffbb8f44-dpvp7.56340 > freenas.local.80: Flags [P.], seq 442110070:442110303, ack 4167529060, win 219, options [nop,nop,TS val 1956500499 ecr 65777863], length 233: HTTP: GET /api/v1.0/storage/dataset/metalgods/k8s/ HTTP/1.1
E....=@.@...
bp.
......P.Z.v.grd...........
t.......GET /api/v1.0/storage/dataset/metalgods/k8s/ HTTP/1.1
Host: freenas.local:80
User-Agent: Go-http-client/1.1
Accept: application/json
Authorization: Basic cm9vdDpteXBhc3N3b3JkCg==
Content-Type: application/json
Accept-Encoding: gzip


17:19:33.848877 IP freenas.local.80 > freenas-nfs-provisioner-cffbb8f44-dpvp7.56340: Flags [P.], seq 1:281, ack 233, win 1028, options [nop,nop,TS val 65777892 ecr 1956500499], length 280: HTTP: HTTP/1.1 401 Unauthorized
E..L..@.?...
...
bp..P...grd.Z._...........
....t...HTTP/1.1 401 Unauthorized
Server: nginx
Date: Tue, 23 Jun 2020 17:19:33 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
Connection: keep-alive
WWW-Authenticate: Basic Realm="django-tastypie"
Vary: Accept-Language, Cookie
Content-Language: en

0

It's unclear to me how I have set it up wrong. I'm guessing the more restrictive Accept and Accept-Encoding headers are causing problems, but I'm far from an authentication expert. Any ideas on how to fix this?
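
For reference, a minimal sketch of the in-cluster secret (the secret name and values here are illustrative; key names follow the sample secret.yaml). One thing worth double-checking is that every data value is base64-encoded without a trailing newline, i.e. produced with echo -n rather than echo:

apiVersion: v1
kind: Secret
metadata:
  name: freenas-nfs            # illustrative name
  namespace: kube-system
type: Opaque
data:
  protocol: aHR0cA==           # "http"
  host: ZnJlZW5hcy5sb2NhbA==   # "freenas.local"
  port: ODA=                   # "80"
  username: cm9vdA==           # "root"
  password: bXlwYXNzd29yZA==   # "mypassword" -- no trailing newline
  allowInsecure: dHJ1ZQ==      # "true"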

JH

support quotas

The more I've thought about this approach (dataset + NFS share), the more I like it. I'd really like to start exploring setting a quota on the dataset when it's created, but I'm having trouble getting the build to work. Being new to Go makes this a bit of a stretch, but I'd like to help. If you've got some pointers on building and running this from a development perspective, that would be great.

Mount point not created when using Helm 3

Output of helm version:

version.BuildInfo{Version:"v3.0.0-beta.3", GitCommit:"5cb923eecbe80d1ad76399aee234717c11931d9a", GitTreeState:"clean", GoVersion:"go1.12.9"}

Output of kubectl version:

Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T12:36:28Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}

Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.10", GitCommit:"37d169313237cb4ceb2cc4bef300f2ae3053c1a2", GitTreeState:"clean", BuildDate:"2019-08-19T10:44:49Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}

Cloud Provider/Platform (AKS, GKE, Minikube etc.):

Self-hosting Kubernetes Cluster on DELL R930 Servers.

I am using Helm 2 to install stable/redis-ha, and it works fine.
But when I switch to Helm 3 with the same arguments, it fails with:

MountVolume.SetUp failed for volume "pvc-d7ffe33c-d796-11e9-8568-801844f0932c" : 
mount failed: exit status 32 

Mounting command: systemd-run 
Mounting arguments: 
--description=Kubernetes transient mount for /var/lib/kubelet/pods/d802cecd-d796-11e9-8568-801844f0932c/volumes/kubernetes.io~nfs/pvc-d7ffe33c-d796-11e9-8568-801844f0932c 
--scope 
-- mount -t nfs 192.168.1.99:/mnt/data/super-kubernetes/my-space/data-redis-ha-server-0 /var/lib/kubelet/pods/d802cecd-d796-11e9-8568-801844f0932c/volumes/kubernetes.io~nfs/pvc-d7ffe33c-d796-11e9-8568-801844f0932c 

Output: 
Running scope as unit: run-r717513c4480b481ea2e4f0e3486caaeb.scope 
mount.nfs: 
access denied by server while mounting 192.168.1.99:/mnt/data/super-kubernetes/my-space/data-redis-ha-server-0

The PV and PVC were successfully created on my FreeNAS, just as when I use Helm 2.
I am not sure if this is a Helm 3 problem, since it works fine with Helm 2.
I thought the key point might be the access denied by server while mounting ... error,
so I checked the NFS services and found that /mnt/data/super-kubernetes/my-space/data-redis-ha-server-0 did not exist at all.

Maybe freenas-provisioner is not compatible with Helm 3?

NFS exports hard coded?

I may be missing something here, but it seems like the NFS export hosts are hard-coded to "knode1 knode2 knode3". I'm not seeing anywhere in the code where this could be made dynamic based on the nodes in a cluster, or even an IP address range. Is this by design?
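
For what it's worth, the share fields that show up in errors elsewhere in this tracker (Hosts, Network, Maproot*) suggest the export settings come from class parameters rather than being truly hard-coded; a sketch with assumed parameter names (the sample class.yaml is the authoritative list):

parameters:
  datasetParentName: "tank/k8s"         # illustrative
  # the two keys below are assumed names, not verified against the code
  shareHosts: "knode1 knode2 knode3"    # presumably the sample values you are seeing
  shareNetwork: "192.168.1.0/24"        # alternatively, an allowed network range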

status 404 Sorry, this request could not be processed. Please try again later

After running kubectl apply -f deploy\test-claim.yaml, the PVC is stuck in Pending status.

The log message from the PVC shows the following:

failed to provision volume with StorageClass "freenas-nfs": Error getting dataset "tank" - message: {"error_message":"Sorry, this request could not be processed. Please try again later."}, status: 404

segfault if storageClassName not defined

If I don't define storageClassName, the provisioner will segfault.
While this may be unlikely, I hit it because I have a bunch of PVC resources that I am moving across to this provisioner, and a lot of them specify the class via an annotation (I wrote them years ago).

I0517 22:29:47.841178       1 controller.go:926] provision "ci/builds-minio-data" class "zfs-array": started
I0517 22:29:47.841397       1 controller.go:926] provision "ci/builds-minio-config" class "zfs-array": started
E0517 22:29:47.843664       1 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/home/travis/.gimme/versions/go1.10.8.linux.amd64/src/runtime/asm_amd64.s:573
/home/travis/.gimme/versions/go1.10.8.linux.amd64/src/runtime/panic.go:502
/home/travis/.gimme/versions/go1.10.8.linux.amd64/src/runtime/panic.go:63
/home/travis/.gimme/versions/go1.10.8.linux.amd64/src/runtime/signal_unix.go:388
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/provisioner/provisioner.go:243
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:1014
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:786
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:759
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:683
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:697
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:657
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:616
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/home/travis/.gimme/versions/go1.10.8.linux.amd64/src/runtime/asm_amd64.s:2361
I0517 22:29:47.843651       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"ci", Name:"builds-minio-data", UID:"6347a89e-cba8-47fb-9bf6-979f1ba741db", APIVersion:"v1", ResourceVersion:"106896691", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "ci/builds-minio-data"
I0517 22:29:47.843765       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"ci", Name:"builds-minio-config", UID:"0f0366b2-0c04-4f9c-a98e-a66b6fa1e1ef", APIVersion:"v1", ResourceVersion:"106896687", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "ci/builds-minio-config"
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x107fec1]

goroutine 106 [running]:
github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x107
panic(0x11e5de0, 0x1cc7c60)
	/home/travis/.gimme/versions/go1.10.8.linux.amd64/src/runtime/panic.go:502 +0x229
github.com/nmaupu/freenas-provisioner/provisioner.(*freenasProvisioner).Provision(0xc420401f40, 0xc4200ae69a, 0x6, 0xc4203ec210, 0x28, 0x0, 0x0, 0x0, 0xc420010b98, 0xc420729f80, ...)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/provisioner/provisioner.go:243 +0x61
github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller.(*ProvisionController).provisionClaimOperation(0xc4204d8000, 0xc420010b98, 0x1cda720, 0xc42073e6e0)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:1014 +0xcf8
github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller.(*ProvisionController).syncClaim(0xc4204d8000, 0x134e0c0, 0xc420010b98, 0x134e0c0, 0xc420010b98)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:786 +0xaf
github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller.(*ProvisionController).syncClaimHandler(0xc4204d8000, 0xc42073e6e0, 0x14, 0x11804c0, 0xc4200a21d0)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:759 +0x8d
github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller.(*ProvisionController).processNextClaimWorkItem.func1(0xc4204d8000, 0x11804c0, 0xc4200a21d0, 0x0, 0x0)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:683 +0xe3
github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller.(*ProvisionController).processNextClaimWorkItem(0xc4204d8000, 0xc4200b4201)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:697 +0x55
github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller.(*ProvisionController).runClaimWorker(0xc4204d8000)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:657 +0x2b
github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller.(*ProvisionController).(github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller.runClaimWorker)-fm()
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:616 +0x2a
github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc4205b60a0)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc4205b60a0, 0x3b9aca00, 0x0, 0x1, 0x0)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xbd
github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc4205b60a0, 0x3b9aca00, 0x0)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller.(*ProvisionController).Run.func1
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:616 +0x3e1
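
For reference, the two forms side by side, using the claim and class names from the log above (access mode and size are illustrative); per the report, setting the spec field avoids the segfault:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: builds-minio-data
  namespace: ci
  # legacy form that triggers the crash: class set only via annotation,
  # leaving spec.storageClassName undefined
  # annotations:
  #   volume.beta.kubernetes.io/storage-class: zfs-array
spec:
  storageClassName: zfs-array
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi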

Invalid character 'E' looking for beginning of value

If I set the 'host' portion of secret.yml to the base64 of an IP address or a DNS hostname, I get this in the logs:

kubectl -n kube-system logs -f freenas-nfs-provisioner-5559b967df-js7sb

I0114 21:55:22.266603       1 controller.go:926] provision "default/freenas-test-pvc" class "freenas-nfs": started
I0114 21:55:22.271765       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"freenas-test-pvc", UID:"a2d0a669-1846-11e9-b420-000c29289e1c", APIVersion:"v1", ResourceVersion:"10976528", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/freenas-test-pvc"
W0114 21:55:22.310721       1 dataset.go:60] invalid character 'E' looking for beginning of value
W0114 21:55:22.310772       1 controller.go:685] Retrying syncing claim "default/freenas-test-pvc" because failures 14 < threshold 15
E0114 21:55:22.310798       1 controller.go:700] error syncing claim "default/freenas-test-pvc": failed to provision volume with StorageClass "freenas-nfs": invalid character 'E' looking for beginning of value
I0114 21:55:22.310998       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"freenas-test-pvc", UID:"a2d0a669-1846-11e9-b420-000c29289e1c", APIVersion:"v1", ResourceVersion:"10976528", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "freenas-nfs": invalid character 'E' looking for beginning of value

message: {"error_message":"Sorry, this request could not be processed. Please try again later."}, status: 404

I'm setting up a monster lab and would love to get this working. I have managed-nfs-storage working, but that seems to proxy an existing export rather than interact with the API.

Normal Provisioning 4s (x2 over 19s) freenas.org/nfs_freenas-nfs-provisioner-5746757fdd-cj9kg_10e56dd8-f801-11e9-b9dd-62478df8baf0 External provisioner is provisioning volume for claim "default/freenas-test-pvc"
Warning ProvisioningFailed 4s (x2 over 19s) freenas.org/nfs_freenas-nfs-provisioner-5746757fdd-cj9kg_10e56dd8-f801-11e9-b9dd-62478df8baf0 failed to provision volume with StorageClass "freenas-nfs": Error getting dataset "mnt/k8spool03" - message: {"error_message":"Sorry, this request could not be processed. Please try again later."}, status: 404
Normal ExternalProvisioning 3s (x3 over 19s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "freenas.org/nfs" or manually created by system administrator

Here is what I see when calling the API manually.

{
"avail": 101417787392,
"id": 109,
"mountpoint": "/mnt/k8spool03",
"name": "k8spool03",
"path": "k8spool03",
"status": "-",
"type": "dataset",
"used": 421888,
"used_pct": 0
}

FreeNAS version: FreeNAS-11.2-U6
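
One thing worth checking: the failing lookup above is for a dataset named "mnt/k8spool03", while the API response reports the dataset name as "k8spool03" (with /mnt/k8spool03 being only the mountpoint). If the class points at the mountpoint path, switching to the dataset name may help; a minimal sketch:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: freenas-nfs
provisioner: freenas.org/nfs
parameters:
  datasetParentName: "k8spool03"   # ZFS dataset name as reported by the API, not the /mnt/... path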

crash when creating PV in OKD3.11

  • FreeNAS: 11.2-U5
  • OpenShift Master: v3.11.0+f8b6451-269
  • Kubernetes Master: v1.11.0+d4cacc0
  • OpenShift Web Console: v3.11.0+ea42280
  • freenas-provisioner: v2.4

I created a PV (any size), and the freenas-nfs-provisioner pod crashed with the following log:

I0819 09:50:33.675241 1 leaderelection.go:187] attempting to acquire leader lease kube-system/freenas.org-nfs...

I0819 09:50:51.079613 1 leaderelection.go:196] successfully acquired lease kube-system/freenas.org-nfs
I0819 09:50:51.079725 1 controller.go:571] Starting provisioner controller freenas.org/nfs_freenas-nfs-provisioner-597d5b54ff-b5h5q_c9a12f6a-c266-11e9-ab40-0a580a830207!
I0819 09:50:51.080589 1 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"freenas.org-nfs", UID:"9feece69-c265-11e9-af03-005056837f20", APIVersion:"v1", ResourceVersion:"623006", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' freenas-nfs-provisioner-597d5b54ff-b5h5q_c9a12f6a-c266-11e9-ab40-0a580a830207 became leader
I0819 09:50:51.179903 1 controller.go:620] Started provisioner controller freenas.org/nfs_freenas-nfs-provisioner-597d5b54ff-b5h5q_c9a12f6a-c266-11e9-ab40-0a580a830207!
I0819 09:50:51.180060 1 controller.go:926] provision "kube-system/k1" class "freenas-nfs": started
E0819 09:50:51.183384 1 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/home/travis/.gimme/versions/go1.10.5.linux.amd64/src/runtime/asm_amd64.s:573
/home/travis/.gimme/versions/go1.10.5.linux.amd64/src/runtime/panic.go:502
/home/travis/.gimme/versions/go1.10.5.linux.amd64/src/runtime/panic.go:63
/home/travis/.gimme/versions/go1.10.5.linux.amd64/src/runtime/signal_unix.go:388
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/provisioner/provisioner.go:243
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:1014
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:786
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:759
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:683
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:697
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:657
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:616
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/home/travis/.gimme/versions/go1.10.5.linux.amd64/src/runtime/asm_amd64.s:2361
I0819 09:50:51.183915 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"kube-system", Name:"k1", UID:"b763fdec-c266-11e9-a2cb-0050568364d3", APIVersion:"v1", ResourceVersion:"622848", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "kube-system/k1"
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x10194d1]

goroutine 109 [running]:
github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:58 +0x107
panic(0x1178760, 0x1c108d0)
	/home/travis/.gimme/versions/go1.10.5.linux.amd64/src/runtime/panic.go:502 +0x229
github.com/nmaupu/freenas-provisioner/provisioner.(*freenasProvisioner).Provision(0xc4200ac2c0, 0xc4200bf0d0, 0x6, 0xc420355530, 0x28, 0x0, 0x0, 0x0, 0xc4203b1da8, 0xc4200b7a40, ...)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/provisioner/provisioner.go:243 +0x61
github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller.(*ProvisionController).provisionClaimOperation(0xc4203ae4e0, 0xc4203b1da8, 0x1c23160, 0xc4204e14a0)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:1014 +0xcf8
github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller.(*ProvisionController).syncClaim(0xc4203ae4e0, 0x12d8220, 0xc4203b1da8, 0x12d8220, 0xc4203b1da8)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:786 +0xaf
github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller.(*ProvisionController).syncClaimHandler(0xc4203ae4e0, 0xc4204e14a0, 0xe, 0x11149a0, 0xc4205096b0)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:759 +0x8d
github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller.(*ProvisionController).processNextClaimWorkItem.func1(0xc4203ae4e0, 0x11149a0, 0xc4205096b0, 0x0, 0x0)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:683 +0xe3
github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller.(*ProvisionController).processNextClaimWorkItem(0xc4203ae4e0, 0xc4205c0201)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:697 +0x55
github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller.(*ProvisionController).runClaimWorker(0xc4203ae4e0)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:657 +0x2b
github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller.(*ProvisionController).(github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller.runClaimWorker)-fm()
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:616 +0x2a
github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc4204392f0)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x54
github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc4204392f0, 0x3b9aca00, 0x0, 0x20001, 0x0)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134 +0xbd
github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc4204392f0, 0x3b9aca00, 0x0)
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88 +0x4d
created by github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller.(*ProvisionController).Run.func1
	/home/travis/gopath/src/github.com/nmaupu/freenas-provisioner/vendor/github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/controller/controller.go:616 +0x3e1

doesn't work at all

Nothing happens. I followed the directions in the README, and all I get is a volume claim that stays pending forever. There is no traffic between the FreeNAS box and the Kubernetes node.

Update your readme or class.yaml comments for FreeNAS 11.3

For FreeNAS 11.3, I needed to change these 2 values in the class:
datasetEnableQuotas: 'false'
datasetEnableReservation: 'false'

Without this change, I would receive this error:
Warning ProvisioningFailed 15s (x2 over 30s) freenas.org/nfs_freenas-nfs-provisioner-7769df7f96-5wnwf_63dd7f68-9c3f-11ea-b0bc-0a468e8acd9a failed to provision volume with StorageClass "freenas-nfs": Error creating dataset "tank/default/freenas-test-pvc" - message: {"refquota":["Specify the value with IEC suffixes, e.g. 10 GiB"],"refreservation":["Specify the value with IEC suffixes, e.g. 10 GiB"]}, status: 400
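
For anyone landing here, these two values appear to sit under parameters in class.yaml; a minimal sketch (the parent dataset name is illustrative):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: freenas-nfs
provisioner: freenas.org/nfs
parameters:
  datasetParentName: "tank"            # illustrative
  datasetEnableQuotas: "false"
  datasetEnableReservation: "false"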

Interested in building out iscsi provisioning?

This is pretty cool... it looks similar to the nfs-client provisioner, but it probably handles size limits better. Any interest in extending the provisioner to also support block storage and return iSCSI volumes? I'd be interested in helping.

Error claiming PVC byte measurement

I've set up the deployment following the guide. Everything runs well and is deployed correctly. When I try to create a PVC, the status says Pending and the freenas pod logs the following:

[...]

I0208 11:24:44.552136       1 controller.go:926] provision "default/test" class "freenas-nfs": started
I0208 11:24:44.554605       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"test", UID:"a53f182d-2b93-11e9-b69e-005056a6835c", APIVersion:"v1", ResourceVersion:"2742491", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/test"
E0208 11:24:44.557407       1 controller.go:688] Giving up syncing claim "default/test" because failures 15 >= threshold 15
E0208 11:24:44.557427       1 controller.go:700] error syncing claim "default/test": failed to provision volume with StorageClass "freenas-nfs": byte quantity must be a positive integer with a unit of measurement like M, MB, MiB, G, GiB, or GB
I0208 11:24:44.557629       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"test", UID:"a53f182d-2b93-11e9-b69e-005056a6835c", APIVersion:"v1", ResourceVersion:"2742491", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "freenas-nfs": byte quantity must be a positive integer with a unit of measurement like M, MB, MiB, G, GiB, or GB

I cannot find the message in the code nor anywhere else in Kubernetes. I am not sure if it is a message from FreeNAS or from Kubernetes.

I am using FreeNAS 11.2 and Kubernetes 1.11. Any thoughts on this issue? Is anyone else seeing the same thing?

claiming storage fails with "invalid character '<' looking for beginning of value"

Here's the log from the freenas-provisioner container:

W1123 16:48:09.350090       1 controller.go:685] Retrying syncing claim "default/freenas-test-pvc" because failures 0 < threshold 15
E1123 16:48:09.350480       1 controller.go:700] error syncing claim "default/freenas-test-pvc": failed to provision volume with StorageClass "freenas-nfs": invalid character '<' looking for beginning of value

After reading through the code, I'm guessing it's getting HTML back and, obviously, can't parse it correctly.

curling the same FreeNAS API endpoint directly from within the cluster works perfectly fine, and I've verified you're looking for the right fields with the right data types in the response.

Here's the response when I curl the API myself:

{
  "atime": "off",
  "avail": 107374092288,
  "comments": "provisioned automatically through kubernetes",
  "compression": "lz4",
  "dedup": "off",
  "exec": "on",
  "inherit_props": [
    "aclinherit",
    "org.freebsd.ioc:active"
  ],
  "mountpoint": "/mnt/data/kube",
  "name": "data/kube",
  "pool": "data",
  "quota": 107374182400,
  "readonly": "off",
  "recordsize": 131072,
  "refer": 90112,
  "refquota": 0,
  "refreservation": 0,
  "reservation": 0,
  "sync": "standard",
  "used": 90112
}

I really wish there was a debug/loglevel flag I could pass to see the raw responses from FreeNAS to aid in troubleshooting.

I've tried versions 2.3, 2.4 and 2.5 from docker hub.

Here's the spam of my uninteresting YAML:

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: freenas-nfs-provisioner
  namespace: kube-system
  labels:
    app: freenas-nfs-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: freenas-nfs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: freenas-nfs-provisioner
    spec:
      serviceAccountName: freenas-nfs-provisioner
      containers:
        - name: freenas-nfs-provisioner
          image: docker.io/nmaupu/freenas-provisioner:2.5

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: freenas-nfs
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: freenas.org/nfs
allowVolumeExpansion: true
reclaimPolicy: Delete
mountOptions: []
parameters:
  datasetParentName: "data/kube"

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: freenas-test-pvc
spec:
  storageClassName: freenas-nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10M

The Secret is entirely omitted from this post, but I can tell you that I'm using HTTPS to an IP on port 443 with allowInsecure turned on. Unlike the other YAML resources above, I've supplied values for every key in the sample secret.yaml. All values are base64-encoded.

If I can help any more, please let me know.

Thanks!
