I think from a driver point of view, we can't really tell the difference between a top-level share and a subdirectory. The "share" in the volumeAttributes can be either, e.g.:

volumeAttributes:
  server: 127.0.0.1
  share: /export
The main advantage of the CSI driver over the nfs-client external provisioner is that we can potentially support CSI-only features like snapshots and cloning, although a generic driver will be a bit limited in what it can offer. I think snapshots/cloning would basically just be copying directories to another place in the NFS share.
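To make the "snapshots are just directory copies" idea concrete, here is a rough sketch (in Python for illustration only; the driver itself is Go, and all names here are invented, not the driver's API): the controller would mount the share and duplicate the volume's subdirectory into a snapshots area on the same export.

```python
import os
import shutil

def snapshot_volume(share_mount: str, volume_subdir: str, snapshot_name: str) -> str:
    """Hypothetical snapshot-as-copy: duplicate the volume's subdirectory
    into a ".snapshots" area on the same mounted share.

    share_mount   - where the NFS export is already mounted (illustrative)
    volume_subdir - the per-volume subdirectory to snapshot
    snapshot_name - name of the snapshot directory to create
    """
    src = os.path.join(share_mount, volume_subdir)
    dst = os.path.join(share_mount, ".snapshots", snapshot_name)
    # Ensure the snapshots area exists, then copy the whole tree.
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    shutil.copytree(src, dst)
    return dst
```

A clone would be the same copy aimed at a new volume subdirectory instead of a snapshots area; either way it is a full data copy, which is why a generic driver would be limited compared to storage that supports native snapshots.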
from csi-driver-nfs.
I do have an implementation of the nfs-client equivalent behavior that provisions subdirectories from an existing nfs server. I would be happy to contribute if you think the nfs-client-like provisioner is the way to go.
Another alternative is that the provisioner creates an entire nfs server, like the ganesha-based nfs external provisioner.
I think it would be hard for a single driver to support both. I think we should just pick one mode for simplicity. There was an attempt in the past to support different provisioning modules via go plugins, but I think there were some challenges with that model: #13
cc @kmova for thoughts
Here's an example StorageClass for my nfs-client equivalent implementation:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-nfs
provisioner: csi-nfsplugin
parameters:
  server: 10.123.42.242
  base-dir: vol1
You specify the NFS share you want to use as the "base". With every CreateVolume call, it mounts that share, creates a subdirectory under it, then unmounts it.
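The mount / mkdir / unmount flow described above can be sketched roughly as follows (Python for illustration only; the real driver is Go, and every name and path here is hypothetical, not the actual implementation):

```python
import os
import subprocess

def volume_share_path(base_dir: str, volume_name: str) -> str:
    """Share path the provisioned PV would point at, e.g. /vol1/pvc-1234."""
    return "/" + "/".join([base_dir.strip("/"), volume_name])

def create_volume(server: str, base_dir: str, volume_name: str,
                  staging_dir: str = "/tmp/nfs-provision") -> str:
    """Hypothetical nfs-client-style CreateVolume: temporarily mount the
    base share, create a per-volume subdirectory, unmount, and return the
    share path for the resulting PV."""
    os.makedirs(staging_dir, exist_ok=True)
    share = f"{server}:/{base_dir.strip('/')}"
    # Mount the base share on the controller (requires privileges).
    subprocess.run(["mount", "-t", "nfs", share, staging_dir], check=True)
    try:
        os.makedirs(os.path.join(staging_dir, volume_name), exist_ok=True)
    finally:
        subprocess.run(["umount", staging_dir], check=True)
    return volume_share_path(base_dir, volume_name)
```

DeleteVolume would be the mirror image: mount the base share, remove (or archive) the subdirectory, unmount.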
> that provisions subdirectories from an existing nfs server

That sounds like a solid implementation; I am in favor of supporting this approach. There might be a path to also support multiple NFS servers in a cluster, but I think it would make sense to have a separate StorageClass for each NFS server, and the plugin can obtain connection information from the StorageClass. Kind of like the old ganesha provisioner, but without spinning up the entire NFS server; it would only serve as a reference point to one created independently.
> There was an attempt in the past to support different provisioning modules via go plugins, but I think there were some challenges with that model: #13

This is a very good resource, thanks. Let's see if we can resurrect that PR; I could find some spare time in early May. @huchengze also feel free to chime in, every idea is welcome.
Typically, `df -h` on a sub-directory will show the usage details of the main export path, except for NFS implementations that can treat a subdirectory as a separate volume. `df -h` is useful when using full block devices; the information it provides can be misleading when using subdirectories. `du` is safer, but expensive when there are lots of subdirectories and files.
Example: the NFS server does the following to provide /export:

- Formats a block device /dev/sdb with ext4
- Mounts /dev/sdb as /mnt/nfs-export
- Adds the export entry /mnt/nfs-export => /export

NFS client `df -h` on /export will match NFS server `df -h` on /mnt/nfs-export. NFS client `df -h` on /export/subdir will also be the same as NFS server `df -h` on /mnt/nfs-export.
Assume a slight variation of the above example, where the NFS server exports a host path instead, like /var/export => /export, with /var/export being a subdirectory on the OS disk itself. `df -h` for /export and /export/subdir will then report the OS disk of the NFS server.
One case where I have seen NFS sub-directories show their own capacity and usage is NFS volumes backed by ZFS datasets.
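The behavior above is easy to see from statvfs(), which is the data `df` reports: the numbers describe the filesystem containing a path, so a plain NFS subdirectory inherits the numbers of the whole export's backing filesystem. A small illustration (Python for illustration; the paths in the comment are examples from the scenario above, not real mounts):

```python
import os

def capacity_bytes(path: str) -> tuple[int, int]:
    """Return (total, available) bytes for the filesystem containing
    `path`, i.e. the same figures `df` shows. For a mounted NFS
    subdirectory, this reflects the server-side backing filesystem of
    the whole export, unless the server treats the subdirectory as its
    own volume (e.g. a ZFS dataset)."""
    st = os.statvfs(path)
    return st.f_frsize * st.f_blocks, st.f_frsize * st.f_bavail

# In the example above, these would all report the same total:
#   capacity_bytes("/export")          # on the client
#   capacity_bytes("/export/subdir")   # on the client
#   capacity_bytes("/mnt/nfs-export")  # on the server
```

This is also roughly what a CSI node plugin's volume-metrics path would consult, which is why per-subdirectory usage needs `du`-style accounting instead.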
I think we can do an empty implementation of the CreateVolume func first, to support StorageClass usage; that's what I have done on the SMB CSI driver:
https://github.com/kubernetes-csi/csi-driver-smb/blob/master/deploy/example/e2e_usage.md#option1-storage-class-usage
Shall I do the same implementation on this NFS driver? @msau42
There is already discussion here: kubernetes-csi/csi-driver-smb#61 (comment)
Hi @huchengze, for now the development of this plugin is a little slower due to having no dedicated full-time maintainer; only a few hobbyists are slowly grinding through the unfinished tasks in their free time. There is no immediate plan for finishing that in the next few weeks afaik, but if you'd like to submit a PR, we appreciate all help and will gladly review it through to a successful merge :)
/reopen
@msau42: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/kind feature
> I think it would be hard for a single driver to support both. I think we should just pick one mode for simplicity.

I agree with this approach.
@msau42 @wozniakjan - a couple of questions related to the scope/implementation:

- Will csi-driver-nfs be considered a replacement for the in-tree NFS driver? Should it support:
  - Managing a PV with an NFS spec.
  - Managing a PV with an NFS spec that includes sub-directories.
- What are the advantages of going with the CSI driver approach for NFS sub-directory volumes as opposed to using an external provisioner? One advantage I see is that the node driver can potentially support volume metrics. (Though metrics could be done via a DaemonSet, the CSI approach seems cleaner.) What else?
Supporting volume metrics is another useful feature. Do you know if `df` works if you mount a subdirectory of an NFS share?
/assign
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
I haven't had time to work on this. If anyone is interested, I can push my incomplete workspace and someone can take over.
> /remove-lifecycle rotten
> I haven't had time to work on this. If anyone is interested, I can push my incomplete workspace and someone can take over.

@msau42 could you push your code somewhere? I have an intern looking at this first.
And what about implementing an empty CreateVolume first? That would support StorageClass usage (you would still need to bring your own NFS server); refer to https://github.com/kubernetes-csi/csi-driver-smb/blob/ca5f7e2fc6ae8cc49e9c47d3311eab0d8486d22e/pkg/smb/controllerserver.go#L28-L40
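For reference, the "empty CreateVolume" approach linked above amounts to performing no server-side action at all and simply echoing the StorageClass parameters back as the volume context for the node plugin to mount later. A rough sketch (Python for illustration; the actual driver is Go, and the dict shapes here are invented stand-ins for the CSI request/response messages):

```python
import uuid

def create_volume(name: str, parameters: dict) -> dict:
    """Hypothetical 'empty' CreateVolume: nothing is created on the NFS
    server (you bring your own share); the StorageClass parameters,
    e.g. {"server": ..., "share": ...}, are passed through unchanged as
    the volume context for the later NodePublishVolume to mount."""
    if not name:
        raise ValueError("volume name missing in request")
    return {
        "volume_id": str(uuid.uuid4()),
        "volume_context": dict(parameters),
    }
```

The downside noted below is real: with nothing created per volume, there is also nothing for DeleteVolume to clean up, so reclaim is effectively a no-op.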
Here's my branch, happy to help walk through it if needed: msau42@da0e3d6
Re: empty CreateVolume, I think not supporting delete is a downside. I would rather try to go with nfs-client behavior (which I think is close) instead of adding this and having people depend on it, and then removing it in the future.
> Here's my branch, happy to help walk through it if needed: msau42@da0e3d6
> Re: empty CreateVolume, I think not supporting delete is a downside. I would rather try to go with nfs-client behavior (which I think is close) instead of adding this and having people depend on it, and then removing it in the future.

@msau42 thanks, I see. I will start with an e2e test first and then continue with this PR; it looks close to complete.