
docs's Introduction

Kubernetes-CSI Documentation

This repository contains documentation capturing how to develop and deploy a Container Storage Interface (CSI) driver on Kubernetes.

To access the documentation go to: kubernetes-csi.github.io/docs

To make changes to the documentation, modify the files in book/src and submit a new PR.

Once the PR is reviewed and merged, CI automatically generates the HTML (using mdbook) and checks it in to src/_book for serving.

To update the CRD API documentation, run:

./hack/gen-api.sh

The script uses the gen-crd-api-reference-docs tool to generate a markdown document for the VolumeSnapshot CRD API. See the script for more information and supported configuration.

Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the community page.

You can reach the maintainers of this project at:

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.

To start editing on localhost

$ git clone git@github.com:kubernetes-csi/docs.git
$ cd docs
$ make $(pwd)/mdbook
$ make serve

Open http://localhost:3000 in your browser to view the book locally.


docs's Issues

Cannot run hostpath integration example

Link to page: https://kubernetes-csi.github.io/docs/Example.html

I've been trying to develop my own CSI driver, but unfortunately I can't even seem to get the example in the docs running with minikube. I have tried it on both macOS and Linux hosts with the kubeadm bootstrapper and the kvm2, virtualbox, and hyperkit drivers for minikube.

I am consistently getting the following error:

Name:         csi-pod
Namespace:    default
Node:         minikube/192.168.64.3
Start Time:   Fri, 03 Aug 2018 11:13:58 -0700
Labels:       app=hostpath-driver
Annotations:  <none>
Status:       Running
IP:           172.17.0.4
Containers:
  external-provisioner:
    Container ID:  docker://1e8f87f4d3a4eb3c603afc6304c6040d746e7fe1b698f1339dce3899dbd467b5
    Image:         quay.io/k8scsi/csi-provisioner:v0.2.1
    Image ID:      docker-pullable://quay.io/k8scsi/csi-provisioner@sha256:fd4ed32315e846b6654f97c95b373da001fd9638cd5935c20a5bf9f5889e8602
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --provisioner=csi-hostpath
      --csi-address=/csi/csi.sock
    State:          Running
      Started:      Fri, 03 Aug 2018 11:14:02 -0700
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /csi from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from csi-service-account-token-qgwmr (ro)
  driver-registrar:
    Container ID:  docker://27a57b906712f5b2be9d700d54fc614be0677ace085bb12da7bc3d2f55909dfc
    Image:         quay.io/k8scsi/driver-registrar:v0.2.0
    Image ID:      docker-pullable://quay.io/k8scsi/driver-registrar@sha256:9a84ec490b5ff5390b12be21acf707273781cd0911cc597712a254bc1862f220
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --csi-address=/csi/csi.sock
    State:          Running
      Started:      Fri, 03 Aug 2018 11:14:04 -0700
    Ready:          True
    Restart Count:  0
    Environment:
      KUBE_NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /csi from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from csi-service-account-token-qgwmr (ro)
  external-attacher:
    Container ID:  docker://88c9dbd33133c01c15b1b59dd32c7ea8104bc0862eedb30dcc43224e9804327c
    Image:         quay.io/k8scsi/csi-attacher:v0.2.0
    Image ID:      docker-pullable://quay.io/k8scsi/csi-attacher@sha256:5cbb7934bd86d400c221379cff8b24ed4c06e121ea59608cfd7e67690100ba54
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --csi-address=$(ADDRESS)
    State:          Running
      Started:      Fri, 03 Aug 2018 11:14:07 -0700
    Ready:          True
    Restart Count:  0
    Environment:
      ADDRESS:  /csi/csi.sock
    Mounts:
      /csi from socket-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from csi-service-account-token-qgwmr (ro)
  hostpath-driver:
    Container ID:  docker://14923bdcda87b52deae5fccf045788a17b45cc1f2bb1e95e39ead96ca6e5d26f
    Image:         quay.io/k8scsi/hostpathplugin:v0.2.0
    Image ID:      docker-pullable://quay.io/k8scsi/hostpathplugin@sha256:6c640a9b6a87e9f7261ff73be2e000367aa21f8f0c6ebfda97d4eefa5523ab53
    Port:          <none>
    Host Port:     <none>
    Args:
      --v=5
      --endpoint=$(CSI_ENDPOINT)
      --nodeid=$(KUBE_NODE_NAME)
    State:          Waiting
      Reason:       RunContainerError
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      OCI runtime create failed: open /var/run/docker/runtime-runc/moby/14923bdcda87b52deae5fccf045788a17b45cc1f2bb1e95e39ead96ca6e5d26f/state.json: no such file or directory: unknown
      Exit Code:    128
      Started:      Fri, 03 Aug 2018 11:14:08 -0700
      Finished:     Fri, 03 Aug 2018 11:14:08 -0700
    Ready:          False
    Restart Count:  0
    Environment:
      CSI_ENDPOINT:    unix:///csi/csi.sock
      KUBE_NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /csi from socket-dir (rw)
      /var/lib/kubelet/pods from mountpoint-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from csi-service-account-token-qgwmr (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          False 
  PodScheduled   True 
Volumes:
  socket-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/plugins/csi-hostpath
    HostPathType:  DirectoryOrCreate
  mountpoint-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/pods
    HostPathType:  DirectoryOrCreate
  csi-service-account-token-qgwmr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  csi-service-account-token-qgwmr
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age   From               Message
  ----     ------                 ----  ----               -------
  Normal   Scheduled              13s   default-scheduler  Successfully assigned csi-pod to minikube
  Normal   SuccessfulMountVolume  12s   kubelet, minikube  MountVolume.SetUp succeeded for volume "mountpoint-dir"
  Normal   SuccessfulMountVolume  12s   kubelet, minikube  MountVolume.SetUp succeeded for volume "socket-dir"
  Normal   SuccessfulMountVolume  12s   kubelet, minikube  MountVolume.SetUp succeeded for volume "csi-service-account-token-qgwmr"
  Normal   Pulling                12s   kubelet, minikube  pulling image "quay.io/k8scsi/csi-provisioner:v0.2.1"
  Normal   Created                9s    kubelet, minikube  Created container
  Normal   Pulled                 9s    kubelet, minikube  Successfully pulled image "quay.io/k8scsi/csi-provisioner:v0.2.1"
  Normal   Started                9s    kubelet, minikube  Started container
  Normal   Pulling                9s    kubelet, minikube  pulling image "quay.io/k8scsi/driver-registrar:v0.2.0"
  Normal   Pulling                7s    kubelet, minikube  pulling image "quay.io/k8scsi/csi-attacher:v0.2.0"
  Normal   Pulled                 7s    kubelet, minikube  Successfully pulled image "quay.io/k8scsi/driver-registrar:v0.2.0"
  Normal   Started                7s    kubelet, minikube  Started container
  Normal   Created                7s    kubelet, minikube  Created container
  Normal   Pulled                 5s    kubelet, minikube  Successfully pulled image "quay.io/k8scsi/csi-attacher:v0.2.0"
  Normal   Created                5s    kubelet, minikube  Created container
  Normal   Started                4s    kubelet, minikube  Started container
  Normal   Pulling                4s    kubelet, minikube  pulling image "quay.io/k8scsi/hostpathplugin:v0.2.0"
  Normal   Pulled                 3s    kubelet, minikube  Successfully pulled image "quay.io/k8scsi/hostpathplugin:v0.2.0"
  Normal   Created                3s    kubelet, minikube  Created container
  Warning  Failed                 3s    kubelet, minikube  Error: failed to start container "hostpath-driver": Error response from daemon: OCI runtime create failed: open /var/run/docker/runtime-runc/moby/hostpath-driver/state.json: no such file or directory: unknown

I have tried to create the folder manually before deploying to no avail. I'm not sure if there's a problem with my config or if it's a problem with the driver itself. Either way, the example is not working and should probably be updated to work. I'll continue working on fixing it (any help would be appreciated) and if I find a solution I'll submit a PR.

Define a deprecation policy for csi sidecars

It can mirror the Kubernetes policy.

Define policy on:

  • sidecar CLI arguments
  • sidecar behavior
  • compatibility with K8s
  • compatibility with CSI spec

This may also tie into defining our sidecar versioning scheme and what warrants a major, minor and patch version bump.

/assign
cc @jsafrane @pohly @saad-ali

Add csi-digitalocean to the list of drivers

Hi,

As per the instructions written at https://kubernetes-csi.github.io/docs/Drivers.html I would like to add our CSI plugin for DigitalOcean block storage to the list of production drivers.

The driver repo is here: https://github.com/digitalocean/csi-digitalocean
Name: DigitalOcean Block Storage
Status: Alpha
Description: A Container Storage Interface (CSI) Driver for DigitalOcean Block Storage

We're working on releasing the initial version v0.0.1. Until then it's marked as alpha. Please let me know if you need more information.

Thanks

Add documentation on Snapshot secret support

The external-provisioner currently supports special keys for specifying secrets during volume provisioning. However, the https://kubernetes-csi.github.io/docs/secrets-and-credentials.html page does not document that.

So we should:

  1. Add two sub-pages to the "Secrets & Credentials" page, one for external-provisioner and one for external-snapshotter, to document the "special keys" supported by both CSI volume and snapshot provisioning (a hedged sketch follows below).
  2. Add use cases to the "Secrets & Credentials" page to document common scenarios, for example:
    • For backend credentials (which do not change per volume), populate them directly in the CSI driver (they do not need to go through the CSI protocol).
    • For encrypting a disk or snapshot...

@oleksiys may be a good candidate to help with 2.
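
For illustration only, a hedged sketch of the kind of example those sub-pages could show, assuming the csi.storage.k8s.io/* parameter keys used by recent external-provisioner and external-snapshotter releases (older sidecar releases used different key names, so verify against the versions in use; driver and secret names are placeholders):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-sc
provisioner: hostpath.csi.k8s.io                 # placeholder driver name
parameters:
  # Secret handed to the driver during CreateVolume/DeleteVolume.
  csi.storage.k8s.io/provisioner-secret-name: provision-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: example-snapclass
driver: hostpath.csi.k8s.io                      # placeholder driver name
deletionPolicy: Delete
parameters:
  # Secret handed to the driver during CreateSnapshot/DeleteSnapshot.
  csi.storage.k8s.io/snapshotter-secret-name: snapshot-secret
  csi.storage.k8s.io/snapshotter-secret-namespace: default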

Enable release notes automation

It's hard to keep track of changes between releases and compose correct CHANGELOG.

Proposal:

  • Add a PR template to all kubernetes-csi repos. Template of the template: kubernetes-csi/external-attacher#139. It's a slightly trimmed version of the kubernetes/kubernetes template with links to Kubernetes-specific docs removed.

  • Enable the release-notes bot in prow to nag people to fill in the release-note block (same as in kubernetes/kubernetes).

  • PR reviewers/approvers must ensure that the release-note block is filled in for PRs where it makes sense.

  • Use the release-notes tool to generate the CHANGELOG from the release-note blocks, "release-note Action required:" entries, and /kind {feature|bug} labels in PRs. See Kubernetes 1.11 - we won't get SIG sections, but the rest should be similar. Some manual editing is still required (deprecations?).

@kubernetes-csi/csi-misc

staging directory must be a pod volumeMount

If a CSI driver implements NodeStageVolume, then it must have a volumeMount entry which ensures that /var/lib/kubelet/plugins/kubernetes.io/csi (or is it /var/lib/kubelet/plugins/kubernetes.io/csi/pv/?) is shared with the host.

Otherwise the driver is not going to see the staging directory that kubelet creates for it, like /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-4863bcae-4b2f-11e9-9bd3-deadbeef0100/globalmount.
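
For illustration only, a minimal sketch of the kind of volumeMount entry a node-plugin pod could use so that the kubelet's staging directories are shared with the host; the exact path and propagation mode are assumptions to verify, and all names are placeholders:

# Fragment of a node-plugin pod spec:
containers:
  - name: hostpath-driver
    volumeMounts:
      - name: kubelet-dir
        mountPath: /var/lib/kubelet
        # Bidirectional propagation so mounts created inside the staging
        # directory are visible to the kubelet on the host.
        mountPropagation: Bidirectional
volumes:
  - name: kubelet-dir
    hostPath:
      path: /var/lib/kubelet
      type: Directory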

In practice, drivers like gcepd have solved this instead by creating the staging directory themselves if needed:
https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/blob/5d62c770f3e331e4236ebe42b74321f007f8c1e7/pkg/gce-pd-csi-driver/node.go#L201-L208

We should:

use StatefulSet and DaemonSet in example

The recommended approach is to deploy the controller in a StatefulSet and the node plugin in a DaemonSet. The example should follow that recommendation, even though it is not strictly needed for a driver that only works on a single node.

It does fix one specific issue: directly creating a pod as in the current example can fail due to a race condition (kubernetes/kubernetes#67882).
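
For illustration only, a hedged skeleton of that split; the images are the ones from the hostpath example above, while everything else (names, labels, omitted volumes and RBAC) is a placeholder rather than the actual example manifests:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: csi-hostpath-controller
spec:
  serviceName: csi-hostpath-controller
  replicas: 1                                   # a single controller instance
  selector:
    matchLabels: {app: csi-hostpath-controller}
  template:
    metadata:
      labels: {app: csi-hostpath-controller}
    spec:
      containers:
        - name: external-provisioner
          image: quay.io/k8scsi/csi-provisioner:v0.2.1
        - name: external-attacher
          image: quay.io/k8scsi/csi-attacher:v0.2.0
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: csi-hostpath-node
spec:
  selector:
    matchLabels: {app: csi-hostpath-node}
  template:
    metadata:
      labels: {app: csi-hostpath-node}
    spec:
      containers:
        - name: driver-registrar
          image: quay.io/k8scsi/driver-registrar:v0.2.0
        - name: hostpath-driver
          image: quay.io/k8scsi/hostpathplugin:v0.2.0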

Add more examples for secret templating

The docs currently have example StorageClasses for using only fixed secret references, or only templated secret references.

It might be useful to also add an example StorageClass that combines both fixed and templated references.

And also describe some more sample user stories that this can support.

/help
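
For illustration only, a hedged sketch of a StorageClass that combines a fixed provisioner secret with a templated node-publish secret; the parameter keys and ${...} template variables follow what recent external-provisioner releases document and should be checked against the release in use (driver and secret names are placeholders):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mixed-secret-refs
provisioner: hostpath.csi.k8s.io                 # placeholder driver name
parameters:
  # Fixed reference: one cluster-wide secret used for provisioning.
  csi.storage.k8s.io/provisioner-secret-name: provision-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  # Templated reference: a per-PVC secret resolved when the volume is mounted.
  csi.storage.k8s.io/node-publish-secret-name: ${pvc.name}-mount-secret
  csi.storage.k8s.io/node-publish-secret-namespace: ${pvc.namespace}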

Add PR check (travis?) to verify generated files

Updating docs is a two step process:

  1. Update file in book/*
  2. Run generation command to generate files in docs/*

To ensure that someone doesn't accidentally update only book/* or only the generated files, we should add a new PR check (via Travis, maybe?) that regenerates the files and verifies they do not differ from the generated files committed in the PR.

Incorrect link in docs

The documentation for the node driver registrar links to the Device Plugin registration instructions. While reimplementing the registration API, we found that this is incorrect. As far as I can tell, the node driver registrar uses the plugin watcher mechanism to register a CSI driver, not the device plugin endpoint: https://github.com/kubernetes/kubelet/blob/088ee84ea259bf9c445109ea75b2938dd39d2074/pkg/apis/pluginregistration/v1/api.proto#L57

I would open a PR, but I am unclear how this should be documented because there isn't much user-facing (as opposed to developer) documentation on the plugin watcher. Any direction would be welcome, and then I can open a PR.

Best practices page

We should have a best practices page outlining suggestions for deploying CSI drivers that document things like:

  • Tolerations on the DaemonSet (see the hedged sketch below)
  • Leader election
  • Restricted RBACs

We can also extend that by providing a linter that vendors can use on their deployment specs. The linter could also detect things like:

  • Outdated RBACs
  • Deprecated options

@kubernetes-csi/csi-misc
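
As one concrete illustration of the toleration bullet above (a hedged sketch, not an agreed recommendation): node-plugin DaemonSets commonly tolerate all taints so the driver keeps running on tainted or not-ready nodes.

# Fragment of a node-plugin DaemonSet pod spec:
tolerations:
  - operator: Exists          # tolerate every taint, regardless of key or effect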

Update Drivers page

Please update the following entry on the CSI Drivers page (https://kubernetes-csi.github.io/docs/Drivers.html):

ScaleIO | v0.1.0 | A Container Storage Interface (CSI) Storage Plugin for DellEMC ScaleIO

to

ScaleIO | v0.1.0 (Tech Preview) | A Container Storage Interface (CSI) Storage Plugin for DellEMC ScaleIO

This is temporary, until we make the new VxFlex OS CSI driver available.

Regards,

Eyal S.
VxFlex OS Dell EMC

Add Makefile for docs

Updating docs is a two step process:

  1. Update file in book/*
  2. Run generation command to generate files in docs/*

We should add a Makefile at the root that makes it dead simple to run the commands:

make all <-- generate all files
make clean <-- remove temp files
make clobber <-- remove all generated files

Ambiguous terminology `Access Modes (RWO,ROX,RWX)`

The table from
https://github.com/kubernetes-csi/docs/blob/master/book/src/drivers.md#production-drivers
has a Supported Access Modes column with values like Read/Write Single Pod or Read/Write Multiple Pods.

According to the Kubernetes documentation at https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes

The access modes are:
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadOnlyMany – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted as read-write by many nodes

My point is that Read/Write Multiple Pods is not the same as read-write by many nodes:
case 1: what if the Pods are on the same node?
case 2: what if the Pods are on different nodes?

Also take a look at the CSI specification:
https://github.com/container-storage-interface/spec/blob/4731db0e0bc53238b93850f43ab05d9355df0fd9/csi.proto#L365
It also defines AccessMode based on SINGLE_NODE/MULTI_NODE.

It is necessary to clarify the access-mode terminology.

building and testing sidecar containers

For new developers it would be useful to document how they can build and test CSI. This could include:

  • building and pushing container images for driver-registrar/hostpath (aka drivers)/external-attacher/external-provisioner
  • running the kubernetes/test/e2e/storage/csi_volumes.go test for hostpath

livenessprobe version info broken

The version table for https://kubernetes-csi.github.io/docs/livenessprobe.html currently looks like this:

Latest stable release | Branch | Min CSI Version | Max CSI Version | Container Image | Min K8s Version | Max K8s Version |
--|--|--|--|--|--
livenessprobe v2.0.0 | release-2.0 | v1.0.0 | quay.io/k8scsi/livenessprobe:v2.0.0 | v1.13 | -
livenessprobe v1.1.0 | release-1.1 | v1.0.0 | quay.io/k8scsi/livenessprobe:v1.1.0 | v1.13 | -
Unsupported. | No 0.x branch. | v0.3.0 | quay.io/k8scsi/livenessprobe:v0.4.1 | v1.10 | v1.16

Looks like it may have broken in #290

CC @msau42

RBAC definitions

Depending on how the sidecar containers are deployed (for example, with or without leader election), different RBAC definitions are needed. The example in this repo contains definitions that work for this particular example, but more documentation is needed.

The example could also be improved:

Two options:

  • each sidecar developer is responsible for documenting the required permissions as part of the individual releases and provides the reference .yaml files, then the "doc" example links to these upstream definitions, or
  • we choose this docs repo as the authoritative location for the deployment information of all sidecar containers

Add troubleshooting: deploy CSI plugin based on CSI spec v0.2.0 in Kubernetes v1.12

  1. Add troubleshooting: deploying a CSI plugin based on CSI spec v0.2.0 in Kubernetes v1.12
  • Reason: Because KubeletPluginsWatcher is enabled by default in Kubernetes v1.12, the CSI plugin registration mechanism has changed and the kubelet registers CSI plugins through kubelet-registration-path.
  • Problem: Users cannot deploy CSI plugins based on CSI spec v0.2.0 in Kubernetes v1.12 in the same way as in Kubernetes v1.10 and v1.11.
  • Solution: In my experience, the problem is solved by disabling the KubeletPluginsWatcher feature gate on the kubelet (a hedged sketch follows this list). In my opinion, we could add this case to the troubleshooting chapter.
  2. If we must disable the KubeletPluginsWatcher feature gate before deploying a CSI plugin based on CSI spec v0.2.0, can we deploy and use CSI plugins based on v0.3.0 and v0.2.0 simultaneously in a Kubernetes v1.12 cluster?
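
For illustration only, a hedged sketch of one way to disable that feature gate, assuming the kubelet is started with a KubeletConfiguration file (the equivalent --feature-gates kubelet flag achieves the same thing):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  KubeletPluginsWatcher: false      # fall back to the pre-1.12 registration behavior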

What is the difference between NodePublishVolume and NodeStageVolume?

  1. I would like to develop a CSI driver that lets a Kubernetes pod mount block storage provided by an IaaS.
    Do I only need to implement six methods, i.e. CreateVolume/DeleteVolume, ControllerPublish/ControllerUnpublish, and NodePublishVolume/NodeUnpublishVolume?
  2. I have a question about the difference between NodePublishVolume and NodeStageVolume.

Add metrics documentation

I need to add documentation to https://kubernetes-csi.github.io/docs/sidecar-containers.html

Background:

A new CSI Metrics Library was added to csi-lib-utils and is part of the v0.7.0 release. This library can be used to automatically generate Prometheus metrics for all CSI operations, including total count, error count, and call latency. It was integrated into the following CSI sidecar containers:

New flags "--metrics-address" and "--metrics-path" are now part of all four of those sidecars. Driver deployments should set these flags to ensure metrics are emitted.
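
For illustration only, a hedged sketch of how a deployment might set those flags on one of the sidecars; the image tag, port, and path are placeholders:

# Fragment of a controller pod spec:
containers:
  - name: external-provisioner
    image: quay.io/k8scsi/csi-provisioner:v1.6.0     # hypothetical version built with csi-lib-utils v0.7.0
    args:
      - --csi-address=/csi/csi.sock
      - --metrics-address=:8080                      # expose Prometheus metrics on this address
      - --metrics-path=/metrics                      # metrics HTTP path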

Driver Feature table is a bit out of control

Keeping a table of production plugins and the features they implement is handy, but as the project has grown, along with the number of plugins and features, this simple table has become difficult to view and even more difficult to update and keep accurate.

I don't have a solution for reformatting what's in place, but I wonder if, instead of trying to keep the feature columns up to date, we could replace them with a link to each plugin's documentation. That link could point to a plugin doc (we could provide a template for plugin authors to use) hosted in this repo under sub-sections (one sub-section per plugin).

The downside is that users like to have the matrix in front of them to compare things, so this doesn't help with that. It does, however, clean things up in terms of listing available plugins and their features; it provides the information, just without an easy way to compare plugins.

Opening this issue to propose the template idea or to solicit input for a better suggestion if someone has one.

Create a SECURITY_CONTACTS file.

As per the email sent to kubernetes-dev[1], please create a SECURITY_CONTACTS
file.

The template for the file can be found in the kubernetes-template repository[2].
A description for the file is in the steering-committee docs[3]; you might need to search that page for "Security Contacts".

Please feel free to ping me on the PR when you make it, otherwise I will see when
you close this issue. :)

Thanks so much, let me know if you have any questions.

(This issue was generated from a tool, apologies for any weirdness.)

[1] https://groups.google.com/forum/#!topic/kubernetes-dev/codeiIoQ6QE
[2] https://github.com/kubernetes/kubernetes-template-project/blob/master/SECURITY_CONTACTS
[3] https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance-template-short.md
