k8s-device-plugin's Introduction

NVIDIA device plugin for Kubernetes

About

The NVIDIA device plugin for Kubernetes is a Daemonset that allows you to automatically:

  • Expose the number of GPUs on each node of your cluster
  • Keep track of the health of your GPUs
  • Run GPU-enabled containers in your Kubernetes cluster.

This repository contains NVIDIA's official implementation of the Kubernetes device plugin. As of v0.15.0 this repository also holds the implementation for GPU Feature Discovery labels, for further information on GPU Feature Discovery see here.

Please note that:

  • The NVIDIA device plugin API is beta as of Kubernetes v1.10.
  • The NVIDIA device plugin is currently lacking
    • Comprehensive GPU health checking features
    • GPU cleanup features
  • Support will only be provided for the official NVIDIA device plugin (and not for forks or other variants of this plugin).

Prerequisites

The list of prerequisites for running the NVIDIA device plugin is described below:

  • NVIDIA drivers ~= 384.81
  • nvidia-docker >= 2.0 || nvidia-container-toolkit >= 1.7.0 (>= 1.11.0 to use integrated GPUs on Tegra-based systems)
  • nvidia-container-runtime configured as the default low-level runtime
  • Kubernetes version >= 1.10
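A quick way to sanity-check these prerequisites on a node is shown below (a rough sketch; the exact output depends on your driver, toolkit, and runtime versions):

$ nvidia-smi                         # drivers are installed and the GPU is visible
$ nvidia-container-cli --version     # the NVIDIA container toolkit is installed
$ docker info | grep -i runtime      # on docker nodes: nvidia should be listed (ideally as the default)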

Quick Start

Preparing your GPU Nodes

The following steps need to be executed on all your GPU nodes. This README assumes that the NVIDIA drivers and the nvidia-container-toolkit have been pre-installed. It also assumes that you have configured the nvidia-container-runtime as the default low-level runtime to use.

Please see: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html

Example for debian-based systems with docker and containerd

Install the NVIDIA Container Toolkit

For instructions on installing and getting started with the NVIDIA Container Toolkit, refer to the installation guide.

Also note the configuration instructions for the specific container runtimes you are using (containerd, CRI-O, docker), remembering to restart each runtime after applying the configuration changes; a sketch for docker and containerd follows below.
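As a sketch, based on the NVIDIA Container Toolkit documentation, configuring docker and containerd with the nvidia-ctk command and restarting them could look like this:

$ sudo nvidia-ctk runtime configure --runtime=docker --set-as-default
$ sudo systemctl restart docker
$ sudo nvidia-ctk runtime configure --runtime=containerd
$ sudo systemctl restart containerd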

If the nvidia runtime should be set as the default runtime (required for docker), the --set-as-default argument must also be included in the commands above. If this is not done, a RuntimeClass needs to be defined.
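A minimal RuntimeClass for this purpose might look as follows (a sketch for recent Kubernetes versions; pods opt in to it by setting runtimeClassName: nvidia in their spec):

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: nvidia
handler: nvidia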

Notes on CRI-O configuration

When running Kubernetes with CRI-O, add a config file at /etc/crio/crio.conf.d/99-nvidia.conf to set nvidia-container-runtime as the default low-level OCI runtime. This will take priority over the default crun config file at /etc/crio/crio.conf.d/10-crun.conf:

[crio]

  [crio.runtime]
    default_runtime = "nvidia"

    [crio.runtime.runtimes]

      [crio.runtime.runtimes.nvidia]
        runtime_path = "/usr/bin/nvidia-container-runtime"
        runtime_type = "oci"

As stated in the linked documentation, this file can automatically be generated with the nvidia-ctk command:

$ sudo nvidia-ctk runtime configure --runtime=crio --set-as-default --config=/etc/crio/crio.conf.d/99-nvidia.conf

CRI-O uses crun as its default low-level OCI runtime, so crun needs to be added to the runtimes of the nvidia-container-runtime in the config file at /etc/nvidia-container-runtime/config.toml:

[nvidia-container-runtime]
runtimes = ["crun", "docker-runc", "runc"]

And then restart CRI-O:

$ sudo systemctl restart crio

Enabling GPU Support in Kubernetes

Once you have configured the options above on all the GPU nodes in your cluster, you can enable GPU support by deploying the following Daemonset:

$ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.15.0/deployments/static/nvidia-device-plugin.yml

Note: This is a simple static daemonset meant to demonstrate the basic features of the nvidia-device-plugin. Please see the instructions below for Deployment via helm when deploying the plugin in a production setting.

Running GPU Jobs

With the daemonset deployed, NVIDIA GPUs can now be requested by a container using the nvidia.com/gpu resource type:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  restartPolicy: Never
  containers:
    - name: cuda-container
      image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2
      resources:
        limits:
          nvidia.com/gpu: 1 # requesting 1 GPU
  tolerations:
  - key: nvidia.com/gpu
    operator: Exists
    effect: NoSchedule
EOF
$ kubectl logs gpu-pod
[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done

WARNING: if you don't request GPUs when using the device plugin with NVIDIA images, all the GPUs on the machine will be exposed inside your container.
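One way to keep GPUs out of containers that should not see them (a hedged sketch; the exact behavior depends on your NVIDIA Container Runtime configuration) is to set the NVIDIA_VISIBLE_DEVICES environment variable to void in those containers:

  containers:
    - name: no-gpu-container
      image: ubuntu:22.04
      env:
        - name: NVIDIA_VISIBLE_DEVICES
          value: "void"   # do not inject any GPUs into this container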

Configuring the NVIDIA device plugin binary

The NVIDIA device plugin has a number of options that can be configured for it. These options can be configured as command line flags, environment variables, or via a config file when launching the device plugin. Here we explain what each of these options are and how to configure them directly against the plugin binary. The following section explains how to set these configurations when deploying the plugin via helm.

As command line flags or envvars

Flag                      Envvar                   Default Value
--mig-strategy            $MIG_STRATEGY            "none"
--fail-on-init-error      $FAIL_ON_INIT_ERROR      true
--nvidia-driver-root      $NVIDIA_DRIVER_ROOT      "/"
--pass-device-specs       $PASS_DEVICE_SPECS       false
--device-list-strategy    $DEVICE_LIST_STRATEGY    "envvar"
--device-id-strategy      $DEVICE_ID_STRATEGY      "uuid"
--config-file             $CONFIG_FILE             ""
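For example, launching a locally built plugin binary with a non-default configuration might look like the following (a sketch; the flag values are purely illustrative):

$ ./k8s-device-plugin --mig-strategy=single --pass-device-specs

# or, equivalently, using environment variables:
$ MIG_STRATEGY=single PASS_DEVICE_SPECS=true ./k8s-device-plugin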

As a configuration file

version: v1
flags:
  migStrategy: "none"
  failOnInitError: true
  nvidiaDriverRoot: "/"
  plugin:
    passDeviceSpecs: false
    deviceListStrategy: "envvar"
    deviceIDStrategy: "uuid"

Note: The configuration file has an explicit plugin section because it is a shared configuration between the plugin and gpu-feature-discovery. All options inside the plugin section are specific to the plugin. All options outside of this section are shared.

Configuration Option Details

MIG_STRATEGY: the desired strategy for exposing MIG devices on GPUs that support it

[none | single | mixed] (default 'none')

The MIG_STRATEGY option configures the daemonset to be able to expose Multi-Instance GPUs (MIG) on GPUs that support them. More information on what these strategies are and how they should be used can be found in Supporting Multi-Instance GPUs (MIG) in Kubernetes.

Note: With a MIG_STRATEGY of mixed, you will have additional resources available to you of the form nvidia.com/mig-<slice_count>g.<memory_size>gb that you can set in your pod spec to get access to a specific MIG device.

FAIL_ON_INIT_ERROR: fail the plugin if an error is encountered during initialization, otherwise block indefinitely

(default 'true')

When set to true, the FAIL_ON_INIT_ERROR option fails the plugin if an error is encountered during initialization. When set to false, it prints an error message and blocks the plugin indefinitely instead of failing. Blocking indefinitely follows legacy semantics that allow the plugin to deploy successfully on nodes that don't have GPUs on them (and aren't supposed to have GPUs on them) without throwing an error. In this way, you can blindly deploy a daemonset with the plugin on all nodes in your cluster, whether they have GPUs on them or not, without encountering an error. However, doing so means that there is no way to detect an actual error on nodes that are supposed to have GPUs on them. Failing if an initialization error is encountered is now the default and should be adopted by all new deployments.

NVIDIA_DRIVER_ROOT: the root path for the NVIDIA driver installation

(default '/')

When the NVIDIA drivers are installed directly on the host, this should be set to '/'. When installed elsewhere (e.g. via a driver container), this should be set to the root filesystem where the drivers are installed (e.g. '/run/nvidia/driver').

Note: This option is only necessary when used in conjunction with the $PASS_DEVICE_SPECS option described below. It tells the plugin what prefix to add to any device file paths passed back as part of the device specs.

PASS_DEVICE_SPECS: pass the paths and desired device node permissions for any NVIDIA devices being allocated to the container

(default 'false')

This option exists for the sole purpose of allowing the device plugin to interoperate with the CPUManager in Kubernetes. Setting this flag also requires one to deploy the daemonset with elevated privileges, so only do so if you know you need to interoperate with the CPUManager.

DEVICE_LIST_STRATEGY: the desired strategy for passing the device list to the underlying runtime

[envvar | volume-mounts | cdi-annotations | cdi-cri ] (default 'envvar')

Note: Multiple device list strategies can be specified (as a comma-separated list).

The DEVICE_LIST_STRATEGY flag allows one to choose which strategy the plugin will use to advertise the list of GPUs allocated to a container. Possible values are:

  • envvar (default): the NVIDIA_VISIBLE_DEVICES environment variable as described here is used to select the devices that are to be injected by the NVIDIA Container Runtime.
  • volume-mounts: the list of devices is passed as a set of volume mounts instead of as an environment variable to instruct the NVIDIA Container Runtime to inject the devices. Details for the rationale behind this strategy can be found here.
  • cdi-annotations: CDI annotations are used to select the devices that are to be injected. Note that this does not require the NVIDIA Container Runtime, but does require a CDI-enabled container engine.
  • cdi-cri: the CDIDevices CRI field is used to select the CDI devices that are to be injected. This requires support in Kubernetes to forward these requests in the CRI to a CDI-enabled container engine.

DEVICE_ID_STRATEGY: the desired strategy for passing device IDs to the underlying runtime

[uuid | index] (default 'uuid')

The DEVICE_ID_STRATEGY flag allows one to choose which strategy the plugin will use to pass the device ID of the GPUs allocated to a container. The device ID has traditionally been passed as the UUID of the GPU. This flag lets a user decide if they would like to use the UUID or the index of the GPU (as seen in the output of nvidia-smi) as the identifier passed to the underlying runtime. Passing the index may be desirable in situations where pods that have been allocated GPUs by the plugin get restarted with different physical GPUs attached to them.

CONFIG_FILE: point the plugin at a configuration file instead of relying on command line flags or environment variables

(default '')

The order of precedence for setting each option is (1) command line flag, (2) environment variable, (3) configuration file. In this way, one could use a pre-defined configuration file, but then override the values set in it at launch time. As described below, a ConfigMap can be used to point the plugin at a desired configuration file when deploying via helm.
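As an illustration of this precedence (a sketch; the config file path is hypothetical), the following launch takes most options from a configuration file but overrides the MIG strategy on the command line:

$ ./k8s-device-plugin \
    --config-file=/etc/nvidia-device-plugin/config.yaml \
    --mig-strategy=single   # the flag wins over migStrategy in the config file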

Shared Access to GPUs

The NVIDIA device plugin allows oversubscription of GPUs through a set of extended options in its configuration file. There are two flavors of sharing available: Time-Slicing and MPS.

Note: The use of time-slicing and MPS are mutually exclusive.

In the case of time-slicing, CUDA time-slicing is used to allow workloads sharing a GPU to interleave with each other. However, nothing special is done to isolate workloads that are granted replicas from the same underlying GPU, and each workload has access to the GPU memory and runs in the same fault domain as all the others (meaning if one workload crashes, they all do).

In the case of MPS, a control daemon is used to manage access to the shared GPU. In contrast to time-slicing, MPS does space partitioning and allows memory and compute resources to be explicitly partitioned and enforces these limits per workload.

With CUDA Time-Slicing

The extended options for sharing using time-slicing can be seen below:

version: v1
sharing:
  timeSlicing:
    renameByDefault: <bool>
    failRequestsGreaterThanOne: <bool>
    resources:
    - name: <resource-name>
      replicas: <num-replicas>
    ...

That is, for each named resource under sharing.timeSlicing.resources, a number of replicas can now be specified for that resource type. These replicas represent the number of shared accesses that will be granted for a GPU represented by that resource type.

If renameByDefault=true, then each resource will be advertised under the name <resource-name>.shared instead of simply <resource-name>.

If failRequestsGreaterThanOne=true, then the plugin will fail to allocate any shared resources to a container that requests more than one. The container's pod will fail with an UnexpectedAdmissionError and will need to be manually deleted, updated, and redeployed.

For example:

version: v1
sharing:
  timeSlicing:
    resources:
    - name: nvidia.com/gpu
      replicas: 10

If this configuration were applied to a node with 8 GPUs on it, the plugin would now advertise 80 nvidia.com/gpu resources to Kubernetes instead of 8.

$ kubectl describe node
...
Capacity:
  nvidia.com/gpu: 80
...

Likewise, if the following configuration were applied to a node, then 80 nvidia.com/gpu.shared resources would be advertised to Kubernetes instead of 8 nvidia.com/gpu resources.

version: v1
sharing:
  timeSlicing:
    renameByDefault: true
    resources:
    - name: nvidia.com/gpu
      replicas: 10
    ...
$ kubectl describe node
...
Capacity:
  nvidia.com/gpu.shared: 80
...

In both cases, the plugin simply creates 10 references to each GPU and indiscriminately hands them out to anyone that asks for them.

If failRequestsGreaterThanOne=true were set in either of these configurations and a user requested more than one nvidia.com/gpu or nvidia.com/gpu.shared resource in their pod spec, then the container would fail with the resulting error:

$ kubectl describe pod gpu-pod
...
Events:
  Type     Reason                    Age   From               Message
  ----     ------                    ----  ----               -------
  Warning  UnexpectedAdmissionError  13s   kubelet            Allocate failed due to rpc error: code = Unknown desc = request for 'nvidia.com/gpu: 2' too large: maximum request size for shared resources is 1, which is unexpected
...

Note: Unlike with "normal" GPU requests, requesting more than one shared GPU does not imply that you will get guaranteed access to a proportional amount of compute power. It only implies that you will get access to a GPU that is shared by other clients (each of which has the freedom to run as many processes on the underlying GPU as they want). Under the hood CUDA will simply give an equal share of time to all of the GPU processes across all of the clients. The failRequestsGreaterThanOne flag is meant to help users understand this subtlety, by treating a request of 1 as an access request rather than an exclusive resource request. Setting failRequestsGreaterThanOne=true is recommended, but it is set to false by default to retain backwards compatibility.

As of now, the only resources available for time-slicing are nvidia.com/gpu as well as any of the resource types that emerge from configuring a node with the mixed MIG strategy.

For example, the full set of time-sliceable resources on a T4 card would be:

nvidia.com/gpu

And the full set of time-sliceable resources on an A100 40GB card would be:

nvidia.com/gpu
nvidia.com/mig-1g.5gb
nvidia.com/mig-2g.10gb
nvidia.com/mig-3g.20gb
nvidia.com/mig-7g.40gb

Likewise, on an A100 80GB card, they would be:

nvidia.com/gpu
nvidia.com/mig-1g.10gb
nvidia.com/mig-2g.20gb
nvidia.com/mig-3g.40gb
nvidia.com/mig-7g.80gb
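For instance, a configuration that time-slices one of these MIG resource types (rather than full GPUs) might look like the following sketch, where the resource name and replica count are purely illustrative:

version: v1
sharing:
  timeSlicing:
    resources:
    - name: nvidia.com/mig-1g.10gb
      replicas: 4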

With CUDA MPS

Note: Sharing with MPS is currently not supported on devices with MIG enabled.

The extended options for sharing using MPS can be seen below:

version: v1
sharing:
  mps:
    renameByDefault: <bool>
    resources:
    - name: <resource-name>
      replicas: <num-replicas>
    ...

That is, for each named resource under sharing.mps.resources, a number of replicas can be specified for that resource type. As is the case with time-slicing, these replicas represent the number of shared accesses that will be granted for a GPU associated with that resource type. In contrast with time-slicing, the amount of memory allowed per client (i.e. per partition) is managed by the MPS control daemon and limited to an equal fraction of the total device memory. In addition to controlling the amount of memory that each client can consume, the MPS control daemon also limits the amount of compute capacity that can be consumed by a client.

If renameByDefault=true, then each resource will be advertised under the name <resource-name>.shared instead of simply <resource-name>.

For example:

version: v1
sharing:
  mps:
    resources:
    - name: nvidia.com/gpu
      replicas: 10

If this configuration were applied to a node with 8 GPUs on it, the plugin would now advertise 80 nvidia.com/gpu resources to Kubernetes instead of 8.

$ kubectl describe node
...
Capacity:
  nvidia.com/gpu: 80
...

Likewise, if the following configuration were applied to a node, then 80 nvidia.com/gpu.shared resources would be advertised to Kubernetes instead of 8 nvidia.com/gpu resources.

version: v1
sharing:
  mps:
    renameByDefault: true
    resources:
    - name: nvidia.com/gpu
      replicas: 10
    ...
$ kubectl describe node
...
Capacity:
  nvidia.com/gpu.shared: 80
...

Furthermore, each of these resources -- either nvidia.com/gpu or nvidia.com/gpu.shared -- would have access to the same fraction (1/10) of the total memory and compute resources of the GPU.

Note: As of now, the only supported resources for MPS are nvidia.com/gpu resources, and only for full GPUs.
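With either sharing flavor and renameByDefault=true, workloads request one of the shared replicas like any other extended resource. A hedged sketch of such a pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: shared-gpu-pod
spec:
  restartPolicy: Never
  containers:
    - name: cuda-container
      image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2
      resources:
        limits:
          nvidia.com/gpu.shared: 1   # one shared replica of a GPU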

Deployment via helm

The preferred method to deploy the device plugin is as a daemonset using helm. Instructions for installing helm can be found here.

Begin by setting up the plugin's helm repository and updating it as follows:

$ helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
$ helm repo update

Then verify that the latest release (v0.15.0) of the plugin is available:

$ helm search repo nvdp --devel
NAME                     	  CHART VERSION  APP VERSION	DESCRIPTION
nvdp/nvidia-device-plugin	  0.15.0	 0.15.0		A Helm chart for ...

Once this repo is updated, you can begin installing packages from it to deploy the nvidia-device-plugin helm chart.

The most basic installation command without any options is then:

helm upgrade -i nvdp nvdp/nvidia-device-plugin \
  --namespace nvidia-device-plugin \
  --create-namespace \
  --version 0.15.0

Note: You only need to pass the --devel flag to helm search repo and the --version flag to helm upgrade -i if this is a pre-release version (e.g. <version>-rc.1). Full releases will be listed without these flags.

Configuring the device plugin's helm chart

The helm chart for the latest release of the plugin (v0.15.0) includes a number of customizable values.

Prior to v0.12.0 the most commonly used values were those that had direct mappings to the command line options of the plugin binary. As of v0.12.0, the preferred method to set these options is via a ConfigMap. The primary use case of the original values is then to override an option from the ConfigMap if desired. Both methods are discussed in more detail below.

The full set of values that can be set can be found here.

Passing configuration to the plugin via a ConfigMap

In general, we provide a mechanism to pass multiple configuration files to the plugin's helm chart, with the ability to choose which configuration file should be applied to a node via a node label.

In this way, a single chart can be used to deploy each component, but custom configurations can be applied to different nodes throughout the cluster.

There are two ways to provide a ConfigMap for use by the plugin:

  1. Via an external reference to a pre-defined ConfigMap
  2. As a set of named config files to build an integrated ConfigMap associated with the chart

These can be set via the chart values config.name and config.map respectively. In both cases, the value config.default can be set to point to one of the named configs in the ConfigMap and provide a default configuration for nodes that have not been customized via a node label (more on this later).

Single Config File Example

As an example, create a valid config file on your local filesystem, such as the following:

cat << EOF > /tmp/dp-example-config0.yaml
version: v1
flags:
  migStrategy: "none"
  failOnInitError: true
  nvidiaDriverRoot: "/"
  plugin:
    passDeviceSpecs: false
    deviceListStrategy: envvar
    deviceIDStrategy: uuid
EOF

And deploy the device plugin via helm (pointing it at this config file and giving it a name):

$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
    --version=0.15.0 \
    --namespace nvidia-device-plugin \
    --create-namespace \
    --set-file config.map.config=/tmp/dp-example-config0.yaml

Under the hood this will deploy a ConfigMap associated with the plugin and put the contents of the dp-example-config0.yaml file into it, using the name config as its key. It will then start the plugin such that this config gets applied when the plugin comes online.

If you don’t want the plugin’s helm chart to create the ConfigMap for you, you can also point it at a pre-created ConfigMap as follows:

$ kubectl create ns nvidia-device-plugin
$ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \
    --from-file=config=/tmp/dp-example-config0.yaml
$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
    --version=0.15.0 \
    --namespace nvidia-device-plugin \
    --create-namespace \
    --set config.name=nvidia-plugin-configs

Multiple Config File Example

For multiple config files, the procedure is similar.

Create a second config file with the following contents:

cat << EOF > /tmp/dp-example-config1.yaml
version: v1
flags:
  migStrategy: "mixed" # Only change from config0.yaml
  failOnInitError: true
  nvidiaDriverRoot: "/"
  plugin:
    passDeviceSpecs: false
    deviceListStrategy: envvar
    deviceIDStrategy: uuid
EOF

And redeploy the device plugin via helm (pointing it at both configs with a specified default).

$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
    --version=0.15.0 \
    --namespace nvidia-device-plugin \
    --create-namespace \
    --set config.default=config0 \
    --set-file config.map.config0=/tmp/dp-example-config0.yaml \
    --set-file config.map.config1=/tmp/dp-example-config1.yaml

As before, this can also be done with a pre-created ConfigMap if desired:

$ kubectl create ns nvidia-device-plugin
$ kubectl create cm -n nvidia-device-plugin nvidia-plugin-configs \
    --from-file=config0=/tmp/dp-example-config0.yaml \
    --from-file=config1=/tmp/dp-example-config1.yaml
$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
    --version=0.15.0 \
    --namespace nvidia-device-plugin \
    --create-namespace \
    --set config.default=config0 \
    --set config.name=nvidia-plugin-configs

Note: If the config.default flag is not explicitly set, then a default value will be inferred from the config if one of the config names is set to 'default'. If neither of these are set, then the deployment will fail unless there is only one config provided. In the case of just a single config being provided, it will be chosen as the default because there is no other option.

Updating Per-Node Configuration With a Node Label

With this setup, plugins on all nodes will have config0 configured for them by default. However, the following label can be set to change which configuration is applied:

kubectl label nodes <node-name> --overwrite \
    nvidia.com/device-plugin.config=<config-name>

For example, applying a custom config for all nodes that have T4 GPUs installed on them might be:

kubectl label node \
    --overwrite \
    --selector=nvidia.com/gpu.product=TESLA-T4 \
    nvidia.com/device-plugin.config=t4-config

Note: This label can be applied either before or after the plugin is started to get the desired configuration applied on the node. Anytime its value changes, the plugin will immediately be updated to start serving the desired configuration. If it is set to an unknown value, the plugin will skip reconfiguration. If it is ever unset, the plugin will fall back to the default.
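To see which configuration label (if any) is currently set on each node, a command like the following can be used:

$ kubectl get nodes -L nvidia.com/device-plugin.config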

Setting other helm chart values

As mentioned previously, the device plugin's helm chart continues to provide direct values to set the configuration options of the plugin without using a ConfigMap. These should only be used to set globally applicable options (which should then never be embedded in the set of config files provided by the ConfigMap), or used to override these options as desired.

These values are as follows:

  migStrategy:
      the desired strategy for exposing MIG devices on GPUs that support it
      [none | single | mixed] (default "none")
  failOnInitError:
      fail the plugin if an error is encountered during initialization, otherwise block indefinitely
      (default 'true')
  compatWithCPUManager:
      run with escalated privileges to be compatible with the static CPUManager policy
      (default 'false')
  deviceListStrategy:
      the desired strategy for passing the device list to the underlying runtime
      [envvar | volume-mounts | cdi-annotations | cdi-cri] (default "envvar")
  deviceIDStrategy:
      the desired strategy for passing device IDs to the underlying runtime
      [uuid | index] (default "uuid")
  nvidiaDriverRoot:
      the root path for the NVIDIA driver installation (typical values are '/' or '/run/nvidia/driver')

Note: There is no value that directly maps to the PASS_DEVICE_SPECS configuration option of the plugin. Instead a value called compatWithCPUManager is provided which acts as a proxy for this option. It both sets the PASS_DEVICE_SPECS option of the plugin to true AND makes sure that the plugin is started with elevated privileges to ensure proper compatibility with the CPUManager.

Besides these custom configuration options for the plugin, other standard helm chart values that are commonly overridden are:

  runtimeClassName:
      the runtimeClassName to use, for use with clusters that have multiple runtimes. (typical value is 'nvidia')

Please take a look in the values.yaml file to see the full set of overridable parameters for the device plugin.

Examples of setting these options include:

Enabling compatibility with the CPUManager and running with a CPU request of 100m and a memory limit of 512Mi:

$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
    --version=0.15.0 \
    --namespace nvidia-device-plugin \
    --create-namespace \
    --set compatWithCPUManager=true \
    --set resources.requests.cpu=100m \
    --set resources.limits.memory=512Mi

Enabling compatibility with the CPUManager and the mixed migStrategy

$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
    --version=0.15.0 \
    --namespace nvidia-device-plugin \
    --create-namespace \
    --set compatWithCPUManager=true \
    --set migStrategy=mixed

Deploying with gpu-feature-discovery for automatic node labels

As of v0.12.0, the device plugin's helm chart has integrated support to deploy gpu-feature-discovery (GFD) as a subchart. One can use GFD to automatically generate labels for the set of GPUs available on a node. Under the hood, it leverages Node Feature Discovery to perform this labeling.

To enable it, simply set gfd.enabled=true during helm install.

helm upgrade -i nvdp nvdp/nvidia-device-plugin \
    --version=0.15.0 \
    --namespace nvidia-device-plugin \
    --create-namespace \
    --set gfd.enabled=true

Under the hood this will also deploy node-feature-discovery (NFD) since it is a prerequisite of GFD. If you already have NFD deployed on your cluster and do not wish for it to be pulled in by this installation, you can disable it with nfd.enabled=false.

In addition to the standard node labels applied by GFD, the following label will also be included when deploying the plugin with the time-slicing extensions described above.

nvidia.com/<resource-name>.replicas = <num-replicas>

Additionally, the nvidia.com/<resource-name>.product label will be modified as follows if renameByDefault=false:

nvidia.com/<resource-name>.product = <product name>-SHARED

Using these labels, users have a way of selecting a shared vs. non-shared GPU in the same way they would traditionally select one GPU model over another. That is, the SHARED annotation ensures that a nodeSelector can be used to attract pods to nodes that have shared GPUs on them.
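For example, with renameByDefault=false, a pod that should land only on nodes exposing shared T4 GPUs might use a nodeSelector like the following (a sketch; the product label value depends on the GPUs in your cluster):

spec:
  nodeSelector:
    nvidia.com/gpu.product: Tesla-T4-SHARED
  containers:
    - name: cuda-container
      image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda10.2
      resources:
        limits:
          nvidia.com/gpu: 1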

Since having renameByDefault=true already encodes the fact that the resource is shared in the resource name, there is no need to annotate the product name with SHARED. Users can already find the shared resources they need by simply requesting them in their pod spec.

Note: When running with renameByDefault=false and migStrategy=single both the MIG profile name and the new SHARED annotation will be appended to the product name, e.g.:

nvidia.com/gpu.product = A100-SXM4-40GB-MIG-1g.5gb-SHARED

Deploying gpu-feature-discovery in standalone mode

As of v0.15.0, the device plugin's helm chart has integrated support to deploy gpu-feature-discovery in standalone mode.

When deploying gpu-feature-discovery standalone, begin by setting up the plugin's helm repository and updating it as follows:

$ helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
$ helm repo update

Then verify that the latest release (v0.15.0) of the plugin is available (Note that this includes the GFD chart):

$ helm search repo nvdp --devel
NAME                     	  CHART VERSION  APP VERSION	DESCRIPTION
nvdp/nvidia-device-plugin	  0.15.0	 0.15.0		A Helm chart for ...

Once this repo is updated, you can begin installing packages from it to deploy the gpu-feature-discovery component in standalone mode.

The most basic installation command without any options is then:

$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
  --version 0.15.0 \
  --namespace gpu-feature-discovery \
  --create-namespace \
  --set devicePlugin.enabled=false

Disabling auto-deployment of NFD and running with a MIG strategy of 'mixed' in the default namespace.

$ helm upgrade -i nvdp nvdp/nvidia-device-plugin \
    --version=0.15.0 \
    --set allowDefaultNamespace=true \
    --set nfd.enabled=false \
    --set migStrategy=mixed \
    --set devicePlugin.enabled=false

Note: You only need to pass the --devel flag to helm search repo and the --version flag to helm upgrade -i if this is a pre-release version (e.g. <version>-rc.1). Full releases will be listed without these flags.

Deploying via helm install with a direct URL to the helm package

If you prefer not to install from the nvidia-device-plugin helm repo, you can run helm install directly against the tarball of the plugin's helm package. The example below installs the same chart as the method above, except that it uses a direct URL to the helm chart instead of via the helm repo.

Using the default values for the flags:

$ helm upgrade -i nvdp \
    --namespace nvidia-device-plugin \
    --create-namespace \
    https://nvidia.github.io/k8s-device-plugin/stable/nvidia-device-plugin-0.15.0.tgz

Building and Running Locally

The next sections focus on building the device plugin locally and running it. They are intended purely for development and testing, and are not required by most users. They assume you are pinning to the latest release tag (i.e. v0.15.0), but can easily be modified to work with any available tag or branch.

With Docker

Build

Option 1, pull the prebuilt image from the NVIDIA NGC registry (nvcr.io):

$ docker pull nvcr.io/nvidia/k8s-device-plugin:v0.15.0
$ docker tag nvcr.io/nvidia/k8s-device-plugin:v0.15.0 nvcr.io/nvidia/k8s-device-plugin:devel

Option 2, build without cloning the repository:

$ docker build \
    -t nvcr.io/nvidia/k8s-device-plugin:devel \
    -f deployments/container/Dockerfile.ubuntu \
    https://github.com/NVIDIA/k8s-device-plugin.git#v0.15.0

Option 3, if you want to modify the code:

$ git clone https://github.com/NVIDIA/k8s-device-plugin.git && cd k8s-device-plugin
$ docker build \
    -t nvcr.io/nvidia/k8s-device-plugin:devel \
    -f deployments/container/Dockerfile.ubuntu \
    .

Run

Without compatibility for the CPUManager static policy:

$ docker run \
    -it \
    --security-opt=no-new-privileges \
    --cap-drop=ALL \
    --network=none \
    -v /var/lib/kubelet/device-plugins:/var/lib/kubelet/device-plugins \
    nvcr.io/nvidia/k8s-device-plugin:devel

With compatibility for the CPUManager static policy:

$ docker run \
    -it \
    --privileged \
    --network=none \
    -v /var/lib/kubelet/device-plugins:/var/lib/kubelet/device-plugins \
    nvcr.io/nvidia/k8s-device-plugin:devel --pass-device-specs

Without Docker

Build

$ C_INCLUDE_PATH=/usr/local/cuda/include LIBRARY_PATH=/usr/local/cuda/lib64 go build

Run

Without compatibility for the CPUManager static policy:

$ ./k8s-device-plugin

With compatibility for the CPUManager static policy:

$ ./k8s-device-plugin --pass-device-specs

Changelog

See the changelog

Issues and Contributing

Check out the Contributing document!

Versioning

Before v1.10 the versioning scheme of the device plugin had to match the version of Kubernetes exactly. After the promotion of device plugins to beta this condition was no longer required. We quickly noticed that this versioning scheme was very confusing for users, as they still expected to see a version of the device plugin for each version of Kubernetes.

This versioning scheme applies to the tags v1.8, v1.9, v1.10, v1.11, v1.12.

We have now changed the versioning to follow SEMVER. The first version following this scheme has been tagged v0.0.0.

Going forward, the major version of the device plugin will only change following a change in the device plugin API itself. For example, version v1beta1 of the device plugin API corresponds to version v0.x.x of the device plugin. If a new v2beta2 version of the device plugin API comes out, then the device plugin will increase its major version to 1.x.x.

As of now, the device plugin API for Kubernetes >= v1.10 is v1beta1. If you have a version of Kubernetes >= 1.10 you can deploy any device plugin version > v0.0.0.

Upgrading Kubernetes with the Device Plugin

Upgrading Kubernetes when you have a device plugin deployed doesn't require any particular changes to your workflow. The API is versioned and is pretty stable (though it is not guaranteed to be non-breaking). Starting with Kubernetes version 1.10, you can use v0.3.0 of the device plugin to perform upgrades, and Kubernetes won't require you to deploy a different version of the device plugin. Once a node comes back online after the upgrade, you will see GPUs re-registering themselves automatically.

Upgrading the device plugin itself is a more complex task. It is recommended to drain GPU tasks, as we cannot guarantee that GPU tasks will survive a rolling upgrade. However, we make best efforts to preserve GPU tasks during an upgrade.
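For instance, draining a GPU node before upgrading the plugin on it might look like this (a sketch; add the flags appropriate to your cluster's policies):

$ kubectl drain <node-name> --ignore-daemonsets
# ... upgrade the device plugin ...
$ kubectl uncordon <node-name>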

k8s-device-plugin's People

Contributors

achim92, arangogutierrez, carmark, cdesiniotis, dependabot[bot], dvavili, elezar, everpeace, fabiendupont, flx42, guptanswati, jasine, jiri-pinkava, jjacobelli, jwcheong0420, kerthcet, klueska, maistho, mikemckiernan, mkumatag, nvjmayo, pbxqdown, rajatchopra, renaudwastaken, rockrush, rorajani, rptaylor, scorpiocph, shivamerla, tariq1890


k8s-device-plugin's Issues

Handling hot plug events

I'm working on a feature where I can hotplug NVIDIA GPUs on the host. But when I do that, the device plugin does not recognize the hotplugged GPU.

It would be great if support for hotplug events were provided.

Is it better to change the docker runtime from nvidia to the default runC?

In the current setup, the nvidia runtime is set as the default docker runtime instead of the original runC.
This leads to the issue described in kubernetes/kubernetes#59631 and kubernetes/kubernetes#59629:
all GPUs are exposed into the container.

One way to solve this is to use an environment variable to tell nvidia-container-runtime not to expose the GPUs.
Another, better way:

  1. set the default docker runtime back to runC.
  2. we do not need nvidia-container-runtime to do pre-start hooks, use k8s-device-plugin to do the same jobs, such as inject GPU device.

This way, the issue kubernetes/kubernetes#59631 can be fixed.

@flx42 @RenaudWasTaken @cmluciano @jiayingz @vikaschoudhary16

not working on a local one-node cluster created by local-up-cluster.sh

Hi, I'm building a single-machine testing environment. Because Minikube doesn't support GPUs well, I use the local-up-cluster.sh script provided at https://github.com/kubernetes/kubernetes/blob/master/hack/local-up-cluster.sh to bring up a single-node cluster, but it does not work well with k8s-device-plugin.

Do the following to reproduce it:

  • get source code using go get -d k8s.io/kubernetes

  • In order to make the local-up-cluster.sh launch kubelet with gate options, I insert the following line to the top of local-up-cluster.sh
    FEATURE_GATES="DevicePlugins=true"

  • start the cluster using sudo ./hack/local-up-cluster.sh

When running
docker run -it -v /var/lib/kubelet/device-plugins:/var/lib/kubelet/device-plugins nvidia/k8s-device-plugin:1.9

I got:

2018/02/28 12:14:51 Loading NVML
2018/02/28 12:14:51 Fetching devices.
2018/02/28 12:14:51 Starting FS watcher.
2018/02/28 12:14:51 Starting OS watcher.
2018/02/28 12:14:51 Starting to serve on /var/lib/kubelet/device-plugins/nvidia.sock
2018/02/28 12:14:51 Could not register device plugin: rpc error: code = Unimplemented desc = unknown service deviceplugin.Registration
2018/02/28 12:14:51 Could not contact Kubelet, retrying. Did you enable the device plugin feature gate?
2018/02/28 12:14:51 You can check the prerequisites at: https://github.com/NVIDIA/k8s-device-plugin#prerequisites
2018/02/28 12:14:51 You can learn how to set the runtime at: https://github.com/NVIDIA/k8s-device-plugin#quick-start
2018/02/28 12:14:51 Starting to serve on /var/lib/kubelet/device-plugins/nvidia.sock
2018/02/28 12:14:51 Could not register device plugin: rpc error: code = Unimplemented desc = unknown service deviceplugin.Registration
2018/02/28 12:14:51 Could not contact Kubelet, retrying. Did you enable the device plugin feature gate?
2018/02/28 12:14:51 You can check the prerequisites at: https://github.com/NVIDIA/k8s-device-plugin#prerequisites
2018/02/28 12:14:51 You can learn how to set the runtime at: https://github.com/NVIDIA/k8s-device-plugin#quick-start
2018/02/28 12:14:51 Starting to serve on /var/lib/kubelet/device-plugins/nvidia.sock
2018/02/28 12:14:51 Could not register device plugin: rpc error: code = Unimplemented desc = unknown service deviceplugin.Registration
2018/02/28 12:14:51 Could not contact Kubelet, retrying. Did you enable the device plugin feature gate?
2018/02/28 12:14:51 You can check the prerequisites at: https://github.com/NVIDIA/k8s-device-plugin#prerequisites
2018/02/28 12:14:51 You can learn how to set the runtime at: https://github.com/NVIDIA/k8s-device-plugin#quick-start
2018/02/28 12:14:51 Starting to serve on /var/lib/kubelet/device-plugins/nvidia.sock

Here are some of my configs:
/etc/docker/daemon.json

root@ubuntu-10-53-66-17:~# cat /etc/docker/daemon.json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}

The command line to run kubelet

# ps aux | grep kubelet

root      71370  3.4  0.0 2621216 123892 pts/0  Sl+  20:20   0:06 /home/mi/go/src/k8s.io/kubernetes/_output/local/bin/linux/amd64/hyperkube kubelet --v=3 --vmodule= --chaos-chance=0.0 --container-runtime=docker --rkt-path= --rkt-stage1-image= --hostname-override=127.0.0.1 --cloud-provider= --cloud-config= --address=127.0.0.1 --kubeconfig /var/run/kubernetes/kubelet.kubeconfig --feature-gates=DevicePlugins=true --cpu-cfs-quota=true --enable-controller-attach-detach=true --cgroups-per-qos=true --cgroup-driver=cgroupfs --keep-terminated-pod-volumes=true --eviction-hard=memory.available<100Mi,nodefs.available<10%,nodefs.inodesFree<5% --eviction-soft= --eviction-pressure-transition-period=1m --pod-manifest-path=/var/run/kubernetes/static-pods --fail-swap-on=false --cluster-dns=10.0.0.10 --cluster-domain=cluster.local --port=10250

The output of nvidia-smi on the machine:

Wed Feb 28 20:26:18 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.81                 Driver Version: 384.81                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 00000000:05:00.0 Off |                    0 |
| N/A   34C    P8    26W / 149W |      0MiB / 11439MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla K80           Off  | 00000000:06:00.0 Off |                    0 |
| N/A   28C    P8    30W / 149W |      0MiB / 11439MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla K80           Off  | 00000000:84:00.0 Off |                    0 |
| N/A   40C    P8    27W / 149W |      0MiB / 11439MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla K80           Off  | 00000000:85:00.0 Off |                    0 |
| N/A   33C    P8    29W / 149W |      0MiB / 11439MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Using kubernetes to run tensorflow training much slower than `docker run`

I've installed nvidia-docker2 (nvidia-container-runtime) + kubernetes 1.9.7, and run tensorflow training on 8 1080 Ti NVIDIA GPUs. When I use kubectl to deploy a pod with nvidia.com/gpu=8, the logs show

Iteration 200 (0.833003 iter/s)

But when I run with the command: docker run --runtime=nvidia caffe-mpi:v0.2.22.test1, the performance is much better.

Iteration 200 (1.655003 iter/sr)

But when I add the "cgroup-parent" of the pod I created earlier to the docker run command, I find the performance is the same as the pod's.

docker run --runtime=nvidia --cgroup-parent=kubepods-besteffort-podf4e9758b_6fda_11e8_93ce_00163e008c08.slice caffe-mpi:v0.2.22.test1
Iteration 200 (0.851113 iter/s)

I suspect it's related to the cgroup settings of kubernetes. Do you have any suggestions? Thanks in advance.

device plugin runs on ALL nodes

I have 9 worker nodes in my cluster but only ONE of them has a GPU. However, the device plugin seems to be running on ALL nodes. On the nodes without a GPU you can see the device plugin failing to find NVML (it succeeds on the node with a GPU), so it seems to me that this plugin should only be running on the node that has a GPU.

Q: How can I make the device plugin only run on my GPU node? Labels? Taints? Something else?

allocatable stuck at zero

I have a kubernetes node on 1.10.2 with nvidia/k8s-device-plugin:1.10. Everything worked great initially, but now I can't schedule any pods with nvidia.com/gpu. Looking at the output of kubectl get node, I see:

status:
  addresses:
  - address: 134.79.129.97
    type: InternalIP
  - address: ocio-gpu01
    type: Hostname
  allocatable:
    cpu: "48"
    ephemeral-storage: "9391196145"
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 263412492Ki
    nvidia.com/gpu: "0"
    pods: "110"
  capacity:
    cpu: "48"
    ephemeral-storage: 10190100Ki
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 263514892Ki
    nvidia.com/gpu: "16"
    pods: "110"

I think I cannot schedule any pods because the allocatable count is zero. I have pods running on the box, but none that requested any GPUs.

Any pointers on how I can troubleshoot this?

thanks,

pods cannot share a GPU?

I'm using JupyterLab on Kubernetes and have a cluster of 8 CPU worker nodes and one CPU/GPU worker node. I have the device plugin set up, and when I log into JupyterLab, a user pod is created and the device plugin/scheduler run it on my GPU node. All is great, until a second user logs in: the second user's pod fails to start as the GPU has already been allocated to the first user.

Q: Is it correct that pods can't share a GPU device? If so, why not? It seems like there is a valid use case here of multiple users being able to do training tasks on a shared GPU, at least at different times.

OpenShift 3.9/Docker-CE, Could not register device plugin: context deadline exceeded

Following the blog post "How to use GPUs with Device Plugin in OpenShift 3.9 (Now Tech Preview!)" on blog.openshift.com

In my case, nvidia-device-plugin shows errors like below:

# oc logs -f nvidia-device-plugin-daemonset-nj9p8
2018/06/06 12:40:11 Loading NVML
2018/06/06 12:40:11 Fetching devices.
2018/06/06 12:40:11 Starting FS watcher.
2018/06/06 12:40:11 Starting OS watcher.
2018/06/06 12:40:11 Starting to serve on /var/lib/kubelet/device-plugins/nvidia.sock
2018/06/06 12:40:16 Could not register device plugin: context deadline exceeded
2018/06/06 12:40:16 Could not contact Kubelet, retrying. Did you enable the device plugin feature gate?
2018/06/06 12:40:16 You can check the prerequisites at: https://github.com/NVIDIA/k...
2018/06/06 12:40:16 You can learn how to set the runtime at: https://github.com/NVIDIA/k...
2018/06/06 12:40:16 Starting to serve on /var/lib/kubelet/device-plugins/nvidia.sock
...
  • The description of one of the device-plugin-daemonset pods is:
# oc describe pod nvidia-device-plugin-daemonset-2
Name:           nvidia-device-plugin-daemonset-2jqgk
Namespace:      nvidia
Node:           node02/192.168.5.102
Start Time:     Wed, 06 Jun 2018 22:59:32 +0900
Labels:         controller-revision-hash=4102904998
                name=nvidia-device-plugin-ds
                pod-template-generation=1
Annotations:    openshift.io/scc=nvidia-deviceplugin
Status:         Running
IP:             192.168.5.102
Controlled By:  DaemonSet/nvidia-device-plugin-daemonset
Containers:
  nvidia-device-plugin-ctr:
    Container ID:   docker://b92280bd124df9fd46fe08ab4bbda76e2458cf5572f5ffc651661580bcd9126d
    Image:          nvidia/k8s-device-plugin:1.9
    Image ID:       docker-pullable://nvidia/k8s-device-plugin@sha256:7ba244bce75da00edd907209fe4cf7ea8edd0def5d4de71939899534134aea31
    Port:           <none>
    State:          Running
      Started:      Wed, 06 Jun 2018 22:59:34 +0900
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/lib/kubelet/device-plugins from device-plugin (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from nvidia-deviceplugin-token-cv7p5 (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  device-plugin:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/device-plugins
    HostPathType:  
  nvidia-deviceplugin-token-cv7p5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nvidia-deviceplugin-token-cv7p5
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type    Reason                 Age   From             Message
  ----    ------                 ----  ----             -------
  Normal  SuccessfulMountVolume  1h    kubelet, node02  MountVolume.SetUp succeeded for volume "device-plugin"
  Normal  SuccessfulMountVolume  1h    kubelet, node02  MountVolume.SetUp succeeded for volume "nvidia-deviceplugin-token-cv7p5"
  Normal  Pulled                 1h    kubelet, node02  Container image "nvidia/k8s-device-plugin:1.9" already present on machine
  Normal  Created                1h    kubelet, node02  Created container
  Normal  Started                1h    kubelet, node02  Started container
  • And running
    "docker run -it -v /var/lib/kubelet/device-plugins:/var/lib/kubelet/device-plugins nvidia/k8s-device-plugin:1.9" shows the log messages just like above.

  • On each origin node, a docker run test shows the following (it's normal, right?):

# docker run --rm nvidia/cuda nvidia-smi --query-gpu=gpu_name --format=csv,noheader --id=0 | sed -e 's/ /-/g'
Tesla-P40
# docker run -it --rm docker.io/mirrorgoogleconta...
[Vector addition of 50000 elements]
Copy input data from the host memory to the CUDA device
CUDA kernel launch with 196 blocks of 256 threads
Copy output data from the CUDA device to the host memory
Test PASSED
Done

[Test Env.]

  • 1 Master with OpenShift v3.9(Origin)
  • 2 GPU nodes with Tesla-P40*2
  • Docker-CE, nvidia-docker2 on GPU nodes

[Master]

# oc version
oc v3.9.0+46ff3a0-18
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://MYDOMAIN.local:8443
openshift v3.9.0+46ff3a0-18
kubernetes v1.9.1+a0ce1bc657
# uname -r
3.10.0-862.3.2.el7.x86_64
# cat /etc/redhat-release 
CentOS Linux release 7.5.1804 (Core)

[GPU nodes]

# docker version
Client:
Version: 18.03.1-ce
API version: 1.37
Go version: go1.9.5
Git commit: 9ee9f40
Built: Thu Apr 26 07:20:16 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm

Server:
Engine:
Version: 18.03.1-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.9.5
Git commit: 9ee9f40
Built: Thu Apr 26 07:23:58 2018
OS/Arch: linux/amd64
Experimental: false
# uname -r
3.10.0-862.3.2.el7.x86_64
# cat /etc/redhat-release 
CentOS Linux release 7.5.1804 (Core)
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4b1a37d31cb9 openshift/node:v3.9.0 "/usr/local/bin/orig…" 22 minutes ago Up 21 minutes origin-node
efbedeeb88f0 fe3e6b0d95b5 "nvidia-device-plugin" About an hour ago Up About an hour k8s_nvidia-device-plugin-ctr_nvidia-device-plugin-daemonset-4sn5v_nvidia_bffb6d61-6986-11e8-8dd7-0cc47ad9bf7a_0
36aa988447b8 openshift/origin-pod:v3.9.0 "/usr/bin/pod" About an hour ago Up About an hour k8s_POD_nvidia-device-plugin-daemonset-4sn5v_nvidia_bffb6d61-6986-11e8-8dd7-0cc47ad9bf7a_0
6e6b598fa144 openshift/openvswitch:v3.9.0 "/usr/local/bin/ovs-…" 2 hours ago Up 2 hours openvswitch
# cat /etc/docker/daemon.json 
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}

Please help me with this problem. TIA!

Can't deploy NVIDIA device plugin on k8s 1.8.6 because could not load NVML library

version

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:34:11Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:23:29Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:23:29Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

$ kubelet --version
Kubernetes v1.8.6

NVIDIA-SMI 375.26

$ docker version
Client:
 Version:      17.03.2-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue Jun 27 02:31:19 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.2-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue Jun 27 02:31:19 2017
 OS/Arch:      linux/amd64
 Experimental: false

OS system is Debian 9.
GPU: Tesla K40m.
CUDA: Cuda compilation tools, release 8.0, V8.0.61

error

I installed Nvidia-docker according to the Debian instructions and NVIDIA/nvidia-docker#516 and can run docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi successfully. I set Nvidia as default-runtime and enabled the DevicePlugins feature gate on my 2-node k8s cluster equipped with Tesla K40m.

But when I run

kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.8/nvidia-device-plugin.yml

or

docker run --security-opt=no-new-privileges --cap-drop=ALL --network=none -it -v /var/lib/kubelet/device-plugins:/var/lib/kubelet/device-plugins nvidia/k8s-device-plugin:1.8

they gave the error:

2018/01/05 12:25:01 Loading NVML
2018/01/05 12:25:01 Failed to start nvml with error: could not load NVML library.

The output of ldconfig is

$ ldconfig -p | grep nvidia-ml
	libnvidia-ml.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
	libnvidia-ml.so.1 (libc6) => /usr/lib32/libnvidia-ml.so.1
	libnvidia-ml.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libnvidia-ml.so
	libnvidia-ml.so (libc6) => /usr/lib32/libnvidia-ml.so

I checked other issues like NVIDIA/nvidia-docker#74 and NVIDIA/nvidia-docker#470; they failed to run nvidia-docker, but I can.

Another strange thing is that there is no nvidia-device-plugin in my path and the output of locate nvidia-device-plugin is blank.

Could you please help me check what went wrong?
Thanks!

I deployed k8s 1.10 and the k8s-device-plugin but the GPU capacity is not found

I deployed k8s v1.10 and k8s-device-plugin v1.10, but when I run
$ kubectl describe node bjpg-g271.yz02
I cannot find the GPU capacity.

The CUDA version I deployed is:
$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Tue_Jan_10_13:22:03_CST_2017
Cuda compilation tools, release 8.0, V8.0.61

nvidia-docker is 17.03.2.
I can run a GPU container with docker run, but the GPU cannot be scheduled by k8s.

a small change in README

docker build -t nvidia/k8s-device-plugin:1.9 https://github.com/NVIDIA/k8s-device-plugin.git

should be:

docker build -t nvidia/k8s-device-plugin:1.9 https://github.com/NVIDIA/k8s-device-plugin.git#v1.9

Allocate() needs to return the mount path of libcuda

Hi, I'm trying to deploy TensorFlow (with GPU support) on kubernetes with this device plugin.
And an error occurred:

ImportError: libcuda.so.1: cannot open shared object file: No such file or directory
Failed to load the native TensorFlow runtime.

After debugging the source code, I think this is because the Allocate function doesn't return the mount path of libcuda.so.

@flx42 PTAL; I think I can send a PR to fix it later.
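
As a hedged aside for anyone debugging the same ImportError (a quick check, not part of the proposed fix): from inside the failing TensorFlow container you can confirm whether the driver libraries were injected at all:

$ ldconfig -p | grep libcuda

An empty result means libcuda.so.1 was not made available to the container, whether via a mount returned from Allocate() or via the nvidia container runtime hook.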

Manifest in upstream kubernetes?

The yaml manifest available upstream [1] a) is not the one suggested in the project's README [2] and b) is gke specific. As a result it is not clear to the kubernetes distributions (such as for example CDK [3]) what manifest should be shipped with each k8s release. We are doing our best, but any feedback from you on what the right path is, would be much appreciated.

[1] https://github.com/kubernetes/kubernetes/blob/release-1.10/cluster/addons/device-plugins/nvidia-gpu/daemonset.yaml
[2] https://github.com/NVIDIA/k8s-device-plugin/blob/v1.9/nvidia-device-plugin.yml
[3] https://www.ubuntu.com/kubernetes

0/3 nodes are available: 1 PodToleratesNodeTaints, 3 Insufficient nvidia.com/gpu.

I deployed the device-plugin container on k8s via the guide. But when I run the tensorflow-notebook (by executing kubectl create -f tensorflow-notebook.yml), the pod was still pending:

[root@mlssdi010001 k8s]# kubectl describe pod tf-notebook-747db6987b-86zts
Name: tf-notebook-747db6987b-86zts
....
Events:
Type Reason Age From Message


Warning FailedScheduling 47s (x15 over 3m) default-scheduler 0/3 nodes are available: 1 PodToleratesNodeTaints, 3 Insufficient nvidia.com/gpu.

Pod info:

[root@mlssdi010001 k8s]# kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default tf-notebook-747db6987b-86zts 0/1 Pending 0 5s
....
kube-system nvidia-device-plugin-daemonset-ljrwc 1/1 Running 0 34s 10.244.1.11 mlssdi010003
kube-system nvidia-device-plugin-daemonset-m7h2r 1/1 Running 0 34s 10.244.2.12 mlssdi010002

Nodes info:

NAME STATUS ROLES AGE VERSION
mlssdi010001 Ready master 1d v1.9.0
mlssdi010002 Ready 1d v1.9.0 (GPU Node,1 * Tesla M40)
mlssdi010003 Ready 1d v1.9.0 (GPU Node,1 * Tesla M40)

Can I disable the device plugin pod for an individual node?

I have a mixed cluster -- some nodes with GPUs, some without. The plugins start up nicely on the GPU nodes, but not so much on the nodes without GPUs (obviously).

The implementation uses a DaemonSet, so each node gets a pod ... but the pod on the non-GPU node is in CrashLoopBackOff -- I assume because of the lack of GPU. My question is whether I can set a flag/label/something to tell the pod running on the non-GPU node to just stop trying? I'd rather not just leave it there continually trying to restart ...
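
One hedged approach (a sketch, assuming you label your GPU nodes yourself, e.g. with kubectl label node <gpu-node> hardware-type=NVIDIAGPU): add a nodeSelector to the DaemonSet pod template so the plugin is only scheduled onto labelled GPU nodes. Excerpt of such a DaemonSet spec:

spec:
  template:
    spec:
      nodeSelector:
        hardware-type: NVIDIAGPU   # hypothetical label applied beforehand to GPU nodes
      containers:
      - name: nvidia-device-plugin-ctr
        image: nvidia/k8s-device-plugin:1.9

With that in place the pods on non-GPU nodes are simply never created, instead of crash-looping.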

Minikube doesn't recognize GPU

I installed NVIDIA Docker and am now trying to test it on my local minikube, without success.
I followed a few threads on the same topic, also without luck.

sudo minikube start --vm-driver=none --feature-gates=Accelerators=true 
kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.10/nvidia-device-plugin.yml 
kubectl get nodes -o=custom-columns=NAME:.metadata.name,GPUs:.status.capacity.'nvidia\.com/gpu' 

Getting:

NAME GPUs
minikube <none>

Failed to start nvml with error

I installed nvidia-docker2 and deployed the device plugin on Kubernetes 1.8, but when I run kubectl describe pods I get the error:

loading NVML
Failed to start nvml with error: could not load NVML library

failed to start container "nvidia-device-plugin-ctr"

Trying to install device plugin, but no luck

Error: failed to start container "nvidia-device-plugin-ctr": Error response from daemon: OCI runtime create failed: container_linux.go:296: starting container process caused "process_linux.go:398: container init caused \"process_linux.go:381: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig --device=all --utility --pid=31021 /var/lib/docker/aufs/mnt/a2f849e29fcb8dc87d51e90497d7e44a38d7ecf93acabc285523d13c1cdf9046]\\\\nnvidia-container-cli: ldcache error: process /sbin/ldconfig failed with error code: 127\\\\n\\\"\"": unknown
Back-off restarting failed container

Installed and configured the default runtime

# nvidia-docker version
NVIDIA Docker: 2.0.2
Client:
 Version:	17.12.0-ce
 API version:	1.35
 Go version:	go1.9.2
 Git commit:	c97c6d6
 Built:	Wed Dec 27 20:11:19 2017
 OS/Arch:	linux/amd64

Server:
 Engine:
  Version:	17.12.0-ce
  API version:	1.35 (minimum version 1.12)
  Go version:	go1.9.2
  Git commit:	c97c6d6
  Built:	Wed Dec 27 20:09:53 2017
  OS/Arch:	linux/amd64
  Experimental:	false
# docker info | grep -i runtime
Runtimes: nvidia runc
WARNING: No swap limit support
Default Runtime: nvidia

Configured kubernetes with feature gates

# ps -ef | grep kube | grep featu
root     23964 23945  3 15:03 ?        00:00:16 kube-apiserver --bind-address=0.0.0.0 --insecure-bind-address=127.0.0.1 --insecure-port=8080 --service-node-port-range=30000-32767 --storage-backend=etcd3 --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ValidatingAdmissionWebhook,ResourceQuota --allow-privileged=true --apiserver-count=1 \
--feature-gates=Initializers=False,PersistentLocalVolumes=False,DevicePlugins=True --runtime-config=admissionregistration.k8s.io/v1alpha1 --requestheader-extra-headers-prefix=X-Remote-Extra- --advertise-address=192.168.0.102 --service-account-key-file=/etc/kubernetes/ssl/sa.pub --enable-bootstrap-token-auth=true --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --requestheader-group-headers=X-Remote-Group --client-ca-file=/etc/kubernetes/ssl/ca.crt --tls-private-key-file=/etc/kubernetes/ssl/apiserver.key --kubelet-client-key=/etc/kubernetes/ssl/apiserver-kubelet-client.key --requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.crt --proxy-client-cert-file=/etc/kubernetes/ssl/front-proxy-client.crt --tls-cert-file=/etc/kubernetes/ssl/apiserver.crt --proxy-client-key-file=/etc/kubernetes/ssl/front-proxy-client.key --requestheader-username-headers=X-Remote-User --requestheader-allowed-names=front-proxy-client --service-cluster-ip-range=10.233.0.0/18 --kubelet-client-certificate=/etc/kubernetes/ssl/apiserver-kubelet-client.crt --secure-port=6443 --authorization-mode=Node,RBAC --etcd-servers=https://192.168.0.102:2379 --etcd-cafile=/etc/kubernetes/ssl/etcd/ca.pem --etcd-certfile=/etc/kubernetes/ssl/etcd/node-rig3.pem --etcd-keyfile=/etc/kubernetes/ssl/etcd/node-rig3-key.pem
root     24226 24208  1 15:03 ?        00:00:07 kube-controller-manager --feature-gates=Initializers=False,PersistentLocalVolumes=False,DevicePlugins=True \
--node-monitor-grace-period=40s --node-monitor-period=5s --pod-eviction-timeout=5m0s --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.crt --cluster-signing-key-file=/etc/kubernetes/ssl/ca.key --use-service-account-credentials=true --root-ca-file=/etc/kubernetes/ssl/ca.crt --service-account-private-key-file=/etc/kubernetes/ssl/sa.key --kubeconfig=/etc/kubernetes/controller-manager.conf --address=127.0.0.1 --leader-elect=true --controllers=*,bootstrapsigner,tokencleaner --allocate-node-cidrs=true --cluster-cidr=10.233.64.0/18 --node-cidr-mask-size=24
root     25315     1  2 15:04 ?        00:00:09 /usr/local/bin/kubelet --logtostderr=true --v=2 --address=0.0.0.0 --node-ip=192.168.0.102 --hostname-override=rig3 --allow-privileged=true --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/ssl/ca.crt --pod-manifest-path=/etc/kubernetes/manifests --cadvisor-port=0 --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0 --kube-reserved cpu=100m,memory=256M --node-status-update-frequency=10s --cgroup-driver=cgroupfs --docker-disable-shared-pid=True --anonymous-auth=false --read-only-port=0 --fail-swap-on=True --cluster-dns=10.233.0.3 --cluster-domain=umine.farm --resolv-conf=/etc/resolv.conf --kube-reserved cpu=200m,memory=512M \
--feature-gates=Initializers=False,PersistentLocalVolumes=False,DevicePlugins=True --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin

Latest version of kubernetes

# kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2+coreos.0", GitCommit:"b427929b2982726eeb64e985bc1ebb41aaa5e095", GitTreeState:"clean", BuildDate:"2018-01-18T22:56:14Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2+coreos.0", GitCommit:"b427929b2982726eeb64e985bc1ebb41aaa5e095", GitTreeState:"clean", BuildDate:"2018-01-18T22:56:14Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

Pod description:

# kubectl describe pod nvidia-device-plugin-daemonset-kzlbx -n kube-system
Name:           nvidia-device-plugin-daemonset-kzlbx
Namespace:      kube-system
Node:           rig1/192.168.0.103
Start Time:     Fri, 16 Feb 2018 15:06:31 +0200
Labels:         controller-revision-hash=54069593
                name=nvidia-device-plugin-ds
                pod-template-generation=1
Annotations:    scheduler.alpha.kubernetes.io/critical-pod=
Status:         Running
IP:             10.233.101.88
Controlled By:  DaemonSet/nvidia-device-plugin-daemonset
Containers:
  nvidia-device-plugin-ctr:
    Container ID:   docker://42676f92a1cce3489f87650433029ad27aa2bb24d9529a15689641410ed31d41
    Image:          nvidia/k8s-device-plugin:1.9
    Image ID:       docker-pullable://nvidia/k8s-device-plugin@sha256:ed1cb6269dd827bada9691a7ae59dab4f431a05a9fb8082f8c28bfa9fd90b6c4
    Port:           <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      OCI runtime create failed: container_linux.go:296: starting container process caused "process_linux.go:398: container init caused \"process_linux.go:381: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig --device=all --utility --pid=2193 /var/lib/docker/aufs/mnt/ac16d904f39b452545a1bebf06148a8802b1a4b088a183f4fe733cf2547ed32c]\\\\nnvidia-container-cli: ldcache error: process /sbin/ldconfig failed with error code: 127\\\\n\\\"\"": unknown
      Exit Code:    128
      Started:      Fri, 16 Feb 2018 15:17:32 +0200
      Finished:     Fri, 16 Feb 2018 15:17:32 +0200
    Ready:          False
    Restart Count:  7
    Environment:    <none>
    Mounts:
      /var/lib/kubelet/device-plugins from device-plugin (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-pm75k (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  device-plugin:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/kubelet/device-plugins
    HostPathType:
  default-token-pm75k:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-pm75k
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type     Reason                 Age                From           Message
  ----     ------                 ----               ----           -------
  Normal   SuccessfulMountVolume  13m                kubelet, rig1  MountVolume.SetUp succeeded for volume "device-plugin"
  Normal   SuccessfulMountVolume  13m                kubelet, rig1  MountVolume.SetUp succeeded for volume "default-token-pm75k"
  Warning  Failed                 13m                kubelet, rig1  Error: failed to start container "nvidia-device-plugin-ctr": Error response from daemon: OCI runtime create failed: container_linux.go:296: starting container process caused "process_linux.go:398: container init caused \"process_linux.go:381: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig --device=all --utility --pid=31021 /var/lib/docker/aufs/mnt/a2f849e29fcb8dc87d51e90497d7e44a38d7ecf93acabc285523d13c1cdf9046]\\\\nnvidia-container-cli: ldcache error: process /sbin/ldconfig failed with error code: 127\\\\n\\\"\"": unknown
  Warning  Failed                 13m                kubelet, rig1  Error: failed to start container "nvidia-device-plugin-ctr": Error response from daemon: OCI runtime create failed: container_linux.go:296: starting container process caused "process_linux.go:398: container init caused \"process_linux.go:381: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig --device=all --utility --pid=31068 /var/lib/docker/aufs/mnt/508159dc054cd38ef20a75373a230703de9cba817f44e69da02b82ceac08fb64]\\\\nnvidia-container-cli: ldcache error: process /sbin/ldconfig failed with error code: 127\\\\n\\\"\"": unknown
  Warning  Failed                 12m                kubelet, rig1  Error: failed to start container "nvidia-device-plugin-ctr": Error response from daemon: OCI runtime create failed: container_linux.go:296: starting container process caused "process_linux.go:398: container init caused \"process_linux.go:381: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig --device=all --utility --pid=31210 /var/lib/docker/aufs/mnt/9dab03a8dcf80c0de647bc46b985c0e66fed9cead529e20d499dfaf7d9dcc49c]\\\\nnvidia-container-cli: ldcache error: process /sbin/ldconfig failed with error code: 127\\\\n\\\"\"": unknown
  Warning  Failed                 12m                kubelet, rig1  Error: failed to start container "nvidia-device-plugin-ctr": Error response from daemon: OCI runtime create failed: container_linux.go:296: starting container process caused "process_linux.go:398: container init caused \"process_linux.go:381: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig --device=all --utility --pid=31378 /var/lib/docker/aufs/mnt/2dbe1488b7df983513be06da0e3d439e0dda69c169ac4cbe4e5c7204a892c448]\\\\nnvidia-container-cli: ldcache error: process /sbin/ldconfig failed with error code: 127\\\\n\\\"\"": unknown
  Normal   Created                11m (x5 over 13m)  kubelet, rig1  Created container
  Normal   Pulled                 11m (x5 over 13m)  kubelet, rig1  Container image "nvidia/k8s-device-plugin:1.9" already present on machine
  Warning  Failed                 11m                kubelet, rig1  Error: failed to start container "nvidia-device-plugin-ctr": Error response from daemon: OCI runtime create failed: container_linux.go:296: starting container process caused "process_linux.go:398: container init caused \"process_linux.go:381: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig --device=all --utility --pid=31670 /var/lib/docker/aufs/mnt/148216fd0c884ee7e2a6978c4035b7cc7651ad715b086b1e9aba14f0a24a733e]\\\\nnvidia-container-cli: ldcache error: process /sbin/ldconfig failed with error code: 127\\\\n\\\"\"": unknown
  Warning  BackOff                2m (x42 over 12m)  kubelet, rig1  Back-off restarting failed containe
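
For reference (a hedged observation, not a confirmed diagnosis for this report): the recurring pattern ldcache error: process /sbin/ldconfig failed with error code: 127 usually points at the ldconfig path configured for the NVIDIA container runtime not matching the host. On Ubuntu hosts, for example, the real binary is /sbin/ldconfig.real, and /etc/nvidia-container-runtime/config.toml typically needs:

ldconfig = "@/sbin/ldconfig.real"

(the @ prefix means the path is resolved on the host). After changing it, restart docker and let the DaemonSet pod restart.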

Using in clusters which contains both GPU nodes and non-GPU nodes

When using the DaemonSet in this kind of cluster, non-GPU nodes will complain:

Error: failed to start container "nvidia-device-plugin-ctr": Error response from daemon: oci runtime error: container_linux.go:265: starting container process caused "process_linux.go:368: container init caused \"process_linux.go:351: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig --device=ALL --utility --compute --pid=16424 /var/lib/docker/overlay/a86473af4c52afb44dfdfdcc817edb45316d520cccfb086d87cc227314d09015/merged]\\\\nnvidia-container-cli: initialization error: load library failed: libcuda.so.1: cannot open shared object file: no such file or directory\\\\n\\\"\""

It's straightforward to use taints (which could be documented), but how about also handling this in the plugin (i.e., better error handling)?
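
One hedged workaround until the plugin handles this more gracefully (the label key below is just an example, not an official one): label only the GPU nodes and restrict the DaemonSet to them, e.g.

$ kubectl label node <gpu-node> nvidia.com/gpu.present=true
$ kubectl -n kube-system patch daemonset nvidia-device-plugin-daemonset -p '{"spec":{"template":{"spec":{"nodeSelector":{"nvidia.com/gpu.present":"true"}}}}}'

With that in place, the plugin pod is never created on non-GPU nodes and the libcuda.so.1 error disappears there.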

Enhance Nvidia Device plugin with more health checking features

Quoting what @RenaudWasTaken mentioned in another thread:
"The Nvidia Device plugin has a lot of such features coming up a few of these are:

memory scrubbing
healthCheck and reset in case of bad state
GPU Allocated memory checks
"Zombie processes" checks
...
"

Creating this issue to track the progress on these improvements.

@RenaudWasTaken could you also provide more details on some of these features, like what GPU Allocated memory checks and "Zombie processes" checks do?

failed create pod sandbox

I'm trying to install the NVIDIA plugin on kubeadm 1.9.
I already installed the NVIDIA driver, CUDA toolkit, and nvidia-docker.
But when I create the k8s-device-plugin on the master node, the pod is stuck in the ContainerCreating state.
When I use kubectl describe pod, it shows the error failed create pod sandbox.

k8s-device-plugin Failed to initialize NVML: could not load NVML library

Checking issue #19 did not help me out.

versions:

docker version
Client:
 Version:      1.13.1
 API version:  1.26
 Go version:   go1.6.2
 Git commit:   092cba3
 Built:        Thu Nov  2 20:40:23 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.13.1
 API version:  1.26 (minimum version 1.12)
 Go version:   go1.6.2
 Git commit:   092cba3
 Built:        Thu Nov  2 20:40:23 2017
 OS/Arch:      linux/amd64
 Experimental: false


kubectl version
GitVersion:"v1.10.2"


kubeadm version
GitVersion:"v1.10.2

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Tue_Jan_10_13:22:03_CST_2017
Cuda compilation tools, release 8.0, V8.0.61


ldconfig -p | grep nvidia-ml
	libnvidia-ml.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.1
	libnvidia-ml.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libnvidia-ml.so


nvidia-smi
Wed May  2 21:18:39 2018       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.124                Driver Version: 367.124                   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GRID K1             Off  | 0000:0B:00.0     Off |                  N/A |
| N/A   29C    P8     8W /  31W |      0MiB /  4036MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

errors:

docker run --security-opt=no-new-privileges --cap-drop=ALL --network=none -it -v /var/lib/kubelet/device-plugins:/var/lib/kubelet/device-plugins nvidia/k8s-device-plugin:1.10
2018/05/02 21:18:02 Loading NVML
2018/05/02 21:18:02 Failed to initialize NVML: could not load NVML library.
2018/05/02 21:18:02 If this is a GPU node, did you set the docker default runtime to `nvidia`?
2018/05/02 21:18:02 You can check the prerequisites at: https://github.com/NVIDIA/k8s-device-plugin#prerequisites
2018/05/02 21:18:02 You can learn how to set the runtime at: https://github.com/NVIDIA/k8s-device-plugin#quick-start

@RenaudWasTaken mentioned in #19 that old GPUs might hit this issue;
is that the case here? Please help take a look, thanks a lot.

k8s-device-plugin v1.9 deployment CrashLoopBackOff

I tried to deploy device-plugin v1.9 on k8s.

I have a problem similar to "nvidia-device-plugin container CrashLoopBackOff error" with v1.8,

and the container hits a CrashLoopBackOff error:

NAME                                   READY     STATUS             RESTARTS   AGE
nvidia-device-plugin-daemonset-2h9rh   0/1       CrashLoopBackOff   11          33m

Problem when using docker run locally

docker build -t nvidia/k8s-device-plugin:1.9 .

Successfully built d12ed13b386a
Successfully tagged nvidia/k8s-device-plugin:1.9
14:25:40 Loading NVML
14:25:40 Failed to start nvml with error: could not load NVML library.

Environment :

$ cat /etc/ld.so.conf.d/x86_64-linux-gnu_GL.conf 
/usr/lib/nvidia-384
/usr/lib32/nvidia-384
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 384.90                 Driver Version: 384.90                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 106...  Off  | 00000000:03:00.0 Off |                  N/A |
| 38%   29C    P8     6W / 120W |      0MiB /  6069MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+


And I ran docker run --runtime=nvidia --security-opt=no-new-privileges --cap-drop=ALL --network=none -it -v /var/lib/kubelet/device-plugins:/var/lib/kubelet/device-plugins nvidia/k8s-device-plugin:1.9

which shows this error:

2017/12/27 14:38:22 Loading NVML
2017/12/27 14:38:22 Fetching devices.
2017/12/27 14:38:22 Starting FS watcher.
2017/12/27 14:38:22 Starting OS watcher.
2017/12/27 14:38:22 Starting to serve on /var/lib/kubelet/device-plugins/nvidia.sock
2017/12/27 14:38:27 Could not register device plugin: context deadline exceeded
2017/12/27 14:38:27 Could not contact Kubelet, retrying. Did you enable the device plugin feature gate?
2017/12/27 14:38:27 Starting to serve on /var/lib/kubelet/device-plugins/nvidia.sock
2017/12/27 14:38:32 Could not register device plugin: context deadline exceeded
2017/12/27 14:38:32 Could not contact Kubelet, retrying. Did you enable the device plugin feature gate?
2017/12/27 14:38:32 Starting to serve on /var/lib/kubelet/device-plugins/nvidia.sock
2017/12/27 14:38:37 Could not register device plugin: context deadline exceeded
.
.
.

Could not register device plugin: context deadline exceeded

I am getting the following error when starting the plugin as a docker container:

2017/11/24 09:06:24 Loading NVML
2017/11/24 09:06:24 Starting to serve on /var/lib/kubelet/device-plugins/nvidia.sock
2017/11/24 09:06:29 Could not register device plugin: context deadline exceeded

My installation of nvidia-docker works fine.

What is the problem?
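
For reference, a hedged checklist for this registration timeout: the plugin's own log hints at the usual cause, namely that the kubelet is not accepting device plugin registrations. On Kubernetes 1.8/1.9 the DevicePlugins feature gate is off by default and must be enabled on the kubelet; with a kubeadm setup this is typically done via the kubelet drop-in, e.g.

# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, inside the [Service] section
Environment="KUBELET_EXTRA_ARGS=--feature-gates=DevicePlugins=true"

$ sudo systemctl daemon-reload && sudo systemctl restart kubelet

and then re-run the plugin container.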

Allocating same GPU to multiple requests

Are you open to a PR that allocates the same GPU to multiple requests based on additional requirements passed to the process?

I'm thinking I could ask for nvidia/gpu:1 and get 1 whole GPU, or I could ask for nvidia/gpu-memory:1Gi and nvidia/gpu-cpu:2 and get "allocated" 1Gi of memory and 2 cores on 1 GPU, leaving whatever is left for other nvidia/gpu-memory and nvidia/gpu-cpu requests.

It wouldn't be enforced, but this way we can at least context switch between multiple processes on 1 GPU, which is something the main kubernetes project doesn't seem to want to support until at least v1.11 (kubernetes/kubernetes#52757)

container CrashLoopBackOff error

Hi, everyone.
I've got the CrashLoopBackOff error too:

NAME READY STATUS RESTARTS AGE
nvidia-device-plugin-daemonset-csjxw 0/1 CrashLoopBackOff 12 39m

However, when I ran the container directly on the node with nvidia:
docker run --runtime=nvidia -it -v /var/lib/kubelet/device-plugins:/var/lib/kubelet/device-plugins nvidia/k8s-device-plugin:1.8
it shows that the device plugin registered successfully, doesn't it?

2017/12/08 06:41:44 Loading NVML
2017/12/08 06:41:45 Starting to serve on /var/lib/kubelet/device-plugins/nvidia.sock
2017/12/08 06:41:45 Registered device plugin with Kubelet

And the kubelet has started with --feature-gates=DevicePlugins=true.

By the way, my NVIDIA GPU is a GeForce GTX 1070.
Why does this error come up?

Does the nvidia-device-plugin need to be running on all worker nodes?

I can deploy the device-plugin on my GPU nodes successfully. After running the kubectl create command, every worker node runs one nvidia-device-plugin pod. I know this is because a DaemonSet is used to deploy the plugin, but what confuses me is: do we also need to deploy the plugin on the non-GPU nodes?

Suggestion: add a node affinity of type requiredDuringSchedulingIgnoredDuringExecution, or a nodeSelector.
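
A hedged sketch of that suggestion (the label is just an example; GPU nodes would have to be labelled by the cluster admin): the DaemonSet pod template can use required node affinity so the plugin is only scheduled onto labelled GPU nodes:

spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: accelerator   # example label set on GPU nodes
                operator: Exists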

Can we use k8s-device-plugin and Nvidia-docker2 in minikube?

1. Question
This is not an issue, only a question.
Can we use k8s-device-plugin and nvidia-docker2 in minikube?

2.Enviroment
・OS:Ubuntu16.04
・minikube: v0.24.1
・ kubectl : v1.10.0
・ Nvidia driver :384.111
・ Docker :Client:
 Version: 18.03.0-ce
 API version: 1.37
 Go version: go1.9.4
  Git commit: 0520e24
 Built: Wed Mar 21 23:10:01 2018
 OS/Arch: linux/amd64
 Experimental: false
 Orchestrator: swarm

 Server:
 Engine:
 Version: 18.03.0-ce
 API version: 1.37 (minimum version 1.12)
 Go version: go1.9.4
 Git commit: 0520e24
 Built: Wed Mar 21 23:08:31 2018
 OS/Arch: linux/amd64
 Experimental: false

・Results of kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4", GitTreeState:"clean", BuildDate:"2017-11-29T22:43:34Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}

・GPU:Maxwell Geforce TITUN-X

・/etc/docker/daemon.json
{
  "dns": ["150.16.X.X", "150.16.X.X"],
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}

3.Problem
(1) I installed minikube and kubectl to test nvidia-docker2.

(2) I started minikube as below:
sudo CHANGE_MINIKUBE_NONE_USER=true minikube start --vm-driver=none --featuregates=Accelerators=true

★Hyper visor=on(Ubuntu PC BIOS)

(3) I did as below:
 $ kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.8/nvidia-device-plugin.yml

(4) Next I did as below:
$ kubectl create -f test.yml

(5) test.yml file:
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
  - name: cuda-vector-add
    # https://github.com/kubernetes/kubernetes/blob/v1.7.11/test/images/nvidi$
    image: "k8s.gcr.io/cuda-vector-add:v0.1"
    resources:
      limits:
        nvidia.com/gpu: 1 # requesting 1 GPU

(6) Results:
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
cuda-vector-add 0/1 Pending 0 11s
nvidia-device-plugin-daemonset-mq4pm 0/1 CrashLoopBackOff 4 2m

★Pod error and nvidia-device-plugin-daemonset error

(7) My opinion
I faced this error (the pod and the DaemonSet were not Running); I think the nvidia-device-plugin was disabled.
But I don't know how to enable the nvidia-device-plugin.
Perhaps I must set --feature-gates=DevicePlugins=true,
but with minikube it looks like the kubelet is not configured directly.

★Could you give any advice on using nvidia-docker2 in minikube?
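
A hedged suggestion (assuming minikube with the none driver passes feature gates through to the kubelet): the Accelerators gate used above belongs to the older alpha GPU support; the device plugin path needs DevicePlugins=true instead, e.g.

sudo CHANGE_MINIKUBE_NONE_USER=true minikube start --vm-driver=none --feature-gates=DevicePlugins=true

then re-create the plugin DaemonSet and check the node for a nvidia.com/gpu capacity.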

Issues when requesting for more than 1 GPU

Hi there,

My Kubernetes cluster is as such

Master (no GPU)
Node 1 (GPU)
Node 2 (GPU)
Node 3 (GPU)
Node 4 (GPU)

Nodes 1 - 4 have Nvidia drivers (384) and nvidia docker 2 installed.

First issue:
When I run the command
kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.9/nvidia-device-plugin.yml

the nvidia plugin also runs on the master node, which has no NVIDIA drivers or nvidia-docker installed. Is this behaviour correct?

Second issue:
I can only run 1 GPU on my cluster at a time. For example, if I run the tensorflow notebook with 1 GPU, it works. But if I deploy another pod using another 1 GPU, the pod status gets stuck on Pending, stating that there are insufficient GPU resources.

How do I solve this? Thanks.

can't schedule GPU pod

I'm running K8s 1.10.2-0 on RHEL7.4 with docker 18.03.1
I have a 9 worker node K8s cluster. Only one of those nodes has a GPU on it (NVIDIA TitanXp).
I installed nvidia-docker2 on ALL worker nodes:
nvidia-docker2.noarch 2.0.3-1.docker18.03.1.ce
I installed nvidia-container-runtime on ALL worker nodes:
nvidia-container-runtime.x86_64 2.0.0-1.docker18.03.1
I installed nvidia-device-plugin.yml v1.10 via kubectl (the device plugin is running OK on all worker nodes)

I can ssh into my GPU worker node and run nvidia-smi inside a container OK:

[whacuser@gpu ~]$ sudo docker run --rm nvidia/cuda nvidia-smi
Unable to find image 'nvidia/cuda:latest' locally
latest: Pulling from nvidia/cuda
297061f60c36: Pull complete
e9ccef17b516: Pull complete
dbc33716854d: Pull complete
8fe36b178d25: Pull complete
686596545a94: Pull complete
f611dfbee954: Pull complete
c51814f3e9ba: Pull complete
5da0fc07e73a: Pull complete
97462b1887aa: Pull complete
924ea239f6fe: Pull complete
Digest: sha256:69f3780f80a72cb7cebc7f401a716370f79412c5aa9362306005ca4eb84d0f3c
Status: Downloaded newer image for nvidia/cuda:latest
Mon May 14 20:14:16 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.25                 Driver Version: 390.25                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  TITAN X (Pascal)    Off  | 00000000:13:00.0 Off |                  N/A |
| 23%   21C    P8     8W / 250W |      0MiB / 12196MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

I label my GPU worker node like so:
kubectl label nodes gpu accelerator=nvidia-titan-xp --overwrite=true

However, when I try to run a pod:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
    - name: cuda-container
      image: nvidia/cuda:9.0-devel
      resources:
        limits:
          nvidia.com/gpu: 1 # requesting 1 GPU
    - name: digits-container
      image: nvidia/digits:6.0
      resources:
        limits:
          nvidia.com/gpu: 1 # requesting 1 GPU
  nodeSelector:
    accelerator: nvidia-titan-xp

I get an error:

0/12 nodes are available: 11 MatchNodeSelector, 12 Insufficient nvidia.com/gpu, 3 PodToleratesNodeTaints.

Any ideas?

"nvidia-container-cli: initialization error: cuda error: unknown error" on CPU node

On a k8s CPU node, we set nvidia as the default runtime:

{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia",
    "registry-mirrors": ["https://registry.docker-cn.com"]
}

When we start the nvidia/k8s-device-plugin:1.10 pod,
the error is:

kubelet, 00-25-90-c0-f7-c8  Error: failed to start container "nvidia-device-plugin-ctr": Error response from daemon: oci runtime error: container_linux.go:265: starting container process caused "process_linux.go:368: container init caused \"process_linux.go:351: running prestart hook 0 caused \\\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig --device=all --utility --pid=3567 /var/lib/docker/overlay2/c4498cb4052e704adff6d4ce5d4a8190afb89764a7bc8645d97c6b0520ba3a81/merged]\\\\nnvidia-container-cli: initialization error: cuda error: unknown error\\\\n\\\"\""
  Warning  BackOff                1s (x3 over 5s)    kubelet, 00-25-90-c0-f7-c8  Back-off restarting failed container

What we expected:
nvidia/k8s-device-plugin:1.10 should run on a non-GPU node with the nvidia docker runtime.

What does k8s do behind the device plugin?

I read another device plugin example, https://github.com/vikaschoudhary16/sfc-device-plugin.
In that device plugin, during the allocate phase, it only responds with the host path and container path of the device. So does that mean k8s will mount the device into the container?
In the NVIDIA device plugin, it sets the env NVIDIA_VISIBLE_DEVICES, and then nvidia-container-cli uses that env to mount the devices into the container.

Cannot restart docker after configuring /etc/docker/daemon.json

Hi everyone.
I ran into some trouble today installing this plugin.
Here is my environment
AWS Ubuntu Server 16.04
docker 18.03.1-ce
NVIDIA Docker: 2.0.3
CUDA Version 9.1.85

I have already installed nvidia-docker2. Then I used the following command to test it, and it was successful:
docker run --runtime=nvidia -it -p 8888:8888 tensorflow/tensorflow:latest-gpu

Then I followed the guide to install this plugin. I tried to configure /etc/docker/daemon.json and
ran the following commands:
sudo systemctl daemon-reload && sudo systemctl restart docker

And my configuration in daemon.json is here
{ "default-runtime": "nvidia", "runtimes": { "nvidia": { "path": "/usr/bin/nvidia-container-runtime", "runtimeArgs": [] } } }

But this step failed and I got the following output:
job for docker.service failed because the control process exited with error code

Who can help me?
Thank you!
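
A hedged debugging step (standard systemd tooling, nothing specific to this plugin): the actual error behind "job for docker.service failed" is usually visible in the journal, e.g.

$ sudo journalctl -u docker.service --no-pager | tail -n 50

A common cause is invalid JSON in /etc/docker/daemon.json (a missing or stray trailing comma), so validating the file before restarting docker is worthwhile.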

What's the difference between v1.8 and v1.9?

I mean:
does k8s-device-plugin:v1.8 only work with kubernetes v1.8.x,
and k8s-device-plugin:v1.9 with kubernetes v1.9.x?

Could we use k8s-device-plugin:v1.9 with kubernetes v1.8.x?

Can k8s-device-plugin support NUMA-aware allocation?

We operate a GPU cluster where every server has 4 GPUs; suppose their IDs are 0, 1, 2, 3. One job has taken ID 0. If the next job needs 2 GPUs, can the plugin give 2 and 3 to the kubelet (currently it gives 1 and 2)? If it did, jobs on the same PCIe switch could communicate faster than jobs split across different PCIe slots.
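
As a hedged aside (a standard driver tool, not a plugin feature): the PCIe/NUMA layout the report refers to can be inspected on the node with

$ nvidia-smi topo -m

which prints the GPU-to-GPU connection matrix and makes it easy to see which device IDs share a PCIe switch or NUMA node.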

Always pending

I have installed k8s v1.9 and done everything in the README file.
The GPU job shows Pending. When I restart the kubelet, the feature gates show nothing:
I0302 17:35:17.372045 16680 feature_gate.go:220] feature gates: &{{} map[]}
but I have added Environment="KUBELET_EXTRA_ARGS=--feature-gates=DevicePlugins=true" in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
Is this normal? How should I debug this problem?
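
A hedged way to verify whether the flag actually reached the kubelet (assuming a systemd/kubeadm setup like the one described):

$ sudo systemctl daemon-reload && sudo systemctl restart kubelet
$ ps -ef | grep kubelet | grep -o 'feature-gates=[^ ]*'

If the second command prints nothing, the drop-in is not being picked up (wrong file, wrong section, or KUBELET_EXTRA_ARGS overridden elsewhere).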

When I run kubectl describe pod gpu-pod,
it shows:
root@a-Z170-HD3P:/home/a/fyk/k8s-device-plugin# kubectl describe pod gpu-pod
Name: gpu-pod
Namespace: default
Node:
Labels:
Annotations:
Status: Pending
IP:
Containers:
cuda-container:
Image: nvidia/cuda:9.0-devel
Port:
Limits:
nvidia.com/gpu: 1
Requests:
nvidia.com/gpu: 1
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zxxzs (ro)
digits-container:
Image: nvidia/digits:6.0
Port:
Limits:
nvidia.com/gpu: 1
Requests:
nvidia.com/gpu: 1
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zxxzs (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
default-token-zxxzs:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zxxzs
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message


Warning FailedScheduling 2m (x124 over 37m) default-scheduler 0/1 nodes are available: 1 Insufficient nvidia.com/gpu.

I tried kubectl describe node; the result shows nothing about GPUs:
Capacity:
cpu: 8
memory: 16387484Ki
pods: 110

I think the device plugin does not work at all.

Error running GPU pod: "Insufficient nvidia.com/gpu"

I am unable to get GPU device support through k8s.
I am running 2 p2.xlarge nodes on AWS with a manual installation of K8s.
nvidia-docker2 is installed and set as the default runtime. I tested this by running the following and getting the expected output:
docker run --rm nvidia/cuda nvidia-smi

I followed all the steps in the readme of this repo, and cannot seem to get the containers to have GPU access. Running the nvidia-device-plugin.yml seems to be up and working, but running a pod gives this error when trying to launch the digits job:

$ kubectl get pod gpu-pod --template '{{.status.conditions}}' [map[type:PodScheduled lastProbeTime:<nil> lastTransitionTime:2018-02-26T21:58:32Z message:0/2 nodes are available: 1 PodToleratesNodeTaints, 2 Insufficient nvidia.com/gpu. reason:Unschedulable status:False]]

I thought that it might be that I was requiring too many resources (2 per node), but even lowering the requirements in the yml still yielded the same result. Any ideas where things could be going wrong?

nvidia-device-plugin container CrashLoopBackOff error

I deployed the device-plugin container on k8s via the guide.
However, I got a container CrashLoopBackOff error:

NAME                                   READY     STATUS             RESTARTS   AGE
nvidia-device-plugin-daemonset-zb8xn   0/1       CrashLoopBackOff   6          9m

And when I run

docker run -it -v /var/lib/kubelet/device-plugins:/var/lib/kubelet/device-plugins nvidia/k8s-device-plugin:1.8

I got an error like this:

2017/11/29 01:54:30 Loading NVML
2017/11/29 01:54:30 could not load NVML library

But I am pretty sure that I have installed the NVML library.
So did I miss anything here?
How can I check whether the NVML library is installed?
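
For reference, a hedged answer to the last question: NVML ships with the NVIDIA driver as libnvidia-ml.so, so on the host you can check the linker cache with

$ ldconfig -p | grep libnvidia-ml

Also note that the docker run above was started without --runtime=nvidia; unless nvidia is configured as the default runtime, the library will not be visible inside the plugin container even when it is present on the host.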

0/1 nodes are available: 1 Insufficient nvidia.com/gpu

Deploying any PODS with the nvidia.com/gpu resource limits results in "0/1 nodes are available: 1 Insufficient nvidia.com/gpu."

I also see this error in the Daemonset POD logs:
2018/02/27 16:43:50 Warning: GPU with UUID GPU-edae6d5d-6698-fb8d-2c6b-2a791224f089 is too old to support healtchecking with error: %!s(MISSING). Marking it unhealthy

I am running nvidia-docker2 and have deployed the NVIDIA device plugin as a DaemonSet.

On the worker node:
uname -a
Linux gpu 4.4.0-116-generic #140-Ubuntu SMP Mon Feb 12 21:23:04 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

docker run --rm nvidia/cuda nvidia-smi
Wed Feb 28 18:07:07 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 390.30 Driver Version: 390.30 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 760 Off | 00000000:0B:00.0 N/A | N/A |
| 34% 43C P8 N/A / N/A | 0MiB / 1999MiB | N/A Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 760 Off | 00000000:90:00.0 N/A | N/A |
| 34% 42C P8 N/A / N/A | 0MiB / 1999MiB | N/A Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 Not Supported |
| 1 Not Supported |
+-----------------------------------------------------------------------------+
