
mountpoint-s3-csi-driver's Introduction

Mountpoint for Amazon S3 CSI Driver

Overview

The Mountpoint for Amazon S3 Container Storage Interface (CSI) Driver allows your Kubernetes applications to access Amazon S3 objects through a file system interface. Built on Mountpoint for Amazon S3, the Mountpoint CSI driver presents an Amazon S3 bucket as a storage volume accessible by containers in your Kubernetes cluster. The Mountpoint CSI driver implements the CSI specification for container orchestrators (CO) to manage storage volumes.

For Amazon EKS clusters, the Mountpoint for Amazon S3 CSI driver is also available as an EKS add-on to provide automatic installation and management.

Features

  • Static Provisioning - Associate an existing S3 bucket with a PersistentVolume (PV) for consumption within Kubernetes.
  • Mount Options - Mount options can be specified in the PersistentVolume (PV) resource to define how the volume should be mounted; an example PV follows below. For Mountpoint-specific options, take a look at the Mountpoint configuration documentation.

Mountpoint for Amazon S3 does not implement all the features of a POSIX file system, and there are some differences that may affect compatibility with your application. See Mountpoint file system behavior for a detailed description of Mountpoint's behavior and POSIX support and how they could affect your application.
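For illustration, here is a minimal static-provisioning PersistentVolume in the style of the examples later in this document; the bucket name and region are placeholders:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv
spec:
  capacity:
    storage: 1200Gi # ignored by the driver, but required by Kubernetes
  accessModes:
    - ReadWriteMany # supported options: ReadWriteMany / ReadOnlyMany
  mountOptions:
    - allow-delete       # Mountpoint flag: allow object deletion through the mount
    - region us-east-1   # Mountpoint flag: bucket region
  csi:
    driver: s3.csi.aws.com # required
    volumeHandle: s3-csi-driver-volume # must be unique per volume
    volumeAttributes:
      bucketName: amzn-s3-demo-bucket # placeholder bucket name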

Container Images

Driver Version ECR Public Image
v1.7.0 public.ecr.aws/mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver:v1.7.0
Previous Images
Driver Version ECR Public Image
v1.6.0 public.ecr.aws/mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver:v1.6.0
v1.5.1 public.ecr.aws/mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver:v1.5.1
v1.4.0 public.ecr.aws/mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver:v1.4.0
v1.3.1 public.ecr.aws/mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver:v1.3.1
v1.3.0 public.ecr.aws/mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver:v1.3.0
v1.2.0 public.ecr.aws/mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver:v1.2.0
v1.1.0 public.ecr.aws/mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver:v1.1.0
v1.0.0 public.ecr.aws/mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver:v1.0.0

Releases

The Mountpoint for S3 CSI Driver follows semantic versioning. The version will be bumped following the rules below:

  • Significant breaking changes will be released as a MAJOR update.
  • New features will be released as a MINOR update.
  • Bug or vulnerability fixes will be released as a PATCH update.

Monthly releases will contain at minimum a MINOR version bump, even if the content would normally be treated as a PATCH version.

Support

Support will be provided for the latest version and one prior version. Bugs or vulnerabilities found in the latest version will be backported to the previous release in a new minor version.

This policy is non-binding and subject to change.

Compatibility

The Mountpoint for S3 CSI Driver is compatible with Kubernetes versions v1.23+ and implements the CSI Specification v1.8.0. The driver supports x86-64 and arm64 architectures.

Distros Support Matrix

The following table provides the support status for various distros with regards to CSI Driver version:

Distro Experimental Stable Deprecated Removed
Amazon Linux 2 - 1.0.0 - -
Amazon Linux 2023 - 1.0.0 - -
Ubuntu 20.04 - 1.0.0 - -
Ubuntu 22.04 - 1.0.0 - -
Bottlerocket >1.19.2 - 1.4.0 - -

Documentation

Contributing

We welcome contributions to the Mountpoint for Amazon S3 CSI driver! Please see CONTRIBUTING.md for more information on how to report bugs or submit pull requests.

Security

If you discover a potential security issue in this project we ask that you notify AWS Security via our vulnerability reporting page. Please do not create a public GitHub issue.

Code of conduct

This project has adopted the Amazon Open Source Code of Conduct. See CODE_OF_CONDUCT.md for more details.

mountpoint-s3-csi-driver's People

Contributors

amazon-auto, dannycjones, dependabot[bot], discanto, dlakhaws, glaucius, jamesbornholt, jjkr, laghoule, mmoscher, monthonk, muddyfish, netikras, passaro, truestory1, unexge, vladem, vramahandry


mountpoint-s3-csi-driver's Issues

PVC error "storageClassName does not match". Takes the default gp3

/kind bug

What happened?

Following the static provisioning guide, when the PVC is created it automatically picks up the default StorageClass gp3.

kubectl get sc gp3
NAME            PROVISIONER       RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp3 (default)   ebs.csi.aws.com   Delete          WaitForFirstConsumer   true                   153d

And I get this error when I describe the pvc

kubectl describe pvc -n platform text-generation-inference-s3 | grep -A 5 ^Events
Events:
  Type     Reason          Age                   From                         Message
  ----     ------          ----                  ----                         -------
  Warning  VolumeMismatch  2m31s (x62 over 17m)  persistentvolume-controller  Cannot bind to requested volume "text-generation-inference-s3": storageClassName does not match

What you expected to happen?

The PVCs should become Bound, but they remain Pending.

kubectl get pvc -n platform |grep text
text-embeddings-inference-s3      Pending   text-embeddings-inference-s3               0                         gp3            20m
text-generation-inference-s3      Pending   text-generation-inference-s3               0                         gp3            20m

The PVs are ready

kubectl get pv -n platform|grep text
text-embeddings-inference-s3               10Gi       ROX            Retain           Available                                                                                                                                                      21m
text-generation-inference-s3               10Gi       ROX            Retain           Available                

How to reproduce it (as minimally and precisely as possible)?

Here are the manifests

---
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    meta.helm.sh/release-name: text-inference
    meta.helm.sh/release-namespace: platform
  labels:
    app.kubernetes.io/instance: text-inference
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: text-generation-inference
    helm.sh/chart: helm-universal-0.0.0
  name: text-generation-inference-s3
spec:
  accessModes:
  - ReadOnlyMany
  capacity:
    storage: 10Gi
  csi:
    driver: s3.csi.aws.com
    volumeAttributes:
      bucketName: <REDACTED>
    volumeHandle: s3-csi-driver-volume
  mountOptions:
  - allow-delete
  - region eu-west-3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    meta.helm.sh/release-name: text-inference
    meta.helm.sh/release-namespace: platform
  labels:
    app.kubernetes.io/instance: text-inference
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: text-generation-inference
    helm.sh/chart: helm-universal-0.0.0
  name: text-generation-inference-s3
  namespace: platform
spec:
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 8Gi
  storageClassName: ""
  volumeName: text-generation-inference-s3

Environment

  • Kubernetes version (use kubectl version):
kubectl version
Client Version: v1.28.5
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.7-eks-b9c9ed7
  • Driver version: v1.5.1
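For comparison, the static provisioning example later in this document marks storageClassName: "" as required on the PVC; if that field is dropped (for example by chart templating), Kubernetes fills in the default StorageClass (gp3 here) and the claim can no longer bind to the statically created PV. A minimal sketch using the names from this report:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: text-generation-inference-s3
  namespace: platform
spec:
  accessModes:
    - ReadOnlyMany
  storageClassName: "" # required for static provisioning; omitting it lets the default class be applied
  resources:
    requests:
      storage: 8Gi # ignored by the driver, but required
  volumeName: text-generation-inference-s3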

Simplify caching configuration

/feature

Is your feature request related to a problem? Please describe.
Caching is supported today by adding a cache option to a persistent volume configuration and passing in a directory on the node's filesystem. This works, but comes with a couple sharp edges. Creating the directory on the node is not done automatically, so it has to be created manually ahead of time.

Describe the solution you'd like in detail
Caching configuration should be possible without manually making changes to the nodes and should make it easy to define different types of storage to use as cache like a ramdisk.

Describe alternatives you've considered
One potential solution is to reference other persistent volumes or mounts as cache, which could make for nice composability of the k8s constructs.

Additional context
Mountpoint's documentation on caching: https://github.com/awslabs/mountpoint-s3/blob/main/doc/CONFIGURATION.md#caching-configuration
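As a rough sketch of today's behavior (assuming a Mountpoint release with the --cache flag and a cache directory that already exists on every node), the cache location is passed through the PV's mountOptions like any other Mountpoint flag:

mountOptions:
  - allow-delete
  - region us-east-1
  - cache /tmp/s3-local-cache # hypothetical node-local directory; must be created on the host beforehand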

s3-plugin exit code 2

/kind bug

NOTE: If this is a filesystem related bug, please take a look at the Mountpoint repo to submit a bug report

What happened?
We quite often see s3-csi-node-* pods stop with exit code 2.

2024-01-30 13:54:45.207	
problem_container_name=liveness-probe exit_code=2 problem_pod_name=s3-csi-node-rwzj5
2024-01-30 13:54:45.207	
problem_container_name=s3-plugin exit_code=2 problem_pod_name=s3-csi-node-rwzj5
2024-01-30 13:54:46.075	
problem_container_name=liveness-probe exit_code=2 problem_pod_name=s3-csi-node-rwzj5
2024-01-30 13:54:46.075	
problem_container_name=s3-plugin exit_code=2 problem_pod_name=s3-csi-node-rwzj5
2024-01-30 13:54:46.082	
problem_container_name=liveness-probe exit_code=2 problem_pod_name=s3-csi-node-rwzj5
2024-01-30 13:54:46.082	
problem_container_name=s3-plugin exit_code=2 problem_pod_name=s3-csi-node-rwzj5
2024-01-30 13:55:35.781	
problem_container_name=liveness-probe exit_code=2 problem_pod_name=s3-csi-node-xzlwh
2024-01-30 13:55:35.781	
problem_container_name=s3-plugin exit_code=2 problem_pod_name=s3-csi-node-xzlwh
2024-01-30 13:55:36.435	
problem_container_name=liveness-probe exit_code=2 problem_pod_name=s3-csi-node-xzlwh
2024-01-30 13:55:36.435	
problem_container_name=s3-plugin exit_code=2 problem_pod_name=s3-csi-node-xzlwh
2024-01-30 13:55:36.522	
problem_container_name=liveness-probe exit_code=2 problem_pod_name=s3-csi-node-xzlwh
2024-01-30 13:55:36.523	
problem_container_name=s3-plugin exit_code=2 problem_pod_name=s3-csi-node-xzlwh
2024-01-30 14:14:12.884	
problem_container_name=liveness-probe exit_code=2 problem_pod_name=s3-csi-node-hndkx
2024-01-30 14:14:12.884	
problem_container_name=s3-plugin exit_code=2 problem_pod_name=s3-csi-node-hndkx
2024-01-30 14:14:13.639	
problem_container_name=liveness-probe exit_code=2 problem_pod_name=s3-csi-node-hndkx
2024-01-30 14:14:13.639	
problem_container_name=s3-plugin exit_code=2 problem_pod_name=s3-csi-node-hndkx
2024-01-30 14:14:13.647	
problem_container_name=liveness-probe exit_code=2 problem_pod_name=s3-csi-node-hndkx
2024-01-30 14:14:13.647	
problem_container_name=s3-plugin exit_code=2 problem_pod_name=s3-csi-node-hndkx
2024-01-30 14:24:42.846	
problem_container_name=liveness-probe exit_code=2 problem_pod_name=s3-csi-node-56jpl
2024-01-30 14:24:42.846	
problem_container_name=s3-plugin exit_code=2 problem_pod_name=s3-csi-node-56jpl
2024-01-30 14:24:43.675	
problem_container_name=liveness-probe exit_code=2 problem_pod_name=s3-csi-node-56jpl
2024-01-30 14:24:43.675	
problem_container_name=s3-plugin exit_code=2 problem_pod_name=s3-csi-node-56jpl
2024-01-30 14:24:43.790	
problem_container_name=liveness-probe exit_code=2 problem_pod_name=s3-csi-node-56jpl
2024-01-30 14:24:43.790	
problem_container_name=s3-plugin exit_code=2 problem_pod_name=s3-csi-node-56jpl

What you expected to happen?
We want to understand the reason pods restart.

How to reproduce it (as minimally and precisely as possible)?
We use ARM image public.ecr.aws/mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver:v1.2.0
repoURL: "https://awslabs.github.io/mountpoint-s3-csi-driver"
targetRevision: 1.2.0
chart: aws-mountpoint-s3-csi-driver

Anything else we need to know?:
We have this message in the pods logs:
Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,}
Also, we noticed a memory leak

Environment

  • Kubernetes version (use kubectl version):
    Client Version: v1.28.2
    Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
    Server Version: v1.27.8-eks-8cb36c9

  • Driver version:
    -We use ARM image public.ecr.aws/mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver:v1.2.0

S3 Express One AZ Support

/feature

Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is.
I'm not sure whether the current S3 Mountpoint CSI driver supports S3 Express One Zone. I tried with the existing static provisioning example, but it failed. If it is already supported transparently, I would suggest adding either an example or documentation for it. I would also like to contribute the example if it is already supported.

Describe the solution you'd like in detail
A clear and concise description of what you want to happen.

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

AWS ROSA support

/feature

Is your feature request related to a problem? Please describe.
Current Helm Chart not working in AWS ROSA.

Describe the solution you'd like in detail
AWS ROSA support.

Additional context
ImagePullBackOff

I1213 11:10:02.072905 1 driver.go:61] Driver version: 1.1.0, Git commit: c681ab1, build date: 2023-12-05T19:47:03Z, nodeID: ip-10-252-14-48.eu-west-1.compute.internal, mount-s3 version: 1.3.1
I1213 11:10:02.075541 1 mount_linux.go:282] Detected umount with safe 'not mounted' behavior
I1213 11:10:02.087546 1 driver.go:113] Listening for connections on address: &net.UnixAddr{Name:"/csi/csi.sock", Net:"unix"}

exitCode: 2

Feature Request: Enhanced Rsync Support for S3 CSI Driver

Problem Description

When attempting to use rsync for file synchronization between a Kubernetes pod and an AWS S3 bucket mounted via the AWS S3 CSI driver, several challenges arise. The primary issues include rsync performing filesystem operations that are not supported by S3, such as permission setting and atomic renaming. This results in errors like "Operation not permitted" and "Function not implemented," complicating the use of rsync for data synchronization tasks.

Desired Solution

I propose the development of enhanced support for POSIX-like filesystem operations within the S3 service or specifically within the AWS S3 CSI driver to better accommodate file synchronization tools like rsync. The solution could include:

  • A compatibility layer or additional driver options that adapt rsync filesystem operations to be compatible with S3 object storage behaviors.
  • Support for preserving file metadata, such as timestamps, in a way that aligns with rsync's functionality.
  • Improved error handling to manage rsync expectations and common operation pitfalls, especially around file renaming and permissions.

Alternatives Considered

To address these challenges, I have explored:

  • Utilizing the AWS CLI s3 sync command for synchronization, which lacks some of rsync's advanced features and efficiency.
  • Developing custom scripts with AWS SDKs, which increases complexity and maintenance requirements.
  • Adjusting rsync command-line options to mitigate errors, which has not fully resolved the underlying compatibility issues.

Additional Context

Seamless integration of rsync with S3 would greatly benefit a wide array of applications, from backup systems to dynamic content management for web services, by simplifying data synchronization processes. Enhancing S3's compatibility with rsync would leverage S3's storage capabilities in distributed systems like Kubernetes, where efficient and reliable data synchronization is a frequent requirement.

Support for topology labels

/feature

Is your feature request related to a problem? Please describe.
Support for topology labels.

Describe the solution you'd like in detail
Specifically the ability to set the AZ (or AZ ID for S3 Express One Zone support) to specify where the pod is scheduled.

Describe alternatives you've considered
For now, we have documentation on the way a customer can set the pod location for an AZ using node affinity.

Better instructions for microk8s users

/feature

Is your feature request related to a problem? Please describe.
I was not able to get the CSI driver working with microk8s due to permission issues (securityContext doesn't help).

Describe the solution you'd like in detail
Need to add an extra section in the tutorial describing how to configure the Helm chart values for microk8s users. This is what I needed to change to get it to work:

helm upgrade --install aws-mountpoint-s3-csi-driver \
  --namespace kube-system \
  --set node.kubeletPath=/var/snap/microk8s/common/var/lib/kubelet \
  aws-mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver

Support S3 compatible custom endpoints

/feature

Is your feature request related to a problem? Please describe.
Users who do not use AWS S3 buckets may want to use other (self-hosted) S3 products such as MinIO or Ceph S3 and still be able to use the S3 CSI driver.

Describe the solution you'd like in detail
Offer an optional setting to use a custom endpoint for S3 API calls. Other AWS libraries and products offer such settings.

Describe alternatives you've considered
None so far.

Additional context
None so far.
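Mountpoint already accepts an endpoint override, and the driver forwards PV mountOptions to mount-s3 as flags; the logs later in this document show --endpoint-url and --force-path-style being used against a MinIO-style endpoint. A hedged sketch of a PV's mountOptions with a placeholder self-hosted endpoint:

mountOptions:
  - region us-east-1
  - force-path-style
  - endpoint-url http://minio.example.internal:9000 # placeholder S3-compatible endpoint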

Support customizing tolerations

/feature

Is your feature request related to a problem? Please describe.
We have nodes with taints, and the driver is not being scheduled on them.

Describe the solution you'd like in detail
Similar to the eks-pod-identity-agent configuration, which allows tolerating taints (see the sketch below).
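A sketch of the kind of Helm values this request asks for; the keys shown here are hypothetical and would need to match whatever the chart actually exposes:

node:
  tolerations:
    - key: "dedicated"   # hypothetical taint key
      operator: "Equal"
      value: "special"
      effect: "NoSchedule"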

Fine-grained access control using pod-level IAM permissions

/feature

Allow for accessing S3 using pod-level IAM permissions granted through IRSA or recently launched EKS pod identities. Right now, the interaction with S3 is happening through the controller, and the S3 permissions are granted to it rather than the application pod accessing data.

Is your feature request related to a problem? Please describe.
When using the mountpoint for S3 CSI driver, the IAM permissions are granted to the IAM role associated with the driver rather than the pod accessing S3; this means that we cannot do fine-grained access control where each pod is only allowed access to the buckets/objects it needs.

Describe the solution you'd like in detail
Allow for using the pod-level IAM permissions when accessing the S3 through mountpoint S3 CSI driver.

Describe alternatives you've considered

  1. Using S3 API directly rather than the mountpoint for S3 CSI driver
  2. Using the mountpoint for S3 CSI driver within the pod rather than having it as a separate layer running as a daemon set

Driver name s3.csi.aws.com not found in the list of registered CSI drivers

/kind bug

What happened?

When mounting the volume in the pod, the kubelet cannot locate the driver.

Warning FailedMount 12s (x8 over 76s) kubelet MountVolume.MountDevice failed for volume "s3-pv" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name s3.csi.aws.com not found in the list of registered CSI drivers

What you expected to happen?

The volume should mount normally without failure.

How to reproduce it (as minimally and precisely as possible)?

Apply the example yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv
spec:
  capacity:
    storage: 120Gi # ignored, required
  accessModes:
    - ReadWriteMany # supported options: ReadWriteMany / ReadOnlyMany
  mountOptions:
    - allow-delete
    - region us-east-1
  csi:
    driver: s3.csi.aws.com # required
    volumeHandle: s3-csi-driver-volume
    volumeAttributes:
      bucketName: s3-csi-driver-private
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-claim
spec:
  accessModes:
    - ReadWriteMany # supported options: ReadWriteMany / ReadOnlyMany
  storageClassName: "" # required for static provisioning
  resources:
    requests:
      storage: 120Gi # ignored, required
  volumeName: s3-pv
---
apiVersion: v1
kind: Pod
metadata:
  name: s3-app
spec:
  containers:
    - name: app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "echo 'Hello from the container!' >> /data/$(date -u).txt; tail -f /dev/null"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: s3-claim

Anything else we need to know?:

 kc get pvc
NAME       STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
s3-claim   Bound    s3-pv    120Gi      RWX                           4m47s
kc get pv
s3-pv       120Gi      RWX            Retain          Bound    kube-system/s3-claim                                                                                                                    
 kubectl get csidriver
NAME                         ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS   REQUIRESREPUBLISH   MODES        AGE
ebs.csi.aws.com              true             false            false             <unset>         false               Persistent   48d
efs.csi.aws.com              false            false            false             <unset>         false               Persistent   48d
s3.csi.aws.com               false            false            false             <unset>         false               Persistent   135m
 kc get storageClass
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2             kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  203d
gp3 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  203d

Is it necessary to create a new storage class?

kc logs pod/s3-csi-node-r2w75
Defaulted container "s3-plugin" out of: s3-plugin, node-driver-registrar, liveness-probe, install-mountpoint (init)
I1212 16:35:21.276393       1 driver.go:61] Driver version: 1.1.0, Git commit: c681ab1f19ccba5976e3263f0e3df65718750369, build date: 2023-12-05T19:47:03Z, nodeID: ip-0-00-0-00.ec2.internal, mount-s3 version: 1.3.1
I1212 16:35:21.282921       1 mount_linux.go:285] 'umount /tmp/kubelet-detect-safe-umount3132235530' failed with: exit status 32, output: umount: /tmp/kubelet-detect-safe-umount3132235530: must be superuser to unmount.
I1212 16:35:21.282946       1 mount_linux.go:287] Detected umount with unsafe 'not mounted' behavior
I1212 16:35:21.289423       1 driver.go:83] Found AWS_WEB_IDENTITY_TOKEN_FILE, syncing token
I1212 16:35:21.289599       1 driver.go:113] Listening for connections on address: &net.UnixAddr{Name:"/csi/csi.sock", Net:"unix"}
I1212 16:35:22.113470       1 node.go:204] NodeGetInfo: called with args
kc describe sa s3-csi-driver-sa
Name:                s3-csi-driver-sa
Namespace:           kube-system
Labels:              app.kubernetes.io/component=csi-driver
                     app.kubernetes.io/instance=aws-mountpoint-s3-csi-driver
                     app.kubernetes.io/managed-by=EKS
                     app.kubernetes.io/name=aws-mountpoint-s3-csi-driver
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::0000000:role/TMP_AmazonEKS_S3_CSI_DriverRole
Image pull secrets:  <none>
Mountable secrets:   <none>
Tokens:              <none>
Events:              <none>

Environment

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.3", GitCommit:"25b4e43193bcda6c7328a6d147b1fb73a33f1598", GitTreeState:"clean", BuildDate:"2023-06-14T09:47:38Z", GoVersion:"go1.20.5", Compiler:"gc", Platform:"darwin/amd64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"25+", GitVersion:"v1.25.15-eks-4f4795d", GitCommit:"9587e521d190ecb7ce201993ceea41955ed4a556", GitTreeState:"clean", BuildDate:"2023-10-20T23:22:38Z", GoVersion:"go1.20.10", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.27) and server (1.25) exceeds the supported minor version skew of +/-1
  • Driver version: v1.1.0-eksbuild.1

Support for Windows based Container Images

/feature

Is your feature request related to a problem? Please describe.
Using S3 mountpoints in a Windows container would be convenient.

Describe the solution you'd like in detail
Same functionality, just for windows pods.

Describe alternatives you've considered
Many; there are no good Windows storage solutions with multi-AZ support.

Cannot write when using `allow-other` MP option

/kind bug

What happened?
Unable to write. Currently testing with AWS IAM role that has all s3 action permissions on the bucket being used by the EKS Mountpoint S3 addon.

/datas3_us/live/pg-manager/pg_wal/spilo/****-*****10140$ tar -czvf archive_name.tar.gz 13edd11c-7e37-4b11-b54d-c8308013957d/
13edd11c-7e37-4b11-b54d-c8308013957d/
13edd11c-7e37-4b11-b54d-c8308013957d/wal/
13edd11c-7e37-4b11-b54d-c8308013957d/wal/11/
13edd11c-7e37-4b11-b54d-c8308013957d/wal/11/basebackups_005/
13edd11c-7e37-4b11-b54d-c8308013957d/wal/11/basebackups_005/base_00000001000000000000000D_00000040/
13edd11c-7e37-4b11-b54d-c8308013957d/wal/11/basebackups_005/base_00000001000000000000000D_00000040/extended_version.txt
13edd11c-7e37-4b11-b54d-c8308013957d/wal/11/basebackups_005/base_00000001000000000000000D_00000040/tar_partitions/
13edd11c-7e37-4b11-b54d-c8308013957d/wal/11/basebackups_005/base_00000001000000000000000D_00000040/tar_partitions/part_00000000.tar.lzo

gzip: stdout: Input/output error
tar: archive_name.tar.gz: Cannot write: Broken pipe
tar: Child returned status 1
tar: Error is not recoverable: exiting now

What you expected to happen?
Zip a large directory recursively to a zip file within the S3 bucket.
How to reproduce it (as minimally and precisely as possible)?
see above
Anything else we need to know?:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-eks-us-pv
  namespace: ****
spec:
  capacity:
    storage: 1200Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - allow-delete
    - allow-other
    - region us-east-1
    - uid=1000
    - gid=1000
  csi:
    driver: s3.csi.aws.com
    volumeHandle: eks-logging-****-volume
    volumeAttributes:
      bucketName: eks-logging-****
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-eks-us-pvc
  namespace: ****
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1200Gi
  volumeName: s3-eks-us-pv

Environment

  • Kubernetes version (use kubectl version): Server Version: version.Info{Major:"1", Minor:"25+", GitVersion:"v1.25.16-eks-8cb36c9", GitCommit:"3a3ea80e673d7867f47bdfbccd4ece7cb5f4a83a", GitTreeState:"clean", BuildDate:"2023-11-22T21:53:22Z", GoVersion:"go1.20.10", Compiler:"gc", Platform:"linux/amd64"}
  • Driver version: v1.2.0-eksbuild.1

No schedule on fargate node

/feature

Is your feature request related to a problem? Please describe.
The DaemonSet should not be scheduled on Fargate nodes.

Describe the solution you'd like in detail
Add the possibility to use node affinity

I have already created PR #175 to solve this issue.


volume is not accessible from the container

/kind bug
What happened?
When successfully mounted, the s3 volume is not accessible from the container and shows permission denied:
[screenshot: permission denied error inside the container]
When accessing the mounted directory on the host (/var/lib/kubelet/pods/3e9a54c9-04b4-421b-a4e0-4d981e8c8139/volumes/kubernetes.io~csi/s3-pv/mount) everything is fine (we can read and write)....
Any ideas?

What you expected to happen?

How to reproduce it (as minimally and precisely as possible)?

Anything else we need to know?:

Environment

  • Kubernetes version (use kubectl version):
    kubectl version
    Client Version: v1.28.2
    Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
    Server Version: v1.29.0+k3s1
  • Driver version:
    helm chart 1.2.0
    Driver version: 1.2.0, Git commit: 8a832dc, build date: 2024-01-17T16:52:48Z, nodeID: lp3dnode, mount-s3 version: 1.3.2

Support of EKS-Anywhere with NetApp S3

We are running EKS-Anywhere (with subscription) in house (our data must stay in house). We have a NetApp device with S3 server support.
I may have overlooked something, but it seems this CSI driver supports only Amazon S3 buckets. If that is true, then this is a feature request to support third-party S3 bucket solutions (that are fully compatible with Amazon's S3 protocol).

Support Bottlerocket OS

/feature

Is your feature request related to a problem? Please describe.

When I'm trying to mount S3 bucket to a pod running on Bottlerocket OS worker node, mount failed and the error on pod events said:

MountVolume.SetUp failed for volume "<Redacted>" : rpc error: code = Internal desc = Could not mount "<Redacted>" at "/var/lib/kubelet/pods/<Redacted>/volumes/kubernetes.io~csi/<Redacted>/mount": Mount failed: Failed to start systemd unit on host: SELinux policy denies access: Permission denied output:

System Info from kubectl describe node:

  Kernel Version:             5.15.136
  OS Image:                   Bottlerocket OS 1.16.1 (aws-k8s-1.27-nvidia)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.6.24+bottlerocket
  Kubelet Version:            v1.27.7-eks-1670f88
  Kube-Proxy Version:         v1.27.7-eks-1670f88

Describe the solution you'd like in detail

I know Bottlerocket is not in the first batch of supported distros. Is there any workaround or an ETA for Bottlerocket support?

Dual stack S3 endpoint support/documentation

/feature

Is your feature request related to a problem? Please describe.
If VPCs are created with DNS64 and NAT64 located in another VPC (e.g. an egress VPC), this adds a lot of latency and cost compared to using dual-stack endpoints that can be reached over IPv6.

Describe the solution you'd like in detail
My assumption is that this only needs an option to configure whether dual-stack endpoints should be used. (They seem to be off by default: https://github.com/awslabs/mountpoint-s3/blob/main/mountpoint-s3-client/src/endpoint_config.rs#L67)

If this is already supported, I think the documentation should be improved a bit so that it is clear it already works as expected.

ImagePullSecret is missing

/kind bug
The imagePullSecret ref is missing in the DaemonSet

What happened?
The chart does not pull containers from a private ECR registry.

What you expected to happen?
The chart should provide a way to set an imagePullSecret.

How to reproduce it (as minimally and precisely as possible)?
Try pulling the images from a private ECR registry that requires authentication.

Anything else we need to know?:

Environment

  • Kubernetes version (use kubectl version):
  • Driver version:
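As a workaround sketch until the chart exposes this, the standard Kubernetes imagePullSecrets field can be patched onto the node DaemonSet directly (assuming the DaemonSet is named s3-csi-node, as the pod names elsewhere in this document suggest; the secret name is a placeholder):

kubectl -n kube-system patch daemonset s3-csi-node --type merge \
  -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"my-ecr-pull-secret"}]}}}}'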

Is this possible to mount S3 under non-root user?

/feature

Is your feature request related to a problem? Please describe.
Hi! I'm trying to mount an S3 bucket into a pod running as a non-root user.
The bucket itself seems to mount with no problems, but for some reason when I go to the mount path I get a permission denied error.

File permissions look weird
ls -la /tmp/ result:

total 0
drwxrwxrwt 1 root root 16 Jan  8 11:22 .
drwxr-xr-x 1 root root 28 Jan  8 11:22 ..
d????????? ? ?    ?     ?            ? s3

Deployment.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
  namespace: internal
spec:
  selector:
    matchLabels:
      app: 
  replicas: 1
  template:
    metadata:
      labels:
        app: 
    spec:
      securityContext:
        runAsUser: 1001
        runAsGroup: 0
        fsGroup: 0
      containers:
      - name: foobarservice
        image: foobarimage
        command:
          - bash
        args:
          - '-c'
          - while true; do echo 'This is an infinite loop'; done
        volumeMounts:
        - name: s3-mount-point
          mountPath: /tmp/s3
      volumes:
        - name: s3-mount-point
          persistentVolumeClaim:
            claimName: s3-mount-point-pvc

Describe the solution you'd like in detail
Ability to run under non-root user

Describe alternatives you've considered
I tried to manage permissions using an init container and chown-ing the folder, but with no success; the init container gets a permission denied error as well. S3 mounting works only if the container runs as the root user.

Additional context
Add any other context or screenshots about the feature request here.
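For what it's worth, another report in this document (the allow-other write issue) mounts the volume with allow-other plus uid/gid options, which is the usual way to make a FUSE mount readable by a non-root container user. A sketch matching the runAsUser: 1001 above:

mountOptions:
  - allow-other # allow users other than the mounting user to access the mount
  - uid=1001    # match the pod's runAsUser
  - gid=0       # match the pod's runAsGroup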

Multiple AWS credentials

I use one AWS credential for multiple S3 buckets. Just a quick question for anyone who may have worked around this: does the Helm chart allow using different AWS S3 credentials for different buckets?

/triage support

Access Denied Error: Failed to create mount process in AWS China region

/kind bug

NOTE: If this is a filesystem related bug, please take a look at the Mountpoint repo to submit a bug report

What happened?

Driver: [screenshot of driver version output]

E0311 06:04:17.300386 1 driver.go:96] GRPC error: rpc error: code = Internal desc = Could not mount "alphafold2-dataset-bjs" at "/var/lib/kubelet/pods/efb8c26a-e4f0-44e6-8685-a739dcb82c81/volumes/kubernetes.io~csi/s3-pv/mount": Mount failed: Failed to start service output: Error: Failed to create S3 client Caused by: 0: initial ListObjectsV2 failed for bucket alphafold2-dataset-bjs in region cn-north-1 1: Client error 2: Forbidden: Access Denied Error: Failed to create mount process
What you expected to happen?

How to reproduce it (as minimally and precisely as possible)?

REGION=cn-north-1
CLUSTER_NAME=EKS-s3-csi-test
aws configure set default.region $REGION
cat > cluster-config.yaml <<EOF
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: $CLUSTER_NAME
  region: $REGION
  version: "1.29"

managedNodeGroups:
  - name: ng-1-workers
    labels: { role: workers }
    instanceType: m5.large
    desiredCapacity: 1
    volumeSize: 80
    privateNetworking: true
  - name: ng-2-builders
    labels: { role: builders }
    instanceType: m5.large
    desiredCapacity: 2
    volumeSize: 100
    privateNetworking: true
EOF

eksctl create cluster -f cluster-config.yaml

aws eks update-kubeconfig --region $REGION --name $CLUSTER_NAME

eksctl utils associate-iam-oidc-provider --cluster $CLUSTER_NAME --approve

eksctl create iamserviceaccount --name s3-csi-driver-sa \
--namespace kube-system \
--cluster $CLUSTER_NAME \
--role-name s3-csi-driver-role \
--attach-policy-arn arn:aws-cn:iam::aws:policy/AmazonS3FullAccess \
--approve 

kubectl apply -k "github.com/awslabs/mountpoint-s3-csi-driver/deploy/kubernetes/overlays/stable/"

kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-mountpoint-s3-csi-driver

wget https://raw.githubusercontent.com/awslabs/mountpoint-s3-csi-driver/main/examples/kubernetes/static_provisioning/static_provisioning.yaml

sed 's/- region us-west-2/- region cn-north-1/g; s/bucketName: s3-csi-driver/bucketName: alphafold2-dataset-bjs/g' static_provisioning.yaml > static_provisioning_wt.yaml

kubectl apply -f static_provisioning_wt.yaml

kubectl describe pod s3-app
[screenshot of kubectl describe pod output]

Anything else we need to know?:

Environment

  • Kubernetes version (use kubectl version): 1.29
  • Driver version: 1.4.0

mount same pv in more than one pod

Hey guys, maybe this is a dumb question, but I would like to know how to mount the same PV + PVC into more than one pod.

The documentation shows how to mount an S3 bucket using a PV and PVC, and that worked fine for me. But if my pods need to scale, how can we do it? Can scaled pods share the same PV or PVC?

thanks
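Since the example PVs in this document use ReadWriteMany / ReadOnlyMany access modes, multiple pods (for example, replicas of a single Deployment) can reference the same claim. A minimal sketch reusing the s3-claim PVC from the static provisioning example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: s3-reader # hypothetical
spec:
  replicas: 3
  selector:
    matchLabels:
      app: s3-reader
  template:
    metadata:
      labels:
        app: s3-reader
    spec:
      containers:
        - name: app
          image: centos
          command: ["sleep", "infinity"]
          volumeMounts:
            - name: persistent-storage
              mountPath: /data
      volumes:
        - name: persistent-storage
          persistentVolumeClaim:
            claimName: s3-claim # every replica mounts the same bucket through this claim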

Feature - Dynamic Provisioning

/feature

I have a multi-cloud application that we want to offer on different hyperscalers. For this purpose I have the use case of writing data to an Azure Storage Account, S3 on AWS, and so on.
For Azure environments I use the Azure Blob CSI driver, which has the ability to dynamically provision Azure Storage Accounts.
Same for GCP.

At the moment it's not possible to dynamically create s3 buckets.

This issue urgently requests the ability to provision S3 buckets dynamically, as other CSI drivers do.

Describe alternatives you've considered
A manual process, Crossplane, or Terraform are alternatives, but there should be a way to provision S3 buckets from the CSI driver dynamically for maximum flexibility.

Additional context
Add any other context or screenshots about the feature request here.

Unmounting is not successful and the deletion of the deployment gets stuck

/kind bug

What happened?
I'm using Minikube locally on my laptop and deployed a pod that mounts a bucket using the driver. When I delete the pod (and the PV and PVC), the kubectl command gets stuck.

% kubectl delete -f dfv.yaml      
persistentvolume "s3-pv" deleted
persistentvolumeclaim "s3-claim" deleted
pod "dfv-app" deleted
service "dfv-service" deleted
(stuck here forever)

In the driver logs, I can see that the bucket is mounted correctly, but then the unmount of the same path fails, stating that it is not mounted. It actually is mounted, because I could access the files inside the bucket. If I remove the volume mount from the YAML file, everything works fine.

I0118 09:34:11.527549       1 node.go:49] NodePublishVolume: called with args volume_id:"s3-csi-driver-volume" target_path:"/var/lib/kubelet/pods/e8ee9200-c877-4b39-b3d8-a3aa7de7c772/volumes/kubernetes.io~csi/s3-pv/mount" volume_capability:<mount:<mount_flags:"allow-delete" mount_flags:"region Lisbon" mount_flags:"force-path-style" mount_flags:"endpoint-url http://192.168.5.2:9000" > access_mode:<mode:MULTI_NODE_MULTI_WRITER > > volume_context:<key:"bucketName" value:"s3.select" > 
I0118 09:34:11.527601       1 node.go:81] NodePublishVolume: creating dir /var/lib/kubelet/pods/e8ee9200-c877-4b39-b3d8-a3aa7de7c772/volumes/kubernetes.io~csi/s3-pv/mount
I0118 09:34:11.527650       1 node.go:108] NodePublishVolume: mounting s3.select at /var/lib/kubelet/pods/e8ee9200-c877-4b39-b3d8-a3aa7de7c772/volumes/kubernetes.io~csi/s3-pv/mount with options [--allow-delete --endpoint-url=http://192.168.5.2:9000 --force-path-style --region=Lisbon]
I0118 09:34:11.532402       1 systemd.go:99] Creating service to run cmd /opt/mountpoint-s3-csi/bin/mount-s3 with args [s3.select /var/lib/kubelet/pods/e8ee9200-c877-4b39-b3d8-a3aa7de7c772/volumes/kubernetes.io~csi/s3-pv/mount --allow-delete --endpoint-url=http://192.168.5.2:9000 --force-path-style --region=Lisbon --user-agent-prefix=s3-csi-driver/1.1.0]: mount-s3-1.3.1-0b940dcc-f0cd-44ba-b55d-f2ce45118fa9.service
I0118 09:34:11.685355       1 node.go:113] NodePublishVolume: /var/lib/kubelet/pods/e8ee9200-c877-4b39-b3d8-a3aa7de7c772/volumes/kubernetes.io~csi/s3-pv/mount was mounted

I0118 12:19:29.065261       1 node.go:188] NodeGetCapabilities: called with args 
I0118 12:19:48.582278       1 node.go:144] NodeUnpublishVolume: called with args volume_id:"s3-csi-driver-volume" target_path:"/var/lib/kubelet/pods/e8ee9200-c877-4b39-b3d8-a3aa7de7c772/volumes/kubernetes.io~csi/s3-pv/mount" 
I0118 12:19:48.582959       1 node.go:166] NodeUnpublishVolume: target path /var/lib/kubelet/pods/e8ee9200-c877-4b39-b3d8-a3aa7de7c772/volumes/kubernetes.io~csi/s3-pv/mount not mounted, skipping unmount

Any suggestions on how to get more information about what is wrong?

What you expected to happen?
The bucket is unmounted

How to reproduce it (as minimally and precisely as possible)?

Anything else we need to know?:

Environment

  • Kubernetes version (use kubectl version):
    Client Version: v1.29.0
    Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
    Server Version: v1.28.3
  • Driver version: aws-mountpoint-s3-csi-driver-1.1.0

It should not consume system reserved computational resources

I understand this CSI driver launches the mountpoint-s3 (FUSE) process in the host's systemd.

On a normal Kubernetes node, the cluster administrator configures appropriate reserved compute resources for system components (non-Kubernetes components running on the host): https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/.

Launching FUSE processes in the host's systemd makes it difficult to manage compute resources and their reservation on the node, doesn't it? In particular, when many pods using this CSI driver are scheduled on the same node, the situation could get worse.

When using the CSI driver in Kubernetes, how should a cluster administrator manage the node's compute resources? Are there any recommendations?

append file error

/kind bug

What happened?


bash-4.4$ cd /data/
bash-4.4$ ls
'Tue Dec 12 08:56:36 UTC 2023.txt'
bash-4.4$ ls
'Tue Dec 12 08:56:36 UTC 2023.txt'
bash-4.4$ echo 1 > 1.txt
bash-4.4$ cat 1.txt
1
bash-4.4$ echo 11 >> 1.txt
bash: 1.txt: Operation not permitted
bash-4.4$ cat 1.txt
1
bash-4.4$ ls -l
total 1
-rw-r--r-- 1 1000 2000 2 Dec 12 08:57 1.txt
-rw-r--r-- 1 1000 2000 26 Dec 12 08:56 'Tue Dec 12 08:56:36 UTC 2023.txt'

What you expected to happen?

echo 11 >> 1.txt should succeed and append to the file.

How to reproduce it (as minimally and precisely as possible)?

Anything else we need to know?:

Environment

  • Kubernetes version (use kubectl version): EKS 1.27
  • Driver version: v1.1.0-eksbuild.1

Feature - Make credentials definable per Persistent Volume

/feature

Is your feature request related to a problem? Please describe.
The Mountpoint CSI driver handles all Mountpoint S3 PVs within an EKS cluster.
When data is segmented due to security concerns, using different buckets for multi-tenancy or cross-account buckets (e.g. BYOB), the current setup requires the CSI driver to be able to access all locations. This requires the IAM policy to grant global access to all resources and allows the CSI driver to operate with more access than required (cf. the least-privilege principle) when handling specific PV requests.

Describe the solution you'd like in detail
Allow credential definition (including temp AWS credentials) per PV or reference to namespace/secret per PV.
This allows to define a IAM policy per PV to minimise the authorisations aligned with the Least privilege principle.
These credentials will be read and propagated to the mount-s3 command.

Describe alternatives you've considered
None

Additional context
I did a small POC where I modified the CSI driver.

  • Added additional attributes (awsAccessKeyId, awsSecretAccessKey, awsSessionToken) in spec.csi.volumeAttributes on the PV definition
  • Read those additional attributes in pkg/driver/node.go and propagate those the mount function (cfr. d.Mounter.Mount) defined in pkg/driver/mount.go
  • When defined use these creds and add those to the env []string. Due to order of credential usage by the AWS client, the creds from the PV take precedence above the IRSA role defined on the csi DaemonSet via the Service account or on the Env vars defined on the CSI DaemonSet

Support control which buckets specific pod can mount

/feature

Is your feature request related to a problem? Please describe.
According to this doc, I need to authorize the S3 CSI add-on for the buckets it can mount into pods. The problem is that I want different pods to be able to mount only a limited set of buckets, not all the buckets the add-on can mount.

Assuming I have 3 distinct projects hosted in EKS, they have their own set of assets and config files stored in S3. Project B should not mount Project A's bucket and potentially touches or views what are inside.

Describe the solution you'd like in detail
Is it possible to use EKS Pod Identity, or to let us specify which role to use when mounting S3 buckets via an annotation on the PVC?

Describe alternatives you've considered

Additional context

Support encryption using KMS key

/feature

Hey! It would be very useful to have support for encryption using a KMS key. Currently our S3 bucket policies have the controls below implemented:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyIncorrectEncryptionHeader",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::mybucket/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "AES256"
                }
            }
        },
        {
            "Sid": "DenyUnEncryptedObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::mybucket/*",
            "Condition": {
                "Null": { 
                    "s3:x-amz-server-side-encryption": "true"
                }
            }
        }
    ]
}

Even though we can read from the S3 volumes, due to this limitation we cannot write to the volumes/S3 buckets, as there is no way to explicitly say that an object should be encrypted when writing.

/app # echo "test" > /data/test.txt
sh: write error: I/O error

On the s3-csi-node DaemonSet I cannot find any related errors, even with increased verbosity.
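Recent Mountpoint releases expose server-side encryption flags (--sse and --sse-kms-key-id). Assuming a driver version that bundles such a Mountpoint build, they could be passed through the PV's mountOptions like any other flag; a hedged sketch with a placeholder key ARN:

mountOptions:
  - sse aws:kms                                                   # assumption: requires a Mountpoint build with the --sse flag
  - sse-kms-key-id arn:aws:kms:us-east-1:111122223333:key/EXAMPLE # placeholder KMS key ARN; the policy above as written expects AES256 instead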

v1.3.1 install-mountpoint failed with permission denied on Bottlerocket OS 1.19.2 (aws-k8s-1.29)

/kind bug

NOTE: If this is a filesystem related bug, please take a look at the Mountpoint repo to submit a bug report

What happened?

The s3-csi-driver add-on failed to install with the latest (to this date) Bottlerocket 1.19.2-29cc92cc.

What you expected to happen?

It was working with v1.2.0

How to reproduce it (as minimally and precisely as possible)?

Create a node group with Bottlerocket image and install the addon. The addon initContainer failed:

Copying file mount-s3
Failed install binDir /mountpoint-s3/bin installDir /target: Failed to copy file mount-s3: open /target/mount-s3.tmp: permission denied

https://github.com/awslabs/mountpoint-s3-csi-driver/blob/main/deploy/kubernetes/base/node-daemonset.yaml#L30

Anything else we need to know?:

#155

Environment

  • Kubernetes version (use kubectl version): EKS 1.29
  • Driver version: 1.3.1

SELinux support for S3 CSI Driver for EKS Addon

/feature

Is your feature request related to a problem? Please describe.
Our enterprise desires all of its instances to be security-hardened with SELinux enabled (we are also installing the CIS build kit on the AMIs, starting from the EKS-optimized Amazon Linux 2 AMI). However, SELinux prevents the s3-plugin container (part of the S3 CSI driver pod) from starting, as it fails to perform a mount operation. AWS Support has advised us to submit a feature request about this issue.

Describe the solution you'd like in detail
Enhance the support of S3 CSI driver for SELinux so that it can work without any issues.

Describe alternatives you've considered
The only alternatives would be to:

  • use audit2allow to generate SELinux custom policies (however it could be tedious to maintain long-term)
  • disable SELinux (the least favourite option)

Additional context
Pod logs:

failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/proc/395888/mounts" to rootfs at "/host/proc/mounts": change mount propagation through procfd: mount /host/proc/mounts (via /proc/self/fd/6), flags: 0x44000: permission denied: unknown

Audit.log logs:

avc: denied { mounton } for pid=40998 comm="runc:[2:INIT]" path="/run/containerd/io.containerd.runtime.v2.task/k8s.io/c79bb808487e15e9d58a01ad593c8d446fd4bb20643c9ef154437596283ee42b/rootfs/host/proc/mounts" dev="proc" ino=34311 scontext=system_u:system_r:unconfined_service_t:s0 tcontext=system_u:system_r:unconfined_service_t:s0 tclass=file permissive=0

Internal AWS support reference (case ID): 171041866401170

Make hostTokenPath configurable (/var/lib/kubelet/plugins/s3.csi.aws.com/token)

/kind bug

What happened?

The csi-driver is making assumptions about the kubelet directory (default /var/lib/kubelet) so it doesn't work in environments not running with the default. Specifically this constant:

hostTokenPath = "/var/lib/kubelet/plugins/s3.csi.aws.com/token"
means that in our environment where the kubelet directory is in a different location the AWS_WEB_IDENTITY_TOKEN_FILE doesn't work.

What you expected to happen?

It should be possible to define a custom path for the kubelet directory e.g. via a flag, such that AWS_WEB_IDENTITY_TOKEN_FILE can work in environments with custom settings. It's fine if the current constant is the default, it should just be possible to change.

How to reproduce it (as minimally and precisely as possible)?

Run kubelet with a different setting for --root-dir and configure the mount path accordingly in the s3-csi-driver DaemonSet.

We run with a non-standard root because we do some tricks on the AMI: for instances without an instance store we put the root directory on the root disk, and for instance-store instance types we put it on the SSD (I don't think the actual use case is relevant, just context for why ours differs from the default).

Anything else we need to know?:

Environment

  • Kubernetes version (use kubectl version):
  • Driver version:

Support for S3 Buckets in Different Accounts?

/feature

I'm looking at running this in a self-hosted K8s environment but desire to access different S3 buckets that are spread across multiple AWS accounts, which means each will need a unique access/secret key combination.

I've spent a fair bit of time looking around, but it isn't clear to me whether this is achievable. The only thing I can envisage right now is using mountOptions on the Persistent Volume definition to be able to select the right credential profile, but I can't see a way to provide the profiles needed.

Many thanks

Can't mount two S3 mount points in the same pod

/kind bug

NOTE: If this is a filesystem related bug, please take a look at the Mountpoint repo to submit a bug report

What happened?

Tried to mount two different buckets in the same pod:

Pod volume mounts

            - name: front-bucket
              mountPath: /mnt/front              
              readOnly: true
            - name: assets-bucket
              mountPath: /mnt/assets    

Pod Volumes

  - name: assets-bucket
    persistentVolumeClaim: 
      claimName: xxx-assets-claim
  - name: front-bucket
    persistentVolumeClaim: 
      claimName: xxx-front-claim 

If I comment out one or the other volumeMount, it's okay and the pod starts correctly.

What you expected to happen?

I expect the pod to run with both S3 mount points attached :)

How to reproduce it (as minimally and precisely as possible)?

Create two PVs (a different bucket for each) and two PVCs, then try to mount both in one pod.

Anything else we need to know?:

I also use CSI Secret Store + CSI Secret AWS Plugin

Logs of one S3 CSI driver pod; we only see one of the two volumes being detected/mounted:

I1222 15:22:23.195319       1 driver.go:61] Driver version: 1.1.0, Git commit: c681ab1f19ccba5976e3263f0e3df65718750369, build date: 2023-12-05T19:47:03Z, nodeID: ip-10-1-19-61.eu-west-3.compute.internal, mount-s3 version: 1.3.1
I1222 15:22:23.199800       1 mount_linux.go:285] 'umount /tmp/kubelet-detect-safe-umount2175845353' failed with: exit status 32, output: umount: /tmp/kubelet-detect-safe-umount2175845353: must be superuser to unmount.
I1222 15:22:23.199818       1 mount_linux.go:287] Detected umount with unsafe 'not mounted' behavior
I1222 15:22:23.205186       1 driver.go:83] Found AWS_WEB_IDENTITY_TOKEN_FILE, syncing token
I1222 15:22:23.205394       1 driver.go:113] Listening for connections on address: &net.UnixAddr{Name:"/csi/csi.sock", Net:"unix"}
I1222 15:22:23.887141       1 node.go:204] NodeGetInfo: called with args 
I1222 15:22:31.688535       1 node.go:188] NodeGetCapabilities: called with args 
I1222 15:22:31.689731       1 node.go:188] NodeGetCapabilities: called with args 
I1222 15:22:31.690351       1 node.go:188] NodeGetCapabilities: called with args 
I1222 15:22:31.690838       1 node.go:188] NodeGetCapabilities: called with args 
I1222 15:22:31.691495       1 node.go:49] NodePublishVolume: called with args volume_id:"s3-csi-driver-volume" target_path:"/var/lib/kubelet/pods/b5dc74ab-ac7a-4f34-8a45-14596e68137c/volumes/kubernetes.io~csi/xxx-front-pv/mount" volume_capability:<mount:<mount_flags:"region eu-west-3" > access_mode:<mode:MULTI_NODE_MULTI_WRITER > > volume_context:<key:"bucketName" value:"xxx-front-production" > 
I1222 15:22:31.691547       1 node.go:81] NodePublishVolume: creating dir /var/lib/kubelet/pods/b5dc74ab-ac7a-4f34-8a45-14596e68137c/volumes/kubernetes.io~csi/xxx-front-pv/mount
I1222 15:22:31.691613       1 node.go:108] NodePublishVolume: mounting xxx-front-production at /var/lib/kubelet/pods/b5dc74ab-ac7a-4f34-8a45-14596e68137c/volumes/kubernetes.io~csi/xxx-front-pv/mount with options [--region=eu-west-3]
I1222 15:22:31.693249       1 systemd.go:99] Creating service to run cmd /opt/mountpoint-s3-csi/bin/mount-s3 with args [xxx-front-production /var/lib/kubelet/pods/b5dc74ab-ac7a-4f34-8a45-14596e68137c/volumes/kubernetes.io~csi/xxx-front-pv/mount --region=eu-west-3 --user-agent-prefix=s3-csi-driver/1.1.0]: mount-s3-1.3.1-df2c8236-7483-4356-8c7c-ebb2f88904be.service
I1222 15:22:31.813628       1 node.go:113] NodePublishVolume: /var/lib/kubelet/pods/b5dc74ab-ac7a-4f34-8a45-14596e68137c/volumes/kubernetes.io~csi/xxx-front-pv/mount was mounted
I1222 15:22:46.776028       1 node.go:188] NodeGetCapabilities: called with args 
I1222 15:24:11.527808       1 node.go:188] NodeGetCapabilities: called with args 
I1222 15:25:48.044354       1 node.go:188] NodeGetCapabilities: called with args 
I1222 15:27:03.892153       1 node.go:188] NodeGetCapabilities: called with args 
I1222 15:28:25.606031       1 node.go:188] NodeGetCapabilities: called with args 
I1222 15:29:40.029468       1 node.go:188] NodeGetCapabilities: called with args 
I1222 15:31:38.040287       1 node.go:188] NodeGetCapabilities: called with args

Environment

  • Kubernetes version (use kubectl version): 1.28 (EKS)
  • Driver version: v1.1.0-eksbuild.1
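One detail worth noting from the logs above: the mounted volume is reported with volume_id:"s3-csi-driver-volume", the same handle used in the example manifests. The CSI volumeHandle must be unique per PersistentVolume; if both PVs reuse the same handle, kubelet treats them as the same volume, which would match the one-of-two-mounted symptom. A sketch with distinct handles (bucket names are placeholders):

# PV for the front bucket
  csi:
    driver: s3.csi.aws.com
    volumeHandle: xxx-front-volume  # unique per PV
    volumeAttributes:
      bucketName: xxx-front-production
---
# PV for the assets bucket
  csi:
    driver: s3.csi.aws.com
    volumeHandle: xxx-assets-volume # unique per PV
    volumeAttributes:
      bucketName: xxx-assets-production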

providing credentials via awsAccessSecret does not work

/kind bug

What happened?
Providing credentials via values.yaml awsAccessSecret does not work.
Creating /root/.aws/credentials with the correct credentials and retrying works.

What you expected to happen?
It should work without the credentials file on the host (/root/.aws/credentials)

How to reproduce it (as minimally and precisely as possible)?

Anything else we need to know?:

Environment

  • Kubernetes version (use kubectl version):
    kubectl version
    Client Version: v1.28.2
    Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
    Server Version: v1.29.0+k3s1
  • Driver version:
    helm chart 1.2.0
    Driver version: 1.2.0, Git commit: 8a832dc, build date: 2024-01-17T16:52:48Z, nodeID: lp3dnode, mount-s3 version: 1.3.2

Ephemeral CSI support

/feature

Is your feature request related to a problem? Please describe.
Kubernetes pods can use CSI ephemeral volumes (https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#csi-ephemeral-volumes) to specify volume configuration directly, so that users can use their own buckets/credentials with their workloads without involving cluster admins.

Describe the solution you'd like in detail
The driver should set the Ephemeral volumeLifecycleModes property of the CSIDriver object: https://kubernetes-csi.github.io/docs/csi-driver-object.html#what-fields-does-the-csidriver-object-have

And support the bucket name via attributes. If they need to specify their credentials, https://github.com/kubernetes/enhancements/blob/master/keps/sig-storage/596-csi-inline-volumes/README.md#secret-reference can be used to pass in a secret from the users namespace. Alternately, it may be possible to get a projected service account token for the workload to federate via oidc.

Bottlerocket AMI mounting fail event in pod

/kind bug
What happened?
When using the Bottlerocket AMI with a Karpenter NodeClass and describing the pod, the events show:

 Warning  FailedMount       3m54s (x7 over 4m26s)  kubelet            MountVolume.MountDevice failed for volume "3416296-pv" : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name s3.csi.aws.com not found in the list of registered CSI drivers
kubectl describe csidrivers.storage.k8s.io/s3.csi.aws.com

Name:         s3.csi.aws.com
Namespace:    
Labels:       app.kubernetes.io/component=csi-driver
              app.kubernetes.io/instance=aws-mountpoint-s3-csi-driver
              app.kubernetes.io/managed-by=EKS
              app.kubernetes.io/name=aws-mountpoint-s3-csi-driver
Annotations:  <none>
API Version:  storage.k8s.io/v1
Kind:         CSIDriver
Metadata:
  Creation Timestamp:  2024-02-07T02:01:48Z
  Resource Version:    5363335
  UID:                 c7037a7c-edc6-473b-bcab-4c9443cdef7f
Spec:
  Attach Required:     false
  Fs Group Policy:     ReadWriteOnceWithFSType
  Pod Info On Mount:   false
  Requires Republish:  false
  Se Linux Mount:      false
  Storage Capacity:    false
  Volume Lifecycle Modes:
    Persistent
Events:  <none>

This error does not appear when using the AL2 AMI.
However, even with the warning, I am still able to read data from the S3 mount.

What you expected to happen?
No warning messages.

How to reproduce it (as minimally and precisely as possible)?

Anything else we need to know?:

Environment

  • Kubernetes version (use kubectl version): v1.28
  • Driver version: v1.4.0-eksbuild.1

Input/output error while accessing an S3 file

Hello,

It seems I encountered a similar issue:
I've mounted my s3 bucket with this command: mount-s3 <bucket_name> <directory_to_associate>

It works and I can list files and directories in the bucket from my instance.
But when I run the cat command on one of these files, for example, I get this error:
cat: <filename>: Input/output error

If I fetch the same file on my laptop with the aws s3 CLI, it works and I can read the file's content.

This is the policy I've applied to my instance to access the bucket:

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Action": [
				"s3:*"
			],
			"Effect": "Allow",
			"Resource": "<bucket_arn>"
		}
	]
}
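One general point about S3 IAM (not a diagnosis specific to this report): object-level actions such as s3:GetObject are evaluated against the object ARN, so a policy that lists only the bucket ARN typically permits listing but not reading objects. A policy along these lines covers both; <bucket_arn> is the same placeholder used above.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": ["s3:ListBucket"],
            "Effect": "Allow",
            "Resource": "<bucket_arn>"
        },
        {
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Effect": "Allow",
            "Resource": "<bucket_arn>/*"
        }
    ]
}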

I hope my question helps and is in the right place.
Thank you

Originally posted by @pch05 in #142 (comment)

Installation failure

/kind bug

NOTE: If this is a filesystem related bug, please take a look at the Mountpoint repo to submit a bug report

What happened?
An error occurred while following the installation instructions here:
https://github.com/awslabs/mountpoint-s3-csi-driver/blob/main/docs/install.md#deploy-driver

When we try to install the driver via Helm, we get:
$ helm upgrade --install aws-mountpoint-s3-csi-driver \
    --namespace kube-system \
    aws-mountpoint-s3-csi-driver/aws-mountpoint-s3-csi-driver
Release "aws-mountpoint-s3-csi-driver" does not exist. Installing it now.
Error: Unable to continue with install: ServiceAccount "s3-csi-driver-sa" in namespace "kube-system" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: key "app.kubernetes.io/managed-by" must equal "Helm": current value is "eksctl"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "aws-mountpoint-s3-csi-driver"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "kube-system"
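The error message itself names the labels and annotations Helm expects, so one possible workaround (a sketch, not an officially documented fix) is to let Helm adopt the eksctl-created ServiceAccount before retrying the install:

# Adopt the existing ServiceAccount into the Helm release (verify the names first).
kubectl -n kube-system label serviceaccount s3-csi-driver-sa \
  app.kubernetes.io/managed-by=Helm --overwrite
kubectl -n kube-system annotate serviceaccount s3-csi-driver-sa \
  meta.helm.sh/release-name=aws-mountpoint-s3-csi-driver \
  meta.helm.sh/release-namespace=kube-system --overwrite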

When trying to install via the kubectl method:
$ kubectl apply -k "github.com/awslabs/mountpoint-s3-csi-driver/deploy/kubernetes/overlays/stable/"
Warning: resource serviceaccounts/s3-csi-driver-sa is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
serviceaccount/s3-csi-driver-sa configured
secret/aws-credentials-99tfgtg98h created
daemonset.apps/s3-csi-node created
csidriver.storage.k8s.io/s3.csi.aws.com created

$ kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-mountpoint-s3-csi-driver
No resources found in kube-system namespace.

What you expected to happen?
I expected the instructions to yield a working installation.

How to reproduce it (as minimally and precisely as possible)?
Follow the install instructions in the docs here:
https://github.com/awslabs/mountpoint-s3-csi-driver/blob/main/docs/install.md#deploy-driver

Anything else we need to know?:
The Helm chart seems broken; the kubectl method does not create a deployment in kube-system (or any other namespace).

Environment

  • Kubernetes version (use kubectl version):
    Client Version: v1.28.2
    Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
    Server Version: v1.25.16-eks-8cb36c9
  • Driver version: Latest available in Helm repository

Pod "Sometimes" cannot mount PVC in CSI version 1.4.0

Hello Team,

I am trying to test CSI driver 1.4.0 on K8s 1.27, but I found that sometimes the Pod cannot mount the PVC and the CSI driver Pod reports the error below:

I0331 13:53:37.687569       1 node.go:65] NodePublishVolume: req: volume_id:"s3-csi-driver-volume" target_path:"/var/lib/kubelet/pods/adaac085-e689-49f8-b9f5-d0467907d875/volumes/kubernetes.io~csi/comfyui-outputs-pv/mount" volume_capability:<mount:<mount_flags:"allow-delete" mount_flags:"region us-east-2" > access_mode:<mode:MULTI_NODE_MULTI_WRITER > > volume_context:<key:"bucketName" value:"comfyui-outputs-835894076989-us-east-2" > 
I0331 13:53:37.687634       1 node.go:112] NodePublishVolume: mounting comfyui-outputs-835894076989-us-east-2 at /var/lib/kubelet/pods/adaac085-e689-49f8-b9f5-d0467907d875/volumes/kubernetes.io~csi/comfyui-outputs-pv/mount with options [--allow-delete --region=us-east-2]
E0331 13:53:37.687730       1 driver.go:96] GRPC error: rpc error: code = Internal desc = Could not mount "comfyui-outputs-835894076989-us-east-2" at "/var/lib/kubelet/pods/adaac085-e689-49f8-b9f5-d0467907d875/volumes/kubernetes.io~csi/comfyui-outputs-pv/mount": Could not check if "/var/lib/kubelet/pods/adaac085-e689-49f8-b9f5-d0467907d875/volumes/kubernetes.io~csi/comfyui-outputs-pv/mount" is a mount point: stat /var/lib/kubelet/pods/adaac085-e689-49f8-b9f5-d0467907d875/volumes/kubernetes.io~csi/comfyui-outputs-pv/mount: no such file or directory, Failed to read /host/proc/mounts: open /host/proc/mounts: invalid argument

Even when mounting succeeds, the CSI driver Pod keeps emitting logs like the above. It seems the CSI driver keeps trying to mount the same PVC for the same Pod and failing. I am not sure whether anything is misconfigured.

These problems only happen on worker nodes scaled up by Karpenter for new deployments/pods in EKS v1.27.
All mount operations work normally on static worker nodes.

Workarounds:

  1. Delete the s3-csi-xxxxx Pod running on the Karpenter worker node.
  2. Use S3 CSI driver 1.0.0.

Looking forward to your support, thanks.

Can't mount s3 bucket. (Permission denied)

/kind bug
The problem does not exist in Kubernetes version 1.27.
Hello! I have a service account with a role that contains the following policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "s3:ListBucket",
            "Effect": "Allow",
            "Resource": [
                "bucket"
            ],
            "Sid": "S3ListBuckets"
        },
        {
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "bucket/*"
            ],
            "Sid": "S3CRUD"
        },
        {
            "Action": [
                "kms:ReEncrypt*",
                "kms:GetPublicKey",
                "kms:GenerateDataKey*",
                "kms:Encrypt",
                "kms:DescribeKey",
                "kms:Decrypt"
            ],
            "Effect": "Allow",
            "Resource": "key-arn",
            "Sid": "KMS"
        }
    ]
}

My PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .Values.volume.pvcName }}-pv
spec:
  capacity:
    storage: 1200Gi # ignored, required
  accessModes:
    - ReadWriteMany # supported options: ReadWriteMany / ReadOnlyMany
  mountOptions:
    - allow-delete
    - region eu-central-1
  csi:
    driver: s3.csi.aws.com # required
    volumeHandle: s3-csi-driver-volume
    volumeAttributes:
      bucketName: {{ .Values.volume.versions.s3VersionsBucketName }}

My PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Values.volume.pvcName }}-claim
spec:
  accessModes:
    - ReadWriteMany # supported options: ReadWriteMany / ReadOnlyMany
  storageClassName: "" # required for static provisioning
  resources:
    requests:
      storage: 1200Gi # ignored, required
  volumeName: {{ .Values.volume.pvcName }}-pv

But I encounter this problem in the logs:

1 node.go:65] NodePublishVolume: req: volume_id:"s3-csi-driver-volume" target_path:"/var/lib/kubelet/pods/420753ac-d284-4e86-bc6f-4083ae1de68c/volumes/kubernetes.io~csi/<volume>/mount" volume_capability:<mount:<mount_flags:"allow-delete" mount_flags:"region eu-central-1" > access_mode:<mode:MULTI_NODE_MULTI_WRITER > > volume_context:<key:"bucketName" value:"bucket" > 
1 node.go:112] NodePublishVolume: mounting bucket at /var/lib/kubelet/pods/420753ac-d284-4e86-bc6f-4083ae1de68c/volumes/kubernetes.io~csi/<pv>/mount with options [--allow-delete --region=eu-central-1]
1 driver.go:96] GRPC error: rpc error: code = Internal desc = Could not mount "bucket" at "/var/lib/kubelet/pods/420753ac-d284-4e86-bc6f-4083ae1de68c/volumes/kubernetes.io~csi/<pv>/mount": Mount failed: Failed to start service output: Error: Failed to create S3 client  Caused by:     0: initial ListObjectsV2 failed for bucket bucket in region eu-central-1     1: Client error     2: Forbidden: Access Denied Error: Failed to create mount process
 Could you please help me with this?

Environment

  • Kubernetes version (use kubectl version): 1.28
  • Driver version: 1.4.0 (same with 1.1.0)
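In case it helps narrow things down: for this driver version, static provisioning uses driver-level credentials (the node role, or an IRSA role attached to the driver's own s3-csi-driver-sa service account), not the workload pod's service account; this is my reading of the install docs rather than a confirmed diagnosis of this report. A sketch of the driver-level IRSA wiring, with a placeholder role ARN:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-csi-driver-sa
  namespace: kube-system
  annotations:
    # Placeholder account ID and role name; the role must carry the S3/KMS policy above.
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/s3-csi-driver-role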

Mount 2 buckets to a pod

When I mount 2 PVCs backed by 2 different buckets into a pod, the pod does not start and is stuck in the Init state.

I expect the pod to start running.

You can reproduce it by mounting 2 different PVCs with 2 different buckets into a pod; a sketch follows below.
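A sketch of the two-bucket setup. Note that each PersistentVolume needs its own unique volumeHandle; the examples elsewhere in this document all reuse s3-csi-driver-volume, and reusing one handle for two buckets is an assumed cause here, not a confirmed one. Bucket and PV names are placeholders.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv-bucket-a
spec:
  capacity:
    storage: 1Gi # ignored, required
  accessModes:
    - ReadWriteMany
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-csi-bucket-a   # unique per PV
    volumeAttributes:
      bucketName: bucket-a
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv-bucket-b
spec:
  capacity:
    storage: 1Gi # ignored, required
  accessModes:
    - ReadWriteMany
  csi:
    driver: s3.csi.aws.com
    volumeHandle: s3-csi-bucket-b   # unique per PV
    volumeAttributes:
      bucketName: bucket-b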

  • Kubernetes version: 1.29

Dynamic provisioning

/feature

Is your feature request related to a problem? Please describe.
For each PersistentVolumeClaim that I want backed by S3, I need to provision a PersistentVolume with a prefix.

Describe the solution you'd like in detail
I would like to be able to create a PersistentVolumeClaim with an s3-mountpoint (?) storageClassName and have a PersistentVolume created automatically with the prefix mount option.
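A purely illustrative sketch of what the requested StorageClass might look like; dynamic provisioning is the feature being asked for, not an existing capability of the driver at this version, and the parameter names are hypothetical.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: s3-mountpoint
provisioner: s3.csi.aws.com   # hypothetical use of the driver as a provisioner
parameters:
  bucketName: my-bucket       # hypothetical parameter
# The provisioner would then create a PV per claim, e.g. with a per-PVC prefix
# mount option, as described above.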

Describe alternatives you've considered
For each Helm chart we deploy with a PVC, we could include a PV as well. But this is not a clean solution, because as a service owner, I don't want to worry about the technology backing my storage. I just want to create a PVC and be done with it.

Additional context
We have a cluster with multiple tenants. Each tenant has a set of services that use PVCs. Deploying and maintaining a PV for every volume is a lot of overhead, and dynamic provisioning would make my setup cleaner and my life easier.

Add capability to specify the S3 bucket prefix in 'volumeAttributes'

/feature

Problem statement
I was attempting to specify an S3 prefix in 'spec.csi.volumeAttributes'. Either it is not supported or it is not documented. I used bucketprefix as the parameter, but it didn't work.

Expected outcome
I checked all the examples and documentation. There is no documented way to use an S3 prefix as the mount point for the 'PersistentVolume', even though the underlying tool, mountpoint-s3, clearly has the capability to do it.

Here is the YAML manifest I attempted,

apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv
spec:
  capacity:
    storage: 1Gi # ignored, required
  accessModes:
    - ReadWriteMany # supported options: ReadWriteMany / ReadOnlyMany
  mountOptions:
    - allow-delete
    - region us-east-1
  csi:
    driver: s3.csi.aws.com # required
    volumeHandle: s3-csi-driver-volume
    volumeAttributes:
      bucketName: s3-csi-driver
      bucketprefix: data/
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: "s3"
  resources:
    requests:
      storage: 1Gi
  volumeName: s3-pv
---
apiVersion: v1
kind: Pod
metadata:
  name: s3-app
spec:
  containers:
    - name: app
      image: centos
      command: ["/bin/sh"]
      args: ["-c", "echo 'Hello from the container!' >> /data/$(date -u).txt; tail -f /dev/null"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: s3-claim

Alternatives considered
I tried different parameter names in 'volumeAttributes', but none of them worked, and I found no other alternative.

Additional context
If this is already possible, are you able to provide an example on how to do this?
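If it helps in the meantime: Mountpoint itself has a prefix option, and since the driver passes mountOptions through to mount-s3 (as it does for allow-delete and region in the manifest above), the prefix can likely be expressed there rather than in volumeAttributes. A sketch, assuming that pass-through behaviour:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv
spec:
  capacity:
    storage: 1Gi # ignored, required
  accessModes:
    - ReadWriteMany
  mountOptions:
    - allow-delete
    - region us-east-1
    - prefix data/   # Mountpoint's --prefix option; assumes mountOptions pass-through
  csi:
    driver: s3.csi.aws.com # required
    volumeHandle: s3-csi-driver-volume
    volumeAttributes:
      bucketName: s3-csi-driver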

Make container resources configurable

/feature

Is your feature request related to a problem? Please describe.
We require every container in every pod running in our cluster to have resource requests and limits set. This is not yet possible for the driver's containers.

Describe the solution you'd like in detail
I will submit a PR
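A rough sketch of the kind of configuration being requested; the exact values.yaml keys are assumptions and may differ from what the eventual PR adds.

# Illustrative only; key names are assumed, not confirmed against the chart.
node:
  resources:
    requests:
      cpu: 10m
      memory: 40Mi
    limits:
      memory: 256Mi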
