MinIO Operator

MinIO

MinIO is a Kubernetes-native high performance object store with an S3-compatible API. The MinIO Kubernetes Operator supports deploying MinIO Tenants onto private and public cloud infrastructures ("Hybrid" Cloud).

This README provides a high level description of the MinIO Operator and quickstart instructions. See https://min.io/docs/minio/kubernetes/upstream/index.html for complete documentation on the MinIO Operator.

Architecture

Each MinIO Tenant represents an independent MinIO Object Store within the Kubernetes cluster. The following diagram describes the architecture of a MinIO Tenant deployed into Kubernetes:

Tenant Architecture

MinIO provides multiple methods for accessing and managing the MinIO Tenant:

MinIO Console

The MinIO Console provides a graphical user interface (GUI) for interacting with MinIO Tenants. The MinIO Operator installs and configures the Console for each tenant by default.

Console Dashboard

Administrators of MinIO Tenants can perform a variety of tasks through the Console, including user creation, policy configuration, and bucket replication. The Console also provides a high level view of Tenant health, usage, and healing status.

For more complete documentation on using the MinIO Console, see the MinIO Console GitHub Repository.

Deploy the MinIO Operator and Create a Tenant

This procedure installs the MinIO Operator and creates a 4-node MinIO Tenant for supporting object storage operations in a Kubernetes cluster.

Prerequisites

Kubernetes 1.21 or Later

Starting with Operator v5.0.0, MinIO requires Kubernetes version 1.21.0 or later. You must upgrade your Kubernetes cluster to 1.21.0 or later to use Operator v5.0.0+.

Operator v4.0.0 through v4.x.x require Kubernetes version 1.19.0 or later, while versions of the Operator prior to v4.0.0 support Kubernetes 1.17.0 or later.

This procedure assumes the host machine has kubectl installed and configured with access to the target Kubernetes cluster.
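
You can confirm the cluster version with:

kubectl version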

MinIO Tenant Namespace

MinIO supports no more than one MinIO Tenant per Namespace. The following kubectl command creates a new namespace for the MinIO Tenant.

kubectl create namespace minio-tenant

The MinIO Operator Console supports creating a namespace as part of the Tenant Creation procedure.

Tenant Storage Class

The MinIO Kubernetes Operator automatically generates Persistent Volume Claims (PVC) as part of deploying a MinIO Tenant.

The Operator defaults to creating each PVC with the default Kubernetes Storage Class. If the default storage class cannot support the generated PVC, the tenant may fail to deploy.

MinIO Tenants require a StorageClass that sets volumeBindingMode to WaitForFirstConsumer. The default StorageClass may use the Immediate setting, which can cause complications during PVC binding. MinIO strongly recommends creating a custom StorageClass for the PVs supporting a MinIO Tenant.

The following StorageClass object contains the appropriate fields for supporting a MinIO Tenant using MinIO DirectPV-managed drives:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: directpv-min-io
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
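
Assuming the object above is saved as directpv-storage-class.yaml (a hypothetical filename), create it with:

kubectl apply -f directpv-storage-class.yaml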

Tenant Persistent Volumes

The MinIO Operator generates one Persistent Volume Claim (PVC) for each volume in the tenant, plus two PVCs to support collecting Tenant metrics and logs. The cluster must have sufficient Persistent Volumes (PVs) meeting the capacity requirements of each PVC for the tenant to start correctly. For example, deploying a Tenant with 16 volumes requires 18 PVs (16 + 2). If each PVC requests 1TB of capacity, then each PV must also provide at least 1TB of capacity.

MinIO recommends using the MinIO DirectPV Driver to automatically provision Persistent Volumes from locally attached drives. This procedure assumes MinIO DirectPV is installed and configured.
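
If DirectPV was installed through its kubectl plugin, you can confirm the installation and the drives it manages with the following (this assumes the kubectl-directpv plugin is available on your PATH):

kubectl directpv info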

For clusters which cannot deploy MinIO DirectPV, use Local Persistent Volumes. The following YAML describes a local PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: <PV-NAME>
spec:
  capacity:
    storage: 1Ti
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: </mnt/disks/ssd1>
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <NODE-NAME>

Replace values in brackets <VALUE> with the appropriate value for the local drive.
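
Assuming the manifest above is saved as local-pv.yaml (a hypothetical filename), create the PV with:

kubectl apply -f local-pv.yaml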

You can estimate the baseline number of PVCs by multiplying the number of minio server pods in the Tenant by the number of drives per node. For example, a 4-node Tenant with 4 drives per node requires 16 PVCs and therefore 16 PVs, in addition to the two PVs for metrics and logs described above.

MinIO strongly recommends using DirectPV for creating local PVs to ensure the best object storage performance.

Procedure

1) Install the MinIO Operator via Kustomization

The standard kubectl tool ships with support for kustomize out of the box, so you can use it to install the MinIO Operator.

kubectl apply -k "github.com/minio/operator?ref=v5.0.14"

Run the following command to verify the status of the Operator:

kubectl get pods -n minio-operator

The output resembles the following:

NAME                              READY   STATUS    RESTARTS   AGE
console-6b6cf8946c-9cj25          1/1     Running   0          99s
minio-operator-69fd675557-lsrqg   1/1     Running   0          99s

The console-* pod runs the MinIO Operator Console, a graphical user interface for creating and managing MinIO Tenants.

The minio-operator-* pod runs the MinIO Operator itself.

2) Access the Operator Console via NodePort

Get the token:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: console-sa-secret
  namespace: minio-operator
  annotations:
    kubernetes.io/service-account.name: console-sa
type: kubernetes.io/service-account-token
EOF
SA_TOKEN=$(kubectl -n minio-operator  get secret console-sa-secret -o jsonpath="{.data.token}" | base64 --decode)
echo $SA_TOKEN

Change the console service to use NodePort:

spec:
  ports:
    - name: http
      protocol: TCP
      port: 9090
      targetPort: 9090
      nodePort: 30080   # using this port on the node
    - name: https
      protocol: TCP
      port: 9443
      targetPort: 9443
      nodePort: 30869
  selector:
    app: console
  clusterIP: 10.96.69.150
  clusterIPs:
    - 10.96.69.150
  type: NodePort        # using NodePort

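One way to make this change, assuming the Operator Console service is named console in the minio-operator namespace, is to patch the service type and let Kubernetes assign the node ports automatically:

kubectl -n minio-operator patch svc console -p '{"spec": {"type": "NodePort"}}'
kubectl -n minio-operator get svc console

The second command displays the assigned node ports. To pin specific ports such as 30080, edit the service with kubectl edit and set nodePort as shown above.
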
Open your browser to http://<node-ip>:30080 (the HTTP NodePort configured above) and use the JWT token from the previous step to log in to the Operator Console.

Operator Console

Click + Create Tenant to open the Tenant Creation workflow.

3) Build the Tenant Configuration

The Operator Console Create New Tenant walkthrough builds out a MinIO Tenant. The following list describes the basic configuration sections.

  • Name - Specify the Name, Namespace, and Storage Class for the new Tenant.

    The Storage Class must correspond to a Storage Class backed by Local Persistent Volumes that can support the MinIO Tenant.

    The Namespace must correspond to an existing Namespace that does not contain any other MinIO Tenant.

    Enable Advanced Mode to access additional advanced configuration options.

  • Tenant Size - Specify the Number of Servers, Number of Drives per Server, and Total Size of the Tenant.

    The Resource Allocation section summarizes the Tenant configuration based on the inputs above.

    Additional configuration inputs may be visible if Advanced Mode was enabled in the previous step.

  • Preview Configuration - summarizes the details of the new Tenant.

After configuring the Tenant to your requirements, click Create to create the new tenant.

The Operator Console displays credentials for connecting to the MinIO Tenant. You must download and secure these credentials at this stage. You cannot trivially retrieve these credentials later.

You can monitor Tenant creation from the Operator Console.
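
You can also watch progress from the command line by querying the Tenant resource. The namespace below assumes the Tenant was created in minio-tenant:

kubectl get tenants -n minio-tenant -w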

4) Connect to the Tenant

Use the following command to list the services created by the MinIO Operator:

kubectl get svc -n NAMESPACE

Replace NAMESPACE with the namespace for the MinIO Tenant. The output resembles the following:

NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)
minio                       LoadBalancer   10.104.10.9      <pending>     443:31834/TCP
myminio-console             LoadBalancer   10.104.216.5     <pending>     9443:31425/TCP
myminio-hl                  ClusterIP      None             <none>        9000/TCP
myminio-log-hl-svc          ClusterIP      None             <none>        5432/TCP
myminio-log-search-api      ClusterIP      10.102.151.239   <none>        8080/TCP
myminio-prometheus-hl-svc   ClusterIP      None             <none>        9090/TCP

Applications internal to the Kubernetes cluster should use the minio service for performing object storage operations on the Tenant.

Administrators of the Tenant should use the myminio-console service to access the MinIO Console and manage the Tenant, such as provisioning users, groups, and policies for the Tenant.
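
For example, a pod inside the cluster could point the MinIO Client (mc) at the Tenant through the minio service; the alias name, namespace, and credential placeholders below are illustrative:

mc alias set mytenant https://minio.minio-tenant.svc.cluster.local ACCESS_KEY SECRET_KEY
mc admin info mytenant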

MinIO Tenants deploy with TLS enabled by default, where the MinIO Operator uses the Kubernetes certificates.k8s.io API to generate the required x.509 certificates. Each certificate is signed using the Kubernetes Certificate Authority (CA) configured during cluster deployment. While Kubernetes mounts this CA on Pods in the cluster, Pods do not trust that CA by default. You must copy the CA to a directory such that the update-ca-certificates utility can find and add it to the system trust store to enable validation of MinIO TLS certificates:

cp /var/run/secrets/kubernetes.io/serviceaccount/ca.crt /usr/local/share/ca-certificates/
update-ca-certificates
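
Note that update-ca-certificates is the Debian/Ubuntu utility; on RHEL-based systems, copy the CA into /etc/pki/ca-trust/source/anchors/ and run update-ca-trust instead.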

For applications external to the Kubernetes cluster, you must configure Ingress or a Load Balancer to expose the MinIO Tenant services. Alternatively, you can use the kubectl port-forward command to temporarily forward traffic from the local host to the MinIO Tenant.
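
For example, the following forwards local port 9443 to the Tenant's Console service, using the service name from the output above:

kubectl port-forward svc/myminio-console 9443:9443 -n NAMESPACE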

License

Use of MinIO Operator is governed by the GNU AGPLv3 or later, found in the LICENSE file.

Explore Further

MinIO Hybrid Cloud Storage Documentation

GitHub Resources

Issues

Named Service Port for Minio Service

Currently, a named service port for the minio-service created by the operator would be useful for discovery by ingress controllers (istio, heptio-contour...)

rbac issue with latest

minio operator pod reports :

E0501 06:07:52.821618       1 reflector.go:134] k8s.io/[email protected]/tools/cache/reflector.go:95: Failed to list *v1beta1.MinIOInstance: minioinstances.miniocontroller.min.io is forbidden: User "system:serviceaccount:minio:minio-operator-sa" cannot list resource "minioinstances" in API group "miniocontroller.min.io" at the cluster scope

Allow `externalCertSecret` to be of type `kubernetes.io/tls`

At the moment minio-operator expects the externalCertSecret to have the keys public.crt and private.key.

Kubernetes has a specific kind of secret with the type kubernetes.io/tls, which has the keys ca.crt, tls.crt and tls.key. This kind of secret is even generated by tools like cert-manager.

It would be great, if minio-operator could support those secrets, maybe by adding a new field named type under externalCertSecret:

spec:
  externalCertSecret:
    name: my-example-minio-service-tls
    type: kubernetes.io/tls
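
For reference, a kubernetes.io/tls secret (for example, one issued by cert-manager) carries its material under these keys; the secret name is the same placeholder as above:

apiVersion: v1
kind: Secret
metadata:
  name: my-example-minio-service-tls
type: kubernetes.io/tls
data:
  ca.crt: <base64-encoded CA certificate>
  tls.crt: <base64-encoded server certificate>
  tls.key: <base64-encoded private key>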

imagePullSecrets not being honoured...

We would like to pull the image from a private repository secured by a Kubernetes secret. Typically we do the following:

  spec:
      imagePullSecrets:
      - name: {{ .Values.global.imagePullSecret }}

How do we specify this for example in the minioinstances.yaml example in this repo?

Add Prometheus annotations to the service

Add prometheus annotations to the service:

  kind: Service
  metadata:
    annotations:
      prometheus.io/path: /minio/prometheus/metrics
      prometheus.io/port: "9000"
      prometheus.io/scrape: "true"

This will allow Prometheus automatic scraping (alternatively, add a service annotations custom field to the CRD).

YAML does not reflect released version

The tagged releases of the operator do not seem to set the version in minio-operator.yaml, which would be beneficial to allow installation of a specific version and updates of the operator to a specific version by running:

$ kubectl apply -f https://raw.githubusercontent.com/minio/minio-operator/<version>/minio-operator.yaml

This will, however, deploy version 1.0.4 for all current versions, and 1.0.7 only for latest, because of 3c7e5ae.

Observed a panic: runtime.boundsError{x:0, y:0, signed:true, code:0x0} (runtime error: index out of range [0] with length 0)

operator: docker.io/minio/k8s-operator:1.0.8
kubernetes : Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}

E0421 05:53:16.217716       1 runtime.go:66] Observed a panic: runtime.boundsError{x:0, y:0, signed:true, code:0x0} (runtime error: index out of range [0] with length 0)
/home/harsha/mygo/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:72
/home/harsha/mygo/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:65
/home/harsha/mygo/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:51
/home/harsha/.gimme/versions/go1.13.10.linux.amd64/src/runtime/panic.go:679
/home/harsha/.gimme/versions/go1.13.10.linux.amd64/src/runtime/panic.go:75
/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/resources/statefulsets/statefulset.go:144
/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/resources/statefulsets/statefulset.go:251
/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/controller/cluster/controller.go:333
/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/controller/cluster/controller.go:248
/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/controller/cluster/controller.go:256
/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/controller/cluster/controller.go:209
/home/harsha/mygo/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133
/home/harsha/mygo/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:134
/home/harsha/mygo/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
/home/harsha/.gimme/versions/go1.13.10.linux.amd64/src/runtime/asm_amd64.s:1357
panic: runtime error: index out of range [0] with length 0 [recovered]
	panic: runtime error: index out of range [0] with length 0
goroutine 115 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/home/harsha/mygo/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:58 +0x105
panic(0x1146dc0, 0xc0002860c0)
	/home/harsha/.gimme/versions/go1.13.10.linux.amd64/src/runtime/panic.go:679 +0x1b2
github.com/minio/minio-operator/pkg/resources/statefulsets.minioServerContainer(0xc0000f0480, 0xc0003eb500, 0x17, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/resources/statefulsets/statefulset.go:144 +0x678
github.com/minio/minio-operator/pkg/resources/statefulsets.NewForCluster(0xc0000f0480, 0xc0003eb500, 0x17, 0x8)
	/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/resources/statefulsets/statefulset.go:251 +0x37c
github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).syncHandler(0xc000300000, 0xc0001d86c0, 0x16, 0xc00056a2d0, 0xc0000aed28)
	/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/controller/cluster/controller.go:333 +0xcd2
github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).processNextWorkItem.func1(0xc000300000, 0x1032a60, 0xc0001e4c30, 0x0, 0x0)
	/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/controller/cluster/controller.go:248 +0x169
github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).processNextWorkItem(0xc000300000, 0x0)
	/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/controller/cluster/controller.go:256 +0x50
github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).runWorker(0xc000300000)
	/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/controller/cluster/controller.go:209 +0x2b
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00021a040)
	/home/harsha/mygo/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x5e
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00021a040, 0x3b9aca00, 0x0, 0x1, 0xc0000a69c0)
	/home/harsha/mygo/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:134 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(0xc00021a040, 0x3b9aca00, 0xc0000a69c0)
	/home/harsha/mygo/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88 +0x4d
created by github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).Run
	/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/controller/cluster/controller.go:195 +0x29a

Minio cluster doesn't initialize on Kind v0.4.0 with Kubernetes 1.15.0

I'm finding that, with the current version of kind, the Minio operator doesn't deploy properly. When the problem occurs, the Minio pods themselves reach the Running state but never initialize:

k logs minio-0 -f
Waiting for all other servers to be online to format the disks.
Waiting for configuration to be initialized..
Waiting for configuration to be initialized..
Waiting for configuration to be initialized..

It continues to log Waiting for configuration to be initialized persistently past this point, and the Minio Golang client hits the same problem (e.g. buckets can't be created, failing with the error Server not initialized, please try again). The UI is accessible during this time but also responds with Server not initialized, please try again.

The following are the steps to recreate this issue:

GO111MODULE="on" go get sigs.k8s.io/[email protected] && kind create cluster
kind create cluster --image kindest/node:v1.15.0
export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl create -f https://github.com/minio/minio-operator/blob/master/docs/minio-operator.yaml?raw=true
kubectl create -f https://github.com/minio/minio-operator/blob/master/docs/minio-examples/minio-secret.yaml?raw=true
kubectl create -f https://github.com/minio/minio-operator/blob/master/docs/minio-examples/minioinstance.yaml?raw=true

When the problem occurs, there are no logs emitted from the operator pod. This problem is affecting my CI tests wherein integration tests run against a kind cluster.

Dynamic PVC using storageclass fails because name is nil

Requesting a new MinIO instance with a dynamic PVC doesn't work with the given example in this repository. You need to define the name in the volumeClaimTemplate and in the volumeClaimTemplate.metadata to get it working; otherwise you'll get the following error in the generated statefulset:

Events:
  Type     Reason        Age                From                    Message
  ----     ------        ----               ----                    -------
  Warning  FailedCreate  1s (x12 over 11s)  statefulset-controller  create Claim -minio-0 for Pod minio-0 in StatefulSet minio failed error: PersistentVolumeClaim "-minio-0" is invalid: metadata.name: Invalid value: "-minio-0": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
  Warning  FailedCreate  1s (x12 over 11s)  statefulset-controller  create Pod minio-0 in StatefulSet minio failed error: Failed to create PVC -minio-0: PersistentVolumeClaim "-minio-0" is invalid: metadata.name: Invalid value: "-minio-0": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

This is because the name is nil:

$ kubectl describe statefulset minio

Name:               minio
Namespace:          default
CreationTimestamp:  Fri, 11 Oct 2019 16:12:34 +0200
Selector:           v1beta1.min.io/instance=minio
Labels:             <none>
Annotations:        <none>
Replicas:           2 desired | 0 total
Update Strategy:    RollingUpdate
Pods Status:        0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:       app=minio
                v1beta1.min.io/instance=minio
  Annotations:  prometheus.io/path: /minio/prometheus/metrics
                prometheus.io/port: 9000
                prometheus.io/scrape: true
  Containers:
   minio:
    Image:      minio/minio:RELEASE.2019-10-11T00-38-09Z
    Port:       9000/TCP
    Host Port:  0/TCP
    Args:
      server
      http://minio-0.minio-hl-svc.default.svc.cluster.local/export
      http://minio-1.minio-hl-svc.default.svc.cluster.local/export
    Requests:
      cpu:     250m
      memory:  512Mi
    Liveness:  http-get http://:9000/minio/health/live delay=120s timeout=1s period=20s #success=1 #failure=3
    Environment:
      MINIO_BROWSER:     on
      MINIO_ACCESS_KEY:  <set to the key 'accesskey' in secret 'minio-creds-secret'>  Optional: false
      MINIO_SECRET_KEY:  <set to the key 'secretkey' in secret 'minio-creds-secret'>  Optional: false
    Mounts:
      /export from  (rw)
  Volumes:  <none>
Volume Claims:
  Name:
  StorageClass:  csi-archive
  Labels:        <none>
  Annotations:   <none>
  Capacity:      10Gi
  Access Modes:  [ReadWriteOnce]

The MinIOInstance below works:

apiVersion: miniocontroller.min.io/v1beta1
kind: MinIOInstance
metadata:
  name: minio
spec:
  ## Add metadata to the pods created by the StatefulSet
  metadata:
    labels:
      app: minio
    annotations:
      prometheus.io/path: /minio/prometheus/metrics
      prometheus.io/port: "9000"
      prometheus.io/scrape: "true"
  ## Registry location and Tag to download MinIO Server image
  image: minio/minio:RELEASE.2019-10-11T00-38-09Z
  ## Secret with credentials to be used by MinIO instance.
  credsSecret:
    name: minio-creds-secret
  ## Supply number of replicas.
  ## For standalone mode, supply 1. For distributed mode, supply 4 or more (should be even).
  ## Note that the operator does not support upgrading from standalone to distributed mode.
  replicas: 2
  ## PodManagement policy for pods created by StatefulSet. Can be "OrderedReady" or "Parallel"
  ## Refer https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#pod-management-policy
  ## for details. Defaults to "Parallel"
  podManagementPolicy: Parallel
  ## Enable Kubernetes based certificate generation and signing as explained in
  ## https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster
  requestAutoCert: false
  ## Used when "requestAutoCert" is set to true. Set CommonName for the auto-generated certificate.
  ## Internal DNS name for the pod will be used if CommonName is not provided.
  certConfig:
    commonName: ""
    organizationName: []
    dnsNames: []

  ## Used to specify a toleration for a pod
  #tolerations:
  #  - effect: NoSchedule
  #    key: dedicated
  #    operator: Equal
  #    value: storage
  ## Add environment variables to be set in MinIO container (https://github.com/minio/minio/tree/master/docs/config)
  env:
    - name: MINIO_BROWSER
      value: "on"
    # - name: MINIO_STORAGE_CLASS_RRS
    #   value: "EC:2"
  ## Configure resource requests and limits for MinIO containers
  resources:
    requests:
      memory: 512Mi
      cpu: 250m
  ## Liveness probe detects situations where MinIO server instance
  ## is not working properly and needs restart. Kubernetes automatically
  ## restarts the pods if liveness checks fail.
  liveness:
    httpGet:
      path: /minio/health/live
      port: 9000
    initialDelaySeconds: 120
    periodSeconds: 20
  ## Readiness probe detects situations when MinIO server instance
  ## is not ready to accept traffic. Kubernetes doesn't forward
  ## traffic to the pod while readiness checks fail.
  ## Recommended to be used only for standalone MinIO Instances. (replicas = 1)
  # readiness:
  #   httpGet:
  #     path: /minio/health/ready
  #     port: 9000
  #   initialDelaySeconds: 120
  #   periodSeconds: 20
  ## Affinity settings for MinIO pods. Read more about affinity
  ## here: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity.
  # affinity:
  ## Secret with certificates to configure TLS for MinIO certs. Create secrets as explained
  ## here: https://github.com/minio/minio/tree/master/docs/tls/kubernetes#2-create-kubernetes-secret
  # externalCertSecret:
  # name: tls-ssl-minio
  ## Mountpath where PV will be mounted inside container(s). Defaults to "/export".
  # mountPath: /export
  ## Subpath inside Mountpath where MinIO starts. Defaults to "".
  # subPath: /data
  volumeClaimTemplate:
    name: data
    metadata:
      name: data
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: csi-archive

[Feature Request] Custom DNS/naming scheme for StatefulSet

This is a request to allow to change the naming scheme used by the generated StatefulSet. Right now, I'm generating the StatefulSet and then manually updating the args.

Why? I have setup Minio with a valid public certificate that contains <service>.<domain> and <host-X>.<service>.<domain>. This means that my clients trust connections to the cluster, but that I have to manually update the args to be a list of <host-X>.<service>.<domain> because the svc.cluster.local is hardcoded into minio operator (as well as <instance>-hl-svc), and I can't get a public certificate for svc.cluster.local.

The easiest solution would be to allow replacing both the cluster name and service name in the arg generation.

Allow to set the serviceAccountName and securityContext for the managed containers

This is a feature-request:

I try to use minio-operator in a self-managed internal Kubernetes-cluster.
Our default PodSecurityPolicy sets all containers to a read-only root-filesystem, so starting the minio-container results in ERROR Unable to create directory specified config-dir=/.minio: mkdir /.minio: read-only file system.

We have an additional PSP available, that allows writing to the containers root-filesystem, but to assign this, I need to specify a serviceAccountName that is allowed to use this PSP and to set the containers securityContext to disable read-only for the root-filesystem.

Most likely the next problem will be that we do not run the pods as root, so the user (nobody) will not be allowed to create/write the specified directory, but that should be a new issue.

Add custom tolerations to the statefulsets

Please, can you add support for the following field in the statefulset to be available for configuration in the minioinstance CRD.

apiVersion: apps/v1
kind: StatefulSet
spec:
  template:
    spec:
      tolerations:
      - effect: NoSchedule
        key: dedicated
        operator: Equal
        value: storage

Thanks.

Update Example on Minio Operator CSV

While attempting to deploy the example MinIOInstance shown on operatorhub.io, I discovered the example did not deploy because the replicas[1] and requestAutoCert[2] values were quoted. However, the Custom Resource Definition defines the mentioned fields as integer and boolean respectively.

The recommended changes have been added to the gist for your review[3].

In addition, I noticed the secret is not rendering in the operatorhub UI. I would recommend including that in the description of the operator to ensure users know to create the secret.

[1] https://github.com/operator-framework/community-operators/blob/349d34d8faab26d02e1f4645217eb82db276ef5e/upstream-community-operators/minio-operator/1.0.3/minio-operator.v1.0.3.clusterserviceversion.yaml#L35
[2] https://github.com/operator-framework/community-operators/blob/349d34d8faab26d02e1f4645217eb82db276ef5e/upstream-community-operators/minio-operator/1.0.3/minio-operator.v1.0.3.clusterserviceversion.yaml#L39
[3] https://gist.github.com/OchiengEd/f9093ad81492854b127b3cd2a30d2d07

add custom scheduler to the statefulset

Please add support for the following field in the statefulset to be available for configuration in the minioinstance CRD.

kind: StatefulSet
spec:
  template:
    spec:
      schedulerName: my-custom-scheduler

Cluster Scope or Namespaced Scope

I see in the docs that we can run the operator at cluster or namespaced scope. As far as I know, namespaced-scope operators are generally preferred. Could someone suggest what is best for minio? Does running at cluster scope have any issues? This is just a question out of curiosity :)

Operator Capabilities and Prometheus ServiceMonitor

Based on the operator levels on operatorhub.io, I think the Minio operator's capabilities can be marked as level 4. This is based on the fact that the minio operand or cluster exposes a metrics endpoint. From the diagram below, that is one of the functionalities sought after in level 4 operators.

Levels

However, given that the metrics endpoint is non-standard, I would recommend auto-creating a Prometheus servicemonitor resource. I have started working on that in a branch in my fork in case you find it interesting.

Allow selector in minioinstance spec

I was trying to deploy minio via minio-operator on OpenEBS, which is a container-attached storage solution. Everything went well until I tried to deploy the minio instance and the OpenEBS target pod on the same node, which I believe can improve performance.

To do this, I need to add a selector (see here) in the statefulset according to the docs, but the minio-operator does not support it. The MinioInstanceSpec has no selector field, and this code builds the selector by itself.

I'd like to add a field to MinioInstanceSpec like this:

Selector *metav1.LabelSelector `json:"selector,omitempty"`

and append it when we call NewForCluster in pkg/resources/statefulsets.go.

Operator scoped to namespace

Hi
I'm currently deploying an operator-per-namespace setup in my Kubernetes cluster. I've noticed that, by default, this operator works globally and operates on all namespaces. However, I'd like the option to limit its scope to the namespace the operator is deployed in, or more flexibly, to have freedom of choice over which namespaces are watched (like https://strimzi.io/docs/master/#deploying-cluster-operator-to-watch-multiple-namespacesstr).
I've searched through the examples but found no way to do it. How is it currently managed? Is there any chance of having this feature added?

Feature request: trigger self healing

According to minio/minio#6442 self healing after a node outage is not done automatically and has to be done manually by an administrator or periodically via cron.

It makes absolute sense for the operator to handle this as well.

Use case:

  • trigger a heal if a node returns after an outage and remains in the cluster for a configurable time

Document how to use multiple volumes per node

I am looking to set up minio-operator in standalone mode with 4 disks. I cannot find documentation or best practices for attaching multiple drives. I see that /export is where a single PV may be mounted, but how do I tell the operator that I have multiple?

Thanks a lot in advance.

minio-hl-svc has no ClusterIP

I am working through the minio-operator/README.md. The operator installed successfully in its own namespace "minio-operator-ns". The minio instance installed without error (kubectl create -f ...) and resulted in a pod "minio-*" and a secret as expected. When checking kubectl get svc, the minio-hl-svc has a ClusterIP of None.

NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-node     LoadBalancer   10.111.103.95   <pending>     8080:32164/TCP   8m28s
kubernetes     ClusterIP      10.96.0.1       <none>        443/TCP          50m
minio-hl-svc   ClusterIP      None            <none>        9000/TCP         19s

When executing the steps from minio/README.md manually the install works perfectly.

I have changed the image in the minioinstance.yaml to minio/minio:RELEASE.2019-10-12T01-39-57Z.

Switching it to the original minio-image-tag minio/minio:RELEASE.2019-09-11T19-53-16Z did not make a difference.

kubernetes is on:

λ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

Error or user problem?

why we need `mirror_controller`

I find that pkg/controller/cluster/controller.go is very similar to pkg/controller/mirror/mirror_controller.go, but we're only using NewController from pkg/controller/cluster/controller.go.
Why do we need pkg/controller/mirror/mirror_controller.go?

minio-operator.yaml error in Kubernetes v1.16

With Kubernetes v1.16, the Deployment apiVersion apps/v1beta1 has been removed, so applying minio-operator.yaml produces an error:

[root@node49 ~]# kubectl create -f https://github.com/minio/minio-operator/blob/master/minio-operator.yaml?raw=true
namespace/minio-operator-ns created
customresourcedefinition.apiextensions.k8s.io/minioinstances.miniocontroller.min.io created
clusterrole.rbac.authorization.k8s.io/minio-operator-role created
serviceaccount/minio-operator-sa created
clusterrolebinding.rbac.authorization.k8s.io/minio-operator-binding created
error: unable to recognize "https://github.com/minio/minio-operator/blob/master/minio-operator.yaml?raw=true": no matches for kind "Deployment" in version "apps/v1beta1"
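
The fix on Kubernetes v1.16+ is to move the Deployment to the GA API group, available since Kubernetes 1.9; a minimal sketch, with the rest of the manifest unchanged:

apiVersion: apps/v1
kind: Deployment

Note that apps/v1 also requires spec.selector to be set explicitly on Deployments.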

minioinstances.miniocontroller.min.io vs. minioinstances.miniooperator.min.io

I had just deployed a new instance of the minio-operator when I noticed errors indicating the operator could not find the requested minioinstance resource minioinstances.miniocontroller.min.io [1].

A review of the CRDs installed (see [2] and [3] below) suggests the domain should be minioinstances.miniooperator.min.io. Are these one and the same resource and is there something being done to resolve this already?

[1]

E0506 20:00:10.046978       1 reflector.go:134] k8s.io/[email protected]/tools/cache/reflector.go:95: Failed to list *v1beta1.MinIOInstance: the server could not find the requested resource (get minioinstances.miniocontroller.min.io)

[2]

❯ kubectl get crds minioinstances.miniooperator.min.io                        
NAME                                  CREATED AT
minioinstances.miniooperator.min.io   2020-05-06T19:50:19Z

[3]

❯ kubectl explain minioinstances                                   
KIND:     MinIOInstance
VERSION:  miniooperator.min.io/v1beta1

DESCRIPTION:
     <empty>

error when applying the CRD

When applying the updated CRD in the cluster, I got this error:

 kubectl apply -n minio-operator -f docs/minio-operator/minio-operator.yaml
error: error validating "docs/minio-operator/minio-operator.yaml": error validating data: ValidationError(CustomResourceDefinition.spec): unknown field "preserveUnknownFields" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.CustomResourceDefinitionSpec; if you choose to ignore these errors, turn validation off with --validate=false

k8s cluster version: kindest/node:v1.13.4

Question - replicas 1 or even

I am looking to use the operator in an HA environment, and was curious about why the number of replicas has to be even.

So if I have 3 nodes in my cluster, is the recommended approach to have 6 replicas in total, with 2 replicas on each node?

Unknown field when applying examples/minioinstance.yaml

When I tried to apply the minio operator following the steps below (please note I've changed the namespace of the operator and instance):

    1. kubectl apply -f minio-operator.yaml (this can be applied successfully)
    2. kubectl apply -f examples/minioinstance.yaml

I got the errors below when applying examples/minioinstance.yaml:

[ValidationError(MinIOInstance.spec): unknown field "certConfig" in io.min.miniocontroller.v1beta1.MinIOInstance.spec, 

ValidationError(MinIOInstance.spec): unknown field "credsSecret" in io.min.miniocontroller.v1beta1.MinIOInstance.spec, 

ValidationError(MinIOInstance.spec): unknown field "env" in io.min.miniocontroller.v1beta1.MinIOInstance.spec, 

ValidationError(MinIOInstance.spec): unknown field "image" in io.min.miniocontroller.v1beta1.MinIOInstance.spec, 

ValidationError(MinIOInstance.spec): unknown field "liveness" in io.min.miniocontroller.v1beta1.MinIOInstance.spec, 

ValidationError(MinIOInstance.spec): unknown field "metadata" in io.min.miniocontroller.v1beta1.MinIOInstance.spec, 

ValidationError(MinIOInstance.spec): unknown field "podManagementPolicy" in io.min.miniocontroller.v1beta1.MinIOInstance.spec, 

ValidationError(MinIOInstance.spec): unknown field "requestAutoCert" in io.min.miniocontroller.v1beta1.MinIOInstance.spec, 

ValidationError(MinIOInstance.spec): unknown field "resources" in io.min.miniocontroller.v1beta1.MinIOInstance.spec, 

ValidationError(MinIOInstance.spec): unknown field "volumeClaimTemplate" in io.min.miniocontroller.v1beta1.MinIOInstance.spec]

don't know how to use the operator??

I created all 3 objects via your yml files, but:

  • no route is being exposed
  • no new PVC is created
  • I can't see any other pod other than the operator

able to delete file w/o login in minio browser

I am able to delete a backup file without logging in, even though I have already configured a secret for login, as shown below.

apiVersion: miniocontroller.min.io/v1beta1
kind: MinIOInstance
metadata:
  name: minio
spec:
  metadata:
    labels:
      app: minio
    annotations:
      prometheus.io/path: /minio/prometheus/metrics
      prometheus.io/port: "9000"
      prometheus.io/scrape: "true"
  ## Registry location and Tag to download MinIO Server image
  image: minio/minio:RELEASE.2019-10-12T01-39-57Z
  ## Secret with credentials to be used by MinIO instance.
  credsSecret:
    name: minio-creds-secret
spec:
      serviceAccountName: minio-operator-sa
      containers:
        - name: minio-operator
          image: minio/k8s-operator:1.0.4
          imagePullPolicy: IfNotPresent
