
minio-operator Issues

Allow `externalCertSecret` to be of type `kubernetes.io/tls`

At the moment minio-operator expects the externalCertSecret to have the keys public.crt and private.key.

Kubernetes has a specific kind of secret with the type kubernetes.io/tls, which has the keys ca.crt, tls.crt, and tls.key. This kind of secret is even generated by tools like cert-manager.

It would be great if minio-operator could support those secrets, perhaps by adding a new field named type under externalCertSecret:

spec:
  externalCertSecret:
    name: my-example-minio-service-tls
    type: kubernetes.io/tls
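
For reference, a kubernetes.io/tls secret (for example one issued by cert-manager) carries its material under fixed key names. A minimal sketch, with placeholder certificate data:

apiVersion: v1
kind: Secret
metadata:
  name: my-example-minio-service-tls
type: kubernetes.io/tls
data:
  ca.crt: LS0tLS1CRUdJTi...   # base64-encoded CA certificate (added by cert-manager)
  tls.crt: LS0tLS1CRUdJTi...  # base64-encoded server certificate
  tls.key: LS0tLS1CRUdJTi...  # base64-encoded private key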

Add custom tolerations to the statefulsets

Please add support for configuring the following StatefulSet field through the MinIOInstance CRD:

apiVersion: apps/v1
kind: StatefulSet
spec:
  template:
    spec:
      tolerations:
      - effect: NoSchedule
        key: dedicated
        operator: Equal
        value: storage
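
For illustration, the requested configuration might surface on the MinIOInstance spec like this (a sketch; the field is not supported yet):

apiVersion: miniocontroller.min.io/v1beta1
kind: MinIOInstance
metadata:
  name: minio
spec:
  tolerations:            # proposed pass-through to the StatefulSet pod spec
    - effect: NoSchedule
      key: dedicated
      operator: Equal
      value: storage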

Thanks.

minioinstances.miniocontroller.min.io vs. minioinstances.miniooperator.min.io

I just deployed a new instance of the minio-operator and noticed errors indicating the operator could not find the requested minioinstance resource minioinstances.miniocontroller.min.io [1].

A review of the installed CRDs (see [2] and [3] below) suggests the group should be minioinstances.miniooperator.min.io. Are these one and the same resource, and is something already being done to resolve this?

[1]

E0506 20:00:10.046978       1 reflector.go:134] k8s.io/[email protected]/tools/cache/reflector.go:95: Failed to list *v1beta1.MinIOInstance: the server could not find the requested resource (get minioinstances.miniocontroller.min.io)

[2]

❯ kubectl get crds minioinstances.miniooperator.min.io                        
NAME                                  CREATED AT
minioinstances.miniooperator.min.io   2020-05-06T19:50:19Z

[3]

❯ kubectl explain minioinstances                                   
KIND:     MinIOInstance
VERSION:  miniooperator.min.io/v1beta1

DESCRIPTION:
     <empty>

add custom scheduler to the statefulset

Please add support for configuring the following StatefulSet field via the MinIOInstance CRD:

kind: StatefulSet
spec:
  template:
    spec:
      schedulerName: my-custom-scheduler
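
As with the tolerations request above, a sketch of how this might surface on the MinIOInstance spec (hypothetical field placement):

apiVersion: miniocontroller.min.io/v1beta1
kind: MinIOInstance
metadata:
  name: minio
spec:
  schedulerName: my-custom-scheduler   # proposed pass-through to the StatefulSet pod spec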

Dynamic PVC using storageclass fails because name is nil

Requesting a new MinIO instance with a dynamic PVC doesn't work with the example given in this repository. You need to define the name both in the volumeClaimTemplate and in volumeClaimTemplate.metadata to get it working; otherwise you'll get the following error in the generated StatefulSet:

Events:
  Type     Reason        Age                From                    Message
  ----     ------        ----               ----                    -------
  Warning  FailedCreate  1s (x12 over 11s)  statefulset-controller  create Claim -minio-0 for Pod minio-0 in StatefulSet minio failed error: PersistentVolumeClaim "-minio-0" is invalid: metadata.name: Invalid value: "-minio-0": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
  Warning  FailedCreate  1s (x12 over 11s)  statefulset-controller  create Pod minio-0 in StatefulSet minio failed error: Failed to create PVC -minio-0: PersistentVolumeClaim "-minio-0" is invalid: metadata.name: Invalid value: "-minio-0": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

This is because the name is nil:

$ kubectl describe statefulset minio

Name:               minio
Namespace:          default
CreationTimestamp:  Fri, 11 Oct 2019 16:12:34 +0200
Selector:           v1beta1.min.io/instance=minio
Labels:             <none>
Annotations:        <none>
Replicas:           2 desired | 0 total
Update Strategy:    RollingUpdate
Pods Status:        0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:       app=minio
                v1beta1.min.io/instance=minio
  Annotations:  prometheus.io/path: /minio/prometheus/metrics
                prometheus.io/port: 9000
                prometheus.io/scrape: true
  Containers:
   minio:
    Image:      minio/minio:RELEASE.2019-10-11T00-38-09Z
    Port:       9000/TCP
    Host Port:  0/TCP
    Args:
      server
      http://minio-0.minio-hl-svc.default.svc.cluster.local/export
      http://minio-1.minio-hl-svc.default.svc.cluster.local/export
    Requests:
      cpu:     250m
      memory:  512Mi
    Liveness:  http-get http://:9000/minio/health/live delay=120s timeout=1s period=20s #success=1 #failure=3
    Environment:
      MINIO_BROWSER:     on
      MINIO_ACCESS_KEY:  <set to the key 'accesskey' in secret 'minio-creds-secret'>  Optional: false
      MINIO_SECRET_KEY:  <set to the key 'secretkey' in secret 'minio-creds-secret'>  Optional: false
    Mounts:
      /export from  (rw)
  Volumes:  <none>
Volume Claims:
  Name:
  StorageClass:  csi-archive
  Labels:        <none>
  Annotations:   <none>
  Capacity:      10Gi
  Access Modes:  [ReadWriteOnce]

The MinIOInstance below works:

apiVersion: miniocontroller.min.io/v1beta1
kind: MinIOInstance
metadata:
  name: minio
spec:
  ## Add metadata to the pods created by the StatefulSet
  metadata:
    labels:
      app: minio
    annotations:
      prometheus.io/path: /minio/prometheus/metrics
      prometheus.io/port: "9000"
      prometheus.io/scrape: "true"
  ## Registry location and Tag to download MinIO Server image
  image: minio/minio:RELEASE.2019-10-11T00-38-09Z
  ## Secret with credentials to be used by MinIO instance.
  credsSecret:
    name: minio-creds-secret
  ## Supply number of replicas.
  ## For standalone mode, supply 1. For distributed mode, supply 4 or more (should be even).
  ## Note that the operator does not support upgrading from standalone to distributed mode.
  replicas: 2
  ## PodManagement policy for pods created by StatefulSet. Can be "OrderedReady" or "Parallel"
  ## Refer https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#pod-management-policy
  ## for details. Defaults to "Parallel"
  podManagementPolicy: Parallel
  ## Enable Kubernetes based certificate generation and signing as explained in
  ## https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster
  requestAutoCert: false
  ## Used when "requestAutoCert" is set to true. Set CommonName for the auto-generated certificate.
  ## Internal DNS name for the pod will be used if CommonName is not provided.
  certConfig:
    commonName: ""
    organizationName: []
    dnsNames: []

  ## Used to specify a toleration for a pod
  #tolerations:
  #  - effect: NoSchedule
  #    key: dedicated
  #    operator: Equal
  #    value: storage
  ## Add environment variables to be set in MinIO container (https://github.com/minio/minio/tree/master/docs/config)
  env:
    - name: MINIO_BROWSER
      value: "on"
    # - name: MINIO_STORAGE_CLASS_RRS
    #   value: "EC:2"
  ## Configure resource requests and limits for MinIO containers
  resources:
    requests:
      memory: 512Mi
      cpu: 250m
  ## Liveness probe detects situations where MinIO server instance
  ## is not working properly and needs restart. Kubernetes automatically
  ## restarts the pods if liveness checks fail.
  liveness:
    httpGet:
      path: /minio/health/live
      port: 9000
    initialDelaySeconds: 120
    periodSeconds: 20
  ## Readiness probe detects situations when MinIO server instance
  ## is not ready to accept traffic. Kubernetes doesn't forward
  ## traffic to the pod while readiness checks fail.
  ## Recommended to be used only for standalone MinIO Instances. (replicas = 1)
  # readiness:
  #   httpGet:
  #     path: /minio/health/ready
  #     port: 9000
  #   initialDelaySeconds: 120
  #   periodSeconds: 20
  ## Affinity settings for MinIO pods. Read more about affinity
  ## here: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity.
  # affinity:
  ## Secret with certificates to configure TLS for MinIO certs. Create secrets as explained
  ## here: https://github.com/minio/minio/tree/master/docs/tls/kubernetes#2-create-kubernetes-secret
  # externalCertSecret:
  # name: tls-ssl-minio
  ## Mountpath where PV will be mounted inside container(s). Defaults to "/export".
  # mountPath: /export
  ## Subpath inside Mountpath where MinIO starts. Defaults to "".
  # subPath: /data
  volumeClaimTemplate:
    name: data
    metadata:
      name: data
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: csi-archive

able to delete file w/o login in minio browser

I am able to delete a backup file without logging in, even though I have already configured a secret for login, as shown below.

minio

apiVersion: miniocontroller.min.io/v1beta1
kind: MinIOInstance
metadata:
  name: minio
spec:
  metadata:
    labels:
      app: minio
    annotations:
      prometheus.io/path: /minio/prometheus/metrics
      prometheus.io/port: "9000"
      prometheus.io/scrape: "true"
  ## Registry location and Tag to download MinIO Server image
  image: minio/minio:RELEASE.2019-10-12T01-39-57Z
  ## Secret with credentials to be used by MinIO instance.
  credsSecret:
    name: minio-creds-secret
spec:
      serviceAccountName: minio-operator-sa
      containers:
        - name: minio-operator
          image: minio/k8s-operator:1.0.4
          imagePullPolicy: IfNotPresent

Allow selector in minioinstance spec

I was trying to deploy MinIO via minio-operator on OpenEBS, which is container-attached storage. Everything went well until I tried to schedule the MinIO instance and the OpenEBS target pod on the same node, which I believe can improve performance.

To do this, I need to add a selector (see here) to the StatefulSet according to the docs, but minio-operator does not support it. The MinioInstanceSpec has no selector field, and this code builds the selector by itself.

I'd like to add a field to MinioInstanceSpec like this:

Selector *metav1.LabelSelector `json:"selector,omitempty"`

and apply it when we call NewForCluster in pkg/resources/statefulsets.go.
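
In usage, the proposed field might look like this (a sketch; selector is the hypothetical addition):

apiVersion: miniocontroller.min.io/v1beta1
kind: MinIOInstance
metadata:
  name: minio
spec:
  selector:              # hypothetical field, passed through to the generated StatefulSet
    matchLabels:
      app: minio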

Add Prometheus annotations to the service

Add prometheus annotations to the service:

  kind: Service
  metadata:
    annotations:
      prometheus.io/path: /minio/prometheus/metrics
      prometheus.io/port: "9000"
      prometheus.io/scrape: "true"
This will allow automatic Prometheus scraping. (Alternatively, add a custom field for service annotations to the CRD.)

Operator Capabilities and Prometheus ServiceMonitor

Based on the operator capability levels on operatorhub.io, I think the MinIO operator can be marked as a level 4 operator, since the MinIO operand (the cluster) exposes a metrics endpoint. From the diagram below, that is one of the functionalities sought in level 4 operators.

(Image: operator capability levels)

However, given that the metrics endpoint is non-standard, I would recommend auto-creating a Prometheus servicemonitor resource. I have started working on that in a branch in my fork in case you find it interesting.
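
For illustration, a ServiceMonitor for a MinIO instance could look roughly like this, assuming the Prometheus Operator CRDs (monitoring.coreos.com/v1) are installed; the label selector and port name are assumptions about how the operator labels its service:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: minio
spec:
  selector:
    matchLabels:
      app: minio                       # assumed service label
  endpoints:
    - port: http-minio                 # assumes a named service port exists
      path: /minio/prometheus/metrics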

Minio cluster doesn't initialize on Kind v0.4.0 with Kubernetes 1.15.0

I'm finding that with the current version of kind, the Minio operator doesn't deploy properly. When the problem occurs, the Minio pods themselves reach the Running state but never initialize:

k logs minio-0 -f
Waiting for all other servers to be online to format the disks.
Waiting for configuration to be initialized..
Waiting for configuration to be initialized..
Waiting for configuration to be initialized..

It continues to log Waiting for configuration to be initialized persistently past this point, and the MinIO Go client hits the same problem (e.g. buckets can't be created, failing with Server not initialized, please try again). The UI is accessible during this time but also responds with Server not initialized, please try again.

The following are the steps to recreate this issue:

GO111MODULE="on" go get sigs.k8s.io/kind@v0.4.0 && kind create cluster
kind create cluster --image kindest/node:v1.15.0
export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl create -f https://github.com/minio/minio-operator/blob/master/docs/minio-operator.yaml?raw=true
kubectl create -f https://github.com/minio/minio-operator/blob/master/docs/minio-examples/minio-secret.yaml?raw=true
kubectl create -f https://github.com/minio/minio-operator/blob/master/docs/minio-examples/minioinstance.yaml?raw=true

When the problem occurs, there are no logs emitted from the operator pod. This problem is affecting my CI tests wherein integration tests run against a kind cluster.

minio-operator.yaml error in Kubernetes v1.16

With Kubernetes v1.16, the Deployment apiVersion apps/v1beta1 has been removed, so applying minio-operator.yaml produces an error:

[root@node49 ~]# kubectl create -f https://github.com/minio/minio-operator/blob/master/minio-operator.yaml?raw=true
namespace/minio-operator-ns created
customresourcedefinition.apiextensions.k8s.io/minioinstances.miniocontroller.min.io created
clusterrole.rbac.authorization.k8s.io/minio-operator-role created
serviceaccount/minio-operator-sa created
clusterrolebinding.rbac.authorization.k8s.io/minio-operator-binding created
error: unable to recognize "https://github.com/minio/minio-operator/blob/master/minio-operator.yaml?raw=true": no matches for kind "Deployment" in version "apps/v1beta1"
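
A minimal sketch of the fix, assuming the operator Deployment is otherwise unchanged: move to apps/v1, which also requires an explicit spec.selector matching the pod template labels (the label key here is illustrative):

apiVersion: apps/v1                    # apps/v1beta1 was removed in Kubernetes v1.16
kind: Deployment
metadata:
  name: minio-operator
  namespace: minio-operator-ns
spec:
  replicas: 1
  selector:                            # required in apps/v1
    matchLabels:
      name: minio-operator
  template:
    metadata:
      labels:
        name: minio-operator
    spec:
      serviceAccountName: minio-operator-sa
      containers:
        - name: minio-operator
          image: minio/k8s-operator:1.0.4
          imagePullPolicy: IfNotPresent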

Document how to use multiple volumes per node

I am looking to set up minio-operator in standalone mode with 4 disks. I cannot find documentation or best practices for attaching multiple drives. I see that /export is where a single PV may be mounted, but how do I tell the operator that I have multiple?

Thanks a lot in advance.

Question - replicas 1 or even

I am looking to use the operator in an HA environment, and was curious why replicas has to be even.

So if I have 3 nodes in my cluster, is the recommended approach to have 6 replicas in total, with 2 replicas on each node?

YAML does not reflect released version

The tagged releases of the operator do not seem to set the version in minio-operator.yaml. Doing so would allow installing a specific version, and updating the operator to a specific version, by running:

$ kubectl apply -f https://raw.githubusercontent.com/minio/minio-operator/<version>/minio-operator.yaml

However, for all current versions this deploys version 1.0.4, and only for the latest does it deploy 1.0.7, because of 3c7e5ae.
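
Concretely, the expectation is that minio-operator.yaml at a given tag pins the operator image to that same version; a sketch of the relevant pod-spec excerpt:

    spec:
      containers:
        - name: minio-operator
          image: minio/k8s-operator:1.0.7   # should track the release tag rather than a stale default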

Operator scoped to namespace

Hi,
I'm currently deploying an operator-per-namespace setup in my Kubernetes cluster. I've noticed that, by default, this operator works globally, operating on all namespaces. However, I'd like an option to limit the scope to the namespace the operator is deployed in, or, more flexibly, to choose which namespaces are watched (like https://strimzi.io/docs/master/#deploying-cluster-operator-to-watch-multiple-namespacesstr).
I've searched through the examples but found no way to do it. How is this currently managed? Is there any chance of this feature being added?
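
For comparison, a common convention among operators (not necessarily something minio-operator supports today, which is exactly the question) is a WATCH_NAMESPACE environment variable populated via the downward API:

      env:
        - name: WATCH_NAMESPACE          # hypothetical for this operator; empty usually means watch all namespaces
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace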

rbac issue with latest

minio operator pod reports :

E0501 06:07:52.821618       1 reflector.go:134] k8s.io/[email protected]/tools/cache/reflector.go:95: Failed to list *v1beta1.MinIOInstance: minioinstances.miniocontroller.min.io is forbidden: User "system:serviceaccount:minio:minio-operator-sa" cannot list resource "minioinstances" in API group "miniocontroller.min.io" at the cluster scope
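
For reference, listing at cluster scope requires a ClusterRole bound to that service account; a minimal sketch, reusing the role and binding names from the operator manifests (the verb list is illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: minio-operator-role
rules:
  - apiGroups: ["miniocontroller.min.io"]
    resources: ["minioinstances"]
    verbs: ["get", "list", "watch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: minio-operator-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: minio-operator-role
subjects:
  - kind: ServiceAccount
    name: minio-operator-sa
    namespace: minio                 # namespace taken from the error message above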

Named Service Port for Minio Service

Currently the minio-service created by the operator has no named service port; one would be useful for discovery by ingress controllers (Istio, Heptio Contour, ...).
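
A sketch of the requested change on the operator-created service; the port name is an assumption (Istio, for instance, uses an http- prefix for protocol detection):

apiVersion: v1
kind: Service
metadata:
  name: minio-hl-svc
spec:
  ports:
    - name: http-minio    # any stable name enables discovery by ingress controllers
      port: 9000
      targetPort: 9000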

Update Example on Minio Operator CSV

While attempting to deploy the example MinIOInstance shown on operatorhub.io, I discovered the example did not deploy because the replicas[1] and requestAutoCert[2] values were quoted, whereas the Custom Resource Definition defines those fields as integer and boolean, respectively.

The recommended changes have been added to the gist for your review[3].

In addition, I noticed the secret is not rendering in the operatorhub UI. I would recommend including that in the description of the operator to ensure users know to create the secret.

[1] https://github.com/operator-framework/community-operators/blob/349d34d8faab26d02e1f4645217eb82db276ef5e/upstream-community-operators/minio-operator/1.0.3/minio-operator.v1.0.3.clusterserviceversion.yaml#L35
[2] https://github.com/operator-framework/community-operators/blob/349d34d8faab26d02e1f4645217eb82db276ef5e/upstream-community-operators/minio-operator/1.0.3/minio-operator.v1.0.3.clusterserviceversion.yaml#L39
[3] https://gist.github.com/OchiengEd/f9093ad81492854b127b3cd2a30d2d07

don't know how to use the operator??

I created all three objects via your YAML files, but:

  • no route is being exposed
  • no new PVC is created
  • I can't see any other pod besides the operator

why we need `mirror_controller`

I find that pkg/controller/cluster/controller.go is very similar to pkg/controller/mirror/mirror_controller.go, but only the NewController from pkg/controller/cluster/controller.go is used. Why do we need pkg/controller/mirror/mirror_controller.go?

[Feature Request] Custom DNS/naming scheme for StatefulSet

This is a request to allow changing the naming scheme used by the generated StatefulSet. Right now, I generate the StatefulSet and then manually update the args.

Why? I have set up MinIO with a valid public certificate that covers <service>.<domain> and <host-X>.<service>.<domain>. This means my clients trust connections to the cluster, but I have to manually rewrite the args into a list of <host-X>.<service>.<domain> endpoints, because svc.cluster.local (as well as the <instance>-hl-svc service name) is hardcoded into minio-operator, and I can't get a public certificate for svc.cluster.local.

The easiest solution would be to allow overriding both the cluster domain and the service name in the arg generation.
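
A sketch of what such overrides might look like on the MinIOInstance spec (both field names are hypothetical):

apiVersion: miniocontroller.min.io/v1beta1
kind: MinIOInstance
metadata:
  name: minio
spec:
  clusterDomain: example.com   # hypothetical: replaces the hardcoded svc.cluster.local
  serviceName: minio           # hypothetical: replaces the generated <instance>-hl-svc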

Unknown field when applying examples/minioinstance.yaml

When I tried to apply the minio operator following the steps below (note that I've changed the namespace of the operator and the instance):

    1. kubectl apply -f minio-operator.yaml (this can be applied successfully)
    2. kubectl apply -f examples/minioinstance.yaml

I got errors below when applying examples/minioinstance.yaml

[ValidationError(MinIOInstance.spec): unknown field "certConfig" in io.min.miniocontroller.v1beta1.MinIOInstance.spec,
ValidationError(MinIOInstance.spec): unknown field "credsSecret" in io.min.miniocontroller.v1beta1.MinIOInstance.spec,
ValidationError(MinIOInstance.spec): unknown field "env" in io.min.miniocontroller.v1beta1.MinIOInstance.spec,
ValidationError(MinIOInstance.spec): unknown field "image" in io.min.miniocontroller.v1beta1.MinIOInstance.spec,
ValidationError(MinIOInstance.spec): unknown field "liveness" in io.min.miniocontroller.v1beta1.MinIOInstance.spec,
ValidationError(MinIOInstance.spec): unknown field "metadata" in io.min.miniocontroller.v1beta1.MinIOInstance.spec,
ValidationError(MinIOInstance.spec): unknown field "podManagementPolicy" in io.min.miniocontroller.v1beta1.MinIOInstance.spec,
ValidationError(MinIOInstance.spec): unknown field "requestAutoCert" in io.min.miniocontroller.v1beta1.MinIOInstance.spec,
ValidationError(MinIOInstance.spec): unknown field "resources" in io.min.miniocontroller.v1beta1.MinIOInstance.spec,
ValidationError(MinIOInstance.spec): unknown field "volumeClaimTemplate" in io.min.miniocontroller.v1beta1.MinIOInstance.spec]

Observed a panic: runtime.boundsError{x:0, y:0, signed:true, code:0x0} (runtime error: index out of range [0] with length 0)

operator: docker.io/minio/k8s-operator:1.0.8
kubernetes : Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}

E0421 05:53:16.217716       1 runtime.go:66] Observed a panic: runtime.boundsError{x:0, y:0, signed:true, code:0x0} (runtime error: index out of range [0] with length 0)
/home/harsha/mygo/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:72
/home/harsha/mygo/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:65
/home/harsha/mygo/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:51
/home/harsha/.gimme/versions/go1.13.10.linux.amd64/src/runtime/panic.go:679
/home/harsha/.gimme/versions/go1.13.10.linux.amd64/src/runtime/panic.go:75
/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/resources/statefulsets/statefulset.go:144
/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/resources/statefulsets/statefulset.go:251
/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/controller/cluster/controller.go:333
/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/controller/cluster/controller.go:248
/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/controller/cluster/controller.go:256
/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/controller/cluster/controller.go:209
/home/harsha/mygo/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133
/home/harsha/mygo/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:134
/home/harsha/mygo/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88
/home/harsha/.gimme/versions/go1.13.10.linux.amd64/src/runtime/asm_amd64.s:1357
panic: runtime error: index out of range [0] with length 0 [recovered]
	panic: runtime error: index out of range [0] with length 0
goroutine 115 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	/home/harsha/mygo/pkg/mod/k8s.io/[email protected]/pkg/util/runtime/runtime.go:58 +0x105
panic(0x1146dc0, 0xc0002860c0)
	/home/harsha/.gimme/versions/go1.13.10.linux.amd64/src/runtime/panic.go:679 +0x1b2
github.com/minio/minio-operator/pkg/resources/statefulsets.minioServerContainer(0xc0000f0480, 0xc0003eb500, 0x17, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/resources/statefulsets/statefulset.go:144 +0x678
github.com/minio/minio-operator/pkg/resources/statefulsets.NewForCluster(0xc0000f0480, 0xc0003eb500, 0x17, 0x8)
	/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/resources/statefulsets/statefulset.go:251 +0x37c
github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).syncHandler(0xc000300000, 0xc0001d86c0, 0x16, 0xc00056a2d0, 0xc0000aed28)
	/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/controller/cluster/controller.go:333 +0xcd2
github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).processNextWorkItem.func1(0xc000300000, 0x1032a60, 0xc0001e4c30, 0x0, 0x0)
	/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/controller/cluster/controller.go:248 +0x169
github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).processNextWorkItem(0xc000300000, 0x0)
	/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/controller/cluster/controller.go:256 +0x50
github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).runWorker(0xc000300000)
	/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/controller/cluster/controller.go:209 +0x2b
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00021a040)
	/home/harsha/mygo/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133 +0x5e
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00021a040, 0x3b9aca00, 0x0, 0x1, 0xc0000a69c0)
	/home/harsha/mygo/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:134 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(0xc00021a040, 0x3b9aca00, 0xc0000a69c0)
	/home/harsha/mygo/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:88 +0x4d
created by github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).Run
	/home/harsha/mygo/src/github.com/minio/minio-operator/pkg/controller/cluster/controller.go:195 +0x29a

Cluster Scope or Namespaced Scope

I see in the docs that the operator can run at cluster scope or namespace scope. As far as I know, namespace-scoped operators are generally preferred. Could someone suggest what is best for MinIO? Does running at cluster scope have any issues? This is just a question out of curiosity :)

error when applying the CRD

When applying the updated CRD in the cluster, I got this error:

 kubectl apply -n minio-operator -f docs/minio-operator/minio-operator.yaml
error: error validating "docs/minio-operator/minio-operator.yaml": error validating data: ValidationError(CustomResourceDefinition.spec): unknown field "preserveUnknownFields" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.CustomResourceDefinitionSpec; if you choose to ignore these errors, turn validation off with --validate=false

k8s cluster version: kindest/node:v1.13.4 (note: preserveUnknownFields was only added to the CRD API in Kubernetes 1.15, so a v1.13 API server rejects it)

Allow to set the serviceAccountName and securityContext for the managed containers

This is a feature-request:

I'm trying to use minio-operator in a self-managed internal Kubernetes cluster.
Our default PodSecurityPolicy gives all containers a read-only root filesystem, so starting the minio container fails with ERROR Unable to create directory specified config-dir=/.minio: mkdir /.minio: read-only file system.

We have an additional PSP available that allows writing to the container's root filesystem, but to assign it I need to specify a serviceAccountName that is allowed to use this PSP, and to set the container's securityContext to disable the read-only root filesystem.

Most likely the next problem will be that we do not run pods as root, so the user (nobody) will not be allowed to create or write to the specified directory, but that should be a new issue.
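
A sketch of the requested fields on the MinIOInstance spec (field names and values are assumptions):

apiVersion: miniocontroller.min.io/v1beta1
kind: MinIOInstance
metadata:
  name: minio
spec:
  serviceAccountName: minio-psp-sa   # hypothetical: an account allowed to use the permissive PSP
  securityContext:                   # hypothetical: applied to the MinIO container
    readOnlyRootFilesystem: false
    runAsUser: 1000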

Feature request: trigger self healing

According to minio/minio#6442, self-healing after a node outage is not done automatically and has to be triggered manually by an administrator or periodically via cron.

It makes absolute sense for the operator to handle this as well.

Use case:

  • trigger a heal if a node returns after an outage and remains a configurable time in the cluster
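
Until the operator supports this, a stop-gap for the cron-based approach mentioned above might look like the following CronJob (a sketch; the service address, schedule, and credentials wiring are assumptions, and mc config host add reflects the mc syntax of this era):

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: minio-heal
spec:
  schedule: "0 3 * * *"              # nightly; adjust as needed
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: heal
              image: minio/mc
              env:
                - name: ACCESS_KEY
                  valueFrom:
                    secretKeyRef:
                      name: minio-creds-secret
                      key: accesskey
                - name: SECRET_KEY
                  valueFrom:
                    secretKeyRef:
                      name: minio-creds-secret
                      key: secretkey
              command: ["/bin/sh", "-c"]
              args:
                - mc config host add myminio http://minio-hl-svc:9000 "$ACCESS_KEY" "$SECRET_KEY" && mc admin heal myminio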

minio-hl-svc has no ClusterIP

I am working through minio-operator/README.md. The operator installed successfully in its own namespace "minio-operator-ns". The minio instance installed without error (kubectl create -f ...) and resulted in a minio-* pod and a secret, as expected. But when checking kubectl get svc, the minio-hl-svc has a ClusterIP of None.

NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hello-node     LoadBalancer   10.111.103.95   <pending>     8080:32164/TCP   8m28s
kubernetes     ClusterIP      10.96.0.1       <none>        443/TCP          50m
minio-hl-svc   ClusterIP      None            <none>        9000/TCP         19s

When executing the steps from minio/README.md manually, the install works perfectly.

I have changed the image in the minioinstance.yaml to minio/minio:RELEASE.2019-10-12T01-39-57Z.

Switching it to the original minio-image-tag minio/minio:RELEASE.2019-09-11T19-53-16Z did not make a difference.

Kubernetes version:

λ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}

Error or user problem?

imagePullSecrets not being honoured...

We would like to pull the image from a private repository secured by a Kubernetes secret. Typically we do the following:

spec:
  imagePullSecrets:
    - name: {{ .Values.global.imagePullSecret }}

How do we specify this, for example, in the minioinstances.yaml example in this repo?
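
Presumably this would need a pass-through field on the MinIOInstance spec, roughly like this (hypothetical):

apiVersion: miniocontroller.min.io/v1beta1
kind: MinIOInstance
metadata:
  name: minio
spec:
  imagePullSecrets:               # hypothetical field, copied into the generated pod spec
    - name: my-registry-secret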
