
community-operators's Introduction

Repository is obsolete

The repository community-operators was migrated to two different repositories. We are doing this for better separation of concerns. All functionality remains the same.

  • Old location: operator-framework/community-operators/upstream-community-operators → New location: k8s-operatorhub/community-operators/operators (operators appear on OperatorHub.io)
  • Old location: operator-framework/community-operators/community-operators → New location: redhat-openshift-ecosystem/community-operators-prod/operators (operators appear on the embedded OperatorHub in OpenShift and OKD)

About this repository

This repo was the canonical source for Kubernetes Operators that appear on OperatorHub.io, OpenShift Container Platform, and OKD. On July 16th, 2021, the repository was split into two new locations, as this project has moved into the Cloud Native Computing Foundation.

community-operators's People

Contributors

aneeshkp, awgreene, cap1984, che-incubator-bot, davidfestal, dmesser, esara, estroz, f41gh7, galderz, gregsheremeta, j0zi, jmazzitelli, jomkz, jpkrohling, lbroudoux, leochr, matzew, mkuznyetsov, mvalahtv, mvalarh, nicolaferraro, nikhil-thomas, raffaelespazzoli, ricardozanini, rigazilla, robszumski, samisousa, scholzj, ssimk0


community-operators's Issues

Elastic Cloud Operator is unable to start

I followed the instructions to install the Operator Lifecycle Manager and the Elasticsearch Operator. The Lifecycle Manager is running and I see an operator pod in the operators namespace, but the startup fails.

It seems that some CRDs are missing. The file referenced on the Elastic homepage, https://download.elastic.co/downloads/eck/0.8.0/all-in-one.yaml, defines these CRDs:
name: apmservers.apm.k8s.elastic.co
name: clusterlicenses.elasticsearch.k8s.elastic.co
name: elasticsearches.elasticsearch.k8s.elastic.co
name: enterpriselicenses.elasticsearch.k8s.elastic.co
name: remoteclusters.elasticsearch.k8s.elastic.co
name: trustrelationships.elasticsearch.k8s.elastic.co
name: users.elasticsearch.k8s.elastic.co
name: kibanas.kibana.k8s.elastic.co
but if the operator is installed via the Operator Lifecycle Manager, I can only find these CRDs:
[RBGOOE\lrzgmar_p@ocpjump5101 local_git_repo]$ oc get crd --all-namespaces | grep elastic
elasticsearches.elasticsearch.k8s.elastic.co 2019-06-07T12:47:30Z
kibanas.kibana.k8s.elastic.co 2019-06-07T12:47:30Z
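
To see exactly which CRDs are missing, a diff along these lines works (a sketch, assuming curl, oc and awk are available, and that the manifest indents CRD names with two spaces):

curl -s https://download.elastic.co/downloads/eck/0.8.0/all-in-one.yaml \
  | grep -E '^  name: .*\.elastic\.co$' | awk '{print $2}' | sort -u > expected-crds.txt
oc get crd -o name | sed 's|.*/||' | grep elastic | sort > actual-crds.txt
diff expected-crds.txt actual-crds.txt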

The test was executed on OpenShift 3.11. Please find attached the errors that were available in the operator-pod.

logs-operator.2019-06-07.txt

BR,
Matthias

Missing metadata in oneagent CSV

@awgreene could you please update your CSV with the missing fields? Here is a doc with the definitions, etc: https://github.com/operator-framework/operator-marketplace/blob/master/docs/marketplace-required-csv-annotations.md

Thanks!

$ operator-courier verify --ui_validate_io .
ERROR:operatorcourier.validate:csv metadata.annotations.capabilities not defined. Without this field, the operator will be assigned the basic install capability - you can read more about operator maturity models here https://www.operatorhub.io/getting-started#How-do-I-start-writing-an-Operator?.
WARNING:operatorcourier.validate:csv metadata.annotations.repository not defined.Without this field, the link to the operator source code will not be displayed in the UI.
ERROR:operatorcourier.validate:UI validation failed to verify required fields for operatorhub.io exist.
ERROR:operatorcourier.api:Bundle failed validation.
Resulting bundle is invalid, input yaml is improperly defined.
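
For reference, the two failures above map onto CSV fields shaped like this (a minimal sketch; the values are placeholders, not the real oneagent metadata):

metadata:
  annotations:
    # One of: Basic Install, Seamless Upgrades, Full Lifecycle, Deep Insights, Auto Pilot
    capabilities: Basic Install
    # Link to the operator's source code, surfaced in the UI
    repository: https://github.com/example/oneagent-operator   # placeholder URL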

CrashLoopBackoff in olm-operator / catalog-operator pods

I set up a working 2-node (1 slave, 1 master) k8s cluster on Fedora 29 running in Scaleway using kubeadm, with Calico networking. (I basically followed this guide: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm)

When I install OLM via kubectl create -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/olm.yaml ,

I get the following output:

kube-system calico-node-5klqw 1/2 Running 5 6m34s
kube-system calico-node-slfqb 1/2 Running 0 8m37s
kube-system coredns-86c58d9df4-cj4bt 1/1 Running 0 10m
kube-system coredns-86c58d9df4-j79n4 1/1 Running 0 10m
kube-system etcd-k8s-master 1/1 Running 0 9m48s
kube-system kube-apiserver-k8s-master 1/1 Running 0 9m39s
kube-system kube-controller-manager-k8s-master 1/1 Running 0 9m57s
kube-system kube-proxy-kjqmc 1/1 Running 0 10m
kube-system kube-proxy-ztsvh 1/1 Running 0 6m34s
kube-system kube-scheduler-k8s-master 1/1 Running 0 9m50s
olm catalog-operator-5448fc5b95-l2cgl 0/1 CrashLoopBackOff 4 4m53s
olm olm-operator-57b6cf86dd-nljcf 0/1 CrashLoopBackOff 4 4m53s

Describing the failed pods:
kubectl describe pods -n olm

Name: catalog-operator-5448fc5b95-l2cgl
Namespace: olm
Priority: 0
PriorityClassName:  <none>
Node: k8s-slave-0/10.15.88.1
Start Time: Sat, 16 Mar 2019 15:05:02 +0000
Labels: app=catalog-operator
pod-template-hash=5448fc5b95
Annotations: cni.projectcalico.org/podIP: 192.168.2.3/32
Status: Running
IP: 192.168.2.3
Controlled By: ReplicaSet/catalog-operator-5448fc5b95
Containers:
catalog-operator:
Container ID: docker://e54a7370029463fa2d42de851e482ebc7714037591cfe1ee9de5d89ef412b86d
Image: quay.io/openshift/origin-operator-lifecycle-manager:latest
Image ID: docker-pullable://quay.io/openshift/origin-operator-lifecycle-manager@sha256:5ebbe4ce6fb3ad52b3f9749cf015f04b396ca26955e008b412901c272d9ca59c
Port: 8080/TCP
Host Port: 0/TCP
Command:
/bin/catalog
Args:
-namespace
olm
-configmapServerImage=quay.io/operatorframework/configmap-operator-registry:latest
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sat, 16 Mar 2019 15:13:48 +0000
Finished: Sat, 16 Mar 2019 15:14:18 +0000
Ready: False
Restart Count: 6
Liveness: http-get http://:8080/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8080/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:     <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from olm-operator-serviceaccount-token-jhpbw (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
olm-operator-serviceaccount-token-jhpbw:
Type: Secret (a volume populated by a Secret)
SecretName: olm-operator-serviceaccount-token-jhpbw
Optional: false
QoS Class: BestEffort
Node-Selectors: beta.kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message


Normal Scheduled 12m default-scheduler Successfully assigned olm/catalog-operator-5448fc5b95-l2cgl to k8s-slave-0
Warning Unhealthy 10m kubelet, k8s-slave-0 Readiness probe failed: Get http://192.168.2.3:8080/healthz: dial tcp 192.168.2.3:8080: connect: connection refused
Normal Pulled 9m6s (x5 over 12m) kubelet, k8s-slave-0 Container image "quay.io/openshift/origin-operator-lifecycle-manager:latest" already present on machine
Normal Created 9m6s (x5 over 12m) kubelet, k8s-slave-0 Created container
Normal Started 9m6s (x5 over 12m) kubelet, k8s-slave-0 Started container
Warning Unhealthy 8m35s kubelet, k8s-slave-0 Liveness probe failed: Get http://192.168.2.3:8080/healthz: dial tcp 192.168.2.3:8080: connect: connection refused
Warning BackOff 2m25s (x35 over 11m) kubelet, k8s-slave-0 Back-off restarting failed container

Name: olm-operator-57b6cf86dd-nljcf
Namespace: olm
Priority: 0
PriorityClassName:  <none>
Node: k8s-slave-0/10.15.88.1
Start Time: Sat, 16 Mar 2019 15:05:02 +0000
Labels: app=olm-operator
pod-template-hash=57b6cf86dd
Annotations: cni.projectcalico.org/podIP: 192.168.2.2/32
Status: Running
IP: 192.168.2.2
Controlled By: ReplicaSet/olm-operator-57b6cf86dd
Containers:
olm-operator:
Container ID: docker://dca79048a035c9ecd2c6ba0eda1261aa36edc4f8e6859ac2bf2ba48bc1c213be
Image: quay.io/openshift/origin-operator-lifecycle-manager:latest
Image ID: docker-pullable://quay.io/openshift/origin-operator-lifecycle-manager@sha256:5ebbe4ce6fb3ad52b3f9749cf015f04b396ca26955e008b412901c272d9ca59c
Port: 8080/TCP
Host Port: 0/TCP
Command:
/bin/olm
Args:
-writeStatusName

State:          Waiting
  Reason:       CrashLoopBackOff
Last State:     Terminated
  Reason:       Completed
  Exit Code:    0
  Started:      Sat, 16 Mar 2019 15:13:40 +0000
  Finished:     Sat, 16 Mar 2019 15:14:10 +0000
Ready:          False
Restart Count:  6
Liveness:       http-get http://:8080/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness:      http-get http://:8080/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
  OPERATOR_NAMESPACE:  olm (v1:metadata.namespace)
  OPERATOR_NAME:       olm-operator
Mounts:
  /var/run/secrets/kubernetes.io/serviceaccount from olm-operator-serviceaccount-token-jhpbw (ro)

Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
olm-operator-serviceaccount-token-jhpbw:
Type: Secret (a volume populated by a Secret)
SecretName: olm-operator-serviceaccount-token-jhpbw
Optional: false
QoS Class: BestEffort
Node-Selectors: beta.kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message


Normal Scheduled 12m default-scheduler Successfully assigned olm/olm-operator-57b6cf86dd-nljcf to k8s-slave-0
Normal Pulled 9m4s (x5 over 12m) kubelet, k8s-slave-0 Container image "quay.io/openshift/origin-operator-lifecycle-manager:latest" already present on machine
Normal Created 9m4s (x5 over 12m) kubelet, k8s-slave-0 Created container
Normal Started 9m4s (x5 over 12m) kubelet, k8s-slave-0 Started container
Warning BackOff 2m23s (x35 over 11m) kubelet, k8s-slave-0 Back-off restarting failed container

Kubernetes version 1.13.4, Docker version 18.09.3

Any advice?
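
A first debugging step (a sketch; pod names taken from the describe output above) is to pull the logs of the previous, crashed container runs:

kubectl -n olm logs catalog-operator-5448fc5b95-l2cgl --previous
kubectl -n olm logs olm-operator-57b6cf86dd-nljcf --previous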

Improve the description of Kong operator

@hbagdi - I think the description of the Kong Operator needs some love. Here are some things that users would often like to know before installing an Operator (a sketch of where these usually live in the CSV follows the list):

  • The capabilities of the Operator, intended use cases
  • What app the Operator manages and (to a lesser extent) what the app does
  • Where to find more information on the app itself
  • Any manual pre-installation steps
  • Any other requirements to use the Operator successfully
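
Most of these points typically land in the CSV's spec.description and spec.links fields; a minimal sketch of the shape, with placeholder content:

spec:
  displayName: Kong Operator
  description: |
    What the Operator does, its intended use cases, and what Kong itself is.   # placeholder text
    Document any manual pre-installation steps and other requirements here.
  links:
  - name: Documentation
    url: https://example.com/kong-docs   # placeholder URL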

Duplicate keywords in strimzi-cluster-operator.v0.9.0.clusterserviceversion.yaml

Missing metadata in node-problem-detector CSV

@joelsmith could you please update your CSV with the missing fields? Here is a doc with the definitions, etc: https://github.com/operator-framework/operator-marketplace/blob/master/docs/marketplace-required-csv-annotations.md

Thanks!

$ operator-courier verify --ui_validate_io .
WARNING:operatorcourier.validate:csv metadata.annotations.certified not defined.
WARNING:operatorcourier.validate:csv spec.icon not defined
WARNING:operatorcourier.validate:csv spec.maturity not defined
ERROR:operatorcourier.validate:csv metadata.annotations.capabilities not defined. Without this field, the operator will be assigned the basic install capability - you can read more about operator maturity models here https://www.operatorhub.io/getting-started#How-do-I-start-writing-an-Operator?.
ERROR:operatorcourier.validate:csv spec.links not defined. Without this field, no links will be displayed in the details page side panel. You can for example link to some additional Documentation, related Blogs or Repositories.
WARNING:operatorcourier.validate:csv spec.icon not defined. Without this field, the operator will display a default operator framework icon.
ERROR:operatorcourier.validate:UI validation failed to verify required fields for operatorhub.io exist.
ERROR:operatorcourier.api:Bundle failed validation.
Resulting bundle is invalid, input yaml is improperly defined.

Couchbase Operator Persistent Volumes

When the couchbase-operator CR is applied according to the docs with persistent volumes, it fails to work, because the operator needs get and watch on persistent volumes. A PersistentVolume is a cluster-scoped object, so this CSV does not work: whoever wrote it put get and watch for PVs in permissions rather than in clusterPermissions. As a result, a CR with persistent volumes fails to spin up a cluster, because the necessary ClusterRole and ClusterRoleBinding are never created.

Who provided that CSV? Red Hat included it in the OpenShift catalog, but their OLM likewise does not support clusterPermissions.
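
A sketch of the fix being described: the PV rules belong under clusterPermissions (cluster-scoped) rather than permissions (namespaced). The service account name here is illustrative:

clusterPermissions:
- serviceAccountName: couchbase-operator   # illustrative name
  rules:
  - apiGroups:
    - ""
    resources:
    - persistentvolumes
    verbs:
    - get
    - watch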

CockroachDB fails with poddisruptionbudgets.policy RBAC error

CockroachDB provision fails on OKD 3.11 with the following error:

2019-04-05T15:08:08.933Z	DEBUG	helm.controller	Reconciling	{"namespace": "cockroachdb-test", "name": "example", "apiVersion": "charts.helm.k8s.io/v1alpha1", "kind": "Cockroachdb"}
2019-04-05T15:08:08.982Z	ERROR	helm.controller	failed to install release	{"namespace": "cockroachdb-test", "name": "example", "apiVersion": "charts.helm.k8s.io/v1alpha1", "kind": "Cockroachdb", "release": "example-582r90pa3xyqjf1yxbgqrytik", "error": "release example-582r90pa3xyqjf1yxbgqrytik failed: poddisruptionbudgets.policy \"example-582r90pa3xyqjf1yxbgqrytik-cockroachdb-budget\" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: no RBAC policy matched, <nil>"}
github.com/operator-framework/operator-sdk/vendor/github.com/go-logr/zapr.(*zapLogger).Error
	/home/joe/go/src/github.com/operator-framework/operator-sdk/vendor/github.com/go-logr/zapr/zapr.go:128
github.com/operator-framework/operator-sdk/pkg/helm/controller.HelmOperatorReconciler.Reconcile
	/home/joe/go/src/github.com/operator-framework/operator-sdk/pkg/helm/controller/reconcile.go:125
github.com/operator-framework/operator-sdk/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/home/joe/go/src/github.com/operator-framework/operator-sdk/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:213
github.com/operator-framework/operator-sdk/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
	/home/joe/go/src/github.com/operator-framework/operator-sdk/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158
github.com/operator-framework/operator-sdk/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
	/home/joe/go/src/github.com/operator-framework/operator-sdk/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
github.com/operator-framework/operator-sdk/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil
	/home/joe/go/src/github.com/operator-framework/operator-sdk/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
github.com/operator-framework/operator-sdk/vendor/k8s.io/apimachinery/pkg/util/wait.Until
	/home/joe/go/src/github.com/operator-framework/operator-sdk/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
2019-04-05T15:08:08.982Z	ERROR	kubebuilder.controller	Reconciler error	{"controller": "cockroachdb-controller", "request": "cockroachdb-test/example", "error": "release example-582r90pa3xyqjf1yxbgqrytik failed: poddisruptionbudgets.policy \"example-582r90pa3xyqjf1yxbgqrytik-cockroachdb-budget\" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: no RBAC policy matched, <nil>"}
github.com/operator-framework/operator-sdk/vendor/github.com/go-logr/zapr.(*zapLogger).Error
	/home/joe/go/src/github.com/operator-framework/operator-sdk/vendor/github.com/go-logr/zapr/zapr.go:128
github.com/operator-framework/operator-sdk/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/home/joe/go/src/github.com/operator-framework/operator-sdk/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:215
github.com/operator-framework/operator-sdk/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
	/home/joe/go/src/github.com/operator-framework/operator-sdk/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158
github.com/operator-framework/operator-sdk/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
	/home/joe/go/src/github.com/operator-framework/operator-sdk/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
github.com/operator-framework/operator-sdk/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil
	/home/joe/go/src/github.com/operator-framework/operator-sdk/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
github.com/operator-framework/operator-sdk/vendor/k8s.io/apimachinery/pkg/util/wait.Until
	/home/joe/go/src/github.com/operator-framework/operator-sdk/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88

I believe the problem can be fixed by replacing the two explicit resources listed at the link below with '*'.
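
In other words, something along these lines in the operator's RBAC rules (a sketch of the suggestion; the exact role in question isn't linked here):

rules:
- apiGroups:
  - policy
  resources:
  - '*'   # in place of the two explicitly listed resources
  verbs:
  - '*'   # or whatever verbs the role already grants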

Add Rook EdgeFS Operator

Create a new operator under community-operators for Rook, the storage orchestrator for Kubernetes. This is the upstream version.

It has all Rook's EdgeFS capabilities when it comes to creating, managing
and upgrading a cluster. Simply edit the cluster CR to apply any changes
to your deployment.

EdgeFS is part of Rook project and following the same guidelines as Ceph and the rest of providers.

EdgeFS differs significantly from Ceph and other storage providers. It can span an unlimited number of geographically distributed sites (Geo-sites), connected to each other as one global-namespace data fabric running on top of the Kubernetes platform, providing persistent, fault-tolerant and high-performance volumes for stateful Kubernetes applications.

At each Geo-site, EdgeFS nodes are deployed as containers (a StatefulSet) on physical or virtual Kubernetes nodes, pooling the available storage capacity and presenting it via compatible emulated storage protocols (S3, NFS, iSCSI, etc.) to cloud-native applications running on the same or dedicated servers.

Additionally, EdgeFS allows transparently connecting many object cloud providers and sites as one fully synchronized, S3-compatible data layer. Not only does it save on ingress/egress thanks to its global deduplication properties, it also saves on capacity thanks to its ability to operate in a metadata-only synchronization mode, where data chunks are fetched on demand.

http://edgefs.io
https://rook.io/docs/rook/v1.0/edgefs-storage.html

ability to specify roleRef in permissions

In the permissions and clusterPermissions section I would like to be able to specify a pre-defined role or clusterRole to be used for the permissions of an SA rather than having to expand all of the rules. For example, I would like to be able to say

      clusterPermissions:
      - serviceAccountName: cluster-logging-operator
        roleRefs:
        - cluster-reader
        - ... some other roles ...
        rules:
        - apiGroups:
          - scheduling.k8s.io
          resources:
          - priorityclasses
          verbs:
          - "*"
         ... other additional rules not covered by roles ...

Question: annotated examples

What is the best practice for sharing a file with annotated examples? Can we ship readme files in community-operators and upstream-community-operators?

Our examples live in a .yaml file with extensive annotations. The goal of this doc is to give anyone what they need to dive deeper into their usage of our operator. It looks like alm-examples in planetscale-operator.v0.1.8.clusterserviceversion.yaml is JSON.

We have a 313-line example doc. Where's the best place for it to live in this repo?
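
For context, alm-examples is a CSV annotation holding a JSON array of sample custom resources, so today the examples live inline rather than as a separate README; a sketch with a placeholder CR (every value below is made up):

metadata:
  annotations:
    alm-examples: |-
      [
        {
          "apiVersion": "example.com/v1alpha1",
          "kind": "ExampleResource",
          "metadata": { "name": "sample" },
          "spec": {}
        }
      ]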

Missing metadata in templateservicebroker CSV

@awgreene could you please update your CSV with the missing fields? Here is a doc with the definitions, etc: https://github.com/operator-framework/operator-marketplace/blob/master/docs/marketplace-required-csv-annotations.md

Thanks!

$ operator-courier verify --ui_validate_io .
WARNING:operatorcourier.validate:csv metadata.annotations.containerImage not defined
WARNING:operatorcourier.validate:csv metadata.annotations.createdAt not defined
WARNING:operatorcourier.validate:csv metadata.annotations.support not defined
WARNING:operatorcourier.validate:csv metadata.annotations.certified not defined.
WARNING:operatorcourier.validate:csv spec.icon not defined
WARNING:operatorcourier.validate:csv metadata.annotations.repository not defined.Without this field, the link to the operator source code will not be displayed in the UI.
WARNING:operatorcourier.validate:csv metadata.annotations.createdAt not defined.Without this field, the time stamp at which the operator was created will not be displayed in the UI.
ERROR:operatorcourier.validate:csv metadata.annotations.containerImage not defined. Without this field, the link to the operator image will not be displayed in the UI.
WARNING:operatorcourier.validate:csv spec.icon not defined. Without this field, the operator will display a default operator framework icon.
ERROR:operatorcourier.validate:UI validation failed to verify required fields for operatorhub.io exist.
ERROR:operatorcourier.api:Bundle failed validation.
Resulting bundle is invalid, input yaml is improperly defined.

Change title of PlanetScale Operator

Making an official request on behalf of PlanetScale:

To change the title "PlanetScale Operator" on your list of community operators to "PlanetScale Operator for Vitess".

cc @jvaidya

Did Crunchy's operator subscription stop being noticed by OLM?

Last week I was using Crunchy PostgreSQL Operator v3.5.0 and all was going well (sort of: I had to correct the deployment config and also change the service account, but that's a whole other topic. Long story short: it was working).

Today I had to deploy it in another cluster and, to my surprise, it never gets installed. As far as I can tell, the subscription never gets noticed by the OLM pod (olm-operator-<hash> in the olm namespace)...

I also noticed a possible update in the OLM installation because it now adds

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: operatorhubio-catalog
  namespace: olm
spec:
  sourceType: grpc
  image: quay.io/operator-framework/upstream-community-operators:latest
  displayName: Community Operators
  publisher: OperatorHub.io

which conflicts with what the operator is trying to install (here).

This is the subscription I have requested:

$ kubectl get subscriptions -n operators
NAME            PACKAGE      SOURCE                  CHANNEL
my-postgresql   postgresql   operatorhubio-catalog   alpha

Can anyone point me to a debug strategy so I can further inspect what went wrong and why the OLM never seems to pick up the fact it needs to add an operator to the cluster?

Regards, DAVI
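
A debugging sketch that usually narrows this kind of problem down (namespaces per the default OLM install; packagemanifests requires the packageserver to be running):

kubectl -n olm get catalogsource,pods
kubectl get packagemanifests | grep postgresql
kubectl -n operators describe subscription my-postgresql
kubectl -n operators get installplans
kubectl -n olm logs deploy/catalog-operator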

Missing metadata in camel-k CSV

@anik120 could you please update your CSV with the missing fields? Here is a doc with the definitions, etc: https://github.com/operator-framework/operator-marketplace/blob/master/docs/marketplace-required-csv-annotations.md

Thanks!

$ operator-courier verify --ui_validate_io .
WARNING:operatorcourier.validate:csv metadata.annotations.createdAt not defined
WARNING:operatorcourier.validate:csv metadata.annotations.support not defined
WARNING:operatorcourier.validate:csv spec.icon not defined
WARNING:operatorcourier.validate:csv spec.maturity not defined
ERROR:operatorcourier.validate:csv metadata.annotations.capabilities not defined. Without this field, the operator will be assigned the basic install capability - you can read more about operator maturity models here https://www.operatorhub.io/getting-started#How-do-I-start-writing-an-Operator?.
WARNING:operatorcourier.validate:csv metadata.annotations.repository not defined.Without this field, the link to the operator source code will not be displayed in the UI.
WARNING:operatorcourier.validate:csv metadata.annotations.createdAt not defined.Without this field, the time stamp at which the operator was created will not be displayed in the UI.
WARNING:operatorcourier.validate:csv spec.icon not defined. Without this field, the operator will display a default operator framework icon.
ERROR:operatorcourier.validate:UI validation failed to verify required fields for operatorhub.io exist.
ERROR:operatorcourier.api:Bundle failed validation.
Resulting bundle is invalid, input yaml is improperly defined.

Link in README.md not working

The link behind the text below, in the README.md file under the Operator CI Pipeline section, returns a 404 error:

You can learn more about the tests run on submitted Operators in this doc

Conflicting Postgres Operator Names

There are two different PostgreSQL operators in upstream-community-operators (postgres-operator and postgresql), and they both appear to use the same internal name, postgres-operator. I noticed the issue on operatorhub.io because both operators link to the same Crunchy Data page for the operator.

Not sure what other issues this may be causing.
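
The internal name comes from each package's *.package.yaml, so a quick way to confirm the clash (a sketch, run from the repo root):

grep -r packageName upstream-community-operators/postgres-operator upstream-community-operators/postgresql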

CI often fails on branch builds

The CI, which uses TravisCI today, was written to test operator changes from incoming PRs. Because of this, the behavior of these tests when run against a non-PR branch is unpredictable; merging a PR that passes CI can still cause the master branch build to fail.
The CI should support builds against branches in addition to PR builds.

To begin the discussion, what should the behavior be for CI running against the latest master?

Missing metadata in couchbase-enterprise CSV

@awgreene could you please update your CSV with the missing fields? Here is a doc with the definitions, etc: https://github.com/operator-framework/operator-marketplace/blob/master/docs/marketplace-required-csv-annotations.md

Thanks!

$ operator-courier verify --ui_validate_io .
WARNING:operatorcourier.validate:csv metadata.annotations.containerImage not defined
WARNING:operatorcourier.validate:csv metadata.annotations.createdAt not defined
WARNING:operatorcourier.validate:csv metadata.annotations.support not defined
WARNING:operatorcourier.validate:csv metadata.annotations.certified not defined.
WARNING:operatorcourier.validate:csv metadata.annotations.repository not defined.Without this field, the link to the operator source code will not be displayed in the UI.
WARNING:operatorcourier.validate:csv metadata.annotations.createdAt not defined.Without this field, the time stamp at which the operator was created will not be displayed in the UI.
ERROR:operatorcourier.validate:csv metadata.annotations.containerImage not defined. Without this field, the link to the operator image will not be displayed in the UI.
ERROR:operatorcourier.validate:UI validation failed to verify required fields for operatorhub.io exist.
ERROR:operatorcourier.api:Bundle failed validation.
Resulting bundle is invalid, input yaml is improperly defined.

Adding Seldon Go Operator

Seldon Core is an open source machine learning orchestration and deployment framework built on Kubernetes, with integrations to popular ML frameworks (e.g. TensorFlow, scikit-learn).

We recently launched the Seldon Go Operator, which is now in production orchestrating machine learning deployments. We would like to submit a PR to add Seldon as a Community Operator. Our operator was initially built with Kubebuilder, so it had a different structure, but we have now made the structural changes required for it to work with the operator-sdk command.

We have made relevant changes to our Seldon Go Operator to make sure it can be aligned, in order to test it with the operator-sdk you can pull the operatorhub_scorecard branch in a fork of our operator: https://github.com/axsauze/seldon-operator/tree/operatorhub_scorecard.

Right now we are experiencing a segfault when running the scorecard command. To reproduce you can run the following command from the top level directory:

operator-sdk scorecard --cr-manifest config/crds/machinelearning_v1alpha2_seldondeployment.json --csv-path ./seldonoperator.0.1.2.clusterserviceversion.yaml

You will actually be able to see that the seldon operator is started and terminated:

kubectl get pods -w
NAME                                   READY   STATUS    RESTARTS   AGE
seldon-operator-controller-manager-0   1/1     Running   1          43s
seldon-operator-controller-manager-0   1/1     Terminating   1          60s

Here is the seg fault error mentioned above:

donoperator.v1alpha2.clusterserviceversion.yaml
WARN[0000] Could not load config file; using flags
Running for cr: config/crds/machinelearning_v1alpha2_seldondeployment.yaml
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x48 pc=0x1820aac]

goroutine 1 [running]:
github.com/operator-framework/operator-sdk/vendor/k8s.io/apimachinery/pkg/apis/meta/v1.(*ObjectMeta).GetNamespace(...)
        ~/go/src/github.com/operator-framework/operator-sdk/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/meta.go:131
github.com/operator-framework/operator-sdk/internal/pkg/scorecard.getProxyLogs(0x0, 0x0, 0x0, 0x0, 0x0)
        ~/go/src/github.com/operator-framework/operator-sdk/internal/pkg/scorecard/resource_handler.go:374 +0x1bc
github.com/operator-framework/operator-sdk/internal/pkg/scorecard.(*WritingIntoCRsHasEffectTest).Run(0xc000878370, 0x209a6c0, 0xc0000560d0, 0x1)
        ~/go/src/github.com/operator-framework/operator-sdk/internal/pkg/scorecard/basic_tests.go:144 +0x82
github.com/operator-framework/operator-sdk/internal/pkg/scorecard.(*TestSuite).Run(0xc00063c5b0, 0x209a6c0, 0xc0000560d0)
        ~/go/src/github.com/operator-framework/operator-sdk/internal/pkg/scorecard/test_definitions.go:102 +0x95
github.com/operator-framework/operator-sdk/internal/pkg/scorecard.runTests(0x0, 0x0, 0x0, 0x0, 0x0)
        ~/go/src/github.com/operator-framework/operator-sdk/internal/pkg/scorecard/scorecard.go:291 +0x2d0b
github.com/operator-framework/operator-sdk/internal/pkg/scorecard.ScorecardTests(0xc0003eb680, 0xc0004b4b00, 0x0, 0x4, 0x0, 0x0)
        ~/go/src/github.com/operator-framework/operator-sdk/internal/pkg/scorecard/scorecard.go:382 +0x8a
github.com/operator-framework/operator-sdk/vendor/github.com/spf13/cobra.(*Command).execute(0xc0003eb680, 0xc0004b4ac0, 0x4, 0x4, 0xc0003eb680, 0xc0004b4ac0)
        ~/go/src/github.com/operator-framework/operator-sdk/vendor/github.com/spf13/cobra/command.go:762 +0x465
github.com/operator-framework/operator-sdk/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0000fcc80, 0x2053500, 0xc0003ca200, 0x0)
        ~/go/src/github.com/operator-framework/operator-sdk/vendor/github.com/spf13/cobra/command.go:852 +0x2ec
github.com/operator-framework/operator-sdk/vendor/github.com/spf13/cobra.(*Command).Execute(...)
        ~/go/src/github.com/operator-framework/operator-sdk/vendor/github.com/spf13/cobra/command.go:800
main.main()
        ~/go/src/github.com/operator-framework/operator-sdk/cmd/operator-sdk/main.go:80 +0x4ce

prometheusoperator.0.22.2 has incorrect InstallModes

The CSV for prometheusoperator.0.22.2 claims to support OperatorGroups that select multiple namespaces, but does not surface the selection to the operator's deployment:

  installModes:
  - type: OwnNamespace
    supported: true
  - type: SingleNamespace
    supported: true
  - type: MultiNamespace
    supported: true
  - type: AllNamespaces
    supported: false
 containers:
              - name: prometheus-operator
                image: quay.io/coreos/prometheus-operator@sha256:3daa69a8c6c2f1d35dcf1fe48a7cd8b230e55f5229a1ded438f687debade5bcf
                args:
                - -namespace=$(K8S_NAMESPACE)
                - -manage-crds=false
                - -logtostderr=true
                - --config-reloader-image=quay.io/coreos/configmap-reload:v0.0.1
                - --prometheus-config-reloader=quay.io/coreos/prometheus-config-reloader:v0.22.2
                env:
                - name: K8S_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
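
For a CSV that genuinely supports MultiNamespace, OLM injects the selected namespaces into the deployment as the olm.targetNamespaces annotation, which the container can consume via the downward API instead of metadata.namespace; a sketch (assuming an OLM version with OperatorGroup support):

env:
- name: K8S_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.annotations['olm.targetNamespaces']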

Missing metadata in descheduler CSV

@ravisantoshgudimetla could you please update your CSV with the missing fields? Here is a doc with the definitions, etc: https://github.com/operator-framework/operator-marketplace/blob/master/docs/marketplace-required-csv-annotations.md

Thanks!

$ operator-courier verify --ui_validate_io .
WARNING:operatorcourier.validate:csv metadata.annotations.certified not defined.
WARNING:operatorcourier.validate:csv spec.icon not defined
WARNING:operatorcourier.validate:csv spec.maturity not defined
ERROR:operatorcourier.validate:csv spec.links not defined. Without this field, no links will be displayed in the details page side panel. You can for example link to some additional Documentation, related Blogs or Repositories.
WARNING:operatorcourier.validate:csv spec.icon not defined. Without this field, the operator will display a default operator framework icon.
ERROR:operatorcourier.validate:UI validation failed to verify required fields for operatorhub.io exist.
ERROR:operatorcourier.api:Bundle failed validation.
Resulting bundle is invalid, input yaml is improperly defined.

OneAgent Dynatrace Operator not working on vanilla k8s

OneAgent Dynatrace CSV contains permissions that are only understood on OpenShift:

 clusterPermissions:
        - rules:
            - verbs:
                - use
              apiGroups:
                - security.openshift.io
              resources:
                - securitycontextconstraints
              resourceNames:
                - privileged
                - host
          serviceAccountName: dynatrace-oneagent

... fails to load with:

"spec.install" must validate one and only one schema (oneOf). Found none valid spec.install.spec.permissions.rules.verbs in body should be one of [* assign get list watch create update patch delete deletecollection initialize]

There needs to be either a separate version of this CSV for vanilla k8s, or this Operator needs to be removed from upstream-community-operators.

Improve description of SVT Operator

The SVT Operator description lacks context on what it actually manages. The SVT tool is not described, nor are any examples or upstream repositories/docs referenced.
The container image referenced comes from a personal Docker Hub account, despite this apparently being a Red Hat project.

CC @hongkailiu

Missing metadata in node-network-operator CSV

@pliurh could you please update your CSV with the missing fields? Here is a doc with the definitions, etc: https://github.com/operator-framework/operator-marketplace/blob/master/docs/marketplace-required-csv-annotations.md

Thanks!

$ operator-courier verify --ui_validate_io .
ERROR:operatorcourier.validate:csv spec.maintainers not defined. Without this field, the operator details page will not display the name and contact for users to get support in using the operator. The field should be a yaml list of name & email pairs.
ERROR:operatorcourier.validate:UI validation failed to verify required fields for operatorhub.io exist.
ERROR:operatorcourier.api:Bundle failed validation.
Resulting bundle is invalid, input yaml is improperly defined.
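
The shape the validator is asking for (name and email here are placeholders):

spec:
  maintainers:
  - name: Jane Maintainer
    email: maintainer@example.com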

Duplicate Spinnaker Operator

@j-sandy - there are two Spinnaker Operators from OpsMx. They actually look to be the same Operator - can you please submit a PR deleting one of them? Alternatively, explain the difference :)

Missing metadata in automationbroker CSV

@shawn-hurley could you please update your CSV with the missing fields? Here is a doc with the definitions, etc: https://github.com/operator-framework/operator-marketplace/blob/master/docs/marketplace-required-csv-annotations.md

Thanks!

$ operator-courier verify --ui_validate_io .
WARNING:operatorcourier.validate:csv metadata.annotations.containerImage not defined
WARNING:operatorcourier.validate:csv metadata.annotations.createdAt not defined
WARNING:operatorcourier.validate:csv metadata.annotations.support not defined
WARNING:operatorcourier.validate:csv metadata.annotations.certified not defined.
WARNING:operatorcourier.validate:csv metadata.annotations.repository not defined.Without this field, the link to the operator source code will not be displayed in the UI.
WARNING:operatorcourier.validate:csv metadata.annotations.createdAt not defined.Without this field, the time stamp at which the operator was created will not be displayed in the UI.
ERROR:operatorcourier.validate:csv metadata.annotations.containerImage not defined. Without this field, the link to the operator image will not be displayed in the UI.
ERROR:operatorcourier.validate:UI validation failed to verify required fields for operatorhub.io exist.
ERROR:operatorcourier.api:Bundle failed validation.

REQUEST: Support all types of community operators

This is a follow up to the discussion on Slack. I would like to start with thanking RedHat for investing in what I hope will be a great community resource.

I am concerned about the very strong ties between Operator Lifecycle Manager and OperatorHub/this repo. OLM is an interesting and powerful tool that definitely answers a need for folks at the top end of the complexity spectrum (such as big, multi-tenant clusters). However, the operator pattern has been embraced across the entire Kubernetes community, including many places where that level of complexity is not required.

Additionally, OLM is a fairly opinionated tool; for example, it requires that CRDs be available as YAML and managed outside of the operator, versus the increasing pattern of having operators self-register their CRDs. For some operators, a Helm chart is an entirely reasonable deployment mechanism, even if it does not address all of the same edge cases that OLM does.

Beyond the complexity issues (OLM is a fairly complex suite of metadata files), there is also the problem that every update to the OLM manifests (I think) has to go through this repo, which means it can be blocked on sign-off from a completely unrelated project. I'm sure the folks on this repo will do their best, but that seems like a solution that will not scale to larger community uptake.

As a more general statement, I would like to take a few steps back and try to separate out the (very good) RedHat operator stack from the broader community use of the term. I feel that RedHat is perceived by the community, or at least by me, as trying to exert some level of ownership over the term "operator" as the originators of that term and pattern, however the community has grown far beyond those specific initial meanings and I think OperatorHub should reflect that.

That said, some level of metadata is clearly needed to usefully operate a website like this. I think maybe a good path forward would be to identify some limited subset of the OLM metadata that can act as a more general purpose operator metadata, without the specifics of installation and configuration management that OLM implements. Off the top of my head, I think maybe an improved "channels" function that allows drawing the operator manifests directly from their own repository (or other HTTP server), and using a stripped down version of the existing ClusterServiceVersion with:

  • Name
  • Version
  • Description
  • Keywords
  • Maturity (debatable)
  • Maintainers
  • Links
  • Icon

To that we would need to add fields to describe the installation. In the OLM case, it would use the remaining metadata. For Helm, we could describe the Helm repository and chart name, possibly with some optional info about the chart values. For plain manifests, it could have a link to the manifest with a description.
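
A purely hypothetical sketch of what such a stripped-down metadata file could look like, with one install section per supported method (every value below is made up):

name: example-operator
version: 1.2.3
description: Manages Example on Kubernetes.
keywords: [example, database]
maturity: beta
maintainers:
- name: Jane Maintainer
  email: jane@example.com
links:
- name: Source
  url: https://github.com/example/example-operator
icon: https://example.com/icon.svg
install:
  helm:
    repository: https://charts.example.com
    chart: example-operator
  # or: manifests: https://example.com/deploy.yaml
  # or: olm: {packageName: example-operator, channel: stable}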

I would love to see this website live up to its tag line of "a new home for the Kubernetes community to share Operators", and I think an important step will be to make OLM only one of many options for installation and management that have been embraced by our community.

Missing metadata in metering CSV

@chancez could you please update your CSV with the missing fields? Here is a doc with the definitions, etc: https://github.com/operator-framework/operator-marketplace/blob/master/docs/marketplace-required-csv-annotations.md

Thanks!

$ operator-courier verify --ui_validate_io .
WARNING:operatorcourier.validate:csv spec.icon not defined
ERROR:operatorcourier.validate:csv metadata.annotations.capabilities not defined. Without this field, the operator will be assigned the basic install capability - you can read more about operator maturity models here https://www.operatorhub.io/getting-started#How-do-I-start-writing-an-Operator?.
WARNING:operatorcourier.validate:csv metadata.annotations.repository not defined.Without this field, the link to the operator source code will not be displayed in the UI.
ERROR:operatorcourier.validate:csv spec.links not defined. Without this field, no links will be displayed in the details page side panel. You can for example link to some additional Documentation, related Blogs or Repositories.
WARNING:operatorcourier.validate:csv spec.icon not defined. Without this field, the operator will display a default operator framework icon.
ERROR:operatorcourier.validate:UI validation failed to verify required fields for operatorhub.io exist.
ERROR:operatorcourier.api:Bundle failed validation.
Resulting bundle is invalid, input yaml is improperly defined.

upstream.Dockerfile build fails due to percona 0.3.0-rc1 validation error

Attempting to build upstream.Dockerfile fails with the following error:
...
time="2019-02-28T08:04:26Z" level=info msg="found csv, loading bundle" dir=manifests file=percona-xtradb-cluster-operator-community.v0.3.0-rc1.clusterserviceversion.yaml load=bundles time="2019-02-28T08:04:26Z" level=fatal msg="could not decode contents of file manifests/percona/0.3.0-rc1/percona-xtradb-cluster-operator-community.v0.3.0-rc1.clusterserviceversion.yaml into CSV: v1alpha1.ClusterServiceVersion.Spec: v1alpha1.ClusterServiceVersionSpec.Provider: readObjectStart: expect { or n, but found \", error found in #10 byte of ...|rovider\":\"Percona\",\"|..., bigger context ...|luster-operator\"}],\"maturity\":\"alpha\",\"provider\":\"Percona\",\"version\":\"0.3.0-rc1\"}}|..."
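
The decode error is saying spec.provider must be an object, not a bare string; i.e. the CSV needs the second shape below rather than the first:

# fails to decode
provider: Percona

# expected shape
provider:
  name: Percona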

Federation Operator doc issue - kubefed2 installation

Following the instructions under the Federation Operator for "Get the kubefed2 CLI tool", the method for retrieving the binary from a container run seems to fail locally:

$ docker run --name=hyperfed --entrypoint=/bin/sh quay.io/openshift/origin-federation-controller:v4.0.0 sleep 50000
/usr/bin/sleep: /usr/bin/sleep: cannot execute binary file

There's probably just a tweak needed to the container run command to get around that error, but I ended up grabbing the kubefed2 binary from the releases here: https://github.com/kubernetes-sigs/federation-v2/releases
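
The failure looks like a quoting problem: with --entrypoint=/bin/sh, the trailing sleep 50000 is handed to the shell as a script path, so sh tries to interpret the sleep binary as a script. Wrapping the command in -c should keep the container alive long enough to copy the binary out (a sketch; the in-image path of kubefed2 is a guess):

docker run -d --name=hyperfed --entrypoint=/bin/sh \
  quay.io/openshift/origin-federation-controller:v4.0.0 -c 'sleep 50000'
docker cp hyperfed:/usr/bin/kubefed2 .   # path inside the image is an assumption
docker rm -f hyperfed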

docker version info below:

$ docker version
Client:
 Version:         1.13.1
 API version:     1.26
 Package version: docker-1.13.1-62.git9cb56fd.fc29.x86_64
 Go version:      go1.11beta2
 Git commit:      accfe55-unsupported
 Built:           Wed Jul 25 18:54:07 2018
 OS/Arch:         linux/amd64

Server:
 Version:         1.13.1
 API version:     1.26 (minimum version 1.12)
 Package version: docker-1.13.1-62.git9cb56fd.fc29.x86_64
 Go version:      go1.11beta2
 Git commit:      accfe55-unsupported
 Built:           Wed Jul 25 18:54:07 2018
 OS/Arch:         linux/amd64
 Experimental:    false

Operator install plans fail w/repeat count regexp parsing error on OCP 3.11

Attempting to create a couchbase v1.0.0 upstream-community-operator install plan on OCP 3.11 with the latest origin-operator-lifecycle-manager and updated OCP manifests fails with the regexp parsing error below:

catalog-operator-75c4bfcf59-j9ctt_openshift-operator-lifecycle-manager_catalog-operator-e1abc73ca1f136694e9069da67db15357d99ba43aa380071365889435a909e7d.log:{"log":"time=\"2019-02-28T11:34:13Z\" level=warning msg=\"no 
installplan found with matching manifests, creating new one\" id=6tpyu namespace=couchbase-test\n","stream":"stderr","time":"2019-02-28T11:34:13.042953747Z"}
catalog-operator-75c4bfcf59-j9ctt_openshift-operator-lifecycle-manager_catalog-operator-e1abc73ca1f136694e9069da67db15357d99ba43aa380071365889435a909e7d.log:{"log":"time=\"2019-02-28T11:34:13Z\" level=info msg=syncing 
id=6TftZ ip=install-pglww namespace=couchbase-test phase=\n","stream":"stderr","time":"2019-02-28T11:34:13.046178509Z"}
catalog-operator-75c4bfcf59-j9ctt_openshift-operator-lifecycle-manager_catalog-operator-e1abc73ca1f136694e9069da67db15357d99ba43aa380071365889435a909e7d.log:{"log":"time=\"2019-02-28T11:34:13Z\" level=info msg=\"skip p
rocessing installplan without status - subscription sync responsible for initial status\" id=6TftZ ip=install-pglww namespace=couchbase-test phase=\n","stream":"stderr","time":"2019-02-28T11:34:13.046203749Z"}
catalog-operator-75c4bfcf59-j9ctt_openshift-operator-lifecycle-manager_catalog-operator-e1abc73ca1f136694e9069da67db15357d99ba43aa380071365889435a909e7d.log:{"log":"time=\"2019-02-28T11:34:13Z\" level=info msg=syncing 
id=CAHyk ip=install-pglww namespace=couchbase-test phase=Installing\n","stream":"stderr","time":"2019-02-28T11:34:13.056421992Z"}
catalog-operator-75c4bfcf59-j9ctt_openshift-operator-lifecycle-manager_catalog-operator-e1abc73ca1f136694e9069da67db15357d99ba43aa380071365889435a909e7d.log:{"log":"time=\"2019-02-28T11:34:13Z\" level=info msg=\"retryi
ng couchbase-test/install-pglww\"\n","stream":"stderr","time":"2019-02-28T11:34:13.071293187Z"}
catalog-operator-75c4bfcf59-j9ctt_openshift-operator-lifecycle-manager_catalog-operator-e1abc73ca1f136694e9069da67db15357d99ba43aa380071365889435a909e7d.log:{"log":"E0228 11:34:13.071162       1 queueinformer_operator.
go:155] Sync \"couchbase-test/install-pglww\" failed: error creating csv couchbase-operator.v1.0.0: an error on the server (\"This request caused apiserver to panic. Look in the logs for details.\") has prevented the r
equest from succeeding (post clusterserviceversions.operators.coreos.com)\n","stream":"stderr","time":"2019-02-28T11:34:13.071330938Z"}
catalog-operator-75c4bfcf59-j9ctt_openshift-operator-lifecycle-manager_catalog-operator-e1abc73ca1f136694e9069da67db15357d99ba43aa380071365889435a909e7d.log:{"log":"time=\"2019-02-28T11:34:13Z\" level=info msg=syncing 
id=cYJU+ ip=install-pglww namespace=couchbase-test phase=Failed\n","stream":"stderr","time":"2019-02-28T11:34:13.075067324Z"}
catalog-operator-75c4bfcf59-j9ctt_openshift-operator-lifecycle-manager_catalog-operator-e1abc73ca1f136694e9069da67db15357d99ba43aa380071365889435a909e7d.log:{"log":"time=\"2019-02-28T11:34:13Z\" level=info msg=syncing 
id=JIPHu ip=install-pglww namespace=couchbase-test phase=Failed\n","stream":"stderr","time":"2019-02-28T11:34:13.432394434Z"}
master-api-localhost_kube-system_api-7bbb74f5bfa1af0a17cdec93e915657d8951c8e8a2baf5356a8efb8570f00978.log:{"log":"E0228 09:57:01.978287       1 wrap.go:34] apiserver panic'd on POST /apis/operators.coreos.com/v1alpha1/
namespaces/couchbase-test/clusterserviceversions: regexp: Compile(`^(?:[A-Za-z0-9+/]{4}){0,16250}(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$`): error parsing regexp: invalid repeat count: `{0,16250}`\n","stream":"stderr
","time":"2019-02-28T09:57:01.978443885Z"}
master-api-localhost_kube-system_api-7bbb74f5bfa1af0a17cdec93e915657d8951c8e8a2baf5356a8efb8570f00978.log:{"log":"E0228 11:16:28.744381       1 wrap.go:34] apiserver panic'd on POST /apis/operators.coreos.com/v1alpha1/
namespaces/couchbase-test/clusterserviceversions: regexp: Compile(`^(?:[A-Za-z0-9+/]{4}){0,16250}(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$`): error parsing regexp: invalid repeat count: `{0,16250}`\n","stream":"stderr
","time":"2019-02-28T11:16:28.74448544Z"}
master-api-localhost_kube-system_api-7bbb74f5bfa1af0a17cdec93e915657d8951c8e8a2baf5356a8efb8570f00978.log:{"log":"E0228 11:34:13.062078       1 wrap.go:34] apiserver panic'd on POST /apis/operators.coreos.com/v1alpha1/
namespaces/couchbase-test/clusterserviceversions: regexp: Compile(`^(?:[A-Za-z0-9+/]{4}){0,16250}(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$`): error parsing regexp: invalid repeat count: `{0,16250}`\n","stream":"stderr
","time":"2019-02-28T11:34:13.062193863Z"}
master-controllers-localhost_kube-system_controllers-e0756e211ae5e63fcf77f6779c8ccf74f8cd10ca2e43943f71fd9e37addabc64.log:{"log":"E0228 09:48:59.343059       1 namespace_scc_allocation_controller.go:335] error syncing 
namespace, it will be retried: Operation cannot be fulfilled on namespaces \"couchbase-test\": the object has been modified; please apply your changes to the latest version and try again\n","stream":"stderr","time":"20
19-02-28T09:48:59.343289485Z"}
master-controllers-localhost_kube-system_controllers-e0756e211ae5e63fcf77f6779c8ccf74f8cd10ca2e43943f71fd9e37addabc64.log:{"log":"E0228 09:48:59.375021       1 namespace_scc_allocation_controller.go:335] error syncing 
namespace, it will be retried: Operation cannot be fulfilled on namespaces \"couchbase-test\": the object has been modified; please apply your changes to the latest version and try again\n","stream":"stderr","time":"20
19-02-28T09:48:59.375165088Z"}
master-controllers-localhost_kube-system_controllers-e0756e211ae5e63fcf77f6779c8ccf74f8cd10ca2e43943f71fd9e37addabc64.log:{"log":"I0228 11:29:10.832636       1 garbagecollector.go:408] processing item [operators.coreos
.com/v1alpha1/InstallPlan, namespace: couchbase-test, name: install-z9pwc, uid: 4b3a08ba-3b4a-11e9-a352-024eeff82cec]\n","stream":"stderr","time":"2019-02-28T11:29:10.832848564Z"}
master-controllers-localhost_kube-system_controllers-e0756e211ae5e63fcf77f6779c8ccf74f8cd10ca2e43943f71fd9e37addabc64.log:{"log":"I0228 11:29:10.842575       1 garbagecollector.go:521] delete object [operators.coreos.c
om/v1alpha1/InstallPlan, namespace: couchbase-test, name: install-z9pwc, uid: 4b3a08ba-3b4a-11e9-a352-024eeff82cec] with propagation policy Background\n","stream":"stderr","time":"2019-02-28T11:29:10.842734206Z"}

Wrong URL for installing OLM

I have tried to install Operator Lifecycle Manager as per instructions here:
https://www.operatorhub.io/how-to-install-an-operator#What-happens-when-I-execute-the-'Install'-command-presented-in-the-pop-up?
it says:

kubectl create -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/quickstart/deploy/upstream/quickstart/olm.yaml

Which ends with:

error: unable to read URL "https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/quickstart/deploy/upstream/quickstart/olm.yaml", server reported 404 Not Found, status code=404

The link is dead.

It would be nice if the installation instructions were corrected, as this is a blocker for any further progress.
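
For what it's worth, the equivalent master-branch path (the one used in the CrashLoopBackOff report above) did resolve at the time:

kubectl create -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/olm.yaml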

CI scripts to install OLM should use the same versions as mentioned in the testing document

The Testing Operators document uses OLM release version 0.10.0, but the CI script uses 0.8.1.

Files
https://github.com/operator-framework/community-operators/blob/master/scripts/ci/install-olm-local

# Try twice, since order matters
kubectl apply -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.8.1/olm.yaml
kubectl apply -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.8.1/olm.yaml

The Testing document suggests using 0.10.0

2. Install OLM
Install OLM into the cluster in the olm namespace:
kubectl apply -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.10.0/crds.yaml
kubectl apply -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.10.0

Tool to build CSV automatically?

Out of curiosity: does everyone write CSVs manually? I saw that operator-sdk scorecard can validate the files (and maybe generate them; I didn't dig too much).

Thanks.
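
If memory serves, operator-sdk of that era shipped a CSV generator; worth checking against your installed version's help output, since the flags moved around between releases:

operator-sdk olm-catalog gen-csv --csv-version 0.1.0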

Missing metadata in percona CSV

@lilic could you please update your CSV with the missing fields? Here is a doc with the definitions, etc: https://github.com/operator-framework/operator-marketplace/blob/master/docs/marketplace-required-csv-annotations.md

Thanks!

$ operator-courier verify --ui_validate_io .
WARNING:operatorcourier.validate:csv metadata.annotations.repository not defined.Without this field, the link to the operator source code will not be displayed in the UI.
ERROR:operatorcourier.validate:csv spec.maintainers not defined. Without this field, the operator details page will not display the name and contact for users to get support in using the operator. The field should be a yaml list of name & email pairs.
ERROR:operatorcourier.validate:UI validation failed to verify required fields for operatorhub.io exist.
ERROR:operatorcourier.api:Bundle failed validation.
Resulting bundle is invalid, input yaml is improperly defined.

Missing metadata in postgresql CSV

@jmckind could you please update your CSV with the missing fields? Here is a doc with the definitions, etc: https://github.com/operator-framework/operator-marketplace/blob/master/docs/marketplace-required-csv-annotations.md

Thanks!

$ operator-courier verify --ui_validate_io .
WARNING:operatorcourier.validate:csv metadata.annotations.alm-examples not defined.Without this field, users will not have examples of how to write Custom Resources for the operator.
ERROR:operatorcourier.validate:You should have alm-examples for every owned CRD
ERROR:operatorcourier.validate:UI validation failed to verify that required fields for operatorhub.io are properly formatted.
ERROR:operatorcourier.api:Bundle failed validation.
Resulting bundle is invalid, input yaml is improperly defined.

Add KubeVirt Operator

Hey,

Should KubeVirt rather reside in community or redhat?

My 2ct: Community for now.

Kubernetes Federation Operator Not Working

Before I begin, let me paste the versions of my setup:
kubefed2: v0.0.6-39-g10df7a98
Kubernetes: 1.12.5-gke.5

I'm using GKE and, to keep my tests as simple as possible, I've created 2 clusters (one node each) using Ubuntu's image (an internal requirement).

I followed every step I read about:

  1. How to install an Operator;
  2. Installing the operator;
  3. Checking that every Pod is up and running (both in OLM and Operators namespaces).

Here things start to break down... As per the instructions, I'm supposed to enable the namespaces, but if I do I receive the following error message:

$ kubefed2 enable namespaces --federation-namespace operators
customresourcedefinition.apiextensions.k8s.io/federatednamespaces.types.federation.k8s.io created
F0307 02:06:44.590594   10200 enable.go:111] error: Error creating FederatedTypeConfig "namespaces": FederatedTypeConfig.core.federation.k8s.io "namespaces" is invalid: []: Invalid value: map[string]interface {}{"kind":"FederatedTypeConfig", "apiVersion":"core.federation.k8s.io/v1alpha1", "metadata":map[string]interface {}{"uid":"cd4f22fe-4096-11e9-87c6-42010a9e0fcc", "name":"namespaces", "namespace":"operators", "creationTimestamp":"2019-03-07T05:06:44Z", "generation":1}, "spec":map[string]interface {}{"target":map[string]interface {}{"pluralName":"namespaces", "version":"v1", "kind":"Namespace"}, "namespaced":false, "propagationEnabled":true, "federatedType":map[string]interface {}{"pluralName":"federatednamespaces", "group":"types.federation.k8s.io", "version":"v1alpha1", "kind":"FederatedNamespace"}}, "status":map[string]interface {}{}}: validation failure list:
spec.comparisonField in body is required
spec.template in body is required
spec.placement in body is required

I had to add --federation-namespace operators myself since there is not a single mention that the Federation Controller expects to run under federation-system. Anyhow, the User Guide lists this step after both clusters are joined, so I momentarily ignored this issue and went on to join them.

Next, I join the two clusters with the following commands:

$ kubefed2 join cluster1 --cluster-context cluster1 --host-cluster-context cluster1 --add-to-registry --v=2 --federation-namespace operators
$ kubefed2 join cluster2 --cluster-context cluster2 --host-cluster-context cluster1 --add-to-registry --v=2 --federation-namespace operators

I can see that the secrets are created with the right content, but it still doesn't work. This time, the log reads as follows:

$ kubectl logs -f federation-controller-manager-778675cd8c-fnb9j -n operators
I0307 05:02:32.580341       1 main.go:77] Version: {Version:v0.0.1-alpha.0 GitCommit:unknown GitTreeState:unknown BuildDate:unknown GoVersion:go1.10.1 Compiler:gc Platform:linux/amd64}
I0307 05:02:32.580790       1 feature_gate.go:230] feature gates: &{map[]}
I0307 05:02:32.581135       1 main.go:104] Federation namespace: operators
I0307 05:02:32.581198       1 main.go:110] Cluster registry namespace: operators
I0307 05:02:32.581260       1 main.go:115] Federation will be limited to the "operators" namespace
I0307 05:02:32.582134       1 controller.go:89] Starting cluster controller
I0307 05:02:32.583019       1 controller.go:99] Starting replicaschedulingpreferences controller
I0307 05:02:32.583955       1 controller.go:103] Starting MultiClusterServiceDNS controller
I0307 05:02:32.584808       1 controller.go:101] Starting MultiClusterIngressDNS controller
I0307 05:02:32.585139       1 controller.go:95] Starting FederatedTypeConfig controller
I0307 05:02:32.677652       1 controller.go:112] Starting "service" DNSEndpoint controller
I0307 05:02:32.677933       1 controller.go:112] Starting "ingress" DNSEndpoint controller
I0307 05:02:33.174305       1 controller.go:123] "ingress" DNSEndpoint controller synced and ready
I0307 05:02:33.180497       1 controller.go:123] "service" DNSEndpoint controller synced and ready
I0307 05:11:11.258288       1 federated_informer.go:216] Cluster operators/cluster1 not added; it is not ready.
I0307 05:11:11.258392       1 federated_informer.go:216] Cluster operators/cluster1 not added; it is not ready.
I0307 05:11:11.258455       1 federated_informer.go:216] Cluster operators/cluster1 not added; it is not ready.
I0307 05:11:11.258521       1 federated_informer.go:216] Cluster operators/cluster1 not added; it is not ready.
I0307 05:11:11.258936       1 federated_informer.go:216] Cluster operators/cluster1 not added; it is not ready.
I0307 05:11:11.258994       1 federated_informer.go:216] Cluster operators/cluster1 not added; it is not ready.
E0307 05:11:11.268554       1 controller.go:163] Failed to create corresponding restclient of kubernetes cluster: clusters.clusterregistry.k8s.io "cluster1" not found
W0307 05:11:13.259003       1 controller.go:200] Failed to get client for cluster cluster1
W0307 05:11:53.268179       1 controller.go:200] Failed to get client for cluster cluster1
W0307 05:12:33.274642       1 controller.go:200] Failed to get client for cluster cluster1
W0307 05:13:13.282018       1 controller.go:200] Failed to get client for cluster cluster1
W0307 05:13:53.288814       1 controller.go:200] Failed to get client for cluster cluster1
W0307 05:14:33.298364       1 controller.go:200] Failed to get client for cluster cluster1
W0307 05:15:13.301249       1 controller.go:200] Failed to get client for cluster cluster1
W0307 05:15:53.308184       1 controller.go:200] Failed to get client for cluster cluster1
W0307 05:16:33.316091       1 controller.go:200] Failed to get client for cluster cluster1
W0307 05:17:13.323766       1 controller.go:200] Failed to get client for cluster cluster1
W0307 05:17:53.326507       1 controller.go:200] Failed to get client for cluster cluster1
W0307 05:18:33.332443       1 controller.go:200] Failed to get client for cluster cluster1
W0307 05:19:13.339601       1 controller.go:200] Failed to get client for cluster cluster1
W0307 05:19:53.347379       1 controller.go:200] Failed to get client for cluster cluster1
I0307 05:20:30.219088       1 federated_informer.go:216] Cluster operators/cluster2 not added; it is not ready.
I0307 05:20:30.219272       1 federated_informer.go:216] Cluster operators/cluster2 not added; it is not ready.
I0307 05:20:30.219341       1 federated_informer.go:216] Cluster operators/cluster2 not added; it is not ready.
I0307 05:20:30.219420       1 federated_informer.go:216] Cluster operators/cluster2 not added; it is not ready.
I0307 05:20:30.220725       1 federated_informer.go:216] Cluster operators/cluster2 not added; it is not ready.
I0307 05:20:30.220800       1 federated_informer.go:216] Cluster operators/cluster2 not added; it is not ready.
E0307 05:20:30.229959       1 controller.go:163] Failed to create corresponding restclient of kubernetes cluster: clusters.clusterregistry.k8s.io "cluster2" not found
W0307 05:20:33.351895       1 controller.go:200] Failed to get client for cluster cluster1
W0307 05:20:33.352234       1 controller.go:200] Failed to get client for cluster cluster2
W0307 05:21:13.362653       1 controller.go:200] Failed to get client for cluster cluster1
W0307 05:21:13.362677       1 controller.go:200] Failed to get client for cluster cluster2
W0307 05:21:53.365988       1 controller.go:200] Failed to get client for cluster cluster1
W0307 05:21:53.366013       1 controller.go:200] Failed to get client for cluster cluster2
W0307 05:22:33.370930       1 controller.go:200] Failed to get client for cluster cluster1
W0307 05:22:33.370961       1 controller.go:200] Failed to get client for cluster cluster2
W0307 05:23:13.374098       1 controller.go:200] Failed to get client for cluster cluster1
W0307 05:23:13.374123       1 controller.go:200] Failed to get client for cluster cluster2
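
For anyone debugging the same thing: the clusters.clusterregistry.k8s.io "cluster1" not found errors made me double-check where the join actually put the Cluster registry objects (resource name taken verbatim from the error above; this only lists them, it doesn't change anything):

$ kubectl get clusters.clusterregistry.k8s.io --all-namespaces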

And now I'm at a loss. I get that this is all at the alpha stage, but I can't seem to get past the most basic step of the whole configuration, not to mention the confusion between the Operator's instructions and the User Guide...

Does anyone know how to fix these issues? Or did the Operator actually work and I'm just missing something?
