
application-monitoring-operator's People

Contributors

aidenkeating, austincunningham, austinmartinh, boomatang, briangallagher, carlkyrillos, cathaloconnorrh, david-martin, davidffrench, davidkirwan, grdryn, hvbe, jackdelahunt, laurafitzgerald, maleck13, matskiv, openshift-merge-robot, palonsoro, pb82, r-lawton, rajagopalan-ranganathan, sedroche, sergioifg94, steventobin, tadayosi, valerymo, wojta


application-monitoring-operator's Issues

Failed to provision volume with StorageClass "glusterfs-cns"

I'm having this error while deploying the application-monitoring-operator:

Failed to provision volume with StorageClass "glusterfs-cns": failed to create volume: failed to create endpoint/service application-monitoring/glusterfs-dynamic-a19ef757-5883-11ea-a191-005056bcda32: failed to create endpoint: Endpoints "glusterfs-dynamic-a19ef757-5883-11ea-a191-005056bcda32" is invalid: metadata.labels: Invalid value: "prometheus-application-monitoring-db-prometheus-application-monitoring-0": must be no more than 63 characters

I've tracked this problem to the prometheus-operator: helm/charts#13170

As a workaround, they suggest editing the storageSpec through Helm's values.yaml like so:

    storageSpec:
      volumeClaimTemplate:
        metadata:
          name: data

How can this be achieved in the application-monitoring-operator? Right now it's not possible to deploy it with glusterfs.
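One possible workaround (an assumption on my part, not a supported operator feature) is to shorten the generated PVC name by editing the storage section of the Prometheus CR that the operator creates; the Prometheus CRD exposes the same volumeClaimTemplate field the Helm chart does. The CR name below is a guess, and the operator may reconcile the change away:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: application-monitoring   # hypothetical name; use whatever CR the operator created
  namespace: application-monitoring
spec:
  storage:
    volumeClaimTemplate:
      metadata:
        name: data   # a short name keeps generated endpoint/service names under 63 chars
      spec:
        storageClassName: glusterfs-cns
        resources:
          requests:
            storage: 10Gi
```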

Best regards

Support multiple instances of application monitoring stack to be used to collect and show metrics for different users on the same cluster

In Integreatly, we are planning to have 2 instances of the application monitoring stack, one for SRE teams to monitor all the services, and the other one for end-users to use. In this case, some of the namespaces may need to be monitored by both stacks.

For example, the user-sso namespace needs to be monitored by the SRE team to make sure it is up and running. But also end-users need to see some of the application-specific metrics (like total login, failed login etc) through the user-facing metrics stack.

However, it doesn't seem possible to achieve this with the current implementation. The problem is that a single label (monitoring-key) is used by the whole stack to look up resources. If we make the name of the label configurable, it should be possible (we just need to configure each stack to look for resources using a different label).
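For illustration, with a configurable label name (the monitoringLabelKey field below is hypothetical, not part of the current CRD), the two stacks could be defined along these lines:

```yaml
# SRE-facing stack
apiVersion: applicationmonitoring.integreatly.org/v1alpha1
kind: ApplicationMonitoring
metadata:
  name: sre-monitoring
  namespace: sre-monitoring
spec:
  labelSelector: "middleware"                 # existing field: the label *value* to match
  monitoringLabelKey: "sre-monitoring-key"    # hypothetical field: configurable label *name*
---
# User-facing stack
apiVersion: applicationmonitoring.integreatly.org/v1alpha1
kind: ApplicationMonitoring
metadata:
  name: user-monitoring
  namespace: user-monitoring
spec:
  labelSelector: "middleware"
  monitoringLabelKey: "user-monitoring-key"   # hypothetical
```

A namespace like user-sso would then carry both labels and be picked up by both stacks.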

@david-martin @pb82 FYI

Can't login to Grafana

Hi,
I have deployed the application-monitoring-operator to a local OpenShift cluster (oc cluster up). All services get deployed correctly and the Grafana application is created. When I open the Grafana route https://grafana-route-application-monitoring.127.0.0.1.nip.io/, click on "Login with OpenShift", get redirected to the OpenShift login, enter my username and password, and authorize the application to read my information, I get redirected back to Grafana but a 500 Internal Error pops up.

I've tried with different users but nothing changes.

Let me know if I can add more information.

Failed to load Grafana dashboards

I created a GrafanaDashboard resource but it didn't get loaded into Grafana. Looking at the grafana operator pod logs, I can see the following error:

E0617 11:54:21.550538 1 reflector.go:205] github.com/integr8ly/grafana-operator/vendor/sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: Failed to list *v1alpha1.GrafanaDashboard: grafanadashboards.integreatly.org is forbidden: User "system:serviceaccount:application-monitoring:grafana-operator" cannot list grafanadashboards.integreatly.org at the cluster scope: RBAC: clusterrole.rbac.authorization.k8s.io "grafana-operator-cluster-role" not found
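The error says the grafana-operator-cluster-role ClusterRole is missing. A minimal sketch of the missing role and binding (names taken from the error message; the exact verb list is an assumption) would be:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: grafana-operator-cluster-role
rules:
  - apiGroups: ["integreatly.org"]
    resources: ["grafanadashboards"]
    verbs: ["get", "list", "watch"]   # assumed verbs; the error shows "list" at minimum
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: grafana-operator-cluster-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: grafana-operator-cluster-role
subjects:
  - kind: ServiceAccount
    name: grafana-operator
    namespace: application-monitoring
```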

Application monitoring operator fails to create Prometheus & Grafana instances

Strange problem encountered when running through the installation instructions against a local Minishift VM.

The application-monitoring-operator deployment is created ok, but the logs contain messages like 'error creating resource: no matches for kind Prometheus'.

{"level":"info","ts":1549972257.0360773,"logger":"controller_applicationmonitoring","caller":"applicationmonitoring/applicationmonitoring_controller.go:154","msg":"Phase: Create Prometheus CRs"}
{"level":"info","ts":1549972257.0986924,"logger":"controller_applicationmonitoring","caller":"applicationmonitoring/applicationmonitoring_controller.go:158","msg":"Error in CreatePrometheusCRs, resourceName=prometheus : err=error creating resource: no matches for kind \"Prometheus\" in version \"monitoring.coreos.com/v1\""}
{"level":"error","ts":1549972257.0987484,"logger":"kubebuilder.controller","caller":"controller/controller.go:209","msg":"Reconciler error","Controller":"applicationmonitoring-controller","Request":"application-monitoring/example-applicationmonitoring","error":"error creating resource: no matches for kind \"Prometheus\" in version \"monitoring.coreos.com/v1\"","errorVerbose":"no matches for kind \"Prometheus\" in version \"monitoring.coreos.com/v1\"\nerror creating resource\ngithub.com/integr8ly/application-monitoring-operator/pkg/controller/applicationmonitoring.

I verified that the relevant CRDs are installed. But the operator seems stuck on this error and does not proceed to create the prometheus-operator or grafana-operator deployments.

Scaling application-monitoring-operator to 0 and back to 1 seems to fix the issue.
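The restart workaround, assuming the default deployment name and namespace from the install script, amounts to:

```
oc scale deployment application-monitoring-operator --replicas=0 -n application-monitoring
oc scale deployment application-monitoring-operator --replicas=1 -n application-monitoring
```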

minishift: v1.28.0+48e89ed
OpenShift: v3.11.0+d0c29df-98

Unable to install - 404 errors when getting CRDs yaml files from github

Hi,

we're getting this error when trying to install the operator

error: unable to read URL "https://raw.githubusercontent.com/coreos/prometheus-operator/master/example/prometheus-operator-crd/podmonitor.crd.yaml", server reported 404 Not Found, status code=404
error: unable to read URL "https://raw.githubusercontent.com/coreos/prometheus-operator/master/example/prometheus-operator-crd/prometheus.crd.yaml", server reported 404 Not Found, status code=404
error: unable to read URL "https://raw.githubusercontent.com/coreos/prometheus-operator/master/example/prometheus-operator-crd/alertmanager.crd.yaml", server reported 404 Not Found, status code=404
error: unable to read URL "https://raw.githubusercontent.com/coreos/prometheus-operator/master/example/prometheus-operator-crd/prometheusrule.crd.yaml", server reported 404 Not Found, status code=404
error: unable to read URL "https://raw.githubusercontent.com/coreos/prometheus-operator/master/example/prometheus-operator-crd/servicemonitor.crd.yaml", server reported 404 Not Found, status code=404
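This looks like the master branch of coreos/prometheus-operator no longer containing those files (the repository moved and the CRD files were later renamed). A workaround, assuming the old file layout is still present at the pinned tag, is to fetch the CRDs from a release tag instead of master, e.g. the v0.34.0 tag the install script targets:

```
oc apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/v0.34.0/example/prometheus-operator-crd/prometheus.crd.yaml
oc apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/v0.34.0/example/prometheus-operator-crd/alertmanager.crd.yaml
# ...repeat for podmonitor, prometheusrule, and servicemonitor
```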

How to do offline installation

Hi!

We're running supported Openshift 3.11 in a disconnected installation.
What parameters should we change to deploy this in our environment? Do we have to rebuild the application-monitoring-operator (+ the grafana and prometheus operator?!) ?

Best regards

Prometheus not reaching Alertmanager - Error 403

Hello,

after a fresh installation on my OKD Cluster, this error occurs frequently in the prometheus log:

level=error ts=2019-12-04T13:13:48.741Z caller=notifier.go:528 component=notifier alertmanager=https://10.128.2.31:9091/api/v1/alerts count=1 msg="Error sending alert" err="bad response status 403 Forbidden"

Does someone know how to fix this? I think this is related to the oauth proxy in front of the Alertmanager.
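If the oauth proxy is the cause, Prometheus would need to authenticate against it when sending alerts. A sketch of the relevant part of the Prometheus CR (the service and port names below are assumptions; bearerTokenFile and tlsConfig are real fields of the Prometheus CRD's alertmanager endpoints):

```yaml
spec:
  alerting:
    alertmanagers:
      - name: alertmanager-service         # assumed name of the Alertmanager service
        namespace: application-monitoring
        port: proxy                        # assumed name of the oauth-proxy port
        scheme: https
        bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
        tlsConfig:
          insecureSkipVerify: true         # or point caFile at the serving CA instead
```

The service account whose token is sent must also be authorized by the proxy's SAR check for the 403 to go away.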

Best regards,

Frank

Installation v1.3.0 doesn't work

I am not able to successfully install version v1.3.0 on my OCP 3.11 or OCP 4.5 clusters. After running the make cluster/install command, the installation finishes successfully:

[mkralik@localhost application-monitoring-operator_master]$ make cluster/install
./scripts/install.sh  v0.34.0 v3.5.0
Now using project "application-monitoring" on server "https://master.fo-311-c.dos.fuse-qe.eng.rdu2.redhat.com:8443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app ruby~https://github.com/sclorg/ruby-ex.git

to build a new example application in Python. Or use kubectl to deploy a simple Kubernetes application:

    kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node

namespace/application-monitoring labeled
customresourcedefinition.apiextensions.k8s.io/applicationmonitorings.applicationmonitoring.integreatly.org configured
clusterrole.authorization.openshift.io/alertmanager-application-monitoring configured
clusterrolebinding.authorization.openshift.io/alertmanager-application-monitoring created
clusterrole.rbac.authorization.k8s.io/grafana-operator unchanged
clusterrolebinding.authorization.openshift.io/grafana-operator unchanged
clusterrole.rbac.authorization.k8s.io/prometheus-application-monitoring unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus-application-monitoring configured
clusterrole.rbac.authorization.k8s.io/prometheus-application-monitoring-operator unchanged
clusterrolebinding.rbac.authorization.k8s.io/prometheus-application-monitoring-operator configured
clusterrole.rbac.authorization.k8s.io/grafana-proxy unchanged
clusterrolebinding.authorization.openshift.io/grafana-proxy configured
serviceaccount/application-monitoring-operator created
role.rbac.authorization.k8s.io/application-monitoring-operator created
rolebinding.rbac.authorization.k8s.io/application-monitoring-operator created
customresourcedefinition.apiextensions.k8s.io/blackboxtargets.applicationmonitoring.integreatly.org configured
customresourcedefinition.apiextensions.k8s.io/grafanas.integreatly.org unchanged
customresourcedefinition.apiextensions.k8s.io/grafanadashboards.integreatly.org unchanged
customresourcedefinition.apiextensions.k8s.io/grafanadatasources.integreatly.org unchanged
Warning: oc apply should be used on resource created by either oc create --save-config or oc apply
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com configured
Warning: oc apply should be used on resource created by either oc create --save-config or oc apply
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com configured
Warning: oc apply should be used on resource created by either oc create --save-config or oc apply
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com configured
Warning: oc apply should be used on resource created by either oc create --save-config or oc apply
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com configured
Warning: oc apply should be used on resource created by either oc create --save-config or oc apply
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com configured
yes
deployment.apps/application-monitoring-operator created
applicationmonitoring.applicationmonitoring.integreatly.org/example-applicationmonitoring created

however, only two pods are created

application-monitoring-operator-7574f69755-rdlpd   1/1     Running   0          6m
prometheus-operator-86467cc6d8-zsrqh               1/1     Running   0          5m

Pods logs:
application-monitoring-operator

{"level":"info","ts":1596104703.666083,"logger":"cmd","msg":"Go Version: go1.13.14"}
{"level":"info","ts":1596104703.6662054,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":1596104703.6662192,"logger":"cmd","msg":"operator-sdk Version: v0.15.2"}
{"level":"info","ts":1596104703.6672153,"logger":"leader","msg":"Trying to become the leader."}
{"level":"info","ts":1596104704.6974154,"logger":"leader","msg":"No pre-existing lock was found."}
{"level":"info","ts":1596104704.7026434,"logger":"leader","msg":"Became the leader."}
{"level":"info","ts":1596104705.7206311,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"0.0.0.0:8383"}
{"level":"info","ts":1596104705.7220647,"logger":"cmd","msg":"Registering Components."}
{"level":"info","ts":1596104705.7246423,"logger":"cmd","msg":"amoGVK: [applicationmonitoring.integreatly.org/v1alpha1, Kind=ApplicationMonitoring applicationmonitoring.integreatly.org/v1alpha1, Kind=BlackboxTarget integreatly.org/v1alpha1, Kind=Grafana integreatly.org/v1alpha1, Kind=GrafanaDashboard integreatly.org/v1alpha1, Kind=GrafanaDataSource]"}
{"level":"info","ts":1596104711.864813,"logger":"metrics","msg":"Metrics Service object created","Service.Name":"application-monitoring-operator-metrics","Service.Namespace":"application-monitoring"}
{"level":"info","ts":1596104712.9669402,"logger":"cmd","msg":"Starting the Cmd."}
{"level":"info","ts":1596104712.9723122,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"blackboxtarget-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1596104712.972807,"logger":"controller-runtime.controller","msg":"Starting Controller","controller":"blackboxtarget-controller"}
{"level":"info","ts":1596104712.9730158,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
{"level":"info","ts":1596104712.9731624,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"applicationmonitoring-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1596104712.973311,"logger":"controller-runtime.controller","msg":"Starting Controller","controller":"applicationmonitoring-controller"}
{"level":"info","ts":1596104713.0730612,"logger":"controller-runtime.controller","msg":"Starting workers","controller":"blackboxtarget-controller","worker count":1}
{"level":"info","ts":1596104713.07352,"logger":"controller-runtime.controller","msg":"Starting workers","controller":"applicationmonitoring-controller","worker count":1}
{"level":"info","ts":1596104713.073891,"logger":"controller_applicationmonitoring","msg":"Reconciling ApplicationMonitoring","Request.Namespace":"application-monitoring","Request.Name":"example-applicationmonitoring"}
{"level":"info","ts":1596104713.073935,"logger":"controller_applicationmonitoring","msg":"Phase: Install PrometheusOperator"}
{"level":"info","ts":1596104713.180039,"logger":"controller_applicationmonitoring","msg":"can't find secret 'integreatly-additional-scrape-configs'"}

prometheus-operator

ts=2020-07-30T10:25:19.667893523Z caller=main.go:199 msg="Starting Prometheus Operator version '0.34.0'."
ts=2020-07-30T10:25:19.731047462Z caller=main.go:96 msg="Staring insecure server on :8080"

The last version I used was 1.1.6, which I was able to install without any issues.
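The last application-monitoring-operator log line suggests reconciliation may be waiting for the integreatly-additional-scrape-configs secret that the example ApplicationMonitoring CR references. One thing to try (an assumption, and the key name below is hypothetical) is creating an empty secret with that name so the operator can proceed:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: integreatly-additional-scrape-configs
  namespace: application-monitoring
stringData:
  additional-scrape-config.yaml: ""   # hypothetical key name; an empty scrape config
```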

Deleting the application-monitoring project is stuck in terminating state

Sometimes, deleting the application-monitoring project (oc delete project application-monitoring) gets stuck in the terminating state. For example, the project has now been in this state for three days. It happens to me on 3.11 (minishift) and 4.1 (AWS).

As for reproducing it: when I delete the project right after creating it, the delete works. However, when I delete the project after some time, e.g. after an hour during which I used it for monitoring my application (a Syndesis instance), the delete operation gets stuck.
After that, I cannot create a new application-monitoring project, so I have to reset the whole OCP.

When I look at what is still left in the project:
oc api-resources | tail -n +1 | grep true | awk '{print $1}' | xargs -L 1 -I % bash -c "echo %; oc get %"
I see these two remaining resources:

...
applicationmonitorings
NAME                            AGE
example-applicationmonitoring   5d20h
...
grafanadatasources
NAME         AGE
prometheus   5d20h
...

When I try to delete these resources manually,
oc delete applicationmonitorings example-applicationmonitoring
oc delete grafanadatasources prometheus
the output from oc is:
applicationmonitoring.applicationmonitoring.integreatly.org "example-applicationmonitoring" deleted
grafanadatasource.integreatly.org "prometheus" deleted
however, it hangs too and I have to terminate (Ctrl+C) those commands.

For the investigation, I can provide a credential for our OCP 3.11 instance where the project is stuck just now.
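A namespace stuck in Terminating usually means a custom resource still carries a finalizer that its (already removed) operator can no longer process. A common last-resort workaround is to clear the finalizers on the remaining resources, e.g.:

```
oc patch applicationmonitoring example-applicationmonitoring -n application-monitoring \
  --type=merge -p '{"metadata":{"finalizers":[]}}'
oc patch grafanadatasource prometheus -n application-monitoring \
  --type=merge -p '{"metadata":{"finalizers":[]}}'
```

Note this skips whatever cleanup the finalizer was supposed to perform, so cluster-scoped leftovers may need manual deletion.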

grafana operator cannot create events in other namespaces

The grafana operator tries to create events and link them to grafana objects, but its clusterrole does not have the permissions required to create events.

The error logged is:

E0310 17:13:32.235287       1 event.go:240] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"prometheus-exporter-redis.15fb00a73cfa01d8", GenerateName:"", Namespace:"test-app", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"GrafanaDashboard", Namespace:"test-app", Name:"prometheus-exporter-redis", UID:"77eabff8-62f2-11ea-aa7a-12d443f7f383", APIVersion:"integreatly.org/v1alpha1", ResourceVersion:"5335410", FieldPath:""}, Reason:"Success", Message:"dashboard test-app/prometheus-exporter-redis successfully submitted", Source:v1.EventSource{Component:"controller_grafanadashboard", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf92108f0de9a9d8, ext:538230446197802, loc:(*time.Location)(0x207ad60)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf92108f0de9a9d8, ext:538230446197802, loc:(*time.Location)(0x207ad60)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:serviceaccount:application-monitoring:grafana-operator" cannot create resource "events" in API group "" in the namespace "test-app"' (will not retry!)
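A sketch of the missing permission, written as an extra rule on the grafana-operator ClusterRole (the rule shape is standard RBAC; attaching it to this particular ClusterRole is my assumption about the fix):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: grafana-operator
rules:
  # ...existing rules...
  - apiGroups: [""]            # core API group, where Events live
    resources: ["events"]
    verbs: ["create", "patch"]
```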

500 Internal Error in Grafana and Prometheus UI with custom CA

Hi,
I have deployed the application-monitoring-operator to an OpenShift 4.3 cluster built on-prem. All components seem to work fine, without errors. However, I get a "500 Internal Error" right after I try to log in to the Grafana UI or the Prometheus UI. We use internally signed certificates and a custom Certification Authority in our OpenShift environment.

Steps to reproduce:

  1. I install application-monitoring-operator
    git clone https://github.com/integr8ly/application-monitoring-operator.git
    make cluster/install
    and wait for completion. Components install without errors.
  2. I go to routes and click a route to Grafana.
  3. I click "Login with OpenShift" and get redirected to the OpenShift login, then I insert my username and password, I authorize the application to read my information, I get "500 Internal Error" message on the page.
  4. Try steps 2 and 3 for Prometheus UI and also get "500 Internal Error".

Environment info:
oc get pods
NAME READY STATUS RESTARTS AGE
alertmanager-application-monitoring-0 3/3 Running 0 59m
application-monitoring-operator-5bc879f697-mcglx 1/1 Running 0 60m
grafana-deployment-58746b4f54-hr4xs 2/2 Running 0 9m32s
grafana-operator-66497b6fc6-q9lhc 1/1 Running 0 59m
prometheus-application-monitoring-0 5/5 Running 1 59m
prometheus-operator-76b4dfbb68-r7k95 1/1 Running 0 59m

Logs for grafana-proxy container:

2020/06/01 10:18:45 provider.go:117: Defaulting client-id to system:serviceaccount:application-monitoring:grafana-serviceaccount
2020/06/01 10:18:45 provider.go:122: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token
2020/06/01 10:18:45 provider.go:310: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.
2020/06/01 10:18:45 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:3000/"
2020/06/01 10:18:45 oauthproxy.go:221: compiled skip-auth-regex => "^/metrics"
2020/06/01 10:18:45 oauthproxy.go:227: OAuthProxy configured for Client ID: system:serviceaccount:application-monitoring:grafana-serviceaccount
2020/06/01 10:18:45 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain: refresh:disabled
2020/06/01 10:18:45 http.go:106: HTTPS: listening on [::]:9091
2020/06/01 10:21:25 provider.go:392: authorizer reason:
2020/06/01 10:21:28 provider.go:573: Performing OAuth discovery against https://172.30.0.1/.well-known/oauth-authorization-server
2020/06/01 10:21:28 provider.go:613: 200 GET https://172.30.0.1/.well-known/oauth-authorization-server {
"issuer": "https://oauth-openshift.apps.os4-test.lab.local",
"authorization_endpoint": "https://oauth-openshift.apps.os4-test.lab.local/oauth/authorize",
"token_endpoint": "https://oauth-openshift.apps.os4-test.lab.local/oauth/token",
"scopes_supported": [
"user:check-access",
"user:full",
"user:info",
"user:list-projects",
"user:list-scoped-projects"
],
"response_types_supported": [
"code",
"token"
],
"grant_types_supported": [
"authorization_code",
"implicit"
],
"code_challenge_methods_supported": [
"plain",
"S256"
]
}
2020/06/01 10:21:38 provider.go:573: Performing OAuth discovery against https://172.30.0.1/.well-known/oauth-authorization-server
2020/06/01 10:21:38 provider.go:613: 200 GET https://172.30.0.1/.well-known/oauth-authorization-server {
"issuer": "https://oauth-openshift.apps.os4-test.lab.local",
"authorization_endpoint": "https://oauth-openshift.apps.os4-test.lab.local/oauth/authorize",
"token_endpoint": "https://oauth-openshift.apps.os4-test.lab.local/oauth/token",
"scopes_supported": [
"user:check-access",
"user:full",
"user:info",
"user:list-projects",
"user:list-scoped-projects"
],
"response_types_supported": [
"code",
"token"
],
"grant_types_supported": [
"authorization_code",
"implicit"
],
"code_challenge_methods_supported": [
"plain",
"S256"
]
}
2020/06/01 10:21:38 oauthproxy.go:645: error redeeming code (client:10.254.3.1:47476): Post https://oauth-openshift.apps.os4-test.lab.local/oauth/token: x509: certificate signed by unknown authority
2020/06/01 10:21:38 oauthproxy.go:438: ErrorPage 500 Internal Error Internal Error

So I understand that the issue is with internally signed certificates.

I added two configmaps with our root certificates and labels:
config.openshift.io/inject-trusted-cabundle: 'true'

I tried to add two sections to grafana-deployment:

    volumeMounts:
      - name: grafana-trusted-ca-bundle
        readOnly: true
        mountPath: /etc/pki/ca-trust/extracted/pem/
    ...
    volumes:
      - name: grafana-trusted-ca-bundle
        configMap:
          name: grafana-trusted-ca-bundle
          items:
            - key: ca-bundle.crt
              path: tls-ca-bundle.pem
          defaultMode: 420
          optional: true

I also tried to edit the Grafana and Prometheus instances of the CRDs. However, in all cases the configuration is overwritten by the operator, which is the expected behaviour, I believe.

Please advise: what is the correct way to add a trusted CA bundle with this operator?
Thank you!
Sergiy

Add a CR field to choose between upstream & downstream images

Add a field to the ApplicationMonitoring CRD to allow choosing between upstream & downstream images.
The default for this field should be to use upstream images.

This will allow the operator to be used out of the box without having to set up any pull secrets or link them to service accounts.

Changes:

  • Add a new useProtectedRHImages boolean field to the ApplicationMonitoring CRD, defaults to false
  • All images should be made configurable within the operator, and toggled based on the value of this new field
  • If a protected image is not being used currently, a public upstream image is fine.
  • Versions of things between RH and upstream images should match as closely as possible

These changes should lower the priority of #52, #58, #59
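For illustration, the proposed field on the example CR might look like this (the field comes from this proposal and is not yet in the CRD):

```yaml
apiVersion: applicationmonitoring.integreatly.org/v1alpha1
kind: ApplicationMonitoring
metadata:
  name: example-applicationmonitoring
spec:
  labelSelector: "middleware"
  useProtectedRHImages: false   # proposed field; false (the default) selects upstream images
```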

cannot get applicationmonitorings.applicationmonitoring.integreatly.org in the namespace

After creating the example-prometheus-nodejs, the grafana dashboard is not showing.

In the grafana-operator I get the following error:

{"level":"error","ts":1582801368.3774614,"logger":"cmd","msg":"error starting metrics service","error":"failed to initialize service object for metrics: applicationmonitorings.applicationmonitoring.integreatly.org \"example-applicationmonitoring\" is forbidden: User \"system:serviceaccount:application-monitoring:grafana-operator\" cannot get applicationmonitorings.applicationmonitoring.integreatly.org in the namespace \"application-monitoring\": no RBAC policy matched","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/travis/gopath/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128\nmain.main\n\tgrafana-operator/cmd/manager/main.go:223\nruntime.main\n\t/home/travis/.gimme/versions/go1.13.5.linux.amd64/src/runtime/proc.go:203"}

Best regards

Prometheus operator does not deny enough namespaces

The Prometheus operator deployed by the cluster monitoring operator in the openshift-monitoring project monitors not only that project, but also these:

  • openshift-apiserver-operator
  • openshift-authentication
  • openshift-authentication-operator
  • openshift-controller-manager
  • openshift-controller-manager-operator
  • openshift-dns
  • openshift-image-registry
  • openshift-ingress
  • openshift-kube-apiserver-operator
  • openshift-kube-controller-manager-operator
  • openshift-monitoring
  • openshift-operator-lifecycle-manager
  • openshift-sdn
  • openshift-service-catalog-controller-manager-operator

All of these projects must be listed in the denied namespaces.
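prometheus-operator has a --deny-namespaces flag for this. A sketch of passing the list above to its deployment (the flag exists upstream; wiring it into this operator's generated deployment is what this issue asks for):

```yaml
containers:
  - name: prometheus-operator
    args:
      # extend with the full list of projects above
      - --deny-namespaces=openshift-apiserver-operator,openshift-authentication,openshift-monitoring
```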

Installation of application-monitoring-operator using full declarative language

Context

At 3scale engineering team, we want to use application-monitoring-operator, so both RHMI/3scale will use the same monitoring stack, which will help both teams to follow the same direction, taking into account that 3scale is working on adding metrics, prometheusRules, grafanaDashboards for the next release.

At 3scale SRE/Ops team we are using openshift hive to provision our on demand dev OCP clusters (so engineers can easily do testing with metrics, dashboards...), and we are using hive SyncSet object in order to apply same configurations to different OCP clusters (we define all resources once on a single yaml, and then we can apply the same config to any dev cluster, by just adding new clusters name to the list in the SyncSet object).

We have seen that the currently documented operator installation involves executing a Makefile target (with the grafana/prometheus versions), which runs a bash script that applies (oc apply) various files, directories, and URLs.

We need an easy way to install the monitoring stack using declarative language (no Makefile target executions), so it will be easy to maintain and keep track of every change for every release on GitHub (GitOps philosophy).

Current workaround

As a workaround, what we are doing now is to parse/extract all resources deployed by scripts/install.sh and add them to a single SyncSet object (which has a specific spec format). But before creating the SyncSet object, because OpenShift Hive uses k8s-native APIs, it does not accept some OpenShift apiVersions like authorization.openshift.io/v1, which need to be replaced by the k8s-native alternative rbac.authorization.k8s.io/v1 (see issues openshift/hive#864 and https://issues.redhat.com/browse/CO-532), so we need to fix some resources in order to be fully compatible with Hive:

  • We update the apiVersion on some ClusterRole/ClusterRoleBinding resources from OpenShift's authorization.openshift.io/v1 to k8s rbac.authorization.k8s.io/v1 (plus some additions like adding roleRef.kind and roleRef.apiGroup); you are actually already using the k8s-native apiVersion on other ClusterRole/ClusterRoleBinding objects (but not on all of them), example:
$ git diff cluster-roles/alertmanager-clusterrole_binding.yaml
diff --git a/deploy/cluster-roles/alertmanager-clusterrole_binding.yaml b/deploy/cluster-roles/alertmanager-clusterrole_binding.yaml
index 502df67..8977427 100644
--- a/deploy/cluster-roles/alertmanager-clusterrole_binding.yaml
+++ b/deploy/cluster-roles/alertmanager-clusterrole_binding.yaml
@@ -1,9 +1,10 @@
-apiVersion: authorization.openshift.io/v1
-groupNames: null
+apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: alertmanager-application-monitoring
 roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
   name: alertmanager-application-monitoring
 subjects:
 - kind: ServiceAccount
  • We add the namespace name on each non-cluster-scoped object (like the operator deployment, service_account, role, role_binding, and the applicationmonitoring example); you are actually already hardcoding the namespace on some other objects like ClusterRoleBinding, example:
$ git diff examples/ApplicationMonitoring.yaml 
diff --git a/deploy/examples/ApplicationMonitoring.yaml b/deploy/examples/ApplicationMonitoring.yaml
index 993b044..0951d45 100644
--- a/deploy/examples/ApplicationMonitoring.yaml
+++ b/deploy/examples/ApplicationMonitoring.yaml
@@ -2,6 +2,7 @@ apiVersion: applicationmonitoring.integreatly.org/v1alpha1
 kind: ApplicationMonitoring
 metadata:
   name: example-applicationmonitoring
+  namespace: application-monitoring
 spec:
   labelSelector: "middleware"
   additionalScrapeConfigSecretName: "integreatly-additional-scrape-configs"
$ git diff cluster-roles/proxy-clusterrole_binding.yaml
diff --git a/deploy/cluster-roles/proxy-clusterrole_binding.yaml b/deploy/cluster-roles/proxy-clusterrole_binding.yaml
index 26497f6..8547cfc 100644
--- a/deploy/cluster-roles/proxy-clusterrole_binding.yaml
+++ b/deploy/cluster-roles/proxy-clusterrole_binding.yaml
@@ -7,6 +7,6 @@ roleRef:
 subjects:
   - kind: ServiceAccount
     name: grafana-serviceaccount
-    namespace: monitoring2
+    namespace: application-monitoring
 userNames:
   - system:serviceaccount:application-monitoring:grafana-serviceaccount
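For reference, the SyncSet we build from the parsed resources follows the standard Hive shape (the cluster and namespace names below are placeholders):

```yaml
apiVersion: hive.openshift.io/v1
kind: SyncSet
metadata:
  name: application-monitoring
  namespace: my-hive-namespace        # placeholder: namespace holding the ClusterDeployments
spec:
  clusterDeploymentRefs:
    - name: my-dev-cluster            # placeholder: one entry per target cluster
  resourceApplyMode: Sync
  resources:
    - apiVersion: v1
      kind: Namespace
      metadata:
        name: application-monitoring
    # ...all other parsed resources inlined here...
```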

We have checked in deploy/cluster-roles/README.md that you use the Integr8ly installer to install application-monitoring-operator (not the Makefile target), and that you don't use the YAMLs in deploy/cluster-roles/.

In order to have full compatibility with k8s (and hence with OpenShift Hive), and to avoid having to transform almost every object, we wonder if we can open a PR to fix these small issues while staying fully compatible with OpenShift:

  • Update some ClusterRole/ClusterRoleBinding apiVersion (and spec) to make it compatible with native k8s (so compatible with openshift hive)
  • Add namespace to non-cluster-scope objects
  • Fix namespace typo on ClusterRoleBinding grafana-proxy

Possible improvement

To make the installation of application-monitoring-operator in a fully declarative way easier, without having to manage all those 25 YAMLs: we have seen that you are already using olm-catalog, so we wonder if you plan to:

  • Either publish the operator on a current OperatorSource like certified-operators, redhat-operators or community-operators (so operator can be used by anybody)
  • Or maybe just provide in the repository an alternative installation method with a working OperatorSource resource that can easily be deployed on an OpenShift cluster, so that one only needs to create a Subscription object to deploy the operator in a given namespace, channel, version...

We have tried to deploy an OperatorSource using data from the Makefile (like registryNamespace: integreatly):

apiVersion: operators.coreos.com/v1
kind: OperatorSource
metadata:
  name: integreatly-operators
  namespace: openshift-marketplace
spec:
  displayName: Integreatly operators
  endpoint: https://quay.io/cnr
  publisher: integreatly
  registryNamespace: integreatly
  type: appregistry

However, only the integreatly operator is available there, so we guess the application-monitoring-operator package might be private.

[BLOCKER] Error pulling prometheus-blackbox-exporter image

prometheus-application-monitoring-0 pod fails to run due to the following error:
Failed to pull image "registry.connect.redhat.com/bitnami/prometheus-blackbox-exporter:0.14.0-rhel-7-r33-2": rpc error: code = Unknown desc = Get https://registry.connect.redhat.com/v2/bitnami/prometheus-blackbox-exporter/manifests/0.14.0-rhel-7-r33-2: unauthorized: Invalid username or password

Support for OCP4

After installing with make cluster/install, I added my Red Hat token and linked this secret to the grafana-operator / alertmanager / prometheus-application-monitoring service accounts according to the README:

kubectl create -f ../my-secret.yaml --namespace=application-monitoring
oc secrets link grafana-operator my-secret --for=pull
oc secrets link alertmanager my-secret --for=pull
oc secrets link prometheus-application-monitoring my-secret --for=pull
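
One way to check that the links took effect is to confirm each service account now lists the secret under imagePullSecrets (a sketch; the secret name my-secret is taken from the commands above):

```shell
# Print each service account and the pull secrets attached to it
for sa in grafana-operator alertmanager prometheus-application-monitoring; do
  oc get sa "$sa" -n application-monitoring \
    -o jsonpath='{.metadata.name}{": "}{.imagePullSecrets[*].name}{"\n"}'
done
```

Each line should end with my-secret; if it doesn't, the link command was run against the wrong service account or namespace.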

After that I redeployed all failing pods. However, the alertmanager-application-monitoring and prometheus-application-monitoring pods are still in ImagePullBackOff status:

alertmanager-application-monitoring-0              0/3       ImagePullBackOff   0          6m4s
application-monitoring-operator-595d68dcf6-xpq2l   1/1       Running            0          9m47s
grafana-operator-7c5b565454-d4jrh                  1/1       Running            0          8m56s
prometheus-application-monitoring-0                1/5       ImagePullBackOff   0          5m54s
prometheus-operator-7f9c5c8b88-vmxft               1/1       Running            0          9m18s

oc describe pod alertmanager-application-monitoring-0

Events:
  Type     Reason     Age              From                                     Message
  ----     ------     ----             ----                                     -------
  Normal   Scheduled  2m               default-scheduler                        Successfully assigned application-monitoring/alertmanager-application-monitoring-0 to ip-172-31-149-100.ec2.internal
  Normal   BackOff    2m               kubelet, ip-172-31-149-100.ec2.internal  Back-off pulling image "registry.redhat.io/openshift3/oauth-proxy:v3.11.43"
  Warning  Failed     2m               kubelet, ip-172-31-149-100.ec2.internal  Error: ImagePullBackOff
  Normal   BackOff    2m               kubelet, ip-172-31-149-100.ec2.internal  Back-off pulling image "registry.redhat.io/openshift3/ose-configmap-reloader:v3.11"
  Normal   BackOff    2m               kubelet, ip-172-31-149-100.ec2.internal  Back-off pulling image "registry.redhat.io/openshift3/prometheus-alertmanager:v3.11"
  Warning  Failed     2m               kubelet, ip-172-31-149-100.ec2.internal  Error: ImagePullBackOff
  Warning  Failed     2m (x2 over 2m)  kubelet, ip-172-31-149-100.ec2.internal  Error: ErrImagePull
  Normal   Pulling    2m (x2 over 2m)  kubelet, ip-172-31-149-100.ec2.internal  Pulling image "registry.redhat.io/openshift3/oauth-proxy:v3.11.43"
  Warning  Failed     2m (x2 over 2m)  kubelet, ip-172-31-149-100.ec2.internal  Failed to pull image "registry.redhat.io/openshift3/ose-configmap-reloader:v3.11": rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password
  Normal   Pulling    2m (x2 over 2m)  kubelet, ip-172-31-149-100.ec2.internal  Pulling image "registry.redhat.io/openshift3/ose-configmap-reloader:v3.11"
  Warning  Failed     2m (x2 over 2m)  kubelet, ip-172-31-149-100.ec2.internal  Error: ErrImagePull
  Warning  Failed     2m (x2 over 2m)  kubelet, ip-172-31-149-100.ec2.internal  Failed to pull image "registry.redhat.io/openshift3/prometheus-alertmanager:v3.11": rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password
  Normal   Pulling    2m (x2 over 2m)  kubelet, ip-172-31-149-100.ec2.internal  Pulling image "registry.redhat.io/openshift3/prometheus-alertmanager:v3.11"
  Warning  Failed     2m (x2 over 2m)  kubelet, ip-172-31-149-100.ec2.internal  Error: ErrImagePull
  Warning  Failed     2m (x2 over 2m)  kubelet, ip-172-31-149-100.ec2.internal  Failed to pull image "registry.redhat.io/openshift3/oauth-proxy:v3.11.43": rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password
  Warning  Failed     1m (x2 over 2m)  kubelet, ip-172-31-149-100.ec2.internal  Error: ImagePullBackOff

oc describe pod prometheus-application-monitoring-0

Events:
  Type     Reason     Age              From                                     Message
  ----     ------     ----             ----                                     -------
  Normal   Scheduled  3m               default-scheduler                        Successfully assigned application-monitoring/prometheus-application-monitoring-0 to ip-172-31-129-245.ec2.internal
  Normal   Pulling    3m               kubelet, ip-172-31-129-245.ec2.internal  Pulling image "registry.redhat.io/openshift3/prometheus:v3.11"
  Warning  Failed     3m               kubelet, ip-172-31-129-245.ec2.internal  Error: ErrImagePull
  Warning  Failed     3m               kubelet, ip-172-31-129-245.ec2.internal  Error: ErrImagePull
  Normal   Pulling    3m               kubelet, ip-172-31-129-245.ec2.internal  Pulling image "registry.redhat.io/openshift3/ose-prometheus-config-reloader:v3.11"
  Warning  Failed     3m               kubelet, ip-172-31-129-245.ec2.internal  Failed to pull image "registry.redhat.io/openshift3/ose-prometheus-config-reloader:v3.11": rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password
  Warning  Failed     3m               kubelet, ip-172-31-129-245.ec2.internal  Failed to pull image "registry.redhat.io/openshift3/prometheus:v3.11": rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password
  Normal   Pulled     3m               kubelet, ip-172-31-129-245.ec2.internal  Container image "registry.connect.redhat.com/bitnami/prometheus-blackbox-exporter:0.14.0-rhel-7-r33-2" already present on machine
  Normal   Created    3m               kubelet, ip-172-31-129-245.ec2.internal  Created container blackbox-exporter
  Normal   Started    3m               kubelet, ip-172-31-129-245.ec2.internal  Started container blackbox-exporter
  Normal   Pulling    3m               kubelet, ip-172-31-129-245.ec2.internal  Pulling image "registry.redhat.io/openshift3/oauth-proxy:v3.11.43"
  Warning  Failed     3m               kubelet, ip-172-31-129-245.ec2.internal  Failed to pull image "registry.redhat.io/openshift3/oauth-proxy:v3.11.43": rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password
  Normal   Pulling    3m               kubelet, ip-172-31-129-245.ec2.internal  Pulling image "registry.redhat.io/openshift3/ose-configmap-reloader:v3.11"
  Warning  Failed     3m               kubelet, ip-172-31-129-245.ec2.internal  Error: ErrImagePull
  Warning  Failed     3m               kubelet, ip-172-31-129-245.ec2.internal  Failed to pull image "registry.redhat.io/openshift3/ose-configmap-reloader:v3.11": rpc error: code = Unknown desc = unable to retrieve auth token: invalid username/password
  Warning  Failed     3m               kubelet, ip-172-31-129-245.ec2.internal  Error: ErrImagePull
  Normal   BackOff    3m               kubelet, ip-172-31-129-245.ec2.internal  Back-off pulling image "registry.redhat.io/openshift3/ose-configmap-reloader:v3.11"
  Warning  Failed     3m               kubelet, ip-172-31-129-245.ec2.internal  Error: ImagePullBackOff
  Normal   BackOff    3m               kubelet, ip-172-31-129-245.ec2.internal  Back-off pulling image "registry.redhat.io/openshift3/ose-prometheus-config-reloader:v3.11"
  Warning  Failed     3m               kubelet, ip-172-31-129-245.ec2.internal  Error: ImagePullBackOff
  Normal   BackOff    3m               kubelet, ip-172-31-129-245.ec2.internal  Back-off pulling image "registry.redhat.io/openshift3/oauth-proxy:v3.11.43"
  Warning  Failed     3m               kubelet, ip-172-31-129-245.ec2.internal  Error: ImagePullBackOff
  Normal   BackOff    3m (x2 over 3m)  kubelet, ip-172-31-129-245.ec2.internal  Back-off pulling image "registry.redhat.io/openshift3/prometheus:v3.11"
  Warning  Failed     3m (x2 over 3m)  kubelet, ip-172-31-129-245.ec2.internal  Error: ImagePullBackOff
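
The "unable to retrieve auth token: invalid username/password" messages suggest the node cannot authenticate to registry.redhat.io at all, rather than a missing secret link. A quick way to sanity-check the credentials outside the cluster (a sketch; substitute your own Red Hat account in the environment variables):

```shell
# Verify the Red Hat registry credentials work before wiring them into a pull secret
skopeo inspect --creds "$RH_USER:$RH_PASS" \
  docker://registry.redhat.io/openshift3/oauth-proxy:v3.11.43 >/dev/null \
  && echo "credentials OK" || echo "auth failed"
```

If this fails too, the token or password in the secret is the problem; registry.redhat.io requires a Red Hat customer account or registry service account, which is separate from registry.connect.redhat.com credentials.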

Latest '1.0.2' tag won't install grafana operator on v3.11 Openshift

Hi all,

we're trying to install the operator on a production v3.11 cluster, and the Grafana operator won't install.
make cluster/install goes fine, and all relevant CRDs are present:

➜  application-monitoring-operator git:(master) oc project       
Using project "application-monitoring" on server "https://openshift-cluster.[DOMAIN]:8443"

➜  application-monitoring-operator git:(master)  oc get crds       
NAME                                                           CREATED AT
alertmanagers.monitoring.coreos.com                            2019-08-30T14:07:24Z
applicationmonitorings.applicationmonitoring.integreatly.org   2020-01-14T17:12:44Z
blackboxtargets.applicationmonitoring.integreatly.org          2020-01-14T17:12:46Z
bundlebindings.automationbroker.io                             2019-08-30T14:10:38Z
bundleinstances.automationbroker.io                            2019-08-30T14:10:38Z
bundles.automationbroker.io                                    2019-08-30T14:10:39Z
grafanadashboards.integreatly.org                              2020-01-14T17:12:47Z
grafanadatasources.integreatly.org                             2020-01-14T17:12:48Z
grafanas.integreatly.org                                       2020-01-14T17:12:46Z
podmonitors.monitoring.coreos.com                              2020-01-13T14:19:10Z
prometheuses.monitoring.coreos.com                             2019-08-30T14:07:24Z
prometheusrules.monitoring.coreos.com                          2019-08-30T14:07:24Z
servicemonitors.monitoring.coreos.com                          2019-08-30T14:07:24Z

➜  application-monitoring-operator git:(master) oc version
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://openshift-cluster.[DOMAIN]:8443
openshift v3.11.135
kubernetes v1.11.0+d4cacc0
➜  application-monitoring-operator git:(master) 

Other resources are correctly deployed

➜  application-monitoring-operator git:(master) oc get pods
NAME                                               READY     STATUS    RESTARTS   AGE
alertmanager-application-monitoring-0              3/3       Running   0          4m
application-monitoring-operator-749d9b6b54-mhj9s   1/1       Running   0          5m
prometheus-application-monitoring-0                5/5       Running   1          4m
prometheus-operator-86467cc6d8-l8cx4               1/1       Running   0          4m

We can see this error in the application-monitoring-operator logs:

{"level":"info","ts":1579022331.038997,"logger":"controller_applicationmonitoring","msg":"Phase: Install GrafanaOperator"}
{"level":"info","ts":1579022331.0712292,"logger":"controller_applicationmonitoring","msg":"Error in InstallGrafanaOperator, resourceName=grafana-operator-role : err=error creating resource: roles.rbac.authorization.k8s.io \"grafana-operator-role\" is forbidden: attempt to grant extra privileges: [{[*] [integreatly.org] [grafanadashboards/status] [] []} {[*] [integreatly.org] [grafanadatasources/status] [] []} {[*] [integreatly.org] [grafanas/status] [] []}] user=&{system:serviceaccount:application-monitoring:application-monitoring-operator 14b6a2a5-36f1-11ea-a98e-005056920bc0 [system:serviceaccounts system:serviceaccounts:application-monitoring system:authenticated] map[]} ownerrules=[{[get] [ user.openshift.io] [users] [~] []} {[list] [ project.openshift.io] [projectrequests] [] []} {[get list] [ authorization.openshift.io] [clusterroles] [] []} {[get list watch] [rbac.authorization.k8s.io] [clusterroles] [] []} {[get list] [storage.k8s.io] [storageclasses] [] []} {[list watch] [ project.openshift.io] [projects] [] []} {[create] [ authorization.openshift.io] [selfsubjectrulesreviews] [] []} {[create] [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] [] [] [] [/healthz /healthz/*]} {[get] [] [] [] [/version /version/* /api /api/* /apis /apis/* /oapi /oapi/* /openapi/v2 /swaggerapi /swaggerapi/* /swagger.json /swagger-2.0.0.pb-v1 /osapi /osapi/ /.well-known /.well-known/* /]} {[create] [ authorization.openshift.io] [selfsubjectrulesreviews] [] []} {[create] [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[list watch get] [servicecatalog.k8s.io] [clusterserviceclasses clusterserviceplans] [] []} {[get] [] [] [] [/healthz/ready]} {[create] [authorization.k8s.io] [selfsubjectaccessreviews selfsubjectrulesreviews] [] []} {[create] [ build.openshift.io] [builds/docker builds/optimizeddocker] [] []} {[create] [ build.openshift.io] [builds/jenkinspipeline] [] []} {[create] [ build.openshift.io] [builds/source] [] []} {[get] [] [] [] [/api /api/* 
/apis /apis/* /healthz /openapi /openapi/* /swagger-2.0.0.pb-v1 /swagger.json /swaggerapi /swaggerapi/* /version /version/]} {[delete] [ oauth.openshift.io] [oauthaccesstokens oauthauthorizetokens] [] []} {[get] [] [] [] [/version /version/* /api /api/* /apis /apis/* /oapi /oapi/* /openapi/v2 /swaggerapi /swaggerapi/* /swagger.json /swagger-2.0.0.pb-v1 /osapi /osapi/ /.well-known /.well-known/* /]} {[impersonate] [authentication.k8s.io] [userextras/scopes.authorization.openshift.io] [] []} {[create get] [ build.openshift.io] [buildconfigs/webhooks] [] []} {[*] [] [pods services services/finalizers endpoints persistentvolumeclaims events configmaps secrets serviceaccounts] [] []} {[*] [apps] [deployments deployments/finalizers daemonsets replicasets statefulsets] [] []} {[*] [monitoring.coreos.com] [alertmanagers prometheuses prometheusrules servicemonitors] [] []} {[*] [applicationmonitoring.integreatly.org] [applicationmonitorings applicationmonitorings/finalizers blackboxtargets blackboxtargets/finalizers] [] []} {[*] [integreatly.org] [grafanadatasources grafanadashboards grafanas grafanas/finalizers grafanadatasources/finalizers grafanadashboards/finalizers] [] []} {[*] [route.openshift.io] [routes routes/custom-host] [] []} {[*] [rbac.authorization.k8s.io] [rolebindings roles] [] []} {[*] [extensions] [ingresses] [] []} {[create] [authentication.k8s.io] [tokenreviews] [] []} {[create] [authorization.k8s.io] [subjectaccessreviews] [] []} {[get] [ image.openshift.io] [imagestreams/layers] [] []}] ruleResolutionErrors=[]"}
{"level":"error","ts":1579022331.0713165,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"applicationmonitoring-controller","request":"application-monitoring/example-applicationmonitoring","error":"error creating resource: roles.rbac.authorization.k8s.io \"grafana-operator-role\" is forbidden: attempt to grant extra privileges: [{[*] [integreatly.org] [grafanadashboards/status] [] []} {[*] [integreatly.org] [grafanadatasources/status] [] []} {[*] [integreatly.org] [grafanas/status] [] []}] user=&{system:serviceaccount:application-monitoring:application-monitoring-operator 14b6a2a5-36f1-11ea-a98e-005056920bc0 [system:serviceaccounts system:serviceaccounts:application-monitoring system:authenticated] map[]} ownerrules=[{[get] [ user.openshift.io] [users] [~] []} {[list] [ project.openshift.io] [projectrequests] [] []} {[get list] [ authorization.openshift.io] [clusterroles] [] []} {[get list watch] [rbac.authorization.k8s.io] [clusterroles] [] []} {[get list] [storage.k8s.io] [storageclasses] [] []} {[list watch] [ project.openshift.io] [projects] [] []} {[create] [ authorization.openshift.io] [selfsubjectrulesreviews] [] []} {[create] [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] [] [] [] [/healthz /healthz/*]} {[get] [] [] [] [/version /version/* /api /api/* /apis /apis/* /oapi /oapi/* /openapi/v2 /swaggerapi /swaggerapi/* /swagger.json /swagger-2.0.0.pb-v1 /osapi /osapi/ /.well-known /.well-known/* /]} {[create] [ authorization.openshift.io] [selfsubjectrulesreviews] [] []} {[create] [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[list watch get] [servicecatalog.k8s.io] [clusterserviceclasses clusterserviceplans] [] []} {[get] [] [] [] [/healthz/ready]} {[create] [authorization.k8s.io] [selfsubjectaccessreviews selfsubjectrulesreviews] [] []} {[create] [ build.openshift.io] [builds/docker builds/optimizeddocker] [] []} {[create] [ build.openshift.io] [builds/jenkinspipeline] [] []} {[create] [ 
build.openshift.io] [builds/source] [] []} {[get] [] [] [] [/api /api/* /apis /apis/* /healthz /openapi /openapi/* /swagger-2.0.0.pb-v1 /swagger.json /swaggerapi /swaggerapi/* /version /version/]} {[delete] [ oauth.openshift.io] [oauthaccesstokens oauthauthorizetokens] [] []} {[get] [] [] [] [/version /version/* /api /api/* /apis /apis/* /oapi /oapi/* /openapi/v2 /swaggerapi /swaggerapi/* /swagger.json /swagger-2.0.0.pb-v1 /osapi /osapi/ /.well-known /.well-known/* /]} {[impersonate] [authentication.k8s.io] [userextras/scopes.authorization.openshift.io] [] []} {[create get] [ build.openshift.io] [buildconfigs/webhooks] [] []} {[*] [] [pods services services/finalizers endpoints persistentvolumeclaims events configmaps secrets serviceaccounts] [] []} {[*] [apps] [deployments deployments/finalizers daemonsets replicasets statefulsets] [] []} {[*] [monitoring.coreos.com] [alertmanagers prometheuses prometheusrules servicemonitors] [] []} {[*] [applicationmonitoring.integreatly.org] [applicationmonitorings applicationmonitorings/finalizers blackboxtargets blackboxtargets/finalizers] [] []} {[*] [integreatly.org] [grafanadatasources grafanadashboards grafanas grafanas/finalizers grafanadatasources/finalizers grafanadashboards/finalizers] [] []} {[*] [route.openshift.io] [routes routes/custom-host] [] []} {[*] [rbac.authorization.k8s.io] [rolebindings roles] [] []} {[*] [extensions] [ingresses] [] []} {[create] [authentication.k8s.io] [tokenreviews] [] []} {[create] [authorization.k8s.io] [subjectaccessreviews] [] []} {[get] [ image.openshift.io] [imagestreams/layers] [] []}] ruleResolutionErrors=[]","errorVerbose":"roles.rbac.authorization.k8s.io \"grafana-operator-role\" is forbidden: attempt to grant extra privileges: [{[*] [integreatly.org] [grafanadashboards/status] [] []} {[*] [integreatly.org] [grafanadatasources/status] [] []} {[*] [integreatly.org] [grafanas/status] [] []}] user=&{system:serviceaccount:application-monitoring:application-monitoring-operator 
14b6a2a5-36f1-11ea-a98e-005056920bc0 [system:serviceaccounts system:serviceaccounts:application-monitoring system:authenticated] map[]} ownerrules=[{[get] [ user.openshift.io] [users] [~] []} {[list] [ project.openshift.io] [projectrequests] [] []} {[get list] [ authorization.openshift.io] [clusterroles] [] []} {[get list watch] [rbac.authorization.k8s.io] [clusterroles] [] []} {[get list] [storage.k8s.io] [storageclasses] [] []} {[list watch] [ project.openshift.io] [projects] [] []} {[create] [ authorization.openshift.io] [selfsubjectrulesreviews] [] []} {[create] [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[get] [] [] [] [/healthz /healthz/*]} {[get] [] [] [] [/version /version/* /api /api/* /apis /apis/* /oapi /oapi/* /openapi/v2 /swaggerapi /swaggerapi/* /swagger.json /swagger-2.0.0.pb-v1 /osapi /osapi/ /.well-known /.well-known/* /]} {[create] [ authorization.openshift.io] [selfsubjectrulesreviews] [] []} {[create] [authorization.k8s.io] [selfsubjectaccessreviews] [] []} {[list watch get] [servicecatalog.k8s.io] [clusterserviceclasses clusterserviceplans] [] []} {[get] [] [] [] [/healthz/ready]} {[create] [authorization.k8s.io] [selfsubjectaccessreviews selfsubjectrulesreviews] [] []} {[create] [ build.openshift.io] [builds/docker builds/optimizeddocker] [] []} {[create] [ build.openshift.io] [builds/jenkinspipeline] [] []} {[create] [ build.openshift.io] [builds/source] [] []} {[get] [] [] [] [/api /api/* /apis /apis/* /healthz /openapi /openapi/* /swagger-2.0.0.pb-v1 /swagger.json /swaggerapi /swaggerapi/* /version /version/]} {[delete] [ oauth.openshift.io] [oauthaccesstokens oauthauthorizetokens] [] []} {[get] [] [] [] [/version /version/* /api /api/* /apis /apis/* /oapi /oapi/* /openapi/v2 /swaggerapi /swaggerapi/* /swagger.json /swagger-2.0.0.pb-v1 /osapi /osapi/ /.well-known /.well-known/* /]} {[impersonate] [authentication.k8s.io] [userextras/scopes.authorization.openshift.io] [] []} {[create get] [ build.openshift.io] 
[buildconfigs/webhooks] [] []} {[*] [] [pods services services/finalizers endpoints persistentvolumeclaims events configmaps secrets serviceaccounts] [] []} {[*] [apps] [deployments deployments/finalizers daemonsets replicasets statefulsets] [] []} {[*] [monitoring.coreos.com] [alertmanagers prometheuses prometheusrules servicemonitors] [] []} {[*] [applicationmonitoring.integreatly.org] [applicationmonitorings applicationmonitorings/finalizers blackboxtargets blackboxtargets/finalizers] [] []} {[*] [integreatly.org] [grafanadatasources grafanadashboards grafanas grafanas/finalizers grafanadatasources/finalizers grafanadashboards/finalizers] [] []} {[*] [route.openshift.io] [routes routes/custom-host] [] []} {[*] [rbac.authorization.k8s.io] [rolebindings roles] [] []} {[*] [extensions] [ingresses] [] []} {[create] [authentication.k8s.io] [tokenreviews] [] []} {[create] [authorization.k8s.io] [subjectaccessreviews] [] []} {[get] [ image.openshift.io] [imagestreams/layers] [] []}] ruleResolutionErrors=[]\nerror creating 
resource\ngithub.com/integr8ly/application-monitoring-operator/pkg/controller/applicationmonitoring.(*ReconcileApplicationMonitoring).createResource\n\tapplication-monitoring-operator/pkg/controller/applicationmonitoring/applicationmonitoring_controller.go:516\ngithub.com/integr8ly/application-monitoring-operator/pkg/controller/applicationmonitoring.(*ReconcileApplicationMonitoring).installGrafanaOperator\n\tapplication-monitoring-operator/pkg/controller/applicationmonitoring/applicationmonitoring_controller.go:468\ngithub.com/integr8ly/application-monitoring-operator/pkg/controller/applicationmonitoring.(*ReconcileApplicationMonitoring).Reconcile\n\tapplication-monitoring-operator/pkg/controller/applicationmonitoring/applicationmonitoring_controller.go:158\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tapplication-monitoring-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:216\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tapplication-monitoring-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:192\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\tapplication-monitoring-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:171\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\tapplication-monitoring-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\tapplication-monitoring-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\tapplication-monitoring-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88\nruntime.goexit\n\t/home/dkirwan/bin/applications/go/src/runtime/asm_amd64.s:1357","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\tapplication-monitoring-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.i
o/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tapplication-monitoring-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:218\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tapplication-monitoring-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:192\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\tapplication-monitoring-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:171\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\tapplication-monitoring-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\tapplication-monitoring-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\tapplication-monitoring-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
W0114 17:18:51.623456       1 reflector.go:302] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:204: watch of *v1.Secret ended with: The resourceVersion for the provided watch is too old.

Can anybody help? Thanks a lot
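
The "attempt to grant extra privileges" message is Kubernetes RBAC escalation prevention: the application-monitoring-operator service account is trying to create a Role that grants grafanas/status, grafanadashboards/status and grafanadatasources/status, but its own rules (listed in the log) only cover the base integreatly.org resources and their finalizers, so it cannot grant what it does not hold. A possible fix, assuming the operator's role is the one shipped under deploy/, would be to extend it with the missing status subresources, mirroring the rule shape already visible in the log:

```yaml
# Hypothetical additional rule for the operator's (Cluster)Role
- apiGroups:
    - integreatly.org
  resources:
    - grafanas/status
    - grafanadashboards/status
    - grafanadatasources/status
  verbs:
    - '*'
```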
