
Comments (22)

SaMnCo commented on September 14, 2024

Hello,

I started this. It does not use the SUSE Cert Generator yet, but it is a start.

https://github.com/madeden/charts/tree/master/custom-metrics-apiserver

Also, I did not add a dependency on Prometheus, as people may want to use the operator instead of the chart. Any comments welcome.
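
If a dependency is ever added, Helm can make it optional via a requirements.yaml condition; a sketch, where the version range and condition name are hypothetical:

# requirements.yaml -- sketch of an *optional* Prometheus dependency
dependencies:
  - name: prometheus
    version: "^6.0.0"                # hypothetical version range
    repository: https://kubernetes-charts.storage.googleapis.com
    condition: prometheus.enabled    # operator users simply leave this false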

Best,
Sam


steven-sheehy commented on September 14, 2024

@DirectXMan12 I think this issue can be closed now. prometheus-adapter is now in the official chart repository under stable/prometheus-adapter.


SaMnCo commented on September 14, 2024

tomkerkhove commented on September 14, 2024

This might be nit-picking, but wouldn't prometheus-metrics-adapter, or something along those lines, be a better name? Something that indicates what it is used for in the context of Kubernetes.

Not a must, just thinking out loud here.


DirectXMan12 commented on September 14, 2024

I skimmed it briefly. Looks promising, thanks! Feel free to submit a PR or some such when you're ready.


SaMnCo commented on September 14, 2024

DirectXMan12 commented on September 14, 2024

I'm not incredibly familiar with it, but GKE 1.8 doesn't have the right flag set on the cluster to allow the HPA controller to access custom metrics. It'll be enabled in GKE 1.9, though, when that lands.
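
(The flag in question is the controller manager's --horizontal-pod-autoscaler-use-rest-clients, which defaults to false in Kubernetes 1.8. A sketch of where it lives on a self-managed control plane; the static-pod layout and image tag are illustrative, and GKE does not expose this file:)

apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
    - name: kube-controller-manager
      image: gcr.io/google-containers/kube-controller-manager:v1.8.7   # illustrative tag
      command:
        - kube-controller-manager
        # makes the HPA controller query custom.metrics.k8s.io instead of Heapster
        - --horizontal-pod-autoscaler-use-rest-clients=true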


SaMnCo commented on September 14, 2024

DirectXMan12 commented on September 14, 2024

> How do you check that?

I had a suspicion, because the default value was "false" in that release of Kubernetes, and then I actually had to ask the relevant people on Slack to confirm.

> I am getting my bare metal cluster today, which will allow me to test the whole thing a bit better. Stay tuned.

👍


SaMnCo commented on September 14, 2024

Hi there! I am really struggling here; for some reason the HPA is not picking up the values.

Here is what I have:

K8s 1.8.7 on Ubuntu, using the Canonical Distribution of Kubernetes. That means the control plane is outside of the cluster, living on its own machines. Its configs are:

$ sudo cat /var/snap/kube-controller-manager/current/args
--horizontal-pod-autoscaler-sync-period "10s"
--horizontal-pod-autoscaler-use-rest-clients
--logtostderr
--master "http://127.0.0.1:8080"
--min-resync-period "3m"
--root-ca-file "/root/cdk/ca.crt"
--service-account-private-key-file "/root/cdk/serviceaccount.key"
--v 2

and

$ sudo cat /var/snap/kube-apiserver/current/args
--admission-control "Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,DefaultTolerationSeconds"
--allow-privileged
--authorization-mode "RBAC,AlwaysAllow"
--basic-auth-file "/root/cdk/basic_auth.csv"
--client-ca-file "/root/cdk/ca.crt"
--enable-aggregator-routing
--etcd-cafile "/root/cdk/etcd/client-ca.pem"
--etcd-certfile "/root/cdk/etcd/client-cert.pem"
--etcd-keyfile "/root/cdk/etcd/client-key.pem"
--etcd-servers "https://10.30.0.142:2379,https://10.30.0.145:2379,https://10.30.0.148:2379"
--insecure-bind-address "127.0.0.1"
--insecure-port 8080
--kubelet-certificate-authority "/root/cdk/ca.crt"
--kubelet-client-certificate "/root/cdk/client.crt"
--kubelet-client-key "/root/cdk/client.key"
--logtostderr
--min-request-timeout 300
--requestheader-allowed-names "aggregator"
--requestheader-client-ca-file "/root/cdk/ca.crt"
--requestheader-extra-headers-prefix "X-Remote-Extra-"
--requestheader-group-headers "X-Remote-Group"
--requestheader-username-headers "X-Remote-User"
--runtime-config "api/all=true"
--service-account-key-file "/root/cdk/serviceaccount.key"
--service-cluster-ip-range "10.152.183.0/24"
--storage-backend "etcd2"
--tls-cert-file "/root/cdk/server.crt"
--tls-private-key-file "/root/cdk/server.key"
--token-auth-file "/root/cdk/known_tokens.csv"
--v 4

Because of that, I deploy everything as a variation of your setup. For example, instead of using certs for the custom metrics adapter's connection, I need to use a kubeconfig. To use the service account as the authentication data, this means I have to create the service account in advance, then share its kubeconfig in values.yaml. Not very elegant :( but I got it to work that way and will improve over time.
Attached is the deployment manifest for reference.
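
For reference, a minimal sketch of such a kubeconfig; the names, host, and token below are placeholders rather than values from the chart:

apiVersion: v1
kind: Config
clusters:
  - name: cluster
    cluster:
      server: https://<apiserver-host>:6443
      certificate-authority-data: <base64-encoded cluster CA>
users:
  - name: custom-metrics-apiserver
    user:
      token: <bearer token from the pre-created service account's secret>
contexts:
  - name: default
    context:
      cluster: cluster
      user: custom-metrics-apiserver
current-context: default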

Now, this gives me just about everything I could ever desire:

$ kubectl api-versions
admissionregistration.k8s.io/v1alpha1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1beta1
apps/v1beta1
apps/v1beta2
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
batch/v1
batch/v1beta1
batch/v2alpha1
certificates.k8s.io/v1beta1
custom.metrics.k8s.io/v1beta1
extensions/v1beta1
monitoring.coreos.com/v1
networking.k8s.io/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1alpha1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1alpha1
settings.k8s.io/v1alpha1
storage.k8s.io/v1
storage.k8s.io/v1beta1
tensorflow.org/v1alpha1

When I deploy the sample app, I get the custom API endpoint filled with

$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests | jq .
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/http_requests"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "sample-metrics-app-6c49858746-77njc",
        "apiVersion": "/__internal"
      },
      "metricName": "http_requests",
      "timestamp": "2018-01-25T09:03:05Z",
      "value": "433m"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "sample-metrics-app-6c49858746-tv6jp",
        "apiVersion": "/__internal"
      },
      "metricName": "http_requests",
      "timestamp": "2018-01-25T09:03:05Z",
      "value": "433m"
    }
  ]
}

$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/*/http_requests | jq .
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/%2A/http_requests"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Service",
        "namespace": "default",
        "name": "sample-metrics-app",
        "apiVersion": "/__internal"
      },
      "metricName": "http_requests",
      "timestamp": "2018-01-25T09:04:07Z",
      "value": "866m"
    }
  ]
}

But

  • When the HPA is configured on the Service endpoint, as in Luxas' walkthrough:
$ kubectl get hpa.v2beta1.autoscaling -o yaml
apiVersion: v1
items:
- apiVersion: autoscaling/v2beta1
  kind: HorizontalPodAutoscaler
  metadata:
    creationTimestamp: 2018-01-25T06:58:26Z
    name: sample-metrics-app-hpa
    namespace: default
    resourceVersion: "1531643"
    selfLink: /apis/autoscaling/v2beta1/namespaces/default/horizontalpodautoscalers/sample-metrics-app-hpa
    uid: 241b5d5a-019d-11e8-ae4d-00a0a59b0704
  spec:
    maxReplicas: 10
    metrics:
    - object:
        metricName: http_requests
        target:
          kind: Service
          name: sample-metrics-app
        targetValue: "100"
      type: Object
    minReplicas: 2
    scaleTargetRef:
      kind: Deployment
      name: sample-metrics-app
  status:
    conditions:
    - lastTransitionTime: 2018-01-25T06:58:56Z
      message: the HPA controller was able to get the target's current scale
      reason: SucceededGetScale
      status: "True"
      type: AbleToScale
    - lastTransitionTime: 2018-01-25T06:58:56Z
      message: 'the HPA was unable to compute the replica count: unable to get metric
        http_requests: Service on default sample-metrics-app/object metrics are not
        yet supported'
      reason: FailedGetObjectMetric
      status: "False"
      type: ScalingActive
    currentMetrics: null
    currentReplicas: 2
    desiredReplicas: 0
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

and

Name:                                             sample-metrics-app-hpa
Namespace:                                        default
Labels:                                           <none>
Annotations:                                      kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"sample-metrics-app-hpa","namespace":"default"...
CreationTimestamp:                                Thu, 25 Jan 2018 01:58:26 -0500
Reference:                                        Deployment/sample-metrics-app
Metrics:                                          ( current / target )
  "http_requests" on Service/sample-metrics-app:  <unknown> / 100
Min replicas:                                     2
Max replicas:                                     10
Conditions:
  Type           Status  Reason                 Message
  ----           ------  ------                 -------
  AbleToScale    True    SucceededGetScale      the HPA controller was able to get the target's current scale
  ScalingActive  False   FailedGetObjectMetric  the HPA was unable to compute the replica count: unable to get metric http_requests: Service on default sample-metrics-app/object metrics are not yet supported
Events:
  Type     Reason                        Age                 From                       Message
  ----     ------                        ----                ----                       -------
  Warning  FailedGetObjectMetric         5m (x161 over 1h)   horizontal-pod-autoscaler  unable to get metric http_requests: Service on default sample-metrics-app/object metrics are not yet supported
  Warning  FailedComputeMetricsReplicas  21s (x171 over 1h)  horizontal-pod-autoscaler  failed to get object metric value: unable to get metric http_requests: Service on default sample-metrics-app/object metrics are not yet supported

Now, if I configure it with the pods resource, as in your walkthrough, I get events such as:

Conditions:
  Type           Status  Reason               Message
  ----           ------  ------               -------
  AbleToScale    True    SucceededGetScale    the HPA controller was able to get the target's current scale
  ScalingActive  False   FailedGetPodsMetric  the HPA was unable to compute the replica count: unable to get metric http_requests: failed to get pod metrics: an error on the server ("Error: 'dial tcp 10.1.44.99:8082: getsockopt: connection timed out'\nTrying to reach: 'http://10.1.44.99:8082/api/v1/model/namespaces/default/pod-list/sample-metrics-app-6c49858746-77njc,sample-metrics-app-6c49858746-tv6jp/metrics/http_requests?start=2018-01-25T08%3A59%3A15Z'") has prevented the request from succeeding (get services http:heapster:)
Events:
  Type     Reason                        Age                From                       Message
  ----     ------                        ----               ----                       -------
  Warning  FailedGetPodsMetric           33m                horizontal-pod-autoscaler  unable to get metric http_requests: failed to get pod metrics: an error on the server ("Error: 'dial tcp 10.1.44.99:8082: getsockopt: connection timed out'\nTrying to reach: 'http://10.1.44.99:8082/api/v1/model/namespaces/default/pod-list/sample-metrics-app-6c49858746-77njc,sample-metrics-app-6c49858746-tv6jp/metrics/http_requests?start=2018-01-25T08%3A27%3A26Z'") has prevented the request from succeeding (get services http:heapster:)
  Warning  FailedComputeMetricsReplicas  33m                horizontal-pod-autoscaler  failed to get pods metric value: unable to get metric http_requests: failed to get pod metrics: an error on the server ("Error: 'dial tcp 10.1.44.99:8082: getsockopt: connection timed out'\nTrying to reach: 'http://10.1.44.99:8082/api/v1/model/namespaces/default/pod-list/sample-metrics-app-6c49858746-77njc,sample-metrics-app-6c49858746-tv6jp/metrics/http_requests?start=2018-01-25T08%3A27%3A26Z'") has prevented the request from succeeding (get services http:heapster:)

(Note that 10.1.44.99 is the Heapster pod's IP address in the cluster. It is therefore not reachable from the outside world, and the controller manager lives outside on another machine; it is not a pod in the K8s cluster...) I could not find documentation explaining whether I can point the HPA at another endpoint to get the metrics.

Any idea? I have been scratching my head over this all week and cannot figure out the issue. If you have some time to discuss it, I'd be happy to go through it with you.

Thx in advance for your help,
Best,
Sam


SaMnCo commented on September 14, 2024

OK, I nailed the issue, and it is working now. The problem was due to a down-scaling of the control plane that had not completed properly. To debug more easily, I had requested to go down to 1 master (from 3) in the development cluster.
As a result, the controller manager remained in a state where it kept asking to become the leader, but constantly logged that the lock was held by another node.

Weirdly (I would consider this a bug, but would take advice here), this single controller manager would still make sure that new resources such as pods, services, and every other object were created. But it would not do anything else, and in particular would not try to collect HPA values.

So I restarted it with --leader-elect=false and it magically came back to life. I am now properly collecting values.

So happy !! 💪

Will complete the chart soon...


galan commented on September 14, 2024

Hi @SaMnCo, so when can we expect a first working chart? Thanks for your effort!


john-delivuk commented on September 14, 2024

@SaMnCo a few things.

  1. The secure port should be set outside of the TLS block. The adapter can only serve securely, so if you set tls to false, it fails to start on 443.
  2. I had to set my cert directory because I couldn't write to root. I've made mine /tmp, but there may be a better place.
        {{ if .Values.service.tls.enable -}}
        - --tls-cert-file=/var/run/serving-cert/tls.crt
        - --tls-private-key-file=/var/run/serving-cert/tls.key
        {{ else }}
        - --cert-dir=/tmp/certs/ 
        {{- end }}
        {{ if eq .Values.service.authentication.method "kubeconfig" -}}
        - --authentication-kubeconfig=/var/run/kubeconfig/kubeconfig
        {{- end }}
        - --secure-port={{ .Values.service.internalPort }}
        - --logtostderr=true
        - --prometheus-url={{- .Values.prometheus.service.url -}}:{{- .Values.prometheus.service.port }}
        - --metrics-relist-interval=30s
        - --rate-interval=5m
        - --v=4

Otherwise this works great and I hope this makes it to stable!


bradenwright commented on September 14, 2024

I'm trying to get this going and have run into some issues. I used the Helm chart and made @john-delivuk's changes above, which got me past the self-signed cert permissions error.

I also added a ServiceAccount:

kind: ServiceAccount                                                                                                                                                                                        
apiVersion: v1
metadata:
  name: {{ template "custom-metrics-apiserver.fullname" . }}
  namespace: {{ .Release.Namespace }}
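
For the ServiceAccount to actually be used, the deployment's pod spec also has to reference it; a sketch reusing the same template helper (this assumes the chart's deployment template, which may already do this):

spec:
  template:
    spec:
      serviceAccountName: {{ template "custom-metrics-apiserver.fullname" . }}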

Values of interest (service.tls.enable=false, service.authentication.method=default):

service:
  name: custom-metrics-apiserver
  type: ClusterIP
  externalPort: 443 
  internalPort: 6443
  tls:
    enable: false
    ca: |-
    key: |-
    certificate: |-
  version: v1beta1
  authentication:
    # method to use to load configuration. By default, will use tokens. 
    # Acceptable options: default, kubeconfig 
    method: default

### Removed for readability ###

prometheus: 
  service: 
    url: http://kube-prometheus-prometheus.monitoring.svc.cluster.local
    port: 9090

Note: when setting service.authentication.method=kubeconfig, I had errors regarding the certificate-authority-data: base64 certificate line <-- https://github.com/madeden/charts/blob/master/custom-metrics-apiserver/values.yaml#L32

I installed it via helm install --namespace monitoring custom-metrics-apiserver.

I double-checked the tutorial's files and all resources are being created. But something with the APIService isn't working properly:

$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
Error from server (NotFound): the server could not find the requested resource
$ kc describe apiservice -n monitoring v1beta1.custom.metrics.k8s.io
Name:         v1beta1.custom.metrics.k8s.io
Namespace:    
Labels:       <none>
Annotations:  <none>
API Version:  apiregistration.k8s.io/v1beta1
Kind:         APIService
Metadata:
  Creation Timestamp:  2018-04-30T14:04:35Z
  Resource Version:    7821499
  Self Link:           /apis/apiregistration.k8s.io/v1beta1/apiservices/v1beta1.custom.metrics.k8s.io
  UID:                 69f69629-4c7f-11e8-b800-0a9eddd384ce
Spec:
  Ca Bundle:                 <nil>
  Group:                     custom.metrics.k8s.io
  Group Priority Minimum:    100
  Insecure Skip TLS Verify:  true
  Service:
    Name:            custom-metrics-apiserver
    Namespace:       monitoring
  Version:           v1beta1
  Version Priority:  100
Status:
  Conditions:
    Last Transition Time:  2018-04-30T14:04:36Z
    Message:               all checks passed
    Reason:                Passed
    Status:                True
    Type:                  Available
Events:                    <none>
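
For reference, here is the APIService the chart registered, reconstructed as a manifest from the describe output above (a sketch limited to the fields shown):

apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  group: custom.metrics.k8s.io
  version: v1beta1
  groupPriorityMinimum: 100
  versionPriority: 100
  insecureSkipTLSVerify: true    # no caBundle is set, so the aggregator skips TLS verification
  service:
    name: custom-metrics-apiserver
    namespace: monitoring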

Pod is running but is showing some errors:

I0430 14:04:36.926378       1 serving.go:279] Generated self-signed cert (/tmp/certs/apiserver.crt, /tmp/certs/apiserver.key)
I0430 14:04:37.210679       1 round_trippers.go:383] GET https://100.64.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication
I0430 14:04:37.210701       1 round_trippers.go:390] Request Headers:
I0430 14:04:37.210706       1 round_trippers.go:393]     Accept: application/json, */*
I0430 14:04:37.210712       1 round_trippers.go:393]     User-Agent: prometheus-adapter/v0.0.0 (linux/amd64) kubernetes/$Format
I0430 14:04:37.210717       1 round_trippers.go:393]     Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtb25pdG9yaW5nIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InN0ZWVseS1ncmV5aG91bmQtY3VzdG9tLW1ldHJpY3MtYXBpc2VydmVyLXRva2VuLWc0anJqIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6InN0ZWVseS1ncmV5aG91bmQtY3VzdG9tLW1ldHJpY3MtYXBpc2VydmVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjlkNTMwYTEtNGM3Zi0xMWU4LWI4MDAtMGE5ZWRkZDM4NGNlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om1vbml0b3Jpbmc6c3RlZWx5LWdyZXlob3VuZC1jdXN0b20tbWV0cmljcy1hcGlzZXJ2ZXIifQ.NQCSDNS7teL3cO5mRhLVbiCvpIMVfaCg3cp7eVsJCXSBduoCWf99MlsiEXsY3neo9k6Ytknwcsk84B4kEW8Z7QVgyS6ZmRLoVNuKJKja91K7GWNs8rah_w6tdp5Ba9TFpiBFIwjse5gOOIrSCFt5olTk5ElC66CYjRBfxZJy4G7JtebphrbFXi_9ebMGdWSNlH6jR67o1-fjz8mUCgvRMoYSh0PmE_3XJDxHwbfuk9bj4RwhKlcUHv9T1INJmqIEVmyQAqET9GuRyc4xbWNta5Urzy1tIKthN8mLYBckjizIcKBRYBYFTUVCksNQNIJjINQ7mMyJD2SipL9sjWxMUQ
I0430 14:04:37.222308       1 round_trippers.go:408] Response Status: 200 OK in 11 milliseconds
I0430 14:04:37.223038       1 round_trippers.go:383] GET https://100.64.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication
I0430 14:04:37.223053       1 round_trippers.go:390] Request Headers:
I0430 14:04:37.223058       1 round_trippers.go:393]     Accept: application/json, */*
I0430 14:04:37.223063       1 round_trippers.go:393]     User-Agent: prometheus-adapter/v0.0.0 (linux/amd64) kubernetes/$Format
I0430 14:04:37.223067       1 round_trippers.go:393]     Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtb25pdG9yaW5nIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InN0ZWVseS1ncmV5aG91bmQtY3VzdG9tLW1ldHJpY3MtYXBpc2VydmVyLXRva2VuLWc0anJqIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6InN0ZWVseS1ncmV5aG91bmQtY3VzdG9tLW1ldHJpY3MtYXBpc2VydmVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjlkNTMwYTEtNGM3Zi0xMWU4LWI4MDAtMGE5ZWRkZDM4NGNlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om1vbml0b3Jpbmc6c3RlZWx5LWdyZXlob3VuZC1jdXN0b20tbWV0cmljcy1hcGlzZXJ2ZXIifQ.NQCSDNS7teL3cO5mRhLVbiCvpIMVfaCg3cp7eVsJCXSBduoCWf99MlsiEXsY3neo9k6Ytknwcsk84B4kEW8Z7QVgyS6ZmRLoVNuKJKja91K7GWNs8rah_w6tdp5Ba9TFpiBFIwjse5gOOIrSCFt5olTk5ElC66CYjRBfxZJy4G7JtebphrbFXi_9ebMGdWSNlH6jR67o1-fjz8mUCgvRMoYSh0PmE_3XJDxHwbfuk9bj4RwhKlcUHv9T1INJmqIEVmyQAqET9GuRyc4xbWNta5Urzy1tIKthN8mLYBckjizIcKBRYBYFTUVCksNQNIJjINQ7mMyJD2SipL9sjWxMUQ
I0430 14:04:37.224737       1 round_trippers.go:408] Response Status: 200 OK in 1 milliseconds
I0430 14:04:37.225647       1 round_trippers.go:383] GET https://100.64.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication
I0430 14:04:37.225661       1 round_trippers.go:390] Request Headers:
I0430 14:04:37.225666       1 round_trippers.go:393]     Accept: application/json, */*
I0430 14:04:37.225670       1 round_trippers.go:393]     User-Agent: prometheus-adapter/v0.0.0 (linux/amd64) kubernetes/$Format
I0430 14:04:37.225675       1 round_trippers.go:393]     Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtb25pdG9yaW5nIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InN0ZWVseS1ncmV5aG91bmQtY3VzdG9tLW1ldHJpY3MtYXBpc2VydmVyLXRva2VuLWc0anJqIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6InN0ZWVseS1ncmV5aG91bmQtY3VzdG9tLW1ldHJpY3MtYXBpc2VydmVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjlkNTMwYTEtNGM3Zi0xMWU4LWI4MDAtMGE5ZWRkZDM4NGNlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om1vbml0b3Jpbmc6c3RlZWx5LWdyZXlob3VuZC1jdXN0b20tbWV0cmljcy1hcGlzZXJ2ZXIifQ.NQCSDNS7teL3cO5mRhLVbiCvpIMVfaCg3cp7eVsJCXSBduoCWf99MlsiEXsY3neo9k6Ytknwcsk84B4kEW8Z7QVgyS6ZmRLoVNuKJKja91K7GWNs8rah_w6tdp5Ba9TFpiBFIwjse5gOOIrSCFt5olTk5ElC66CYjRBfxZJy4G7JtebphrbFXi_9ebMGdWSNlH6jR67o1-fjz8mUCgvRMoYSh0PmE_3XJDxHwbfuk9bj4RwhKlcUHv9T1INJmqIEVmyQAqET9GuRyc4xbWNta5Urzy1tIKthN8mLYBckjizIcKBRYBYFTUVCksNQNIJjINQ7mMyJD2SipL9sjWxMUQ
I0430 14:04:37.227560       1 round_trippers.go:408] Response Status: 200 OK in 1 milliseconds
I0430 14:04:37.227944       1 round_trippers.go:383] GET https://100.64.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication
I0430 14:04:37.227959       1 round_trippers.go:390] Request Headers:
I0430 14:04:37.227964       1 round_trippers.go:393]     User-Agent: prometheus-adapter/v0.0.0 (linux/amd64) kubernetes/$Format
I0430 14:04:37.227970       1 round_trippers.go:393]     Accept: application/json, */*
I0430 14:04:37.227974       1 round_trippers.go:393]     Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtb25pdG9yaW5nIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InN0ZWVseS1ncmV5aG91bmQtY3VzdG9tLW1ldHJpY3MtYXBpc2VydmVyLXRva2VuLWc0anJqIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6InN0ZWVseS1ncmV5aG91bmQtY3VzdG9tLW1ldHJpY3MtYXBpc2VydmVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjlkNTMwYTEtNGM3Zi0xMWU4LWI4MDAtMGE5ZWRkZDM4NGNlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om1vbml0b3Jpbmc6c3RlZWx5LWdyZXlob3VuZC1jdXN0b20tbWV0cmljcy1hcGlzZXJ2ZXIifQ.NQCSDNS7teL3cO5mRhLVbiCvpIMVfaCg3cp7eVsJCXSBduoCWf99MlsiEXsY3neo9k6Ytknwcsk84B4kEW8Z7QVgyS6ZmRLoVNuKJKja91K7GWNs8rah_w6tdp5Ba9TFpiBFIwjse5gOOIrSCFt5olTk5ElC66CYjRBfxZJy4G7JtebphrbFXi_9ebMGdWSNlH6jR67o1-fjz8mUCgvRMoYSh0PmE_3XJDxHwbfuk9bj4RwhKlcUHv9T1INJmqIEVmyQAqET9GuRyc4xbWNta5Urzy1tIKthN8mLYBckjizIcKBRYBYFTUVCksNQNIJjINQ7mMyJD2SipL9sjWxMUQ
I0430 14:04:37.229626       1 round_trippers.go:408] Response Status: 200 OK in 1 milliseconds
I0430 14:04:37.230314       1 round_trippers.go:383] GET https://100.64.0.1:443/api
I0430 14:04:37.230330       1 round_trippers.go:390] Request Headers:
I0430 14:04:37.230335       1 round_trippers.go:393]     Accept: application/json, */*
I0430 14:04:37.230340       1 round_trippers.go:393]     User-Agent: prometheus-adapter/v0.0.0 (linux/amd64) kubernetes/$Format
I0430 14:04:37.230347       1 round_trippers.go:393]     Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtb25pdG9yaW5nIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InN0ZWVseS1ncmV5aG91bmQtY3VzdG9tLW1ldHJpY3MtYXBpc2VydmVyLXRva2VuLWc0anJqIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6InN0ZWVseS1ncmV5aG91bmQtY3VzdG9tLW1ldHJpY3MtYXBpc2VydmVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjlkNTMwYTEtNGM3Zi0xMWU4LWI4MDAtMGE5ZWRkZDM4NGNlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om1vbml0b3Jpbmc6c3RlZWx5LWdyZXlob3VuZC1jdXN0b20tbWV0cmljcy1hcGlzZXJ2ZXIifQ.NQCSDNS7teL3cO5mRhLVbiCvpIMVfaCg3cp7eVsJCXSBduoCWf99MlsiEXsY3neo9k6Ytknwcsk84B4kEW8Z7QVgyS6ZmRLoVNuKJKja91K7GWNs8rah_w6tdp5Ba9TFpiBFIwjse5gOOIrSCFt5olTk5ElC66CYjRBfxZJy4G7JtebphrbFXi_9ebMGdWSNlH6jR67o1-fjz8mUCgvRMoYSh0PmE_3XJDxHwbfuk9bj4RwhKlcUHv9T1INJmqIEVmyQAqET9GuRyc4xbWNta5Urzy1tIKthN8mLYBckjizIcKBRYBYFTUVCksNQNIJjINQ7mMyJD2SipL9sjWxMUQ
I0430 14:04:37.231208       1 round_trippers.go:408] Response Status: 200 OK in 0 milliseconds

... (errors in next section)

I0430 14:05:54.438388       1 round_trippers.go:393]     Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtb25pdG9yaW5nIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InN0ZWVseS1ncmV5aG91bmQtY3VzdG9tLW1ldHJpY3MtYXBpc2VydmVyLXRva2VuLWc0anJqIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6InN0ZWVseS1ncmV5aG91bmQtY3VzdG9tLW1ldHJpY3MtYXBpc2VydmVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjlkNTMwYTEtNGM3Zi0xMWU4LWI4MDAtMGE5ZWRkZDM4NGNlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om1vbml0b3Jpbmc6c3RlZWx5LWdyZXlob3VuZC1jdXN0b20tbWV0cmljcy1hcGlzZXJ2ZXIifQ.NQCSDNS7teL3cO5mRhLVbiCvpIMVfaCg3cp7eVsJCXSBduoCWf99MlsiEXsY3neo9k6Ytknwcsk84B4kEW8Z7QVgyS6ZmRLoVNuKJKja91K7GWNs8rah_w6tdp5Ba9TFpiBFIwjse5gOOIrSCFt5olTk5ElC66CYjRBfxZJy4G7JtebphrbFXi_9ebMGdWSNlH6jR67o1-fjz8mUCgvRMoYSh0PmE_3XJDxHwbfuk9bj4RwhKlcUHv9T1INJmqIEVmyQAqET9GuRyc4xbWNta5Urzy1tIKthN8mLYBckjizIcKBRYBYFTUVCksNQNIJjINQ7mMyJD2SipL9sjWxMUQ
I0430 14:05:54.440644       1 round_trippers.go:408] Response Status: 201 Created in 2 milliseconds
I0430 14:05:54.440806       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1: (2.747724ms) 404 [[kube-controller-manager/v1.8.7 (linux/amd64) kubernetes/b30876a/system:serviceaccount:kube-system:generic-garbage-collector] 100.96.0.1:28482]
E0430 14:05:54.964952       1 metric_namer.go:89] Unable to process namespaced series "kube_cronjob_created": unable to process prometheus series kube_cronjob_created: {  cronjob} matches multiple resources [{batch v1beta1 cronjobs} {batch v2alpha1 cronjobs}]
E0430 14:05:54.965076       1 metric_namer.go:89] Unable to process namespaced series "kube_cronjob_info": unable to process prometheus series kube_cronjob_info: {  cronjob} matches multiple resources [{batch v1beta1 cronjobs} {batch v2alpha1 cronjobs}]
E0430 14:05:54.965236       1 metric_namer.go:89] Unable to process namespaced series "kube_cronjob_labels": unable to process prometheus series kube_cronjob_labels: {  cronjob} matches multiple resources [{batch v1beta1 cronjobs} {batch v2alpha1 cronjobs}]
E0430 14:05:54.965267       1 metric_namer.go:89] Unable to process namespaced series "kube_cronjob_next_schedule_time": unable to process prometheus series kube_cronjob_next_schedule_time: {  cronjob} matches multiple resources [{batch v1beta1 cronjobs} {batch v2alpha1 cronjobs}]
E0430 14:05:54.965344       1 metric_namer.go:89] Unable to process namespaced series "kube_cronjob_spec_suspend": unable to process prometheus series kube_cronjob_spec_suspend: {  cronjob} matches multiple resources [{batch v1beta1 cronjobs} {batch v2alpha1 cronjobs}]
E0430 14:05:54.965434       1 metric_namer.go:89] Unable to process namespaced series "kube_cronjob_status_active": unable to process prometheus series kube_cronjob_status_active: {  cronjob} matches multiple resources [{batch v1beta1 cronjobs} {batch v2alpha1 cronjobs}]
E0430 14:05:54.965465       1 metric_namer.go:89] Unable to process namespaced series "kube_cronjob_status_last_schedule_time": unable to process prometheus series kube_cronjob_status_last_schedule_time: {  cronjob} matches multiple resources [{batch v1beta1 cronjobs} {batch v2alpha1 cronjobs}]
I0430 14:05:55.864569       1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1: (326.462µs) 404 [[kube-controller-manager/v1.8.7 (linux/amd64) kubernetes/b30876a/system:serviceaccount:kube-system:generic-garbage-collector] 100.96.0.1:28482]
I0430 14:06:07.934425       1 round_trippers.go:383] POST https://100.64.0.1:443/apis/authorization.k8s.io/v1beta1/subjectaccessreviews
I0430 14:06:07.934447       1 round_trippers.go:390] Request Headers:
I0430 14:06:07.934452       1 round_trippers.go:393]     User-Agent: prometheus-adapter/v0.0.0 (linux/amd64) kubernetes/$Format
I0430 14:06:07.934457       1 round_trippers.go:393]     Content-Type: application/json
I0430 14:06:07.934461       1 round_trippers.go:393]     Accept: application/json, */*
I0430 14:06:07.934465       1 round_trippers.go:393]     Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtb25pdG9yaW5nIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6InN0ZWVseS1ncmV5aG91bmQtY3VzdG9tLW1ldHJpY3MtYXBpc2VydmVyLXRva2VuLWc0anJqIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6InN0ZWVseS1ncmV5aG91bmQtY3VzdG9tLW1ldHJpY3MtYXBpc2VydmVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjlkNTMwYTEtNGM3Zi0xMWU4LWI4MDAtMGE5ZWRkZDM4NGNlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Om1vbml0b3Jpbmc6c3RlZWx5LWdyZXlob3VuZC1jdXN0b20tbWV0cmljcy1hcGlzZXJ2ZXIifQ.NQCSDNS7teL3cO5mRhLVbiCvpIMVfaCg3cp7eVsJCXSBduoCWf99MlsiEXsY3neo9k6Ytknwcsk84B4kEW8Z7QVgyS6ZmRLoVNuKJKja91K7GWNs8rah_w6tdp5Ba9TFpiBFIwjse5gOOIrSCFt5olTk5ElC66CYjRBfxZJy4G7JtebphrbFXi_9ebMGdWSNlH6jR67o1-fjz8mUCgvRMoYSh0PmE_3XJDxHwbfuk9bj4RwhKlcUHv9T1INJmqIEVmyQAqET9GuRyc4xbWNta5Urzy1tIKthN8mLYBckjizIcKBRYBYFTUVCksNQNIJjINQ7mMyJD2SipL9sjWxMUQ
I0430 14:06:07.936901       1 round_trippers.go:408] Response Status: 201 Created in 2 milliseconds
I0430 14:06:07.937168       1 wrap.go:42] GET /swagger.json: (3.025475ms) 404 [[] 100.96.0.1:28482]

If anyone (@SaMnCo, @john-delivuk, @DirectXMan12) can point me in the right direction, that would be great.


bradenwright commented on September 14, 2024

Some more details:

(The APIService describe output is unchanged from above.)

$ kc describe svc -n monitoring custom-metrics-apiserver 
Name:              custom-metrics-apiserver
Namespace:         monitoring
Labels:            <none>
Annotations:       <none>
Selector:          app=steely-greyhound-custom-metrics-apiserver
Type:              ClusterIP
IP:                100.65.88.67
Port:              custom-metrics-apiserver  443/TCP
TargetPort:        6443/TCP
Endpoints:         100.116.0.12:6443
Session Affinity:  None
Events:            <none>

$ kc describe -n monitoring po steely-greyhound-custom-metrics-apiserver-6997f8c8b6-ng7ld
Name:           steely-greyhound-custom-metrics-apiserver-6997f8c8b6-ng7ld
Namespace:      monitoring
Node:           ip-10-31-55-51.us-west-2.compute.internal/10.31.55.51
Start Time:     Mon, 30 Apr 2018 09:04:35 -0500
Labels:         app=steely-greyhound-custom-metrics-apiserver
                pod-template-hash=2553947462
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"monitoring","name":"steely-greyhound-custom-metrics-apiserver-6997f8c8b6","uid":"...
Status:         Running
IP:             100.116.0.12
Controlled By:  ReplicaSet/steely-greyhound-custom-metrics-apiserver-6997f8c8b6
Containers:
  steely-greyhound-custom-metrics-apiserver:
    Container ID:  docker://2c128c30077fec730d9de22540e5237a4a62de1b784e0c02eb94740552735c12
    Image:         directxman12/k8s-prometheus-adapter:v0.1.0-centos
    Image ID:      docker-pullable://directxman12/k8s-prometheus-adapter@sha256:a891a4d4e70e83e2664b1c60fdfc19bf3f94793b497342c81169a845d3412927
    Port:          6443/TCP
    Host Port:     0/TCP
    Args:
      --cert-dir=/tmp/certs/
      --secure-port=6443
      --logtostderr=true
      --prometheus-url=http://kube-prometheus-prometheus.monitoring.svc.cluster.local:9090
      --rate-interval=20s
      --metrics-relist-interval=30s
      --discovery-interval=30s
      --v=7
    State:          Running
      Started:      Mon, 30 Apr 2018 09:04:36 -0500
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from steely-greyhound-custom-metrics-apiserver-token-g4jrj (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  steely-greyhound-custom-metrics-apiserver-token-g4jrj:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  steely-greyhound-custom-metrics-apiserver-token-g4jrj
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason                 Age   From                                                Message
  ----    ------                 ----  ----                                                -------
  Normal  Scheduled              27m   default-scheduler                                   Successfully assigned steely-greyhound-custom-metrics-apiserver-6997f8c8b6-ng7ld to ip-10-31-55-51.us-west-2.compute.internal
  Normal  SuccessfulMountVolume  27m   kubelet, ip-10-31-55-51.us-west-2.compute.internal  MountVolume.SetUp succeeded for volume "steely-greyhound-custom-metrics-apiserver-token-g4jrj"
  Normal  Pulled                 27m   kubelet, ip-10-31-55-51.us-west-2.compute.internal  Container image "directxman12/k8s-prometheus-adapter:v0.1.0-centos" already present on machine
  Normal  Created                27m   kubelet, ip-10-31-55-51.us-west-2.compute.internal  Created container
  Normal  Started                27m   kubelet, ip-10-31-55-51.us-west-2.compute.internal  Started container


bradenwright commented on September 14, 2024

@SaMnCo @john-delivuk @DirectXMan12 does anyone have any ideas, or is anyone running into the same issue?


jonasrmichel commented on September 14, 2024

@bradenwright -- Did you ever solve your issue?

I performed the same steps as you -- thanks for your note about the additional ServiceAccount. However, I additionally set image.tag=advanced-config (instead of v0.1.0-centos).

(It looks like the directxman12/k8s-prometheus-adapter:advanced-config image was only published c. one month ago.)

I see a similar set of errors logged by the adapter pod; however, it doesn't "seem" to be an issue. I'm able to register and collect custom application metrics (e.g., following @luxas' HTTP request counter demo).
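
For anyone following along, this is the shape of the HPA in that demo (a sketch of the walkthrough's manifest, using the Pods metric type rather than the Object type that failed earlier in this thread):

kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: sample-metrics-app-hpa
spec:
  scaleTargetRef:
    kind: Deployment
    name: sample-metrics-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metricName: http_requests      # served by the adapter from Prometheus
        targetAverageValue: 100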


steven-sheehy commented on September 14, 2024

directxman12/k8s-prometheus-adapter-amd64 has a newer tag, v0.2.1. @SaMnCo's helm chart doesn't seem to be compatible with the latest release. Does anyone happen to have an updated helm chart with the above issues addressed?
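
(If someone wants to try the newer image against the chart, the override would look roughly like this in values.yaml; image.tag is seen earlier in this thread, while the repository key is an assumption about the chart:)

image:
  repository: directxman12/k8s-prometheus-adapter-amd64   # assumed values key
  tag: v0.2.1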


gajus commented on September 14, 2024

> By MWC (End of Month). I am polishing the demo and various use cases for it between now and then.

@SaMnCo Is there a link to this?


steven-sheehy commented on September 14, 2024

I've submitted a helm chart to the official charts repository: helm/charts#7159

If @DirectXMan12 and anyone else can provide their feedback and testing, I would appreciate it.


DirectXMan12 commented on September 14, 2024

cool :-)


tomkerkhove commented on September 14, 2024

Awesome, thanks!

