
Comments (11)

fevisera commented on August 28, 2024

Hi @pomland-94,

Thanks for the additional information. I was able to reproduce the issue by setting the default provisioner to an empty storage class (storageClass: "").

The error arises because the pod name prometheus-prometheus-kube-prometheus-prometheus-shard-1-cf89697fb exceeds the 63-character limit imposed by the DNS naming specification in Kubernetes. You can find more details about this in https://github.com/bitnami/charts/blob/main/bitnami/common/templates/_names.tpl#L21-L37 and https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set.
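As a quick sanity check (plain POSIX shell; the hash suffix below is copied from the error message and will differ per deployment), the two name lengths can be compared directly:

```shell
# Name generated with release name "prometheus": the default fullname becomes
# "prometheus-kube-prometheus", and the operator prepends another "prometheus-".
name="prometheus-prometheus-kube-prometheus-prometheus-shard-1-cf89697fb"
echo "${#name}"   # 66 > 63, so the label is rejected

# With the release name matching the chart name, one "prometheus-" segment drops out:
short="prometheus-kube-prometheus-prometheus-shard-1-cf89697fb"
echo "${#short}"  # 55, within the 63-character limit
```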

To address this, you can simply set the release name to match the chart name during installation. This will ensure the pod names stay within the character limit.

$ helm install --namespace prometheus --create-namespace kube-prometheus bitnami/kube-prometheus -f values.yaml
$ kubectl get all
...
NAME                                                             READY   AGE
statefulset.apps/alertmanager-kube-prometheus-alertmanager       1/1     73s
statefulset.apps/prometheus-kube-prometheus-prometheus           3/3     73s
statefulset.apps/prometheus-kube-prometheus-prometheus-shard-1   3/3     73s
statefulset.apps/prometheus-kube-prometheus-prometheus-shard-2   3/3     73s
statefulset.apps/prometheus-kube-prometheus-prometheus-shard-3   3/3     73s
statefulset.apps/prometheus-kube-prometheus-prometheus-shard-4   3/3     73s

Please let me know if this solution helps resolve your issue.

from charts.

github-actions commented on August 28, 2024

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.


fevisera commented on August 28, 2024

Hi @pomland-94,

Sorry for the delay. I tried to reproduce your issue but could not. Could you provide details on how you are deploying the chart? I could not locate any StatefulSets when I deployed the kube-prometheus chart:

$ cd bitnami/kube-prometheus
$ helm install -n prometheus prometheus . -f my-values.yaml --create-namespace
$ kubectl -n prometheus get pod
NAME                                                            READY   STATUS    RESTARTS   AGE
prometheus-kube-prometheus-blackbox-exporter-6c98576967-v86zv   1/1     Running   0          3m47s
prometheus-kube-prometheus-operator-55f857df44-tjv5t            1/1     Running   0          3m47s
prometheus-kube-state-metrics-5bf4fb9dcd-qfsk2                  1/1     Running   0          3m47s
prometheus-node-exporter-gpc8t                                  1/1     Running   0          3m47s
$ kubectl -n prometheus get statefulsets
No resources found in default namespace.


pomland-94 commented on August 28, 2024

If you look, you can see that I only have problems with the shard StatefulSets:

kubectl --namespace prometheus describe statefulset.apps/prometheus-prometheus-kube-prometheus-prometheus-shard-1


fevisera commented on August 28, 2024

Hi @pomland-94,

In the commands I shared previously, I noticed the absence of StatefulSets after deploying the chart. Could you let me know if you are creating them separately? Please provide details (configuration, commands, etc.) on how you are deploying the chart. This would help me reproduce the issue and find a solution.

Thank you.


pomland-94 commented on August 28, 2024

I install everything with Helm; see the following command:

helm install --namespace prometheus --create-namespace prometheus bitnami/kube-prometheus -f values.yaml

My values file looks like the following:

operator:
  enabled: true
  containerSecurityContext:
    enabled: true
    seLinuxOptions: null
    runAsUser: 1001
    runAsNonRoot: true
    privileged: false
    readOnlyRootFilesystem: false
    allowPrivilegeEscalation: false
    capabilities:
      drop: ["ALL"]
    seccompProfile:
      type: "RuntimeDefault"
  serviceMonitor:
    enabled: true
  kubeletService:
    enabled: true
    namespace: kube-system
  prometheusConfigReloader:
    containerSecurityContext:
      enabled: true
      seLinuxOptions: null
      runAsUser: 1001
      runAsNonRoot: true
      privileged: false
      readOnlyRootFilesystem: false
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: "RuntimeDefault"
prometheus:
  enabled: true
  serviceAccount:
    create: true
    automountServiceAccountToken: false
  containerSecurityContext:
    enabled: true
    seLinuxOptions: null
    runAsUser: 1001
    runAsNonRoot: true
    privileged: false
    readOnlyRootFilesystem: false
    allowPrivilegeEscalation: false
    capabilities:
      drop: ["ALL"]
    seccompProfile:
      type: "RuntimeDefault"
  serviceMonitor:
    enabled: true
  ingress:
    enabled: false
    hostname: prometheus.local
    annotations: {}
    ingressClassName: ""
    tls: false
    selfSigned: false
  externalUrl: ""
  enableAdminAPI: false
  ## @param prometheus.enableFeatures Enable access to Prometheus disabled features.
  ## ref: https://prometheus.io/docs/prometheus/latest/disabled_features/
  ##
  enableFeatures: []
  ## @param prometheus.alertingEndpoints Alertmanagers to which alerts will be sent
  ## ref: https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/api.md#alertmanagerendpoints
  ##
  alertingEndpoints: []
  retention: 90d
  ## @param prometheus.retentionSize Maximum size of metrics
  ##
  disableCompaction: false
  walCompression: false
  replicaCount: 3
  shards: 5
  persistence:
    enabled: true
    storageClass: "rook-cephfs"
    accessModes:
      - ReadWriteMany
    size: 25Gi
  ## @param prometheus.additionalPrometheusRules PrometheusRule defines recording and alerting rules for a Prometheus instance.
  ## - name: custom-recording-rules
  ##   groups:
  ##     - name: sum_node_by_job
  ##       rules:
  ##         - record: job:kube_node_labels:sum
  ##           expr: sum(kube_node_labels) by (job)
  ##     - name: sum_prometheus_config_reload_by_pod
  ##       rules:
  ##         - record: job:prometheus_config_last_reload_successful:sum
  ##           expr: sum(prometheus_config_last_reload_successful) by (pod)
  ## - name: custom-alerting-rules
  ##   groups:
  ##     - name: prometheus-config
  ##       rules:
  ##         - alert: PrometheusConfigurationReload
  ##           expr: prometheus_config_last_reload_successful > 0
  ##           for: 1m
  ##           labels:
  ##             severity: error
  ##           annotations:
  ##             summary: "Prometheus configuration reload (instance {{ $labels.instance }})"
  ##             description: "Prometheus configuration reload error\n  VALUE = {{ $value }}\n  LABELS: {{ $labels }}"
  ##     - name: custom-node-exporter-alerting-rules
  ##       rules:
  ##         - alert: PhysicalComponentTooHot
  ##           expr: node_hwmon_temp_celsius > 75
  ##           for: 5m
  ##           labels:
  ##             severity: warning
  ##           annotations:
  ##             summary: "Physical component too hot (instance {{ $labels.instance }})"
  ##             description: "Physical hardware component too hot\n  VALUE = {{ $value }}\n  LABELS: {{ $labels }}"
  ##         - alert: NodeOvertemperatureAlarm
  ##           expr: node_hwmon_temp_alarm == 1
  ##           for: 5m
  ##           labels:
  ##             severity: critical
  ##           annotations:
  ##             summary: "Node overtemperature alarm (instance {{ $labels.instance }})"
  ##             description: "Physical node temperature alarm triggered\n  VALUE = {{ $value }}\n  LABELS: {{ $labels }}"
  ##
  ## @param prometheus.additionalArgs Allows setting additional arguments for the Prometheus container
  ## ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#monitoring.coreos.com/v1.Prometheus
  ##
  additionalArgs: []
  additionalPrometheusRules: []
  ## Note that the prometheus will fail to provision if the correct secret does not exist.
  ## @param prometheus.additionalScrapeConfigs.enabled Enable additional scrape configs
  ## @param prometheus.additionalScrapeConfigs.type Indicates if the chart should use external additional scrape configs or internal configs
  ## @param prometheus.additionalScrapeConfigs.external.name Name of the secret that Prometheus should use for the additional external scrape configuration
  ## @param prometheus.additionalScrapeConfigs.external.key Name of the key inside the secret to be used for the additional external scrape configuration
  ## @param prometheus.additionalScrapeConfigs.internal.jobList A list of Prometheus scrape jobs
  ##
  additionalScrapeConfigs:
    enabled: false
    type: external
    external:
      ## Name of the secret that Prometheus should use for the additional scrape configuration
      ##
      name: ""
      ## Name of the key inside the secret to be used for the additional scrape configuration.
      ##
      key: ""
    internal:
      jobList: []
  ## @param prometheus.additionalScrapeConfigsExternal.enabled Deprecated: Enable additional scrape configs that are managed externally to this chart
  ## @param prometheus.additionalScrapeConfigsExternal.name Deprecated: Name of the secret that Prometheus should use for the additional scrape configuration
  ## @param prometheus.additionalScrapeConfigsExternal.key Deprecated: Name of the key inside the secret to be used for the additional scrape configuration
  ##
  additionalScrapeConfigsExternal:
    enabled: false
    name: ""
    key: ""
  thanos:
    create: false
    containerSecurityContext:
      enabled: true
      seLinuxOptions: null
      runAsUser: 1001
      runAsNonRoot: true
      privileged: false
      readOnlyRootFilesystem: false
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: "RuntimeDefault"
    ingress:
      enabled: false
      hostname: thanos.prometheus.local
      annotations: {}
      ingressClassName: ""
      tls: false
      selfSigned: false
  configReloader:
    service:
      enabled: false
    serviceMonitor:
      enabled: false
alertmanager:
  enabled: true
  serviceAccount:
    create: true
  containerSecurityContext:
    enabled: true
    seLinuxOptions: null
    runAsUser: 1001
    runAsNonRoot: true
    privileged: false
    readOnlyRootFilesystem: false
    allowPrivilegeEscalation: false
    capabilities:
      drop: ["ALL"]
    seccompProfile:
      type: "RuntimeDefault"
  serviceMonitor:
    enabled: true
  ingress:
    enabled: false
    hostname: alertmanager.local
    annotations: {}
    ingressClassName: ""
    tls: false
    selfSigned: false
  externalUrl: ""
  config:
    global:
      resolve_timeout: 5m
    route:
      group_by: ['job']
      group_wait: 30s
      group_interval: 5m
      repeat_interval: 12h
      receiver: 'null'
      routes:
        - match:
            alertname: Watchdog
          receiver: 'null'
    receivers:
      - name: 'null'
  ## @param alertmanager.templateFiles Extra files to be added inside the `alertmanager-{{ template "kube-prometheus.alertmanager.fullname" . }}` secret.
  ##
  templateFiles: {}
  externalConfig: false
  replicaCount: 1
  persistence:
    enabled: true
    storageClass: "rook-cephfs"
    accessModes:
      - ReadWriteMany
    size: 25Gi
exporters:
  node-exporter:
    enabled: true
  kube-state-metrics:
    enabled: true
node-exporter:
  service:
    labels:
      jobLabel: node-exporter
  serviceMonitor:
    enabled: true
    jobLabel: jobLabel
  extraArgs:
    collector.filesystem.ignored-mount-points: "^/(dev|proc|sys|var/lib/docker/.+)($|/)"
    collector.filesystem.ignored-fs-types: "^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$"
  tolerations:
    - key: "node-role.kubernetes.io/master"
      operator: "Equal"
      value: ""
      effect: "NoSchedule"
    - key: "node-role.kubernetes.io/control-plane"
      operator: "Equal"
      value: ""
      effect: "NoSchedule"
kube-state-metrics:
  serviceMonitor:
    enabled: true
    honorLabels: true
kubelet:
  enabled: true
  namespace: kube-system
  serviceMonitor:
    https: true
blackboxExporter:
  enabled: true
  replicaCount: 1
  configuration: |
    "modules":
      "http_2xx":
        "http":
          "preferred_ip_protocol": "ip4"
        "prober": "http"
      "http_post_2xx":
        "http":
          "method": "POST"
          "preferred_ip_protocol": "ip4"
        "prober": "http"
      "irc_banner":
        "prober": "tcp"
        "tcp":
          "preferred_ip_protocol": "ip4"
          "query_response":
          - "send": "NICK prober"
          - "send": "USER prober prober prober :prober"
          - "expect": "PING :([^ ]+)"
            "send": "PONG ${1}"
          - "expect": "^:[^ ]+ 001"
      "pop3s_banner":
        "prober": "tcp"
        "tcp":
          "preferred_ip_protocol": "ip4"
          "query_response":
          - "expect": "^+OK"
          "tls": true
          "tls_config":
            "insecure_skip_verify": false
      "ssh_banner":
        "prober": "tcp"
        "tcp":
          "preferred_ip_protocol": "ip4"
          "query_response":
          - "expect": "^SSH-2.0-"
      "tcp_connect":
        "prober": "tcp"
        "tcp":
          "preferred_ip_protocol": "ip4"
  serviceAccount:
    create: true
    automountServiceAccountToken: false
  containerSecurityContext:
    enabled: true
    seLinuxOptions: null
    runAsUser: 1001
    runAsNonRoot: true
    privileged: false
    readOnlyRootFilesystem: false
    allowPrivilegeEscalation: false
    capabilities:
      drop: ["ALL"]
    seccompProfile:
      type: "RuntimeDefault"
kubeApiServer:
  enabled: true
kubeControllerManager:
  enabled: true
  namespace: kube-system
  service:
    enabled: true
    ports:
      http: 10252
    targetPorts:
      http: 10252
  serviceMonitor:
    https: false
kubeScheduler:
  enabled: true
  namespace: kube-system
  service:
    enabled: true
    ports:
      http: 10251
    targetPorts:
      http: 10251
  serviceMonitor:
    https: false
coreDns:
  enabled: true
  namespace: kube-system
  service:
    enabled: true
    ports:
      http: 9153
    targetPorts:
      http: 9153
kubeProxy:
  enabled: true
  namespace: kube-system
  service:
    enabled: true
    ports:
      http: 10249
    targetPorts:
      http: 10249
  serviceMonitor:
    https: false
rbac:
  create: true
  pspEnabled: true

So I don't know exactly which information you need...


pomland-94 commented on August 28, 2024

These are the only resources the chart creates:

NAME                                                                READY   STATUS    RESTARTS   AGE
pod/alertmanager-prometheus-kube-prometheus-alertmanager-0          2/2     Running   0          62s
pod/alertmanager-prometheus-kube-prometheus-alertmanager-1          2/2     Running   0          62s
pod/alertmanager-prometheus-kube-prometheus-alertmanager-2          2/2     Running   0          62s
pod/prometheus-kube-prometheus-blackbox-exporter-69568b474f-nxjvg   0/1     Running   0          65s
pod/prometheus-kube-prometheus-operator-7c6d9f458-zk9p6             1/1     Running   0          65s
pod/prometheus-kube-state-metrics-58b5c6b468-b4zzw                  1/1     Running   0          65s
pod/prometheus-node-exporter-2fpjb                                  1/1     Running   0          65s
pod/prometheus-node-exporter-59hzk                                  1/1     Running   0          65s
pod/prometheus-node-exporter-6s4td                                  1/1     Running   0          65s
pod/prometheus-node-exporter-ccsh4                                  1/1     Running   0          65s
pod/prometheus-node-exporter-cwn8d                                  1/1     Running   0          65s
pod/prometheus-node-exporter-jfwkp                                  1/1     Running   0          65s
pod/prometheus-node-exporter-jr2pm                                  1/1     Running   0          65s
pod/prometheus-node-exporter-jsq42                                  1/1     Running   0          65s
pod/prometheus-node-exporter-lpblv                                  1/1     Running   0          65s
pod/prometheus-prometheus-kube-prometheus-prometheus-0              2/2     Running   0          62s
pod/prometheus-prometheus-kube-prometheus-prometheus-1              2/2     Running   0          62s
pod/prometheus-prometheus-kube-prometheus-prometheus-2              2/2     Running   0          62s

NAME                                                            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/alertmanager-operated                                   ClusterIP   None            <none>        9093/TCP,9094/TCP,9094/UDP   62s
service/prometheus-kube-prometheus-alertmanager                 ClusterIP   10.233.49.27    <none>        9093/TCP                     65s
service/prometheus-kube-prometheus-blackbox-exporter            ClusterIP   10.233.19.146   <none>        19115/TCP                    65s
service/prometheus-kube-prometheus-operator                     ClusterIP   10.233.48.254   <none>        8080/TCP                     65s
service/prometheus-kube-prometheus-prometheus                   ClusterIP   10.233.17.196   <none>        9090/TCP                     65s
service/prometheus-kube-prometheus-prometheus-config-reloader   ClusterIP   None            <none>        8080/TCP                     65s
service/prometheus-kube-state-metrics                           ClusterIP   10.233.35.69    <none>        8080/TCP                     65s
service/prometheus-node-exporter                                ClusterIP   10.233.9.241    <none>        9100/TCP                     65s
service/prometheus-operated                                     ClusterIP   None            <none>        9090/TCP                     62s

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/prometheus-node-exporter   9         9         9       9            9           <none>          65s

NAME                                                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/prometheus-kube-prometheus-blackbox-exporter   0/1     1            0           65s
deployment.apps/prometheus-kube-prometheus-operator            1/1     1            1           65s
deployment.apps/prometheus-kube-state-metrics                  1/1     1            1           65s

NAME                                                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/prometheus-kube-prometheus-blackbox-exporter-69568b474f   1         1         0       65s
replicaset.apps/prometheus-kube-prometheus-operator-7c6d9f458             1         1         1       65s
replicaset.apps/prometheus-kube-state-metrics-58b5c6b468                  1         1         1       65s

NAME                                                                        READY   AGE
statefulset.apps/alertmanager-prometheus-kube-prometheus-alertmanager       3/3     62s
statefulset.apps/prometheus-prometheus-kube-prometheus-prometheus           3/3     62s
statefulset.apps/prometheus-prometheus-kube-prometheus-prometheus-shard-1   0/3     62s
statefulset.apps/prometheus-prometheus-kube-prometheus-prometheus-shard-2   0/3     62s
statefulset.apps/prometheus-prometheus-kube-prometheus-prometheus-shard-3   0/3     62s
statefulset.apps/prometheus-prometheus-kube-prometheus-prometheus-shard-4   0/3     62s

The StatefulSets for the Prometheus shards are the problem; they don't come up and give the following error:

Events:
  Type     Reason            Age                   From                    Message
  ----     ------            ----                  ----                    -------
  Normal   SuccessfulCreate  2m12s                 statefulset-controller  create Claim prometheus-prometheus-kube-prometheus-prometheus-db-prometheus-prometheus-kube-prometheus-prometheus-shard-1-0 Pod prometheus-prometheus-kube-prometheus-prometheus-shard-1-0 in StatefulSet prometheus-prometheus-kube-prometheus-prometheus-shard-1 success
  Warning  FailedCreate      48s (x15 over 2m12s)  statefulset-controller  create Pod prometheus-prometheus-kube-prometheus-prometheus-shard-1-0 in StatefulSet prometheus-prometheus-kube-prometheus-prometheus-shard-1 failed error: Pod "prometheus-prometheus-kube-prometheus-prometheus-shard-1-0" is invalid: metadata.labels: Invalid value: "prometheus-prometheus-kube-prometheus-prometheus-shard-1-cf89697fb": must be no more than 63 characters


giosdas commented on August 28, 2024

Hello,

I can confirm that I encountered the same issue, which could be resolved with a fullnameOverride shorter than the default:

fullnameOverride: "prometheus"
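For comparison, a rough sketch of the three naming variants discussed in this thread (the hash suffix is taken from the error events above and will differ per deployment; the `fullnameOverride` result assumes the operator keeps the `prometheus-` prefix and `-shard-N-<hash>` suffix pattern seen in those events):

```shell
# Compare generated name lengths against the 63-character label limit.
hash="cf89697fb"
default_name="prometheus-prometheus-kube-prometheus-prometheus-shard-1-${hash}"  # release "prometheus"
short_release="prometheus-kube-prometheus-prometheus-shard-1-${hash}"            # release "kube-prometheus"
override="prometheus-prometheus-prometheus-shard-1-${hash}"                      # fullnameOverride "prometheus"
for n in "$default_name" "$short_release" "$override"; do
  printf '%3d  %s\n' "${#n}" "$n"
done
```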

Thanks
Regards


fevisera commented on August 28, 2024

Hi,

Thank you for providing another alternative, @giosdas!

@pomland-94, please confirm whether either of these alternatives resolves your issue so we can proceed to close it.


pomland-94 commented on August 28, 2024

Thanks, this worked for me.


fevisera commented on August 28, 2024

Hi @pomland-94,

Thank you for confirming that the issue is resolved. I will proceed to close it.

