
uptime-kuma-helm's Introduction

Hi there

I'm a Cloud(:cloud:) Engineer providing tools and infrastructure to make other engineers' lives happier! My main focus is on Infrastructure as Code, security, backups, and documentation. Furthermore, I try to learn :seedling: something new every day...

Technologies & Tools

Social Profiles

👋 You can follow me or contact me here

uptime-kuma-helm's People

Contributors

beatkind, commanderstorm, dfoxg, dirsigler, disconn3ct, elmariofredo, marcules, maximepiton, ml-chris, nobbs, realharter, renovate-bot, renovate[bot], scrumplex, victor-keltio


uptime-kuma-helm's Issues

Dependency Dashboard

This issue provides visibility into Renovate updates and their statuses. Learn more

This repository currently has no open or pending branches.


  • Check this box to trigger a request for Renovate to run again on this repository

podLabels not included when useDeploy is true

When deploying the Helm chart with a custom values.yaml that sets useDeploy: true, I noticed the properties from podLabels were missing.
It seems there is no conditional check for podLabels in deployment.yaml, only in statefulset.yaml.
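A minimal sketch of the likely fix, mirroring what statefulset.yaml presumably does already (the excerpt and the helper name are assumptions, not the chart's actual code):

```yaml
# charts/uptime-kuma/templates/deployment.yaml (hypothetical excerpt)
  template:
    metadata:
      labels:
        {{- include "uptime-kuma.selectorLabels" . | nindent 8 }}
        {{- with .Values.podLabels }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
```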

ArgoCD support?

Is it possible, or do you have a recommended way, to deploy the Helm chart with ArgoCD?
I got it working from the command line, but I would much rather use Argo.

This only seems to work with helm upgrade for me:

# Chart.yaml
apiVersion: v2
appVersion: "1.23.11"
deprecated: false
description: A self-hosted Monitoring tool like "Uptime-Robot".
home: https://github.com/dirsigler/uptime-kuma-helm
icon: https://raw.githubusercontent.com/louislam/uptime-kuma/master/public/icon.png
maintainers:
  - name: dirsigler
    email: [email protected]
name: uptime-kuma
sources:
  - https://github.com/louislam/uptime-kuma
type: application
version: 2.18.0
# Default values for uptime-kuma.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

image:
  repository: louislam/uptime-kuma
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "1.23.11-debian"

nameOverride: ""
fullnameOverride: ""

# If this option is set to false, a StatefulSet is used instead of a Deployment
useDeploy: true

serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: { }
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: { }
podLabels:
  { }
# app: uptime-kuma
podEnv:
  # a default port must be set. required by container
  - name: "UPTIME_KUMA_PORT"
    value: "3001"

podSecurityContext:
  { }
# fsGroup: 2000

securityContext:
  { }
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000

service:
  #  type: ClusterIP
  type: LoadBalancer
  port: 3001
  nodePort:
  annotations: { }

ingress:
  enabled: true
  className: "traefik"
  extraLabels:
    { }
  # vhost: uptime-kuma.company.corp
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: "websecure"
  hosts:
    - host: uptime.mydomain.net
      paths:
        - path: /
          pathType: ImplementationSpecific

  tls:
    #[]
    - hosts:
        - uptime.mydomain.net
      secretName: uptime-mydomain-net-tls
    # - secretName: chart-example-tls
    #   hosts:
    #     - chart-example.local

  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.uptime-kuma.rule=Host(`uptime.mydomain.net`)"
    - "traefik.http.routers.uptime-kuma.entrypoints=https"
    - "traefik.http.routers.uptime-kuma.tls=true"
    - "traefik.http.routers.uptime-kuma.tls.certresolver=letsencrypt-prod"
    - "traefik.http.services.uptime-kuma.loadBalancer.server.port=3001"

resources:
  { }
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
#   cpu: 100m
#   memory: 128Mi

nodeSelector: { }

tolerations: [ ]

affinity: { }

livenessProbe:
  enabled: true
  timeoutSeconds: 2
  initialDelaySeconds: 15

readinessProbe:
  enabled: true
  initialDelaySeconds: 5

volume:
  enabled: true
  accessMode: ReadWriteOnce
  size: 4Gi
  # If you want to use a storage class other than the default, uncomment this
  # line and define the storage class name
  storageClassName: longhorn-sd
  # Reuse your own pre-existing PVC.
  # existingClaim: ""

# -- A list of additional volumes to be added to the pod
additionalVolumes:
  [ ]
  # - name: "additional-certificates"
  #   configMap:
  #     name: "additional-certificates"
#     optional: true
#     defaultMode: 420

# -- A list of additional volumeMounts to be added to the pod
additionalVolumeMounts:
  [ ]
  # - name: "additional-certificates"
  #   mountPath: "/etc/ssl/certs/additional/additional-ca.pem"
#   readOnly: true
#   subPath: "additional-ca.pem"

strategy:
  type: Recreate

# Prometheus ServiceMonitor configuration
serviceMonitor:
  enabled: false
  # -- Scrape interval. If not set, the Prometheus default scrape interval is used.
  interval: 60s
  # -- Timeout if metrics can't be retrieved in given time interval
  scrapeTimeout: 10s
  # -- Scheme to use when scraping, e.g. http (default) or https.
  scheme: ~
  # -- TLS configuration to use when scraping, only applicable for scheme https.
  tlsConfig: { }
  # -- Prometheus [RelabelConfigs] to apply to samples before scraping
  relabelings: [ ]
  # -- Prometheus [MetricRelabelConfigs] to apply to samples before ingestion
  metricRelabelings: [ ]
  # -- Prometheus ServiceMonitor selector, only select Prometheus's with these
  # labels (if not set, select any Prometheus)
  selector: { }

  # -- Namespace where the ServiceMonitor resource should be created, default is
  # the same as the release namespace
  namespace: ~
  # -- Additional labels to add to the ServiceMonitor
  additionalLabels: { }
  # -- Additional annotations to add to the ServiceMonitor
  annotations: { }

  # -- BasicAuth credentials for scraping metrics, use API token and any string for username
  # basicAuth:
  #   username: "metrics"
  #   password: ""

# -- Use this option to set a custom DNS policy to the created deployment
dnsPolicy: ""

# -- Use this option to set custom DNS configurations to the created deployment
dnsConfig: { }
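For reference, the same chart and values could also be deployed declaratively with an ArgoCD Application. This is only a sketch: the repo URL and chart version are taken from the files above, while the namespaces and sync policy are assumptions:

```yaml
# Hypothetical ArgoCD Application for this chart
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: uptime-kuma
  namespace: argocd              # assumes ArgoCD runs in the argocd namespace
spec:
  project: default
  source:
    repoURL: https://helm.irsigler.cloud   # the chart repository
    chart: uptime-kuma
    targetRevision: 2.18.0
    helm:
      values: |
        useDeploy: true
        ingress:
          enabled: true
  destination:
    server: https://kubernetes.default.svc
    namespace: uptime-kuma
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```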

Add oauth2-proxy and disable User login

Can we add oauth2-proxy support, so the Helm chart has a config option to disable the built-in user login and authenticate directly through oauth2-proxy?

Alternatively, can we disable user login, or create a dedicated user for that?
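Until something like this is supported natively, one workaround is to put an external oauth2-proxy in front of the chart's Ingress. A sketch using ingress-nginx's auth annotations (the oauth2-proxy hostname is a placeholder, and this protects the Ingress only; it does not disable Uptime Kuma's own login):

```yaml
# values.yaml excerpt: route requests through an external oauth2-proxy
ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://oauth2.example.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://oauth2.example.com/oauth2/start?rd=$scheme://$host$request_uri"
```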

Action Required: Fix Renovate Configuration

There is an error with this repository's Renovate configuration that needs to be fixed. As a precaution, Renovate will stop PRs until it is resolved.

Location: renovate.json
Error type: The renovate configuration file contains some invalid settings
Message: Invalid configuration option: packageRules[0].datasourceRules, Invalid configuration option: packageRules[0].upgradeVersions, packageRules[0]: packageRules cannot combine both matchUpdateTypes and versioning. Rule: {"matchDatasources":["docker"],"matchUpdateTypes":["minor"],"upgradeVersions":true,"groupName":"docker","versioning":"docker","automerge":false,"datasourceRules":[{"matchDatasources":["helm"],"automerge":false,"updateLabels":["AppVersion"],"bumpChart":true,"groupName":"helm"},{"matchDatasources":["github-actions"],"automerge":false,"groupName":"github-actions"},{"matchDatasources":["docker"],"matchUpdateTypes":["minor","patch"],"automerge":false}]}

HELM CHART FEATURE ENHANCEMENT

Can we add a feature to the Helm chart to use an existing Secret? As things stand, I would have to commit the API key to the chart's values file and push it to the VCS. I prefer using sealed secrets to encrypt secrets before pushing them to the VCS.
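One common pattern, sketched here with hypothetical option names (existingSecret and secretKey are not current chart options):

```yaml
# values.yaml (hypothetical options, not yet in the chart)
apiKey:
  existingSecret: "uptime-kuma-api"  # pre-created Secret, e.g. via sealed-secrets
  secretKey: "api-key"               # key inside that Secret
```

In the templates, the workload would then reference the key via a secretKeyRef instead of an inline value, so the plaintext never lands in the values file.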

Example for using Existing PV

The existing PV setup requires passing in an object. Can you share an example of how to attach our own PV/PVC via the values.yaml file?
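Based on the commented-out existingClaim key visible in the chart's default values above, something like the following should work (the claim name is a placeholder for your own pre-created PVC):

```yaml
# values.yaml excerpt: reuse a pre-existing PVC instead of letting the chart create one
volume:
  enabled: true
  existingClaim: "my-uptime-kuma-pvc"  # placeholder: name of your PVC
```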

Add a version matrix to support and test multiple Kubernetes versions.

The CI currently spins up a KinD cluster to test the Helm chart.
This cluster runs the latest Kubernetes/KinD version, which usually already exceeds the versions available on common cloud providers.

Therefore, it makes sense to add a matrix to the CI that spins up different Kubernetes/KinD versions.
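A sketch of such a matrix using the helm/kind-action; the workflow path and the listed node image tags are examples, not the repository's actual CI:

```yaml
# .github/workflows/test.yaml (hypothetical excerpt)
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # example Kubernetes versions to cover common cloud-provider offerings
        kindest_node: ["v1.27.3", "v1.28.0", "v1.29.0"]
    steps:
      - uses: actions/checkout@v4
      - uses: helm/kind-action@v1
        with:
          node_image: kindest/node:${{ matrix.kindest_node }}
```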

Statefulset vs. Deployment + PVC?

I'm just getting started with Uptime Kuma and was looking for a Helm chart for it (thanks btw!) when I came across your repo. However, I was wondering why you chose a Deployment + PVC instead of a StatefulSet to deploy the pod? I've had issues in the past where a new pod was unable to start because the terminating pod was still running and hanging onto the PVC.

My understanding is that this is exactly the situation where a StatefulSet is useful. Maybe I don't understand the uptime-kuma infrastructure requirements well enough, though.

add PR template

As this repository has already received some pull requests from other contributors (thanks a lot ❤️ ), I think it makes sense to add a pull request template like the one used by Bitnami.

URL NOT WORKING

Your URL https://helm.irsigler.cloud is not available.

The README page on Artifact Hub lists a different URL, https://dirsigler.github.io/uptime-kuma-helm, which also does not work.

Support for LoadBalancerIP

Could you please add support for LoadBalancerIP for the service? Something like:

{{- if .Values.service.loadBalancerIP }}
  loadBalancerIP: {{ .Values.service.loadBalancerIP | quote }}
{{- end }}
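On the values side, this could then be used as follows (loadBalancerIP is the proposed option, not yet part of the chart; the address is an example):

```yaml
service:
  type: LoadBalancer
  port: 3001
  loadBalancerIP: "203.0.113.10"  # example static address from your provider
```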

Readiness and Liveness probe failed

I use chart version 2.0.1 and deploy with the default values file; I only changed the storageClassName value since I use gp2.
Below are the event logs:

Events:
  Type     Reason                  Age                 From                     Message
  ----     ------                  ----                ----                     -------
  Normal   Scheduled               2m28s               default-scheduler        Successfully assigned uptime-kuma/uptime-kuma-5d89ffd98d-m979t to ip-172-5-5-34.us-west-1.compute.internal
  Normal   SuccessfulAttachVolume  2m26s               attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-9f156812-c8ba-4fd2-b7c2-17e153af0f35"
  Normal   Pulling                 2m18s               kubelet                  Pulling image "louislam/uptime-kuma:1.11.1-alpine"
  Normal   Pulled                  2m5s                kubelet                  Successfully pulled image "louislam/uptime-kuma:1.11.1-alpine" in 12.955042419s
  Warning  Unhealthy               107s                kubelet                  Liveness probe failed:
  Normal   Created                 94s (x2 over 2m4s)  kubelet                  Created container uptime-kuma
  Normal   Started                 94s (x2 over 2m4s)  kubelet                  Started container uptime-kuma
  Warning  Unhealthy               68s (x9 over 2m4s)  kubelet                  Readiness probe failed: Get "http://172.5.5.202:3001/": dial tcp 172.5.5.202:3001: connect: connection refused
  Warning  Unhealthy               68s (x5 over 118s)  kubelet                  Liveness probe failed: Health Check ERROR
  Normal   Killing                 68s (x2 over 98s)   kubelet                  Container uptime-kuma failed liveness probe, will be restarted
  Normal   Pulled                  64s (x2 over 94s)   kubelet                  Container image "louislam/uptime-kuma:1.11.1-alpine" already present on machine
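If the container simply needs longer to start on this storage backend, raising the probe delays in values.yaml may help; the numbers below are examples, not recommendations:

```yaml
# values.yaml excerpt: give the app more time before the first probe fires
livenessProbe:
  enabled: true
  initialDelaySeconds: 60  # example; chart default is 15
  timeoutSeconds: 5
readinessProbe:
  enabled: true
  initialDelaySeconds: 30  # example; chart default is 5
```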

Create ServiceMonitor and Secret for Prometheus metrics endpoint

The upstream project provides some basic Prometheus metrics which can be scraped by Prometheus.

I am currently deepening my Prometheus knowledge and found a suitable solution that allows scraping the Uptime-Kuma metrics.
The additions would be manifests of type:

  • Secret to store the basic auth credentials
  • ServiceMonitor to scrape the Service for metrics

Reference: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/basic-auth.md#basic-auth-for-targets

The Prometheus instance and the Prometheus CRD configuration need to be provided by the users of this Helm chart.
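A sketch of the two proposed manifests, following the prometheus-operator basic-auth guide referenced above; the names, labels, and port are placeholders:

```yaml
# Secret holding the basic-auth credentials (password = an Uptime-Kuma API token)
apiVersion: v1
kind: Secret
metadata:
  name: uptime-kuma-metrics-auth
type: Opaque
stringData:
  username: "metrics"
  password: "<api-token>"   # left elided on purpose
---
# ServiceMonitor scraping the chart's Service with those credentials
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: uptime-kuma
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: uptime-kuma
  endpoints:
    - port: http            # placeholder: must match the Service port name
      path: /metrics
      basicAuth:
        username:
          name: uptime-kuma-metrics-auth
          key: username
        password:
          name: uptime-kuma-metrics-auth
          key: password
```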

Alpine image is deprecated

see louislam/uptime-kuma#2463

As of three weeks ago, louislam has deprecated the Alpine image due to DNS issues and recommends the Debian-based one, the latest of which is "1.19.4-debian".

I tried to submit a PR for the change but couldn't push, so I figured this would help!

Thanks

Bug: Value storageClassName does not apply when useDeploy is true

When useDeploy is true, the storageClassName value is not applied, due to the following line in charts/uptime-kuma/templates/pvc.yaml:

storageClassName: {{ .Values.volume.storageClass | default "standard"}}

The value referenced here must be .Values.volume.storageClassName
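Based on the issue text, the corrected line would read:

```yaml
storageClassName: {{ .Values.volume.storageClassName | default "standard" }}
```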

Admission Webhook denies request

If you use the newest ingress-nginx (the community NGINX ingress controller), version 1.0.5, you may see the following error:
Error: UPGRADE FAILED: failed to create resource: admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: nginx.ingress.kubernetes.io/server-snippets annotation contains invalid word location

This is due to a change in the ingress-nginx configuration which was introduced in kubernetes/ingress-nginx#7874.
This change solves a security vulnerability which was found in kubernetes/ingress-nginx#7837.

Helm chart certificate not trusted

Hello, I wanted to test your Helm chart, but during deployment I got an error:

$ helm repo add uptime-kuma https://helm.irsigler.cloud
Error: looks like "https://helm.irsigler.cloud" is not a valid chart repository or cannot be reached: Get "https://helm.irsigler.cloud/index.yaml": x509: "helm.irsigler.cloud" certificate is not trusted

Persistence should be optional

In some cases, persistence is not needed across Pod restarts. Persistence should be configurable with an enable/disable flag. Most charts use a persistence key to handle all the volume configuration. Here is an example of this in one of my charts.
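A sketch of that convention, using the persistence key common in community charts (these are not this chart's current option names, which live under volume):

```yaml
# Hypothetical values.yaml layout following the common "persistence" convention
persistence:
  enabled: true            # set to false to run without a PVC
  accessMode: ReadWriteOnce
  size: 4Gi
  storageClass: ""         # empty = cluster default StorageClass
  existingClaim: ""        # set to reuse a pre-created PVC
```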

Helm chart certificate not trusted

hi it seems like this (or similar) issue is happening again:
#113

I'm getting:
Failed to load target state: failed to generate manifest for source 1 of 1: rpc error: code = Unknown desc = helm repo add https:--dirsigler.github.io-uptime-kuma-helm https://dirsigler.github.io/uptime-kuma-helm failed exit status 1: Error: looks like "https://dirsigler.github.io/uptime-kuma-helm" is not a valid chart repository or cannot be reached: Get "https://helm.irsigler.cloud/index.yaml": tls: failed to verify certificate: x509: certificate signed by unknown authority

The issue seems to be with your private hosting on helm.irsigler.cloud; we have no issues with repos hosted on github.io.

Adding URLs via the Helm chart

Hello,

How can I add the monitoring URLs through the Helm chart, so the URLs are easy to manage in the chart and can be recreated on the fly each time? I have checked the

Kubernetes Ingress API changes

The Ingress manifest may need additional configuration to stay compatible with both newer and older Kubernetes Ingress API definitions.
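One common sketch for supporting several API generations in the Ingress template, using Helm's built-in Capabilities object:

```yaml
{{- /* Pick the newest Ingress API the cluster actually serves */ -}}
{{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
apiVersion: networking.k8s.io/v1
{{- else if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1/Ingress" }}
apiVersion: networking.k8s.io/v1beta1
{{- else }}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
```

Note that the spec also differs between these versions (e.g. pathType and the backend service fields in networking.k8s.io/v1), so the template body needs matching conditionals as well.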

Utilise proper FQDN and GitHub Pages.

Initially, the project was set up to use the default GitHub Pages URL created by the Action that hosts the Helm charts.

It would make sense to use the custom domain feature to provide proper FQDN support.
