chartrepo's Issues

failed to parse PEM block containing the key

I use Flux v2 to deploy Authelia.

When I refresh the HelmRelease I get this error.

The configuration is below. I did not configure the OIDC private key. Do I have to configure it?

---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: authelia
  namespace: networking
spec:
  interval: 5m
  chart:
    spec:
      chart: authelia
      version: 0.8.*
      sourceRef:
        kind: HelmRepository
        name: authelia
        namespace: flux-system
  values:
    domain: ${SECRET_CLUSTER_DOMAIN}
    default_redirection_url: https://dns.${SECRET_CLUSTER_DOMAIN}
    service:
      annotations:
        prometheus.io/probe: "true"
        prometheus.io/protocol: "http"

    ingress:
      enabled: true
      className: ${INGRESS_CLASS}
      subdomain: login

      tls:
        enabled: true
        secret: ${SECRET_CLUSTER_DEFAULT_CERT}

    pod:
      # Must be Deployment, DaemonSet, or StatefulSet.
      kind: Deployment

      env:
        - name: TZ
          value: ${TZ}

      securityContext:
        container:
          runAsUser: ${SECRET_PLEX_UID}
          runAsGroup: ${SECRET_PLEX_GID}
          fsGroup: ${SECRET_PLEX_GID}

      extraVolumeMounts:
        - name: authelia-user
          mountPath: /conf
      extraVolumes:
        - name: authelia-user
          configMap:
            name: authelia-user
            items:
            - key: users_database.yml
              path: users_database.yml
      resources:
        requests:
          cpu: 200m
          memory: 128Mi
        limits:
          memory: 1Gi

    persistence:
      enabled: true
      storageClass: authelia
      size: 100Mi

    ##
    ## Authelia Config Map Generator
    ##
    configMap:
      enabled: true
      log:
        level: trace
      telemetry:
        metrics:
          enabled: false
          serviceMonitor:
            enabled: false
      server:
        read_buffer_size: 8192
        write_buffer_size: 8192
      theme: light
      authentication_backend:
        disable_reset_password: true
        ldap:
          enabled: false
        file:
          enabled: true
          path: /conf/users_database.yml
          password:
            algorithm: argon2id
      identity_providers:
        oidc:
          enabled: true
          clients:
          - id: grafana
            secret: ${SECRET_GRAFANA_CLIENT}
            public: false
            authorization_policy: two_factor
            #pre_configured_consent_duration: 10y
            scopes:
            - openid
            - profile
            - groups
            - email
            redirect_uris:
            - https://grafana.${SECRET_CLUSTER_DOMAIN}/login/generic_oauth
            userinfo_signing_algorithm: none
  
      access_control:
        default_policy: deny

        networks:
          - name: private
            networks:
              - ${SECRET_VPN_NET}
              - ${SECRET_PRIVATE_NET}

        rules:
          # bypass Authelia WAN + LAN
          - domain:
              - login.${SECRET_CLUSTER_DOMAIN}
            policy: bypass

          - domain: jackett.${SECRET_CLUSTER_DOMAIN}
            resources:
            - "^/jackett/.*$"
            policy: bypass

          # Deny admin services to users
          - domain:
              - filebrowser.${SECRET_CLUSTER_DOMAIN}
              - alert.${SECRET_CLUSTER_DOMAIN}
              - prometheus.${SECRET_CLUSTER_DOMAIN}
              - hubble.${SECRET_CLUSTER_DOMAIN}
            subject: ["group:users"]
            policy: deny

          # One factor auth for LAN
          - domain:
              - "*.${SECRET_CLUSTER_DOMAIN}"
            policy: one_factor
            subject: ["group:admins", "group:users"]
            networks:
              - private

          # Two factors auth for WAN
          - domain:
              - "*.${SECRET_CLUSTER_DOMAIN}"
            subject: ["group:admins", "group:users"]
            policy: two_factor

      session:
        redis:
          enabled: false

      storage:
        local:
          enabled: true
          path: /config/db.sqlite3
        postgres:
          enabled: false

      notifier:
        smtp:
          enabled: false
        filesystem:
          enabled: true
          filename: /config/notification.txt

    secret:
      jwt:
        key: JWT_TOKEN
        value: "${SECRET_AUTHELIA_JWT_SECRET}"
        filename: JWT_TOKEN
      storageEncryptionKey:
        key: STORAGE_ENCRYPTION_KEY
        value: "${SECRET_AUTHELIA_STORAGE_ENCRYPTION_KEY}"

Set replicas if local storage file enabled

Hi!
I am using authelia in "Deployment" mode. The users file is mounted to each pod as a ConfigMap. But since .Values.configMap.authentication_backend.file.enabled is set to "true", the number of replicas is automatically set to 1 (code).
Is it possible to modify the check for the case when we are sure that the file will be identical on each pod?

Now after every update I have to run kubectl scale --replicas=xxx
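
A minimal sketch of one possible approach, assuming hypothetical pod.forceReplicas and pod.replicas values (the names are illustrative, not existing chart options):

{{- define "authelia.replicas" -}}
{{- /* Keep forcing a single replica for the file backend unless the user explicitly opts out. */ -}}
{{- if and .Values.configMap.authentication_backend.file.enabled (not .Values.pod.forceReplicas) -}}
1
{{- else -}}
{{- .Values.pod.replicas | default 1 -}}
{{- end -}}
{{- end -}}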

Allow toggling certificates_directory: /certificates.

The certificates_directory is only set in the configMap when a value is inside certificates.values or certificates.existingSecret. This is fine, but having an enabled: true option in addition would be better. We, for instance, use sealed-secrets to provide our credentials, which means we'd rather mount the secret inside the folder than have it publicly readable inside a git project.

{{- if (include "authelia.enabled.certificatesSecret" .) }}
    certificates_directory: /certificates
{{- end }}
{{/*
Returns if we should generate the secret for certificates
*/}}
{{- define "authelia.enabled.certificatesSecret" -}}
    {{- if .Values.certificates -}}
        {{- if .Values.certificates.values -}}
            {{- true -}}
        {{- else if .Values.certificates.existingSecret -}}
            {{- true -}}
        {{- end -}}
    {{- end -}}
{{- end -}}

so, the change could look like this:

{{/*
Returns if we should generate the secret for certificates
*/}}
{{- define "authelia.enabled.certificatesSecret" -}}
    {{- if .Values.certificates -}}
        {{- if .Values.certificates.values -}}
            {{- true -}}
        {{- else if .Values.certificates.existingSecret -}}
            {{- true -}}
        {{- else if .Values.certificates.useCertificatesDirectory -}}
             {{- true -}}
        {{- end -}}
    {{- end -}}
{{- end -}}
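
With that change, opting in from values.yaml and mounting a sealed secret into the directory could look roughly like this (useCertificatesDirectory is the proposed value, not an existing one; the other names are examples):

certificates:
  useCertificatesDirectory: true

pod:
  extraVolumeMounts:
    - name: trusted-certificates
      mountPath: /certificates
  extraVolumes:
    - name: trusted-certificates
      secret:
        secretName: authelia-trusted-certificates  # created by sealed-secrets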

OIDC public vs secret

Some apps require public: true to function correctly. In this case the secret must be unset, yet the chart generates a secret if one is not provided. I tried various workarounds like secret: false and secret: "", but none of them worked. I had to manually remove the generated secret from the Helm-generated manifest to get it working properly.

PS: In general it was pretty painful to get Authelia working with Helm...
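
For illustration, the clients loop in the ConfigMap template could simply skip the secret when a client is marked public; a rough sketch of the idea, not the chart's actual template:

clients:
{{- range .Values.configMap.identity_providers.oidc.clients }}
  - id: {{ .id }}
    public: {{ .public | default false }}
    {{- if not .public }}
    secret: {{ .secret }}
    {{- end }}
    authorization_policy: {{ .authorization_policy | default "two_factor" }}
{{- end }}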

configMap.identity_providers.oidc.clients[n].secret has to be exposed via plain text which is a security concern

Currently, if I want to add OIDC clients while using the chart, I can simply add something like:

configMap:
  identity_providers:
      oidc:
        clients:
        - id: gitlab
          description: gitlab
          secret: complicatedSecretgoeshere

and this simplifies my usage of Authelia significantly and is nice and readable (thank you!). What concerns me is that those secrets are out in the open in my values file, and there is currently no way for me to use the Helm chart and point to Kubernetes secrets instead.

Cannot add labels to the pods

Hello,

When I'm trying to add labels to the pods like this:

pod:
  labels:
    myLabel: myValue

I'm getting the error Error: YAML parse error on authelia/templates/deployment.yaml: error converting YAML to JSON: yaml: line 11: mapping values are not allowed in this context.

Line 11 points to the spec keyword, so it's probably related to the indentation of labels in the deployment template file, but I'm not sure.
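
For context, deployment templates usually render pod labels with toYaml plus nindent, and an off-by-a-few indent is exactly what produces this kind of parse error. A generic sketch of the pattern (not the chart's actual file):

  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "authelia.name" . }}
        {{- with .Values.pod.labels }}
        {{- toYaml . | nindent 8 }}
        {{- end }}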

Possibility to add --file to helm chart arguments

Currently, the helm chart only accepts ldap as the authentication backend.
It would be great if you could add file as a possible auth backend, so the file you want can be specified via the chart values.
Even though ldap is preferred for a production environment, file would help for testing and simple setups.

Allow not using TLSOption for Traefik IngressRoute

Traefik has a special "default" TLSOption that gets used when a specific TLSOption is not referenced from an IngressRoute. https://doc.traefik.io/traefik/v2.6/https/tls/#tls-options

I have a default TLSOption defined in a different namespace and would like to use it for all running http services, including Authelia, however, currently the chart does not support omitting the TLSOption.

I can work around this by specifying the "default" TLSOption without a namespace (or in a different namespace if the Traefik allowCrossNamespace option is set), but I get an error on the Traefik dashboard that the TLSOption is not found (which I think makes it fall back to the default one). It would be great if we could just not specify any TLSOption, neither an existing one nor a new one created by the chart.
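
A sketch of how the IngressRoute template could make the whole options block optional, assuming a hypothetical ingress.traefikCRD.tls.disableTLSOptions flag (not an existing chart value):

  tls:
    {{- if not .Values.ingress.traefikCRD.tls.disableTLSOptions }}
    options:
      name: {{ printf "%s-tls-options" (include "authelia.name" .) }}
      namespace: {{ .Release.Namespace }}
    {{- end }}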

Confusion about local config `users_database.yml`

I'm attempting to run the latest chart with unmodified values.local.yml and seeing the authelia-0 pod enter CrashLoopBackOff with the following logs:

level=warning msg="No access control rules have been defined so the default policy two_factor will be applied to all requests"
level=info msg="Logging severity set to info"
level=info msg="Storage schema upgrade to v1 completed"
level=error msg="Unable to find database file: /config/users_database.yml" stack="github.com/authelia/authelia/cmd/authelia/main.go:92  startServer\ngithub.com/authelia/authelia/cmd/authelia/main.go:145 main.func1\ngithub.com/spf13/[email protected]/command.go:856          (*Command).execute\ngithub.com/spf13/[email protected]/command.go:960          (*Command).ExecuteC\ngithub.com/spf13/[email protected]/command.go:897          main\ngithub.com/authelia/authelia/cmd/authelia/main.go:163 main\nruntime/proc.go:225                                   main\nruntime/asm_amd64.s:1371                              goexit"
level=error msg="Generating database file: /config/users_database.yml" stack="github.com/authelia/authelia/cmd/authelia/main.go:92  startServer\ngithub.com/authelia/authelia/cmd/authelia/main.go:145 main.func1\ngithub.com/spf13/[email protected]/command.go:856          (*Command).execute\ngithub.com/spf13/[email protected]/command.go:960          (*Command).ExecuteC\ngithub.com/spf13/[email protected]/command.go:897          main\ngithub.com/authelia/authelia/cmd/authelia/main.go:163 main\nruntime/proc.go:225                                   main\nruntime/asm_amd64.s:1371                              goexit"
level=error msg="Generated database at: /config/users_database.yml" stack="github.com/authelia/authelia/cmd/authelia/main.go:92  startServer\ngithub.com/authelia/authelia/cmd/authelia/main.go:145 main.func1\ngithub.com/spf13/[email protected]/command.go:856          (*Command).execute\ngithub.com/spf13/[email protected]/command.go:960          (*Command).ExecuteC\ngithub.com/spf13/[email protected]/command.go:897          main\ngithub.com/authelia/authelia/cmd/authelia/main.go:163 main\nruntime/proc.go:225                                   main\nruntime/asm_amd64.s:1371                              goexit"

It's not clear to me what's wrong here, as the logs suggest the file is being auto-created, but they are also 'error' level logs.
Should I be creating an additional volume and mounting it via pod.extraVolumeMounts, or something else?

LDAP: Updated service accounts in additional users, login authentication fails for user and service account

Hi Team

I have a problem with the LDAP configuration for additional users.
We have created a service account, but when I add the service accounts OU to the additional users DN, it does not work for me.
Could you please guide me. Thanks


## The base dn for every LDAP query.
      base_dn: OU=Mail,OU=DE,DC=prd,DC=**,DC=com  
      additional_users_dn: OU=Users_test,OU=ServiceAccounts (This is the new user)
      username_attribute: sAMAccountName
      users_filter: (&({username_attribute}={input})(objectCategory=person)(objectClass=user))
      display_name_attribute: sAMAccountName
      additional_groups_dn: OU=managed,OU=Groups
      group_name_attribute: cn
      mail_attribute: mail
      user: CN=admin,OU=ServiceAccounts,OU=Mail,OU=DE,DC=prd,DC=***,DC=com
    ##

AD group : prometheus-reader
test user > prd.domain/DE/mail/Users_test
service account > prd.domain/DE/mail/ServiceAccounts

Error: The test user & service account cannot log in with this LDAP config.
Without OU=ServiceAccounts, the test user can log in; it works.

@james-d-elliott Could you please help with my query.

incorrect api kinds in older k8s versions and flux/argocd

So, due to how we implemented capability checking and the way Argo CD and Flux collect capabilities, the chart will always render the latest API versions for each kind regardless of what capabilities actually exist.

This can be fixed by doing a Kubernetes semver comparison in Helm instead. It's relatively easy to do; it just takes time to research when each apiVersion was bumped, so we'll replace all of them with a semver check.
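
As an illustration of the semver approach (not the final implementation), a helper can compare against .Capabilities.KubeVersion instead of relying on API discovery; for example, the networking.k8s.io/v1 Ingress API only exists from Kubernetes 1.19:

{{- define "capabilities.apiVersion.ingress" -}}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.Version -}}
networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.Version -}}
networking.k8s.io/v1beta1
{{- else -}}
extensions/v1beta1
{{- end -}}
{{- end -}}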

How to add configuration of authelia in nginx controller in kubernetes?

This is a question about how to configure Authelia with the NGINX ingress controller.
It's been four days that I've been trying to find an example or documentation on how to add the Authelia configuration to the NGINX controller.
I'm not sure if there is documentation on it.
If anyone has links or information on it, please help me to progress.

   Environment:
   ==================
   Ingress: NGINX
   k8s: AKS
   storage: postgresql
   session: redis

Thanks.
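
For reference, this is normally configured on each protected application's Ingress with ingress-nginx annotations rather than through the Authelia chart itself; on the Authelia versions discussed here the verification endpoint is /api/verify. A rough example, assuming the Authelia service is named authelia in the default namespace on port 9091 and the portal is at login.example.com (adjust all names to your setup):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: protected-app
  annotations:
    nginx.ingress.kubernetes.io/auth-url: http://authelia.default.svc.cluster.local:9091/api/verify
    nginx.ingress.kubernetes.io/auth-signin: https://login.example.com/?rd=$escaped_request_uri
    nginx.ingress.kubernetes.io/auth-response-headers: Remote-User,Remote-Groups,Remote-Name,Remote-Email
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: protected-app
                port:
                  number: 8080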

Helm Bundle with Redis and PostgreSQL using helm optional dependencies

Hey I just wanted to check if any work was being done towards creating a bundle chart with redis and postgresql included.

If not, then I would like to contribute. My thought is not to create a new Helm chart but to enable options in the existing chart to create the services (so there is no repetition).

The best way to achieve this is likely using Helm optional dependencies.

I am thinking of having them off by default to maintain backward compatibility where possible. However, if there is a conflict with the already existing values.yaml, something may need to change to prevent overriding a child chart's options, since the redis and postgresql dependencies will respond to the redis and postgresql objects in values.yaml by default (I will confirm later whether there is any conflict in this way).

Let me know if this is agreeable, as I would rather not have to overlay these components separately if I don't have to; it's nicer to do it from mainline.
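
For reference, Helm's optional dependency mechanism is declared in Chart.yaml with a condition key, so the sub-charts stay disabled unless switched on in values (the versions below are placeholders, not a recommendation):

dependencies:
  - name: redis
    version: "~16.0.0"          # placeholder constraint
    repository: https://charts.bitnami.com/bitnami
    condition: redis.enabled
  - name: postgresql
    version: "~11.0.0"          # placeholder constraint
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled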

identity_providers.oidc.secret from secret?

I can set both OIDC_HMAC_SECRET and OIDC_PRIVATE_KEY in a secret, but I don't see where I could set identity_providers.oidc.secret. I would like to save the shared secret for the OIDC client in a secret so I can encrypt secrets (using sealed secrets).

From what I can see it's just not being set:

{{- .Values.secret.oidcPrivateKey.key | nindent 2}}: {{ .Values.secret.oidcPrivateKey.value | default (get $secretData .Values.secret.oidcPrivateKey.key | b64dec) | default (genPrivateKey "rsa") | b64enc }}
and it could be set after line 40?

I could send a PR unless there is something I'm missing.
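
In the spirit of the existing oidcPrivateKey line, a hypothetical analogous entry could be added; secret.oidcClientSecret does not exist in the chart today and is only meant to illustrate the shape of the change:

{{- .Values.secret.oidcClientSecret.key | nindent 2 }}: {{ .Values.secret.oidcClientSecret.value | default (get $secretData .Values.secret.oidcClientSecret.key | b64dec) | default (randAlphaNum 64) | b64enc }}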

checksum for secretACL.yaml

It'd be great if you could add a checksum annotation for secretACL.yaml, so the pods get redeployed when access_control rules change.

Thank you in advance.

ldap config should have enabledSecret

configMap.authentication_backend.ldap.enabledSecret is missing.

I think this should be set up the same as configMap.session.redis.enabledSecret, so that I can prevent the secret key from being created and the AUTHELIA_AUTHENTICATION_BACKEND_LDAP_PASSWORD_FILE pod env from being added.

This will allow me to provide my own secret via pod.env, pod.extraVolumes and pod.extraVolumeMounts.

This is needed for secret re-use when other charts are creating the secrets for those other services.

configMap:
  authentication_backend:
    ldap:
      enabled: true
      enabledSecret: false

pod:
  env:
  - name: AUTHELIA_SESSION_REDIS_PASSWORD_FILE
    value: "/redis-secrets/redis-password"
  - name: AUTHELIA_SESSION_REDIS_HIGH_AVAILABILITY_SENTINEL_PASSWORD_FILE
    value: "/redis-secrets/redis-password"
  - name: AUTHELIA_AUTHENTICATION_BACKEND_LDAP_PASSWORD_FILE
    value: "/ldap-secrets/LDAP_ADMIN_PASSWORD"

  extraVolumeMounts:
  - mountPath: /redis-secrets
    name: redis-credentials
    readOnly: true
  - mountPath: /ldap-secrets
    name: ldap-credentials
    readOnly: true

  extraVolumes:
  - name: redis-credentials
    secret:
      defaultMode: 420
      items:
      - key: redis-password
        path: redis-password
  - name: ldap-credentials
    secret:
      defaultMode: 420
      items:
      - key: LDAP_ADMIN_PASSWORD
        path: LDAP_ADMIN_PASSWORD

configuration key not expected error in server, totp and webauthn

"Configuration: configuration key not expected: server.headers.csp_template"
"Configuration: configuration key not expected: totp.disable"
"Configuration: configuration key not expected: webauthn.disable"

The above errors are found in the Authelia pod logs. TOTP and WebAuthn are disabled in values.yaml, but these errors are still displayed.
Even when server.headers.csp_template is left empty, the error arises.

Latest chart fails with parsing telemetry even with it disabled

Since the upgrade to 0.8.37, Authelia fails to start with:

time="2022-07-01T07:10:15Z" level=error msg="Configuration: error occurred during unmarshalling configuration: 1 error(s) decoding:\n\n* error decoding 'telemetry.metrics.address': could not decode '0.0.0.0:9959' to a Address: the string '0.0.0.0:9959' does not appear to be a valid address"
time="2022-07-01T07:10:15Z" level=fatal msg="Can't continue due to the errors loading the configuration"

I don't have any of the metrics values configured, so these are all the default values.

Current chart version is broken since 4.37 upgrade

Today, flux updated my authelia release and left the pod in CrashLoopBackOff. Looking at the logs:

time="2022-10-23T09:07:33+02:00" level=error msg="Configuration: authentication_backend: file: password: argon2: option 'memory' is configured as '65536' but must be greater than or equal to '524288' or '65536' (the value of 'parallelism) multiplied by '8'"
time="2022-10-23T09:07:33+02:00" level=fatal msg="Can't continue due to the errors loading the configuration"

Looking at the ConfigMap, the parallelism value is not correct:

algorithm: 'argon2'
argon2:
  variant: 'argon2id'
  iterations: 3
  memory: 65536
  parallelism: 65536
  key_length: 32
  salt_length: 16

Maybe the problem is on this line:

parallelism: {{ $auth.file.password.parallelism | default $auth.file.password.argon2.memory | default 4 }}

Are you looking for the parallelism value in the password block ($auth.file.password.parallelism) instead of inside the argon2 block ($auth.file.password.argon2.parallelism)? Since the $auth.file.password.parallelism value doesn't exist in my case (I changed the configuration to the current structure in values.yaml), it takes the same value as the memory, which leaves the application broken.
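
If that is the cause, the corrected line would presumably read:

parallelism: {{ $auth.file.password.parallelism | default $auth.file.password.argon2.parallelism | default 4 }}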

Here is the configuration block:

authentication_backend:
        password_reset:
          disable: false
        file:
          enabled: true
          path: /config/users_database.yml
          watch: true
          search:
            email: false
            case_insensitive: false
          password:
            algorithm: argon2
            argon2:
              variant: argon2id
              iterations: 3
              memory: 65536
              parallelism: 4
              key_length: 32
              salt_length: 16

Thank you!

existingConfigMap still includes secrets

Hi,

I was just about to move my configuration file into its own ConfigMap in Kubernetes using the existingConfigMap key.

But I have encountered a lot of issues while doing this, and it looks like existingConfigMap currently cannot be used as expected, because some values under configMap are used during the Helm chart rendering.

For example, which kind of storage is configured is determined by a value in configMap.storage, but this whole key is going to be migrated into the ConfigMap referenced by existingConfigMap.

Here is an example of the key I am talking about:
https://github.com/authelia/chartrepo/blob/master/charts/authelia/templates/deployment.yaml#L129C16-L129C16

Maybe in the future the storage type itself should be a higher-level key.

Thanks

[Error]The encryption key is not valid against the schema check value

I'm using an alphanumeric (uppercase characters) storage encryption key.
path in values: secret.storageEncryptionKey.value

time="2022-03-25T13:57:56Z" level=error msg="Failure running the storage provider startup check: the encryption key is not valid against the schema check value" stack="github.com/authelia/authelia/v4/internal/commands/root.go:92 doStartupChecks\ngithub.com/authelia/authelia/v4/internal/commands/root.go:78 cmdRootRun\ngithub.com/spf13/[email protected]/command.go:860 (*Command).execute\ngithub.com/spf13/[email protected]/command.go:974 (*Command).ExecuteC\ngithub.com/spf13/[email protected]/command.go:902 (*Command).Execute\ngithub.com/authelia/authelia/v4/cmd/authelia/main.go:10 main\nruntime/proc.go:255 main\nruntime/asm_amd64.s:1581 goexit"

Please help me determine whether the format is right. The information about the schema check value is not explained anywhere else.

IMPORTANT: version 0.9.0 will be a BREAKING change

The chart, as it is still beta, is subject to breaking changes. I intend to do some significant refactoring in v0.9, and this will not be compatible with values from v0.8 or prior. Please feel free to give feedback in PR #129 or #108, which is the issue that triggered the changes. I'd like some feedback in advance of releasing if possible.

Enhancement: Include Redis and Postgres chart so it can be deployed though Authelia chart

The thinking is that you could specify things like:

---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: authentik
  namespace: security
spec:
  interval: 5m
  chart:
    spec:
      # renovate: registryUrl=https://charts.goauthentik.io
      chart: authentik
      version: 2.0.0
      sourceRef:
        kind: HelmRepository
        name: authentik-charts
        namespace: flux-system
      interval: 5m
  values:
    outposts:
      docker_image_base: ghcr.io/goauthentik/%(type)s
    fullnameOverride: authentik
    image:
      repository: ghcr.io/goauthentik/server
      tag: latest
      pullPolicy: Always
      
    authentik:
      secret_key: "${SECRET_AUTHENTIK_SECRET_KEY}"
      postgresql:
        host: "authentik-postgresql"
        name: "authentik"
        user: "authentik"
        password: "${SECRET_AUTHENTIK_POSTGRES_PASSWORD}"
      redis:
        host: "authentik-redis-master"
      email:
        host: "smtp.eu.mailgun.org"
        port: 587
        use_tls: true
        username: "authentik@mg.${MAIN_DOMAIN}"
        password: "${SECRET_MAILGUN_PASSWORD}"
        from: "no-reply@mg.${MAIN_DOMAIN}"
    volumeMounts:
    - name: media
      mountPath: /media
    volumes:
    - name: media
      persistentVolumeClaim:
        claimName: authentik-media-v1

    ingress:
      enabled: true
      ingressClassName: "traefik"
      annotations:
        traefik.ingress.kubernetes.io/router.entrypoints: "websecure"
#        traefik.ingress.kubernetes.io/router.middlewares: "networking-cloudflare-ips@kubernetescrd"
      hosts:
      - host: "id.${MAIN_DOMAIN}"
        paths:
        - path: "/"
          pathType: Prefix
      tls:
      - hosts:
        - "id.${MAIN_DOMAIN}"
        secretName: ${MAIN_DOMAIN}-tls
    postgresql:
      enabled: true
      image:
        repository: postgres
        tag: '11.12'
      postgresqlUsername: authentik
      postgresqlDatabase: authentik
      postgresqlPassword: "${SECRET_AUTHENTIK_POSTGRES_PASSWORD}"
      postgresqlDataDir: "/data/pgdata"
      persistence:
       enabled: true
       size: 8Gi
       mountPath: "/data/"
    redis:
      enabled: true
      image:
       repository: redis
       tag: 'latest'

Basically, that means this chart would deploy Redis too if it's enabled, and use it.
Same with the storage backend.

Automatically recreate pods on config change

Hello,

Right now Authelia does not have hot reload. With Helm it's possible to automatically roll out the pods when the configuration changes, based on a file checksum.

Is it possible to add to this chart the technique described here - https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments? Basically to make pods react to config changes it's required to add something like this:

kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configMap.yaml") . | sha256sum }}
        checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
        # other files which requires pod restart

Since the checksum changes when the files change, the Deployment's annotations change and the pods are recreated.

Not working in Portainer

It's a recurring problem with many chart repos: we were unable to import this one into Helm via Portainer.

My solution (for other charts) was to make a clone and host it locally via NGINX, but that is neither trivial nor user-friendly.

Cheers.

registry value not taken into account

helm install authelia authelia/authelia --set registry=docker.io
k describe pods
Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  14s   default-scheduler  Successfully assigned sso/authelia-xxvv7 to scw-cluster-qa-power-pool-73b1fa7b403044b08075
  Normal   Pulling    14s   kubelet            Pulling image "ghcr.io/authelia/authelia:4.30.5"
  Warning  Failed     13s   kubelet            Failed to pull image "ghcr.io/authelia/authelia:4.30.5": rpc error: code = NotFound desc = failed to pull and unpack image "ghcr.io/authelia/authelia:4.30.5": failed to copy: httpReadSeeker: failed open: content at https://ghcr.io/v2/authelia/authelia/manifests/sha256:cf7c87388c9974b96daaefe7f2d7cb91c5086d9064e789c174be891374cec386 not found: not found
  Warning  Failed     13s   kubelet            Error: ErrImagePull
  Normal   BackOff    12s   kubelet            Back-off pulling image "ghcr.io/authelia/authelia:4.30.5"
  Warning  Failed     12s   kubelet            Error: ImagePullBackOff

Still pulling from ghcr.io (and not working)
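
For what it's worth, the chart may expect the registry nested under the image key rather than a top-level registry value; this is an assumption about the values layout (check the chart's values.yaml), but it would look like:

image:
  registry: docker.io
  repository: authelia/authelia

or, equivalently, --set image.registry=docker.io on the command line.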

Access control rules from ingress annotations

I think there is a disconnect between the Authelia configuration and Ingress resources, and Authelia should support access control rules in Ingress annotations, something like the following:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo
  namespace: foo
  annotations:
    kubernetes.io/ingress.class: traefik
    cert-manager.io/cluster-issuer: default
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.middlewares: authelia-chain-foo-authelia-auth@kubernetescrd
    traefik.ingress.kubernetes.io/router.tls: "true"
    external-dns.alpha.kubernetes.io/target: traefik.example.com
    authelia.com/acl-policy: two_factor
    authelia.com/acl-networks: 1.2.3.4
    authelia.com/acl-subject: "group:admins"
spec:
  rules:
  - host: foo.example.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: foo
            port:
              number: 8000
  tls:
  - hosts:
    - foo.example.com
    secretName: foo-tls

Maybe it's also possible to annotate paths and get even more fine-grained control.

Beta Testers Unite: Information About the Repo

This thread is mainly intended as a place for me to move the TODO list, but also for people to suggest things for when the chart is "officially" released. It is also fine if you want to open individual issues; in fact this is encouraged, so we can link commit history to individual issues.

It's also a place you're free to just ask questions.

Basic Overview

The chart is currently in beta and is subject to breaking changes in any chart release until the 1.0.0 release, as per semver rules. Once we hit 1.0.0, breaking changes are intended to be limited to major version bumps (1.x.x -> 2.x.x). Feature bumps will be restricted to the minor version number (1.0.x -> 1.1.x), and everything else is a patch release which just fixes bugs.

Currently the main thing I'm thinking over is the config map, specifically the providers. There are essentially two ways I can configure them: I can make it so that if they're defined they are enabled, or I can add individual switches. For example, I can add a key to mysql called "enabled", a boolean which is false by default. The advantage of the enabled option is that all options are configured in values; the disadvantage is that the chart will require a considerable amount of additional curly braces. I am currently leaning towards using the enabled switch, but if anyone has arguments against it then please let me know.
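
To make the trade-off concrete, a sketch of the two styles for a single provider in values.yaml (the keys are illustrative only):

# enabled switch: the block is always present, a boolean turns it on
storage:
  postgres:
    enabled: true
    host: postgres.databases.svc
---
# presence-based: defining the block at all enables the provider
storage:
  postgres:
    host: postgres.databases.svc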

To Do List

  • Values
    • Standard Values
    • Local Values
    • Production Values #9
  • Setup CI
    • Chart Linting (chart-testing)
    • Chart Releaser
    • yamllint config
    • Merge restrictions (version bump requirement)
    • Unit Testing (terratest?)
    • Integration Testing (kind)
  • Chart Values Validation
    • Enabled/Defined Providers #10
    • Statefulness (probably needs tweaking)
  • Documentation
    • Chart Documentation
    • Automatically Generate Website Layout with versioned docs (jekyll)

Ingress tls secret template missing

Hi
I'm currently working with the Authelia chart. In the ingress manifest I have tls.secret; the secret name is "authelia-tls" by default, and I don't want to use it for security purposes. I would like to use our own TLS secret.

  • I used our TLS secret name; it worked for me because Authelia is running in the same namespace.
  • The secret is already deployed before Authelia.
  • So I create a secret.yaml with the cert and key.
  • When I do helm install secret -f /secret.yaml -f /values.yaml, the ingress uses this template functionality.
  • I cannot create the template and make it work with Authelia; this is not supported.
    I would like to check with the team if you could add the custom template below. Thanks

Here is the template

{{- if .Values.ingress.tls.enabled }}
apiVersion: {{ $apiVersion }}
kind: Secret
metadata:
  name: test-tls  # this name will be used in the ingress manifest
  labels:
    app: monitor  # our app name
type: kubernetes.io/tls
data:
  tls.crt: {{ .Values.secrets.cert | quote }}
  tls.key: {{ .Values.secrets.key | quote }}
{{- end }}

Add support for injecting secrets from hashicorp vault

I would like to transition from using k8s secrets and use Hashicorp Vault instead. It has many advantages over using k8s secrets, they are explained well at the beginning of this blog https://www.hashicorp.com/resources/vault-and-kubernetes-better-together.

They introduced the tool vault-k8s, which leverages the Kubernetes Mutating Admission Webhook to intercept and augment specifically annotated pod configurations for secrets injection using init and sidecar containers. https://www.hashicorp.com/blog/injecting-vault-secrets-into-kubernetes-pods-via-a-sidecar.

If I simplify it:

  • the user assigns a k8s service account to pod spec. This service account is paired with the Vault role, which allows access to specific secrets
  • the user adds pod annotations, which define how secrets are rendered into files mounted at the path '/vault/secrets'
  • the user makes the app read secrets from '/vault/secrets/[SECRET-NAME]' path
  • (optional) if the user uses dynamic secrets and the secret has changed during the app lifetime, the annotation for app reload may be specified through "vault.hashicorp.com/agent-inject-command-SECRET-NAME"

A detailed tutorial is available at https://learn.hashicorp.com/tutorials/vault/kubernetes-sidecar#define-a-kubernetes-service-account

Changes needed for this helm chart:

  • add k8s service account which will be used by pods
  • introduce a variable, which will signal the use of Hashicorp Vault instead of k8s secrets.
  • add logic for rendering the pod secrets path when Vault is used. Currently a static prefix '/usr/app/secrets/' is used; we need it to be '/vault/secrets' if Vault is used.
  • add necessary annotations to render needed vars
  • add SecurityContext.RunAsUser to pod spec, to allow the use of annotation 'vault.hashicorp.com/agent-run-as-same-user'
  • find out how to define a mapping from chart secrets to Vault secrets. This may reuse the value field, or introduce something new like vault_path
  • find out how to handle the logic around generating new secrets when they are not defined. Currently, as stated in the docs, 'If both the values and existingSecret are not defined, this chart randomly generates a new secret on each install'. This means that if Vault is used we don't want the chart to generate secrets and store them in k8s. It applies to JWT_TOKEN, STORAGE_PASSWORD, SESSION_ENCRYPTION_KEY. We must instruct the user to use the appropriate Vault toolset to do this.

The annotations needed:

  • vault.hashicorp.com/agent-inject: "true"
  • vault.hashicorp.com/agent-inject-status: "update" (if we want to update secret when rotated)
  • vault.hashicorp.com/agent-run-as-same-user: "true" (if we want to allow sending signals to app, which enable reload of secret)
  • vault.hashicorp.com/agent-inject-secret-SECRET-NAME: "[VAULT_PATH_TO_SECRET]" (for each defined secret)
  • vault.hashicorp.com/agent-inject-template-SECRET-NAME: (go template to render secret, for each defined secret)
  • vault.hashicorp.com/agent-inject-command: eg. "sh -c 'kill -HUP $(pidof authelia)'" (if we want to reload the app when the secret was updated)
  • vault.hashicorp.com/role: eg. "authelia" (set to the Vault role defined by the user - the scope of client access permission to Vault secrets)
  • vault.hashicorp.com/tls-secret: eg. "vault-tls-client" (name of the Kubernetes secret containing TLS Client and CA certificates and keys. This is mounted to /vault/tls. This is optional if we want to verify Vault identity)
  • vault.hashicorp.com/ca-cert: eg. "/vault/tls/ca.crt" (path of the CA certificate used to verify Vault's TLS. This is optional if we want to verify Vault identity)
  • vault.hashicorp.com/client-cert: eg. "/vault/tls/client.crt" (path of the client certificate used when communicating with Vault via mTLS. This is optional if Vault enforces client identity verify.)
  • vault.hashicorp.com/client-key: eg. "/vault/tls/client.key" (path of the client public key used when communicating with Vault via mTLS. This is optional if Vault enforces client identity verify.)

Spec for all supported annotations https://www.vaultproject.io/docs/platform/k8s/injector/annotations
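
Put together, the values-side result could look roughly like this, assuming the chart exposes pod.annotations and that a Vault role and secret path have already been created by the operator (all names and paths below are examples only):

pod:
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/agent-inject-status: "update"
    vault.hashicorp.com/agent-run-as-same-user: "true"
    vault.hashicorp.com/role: "authelia"
    # example Vault KV v2 path; the key layout inside Vault is up to the operator
    vault.hashicorp.com/agent-inject-secret-JWT_TOKEN: "secret/data/authelia"
    vault.hashicorp.com/agent-inject-template-JWT_TOKEN: |
      {{- with secret "secret/data/authelia" -}}{{ .Data.data.jwt_token }}{{- end -}}
    vault.hashicorp.com/agent-inject-command-JWT_TOKEN: "sh -c 'kill -HUP $(pidof authelia)'"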

Ping PostgreSql deployed in K8s

Authelia can't ping PostgreSQL deployed in Kubernetes and exposed with a ClusterIP service.

time="2023-05-31T15:56:15Z" level=warning msg="Configuration: access control: no rules have been specified so the 'default_policy' of 'two_factor' is going to be applied to all requests"
time="2023-05-31T15:56:15Z" level=info msg="Authelia v4.37.5 is starting"
time="2023-05-31T15:56:15Z" level=info msg="Log severity set to info"
time="2023-05-31T15:56:25Z" level=error msg="Failure running the storage provider startup check: error pinging database: failed to connect to `host=postgres-postgresql.postgres.svc.cluster.local. user=authelia database=authelia`: hostname resolving error (lookup postgres-postgresql.postgres.svc.cluster.local.: no such host)" stack="github.com/authelia/authelia/v4/internal/commands/root.go:281 doStartupChecks\ngithub.com/authelia/authelia/v4/internal/commands/root.go:87  cmdRootRun\ngithub.com/spf13/[email protected]/command.go:920                  (*Command).execute\ngithub.com/spf13/[email protected]/command.go:1044                 (*Command).ExecuteC\ngithub.com/spf13/[email protected]/command.go:968                  (*Command).Execute\ngithub.com/authelia/authelia/v4/cmd/authelia/main.go:10       main\nruntime/proc.go:250                                           main\nruntime/asm_amd64.s:1594                                      goexit"
time="2023-05-31T15:56:25Z" level=error msg="Error checking user authentication YAML database" error="user authentication database file doesn't exist at path '/config/users_database.yml' and has been generated" stack="github.com/authelia/authelia/v4/internal/authentication/file_user_provider.go:130 (*FileUserProvider).StartupCheck\ngithub.com/authelia/authelia/v4/internal/commands/root.go:323                     doStartupCheck\ngithub.com/authelia/authelia/v4/internal/commands/root.go:286                     doStartupChecks\ngithub.com/authelia/authelia/v4/internal/commands/root.go:87                      cmdRootRun\ngithub.com/spf13/[email protected]/command.go:920                                      (*Command).execute\ngithub.com/spf13/[email protected]/command.go:1044                                     (*Command).ExecuteC\ngithub.com/spf13/[email protected]/command.go:968                                      (*Command).Execute\ngithub.com/authelia/authelia/v4/cmd/authelia/main.go:10                           main\nruntime/proc.go:250                                                               main\nruntime/asm_amd64.s:1594                                                          goexit"
time="2023-05-31T15:56:25Z" level=error msg="Failure running the user provider startup check: one or more errors occurred checking the authentication database" stack="github.com/authelia/authelia/v4/internal/commands/root.go:287 doStartupChecks\ngithub.com/authelia/authelia/v4/internal/commands/root.go:87  cmdRootRun\ngithub.com/spf13/[email protected]/command.go:920                  (*Command).execute\ngithub.com/spf13/[email protected]/command.go:1044                 (*Command).ExecuteC\ngithub.com/spf13/[email protected]/command.go:968                  (*Command).Execute\ngithub.com/authelia/authelia/v4/cmd/authelia/main.go:10       main\nruntime/proc.go:250                                           main\nruntime/asm_amd64.s:1594                                      goexit"
time="2023-05-31T15:56:25Z" level=error msg="Failure running the notification provider startup check: error performing MAIL with the SMTP server: 530 5.5.1 Authentication Required." stack="github.com/authelia/authelia/v4/internal/commands/root.go:293 doStartupChecks\ngithub.com/authelia/authelia/v4/internal/commands/root.go:87  cmdRootRun\ngithub.com/spf13/[email protected]/command.go:920                  (*Command).execute\ngithub.com/spf13/[email protected]/command.go:1044                 (*Command).ExecuteC\ngithub.com/spf13/[email protected]/command.go:968                  (*Command).Execute\ngithub.com/authelia/authelia/v4/cmd/authelia/main.go:10       main\nruntime/proc.go:250                                           main\nruntime/asm_amd64.s:1594                                      goexit"
time="2023-05-31T15:56:25Z" level=fatal msg="The following providers had fatal failures during startup: storage, user, notification" stack="github.com/authelia/authelia/v4/internal/commands/root.go:309 doStartupChecks\ngithub.com/authelia/authelia/v4/internal/commands/root.go:87  cmdRootRun\ngithub.com/spf13/[email protected]/command.go:920                  (*Command).execute\ngithub.com/spf13/[email protected]/command.go:1044                 (*Command).ExecuteC\ngithub.com/spf13/[email protected]/command.go:968                  (*Command).Execute\ngithub.com/authelia/authelia/v4/cmd/authelia/main.go:10       main\nruntime/proc.go:250                                           main\nruntime/asm_amd64.s:1594                                      goexit"
Stream closed EOF for authelia/authelia-vq9st (authelia)

I think the problem may be that ping fails on ClusterIP services.

Could it be more convenient to initiate a direct connection to the database and close it if there is a response?

traefikCRD with lets-encrypt makes traefik sad

Hey there.

Thanks for working on the chart! I noticed in trying to use it that the current template for the IngressRoute always sets a secret name, even though this isn't required when using a resolver.

This results in my Traefik logs being spammed with

level=error msg="Error configuring TLS: secret kube-system/authelia-tls does not exist" providerName=kubernetescrd ingress=authelia namespace=kube-system

I'm not super sure, but I suspect it's this line here, which either needs to disappear when a resolver is specified or be somehow configurable so it doesn't appear.

secretName: {{ default (printf "%s-traefik-tls" (include "authelia.name" .)) .Values.ingress.tls.secret }}
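
One way to express that, assuming the resolver is exposed under ingress.traefikCRD.tls.certResolver (the exact values path is a guess), would be to only emit the secretName when no resolver is set:

  tls:
    {{- if .Values.ingress.traefikCRD.tls.certResolver }}
    certResolver: {{ .Values.ingress.traefikCRD.tls.certResolver }}
    {{- else }}
    secretName: {{ default (printf "%s-traefik-tls" (include "authelia.name" .)) .Values.ingress.tls.secret }}
    {{- end }}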

traefik will change apiVersion in v3

Hello,

Thanks for this Helm chart. As I am a Kubernetes beginner, Authelia is so far the most demanding service I have deployed, but this chart really helps, even in its current beta state.

I just saw by chance that the current helm chart deploys ingressroutes and middlewares for traefik by
apiVersion: traefik.containo.us/v1alpha1

Traefik additionally supports:
apiVersion: traefik.io/v1alpha1

As most of my current deployment uses the latter, that does not seem to cause any trouble. However, Traefik's helm chart repo mentions that support for traefik.containo.us/v1alpha1 will be removed with Traefik v3: https://doc.traefik.io/traefik/v3.0/migration/v2-to-v3/

As both apiVersions are supported with the current v2, it seems reasonable to change to the new apiVersion.
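
A small helper could pick the newer group when the cluster advertises it and fall back to the legacy one otherwise, keeping in mind the capability-detection caveat with Flux/Argo CD mentioned in the API-kinds issue above (the helper name is illustrative):

{{- define "authelia.traefik.apiVersion" -}}
{{- if .Capabilities.APIVersions.Has "traefik.io/v1alpha1" -}}
traefik.io/v1alpha1
{{- else -}}
traefik.containo.us/v1alpha1
{{- end -}}
{{- end -}}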

Kind regards

Unable to set pod.strategy.type

Hey Folks,

I have a small installation of Authelia that uses a PVC for persistence (ReadWriteOnce).
That means that RollingUpdate will not work for my Authelia installation (since you cannot start another pod before removing the previous one).

I've been trying to set .Values.pod.strategy.type, but I was unsuccessful. Helm complains with:

  Type    Reason  Age                From             Message
  ----    ------  ----               ----             -------
  Normal  info    46m (x2 over 46m)  helm-controller  HelmChart 'flux-system/networking-authelia' is not ready
  Normal  error   37m (x2 over 46m)  helm-controller  Helm install failed: template: authelia/templates/deployment.yaml:22:13: executing "authelia/templates/deployment.yaml" at <include "authelia.deploymentStrategy" .>: error calling include: template: authelia/templates/_helpers.tpl:460:18: executing "authelia.deploymentStrategy" at <.Values.pod.strategy>: nil pointer evaluating interface {}.pod

It seems to be related to helm/helm#8026

Reference to my deployment: https://github.com/fernandopn/swarm
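
For reference, the intent is simply the following values; on chart versions where pod.strategy has no default in values.yaml, setting it appears to trip the nil-pointer error above:

pod:
  kind: Deployment
  strategy:
    type: Recreate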

Question: Why are there two different Traefik Middleware chains?

Looking at the generated template I find the following two middlewares. Is there a specific reason for having two?

---
# Source: authelia/templates/traefikCRD/middlewares.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: chain-authelia-auth
  labels: 
    app.kubernetes.io/name: authelia
    app.kubernetes.io/instance: authelia
    app.kubernetes.io/version: 4.29.4
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: authelia-0.4.19
spec:
  chain:
    middlewares:
      - name: forwardauth-authelia
        namespace: default
---
# Source: authelia/templates/traefikCRD/middlewares.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: chain-authelia
  labels:
    app.kubernetes.io/name: authelia
    app.kubernetes.io/instance: authelia
    app.kubernetes.io/version: 4.29.4
    app.kubernetes.io/managed-by: Helm
    helm.sh/chart: authelia-0.4.19
spec:
  chain:
    middlewares:
      - name: headers-authelia
        namespace: default

Relevant part of config:

ingress:
  enabled: true

  traefikCRD:
    enabled: true

    entryPoints:
      - websecure

    middlewares:
      auth:
        authResponseHeaders:
        - Remote-User
        - Remote-Name
        - Remote-Email
        - Remote-Groups

      chains:
        auth:
          before: []

          after: []

        ingressRoute:

          before: []

          after: []
  

consider using Kustomize

Wow. I have never seen such complicated Helm charts, and I have seen quite a few.

I would suggest as an alternative that you consider using Kustomize or jsonnet instead of Helm.

Using default values.local.yaml results in crash-loop

I'm using the supplied values.local.yaml file to start testing Authelia, but installation with it seems to result in a crash loop. Here's the error log:

time="2021-07-23T19:01:45Z" level=warning msg="No access control rules have been defined so the default policy two_factor will be applied to all requests"
time="2021-07-23T19:01:45Z" level=info msg="Logging severity set to info"
time="2021-07-23T19:01:45Z" level=info msg="Storage schema upgrade to v1 completed"
time="2021-07-23T19:01:45Z" level=error msg="Unable to find database file: /config/users_database.yml" stack="github.com/authelia/authelia/cmd/authelia/main.go:92  startServer\ngithub.com/authelia/authelia/cmd/authelia/main.go:145 main.func1\ngithub.com/spf13/[email protected]/command.go:856          (*Command).execute\ngithub.com/spf13/[email protected]/command.go:960          (*Command).ExecuteC\ngithub.com/spf13/[email protected]/command.go:897          main\ngithub.com/authelia/authelia/cmd/authelia/main.go:163 main\nruntime/proc.go:225                                   main\nruntime/asm_amd64.s:1371                              goexit"
time="2021-07-23T19:01:45Z" level=error msg="Generating database file: /config/users_database.yml" stack="github.com/authelia/authelia/cmd/authelia/main.go:92  startServer\ngithub.com/authelia/authelia/cmd/authelia/main.go:145 main.func1\ngithub.com/spf13/[email protected]/command.go:856          (*Command).execute\ngithub.com/spf13/[email protected]/command.go:960          (*Command).ExecuteC\ngithub.com/spf13/[email protected]/command.go:897          main\ngithub.com/authelia/authelia/cmd/authelia/main.go:163 main\nruntime/proc.go:225                                   main\nruntime/asm_amd64.s:1371                              goexit"
time="2021-07-23T19:01:45Z" level=error msg="Generated database at: /config/users_database.yml" stack="github.com/authelia/authelia/cmd/authelia/main.go:92  startServer\ngithub.com/authelia/authelia/cmd/authelia/main.go:145 main.func1\ngithub.com/spf13/[email protected]/command.go:856          (*Command).execute\ngithub.com/spf13/[email protected]/command.go:960          (*Command).ExecuteC\ngithub.com/spf13/[email protected]/command.go:897          main\ngithub.com/authelia/authelia/cmd/authelia/main.go:163 main\nruntime/proc.go:225                                   main\nruntime/asm_amd64.s:1371                              goexit"

Secret configuration is problematic

Obviously storing clear-text secrets in a values file is a bad idea; however, generating secrets for things like SMTP/DB/Redis/etc. is not viable either, since one side or the other has to come first. So, clearly, existingSecret is the way to go.

However, there are some problems with the way the chart handles secrets:

  1. No documentation on the data structure for the existingSecret: through trial and error, I determined that the key fields under secret.*.key control the secret keys that should exist, but it was non-obvious from looking at the default values that these objects were relevant when specifying existingSecret - the upstream docs don't help in this regard either.
  2. No way to omit certain passwords: if I'm not using a feature, I shouldn't have to provide a secret for it. Attempting to omit secrets from the secret values object by setting their parent keys to null blows up the deployment.yaml during templating, and omitting secrets from the existingSecret blows up the image at run-time, due to missing secret mounts from the deployment.
  3. SMTP does not appear to honor the enabledSecret option, resulting in my experiences trying to disable passwords via the methods above, and rendering unauthenticated SMTP non-functional.
  4. The enabledSecret fields lack documentation, and feel a little clumsy, since they conflict with the secret.* structure. Perhaps each object under secret should have an enabled bool instead (or I guess have handling allowing them to be set to null), then people can disable the need to supply OIDC/Duo/etc secrets if they're not using them, in addition to things that take an optional secret, like Redis/SMTP.

Edited now that I've managed to generate mostly functional output, since I can see that if features are disabled, their secrets are also removed from the ConfigMap.
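
For anyone hitting the first point, a sketch of an existingSecret, assuming the default key names that appear elsewhere in these issues (JWT_TOKEN, SESSION_ENCRYPTION_KEY, STORAGE_ENCRYPTION_KEY, STORAGE_PASSWORD); only include the keys your enabled features actually require:

apiVersion: v1
kind: Secret
metadata:
  name: authelia
type: Opaque
stringData:
  JWT_TOKEN: "change-me"
  SESSION_ENCRYPTION_KEY: "change-me"
  STORAGE_ENCRYPTION_KEY: "change-me"
  STORAGE_PASSWORD: "change-me"

The chart would then presumably be pointed at it via the existingSecret value (the exact values path may differ by chart version).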

Trying a fresh install, results in errors

I tried to install this chart, but it failed with the following error:
"spec.updateStrategy.rollingUpdate.maxSurge: Invalid value: intstr.IntOrString{Type:1, IntVal:0, StrVal:"25%"}: may not be set when maxUnavailable is non-zero"

What is the issue?

I have not set that, so my values are as follows:

spec:
  interval: 5m
  chart:
    spec:
      # renovate: registryUrl=https://charts.authelia.com
      chart: authelia
      version: 0.6.3
      sourceRef:
        kind: HelmRepository
        name: authelia-charts
        namespace: flux-system
      interval: 5m
  values:
    fullnameOverride: authelia-dev-domain
    rbac:
      enabled: true
      serviceAccountName: authelia-dev
    domain: ${DEV_DOMAIN}
    ingress:
      subdomain: sso
      tls:
        enabled: true
        secret: networking/${DEV_DOMAIN}-tls
      traefikCRD:
        enabled: false

    configMap:
      enabled: true
      server:
        port: 9091
        read_buffer_size: 4096
        write_buffer_size: 4096

      log:
        level: info
        format: json

      theme: dark
      totp:
        issuer: ${DEV_DOMAIN}
        period: 30
        skew: 1

      authentication_backend:
        disable_reset_password: false

        ldap:
          enabled: true
          implementation: custom

          url: ldap://ak-outpost-ldap.security.svc.cluster.local
          start_tls: false
          base_dn: DC=ldap,DC=skylab,DC=fi
          username_attribute: "cn"
          additional_users_dn: "ou=users"
          user: cn=akadmin,dc=ldap,DC=skylab,dc=fi
