chart's Introduction

Introduction

This is a Helm chart for installing Mastodon into a Kubernetes cluster. The basic usage is:

  1. edit values.yaml or create a separate yaml file for custom values
  2. helm dep update
  3. helm install --namespace mastodon --create-namespace my-mastodon ./ -f path/to/additional/values.yaml

This chart is tested with k8s 1.21+ and helm 3.8.0+.

NOTICE: Future Deprecation

We have plans in the very near future to deprecate this chart in favor of a new git repo, which has proper helm repository support (e.g. helm repo add), and will contain multiple charts, both for mastodon and for supplementary components that we make use of.

We still encourage suggestions and PRs to help make this chart better, and this repository will remain available after the new charts are ready to give users time to migrate. However, we will not be approving large PRs, or PRs that change fundamental chart functions, as those changes should be directed to the new charts.

Please see the pinned GitHub issue for more info & discussion.

Configuration

The variables that must be configured are:

  • passwords and keys in the mastodon.secrets, postgresql, and redis groups; if left blank, some of these values will be autogenerated, but they will not persist across upgrades.

  • SMTP settings for your mailer in the mastodon.smtp group.
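
For example, a minimal custom values file might look like the sketch below (every value is a placeholder to replace with your own; the key layout mirrors the values.yaml excerpts quoted further down this page):

mastodon:
  secrets:
    secret_key_base: "<generated secret>"
    otp_secret: "<generated secret>"
    vapid:
      private_key: "<generated key>"
      public_key: "<generated key>"
  smtp:
    server: smtp.example.com
    port: 587
    from_address: notifications@example.com
    login: notifications@example.com
    password: "<smtp password>"

postgresql:
  auth:
    password: "<database password>"

redis:
  auth:
    password: "<redis password>"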

If your PersistentVolumeClaim is ReadWriteOnce and you're unable to use an S3-compatible service or run a self-hosted compatible service like MinIO, then you need to set a pod affinity so that the web and sidekiq pods are scheduled to the same node.

Example configuration:

podAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
          - key: app.kubernetes.io/part-of
            operator: In
            values:
              - rails
      topologyKey: kubernetes.io/hostname

Administration

You can run admin CLI commands in the web deployment.

kubectl -n mastodon exec -it deployment/mastodon-web -- bash
tootctl accounts modify admin --reset-password

or

kubectl -n mastodon exec -it deployment/mastodon-web -- tootctl accounts modify admin --reset-password

Missing features

Currently this chart does not support:

  • Hidden services
  • Swift

Upgrading

Because database migrations are managed as a Job separate from the Rails and Sidekiq deployments, it’s possible they will occur in the wrong order. After upgrading Mastodon versions, it may sometimes be necessary to manually delete the Rails and Sidekiq pods so that they are recreated against the latest migration.
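
For example, a sketch of that manual step, assuming the release and deployment names used elsewhere on this page (yours depend on your release name and Sidekiq worker configuration):

# restart the Rails (web) and Sidekiq deployments so their pods pick up the latest migration
kubectl -n mastodon rollout restart deployment/mastodon-web
kubectl -n mastodon rollout restart deployment/mastodon-sidekiq-all-queues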

chart's People

Contributors

abbottmg, ananace, bigwheel, bobbyd0g, consideratio, deepy, dunn, emilweth, gargron, hardillb, hinricht, ikuradon, jeremiahlee, jgsmith, jimeh, lleyton, metal3d, mickkael, mohe2015, norman-zon, paolomainardi, pbzweihander, pqo, renchap, roobre, sisheogorath, timetinytim, varac, wyrihaximus, ydkk

chart's Issues

mastodon-web still looks for secret mastodon-postgresql even when existing secret is used

When starting mastodon web I get

Warning Failed 11s (x2 over 13s) kubelet Error: secret "mastodon-postgresql" not found

However, postgres can start and uses the correct key:

POSTGRES_POSTGRES_PASSWORD: <set to the key 'postgres-password' in secret 'mastodon-values-secret'>

Seems there's some kind of mistake in the handling code for this?

My values.yaml contains

    postgresql:
      # -- disable if you want to use an existing db; in which case the values below
      # must match those of that external postgres instance
      enabled: true
      # postgresqlHostname: preexisting-postgresql
      # postgresqlPort: 5432
      auth:
        database: mastodon_production
        username: mastodon
        # you must set a password; the password generated by the postgresql chart will
        # be rotated on each upgrade:
        # https://github.com/bitnami/charts/tree/master/bitnami/postgresql#upgrade
        password: ""
        # Set the password for the "postgres" admin user
        # set this to the same value as above if you've previously installed
        # this chart and you're having problems getting mastodon to connect to the DB
        postgresPassword: ""
        # you can also specify the name of an existing Secret
        # with a key of password set to the password you want
        existingSecret: "mastodon-values-secret"

mastodon.local isn't working

Hello

Everything started up fine:

$ helm install --namespace mastodon --create-namespace my-mastodon ./ -f dev-values.yaml 
NAME: my-mastodon
LAST DEPLOYED: Tue Dec 13 14:27:42 2022
NAMESPACE: mastodon
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  https://mastodon.local/
$ kubectl get services --namespace mastodon

NAME                                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
my-mastodon-elasticsearch                   ClusterIP   10.97.3.138      <none>        9200/TCP,9300/TCP   20m
my-mastodon-elasticsearch-coordinating-hl   ClusterIP   None             <none>        9200/TCP,9300/TCP   20m
my-mastodon-elasticsearch-data-hl           ClusterIP   None             <none>        9200/TCP,9300/TCP   20m
my-mastodon-elasticsearch-ingest-hl         ClusterIP   None             <none>        9200/TCP,9300/TCP   20m
my-mastodon-elasticsearch-master-hl         ClusterIP   None             <none>        9200/TCP,9300/TCP   20m
my-mastodon-postgresql                      ClusterIP   10.111.193.172   <none>        5432/TCP            20m
my-mastodon-postgresql-hl                   ClusterIP   None             <none>        5432/TCP            20m
my-mastodon-redis-headless                  ClusterIP   None             <none>        6379/TCP            20m
my-mastodon-redis-master                    ClusterIP   10.109.210.218   <none>        6379/TCP            20m
my-mastodon-redis-replicas                  ClusterIP   10.104.27.210    <none>        6379/TCP            20m
my-mastodon-streaming                       ClusterIP   10.107.169.110   <none>        4000/TCP            20m
my-mastodon-web                             ClusterIP   10.98.188.14     <none>        3000/TCP            20m

But https://mastodon.local/ isn't working.

Don't we need to expose something via kubectl, or add the domain to our /etc/hosts?

How do we point the ingress at mastodon.local?

Double reference to checksum/config-secrets

checksum/config-secrets: {{ include ( print $.Template.BasePath "/secret-smtp.yaml" ) $context | sha256sum | quote }}

vs

checksum/config-secrets: {{ include ( print $.Template.BasePath "/secrets.yaml" ) . | sha256sum | quote }}

Not sure how you'll want this resolved. You define the config-secrets annotation in both the helpers and that specific deployment. Unfortunately that means the values are doubled up, and k8s does not like it when that happens very much. Maybe remove the helpers, and let the sidekiq deploy do the annotation without the helper function?

Happy to do a PR and test, but don't know what would be preferred in this case.

Using helm, mastodon (web, streaming, sidekiq) is not installed. Only PostgreSQL and Redis are installed.

I use Helm chart to deploy Mastodon in my VM kubernetes cluster.

But only PostgreSQL and Redis get installed when I deploy Mastodon, PostgreSQL, and Redis.

I don't know what the problem is.

Anyone knows about this issue?

Below, I attached my values.yaml file. I use helm install --namespace mastodon --create-namespace --debug my-mastodon ./ -f values.yaml

Details

image:
  repository: ghcr.io/mastodon/mastodon
  tag: "v4.0"
  pullPolicy: IfNotPresent

mastodon:
  labels: {}

  createAdmin:
    enabled: true
    username: admin
    email: [email protected]
  hooks:
    dbMigrate:
      enabled: true
    assetsPrecompile:
      enabled: true
  cron:
    removeMedia:
      enabled: true
      schedule: "0 0 * * 0"
  locale: en
  local_domain: mastodon.local
  web_domain: null
  singleUserMode: false
  authorizedFetch: false
  limitedFederationMode: false
  persistence:
    assets:
      accessMode: ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
    system:
      accessMode: ReadWriteOnce
      resources:
        requests:
          storage: 100Gi
  s3:
    enabled: false
    access_key: ""
    access_secret: ""
    existingSecret: ""
    bucket: ""
    endpoint: ""
    hostname: ""
    region: ""
    permission: ""
    alias_host: ""
  deepl:
    enabled: false
    plan:
    apiKeySecretRef:
      name:
      key:
  hcaptcha:
    enabled: false
    siteId:
    secretKeySecretRef:
      name:
      key:
  secrets:
    secret_key_base: "f704da41ee039045aa267653d2954db0d4283a9f63ac60e445851cb2354a633372446ffa66f86aff56570c9d24fb77958c32c7bb269edb0b3a56060ba98d9fb1"
    otp_secret: "1b325bdca9099f38f79a95a12b2a35bca184910836cb2ecfb6beee58a9de85c044f6439e1d545c4c11f107c706507051f793119106e5a2a327974a50d244ed29"
    vapid:
      private_key: "vN24YYVW2f1k809AKtH9QoSHI2QmqRxXVIusBEO0tms="
      public_key: "BCyWPOsF5CmRqbzAZNEFiaY88dLThieRmRNXr8NbpypVCxPwf27VkAGgtgH234LMcjw9TT9PCZbUaNA3pDYatBs="
    existingSecret: ""

  revisionHistoryLimit: 2

  sidekiq:
    podSecurityContext: {}
    securityContext: {}
    resources: {}
    affinity: {}
    topologySpreadConstraints: {}
    workers:
      - name: all-queues
        concurrency: 25
        replicas: 1
        resources: {}
        affinity: {}
        topologySpreadConstraints: {}
        queues:
          - default,8
          - push,6
          - ingress,4
          - mailers,2
          - pull
          - scheduler
        image:
          repository:
          tag:
        customDatabaseConfigYml:
          configMapRef:
            name:
            key:
  smtp:
    domain: localhost
    server: smtp.mailplug.co.kr
    from_address: [email protected]
    auth_method: plain
    delivery_method: smtp
    enable_starttls: false
    openssl_verify_mode: none
    port: 465
    tls: false
    login: [email protected]
    password: Snix2019!
    existingSecret:
  streaming:
    image:
      repository:
      tag:
    port: 4000
    workers: 1
    base_url: null
    replicas: 1
    affinity: {}
    topologySpreadConstraints: {}
    podSecurityContext: {}
    securityContext: {}
    resources: {}
  web:
    port: 3000
    replicas: 1
    affinity: {}
    topologySpreadConstraints: {}
    podSecurityContext: {}
    securityContext: {}
    resources: {}
    minThreads: "5"
    maxThreads: "5"
    workers: "2"
    persistentTimeout: "20"
    image:
      repository:
      tag:
    customDatabaseConfigYml:
      configMapRef:
        name:
        key:

  cacheBuster:
    enabled: false
    httpMethod: "GET"
    authHeader:
    authToken:
      existingSecret:

  metrics:
    statsd:
      address: ""
      exporter:
        enabled: false
        port: 9102

  preparedStatements: true

  extraEnvVars: {}

ingress:
  enabled: false
  annotations:
  ingressClassName:
  hosts:
    - host: mastodon.local
      paths:
        - path: "/"
  tls:
    - secretName: mastodon-tls
      hosts:
        - mastodon.local

  streaming:
    enabled: false
    annotations:
    ingressClassName:
    hosts:
      - host: streaming.mastodon.local
        paths:
          - path: "/"
    tls:
      - secretName: mastodon-tls
        hosts:
          - streaming.mastodon.local

elasticsearch:
  enabled: false
  image:
    tag: 7

postgresql:
  enabled: true
  volumePermissions:
    enabled: true
  auth:
    database: mastodon_production
    username: mastodon
    password: "admin"
    existingSecret: ""
  readReplica:
    hostname:
    port:
    auth:
      database:
      username:
      password:
      existingSecret:

redis:
  enabled: true
  volumePermissions:
    enabled: true
  hostname: "my-mastodon-redis-master.mastodon.svc.cluster.local"
  port: 6379
  auth:
    password: "admin"
  replica:
    replicaCount: 0

service:
  type: ClusterIP
  port: 80

externalAuth:
  oidc:
    enabled: false
  saml:
    enabled: false
    omniauth_only: false
  cas:
    enabled: false
  pam:
    enabled: false
  ldap:
    enabled: false

podSecurityContext:
  runAsUser: 991
  runAsGroup: 991
  fsGroup: 991

securityContext: {}

serviceAccount:
  create: true
  annotations: {}
  name: "mastodon-sa"

deploymentAnnotations: {}

podAnnotations: {}

revisionPodAnnotation: true

jobAnnotations: {}

resources: {}

nodeSelector: {}

tolerations: []

affinity: {}

topologySpreadConstraints: {}

volumeMounts: []

volumes: []

Helm Chart does not work on ARM

Steps to reproduce the problem

Steps taken to reproduce this issue:

  1. Install the helm chart on a Kubernetes cluster on a Raspberry Pi (k3s in my case):
helm install --namespace mastodon --create-namespace mastodon myvalues.yml
  2. Helm installation fails:
Error: INSTALLATION FAILED: failed post-install: timed out waiting for the condition
  3. Dependencies fail to start:
kubectl get pods
NAME                                  READY   STATUS             RESTARTS         AGE
mastodon-assets-precompile-sk7dp      0/1     Completed          0                3h16m
mastodon-db-migrate-bv9pr             0/1     Error              0                3h15m
mastodon-db-migrate-545rk             0/1     Error              0                3h13m
mastodon-db-migrate-cnkhz             0/1     Error              0                3h11m
mastodon-db-migrate-k5mmk             0/1     Error              0                3h8m
mastodon-db-migrate-h8vmb             0/1     Error              0                3h6m
mastodon-db-migrate-fts5r             0/1     Error              0                3h3m
mastodon-db-migrate-66kpr             0/1     Error              0                3h1m
mastodon-sidekiq-f469fd769-46g86      0/1     CrashLoopBackOff   29 (3m18s ago)   3h16m
mastodon-redis-master-0               0/1     CrashLoopBackOff   43 (85s ago)     3h16m
mastodon-streaming-75b9bc7659-dpktc   0/1     CrashLoopBackOff   69 (72s ago)     3h16m
mastodon-postgresql-0                 0/1     CrashLoopBackOff   43 (77s ago)     3h16m
mastodon-web-d6459dff8-9v44t          0/1     Running            54 (30s ago)     3h16m
  4. Error message for postgres: exec /opt/bitnami/scripts/postgresql/entrypoint.sh: exec format error
  5. kubectl describe indicates that the AMD64 docker images are loaded:
Containers:
  postgresql:
    Container ID:   containerd://eec9162ed1fc16460455845cd21bc90fcc3c9872b12d0a6bac5992ea4b8b40ab
    Image:          docker.io/bitnami/postgresql:14.2.0-debian-10-r14
    Image ID:       docker.io/bitnami/postgresql@sha256:d0e50b7d78623f2da85420d9dfc1a31c43fe09215edff9edf2a417c3edacee1c

Expected behaviour

The mastodon helm chart works on ARM64 (e.g. Raspberry Pi) and AMD64.

Actual behaviour

Images fail to start due to exec format error failures.

Detailed description

The Helm Chart depends on bitnami images which do not support ARM64 at the moment.

The respective Bitnami Issue: bitnami/charts#7305

Specifications

Mastodon: 290d78cea4850982a2843dc1a2954f0d66fe58d8

Target: Raspberry Pi with K3S (v1.25.3+k3s1)
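
Until the Bitnami dependencies publish arm64 images, one possible workaround (a sketch, not an official recommendation; the hostnames are placeholders and the keys should be checked against values.yaml) is to disable the bundled charts and point the release at externally managed, arm64-capable services:

postgresql:
  enabled: false
  postgresqlHostname: my-external-postgres   # placeholder for your own PostgreSQL host
  postgresqlPort: 5432

redis:
  enabled: false
  hostname: my-external-redis                # placeholder for your own Redis host
  port: 6379

elasticsearch:
  enabled: false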

Allow external ElasticSearch

Hello!

This chart has the option of using external redis/db, but the elasticsearch option is only on/off.

Can we add the ability to allow external ES similar to the other components?

Thanks!

How to increase mastodon web log level

I am not able to debug the lack of connection between Cloudflare Tunnels and Mastodon using the HAProxy ingress, because I cannot set the log level on mastodon-web to see whether I am reaching the service from the ingress at all.
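
One thing that might help (a sketch, assuming the chart's mastodon.extraEnvVars map shown later on this page is passed straight through to the web pods; RAILS_LOG_LEVEL is a standard Mastodon environment variable) is raising the Rails log level:

mastodon:
  extraEnvVars:
    RAILS_LOG_LEVEL: debug   # default is "info"; "debug" logs every request hitting the web pods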

Configuration issue with mastodon-streaming

Has anyone seen an issue with the mastodon-streaming container where it tries to use its own pod's IP address as the postgres host, despite DB_HOST being properly set in the ConfigMap?

I'm in the middle of testing a transition from my docker-compose environment to a k8s stack where I'm consuming this as a subchart, with a postgres cluster managed by another subchart that drives a CrunchyData PostgresCluster custom resource.

In my values file, I have mastodon.postgresql.postgresqlHostname set to the service name cluster-primary (a service published by the PostgresCluster) and I see that value makes its way to the pod environments correctly for both mastodon-web and mastodon-streaming pods. I pass the appropriate secret ref via mastodon.postgresql.auth.existingSecret and can see DB_PASS set appropriately in the pods as well.

The mastodon-web rails process seems to be contacting the database correctly, but all requests to wss://.../api/v1/streaming are returning a 401. Reading the logs from the mastodon-streaming pod, I see the following:

WARN Worker 1 now listening on 0.0.0.0:4000
ERR! error: no pg_hba.conf entry for host "10.2.4.187", user "mastodon", database "mastodon_production", no encryption
ERR! error: no pg_hba.conf entry for host "10.2.4.187", user "mastodon", database "mastodon_production", no encryption
ERR! error: no pg_hba.conf entry for host "10.2.4.187", user "mastodon", database "mastodon_production", no encryption
ERR! error: no pg_hba.conf entry for host "10.2.4.187", user "mastodon", database "mastodon_production", no encryption

The first line is on container boot, and subsequent lines seem to match up with requests made by my browser.

10.2.4.187 is the IP of the mastodon-streaming pod, not any pods related to my postgres cluster. I would expect the address used to match either "10.128.54.117", which is the endpoint associated with the cluster-primary service name set in DB_HOST or possibly 10.2.5.83, the IP associated with the pod hosting the actual primary pg instance.

Browsing through the deployment template file and the mastodon-env template file, everything seems correct there, and the proper values seem to make it to the final rendered resources, so perhaps this is a bug in the main mastodon/mastodon project? Is there some other environment variable that could be misleading rails to look to the wrong IP for postgres?

EDIT: Just realized the streaming server isn't rails, but a separate JS file run via node. The env setup code is pretty straightforward so it's a little perplexing how this mismatch could be happening. I'm going to try setting DATABASE_URL, which seems to override all other envvars and see if that is visible to node.
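
For what it's worth, a sketch of that experiment via the chart's extraEnvVars map (the hostname matches the cluster-primary service described above; the credentials are placeholders, and this is an assumption rather than a verified fix):

mastodon:
  extraEnvVars:
    # standard postgres URL form: postgres://USER:PASSWORD@HOST:PORT/DATABASE
    DATABASE_URL: "postgres://mastodon:<password>@cluster-primary:5432/mastodon_production"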

Enabling Elastic Search after initial Install

With the upcoming Search changes due in version 4.2.0 I started to look at what is needed to enable Elastic Search on my instance. (I had been running without Elastic Search as it is a single user instance)

Looking at the values.yml I find the following:

chart/values.yaml

Lines 235 to 246 in 4b6fd9f

# -- https://github.com/bitnami/charts/tree/master/bitnami/elasticsearch#parameters
elasticsearch:
  # `false` will disable full-text search
  #
  # if you enable ES after the initial install, you will need to manually run
  # RAILS_ENV=production bundle exec rake chewy:sync
  # (https://docs.joinmastodon.org/admin/optional/elasticsearch/)
  # @ignored
  enabled: true
  # @ignored
  image:
    tag: 7

It is not clear where/how to run the command mentioned and the link now 404's (I assume the new version is here https://docs.joinmastodon.org/admin/optional/elasticsearch/)

Can you

  1. confirm how the command should be run. I expect something like
    kubectl -n mastodon exec -it mastodon-web-75b84997c-s4lfq -- RAILS_ENV=production bundle exec rake chewy:sync
    
  2. That this is still valid for version 4.2.0?
  3. Update the URL in the comment

Thanks
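
For reference, a hedged sketch of how that command might be run (untested; note that an inline RAILS_ENV=... assignment is not an executable on its own, so it has to go through env or a shell):

kubectl -n mastodon exec -it deployment/mastodon-web -- \
  env RAILS_ENV=production bundle exec rake chewy:sync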

depend on a pg operator instead?

I would recommend depending on a postgres operator - like https://github.com/zalando/postgres-operator - which will enable you to easily have DB backups and DB scaling / HA managed by the operator (the mastodon chart can then just define a pg instance per the operator's CRD) - and it would be much easier to start getting horizontal scaling working with Mastodon.

Helm allow scaling sidekiq Deployment

Pitch

The current Helm setup creates one Deployment for sidekiq which handles all queues.

We could add something like this to values.yaml to allow scaling the queues separately.

  sidekiq:
    concurrency: 25
    workers:
    - name: pullpush
      count: 2
      queues: ["push,6", "pull"]
    - name: mailers
      count: 1
      queues: ["mailers"]

Each worker would correspond to one Deployment which would handle just the specified queues.
The previous one worker that handles all the queues could be moved to a conditional defaulting to true.

The downside of this solution is that an administrator would have to know about all the queues to manage, but on the other hand this feature would likely only be used at the point where you know when you want to scale up.
The benefit is that the chart itself doesn't have to know about the queues.

Another solution could be creating deployments scaled to 0 for each queue by default.
The downside here is that the chart would have to know about the queues, but on the other hand they don't change that often.
Additionally, an admin would have no way of configuring a sidekiq that handles multiple queues.
The benefit here is that there's less magic and values.yaml would be a little smaller.

Motivation

The documentation is very adamant about: Make sure you only have one scheduler queue running!!

improved ldap settings

diff --git a/templates/configmap-env.yaml b/templates/configmap-env.yaml
index 60efedd..5a0bc8f 100644
--- a/templates/configmap-env.yaml
+++ b/templates/configmap-env.yaml
@@ -288,13 +288,13 @@ data:
   {{- if .Values.externalAuth.ldap.enabled }}
   LDAP_ENABLED: {{ .Values.externalAuth.ldap.enabled | quote }}
   LDAP_HOST: {{ .Values.externalAuth.ldap.host }}
-  LDAP_PORT: {{ .Values.externalAuth.ldap.port }}
+  LDAP_PORT: {{ .Values.externalAuth.ldap.port | quote }}
   LDAP_METHOD: {{ .Values.externalAuth.ldap.method }}
   {{- with .Values.externalAuth.ldap.base }}
   LDAP_BASE: {{ . }}
   {{- end }}
-  {{- with .Values.externalAuth.ldap.bind_on }}
-  LDAP_BIND_ON: {{ . }}
+  {{- with .Values.externalAuth.ldap.bind_dn }}
+  LDAP_BIND_DN: {{ . }}
   {{- end }}
   {{- with .Values.externalAuth.ldap.password }}
   LDAP_PASSWORD: {{ . }}

The port needs to be quoted, or k8s will complain about a number value for a string. And it's dn, not on; the key in the values file should be updated as well.

Thanks!

On multi-node Kubernetes, the default settings on ReadWriteOnce without pod affinity are non-functional

Steps to reproduce the problem

  1. Install Mastodon from the Helm chart to a multi-node Kubernetes cluster with an NFS storage class.
  2. If the mastodon-web and mastodon-sidekiq-all-queues pods end up on different nodes, some of them will hang indefinitely in "ContainerCreating".

They are waiting to mount the persistence volumes system and assets. These can only be mounted on a single node at a time.

Expected behaviour

Everything should work on roughly default settings

Actual behaviour

The pods hang in ContainerCreating state in a difficult to understand way.

Detailed description

The default settings are non-functional on multi-node clusters. Either there needs to be a better comment warning users to set pod affinities, the default access mode should be ReadWriteMany, or there should be a pod affinity defined which puts these two kinds of pods on the same node by default.

Specifications

Mastodon: edge
OS: Ubuntu
Kubernetes: MicroK8S
Nodes: 2+

Elasticsearch never becomes ready

I enabled elasticsearch with the default values and found the system stuck in a CrashLoopBackOff. After investigating, it turns out neither the readiness nor the liveness probe was able to return true; delaying or disabling both of them allowed the cluster to live and take requests from rails.

It turns out this was a bug with bitnami/elasticsearch, and was fixed about a month ago. I upgraded the chart and the problem was resolved.

I have implemented the change in my fork. PR to follow shortly.

helm install order of operations prioritizes db migrate job before env

When installing this chart via ArgoCD, it always tries to create job-db-migrate.yaml first, but the job can't run without configmap-env.yaml being applied, and the config map can't be applied because the job is set to run before everything else via a helm hook:

annotations:
  "helm.sh/hook": post-install,pre-upgrade
  "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
  "helm.sh/hook-weight": "-2"

According to the helm docs:

pre-install - Executes after templates are rendered, but before any resources are created in Kubernetes
post-install - Executes after all resources are loaded into Kubernetes

and the same docs further down:

Helm defines two hooks for the install lifecycle: pre-install and post-install. If the developer of the foo chart implements both hooks, the lifecycle is altered like this:
...
4. After some verification, the library renders the foo templates
5. The library prepares to execute the pre-install hooks (loading hook resources into Kubernetes)
6. The library sorts hooks by weight (assigning a weight of 0 by default), by resource kind and finally by name in ascending order.
7. The library then loads the hook with the lowest weight first (negative to positive)
8. The library waits until the hook is "Ready" (except for CRDs)
9. The library loads the resulting resources into Kubernetes. Note that if the --wait flag is set, the library will wait until all resources are in a ready state and will not run the post-install hook until they are ready.
10. The library executes the post-install hook (loading hook resources)
11. The library waits until the hook is "Ready"

But the config map doesn't have anything similar:

metadata:
  name: {{ include "mastodon.fullname" . }}-env
  labels:
    {{- include "mastodon.labels" . | nindent 4 }}
data:

I can submit a PR to add the following to the configmap-env.yaml:

metadata:
  annotations: 
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-3"

I believe this might also affect #18 because the job has highest priority and is executed before the configMap would be available.

Using external Elasticsearch Database

It would be a nice feature to use an external Elasticsearch Database outside of the Deployment / Kubernetes Cluster.
Like with the PostgreSQL or Redis.

It is possible or planned to implement such feature?

Helm chart autoscaling doesn't scale the web deployment

Steps to reproduce the problem

  1. I set the autoscaling.enabled to true and set minReplicas as described in the values.yaml
  2. I deployed the Helm chart, and noticed the pod count was incorrect
  3. The HPA throws an error:
status:
  conditions:
  - lastTransitionTime: "2022-11-14T20:13:29Z"
    message: 'the HPA controller was unable to get the target''s current scale: deployments/scale.apps
      "mastodon" not found'
  4. Checking the names, the name for the target of the HPA and the rendered name of the deployment do not match.

Expected behaviour

The HPA should set the number of pods in the deployment correctly

Actual behaviour

The pod count stayed at one, and the HPA could not find the deployment to scale it.

Detailed description

No response

Specifications

Latest version of the chart.

Can't access fresh install - Stylesheets fail integrity metadata check

Hi all,

I have installed Mastodon using this Helm Chart on my cluster and I am having trouble accessing it.
I have an Nginx Ingress Controller in front of the containers with Domain and a valid SSL cert configured.
Every time I navigate to the URL I only see 2 icons and receive a lot of errors regarding the integrity of the stylesheets in the console:

Screenshot 2022-12-22 at 13 05 50

Judging by the network tab, the stylesheets are loading fine and the content looks okay as well.
I couldn't find any errors in any of the pods either and the precompile command ran successfully also.
Using Safari or Chrome results in the same issue.

Does anyone have an idea what the error could be or even just how I could go about debugging this?
I tried manually port-forwarding port 3000 from the web container to see if I could access the site that way, but my connection seems to get refused immediately.

Thanks a lot!

interest in having a release via GHA?

Hoi and thanks for maintaining this helm chart! :)

I set up a demo at jessebot/mastodon-helm-chart on how to set up a release via GitHub Pages using the helm/chart-releaser-action. I can submit a pull request with that if you'd like. In the demo I have on my fork of this repo, I'm also using the latest versions of the Bitnami OCI-compliant helm charts, but I would remove that from a potential PR on this topic to keep it scoped properly.

The gist of the required changes would be:

  • create charts/mastodon directory
  • move templates/, values.yaml, dev-values.yaml, Chart.yaml, and Chart.lock into charts/mastodon
  • create a branch called gh-pages
  • under ⚙️ Settings > Actions > Workflow Permissions, select "Read and write permissions" (i.e. "Workflows have read and write permissions in the repository for all scopes")
  • create a workflow file in `.github/workflows/release.yaml`, something like this:
name: Release Chart
concurrency: chart_releaser

on:
  push:
    branches:
      - main
    paths-ignore:
      - '.github/**'
      - '**/README.md'
      - 'LICENSE'

jobs:
  release:
    runs-on: ubuntu-22.04
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Fetch history
        run: git fetch --prune --unshallow

      - name: Configure Git
        run: |
          git config user.name "$GITHUB_ACTOR"
          git config user.email "[email protected]"

      # See https://github.com/helm/chart-releaser-action/issues/6
      - name: Set up Helm
        uses: azure/[email protected]
        with:
          version: v3.11.1

      - name: Add dependency chart repos
        run: |
          helm dep update charts/mastodon

      - name: Run chart-releaser
        uses: helm/[email protected]
        env:
          CR_GENERATE_RELEASE_NOTES: true
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"

This is not dissimilar to what we're doing over in the community-led nextcloud/helm repo. The only caveat is that you still have to remember to manually bump the helm chart version each time in any commits you merge.

This isn't urgent or required for me right now, but it would be nice to have more official releases :)

How to generate vapid keys for this Chart?

Hi all,
I tried installing Mastodon using this chart and it requires me to set the vapid private_key and public_key.
Now according to the normal installation instructions on the mastodon website one can do this using the "mastodon:webpush:generate_vapid_key" command, after installing the packages, which doesn't really work when deploying to Kubernetes with Helm.
I would assume that it is possible to run the generate command inside one of the containers, but as I don't yet have a deep understanding of Mastodon I don't know which one.
I think it would be great if on the README you could provide a one-shot "kubectl run ..." command that uses the correct container image and generates these keys, such that newcomers like me know how to generate them in the Helm setup :)
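
One possible one-shot sketch (untested; the image tag is an assumption, and the rake task name is the one mentioned above from the Mastodon docs — it may need extra environment variables depending on the image):

# run the generator in a throwaway pod, print the keys, then delete the pod
kubectl run vapid-keygen --rm -it --restart=Never \
  --image=ghcr.io/mastodon/mastodon:v4.2 \
  -- bundle exec rake mastodon:webpush:generate_vapid_key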

How should you upgrade to 4.2.5?

The current chart has the tag set to v4.2 which should track the latest 4.2.x release

But the default pullPolicy is IfNotPresent, so it won't upgrade without overriding the default.

Should this default be changed?

Or is the recommendation to pass the definitive tag name e.g. v4.2.5?
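
In the meantime, pinning an exact version in your values sidesteps the question, e.g. (a sketch reusing the image block shown elsewhere on this page):

image:
  repository: ghcr.io/mastodon/mastodon
  tag: "v4.2.5"            # pin the exact release instead of the floating v4.2 tag
  pullPolicy: IfNotPresent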

Helm chart unable to create fresh server: `relation "accounts" does not exist` unless db migration job is run manually

Steps to reproduce the problem

Creating a fresh, completely new mastodon server using the helm chart fails to initialize. It appears that the required database tables are not initialized, as both the web and sidekiq containers return relation "accounts" does not exist.

From my casual perusal, it appears that the creation of the tables should be done by job-db-migrate.yaml, but it never runs because the web container never finishes installing; it needs the tables to be initialized and is thus stuck in a crash loop.

Executing the template to a file, copying the job from there, and manually applying the db migration job with kubectl results in a working deployment.
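
A sketch of that manual workaround (the release and namespace names are assumptions; the template/--output-dir pattern mirrors the reproduction steps quoted in a later issue on this page):

# render the chart to plain manifests, then apply only the migration job by hand
helm template my-mastodon . -f values.yaml --output-dir rendered
kubectl -n mastodon apply -f rendered/mastodon/templates/job-db-migrate.yaml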

Expected behaviour

Helm chart should spin up database tables

Actual behaviour

Web and sidekiq containers crash-loop while booting

Detailed description

No response

Specifications

Chart v 3.0.0,
Tag: latest (v4.0.2)
K8s

Allow user configurable probes for streaming and worker Deployments

Pitch

I'd like to be able to, using the first-party Mastodon Helm chart, configure the liveness, readiness, and startup probes on Deployment objects to meet the needs of my running environment.

Motivation

People who want to do more advanced health probing and checking, or users who want to customize probes to operate Mastodon in slower environments, are two examples that immediately spring to mind.

Helm chart cannot be updated, dependencies are too old

Steps to reproduce the problem

  1. Get the mastodon repo, using whatever tag you want (stable-3.4, release-3.5.3...)
  2. Go to chart
  3. Type helm dep update

Expected behaviour

The dependencies should be downloaded

Actual behaviour

An error occurred

Detailed description

$ helm dep update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
Error: can't get a valid version for repositories elasticsearch, postgresql, redis. Try changing the version constraint in Chart.yaml

Specifications

Whatever the version of Mastodon :)

Actually, none of the versions are OK. Checking on artifacthub.io, the versions listed in the Chart.yaml file are now very old and no longer listed.

NOTICE: Helm chart rework + future deprecation of this chart.

We have some big plans for our official Mastodon helm chart.

The way we've been developing and distributing the current helm chart has been less than ideal in a few different ways, and since there's been a shortage of time and people to work on making this chart better & address users' issues and suggestions, development of this particular chart has not always followed best practices or used consistent conventions. So as a result, we want to move this chart to a new home, clean up the bloat, and refactor it into a couple different charts to better suit different use cases.

Our overall goals for this are:

  • Host our helm charts from a git repo that has helm repo support (helm repo add ...).
  • Host multiple helm charts out of one github repo.
    • Initial idea is one chart which includes dependencies, and one standalone for those bringing their own.
    • Allows us to host other helm charts that we write/utilize in our own instances.
  • Refactor/simplify the chart itself to be less bloated, and be more consistent with best practices and conventions.

All of this means that we will need to create a new git repository to host this. The new home for these charts will be located here once they're ready: https://github.com/mastodon/helm-charts.

This also means that this chart will eventually be deprecated, but we have no intention of doing it right away. Now that we have more resources to devote to the helm charts, we will be going through all the issues and PRs in this chart, and addressing/incorporating them wherever we can, which will all be carried forward into the new chart. And this chart will continue to be supported for a while after the new charts' release to give everyone time to transition. However, because of these plans, any PRs or suggestions that result in large or fundamental changes will have to be rejected for the time being until the new repo is up and ready to go.

Please feel free to comment with any concerns or questions you might have!

relation "accounts" does not exist at character 454

Each time I try to deploy this helm chart with the included postgres, the deployment fails and the postgres container has this in the logs:

2022-12-22 20:13:15.277 GMT [1] LOG:  pgaudit extension initialized
2022-12-22 20:13:15.281 GMT [1] LOG:  starting PostgreSQL 14.2 on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
2022-12-22 20:13:15.281 GMT [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2022-12-22 20:13:15.282 GMT [1] LOG:  listening on IPv6 address "::", port 5432
2022-12-22 20:13:15.283 GMT [1] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2022-12-22 20:13:15.287 GMT [155] LOG:  database system was shut down at 2022-12-22 20:13:15 GMT
2022-12-22 20:13:15.309 GMT [1] LOG:  database system is ready to accept connections
2022-12-22 20:13:26.068 GMT [169] ERROR:  relation "accounts" does not exist at character 454
2022-12-22 20:13:26.068 GMT [169] STATEMENT:  SELECT a.attname, format_type(a.atttypid, a.atttypmod),
	       pg_get_expr(d.adbin, d.adrelid), a.attnotnull, a.atttypid, a.atttypmod,
	       c.collname, col_description(a.attrelid, a.attnum) AS comment
	  FROM pg_attribute a
	  LEFT JOIN pg_attrdef d ON a.attrelid = d.adrelid AND a.attnum = d.adnum
	  LEFT JOIN pg_type t ON a.atttypid = t.oid
	  LEFT JOIN pg_collation c ON a.attcollation = c.oid AND a.attcollation <> t.typcollation
	 WHERE a.attrelid = '"accounts"'::regclass
	   AND a.attnum > 0 AND NOT a.attisdropped
	 ORDER BY a.attnum

Helm Chart - postgresql container in crashbackoff loop

Steps to reproduce the problem

Pulled latest from git
Modified values.yaml to include relevant info
Deployed via helm

Expected behaviour

postgresql pod runs as expected

Actual behaviour

Postgresql pod is in crashbackoff loop

Detailed description

Warning FailedScheduling 2m27s default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
Normal Scheduled 2m26s default-scheduler Successfully assigned mastodon/mastodon-postgresql-0 to k8s3
Normal Pulled 81s (x4 over 2m18s) kubelet Container image "docker.io/bitnami/postgresql:14.2.0-debian-10-r14" already present on machine
Normal Created 71s (x4 over 2m18s) kubelet Created container postgresql
Normal Started 68s (x4 over 2m18s) kubelet Started container postgresql
Warning BackOff 54s (x13 over 2m15s) kubelet Back-off restarting failed container

The PVC does get created and bound without a problem.

There are no errors, and this is all I see in the pod logs:

postgresql 16:23:14.56

postgresql 16:23:14.56 Welcome to the Bitnami postgresql container
postgresql 16:23:14.56 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql
postgresql 16:23:14.57 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues
postgresql 16:23:14.57
postgresql 16:23:14.64 INFO ==> ** Starting PostgreSQL setup **
postgresql 16:23:14.69 INFO ==> Validating settings in POSTGRESQL_* env vars..
postgresql 16:23:14.71 INFO ==> Loading custom pre-init scripts...
postgresql 16:23:14.75 INFO ==> Initializing PostgreSQL database...
postgresql 16:23:14.80 INFO ==> pg_hba.conf file not detected. Generating it...
postgresql 16:23:14.81 INFO ==> Generating local authentication configuration

Specifications

Mastodon: latest

External Redis password not stored

Expected: setting redis.auth.password as a chart value stores the password as a secret for use by other services in the same way that postgresql.auth.password is handled.

Behavior: When using an external Redis service, setting redis.auth.password as a chart value does not store the password for use by the other services. Currently, the value is only used to set the password for a Redis instance created by the chart. The only way to use an external Redis service is to set the password in an existingSecret outside of the chart.

This results in the error:

secret "mastodon-redis" not found: CreateContainerConfigError

Note that postgresql.auth.password gets stored in the mastodon.fullname secret name and the Redis secret is referenced from .Release.Name-redis secret name.
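
Until that is fixed, a possible workaround sketch is to create the expected secret out of band before installing (the secret name follows the .Release.Name-redis pattern mentioned above, and the redis-password key name is an assumption based on the Bitnami convention):

kubectl -n mastodon create secret generic mastodon-redis \
  --from-literal=redis-password='<your external redis password>'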

2023-07-19 edit: I have a pull request to fix this that I will submit tomorrow.

Scaling

How is everyone scaling this chart? I'm looking for ways to scale when the webserver gets busier, but also when there are a ton of items in the queue and my instance needs more workers to process the data in the queue.
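
As a starting point, here is a sketch using the values layout that appears elsewhere on this page — more web replicas for HTTP traffic, plus an extra Sidekiq worker group for queue backlog (names and numbers are illustrative only):

mastodon:
  web:
    replicas: 3              # scale the Puma web deployment for HTTP load
  sidekiq:
    workers:
      - name: all-queues     # keep a single replica here: it runs the scheduler queue
        concurrency: 25
        replicas: 1
        queues:
          - default,8
          - push,6
          - ingress,4
          - mailers,2
          - pull
          - scheduler
      - name: pull-push      # extra capacity for the busy queues; safe to scale out
        concurrency: 25
        replicas: 2
        queues:
          - push
          - pull
          - ingress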

Migrate away from using Bitnami for dependencies in Helm chart. No support for arm64 containers.

Pitch

The Helm chart's Bitnami dependencies (PostgreSQL, Redis, Elasticsearch) all lack arm64 support, even though the Bitnami community has been asking for this support forever.

https://github.com/mastodon/mastodon/blob/ef196c913c77338be5ebb1e02af2f6225f857080/chart/Chart.yaml#L25-L36

Is it possible to migrate to another provider for these dependencies who maintain arm64 containers?

Motivation

Run Kubernetes cluster on arm64.

sslv3 alert handshake failure in cleanup cronjob

Looks like my media cleanup cronjob isn't working. It runs with logs that look like this:

Error processing 109971050890472566: SSL_connect returned=1 errno=0 peeraddr=104.18.9.90:443 state=error: sslv3 alert handshake failure
Progress: |==
Error processing 109971050965759532: SSL_connect returned=1 errno=0 peeraddr=104.18.8.90:443 state=error: sslv3 alert handshake failure

Error processing 109971051589630739: SSL_connect returned=1 errno=0 peeraddr=104.18.9.90:443 state=error: sslv3 alert handshake failure
Progress: |==
Error processing 109971054235710502: SSL_connect returned=1 errno=0 peeraddr=104.18.9.90:443 state=error: sslv3 alert handshake failure
Progress: |==
Error processing 109971054355490759: SSL_connect returned=1 errno=0 peeraddr=104.18.9.90:443 state=error: sslv3 alert handshake failure

Error processing 109971054513703975: SSL_connect returned=1 errno=0 peeraddr=104.18.9.90:443 state=error: sslv3 alert handshake failure

Error processing 109971054916655204: SSL_connect returned=1 errno=0 peeraddr=104.18.8.90:443 state=error: sslv3 alert handshake failure
Progress: |==
Error processing 109971056761130102: SSL_connect returned=1 errno=0 peeraddr=104.18.9.90:443 state=error: sslv3 alert handshake failure

Any ideas?

Error generating template when disabling postgres / redis / elasticsearch

Background

When utilizing a cloud provider for data stores (e.g. AWS RDS for Postgres, ElastiCache for Redis, etc.), it is a very reasonable use case to disable all three of the Bitnami charts specified in the dependencies. When this happens, the Bitnami common library chart is not included. This causes issues when the mastodon chart code refers to common.names.fullname (which is only defined in the Bitnami common library).

Steps to reproduce

  • clone charts repo
  • fill in basic values boilerplate for secrets
  • disable postgres, redis, and elasticsearch
  • generate template

Error received:

Error: template: mastodon/templates/secret-smtp.yaml:5:29: executing "mastodon/templates/secret-smtp.yaml" at <include "common.names.fullname" .>: error calling include: template: no template "common.names.fullname" associated with template "gotpl"

Possible resolution?

  • use mastodon.fullname instead (not certain how these would differ)
  • include the bitnami common library chart in some toplevel way such that disabling the dependencies doesn't preclude its inclusion

Final note

I am happy to open a PR with one of the above solutions, but lack a bit of the context necessary to decide which is the best choice. Apologies, I'm not terribly familiar with helm templating so some of this code is Greek to me. Cheers

Enhance README.md with Emojis for Clarity

Issue Description:
I would like to contribute to this repository by enhancing the README.md for improved clarity and aesthetics. My suggestion is to add emojis to the existing headings in the README to make it more visually appealing and user-friendly.

Proposed Emojis for Headings:

Here are emojis for the additional headings:

  1. Introduction 🌟
  2. Configuration ⚙️
  3. Administration 🛡️
  4. Missing features
  5. Upgrading 🚀
  6. Upgrades in 2.1.0 🆕
  7. Upgrades in 2.0.0 🔄

These emojis can help add some visual flair to your README.md and make it more engaging for readers. If you have any more headings or need further assistance, feel free to let me know!

I believe that adding emojis to the headings will not only make the documentation more engaging but also help readers quickly identify and understand the content of each section.

Please assign this issue to me for the hacktoberfest. :)

Docs for Active Record Encryption secrets?

I've just pulled the latest version of the chart in order to do the 4.2.10 upgrade (from 4.2.6)

And it is now asking for extra secrets for ACTIVE_RECORD_ENCRYPTION

The docs imply these were added for 4.3.0 (a future release?), so it's unclear if these are really needed for 4.2.10, or if the current version of the helm chart is now only useful with nightly.

As there are no releases tagged in this repo (though they are mentioned in the CHANGELOG.md), how do I get a suitable version of the helm chart to do the upgrade? Or should I just generate some random strings to use as the keys (and if so, are there any restrictions on the required length)?

Duplicate annotation in deployment-sidekiq

I tried to deploy using helm and could not, so I identified the cause.
The cause was a duplicate annotation key.

The deployment-sidekiq.yaml file has a duplicate annotation key, checksum/config-secrets. k8s requires annotation keys to be unique, so installing the mastodon chart failed.

I will show you how to reproduce it in the next section. I tried with 3934da1.

Reproduction

$ helm dependency update
$ helm template . \
  --values dev-values.yaml \
  --output-dir rendered-templates
$ cat rendered-templates/mastodon/templates/deployment-sidekiq.yaml
...
annotations:
  # roll the pods to pick up any db migrations or other changes

  rollme: "1"
  checksum/config-secrets: "c988861c081539520cbb260007ba5b9292a189a02254ba616eb4e67445f210d0"
  checksum/config-configmap: "8ed3ba7c668487c141b93a946bfdb2b3caebf3f8ae5efccbe922cbedc711fa6d"
  checksum/config-secrets: "58c287f58852f11e58b06dbea32d36751824156324743461f864565d9ebbb969"
...
# -> the annotation `checksum/config-secrets` in L31, L33 were duplicated.

I think #38 may be a hint.

Question about the chart maintenance

Apologies that this isn't a technical question, more around the maintenance/support of the chart.

My understanding is that this repo is now the supported helm chart for mastodon since it was separated out of the main mastodon repo. However, there are a number of open issues, and some pending PRs that fix them, but no merges to main in several months.

Does this chart need additional maintainers? I'm sure there are some of us who'd like to make sure it's up to date and stable, if anything to help the adoption of mastodon itself.

streaming service ingress path should match mastodon documentation

the mastodon api documentation states the streaming api websocket endpoint is /api/v1/streaming
but the ingress template of this chart only forwards /api/v1/streaming/ to the service.

this came up in mastodon too (discussion at mastodon/mastodon#19872, patch at mastodon/mastodon#19896), and breaks some clients (elk, pinafore nolanlawson/pinafore#2161) that follow the documentation, but not the default mastodon frontend or glitch.social which both use the undocumented /api/v1/streaming/ directly.

Upgrade Redis ?

Something I've been wondering for a long while now. Why is this chart still using Redis 6?

Redis 7 is used in the docker-compose.yaml file and I'm running Redis 7 without any obvious issue in my own chart/cluster - so - is there some reason it's set to such an old version in Chart.yaml?

It just needs something like so:

diff --git a/Chart.lock b/Chart.lock
index 961e4fa..c0cdc69 100644
--- a/Chart.lock
+++ b/Chart.lock
@@ -7,6 +7,6 @@ dependencies:
   version: 11.1.3
 - name: redis
   repository: https://raw.githubusercontent.com/bitnami/charts/archive-full-index/bitnami
-  version: 16.13.2
-digest: sha256:17ea58a3264aa22faff18215c4269f47dabae956d0df273c684972f356416193
-generated: "2022-08-08T21:44:18.0195364+02:00"
+  version: 17.7.6
+digest: sha256:1a9598fa8e475adb9a9ed05f9d8aa37366895165f4cb3d861dd1483f6c7dc8c2
+generated: "2023-03-07T11:21:33.592016Z"
diff --git a/Chart.yaml b/Chart.yaml
index 1ebc973..f0eb034 100644
--- a/Chart.yaml
+++ b/Chart.yaml
@@ -32,6 +32,6 @@ dependencies:
     repository: https://raw.githubusercontent.com/bitnami/charts/archive-full-index/bitnami
     condition: postgresql.enabled
   - name: redis
-    version: 16.13.2
+    version: 17.7.6
     repository: https://raw.githubusercontent.com/bitnami/charts/archive-full-index/bitnami
     condition: redis.enabled

Error in Helm template prevents installation when caching proxy is defined

Error:

Error: INSTALLATION FAILED: template: mastodon/templates/deployment-web.yaml:21:12: executing "mastodon/templates/deployment-web.yaml" at <include "mastodon.rollingPodAnnotations" .>: error calling include: template: mastodon/templates/_helpers.tpl:60:30: executing "mastodon.rollingPodAnnotations" at <include (print $.Template.BasePath "/configmap-env.yaml") .>: error calling include: template: mastodon/templates/configmap-env.yaml:54:27: executing "mastodon/templates/configmap-env.yaml" at <.Values.mastodon.s3.alias_host>: can't evaluate field Values in type string

S3_ALIAS_HOST: {{ .Values.mastodon.s3.alias_host}}

This should be . not .Values... I would think.
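
For reference, a sketch of what the corrected lines might look like, assuming the failing reference sits inside a with block over .Values.mastodon.s3.alias_host (which is what the error message and the suggestion above point to):

{{- with .Values.mastodon.s3.alias_host }}
S3_ALIAS_HOST: {{ . }}
{{- end }}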

Helm Deployment - DB migrate job should run on post-upgrade

Helm Deployment - DB migrate job should run on post-upgrade instead of on pre-upgrade

https://github.com/tootsuite/mastodon/blob/c3786b29b7730b8c858320599508a20b11884108/chart/templates/job-db-migrate.yaml#L8

Looking at the upgrade notes:
https://github.com/tootsuite/mastodon/releases/tag/v3.3.0

image

I guess the db migrate job should run after the new code is running.

If you are upgrading to a new mastodon version, with the current configuration, the db-migrate job is running after the new image is pulled.

[ActionDispatch::HostAuthorization::DefaultResponseApp] Blocked host: XXX

Mastodon web logs are full of blocking log entries:

[ActionDispatch::HostAuthorization::DefaultResponseApp] Blocked host: 10.1.149.159
[ActionDispatch::HostAuthorization::DefaultResponseApp] Blocked host: 10.1.145.207
[ActionDispatch::HostAuthorization::DefaultResponseApp] Blocked host: 10.1.154.2
[ActionDispatch::HostAuthorization::DefaultResponseApp] Blocked host: 10.1.174.212

Those addresses are the K8S node assigned private addresses. The whole subnet had been added in env variables:

  • TRUSTED_PROXY_IP : 10.0.0.0/8,
  • ALLOWED_PRIVATE_ADDRESSES : 10.0.0.0/8,

Any ideas on how to resolve that issue?

Unable to create new account, email "does not seem to exist"

This chart has been deployed and seems to be working fine with a letsencrypt cert and such, but I am unable to create any new accounts as the registration page states "does not seem to exist" no matter which email I try to use (in this case, a gmail.com email). Something appears broken in this deployment in that it will not allow new account creation.

affinity settings in values.yaml are ignored for multi-arch

My cluster is multi-arch with both arm64 and amd64 nodes. It appears some of the chosen apps for this deployment are not multi-arch, despite mastodon supposedly supporting it. Redis and postgres are locked to amd64 for some reason.

That said, I have tried to apply affinity settings in the respective sections of values.yaml, but they are ignored when deployed. Setting the affinity globally at the bottom only seems to work for the mastodon pods, and it is still ignored on the postgres and redis pods.
