Comments (14)

monotek commented on August 11, 2024

This is also not possible for historical reasons (shared disk) without using a RWX disk.
If you wanted to scale, you would need to sync the disk content from one pod to another.

See also:

#107
#58
#51
#28

mgruner commented on August 11, 2024

I also believe that the current Helm chart is misleading here. replicas > 1 cannot work properly as is, because Zammad does not support multiple instances of the background worker and websocket server. We intend to change this in the future, but that will take quite some time.

klml commented on August 11, 2024

This is also not possible for historical reasons (shared disk) without using a RWX disk.

That's clear; that's why we use a pre-defined NFS ;)

persistence:
  enabled: true
  size: 10Gi
  existingClaim: lhm-var-zammad

mgruner commented on August 11, 2024

After some discussions, a short update on this topic.

A first (hacky) solution idea was to allow replicas > 1 and find some way of making sure that the WS and Scheduler containers are active only once in the cluster. The other containers would be present, but not active. This will probably not work because of Kubernetes' readinessProbe and livenessProbe: with the WS service not being active, either the entire pod would not receive traffic (if the probe reports failure), or the (inactive) WS container would receive traffic (if the probe reports success). So this seems impossible.
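
For reference, a minimal sketch of how readiness and liveness probes are typically declared on such a container (the port and the tcpSocket probe style are assumptions, not necessarily what the chart actually uses):

containers:
  - name: zammad-websocket
    # readinessProbe gates whether the pod is listed in Service endpoints,
    # i.e. whether it receives traffic at all.
    readinessProbe:
      tcpSocket:
        port: 6042   # assumed websocket port
      initialDelaySeconds: 10
      periodSeconds: 10
    # livenessProbe failure makes the kubelet restart the container;
    # it does not merely take the pod out of rotation.
    livenessProbe:
      tcpSocket:
        port: 6042
      initialDelaySeconds: 30
      periodSeconds: 15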

Another approach could be to modify the Helm chart like this:

  • It should react to the value of Values.replicas
  • If that is 1, everything stays as is.
  • If it is > 1
    • zammad-scheduler and zammad-websocket are not included in the zammad StatefulSet any more.
    • A second StatefulSet zammad-nonscalable (or similar) is created, which contains them and has replicas fixed to 1 (see the sketch after this list).
    • An init container will wait until the railsserver is available. This ensures that the zammad init process has finished in the primary pod.
    • Having multiple pods should be possible because replicas > 1 requires ReadWriteMany storage anyway.
    • With this, we'd end up with 2 pods: the zammad pod with the web server is scalable at will. The other pod is not. This is not a full HA setup, but it's as good as it can get with the current state of Zammad.
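
A rough sketch of how a template could branch on Values.replicas (helper names, labels and the image tag are illustrative placeholders, not the chart's actual ones):

{{- if gt (.Values.replicas | int) 1 }}
# Non-scalable companion StatefulSet: always exactly one replica,
# holding the scheduler and websocket containers.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: {{ include "zammad.fullname" . }}-nonscalable
spec:
  replicas: 1
  serviceName: {{ include "zammad.fullname" . }}
  selector:
    matchLabels:
      app.kubernetes.io/name: zammad-nonscalable
  template:
    metadata:
      labels:
        app.kubernetes.io/name: zammad-nonscalable
    spec:
      containers:
        - name: zammad-scheduler
          image: zammad/zammad:6.2.0   # placeholder image/tag
        - name: zammad-websocket
          image: zammad/zammad:6.2.0   # placeholder image/tag
{{- end }}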

@monotek, @klml, @t-shehab and everybody interested, what do you think about this idea? Can it work like this? Any suggestions for the waiting on the railsserver?
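
One possible way to do the waiting, sketched as an init container that polls the Rails server's Service before the non-scalable pod starts its main containers (the service name zammad-railsserver and port 3000 are assumptions, not necessarily the chart's actual names):

initContainers:
  - name: wait-for-railsserver
    image: busybox:1.36
    command:
      - sh
      - -c
      - |
        # Block until the rails server answers over HTTP; only then let
        # the scheduler/websocket containers of this pod start.
        until wget -q -O /dev/null http://zammad-railsserver:3000/; do
          echo "waiting for railsserver..."
          sleep 5
        done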

On the long run, we plan to make Zammad fully scalable, but that is not a short term solution.

mgruner commented on August 11, 2024

Hm, even that proposal might not work like this. Currently, the zammad-nginx container is tightly coupled to the WS server, contacting it via localhost and its port number. This will probably not work with a separate pod.

monotek commented on August 11, 2024

To scale Zammad we have to split up the StatefulSet into separate Deployments. That this was not done in the beginning, when the chart was new, has to do with the old way the container worked (copying files around).

If we no longer have to share any files between the pods, it should be relatively easy to achieve.

Using replicas > 1 in the StatefulSet and not having all containers active will not work. If the livenessProbe fails, the pod would even be restarted, not just stop receiving traffic.

mgruner commented on August 11, 2024

@monotek can you elaborate on this a bit more, please? Zammad still needs the var/ and storage/ folders (if used) at present. If we split into single Deployments, would that mean we always require a network volume like NFS?

mgruner commented on August 11, 2024

@monotek I have another question about this. Our init containers are also not scalable right now; they should not run several times in parallel. How could this be implemented in a new way together with scalable Deployments?

klml commented on August 11, 2024

Our init containers are also not scalable right now; they should not run several times in parallel.

From my experience with replicas: 2, the zammad StatefulSet always starts its pods one after the other, even on a plain install. So no two init containers are active at the same time.

mgruner commented on August 11, 2024

@klml yes, that is the specified behaviour for StatefulSets. But @monotek already suggested moving to Deployments.

monotek commented on August 11, 2024

@monotek can you elaborate on this a bit more, please? Zammad still needs the var/ and storage/ folders (if used) at present. If we split into single Deployments, would that mean we always require a network volume like NFS?

S3 usage would be my preferred workaround for this.
But I'm not sure if this currently works only for /storage or also for /var?

We can spin up our own S3 container in Zammad, using MinIO (http://min.io/).
The Bitnami folks again provide a Helm chart for this, which we can use as a dependency: https://github.com/bitnami/charts/tree/main/bitnami/minio
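
A minimal sketch of how that could look as a dependency in the chart's Chart.yaml (the version range and the minio.enabled condition are placeholders, not the chart's actual values):

dependencies:
  - name: minio
    version: "14.x.x"                                 # placeholder version range
    repository: "https://charts.bitnami.com/bitnami"
    condition: minio.enabled                          # only deploy it when explicitly enabled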

@monotek I have another question about this. Our init containers are also not scalable right now; they should not run several times in parallel. How could this be implemented in a new way together with scalable Deployments?

The init containers could be put into a separate pod, or even into a Kubernetes Job.

What would be nice is if the Zammad services could check whether the database already has the expected schema version, and fail if not. If that were an option, the containers would simply restart until the schema migration has run.

mgruner commented on August 11, 2024

S3 usage would be my preferred workaround for this.

S3 support was indeed added with Zammad 6.2, and we should consider integrating it, maybe as part of this change, since we need to release a new major version anyway. But: it is only used as a new back end for Zammad's file storage. The var/ folder is still a regular directory, so we cannot get by without requiring an existing network volume at this point. We will work towards changing this in the future, but not for Zammad 6.2.

The init containers could be put into a separate pod, or even into a Kubernetes Job.

Yes. A Kubernetes Job seems to be the correct choice here, combined with the post-install and post-upgrade Helm chart hooks. This Job would essentially run the existing init containers.
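
A rough sketch of such a hook Job (image, tag and command are placeholders; the real Job would run whatever the current init containers do):

apiVersion: batch/v1
kind: Job
metadata:
  name: zammad-init
  annotations:
    # Run once after install and after every upgrade, then clean up.
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  backoffLimit: 10
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: zammad-init
          image: zammad/zammad:6.2.0                          # placeholder image/tag
          command: ["bundle", "exec", "rake", "db:migrate"]   # placeholder for the existing init steps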

What would be nice is if the Zammad services could check whether the database already has the expected schema version, and fail if not. If that were an option, the containers would simply restart until the schema migration has run.

This would be a nice simplification. I guess we can build this into Zammad via a new env like RAILS_CHECK_PENDING_MIGRATIONS. Then we would not need an init container for the waiting.
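
Sketched as it might appear in a container spec (the variable name is the one proposed above; the exact value it expects is an assumption):

env:
  - name: RAILS_CHECK_PENDING_MIGRATIONS
    # Refuse to boot while migrations are pending, so Kubernetes keeps
    # restarting the container until the migration Job has finished.
    value: "true"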

mgruner commented on August 11, 2024

This would be a nice simplification. I guess we can build this into Zammad via a new env like RAILS_CHECK_PENDING_MIGRATIONS. Then we would not need an init container for the waiting.

This has been implemented in zammad/zammad@b95ce61.

mgruner commented on August 11, 2024

Please feel free to try #243 and provide some early feedback. Thanks!
