
Comments (7)

shusugmt avatar shusugmt commented on September 21, 2024 2

I was away from k8s for a bit, but it now seems that providing an RWX-capable PV has become much easier than before. AKS natively supports RWX PVs via the AzureFile plugin. EKS and GKE both provide managed NFS services (Amazon Elastic File System / Google Cloud Filestore), and we can use nfs-client-provisioner. BUT an NFS-backed PV may suffer in terms of performance, especially when a pack has many dependent pip packages and/or packages that need native binaries to be built.
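For illustration, an RWX claim on AKS using the built-in azurefile storage class might look roughly like this (a sketch; the claim name and size are made up, not from this thread):

```yaml
# Hypothetical RWX claim on AKS using the built-in azurefile StorageClass.
# Claim name and size are illustrative placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: st2-packs-pvc
spec:
  accessModes:
    - ReadWriteMany        # RWX: mountable by every st2actionrunner pod at once
  storageClassName: azurefile
  resources:
    requests:
      storage: 10Gi
```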

IMO, though, the current approach seems much more stable, since the only problem is the MongoDB part.

> which generally should be fine for an immutable infrastructure

I feel this is the reasonable way for now. We can just put a load balancer in front, then switch traffic once the new st2 deployment becomes ready. But this is only possible if you can just throw away any data stored in MongoDB, like execution logs. And of course you need some extra work, like exporting/importing st2kv data between the new and old clusters. This is exactly what the docs call the "Content Roll-Over" method.
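A minimal sketch of the traffic-switch idea, assuming two hypothetical chart releases whose st2web pods carry `st2-blue` / `st2-green` release labels; repointing a fronting Service's selector cuts traffic over to the new deployment:

```yaml
# Hypothetical Service fronting st2web; flipping the release label
# switches traffic from the old deployment to the new one.
# Label names are assumptions, not chart-provided values.
apiVersion: v1
kind: Service
metadata:
  name: st2web-public
spec:
  selector:
    app: st2web
    release: st2-green   # was st2-blue; edit this line to cut over
  ports:
    - port: 443
      targetPort: 443
```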

from stackstorm-k8s.

ericreeves avatar ericreeves commented on September 21, 2024 2

Commenting to provide some additional feedback given our current use case.

We are deployed on EKS and have an EFS volume used for mounting "packs" and "virtualenvs". We are currently using the outstanding NFS pull request, and it has been working like a champ. We have an in-house API/UI that allows users to construct workflows. Those workflows are ultimately written to the shared EFS volume by a Lambda function, so we can update things on the fly. For our use case, any pack-sharing mechanism that requires "bake and deploy" is not going to do the trick.
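For context, a statically provisioned PV backed by such an EFS volume could look roughly like this (a sketch; the filesystem ID, region, path, and size are placeholders):

```yaml
# Hypothetical static PV backed by an EFS volume mounted over NFS.
# The filesystem ID and region below are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: st2-packs-efs
spec:
  capacity:
    storage: 10Gi          # EFS is elastic; this value is nominal
  accessModes:
    - ReadWriteMany
  nfs:
    server: fs-12345678.efs.us-east-1.amazonaws.com
    path: /st2/packs
```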

And to make things a bit more fun, we do have a core set of internal packs that are essentially shared libraries used by the custom packs we develop with our API/UI. We're considering a build job that takes our "shared library packs", assembles a package (maybe simply a tarball), and deploys them to the cluster using a Lambda that writes to the EFS volumes and issues the "register" and "setup_virtualenvs" API calls. We could use a custom pack image for this piece, but then we would need to make significant changes to the Helm chart to support both a custom pack image AND NFS. I truly do not want to do this, because we'd like to stay more in line with master for easier updates and the ability to contribute back.

Cheers!


troshlyak avatar troshlyak commented on September 21, 2024 1

I would be very interested to understand the best way to update, add, or remove packs (in general, the pack lifecycle) given the approaches proposed above.

For the shared content filesystem, I guess we can spin up a k8s Job that mounts the shared filesystem, updates it, and then updates MongoDB through `st2 pack.load packs=some_pack`.
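That Job idea could be sketched roughly as follows. Everything here is an assumption for illustration: the image tag, the claim name, and the command; the `packs.load` action also needs StackStorm API credentials, which are omitted:

```yaml
# Hypothetical Job: mount the shared packs volume, then re-register the
# updated pack so MongoDB picks up the change. Image, claim name, and
# command are placeholders; API auth (env vars / token) is omitted.
apiVersion: batch/v1
kind: Job
metadata:
  name: st2-pack-reload
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: st2-pack-load
          image: stackstorm/st2actionrunner:latest
          command: ["st2", "run", "packs.load", "packs=some_pack"]
          volumeMounts:
            - name: packs
              mountPath: /opt/stackstorm/packs
      volumes:
        - name: packs
          persistentVolumeClaim:
            claimName: st2-packs-pvc
```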

For the custom packs Docker image, it seems that we need to kill and recreate all containers (which generally should be fine for an immutable infrastructure) in order to rerun the initContainers that are responsible for copying the pack/virtualenv data into the container. But then what would happen with long-running actions (we have actions running from 30 minutes up to 2h+) inside st2actionrunner containers? Is there a way to ensure that an st2actionrunner container is idle before recreating it? And then there is the question of how we update MongoDB: I guess we need to rerun the job-st2-register-content Job as part of the pack update process.


troshlyak avatar troshlyak commented on September 21, 2024 1

Indeed, "Roll-Over" deployments seem like the way to go, and the MongoDB "problem" can most probably be fixed by externalising it, as discussed in #24. This would also eliminate the need to export/import st2kv. And having RabbitMQ "within" the deployment should keep the communication between sensors/triggers, runners, etc. inside the deployment, as we don't want a runner from the old deployment to pick up a job for a new action that is missing from the old deployment's packs.
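If MongoDB were externalised, the chart values could point st2 at it roughly like this (a sketch: the exact keys depend on the chart version, and the host name is a placeholder):

```yaml
# Hypothetical values.yaml override: disable the bundled MongoDB and
# point st2.conf at an external instance. Key names may differ by
# chart version; the host is a placeholder.
mongodb:
  enabled: false
st2:
  config: |
    [database]
    host = mongo.example.internal
    port = 27017
    db_name = st2
```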

I've actually just quickly tested this, and I'm able to see the running tasks from both the old and new deployments, so I can switch on the fly and still maintain control over the "old" running tasks and interact with them (like canceling them). Now I need to understand how to prevent the sensors from both deployments from triggering simultaneously while the old deployment is "draining" its tasks.
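One possible way to drain sensors, assuming the chart exposes per-service replica counts (verify the exact key for your chart version), would be a values override on the old release that scales its sensor containers to zero:

```yaml
# Hypothetical values override for the *old* release while it drains:
# stop its sensor containers so only the new deployment fires triggers.
# The key path is an assumption; check your chart's values schema.
st2sensorcontainer:
  replicas: 0
```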


cognifloyd avatar cognifloyd commented on September 21, 2024 1

I started to add charts for the rook-ceph operator, but I think that setting up a storage backend should be out of scope for the StackStorm chart because:

  1. There are many different in-tree k8s volume plugins, and even more out-of-tree volume plugins that can use flexVolume (the older standard) or CSI (the newer standard). rook-ceph + flexVolume happens to be the one that I need, but others are very likely to need NFS or some other storage solution in their cluster.
  2. Our current solution, copying packs from images into emptyDir volumes, makes for a robust, cluster-agnostic default.
  3. Setting up an operator like rook-ceph ideally uses namespaces separate from the st2 namespace. This becomes problematic with installations like `helm install --namespace st2 stackstorm-ha .`, because that ends up putting all of that storage infrastructure in the st2 namespace.

So, I think we should allow something like this in values.yaml:

st2:
  packs:
    images: []
    use_volumes: true
    volumes:
      packs:
        # volume definition here (required when st2.packs.use_volumes = true)
      virtualenvs:
        # volume definition here (required when st2.packs.use_volumes = true)
      configs:
        # optional volume definition here

Translating the example from my previous comment into this context I would put this in my values.yaml:

st2:
  packs:
    images: []
    use_volumes: true
    volumes:
      packs:
        flexVolume:
          driver: ceph.rook.io/rook
          options:
            fsName: fs1
            clusterNamespace: rook-ceph
            path: /st2/packs
      virtualenvs:
        flexVolume:
          driver: ceph.rook.io/rook
          options:
            fsName: fs1
            clusterNamespace: rook-ceph
            path: /st2/virtualenvs
      configs:
        flexVolume:
          driver: ceph.rook.io/rook
          options:
            fsName: fs1
            clusterNamespace: rook-ceph
            path: /st2/configs

Then we can translate that into volume definitions that get mounted to /opt/stackstorm/packs, /opt/stackstorm/virtualenvs, (and maybe /opt/stackstorm/configs).

What does everyone think of this approach? Is this simple enough but flexible enough for most requirements?


cognifloyd avatar cognifloyd commented on September 21, 2024

I am working on converting an old 1ppc k8s install to stackstorm-ha charts.

We use Ceph + rook to handle packs, configs, and virtualenvs with approximately this:

volumeMounts:
- name: st2-packs
  mountPath: /opt/stackstorm/packs
- name: st2-configs
  mountPath: /opt/stackstorm/configs
- name: st2-virtualenvs
  mountPath: /opt/stackstorm/virtualenvs

volumes:
- name: st2-packs
  flexVolume:
    driver: ceph.rook.io/rook
    options:
      fsName: fs1
      clusterNamespace: rook-ceph
      path: /st2/packs
- name: st2-configs
  flexVolume:
    driver: ceph.rook.io/rook
    options:
      fsName: fs1
      clusterNamespace: rook-ceph
      path: /st2/configs
- name: st2-virtualenvs
  flexVolume:
    driver: ceph.rook.io/rook
    options:
      fsName: fs1
      clusterNamespace: rook-ceph
      path: /st2/virtualenvs

This has been working quite well for some time. I'm happy to put together a PR to add support for this to stackstorm-ha.


ericreeves avatar ericreeves commented on September 21, 2024

I think that looks like a fantastic, flexible approach!

