
mglawica's Introduction

Mglawica

Status: Proof of Concept
Documentation: tutorial

The basic idea here is to start with a Dokku-like experience that makes the first deployment as easy as possible, but to allow scaling the cluster to several nodes.

Theoretically you could continue to scale the cluster beyond several nodes, but usually this uncovers more details specific to your project, which basically means you will fork our scripts, tweak verwalter's scheduler, and continue using the tools on your own.

This project is built with vagga, cantal, lithos, and verwalter. We also use nginx, rsync, Linux, and more, but those are pretty ubiquitous.

See Verwalter's concepts for a description of the roles of all these tools, but you don't need to learn all of that. You should get used to the vagga basics to be productive, though. Luckily, there are plenty of tutorials.

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.


mglawica's Issues

New concept of deployment

Configuration

The user adds the following to their vagga.yaml:

mixins:
- vagga/deploy.yaml

commands:
  deploy: !CapsuleCommand
    run:
    - vagga
    - _script
    - https://github.com/.../deployment_script
    - --destination=http://internal.network/your-deployment.json
    - --containers="python,assets"

Then they can run vagga deploy staging or vagga deploy production (vagga appends the trailing argument to the run list, so the script receives the environment name).

How Does It Work?

  1. CapsuleCommand downloads the script
  2. The script downloads your-deployment.json
  3. The script generates vagga/deploy.yaml according to your-deployment.json
  4. It runs ciruela upload to the servers described in the json
  5. It pushes the list of images to the verwalter HTTP API

Details follow. The important points here:

  • Step (4) is configurable: we might allow rsync, or use docker import and docker push.
  • Step (5) is also configurable: it might not use verwalter, or it might put the metadata into an intermediate storage and create a release from multiple repositories individually uploaded to the servers.

What Is in vagga/deploy.yaml?

Basically, it wraps each container into:

containers:
  xxx-deploy:
    setup:
    - !SubConfig
      path: vagga.yaml
      container: xxx
    - !TarInstall
      url: https://github.com/.../container-cooker.tar
      script: "./container-cooker"
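
For example, with --containers="python,assets" as in the command above, the generated vagga/deploy.yaml would contain one such wrapper per listed container (a sketch, assuming the template is applied verbatim):

containers:
  python-deploy:
    setup:
    - !SubConfig            # reuse the build steps of the original container
      path: vagga.yaml
      container: python
    - !TarInstall           # then run container-cooker on top of it
      url: https://github.com/.../container-cooker.tar
      script: "./container-cooker"
  assets-deploy:
    setup:
    - !SubConfig
      path: vagga.yaml
      container: assets
    - !TarInstall
      url: https://github.com/.../container-cooker.tar
      script: "./container-cooker"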

What Does "container-cooker" Do?

Note: the name container-cooker is just for explanatory purposes.

It validates configs and fixes things that commonly lead to mistakes (a sketch of roughly equivalent build steps follows the list):

  1. Run !EnsureDir for all the volumes
  2. Find lithos (or maybe other) configs, check which ones belong to this container (probably by looking at the executable), and copy them into the container
  3. Put the configs, or metadata extracted from them, into some well-known place so that verwalter can find them
  4. Maybe optimize some things in the container: clean common tmp folders which vagga doesn't clean by default, reset timestamps, and recompile *.pyc files (the latter makes containers reproducible)
  5. Might execute some vulnerability checks, or extract package lists so it is easier to run vulnerability checks later
  6. Might generate some lithos configs from the vagga config
  7. Make hosts and resolv.conf symlinks
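
To make this concrete, here is a minimal sketch of roughly equivalent vagga build steps (the paths, config names, and the /state symlink targets are illustrative assumptions, not the real script):

setup:
# (1) make sure every declared volume path exists inside the image
- !EnsureDir /run
- !EnsureDir /app/tmp
# (2), (3) copy the matching lithos config to a well-known place
- !Copy
  source: /work/deploy/app-web-staging.yaml
  path: /lithos-configs/app-web-staging.yaml
# (7) turn hosts and resolv.conf into symlinks, so the supervisor can
# supply them at run time
- !Sh |
    ln -sfn /state/hosts /etc/hosts
    ln -sfn /state/resolv.conf /etc/resolv.conf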

What Does your-deployment.json Contain?

It should describe the full deployment target; here is a non-exhaustive list of things:

  1. A validator for lithos' metadata. We expect that every deployment can have its own scripting in verwalter, so it might need more or less metadata. Still, validation of the metadata is super useful. [*]
  2. Additional checks for the config, e.g. it may require always having PYTHONIOENCODING if the executable is pythonic, or an /app_version file in the root of the container
  3. An entry point for ciruela or another means of image upload
  4. An API endpoint for verwalter
  5. Conventions on the configuration files: which are staging and which are production, so you can just name the environment

All of these things except the hostnames (3, 4) could be hardcoded conventions, but I feel that would be too restrictive and would not take advantage of verwalter's full power (or would make it less convenient if the metadata is not validated properly).

No keys/secrets/passwords are contained in the json. Keys are passed through environment variables.

[*] Not sure which validator to use, though; maybe json_schema or livr.
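
For illustration, a hypothetical your-deployment.json covering points (1)-(5) might look like this (every field name below is an assumption; only the overall shape follows the list above):

{
  "metadata-schema": {"type": "object", "required": ["replicas"]},
  "config-checks": ["python-io-encoding", "app-version-file"],
  "upload": {"method": "ciruela", "entry-point": "ciruela.internal.network"},
  "verwalter-api": "http://verwalter.internal.network/...",
  "environments": {
    "staging": "*.staging.yaml",
    "production": "*.production.yaml"
  }
}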

How Does Verwalter Work?

Currently, verwalter relies on having all the needed metadata extracted from the repo/image/config and put into a "runtime" folder. While we're trying to move most things into the container itself, we still need one thing: a link between the containers which constitute a new version. I.e., a version v1.2.3 might have containers app.v1.2.3 and redis.023fed, where the redis container is versioned by hash and only updated when the configuration changes.

So the thing pushed into verwalter will basically be a dict:

{
  "app-web-staging": "app.v1.2.3",
  "app-celery-staging": "app.v1.2.3",
  "redis": "redis.023fed"
}

I.e., a mapping from process name to its container name. The other metadata/configuration files are stored in the image itself under some convention (not a real one, to be determined):

/lithos-configs/app-web-staging.yaml

And presumably, verwalter needs to figure out a few things:

  1. Which machines in the cluster have this image
  2. How to get the lithos config from the image (by accessing ciruela itself) and extract metadata from it (by metadata I mean both the metadata key and useful things such as memory-limit or cpu-shares; maybe even display the whole thing in a GUI)

This should be enough for verwalter to do its work. Note: it's up to the scheduler whether to enable a version immediately or wait for more machines to fetch the image, whether to only upgrade existing processes or to run new ones right from the point their config is pushed, whether to add new services to nginx automatically, and so on.
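
For illustration, /lithos-configs/app-web-staging.yaml could look roughly like this (a sketch using common lithos options; the metadata key holds whatever your verwalter scripts expect):

kind: Daemon
executable: /usr/bin/python3
arguments: [/app/web.py]
memory-limit: 268435456   # 256 MiB, useful to the scheduler and the GUI
cpu-shares: 100
metadata:                 # free-form data consumed by verwalter's scheduler
  replicas: 2
  http-port: 8080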

Notes

  1. Versioning of the deployment might be suboptimal if configs are copied by container-cooker. But since these are deployment containers, it's usually enough to put !GitDescribe into the command to make them rebuild often enough (basically on every commit). We don't want to add it by default, because you might want database containers which are not restarted on each deploy. Another option is to explicitly opt out of versioning on the script's command line.
  2. Caching of the json is unclear, but basically it can be cached for a dry run and never cached for a deployment (i.e. you can check configs on an airplane but obviously not deploy).
  3. At the end of the day, you can fork both the deployment script and container-cooker and provide a very different deployment tool with the very same interface, say, one that packs and deploys to Heroku or an AMI.

/cc @anti-social, @popravich
