
stackhead's People

Contributors: dependabot[bot], saitho, stackhead-bot
Forkers: christingruber

stackhead's Issues

Webserver configurations from project definition

Some projects may require additional webserver settings which currently cannot be set.

Possible workaround: define an Nginx service within the project and let it handle incoming requests. That results in the following request flow: Nginx (webserver) -> Nginx container (via project definition) -> app container (via project definition).

The project definition file should receive a setting for defining universal webserver instructions, regardless of the webserver software used (currently Nginx or Caddy).
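A hypothetical shape for such a setting in the project definition (all key and directive names here are assumptions, not an implemented schema):

```yaml
domains:
  - domain: mydomain.com
    webserver:                    # hypothetical universal settings block
      client_max_body_size: 64M   # would be translated to the matching Nginx/Caddy directive
      headers:
        X-Frame-Options: DENY
```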

Proper uninstall mechanism / resource state

It should be possible to remove resources (files, users, containers, etc.) provisioned and deployed via StackHead. Many of the roles used already provide a setting (e.g. state: present/absent) to indicate whether a resource should be created on or removed from the server.

Maybe we can have a top level setting for that inside the project configuration.

  • Which resources must not be deleted?
    • Important project data (docker data, database data, etc)
    • data that is still used elsewhere
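A hypothetical top-level setting in the project configuration could look like this (purely illustrative, not an implemented schema):

```yaml
# Mark the whole project for removal; protected resources
# (important project data, data still used elsewhere) would be skipped.
state: absent
```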

Local database

It should be possible to connect to a locally running database on the Docker host, instead of having to run the database inside a container. This is to improve upgrading of the database system.

Service timer configuration setting

The timer interval for the Terraform update should be configurable, so one can update applications e.g. every day at 2am instead of every 5 minutes.
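Assuming the update runs via a systemd timer, a configurable schedule could be expressed with an override like this (the unit name is an assumption based on the issue text):

```ini
# /etc/systemd/system/stackhead-terraform-update.timer.d/override.conf (hypothetical)
[Timer]
OnCalendar=*-*-* 02:00:00   # daily at 2am instead of every 5 minutes
Persistent=true             # catch up if the server was down at 2am
```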

Maintenance page while Docker container is set up

Idea: while a Docker container is being set up (either created, or removed before recreation), a maintenance page should be visible.

Maybe we can generate a file named ENABLE_MAINTAINANCE in the project directory.
The webserver plugin would check whether that file is present and show the StackHead maintenance page until the file is removed again.

The maintenance page is a static HTML page. StackHead should ship a default page that includes all resources inline (i.e. CSS, images as SVG) so no external resources need to be loaded. Users should be able to set their own maintenance page HTML file during deployment or server setup.
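A minimal sketch of the flag-file check a webserver plugin could perform (the file name ENABLE_MAINTAINANCE is taken from the issue text; the function name is illustrative):

```python
import os

def maintenance_enabled(project_dir: str) -> bool:
    """Return True while the maintenance flag file exists in the project
    directory; the webserver plugin would serve the static maintenance
    page in that case, and the normal site once the file is removed."""
    return os.path.exists(os.path.join(project_dir, "ENABLE_MAINTAINANCE"))
```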


Modularity

Components such as the web server software to be used should be exchangeable.
That means roles should be released as separate packages, allowing them to be installed only when needed.

  • Refactor roles into a structure that supports modularity
  • Extract roles with dependencies into own repository where required
  • Webserver: Nginx
  • Webserver: Caddy
  • Container: Docker

Stackhead user for deployment

During server setup a stackhead user and group (UID 1412) is created.
We should check whether that user is used correctly and identify cases where it should be used but currently is not.

  • Ensure all files created via StackHead are created by this user
  • Use the stackhead user for project deployment playbooks instead of root

Extend module architecture by plugin type

"Plugin" types allow setting up additional services on a server like databases or load balancers.
Crucial components that are required for StackHead to work (i.e. container providers and webservers) will still use the webserver and container types.

The plugins to install are configured in the inventory file or the StackHead CLI config file, similarly to the webserver and container modules.
In contrast to those, multiple plugins can be defined.

This new feature is tested by implementing a plugin that adds Watchtower to allow polling Docker images of already deployed projects. The plugin module is implemented here: https://github.com/getstackhead/module-plugin-watchtower

Secret Storage / Environment variables in project definition

Secrets should not be stored in code; instead they should be supplied as environment variables.
Therefore it should be possible to access environment variables from the project definition.

The string ${{ env.FOO }} should be resolved to the value of the environment variable "FOO".
This should be implemented in the Ansible collection.
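A sketch of how the resolution could work (the placeholder syntax is from the issue text; the function name and error handling are assumptions):

```python
import os
import re

# Matches ${{ env.FOO }} and captures the variable name.
PLACEHOLDER = re.compile(r"\$\{\{\s*env\.([A-Za-z_][A-Za-z0-9_]*)\s*\}\}")

def resolve_env_placeholders(text: str) -> str:
    """Replace every ${{ env.NAME }} with the value of the environment
    variable NAME, failing loudly when a referenced variable is unset."""
    def substitute(match):
        name = match.group(1)
        if name not in os.environ:
            raise KeyError(f"environment variable {name!r} is not set")
        return os.environ[name]
    return PLACEHOLDER.sub(substitute, text)
```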

Move GitHub action

GitHub actions should be moved to an actions directory.
That allows distinguishing between multiple actions in the future.

The one action we have now, for integration testing, would be moved to an actions/integration-test directory.
As it needs files from the repository, its first step would be to clone the full StackHead repository at the same version as the action (todo: check if that's possible).

Test with Molecule

Tests should be executed with Molecule instead of using a shell script and spinning up a VPS.
We should evaluate if it is possible to use Molecule, especially since StackHead generates SSL certificates via Certbot and generally requires a TLD with configured DNS.

https://molecule.readthedocs.io/

Only remove TF symlinks that need to be removed

Terraform plans are applied regularly every 5 minutes in order to update certificates and Docker containers.

When deploying an application, all existing project symlinks are removed and recreated from scratch.
This opens a small window in which the automated apply could remove all project containers.

Therefore we should only remove those symlinks that actually need to be removed.
Creating a symlink may use the -sf flags to ensure existing symlinks are overwritten in place and new ones are created.
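In code, the same effect can be achieved by replacing the symlink atomically instead of deleting and recreating it; a sketch in Python (function name is illustrative):

```python
import os

def replace_symlink(target: str, link_path: str) -> None:
    """Point link_path at target without a window in which the link is
    missing: create the new symlink under a temporary name, then rename
    it over the old one (rename is atomic on POSIX filesystems)."""
    tmp = link_path + ".tmp"
    if os.path.lexists(tmp):
        os.unlink(tmp)  # clean up leftovers from a previous aborted run
    os.symlink(target, tmp)
    os.replace(tmp, link_path)
```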

Discussion: StackHead v2

Vision

The vision of StackHead is to provide a simple way to set up everything needed to serve a container-based web application.
This may include:

  • create the webserver configuration (reverse proxy)
  • start up the multiple containers and connect them to multiple domains or ports
  • generate and renew SSL certificates for the domains

StackHead should NOT reinvent the wheel.

Current state

StackHead in its current form "just works"; however, in the long term there are several challenges regarding ongoing maintenance and development.

  • Ansible is used to install the software on the target webserver and generate Terraform files via Jinja2 templates
  • StackHead project definitions are processed via Ansible
  • Terraform is used to place configuration files (webserver configuration, certificates, etc)
  • StackHead is composed of individual StackHead modules (webserver, container, DNS) to enable using different technologies
  • StackHead CLI enables easier usage of the Ansible playbooks by installing the required Ansible collections and roles and playbook execution via a temporary inventory

(Diagram: Ansible-Terraform interaction)

Reasoning

  • Ansible has many roles for installing the required software on a server (e.g. Docker, Terraform, Nginx), so we do not have to reimplement those ourselves. That is why StackHead is executed via Ansible, as an Ansible collection.
  • Terraform is used because deleting a resource definition also tears down the resources it created (in contrast to Ansible's/Chef's/Puppet's "ensure: present/absent")

Current issues

  • maintenance of logic "code" (YAML files) across new Ansible versions (to be expected, as StackHead essentially is an Ansible collection)
  • maintenance of the Terraform files and the Terraform version installed on the server
  • dependency management of external (e.g. third-party or StackHead module) Ansible roles and Terraform providers
  • The config file format may be hard to read, even though it is YAML (#131)
  • makeshift, immature dependency management for StackHead modules
  • Python code needs to be defined inside the StackHead collection and cannot be used from the roles

Improvement ideas

Drop Ansible

  • maintenance of logic "code" (YAML files) across new Ansible versions (to be expected, as StackHead essentially is an Ansible collection)
  • dependency management of external (e.g. third-party or StackHead module) Ansible roles
  • makeshift, immature dependency management for StackHead modules
  • Python code needs to be defined inside the StackHead collection and cannot be used from the roles

Ansible is only used as SSH client to run code on the target system (of which we currently only support Ubuntu). The required logic for that is pretty simple. A module system based on Ansible roles was developed in order to "easily" install only the required modules and not bundle a giant collection.

StackHead was built as an Ansible collection containing all the logic for processing the YAML StackHead configuration files (project definitions and plugin configurations). Furthermore, the StackHead CLI was developed to simplify calling the Ansible commands, also utilizing a temporary inventory for deployment onto a single server.

Conclusion:
The main work of Ansible, i.e. acting as an SSH client, can easily be done via Golang. Since we only officially support Ubuntu, we wouldn't lose much logic, as we would mainly use APT. Needing a wrapper CLI for "easier access" suggests that Ansible's usage, requiring inventories etc., may be too much to handle.

Suggestion:
Based on the discussion above, keeping the code base compatible with future Ansible versions is not worth it. Therefore Ansible will be removed from the project and the logic moved to Golang within the StackHead CLI. StackHead modules will be reworked, possibly as Go plugins.

Extract logic from YAML to code

Move logic from YAML files into Python code so it can be unit tested and separated from Ansible as the executor.

Drop Terraform

  • maintenance of the Terraform files and the Terraform version installed on the server
  • dependency management of external (e.g. third-party or StackHead module) Ansible roles and Terraform providers

TBD...
Would this go against the idea of not reimplementing everything ourselves?

New project definition schema (Configuration blocks)

  • The config file format may be hard to read, even though it is YAML (#131)

See #131

Distribution of StackHead schema

The StackHead schema should be served on a URL.

Also, SchemaStore is used by many IDEs. It would be nice to publish the StackHead schema file there. This would require a final version 1.0.

[!!!] Project definition file suffix

BREAKING CHANGE

Project definition files MUST end with ".stackhead.yml" (or ".stackhead.yaml") to allow applying the schema automatically.

The Ansible collection will check the file name suffix and reject files that do not end in ".stackhead.yml" or ".stackhead.yaml".
When determining the project name, the suffix is to be removed (so it behaves the same as before).

The StackHead CLI validation task will also throw an error if the name does not match.
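A sketch of the suffix validation and project name derivation (function and constant names are illustrative, not the actual collection code):

```python
ALLOWED_SUFFIXES = (".stackhead.yml", ".stackhead.yaml")

def project_name_from_filename(filename: str) -> str:
    """Reject file names without the required suffix; otherwise strip the
    suffix so the project name behaves the same as before the change."""
    for suffix in ALLOWED_SUFFIXES:
        if filename.endswith(suffix):
            return filename[: -len(suffix)]
    raise ValueError(
        f"{filename!r} must end with '.stackhead.yml' or '.stackhead.yaml'"
    )
```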

Revise validation schema locations

The schema directory stores all JSONSchema files used for validating project definition files, plugin configurations and the CLI configuration file. There is also a copy of that inside the "ansible" directory which is used by the CLI when validating the files.

That's not good, as the schema files would need to be copied every time they are updated in the "schema" directory.
As we usually install StackHead from Git (and want to keep that possible), that folder needs to be present in Git.

While the "ansible" folder's files can be updated on each release, the files in the "next" branch may become outdated, as we can easily forget to update those files too. We may need to reconsider how those schema files are handled.

Schema                                Evaluated by
project definition file               Ansible collection
StackHead module configuration file   Ansible collection
StackHead CLI configuration file      CLI

Additionally all schema files are available at: https://schema.stackhead.io/

Proposal: Move the schema files where they belong (as above)!
That means the schemas folder is split into ansible/schemas and schemas on the CLI repository.
StackHead CLI validation commands would:
(1) validate CLI configuration files with the local schema compiled into the binary
(2) validate files for Ansible Collection using the files from the Collection (as before)
(3) allow a parameter for schema version, using the files from StackHead Schemastore

This would need the Schemastore to be extended to allow multiple repository sources!

Alternative config schema

While YAML files are easy to process, an alternative config schema may be easier to maintain.

The suggested schema is a block-based config schema. The idea of configuration blocks is easier to grasp, as the documentation would describe the structure of each block and where they can be used.

Similar schemas:

Domain declaration

Use case:
I want to serve the application running inside the service "myservice" on port 8080 on the domain "mydomain.com" on port 80. It should require Basic Authentication with username "user" and password "pass".
The application has a WebSocket endpoint on path /socket.

Old

Taken from documentation

domains:
  - domain: mydomain.com
    expose:
      - internal_port: 8080
        external_port: 80
        service: myservice
        proxy_websocket_locations: ["/socket"]
    security:
        authentication:
            - type: basic
              username: user
              password: pass

New

The security block may be defined globally, inside a domain block or inside a port definition.

domain mydomain.com {
    serve {
        :80 {
            service = myservice
            port = 8080
            proxy_websocket_location {
                path = /socket
            }
            security {
                authentication {
                    type = basic
                    username = user
                    password = pass
                }
            }
        }
    }
    security {
        # ... same as security block inside port definition
    }
}

security {
    # ... same as security block inside port definition
}

Service declaration

Old

Taken from documentation

container:
  services:
    - name: app # service name
      image: nginxdemos/hello:latest # Docker image name
      volumes:
        - type: global # mount the "config" folder inside container at "/var/config". As both services use "global" they will mount the same source and share data.
          src: config
          dest: /var/config
        - type: local # mount the "data" folder inside container at "/var/data". This folder is not shared with other services ("local").
          src: data
          dest: /var/data
        - type: custom # mount the "/docker/data/test" folder inside container at "/var/test".
          src: /docker/data/test
          dest: /var/test
      hooks:
        execute_after_setup: ./setup.sh # this file is executed inside container after the container is created
        execute_before_destroy: ./destroy.sh # this file is executed inside container before the container is destroyed
    - name: db # service name
      image: mariadb:10.5 # Docker image name
      environment: # environment variables for Docker container
        MYSQL_ROOT_PASSWORD: example
      volumes_from: app:ro # mount all volumes from app service as readonly

New

service app {
  image = nginxdemos/hello:latest
  volume {
    type = global # mount the "config" folder inside container at "/var/config". As both services use "global" they will mount the same source and share data.
    src = config
    dest = /var/config
  }
  volume {
    type = local # mount the "data" folder inside container at "/var/data". This folder is not shared with other services ("local").
    src = data
    dest = /var/data
  }
  volume {
    type = custom # mount the "/docker/data/test" folder inside container at "/var/test".
    src = /docker/data/test
    dest = /var/test
  }
  hook after_setup {
    exec = ./setup.sh
  }
  hook before_destroy {
    exec = ./destroy.sh
  }
}
service db {
  image = mariadb:10.5
  volumes_from = app:ro
  environment {
    MYSQL_ROOT_PASSWORD = example
  }
}

Registry declaration

Old

Taken from documentation

container:
  registries:
    # Authenticate with Docker Hub for private images
    - username: mydockerhubuser
      password: mydockerhubpassword
    # Authenticate with custom Docker registry
    - username: myuser
      password: mypassword
      url: https://myregistry.com

New

registry {
  username = mydockerhubuser
  password = mydockerhubpassword
}
registry {
  username = myuser
  password = mypassword
  url = https://myregistry.com
}

Provide StackHead validator as package

The validator should be provided as a package, e.g. via NPM.
This was initially available (as Composer package) but was removed with the change to using Ansible Collections.

Versioning support for internal dependencies

The internal software stack used and required by StackHead – which is set up during provisioning – may change over time. Some sort of versioning concept is needed.

  • How to detect if a server was provisioned with an old stack (= deployment version?) ?
  • How to find out which packages have to be upgraded?
  • How to upgrade packages?
  • How to determine optional and mandatory upgrades?
    • Missing mandatory upgrades would block any application deployment
  • How can we allow updating packages via software update processes (e.g. crontabs)?
    • How to detect if such updates installed packages that are incompatible with StackHead?

Automated system updates

It would be cool to be able to perform system updates automatically.
For that, an apt update may be triggered regularly via crontab.
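A minimal sketch of such a crontab entry (the schedule and the use of plain apt-get are assumptions; a tool like unattended-upgrades may be the better fit):

```
# hypothetical root crontab entry: refresh package lists and upgrade nightly at 3am
0 3 * * * apt-get update && apt-get -y upgrade
```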

How to detect if such updates installed packages that are incompatible with StackHead (e.g. Terraform upgrade 0.12 to 0.13)?

Using Certbot instead of Terraform ACME

It may be better to use Certbot for creating SSL certificates for Nginx, as we may then avoid the issues described in #118.
Renewing certificates may still work via Terraform, using a null resource with local-exec.

Release process

The release process should be established.

  • Automatically move files and create a version (semantic-release with backmerge into next branch)
  • Document which Docker containers are updated with a new release

Should be tested with forked repository.

Generate Terraform files in project root for customization

Add a command that will generate the Terraform files on the local drive with a fixed name schema.
Those files can be freely modified by the user.

StackHead will deploy those files preferably instead of generating new Terraform files from project definition.

Generate Terraform files in project root for customization

Add a new playbook that will generate the Terraform files on the local drive with a fixed name schema.
Those files can be freely modified by the user.

StackHead will deploy those files preferably instead of generating new Terraform files from project definition.

Service presets

Presets for the service section of a StackHead project definition

e.g.
typo3-amp (Apache, MySQL, PHP)
typo3-amps (Apache, MySQL, PHP, Solr)
typo3-nmps (Nginx, MySQL, PHP, Solr)

Preset values (image, version, etc) may be overridden

Presets are processed by the CLI, which will generate valid StackHead project definitions from them.

Logging

  • Where are logs stored?
  • How can I see which Terraform plan was applied at what time?

Finding the right software for managing resources

Using Terraform for managing resources works just fine. However, it would be nice to have a solution that allows installing the required software when it's needed (ideally dropping the setup step).
The handling should be as easy as with Terraform: removing a file with resource definitions also removes the resources that were created from that definition.

Terraform

  • (+) executed locally
  • (+) Define the desired state; modules take care of the rest, deciding whether a resource has to be recreated on updates
  • (+) Removing a Terraform file will also remove the created resources when running "terraform apply"
  • (-) Software has to be already installed (=> done in server setup step using Ansible)

Puppet

  • (+) executed locally
  • (+) Allows installing software
  • (+) Seems faster than Terraform
  • (+) Ability to bundle resources definitions and add default configurations aside from the definitions generated from project definition
  • (-) Resources have to be explicitly removed (ensure: absent)
  • (-) pretty much the same as Ansible; no advantage such as not having to care about removing resources, as with Terraform
  • (-) Requires Puppet modules being present on the system (=> done in server setup step using Ansible) => that's fine, way better than installing the Docker or Nginx packages during setup.

Chef

  • (+) apparently can be executed locally
  • ...
  • (-) Resources have to be explicitly removed (ensure: absent)
  • ...

Extract services from existing docker-compose file

Instead of specifying the services in the project definition it should also be possible to specify an existing docker-compose file that is already used in the project source.

Example:

deployment:
  type: container
  service_file: ../../docker-compose.yml # relative path from the project configuration file

Mutually exclusive with the services setting.

It will then extract the information that can currently be set in project definition from that file.

SSL registration becomes outdated?

In December a certificate was generated. When testing the latest changes, the following error occurred:

fatal: [mackerel]: FAILED! => {"changed": false, "command": "/usr/local/bin/terraform apply -no-color -input=false -auto-approve=true -lock=true /tmp/tmpngb8anki.tfplan", "msg": "Failure when executing Terraform command. Exited 1.\nstdout: nginx_server_block.nginx-schemas: Destroying... [id=/etc/nginx/sites-available/schemas.conf]\nnginx_server_block.nginx-schemas: Provisioning with 'local-exec'...\nnginx_server_block.nginx-schemas (local-exec): Executing: ["/bin/sh" "-c" "sudo systemctl reload nginx"]\nacme_registration.schemas-reg: Creating...\nnginx_server_block.nginx-schemas: Destruction complete after 0s\ndocker_container.stackhead-schemas-webserver: Destroying... [id=b9b59b7856c0d84fd15a57fdcfbcc027c5216acd003f4b59b2a2293c7300a749]\ndocker_container.stackhead-schemas-webserver: Destruction complete after 1s\ndocker_container.stackhead-schemas-webserver: Creating...\ndocker_container.stackhead-schemas-webserver: Creation complete after 0s [id=e226dea7dcf7b12450392d03b6848038d73abd925f8d98aa2ac2c726115b0b72]\nnginx_server_block.nginx-schemas: Creating...\nnginx_server_block.nginx-schemas: Provisioning with 'local-exec'...\nnginx_server_block.nginx-schemas (local-exec): Executing: ["/bin/sh" "-c" " ln -s /stackhead/certificates/fullchain_snakeoil.pem /stackhead/certificates/schemas/fullchain.pem || true &&\n ln -s /stackhead/certificates/privkey_snakeoil.pem /stackhead/certificates/schemas/privkey.pem || true &&\n sudo systemctl reload nginx\n"]\nnginx_server_block.nginx-schemas (local-exec): ln: failed to create symbolic link '/stackhead/certificates/schemas/fullchain.pem': File exists\nnginx_server_block.nginx-schemas (local-exec): ln: failed to create symbolic link '/stackhead/certificates/schemas/privkey.pem': File exists\nnginx_server_block.nginx-schemas: Creation complete after 0s [id=/etc/nginx/sites-available/schemas.conf]\n\nstderr: \nError: acme: error: 403 :: POST :: https://acme-v02.api.letsencrypt.org/acme/new-acct :: 
urn:ietf:params:acme:error:unauthorized :: An account with the provided public key exists but is deactivated, url: \n\n\n"}

Running terraform apply again afterwards succeeds, however. I wasn't able to pinpoint the issue so far.

Ubuntu 20.10 compatibility

Right now StackHead seems to only run on Ubuntu 18.
It should also run on Ubuntu 20.

The tests must be extended to allow testing multiple operating systems.

Wait for Terraform lockfile

When Terraform runs, it creates a lockfile .terraform.tfstate.lock.info in the directory where it runs.
Before running terraform apply, we should check whether that file exists and wait until it is removed (i.e. the previous process has finished). There should be a maximum timeout after which the execution aborts and fails.
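A sketch of the wait loop (the lock file name is from the text above; the timeout values and function name are assumptions):

```python
import os
import time

def wait_for_terraform_lock(workdir: str, timeout: float = 300.0,
                            poll_interval: float = 1.0) -> None:
    """Block until Terraform's lock file disappears from workdir, raising
    TimeoutError once the maximum wait time is exceeded."""
    lockfile = os.path.join(workdir, ".terraform.tfstate.lock.info")
    deadline = time.monotonic() + timeout
    while os.path.exists(lockfile):
        if time.monotonic() >= deadline:
            raise TimeoutError(
                f"{lockfile} still present after {timeout} seconds")
        time.sleep(poll_interval)
```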

Decouple SSL task from Nginx

The SSL Terraform template defines a dependency to Nginx.

nginx_server_block.nginx-{{ project_name }}

This should be made configurable, similarly to containers.
Webservers would then have to define their Terraform resource names as well and adhere to a fixed naming scheme.
