garden-io / garden

Automation for Kubernetes development and testing. Spin up production-like environments for development, testing, and CI on demand. Use the same configuration and workflows at every step of the process. Speed up your builds and test runs via shared result caching.

Home Page: https://garden.io

License: Mozilla Public License 2.0

Topics: kubernetes, developer-tools, containers, testing-tools, testing

garden's Introduction

Garden

If you love Garden, please ★ star this repository to show your support 💚. Looking for support? Join our Discord.


Quickstart   •   Website   •   Docs   •   Examples   •   Blog   •   Discord

Welcome to Garden!

Garden is a DevOps automation tool for developing and testing Kubernetes apps faster.

  • Spin up production-like environments for development, testing, and CI on demand
  • Use the same configuration and workflows for every stage of software delivery
  • Speed up builds and test runs via smart caching.

Getting Started

The fastest way to get started with Garden is by following our quickstart guide.

Demo

Garden dev deploy

Docs

For a thorough introduction to Garden and comprehensive documentation, visit our docs.

Usage Overview

Garden is configured via garden.yml files. For large projects you can split the files up and co-locate them with the relevant parts of your stack, even across multiple repositories.

A (simplified) Garden configuration for a web app looks like this:

kind: Deploy
name: db
type: helm
spec:
  chart:
    name: postgres
    repo: https://charts.bitnami.com/bitnami
---
kind: Build
name: api
type: container
source:
  path: ./api
---
kind: Deploy
name: api
type: kubernetes
dependencies: [build.api, deploy.db]
spec:
  files: [./manifests/api/**/*]
---
kind: Test
name: integ
type: container
dependencies: [deploy.api]
spec:
  args: [npm, run, test:integ]

You can build and deploy this project with:

garden deploy

...and test it with:

garden test

To create a preview environment on every pull request, you would add the following to your CI pipeline:

garden deploy --env preview

Garden also has a special mode called "sync mode" which live reloads changes to your running services—ensuring blazing fast feedback while developing. To enable it, run:

garden deploy --sync

You can also start an interactive dev console (see screencap above) from which you can build, deploy, and test your project with:

garden dev

How Garden Works

The Stack Graph is a key feature of Garden that enables efficient development, testing, and DevOps automation. It lets you declare the dependency structure of your project and tracks changes to avoid unnecessary builds, deploys, and test runs. Think of it as CI/CD configuration that you can also use during development. Many of the capabilities that distinguish Garden would be impossible, or far less efficient, without it.

  • Efficient builds and deploys: The Stack Graph allows Garden to determine which parts of your project have changed and need to be rebuilt or redeployed, avoiding unnecessary work and speeding up the development process.

  • Automated testing: Garden can automatically run tests for the parts of your project that have changed, thanks to the Stack Graph. This saves time because all parts of your dependency graph are known and cached.

  • DevOps automation: The Stack Graph allows Garden to automate many aspects of the DevOps process, including building, testing, and deploying your project.
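The change tracking described above can be sketched as a small graph traversal: given the dependency edges between actions and the set of actions whose inputs changed, compute the downstream closure that needs to be re-run. This is a simplified illustration, not Garden's actual implementation.

```typescript
// "from" depends on "to", e.g. deploy.api depends on build.api.
type Edge = { from: string; to: string }

function affectedActions(edges: Edge[], changed: string[]): Set<string> {
  // Build a reverse adjacency map: for each action, who depends on it?
  const dependants = new Map<string, string[]>()
  for (const { from, to } of edges) {
    const list = dependants.get(to) ?? []
    list.push(from)
    dependants.set(to, list)
  }
  // BFS from every changed action through its dependants.
  const result = new Set<string>(changed)
  const queue = [...changed]
  while (queue.length > 0) {
    const current = queue.shift()!
    for (const dep of dependants.get(current) ?? []) {
      if (!result.has(dep)) {
        result.add(dep)
        queue.push(dep)
      }
    }
  }
  return result
}
```

Anything outside the returned set can be served from cache, which is where the speedup comes from.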

For more information on the Stack Graph and how Garden works, see our documentation.

Plugins

Garden is pluggable: how actions are executed depends on the plugins used. Our Kubernetes plugin is currently the most popular, followed by our Terraform and Pulumi plugins. For a more thorough introduction to Garden and its plugins, visit our docs.

Community

Join our Discord community to ask questions, give feedback or just say hi 🙂

Contributing

Garden accepts contributions! Please see our contributing guide for more information.

License

Garden is licensed according to Mozilla Public License 2.0 (MPL-2.0).

garden's People

Contributors

10ko, baspetersdotjpeg, benstov, dcharbonnier, dependabot[bot], drubin, edvald, eysi09, highb, janario, mattpolzin, mitchfriedman, mjgallag, mkhq, orzelius, shankyjs, shumailxyz, sixhobbits, solomonope, stefreak, swist, thsig, timbeyer, trymbill, twelvemo, vkorbes, vvagaytsev, walther, worldofgeese, xenoscopic


garden's Issues

Re-evaluate module spec syntax and parsing mechanism

(supersedes #90)

I'm bumping into some issues with our module spec API. I'm trying to balance a few things and figure out what degree of control to give plugins, and I worry a bit about our current YAML layout.

For a bit of context: there's one important change I'd definitely like to make first. parseModule should no longer return a class, but rather a validatable object with something like the following interface (which can be readily validated on the framework side):

{
  config?: ModuleConfig    // this allows the handler to optionally modify the configuration, e.g. set some default values (which are then re-evaluated and validated by the framework)
  services: ServiceConfig[]    // a list of services defined in the module
  tests: TestConfig[]    // specify which tests the module includes
}

We'd then define all of ModuleConfig, ServiceConfig and TestConfig as non-extendable interfaces, potentially each with a generic metadata object property that plugins can use at will.

This would mean we could completely do away with generics on Module, Service etc. across our framework code and leave it to plugins to keep the metadata properties type-safe, as well as make it more safe overall from poor plugin implementations.
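As a rough illustration of the idea (the types and the validation function below are made up for this sketch, not the actual framework interfaces), the framework could validate the plain object before letting it anywhere near the rest of the code:

```typescript
// Illustrative, non-extendable shapes. A real version would carry more
// fields plus the generic metadata object property mentioned above.
interface ServiceConfig { name: string }
interface TestConfig { name: string }

interface ParseModuleResult {
  services: ServiceConfig[]
  tests: TestConfig[]
}

function validateParseResult(value: unknown): ParseModuleResult {
  const obj = value as Partial<ParseModuleResult>
  if (!Array.isArray(obj.services) || !Array.isArray(obj.tests)) {
    throw new Error("parseModule must return services and tests arrays")
  }
  // Validating on the framework side means a poorly behaved plugin can't
  // smuggle a non-compliant object into the framework internals.
  for (const item of [...obj.services, ...obj.tests]) {
    if (typeof item.name !== "string" || item.name.length === 0) {
      throw new Error("each service and test needs a non-empty name")
    }
  }
  return { services: obj.services, tests: obj.tests }
}
```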

Another implication is that module specs can technically have whatever structure or semantics, and plugins just need to convert the spec into the above structure. For example, using the term services may not be immediately intuitive in some contexts, and in some contexts a single service (in our framework parlance) may be implicit in the module definition.

Which leads to the concern: To what degree should module configurations be strictly defined at the framework level, and how do we ensure separation between framework-native parameters and pluggable ones?

Another example (in addition to the semantics of what "service" means), is how tests are defined at the module level. We currently have a tests key in the native module spec but it includes a command key that really only makes sense for container and generic modules. So we'll need plugins to be able to specify whatever instructions for the test execution that make sense for that plugin. Do we then evaluate the test key for every module but allow plugins to extend that definition? Or do we omit it entirely from the native spec and rather try and push for conventions in how tests are defined (i.e. urging for them to have a similar layout as our internal TestConfig interface)? Same goes for the build spec.

Yet another issue is how we maintain a backward-compatible separation of native keys in module specs and plugin-specified keys. Example - say we decide today that name, description, and type are framework-native fields that we validate, but not test. Some plugins then implement test as part of their spec, aaaaand then we change our minds and want test to be in our native spec. We're gonna need to be smart about this. At the very least we need to make the fixed fields part of an API version on our side, and make sure that we catch plugins using conflicting fields properly (and before using the values).

So, I'm trying to balance these concerns against a simple and clean API. It's always possible to go full-on XML and massively overcomplicate things, but I'd like to arrive at a good compromise where we deal smoothly with these concerns and keep the DX slick.

What do y'all think?

Service name is undefined in output from "garden status" in hello-world example

Output looks like this:

➜  hello-world (master) g status
providers:
  local-kubernetes:
    configured: true
    detail:
      namespaceReady: true
      metadataNamespaceReady: true
      systemReady: true
services:
  undefined:
    endpoints:
      - protocol: http
        hostname: hello-function.hello-world.local.app.garden
        port: 32000
        url: 'http://hello-function.hello-world.local.app.garden:32000'
    runningReplicas: 1
    detail:
      resourceVersion: 2323881
      lastMessage: CREATE Ingress garden--hello--hello-world/hello-function
    state: ready
    lastMessage: ''

Notice the undefined key under the services key.

Task errors should prevent execution of dependent tasks

(Note: I'm pretty sure I caused this bug myself, but I may still assign to someone else)

Currently when a task errors, the spinner for its log entry keeps running, and dependent tasks still execute. The expected behaviour would be for an error state to be set on the log entry, and for dependent tasks to be dropped.
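The fix could look something like this sketch (names and shapes are illustrative, not the actual task graph code): once a dependency doesn't succeed, every transitive dependant is marked cancelled instead of being run.

```typescript
type TaskResult = "success" | "error" | "cancelled"

interface Task {
  name: string
  dependencies: string[]
  run: () => TaskResult // returns "success" or "error"
}

// Assumes tasks arrive in dependency order; the real graph would sort them.
function processTasks(tasks: Task[]): Map<string, TaskResult> {
  const results = new Map<string, TaskResult>()
  for (const task of tasks) {
    // If any dependency errored or was itself cancelled, don't execute.
    const blocked = task.dependencies.some((d) => results.get(d) !== "success")
    results.set(task.name, blocked ? "cancelled" : task.run())
  }
  return results
}
```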

Mirror examples to separate repo

It would be cleaner to move all the examples (and there should be many more over time) to a separate repo. We'd need to stop using hello-world as part of our tests, but we want to do that anyway.

Consistently use verb before noun in CLI commands

Currently we do both (e.g. garden run module and garden env configure). We should change this to be consistent across the board. Verb before noun seems more natural, and also more commonplace, so I think we should go for that.

Watch-mode commands don't trigger rebuilds as they should under certain circumstances

While working on #192, I came across a couple of bugs:

  1. garden build -w doesn't rebuild modules when sources change (though it does properly perform a restart when garden.yml files are affected).
  2. If module-b depends on module-a, changes made to module-a while a deploy or build task for module-b is in progress don't trigger a redeploy/rebuild as they should.

Since these aren't regressions (I encountered the same behaviours on master, using the old watchman-based logic), I decided to address these in a separate PR.

Rename `project.globals` key to `project.defaults`

Or even environmentDefaults? That would be more verbose but quite clear. Either way I think globals isn't super clear. We talked about it at the time, but I didn't follow up on it. What do you all think makes sense?

Hot reload 🔥👩🏻‍🚒

I'd like to discuss and lay out some options and a proposed design for "hot reload", a feature that would allow rapid reload of containers (and by extension any other modules that are transparently run as containers).

The problem
Currently our re-deployments always require a re-build and full restart of pods, which can take a long time compared to the built-in live reload that many servers offer in development mode. This is one of the pain-points often associated with containers and can often slow down development substantially.

People apply a number of different "hacks" to get around this, such as running development containers that mount host volumes (which only works on a local cluster), Telepresence (which is interesting but non-trivial for the end user), and probably some other options involving port proxying that I can't think of myself.

We'd like to make this as smooth as possible in our framework, so I'd like to lay out a couple of ideas that I've been mulling over.

Option 1: Automate Telepresence
I mentioned above that using Telepresence is non-trivial, but I figure many of the steps involved could be abstracted and automated away in the framework. The way it would work (in basic terms) is that we'd derive the appropriate parameters for Telepresence (such as port numbers) automatically from the container configuration and you'd get a command like garden telepresence my-service. This would deploy a proxy container in place of the my-service container in the cluster, and start a telepresence shell locally on your machine.

This would actually be a rather neat feature, but it does mean you'll need to run the command for each service you're working on, so it's not at all transparent. We'd also need to make sure other runs of garden deploy/dev/test don't interfere with the proxy (basically abort deploying unless you use --force, something like that).

Option 2: Sync code to remote volume
Much the same way we use rsync to sync to our build staging directory, we could use it to sync to a container running rsync, on top of a shared volume, which we then mount in the corresponding service container(s). This has the benefit of having the container run in its actual environment, and that we could transparently sync code for any number of services.

This would have to be "opt-in" at the service level. Not all modules would work with this out of the box, and you'd need to configure some specifics. I imagine you'd want something like the following:

module:
  type: container
  # ...
  services:
    - name: my-service
      # ...
      hot-reload:
        command: [npm, run, dev]  # the command to run instead of the normal start command
        sync:  # where to sync the module source to
          target: "/app"

Some open questions:

  1. Do we deploy a module using the hot-reload mechanism by default or not?
  2. Do we configure the hot-reload default by environment perhaps?
  3. How do we keep track of the "version" that's running when syncing code?
  4. What implications does this have in terms of dependencies?

You may note that options 1 and 2 are not mutually exclusive, but we should think about what our preferred path is and what we think would be most valuable. And of course - will there be dragons in our path towards either of these?

Discuss :)

More verbose logging when Docker is downloading images

When Docker is downloading images (e.g. when garden environment configure is initially run), maybe the log output should indicate something like Downloading image....

I ran garden environment configure inside the hello-world example project today, and the command seemed to hang (to my silly self, at least, since I hadn't touched this functionality before). The log output was in this state for ~20+ seconds:

Configuring undefined environment ⚙

✔ kubernetes         → Environment configured

... and then the Done! line suddenly appeared below. I'd CTRL-C'ed it a couple of times before that.

Would be cool to add a download progress bar if possible, not sure what Docker's API allows around that.

kubernetes: Output logs from container when it crashes on startup

Currently we get a less-than-helpful BackOff - back-off restarting failed container when a container doesn't successfully start when deploying.

A reasonable workaround is to run the troubled service with garden run service <service-name> to see what it spits out, but it would be better if we'd get the service log output when this particular error happens.

Issues with new logger

The new logger looks really promising, but could use a few fixes and improvements. We should log them here. The ones I've found so far:

  1. Emoji disappear when calling .update(). We should probably use the same formatting function for both new and updated lines to avoid discrepancies.
  2. The first section width is inconsistent between lines that use emojis/symbols and those that don't.
  3. It might be good to add a .close() method to entries, that changes the state but doesn't set a visible success/warn/error state, if only for efficiency.
  4. It would be super nice to be able to nest entries, something like
    const entry = ctx.log.info({ msg: "Deploying services" })
    const nestedA = entry.nest.verbose({ msg: "Deploying service A" })
    const nestedB = entry.nest.verbose({ msg: "Deploying service B" })
    nestedA.done()
    nestedB.done()
    entry.done()
  5. We could make the rendering flicker less with a few adjustments - basic one being to simply move the cursor whenever a character hasn't changed, instead of replacing lines.
  6. I think I'd prefer to have separate methods for updating an entry and appending to it, rather than the replace parameter.

I can add more things as I come across them, and might commit a couple of small changes here and there.

Run garden as a container

I'm not really sure if or how we're gonna do this, but I thought I'd create this issue to centralize the discussion.

The idea is to run garden as a container to avoid the whole headache with the installation process.

One issue raised was that of watching the filesystem. We can (probably) do that with docker run -v (link), which will affect performance, though we don't know whether the impact will be negligible. (Could this help?)

General links about containerizing tools:

Check for docker and k8s versions on startup

Some of our features require recent versions of the docker client and server, as well as Kubernetes. We should check for these at startup to avoid non-descriptive errors at runtime.
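A minimal version-comparison helper for such a startup check might look like this (the actual minimum versions we'd require are still to be decided, so none are hard-coded here). The key point is comparing dotted versions numerically, component by component, rather than as strings:

```typescript
// Returns true if `actual` >= `minimum`, comparing each dotted component
// numerically ("1.9" < "1.10", which a string compare gets wrong).
function versionGte(actual: string, minimum: string): boolean {
  const a = actual.split(".").map(Number)
  const b = minimum.split(".").map(Number)
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const x = a[i] ?? 0
    const y = b[i] ?? 0
    if (x !== y) {
      return x > y
    }
  }
  return true // all components equal
}
```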

Debian package

It's fairly standard to have proper packages for Debian/Ubuntu, and since we have a fair number of system dependencies I think it'd be a good idea to set up, similar to our Homebrew package for OSX.

It'd be great for it to be set up to publish automatically when releasing, though I'm not sure how easy that is.

Add screencap to README

It'd be nice to have a quick screencap showing basic usage on the README, helps to explain at a glance what Garden is and does.

Enable FancyLogger pause and resume (e.g. when intercepting other streams)

Now FancyLogger catches strings/buffers from other streams and passes them to the RootLogNode. This can cause flickering when printing user input from stdin. The current workaround is to simply stop the logger, let the normal process take over, and then resume below on the next log call, which results in the logger losing access to the content already printed. The reason we can't just pause the FancyLogger is that the logUpdate library is unaware of the new content and therefore uses the wrong prevLineCount.

A better solution would be to:

  1. Pause FancyLogger every time it intercepts other streams.
  2. Print the content using the stream it originated from.
  3. Silently add content to the log graph.
  4. Implement own logUpdate function which recognises that the graph has changed and updates the prevLineCount accordingly.
  5. Resume FancyLogger on next log call.

Multi-repo support 🌲

Supporting Garden projects that span multiple repositories will be a critical feature for many users.

To support that properly, we need to be able to do the following:

  1. Reference repositories via git URLs, in the project configuration and/or as sources in module configs.
  2. Automatically clone those repos and keep them updated.
  3. Locally link a repo to a local clone, so the user can work on multiple repos without circling through the remote repo.

Each of these has some inherent complexities and needs careful design and implementation.

1 - Referencing external sources

We can imagine a few different ways to link external sources (i.e. repos) to a project.
One is to allow a list of sources to be specified in the project config, which will all be pulled in when scanning modules, and then treated as part of the project tree. It could look something like this:

project:
  name: my-project
  # ...
  sources:
  - name: other-repo
    location: git://github.com/my/repo#master
  # ...

We'd then pull each of the referenced repos (unfortunately having to poll for updates every time) and then scan for garden.yml files in each of the clones, much like we do currently in the project directory.

Another way could be to add something like an external or sources top-level key to the garden.yml file, which would have a similar format as the above sources key. A benefit of that would be that you wouldn't have to have all the external references in the top-level project config (and they could even cascade across multiple repos), but otherwise it would work functionally the same.

And a third option would be to have an optional source key in module configs, so you'd essentially define the garden.yml for a module in the main project repo, and then just refer to a repo for its source code.

The above aren't actually mutually exclusive, but I'd prefer to pick a single canonical way to do this, at least to start. I'd personally lean towards having a top-level sources key in the garden.yml file.

2 - Cloning and syncing linked repos

Cloning the referenced repos should be fairly straightforward. I guess the only question might be where we clone them to. I think something like .garden/sources/<source name> would make sense.

Keeping the clone in sync is slightly more tricky, particularly when auto-reloading, but also not super complicated. We'd probably need to settle for basic polling for changes, possibly with a configurable poll rate. You might also want to disable automatic updates if we provide a command to manually fetch the latest changes from remote (e.g. garden update-sources).

3 - Link local working copies

Of course you'll want to be able to work on linked repos without having to push changes to the remote all the time. I figure we could add a command to do that locally (since you don't want to hard-code a reference to a local directory in the garden.yml config). Something like:

garden link-source name-of-source ../path/to/local/clone

This "link" would then be stored in your local config and when present the framework will simply look to the local path for the linked sources. This would mean that the user is then responsible for having their local working copy up to date, on the correct branch etc. (because we can't reliably take care of that, in case the working copy is modified/dirty etc.). And we'd of course need a corresponding unlink command.
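As a sketch (the config shape and the .garden/sources path are assumptions for illustration), resolving a source would just check the local config for a link first:

```typescript
interface LocalConfig {
  // source name -> path to the user's local working copy
  linkedSources: { [name: string]: string }
}

function resolveSourcePath(
  name: string,
  localConfig: LocalConfig,
  projectRoot: string
): string {
  const linked = localConfig.linkedSources[name]
  if (linked) {
    // User-linked working copy: the user is responsible for keeping it
    // up to date and on the right branch.
    return linked
  }
  // Otherwise fall back to the clone the framework manages.
  return `${projectRoot}/.garden/sources/${name}`
}
```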

I imagine it could get a little thorny for users to separately commit changes in multiple repos etc. but then again they likely already have that problem...

I think that mostly covers what needs to happen. Am I missing something?

File watcher should handle changes to garden.yml files

Currently, only builds, deploys and tests (depending on the command) are run when module sources change. If a garden.yml file is modified, the changes aren't reflected in the process.

Reloading modules should not be too difficult if we specifically catch changes to a module's garden.yml file. We would basically need to trigger a reload of the module (which is possible currently in the Garden class), and stop+restart the processModules method (otherwise we could run into a bunch of inconsistencies).

Handling the project garden.yml might be slightly trickier, but should be achievable. We'd need to refactor the Garden class slightly to allow it to be reset, and we'd need to stop and restart processModules as well.

The alternative would be to just restart the whole process, but that would be kinda lumpy if you're making rapid changes to configs.

In any case we'd also need to elegantly handle errors in the garden.yml file, and not just crash the process.
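The dispatch logic the watcher would need can be sketched roughly like this (the action names are illustrative, not real commands or handlers):

```typescript
type WatchAction = "reload-module" | "reload-project" | "process-sources"

function classifyChange(changedPath: string, projectRoot: string): WatchAction {
  if (!changedPath.endsWith("garden.yml")) {
    // Plain source change: run the normal build/deploy/test processing.
    return "process-sources"
  }
  // A garden.yml directly in the project root affects the whole project
  // and requires resetting the Garden instance; any other garden.yml only
  // requires reloading that module.
  const parent = changedPath.slice(0, changedPath.lastIndexOf("/"))
  return parent === projectRoot ? "reload-project" : "reload-module"
}
```

In both reload cases, processModules would then need to be stopped and restarted to avoid inconsistencies.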

`garden logs` fails when one or more services haven't been deployed

The command errors out with something like

Error from server (NotFound): deployments.extensions "hello-function" not found
Error from server (NotFound): deployments.extensions "hello-container" not found

The expected behavior would be to ignore missing services, and ideally to retry fetching the logs until they are deployed.
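A sketch of the expected behavior (the fetchLogs callback stands in for the real kubectl-based log fetching; a fuller version would also retry until the deployment appears):

```typescript
function collectLogs(
  services: string[],
  fetchLogs: (service: string) => string // throws if the service isn't deployed
): { [service: string]: string } {
  const logs: { [service: string]: string } = {}
  for (const service of services) {
    try {
      logs[service] = fetchLogs(service)
    } catch {
      // Service not deployed yet: skip it instead of failing the command.
    }
  }
  return logs
}
```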

Allow multiple module definitions in one file

To quote @eysi09 from #55:

Something like:

project:
  name: hello-world
  environments: ...
modules:
  - name: module-a
    dir: /path/to/module-a
    description: Module A description
    type: container

  - name: module-b
    dir: /path/to/module-b
    description: Module B description
    type: container

I think this will make sense especially when we allow referencing other git repos, which we'll need to do soon.

Module watcher should also watch build dependencies

Currently when build dependencies that are not explicitly being watched are modified, we don't trigger execution. For example, when I run garden deploy module-a -w and module-b is a build dependency, nothing happens when we modify module-b. Should be fairly easy to fix.

Homebrew package

It would be super nice to have a homebrew package, and to automate the release process along with the npm release. Especially since we do have peer dependencies like git and rsync, that we can't bundle, and we're likely to add more of those if anything.

Enable running garden commands from subdirectories in a project directory

Currently you can only run from the root directory of a project. We should add a step when starting up to walk up the directory tree from the current working directory and find the closest garden.yml that has a project key.

The logic for this should probably be in the Garden.factory() method, since it will involve parsing garden.yml files.
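A sketch of that lookup, with the project-config check injected as a predicate so the walk itself stays simple (the real version would parse each garden.yml and look for a project key):

```typescript
// Walk up from cwd towards the filesystem root, returning the first
// directory that holds a project-level garden.yml, or null if none does.
function findProjectRoot(
  cwd: string,
  isProjectConfigDir: (dir: string) => boolean
): string | null {
  let dir = cwd
  while (true) {
    if (isProjectConfigDir(dir)) {
      return dir
    }
    if (dir === "/") {
      return null // reached the filesystem root without finding a project
    }
    const idx = dir.lastIndexOf("/")
    dir = idx <= 0 ? "/" : dir.slice(0, idx)
  }
}
```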

local-gcf-container built with wrong path

When running garden env configure in the hello-world example, the buildModule method of the ContainerModuleHandler class attempts to build local-gcf-container using the relative build dir path as opposed to /garden/static/local-gcf-container.

Throws the following error:

ChildProcessError: Command failed: docker build -t local-gcf-container:0000000000 /Users/eysi/code/garden-io/garden/examples/hello-world/.garden/build/local-gcf-container
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /Users/eysi/code/garden-io/garden/examples/hello-world/.garden/build/local-gcf-container/Dockerfile: no such file or directory
`docker build -t local-gcf-container:0000000000 /Users/eysi/code/garden-io/garden/examples/hello-world/.garden/build/local-gcf-container` (exited with error code 1)
    at callback (/Users/eysi/code/garden-io/garden/node_modules/child-process-promise/lib/index.js:33:27)
    at ChildProcess.exithandler (child_process.js:282:5)
    at emitTwo (events.js:126:13)
    at ChildProcess.emit (events.js:214:7)
    at maybeClose (internal/child_process.js:925:16)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:209:5)

Make rsync dependency optional

Having the hard dependency on rsync will be a problem on Windows, and we want Garden to be easy to set up in any case.

I haven't found a well-maintained library that directly mimics rsync, but we should be able to conjure something in pure JS and leverage the watcher functionality to copy files as they change.

Make `error.log` more readable

Parsing the error log is a little difficult as it is currently (JSON per line). I wonder if we could do something like output YAML instead, and prefix each line with a timestamp for clarity? Either that or provide a command that makes it easy to read.

I also noticed that we're just stringifying exceptions, which often omits a bunch of detail. For example, we include a detail property on GardenError instances, which includes helpful information for debugging. Maybe we could handle those specially, even when we print out the error in the CLI.
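For illustration, a YAML-style rendering of an error entry with a timestamp prefix could look like this (the entry shape here is an assumption, loosely modelled on the GardenError detail property mentioned above):

```typescript
interface LoggedError {
  timestamp: string
  message: string
  detail?: { [key: string]: string }
}

// Render one error as an indented YAML-ish block keyed by timestamp,
// keeping the detail object instead of stringifying it away.
function formatErrorEntry(err: LoggedError): string {
  const lines = [`${err.timestamp}:`, `  message: ${err.message}`]
  if (err.detail) {
    lines.push("  detail:")
    for (const [key, value] of Object.entries(err.detail)) {
      lines.push(`    ${key}: ${value}`)
    }
  }
  return lines.join("\n")
}
```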

garden cli fails when using -log global option but not when using --loglevel

When running garden with the -log option, garden fails with the below error, but it works when using --loglevel

node ~/devel/node/garden/dist/src/bin/garden.js status -log verbose

USAGE

  garden status [options]

Global options
  -h, --help        Show help
                    [boolean]
....

  -log, --loglevel  Set logger level.
                    [enum] [default: info] [error, warn, info, verbose, debug, silly]


  -o, --output      Output command result in specified format (note: disables progress
                    logging).
                    [enum] [json, yaml]

Value "" is invalid for argument o or output. Choices are: json, yaml

Local GCF handler is brittle and relies on host paths

The implementation is over-complicated and brittle. It should be rewritten to not rely on local host paths.

One option is to simply build a container for each function. Not optimal performance/RAM wise, but most straightforward in implementation.

Another would be a slightly more elaborate way, by creating a meta-module that copies sources from each GCF function via build dependencies, and deploys all of them together in a container. The downside being that all functions need to be restarted when updating any of them, but in turn much more efficient in terms of memory usage (not least because the GCF emulator is a bit heavy).

Merge `garden-project.yml` and `garden.yml` into one format?

It bothers me a little that we have one type of file called garden-project.yml and then garden.yml files in module directories. Both the aesthetics of it, and for a practical reason, which is that a project will be able to reference other repos. So when you’re in a repository that contains a single module, there is no garden-project.yml and we’ll have a weird usability issue (what happens when you run garden in a module repo root?).

My proposal is this: We merge the two spec files into one garden.yml, and add top level project and module keys (and possibly a plural modules or external-modules key, to facilitate adding multiple references to other repos).

Example

Project root:

project:
  name: hello-world
  environments:
    local:
      providers:
        docker:
          type: kubernetes
          context: docker-for-desktop
        gcf:
          type: local-google-cloud-functions
    dev:
      providers:
        docker:
          type: google-app-engine
        gcf:
          type: google-cloud-functions
          default-project: garden-hello-world
  variables:
    my-variable: hello-variable

Module:

module:
  description: Hello world container service
  type: container
  services:
    hello-container:
      command: [npm, start]
      endpoints:
        - paths: [/hello]
          containerPort: 8080
      healthCheck:
        httpGet:
          path: /_ah/health
          port: 8080
      dependencies:
        - hello-function
  build:
    dependencies:
      - hello-npm-package
  test:
    unit:
      command: [npm, test]
    integ:
      command: [npm, run, integ]
      dependencies:
        - hello-function

I feel like this'd feel a bit cleaner, plus it opens up a bit of flexibility in our configuration down the road imo. Thoughts?

Use numeric log levels instead of (or in addition to) named levels?

Just a thought. I for one always get confused as to which is more verbose, "verbose" or "debug". And I can imagine us wanting to set multiple levels at some point.

For example, in kubectl you can set --v=8 to dump a lot of logs. We could for example alias the named levels to numeric levels (0=error, 1=warning, 2=info, etc.), and perhaps make -l an alias for --loglevel to make it more concise.
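To make the idea concrete, here's a minimal sketch of how named levels could be aliased to numeric ones. The level names and values are assumptions for illustration, not a decided design:

```typescript
// Hypothetical mapping of named log levels to numeric levels
// (names and values are illustrative, not the final design).
const LOG_LEVELS: Record<string, number> = {
  error: 0,
  warning: 1,
  info: 2,
  verbose: 3,
  debug: 4,
}

// Accept either a name or a number on the CLI, e.g. --loglevel=debug or -l 4
function parseLogLevel(input: string): number {
  if (input in LOG_LEVELS) {
    return LOG_LEVELS[input]
  }
  const num = parseInt(input, 10)
  if (isNaN(num) || num < 0 || num > 4) {
    throw new Error(`Invalid log level: ${input}`)
  }
  return num
}

console.log(parseLogLevel("debug")) // 4
console.log(parseLogLevel("2"))     // 2
```

This would also resolve the "which is more verbose?" confusion, since the numbers order themselves.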

`parseModule()` should not return a Module instance

It's a bit of a stability/security concern to let plugins return potentially non-compliant objects that are used in the framework. The parseModule() handler should instead return a module config that can be validated, potentially with a free-form key that can be passed back to plugin action handlers.

Plugins would then be responsible for upgrading basic Module objects to more specific subclasses, or we could perhaps add some convenience feature for doing that automatically before passing to action handlers.
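A rough sketch of the proposed shape (all names here are illustrative, not the actual Garden plugin API): parseModule() returns a plain config object that the framework can validate, with a free-form spec key that gets passed back to the plugin's action handlers:

```typescript
// Hypothetical shape: parseModule returns plain, validatable data
// instead of a Module instance. Names are illustrative only.
interface ModuleConfig {
  name: string
  type: string
  // free-form, plugin-specific data, passed back to plugin action handlers
  spec: Record<string, unknown>
}

// Framework-side validation of the plugin's output
function validateModuleConfig(config: ModuleConfig): ModuleConfig {
  if (!config.name || !config.type) {
    throw new Error("Module config must specify name and type")
  }
  return config
}

// Example plugin handler returning data, not a class instance
function parseModule(raw: { name: string; type: string; [key: string]: unknown }): ModuleConfig {
  const { name, type, ...spec } = raw
  return validateModuleConfig({ name, type, spec })
}

const config = parseModule({ name: "hello", type: "container", image: "node:18" })
console.log(config.spec.image) // "node:18"
```

The framework only ever handles the validated plain object; upgrading to a richer class (if needed) happens on the plugin side, or via a convenience helper before action handlers are called.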

Move package root to subdirectory

I found out the hard way that lerna doesn't really support having a package in the repo root. I figure we'll also have multiple packages in the repo at some point, so I'm thinking we should move all the stuff we end up packaging and releasing as garden-cli to a garden-cli subdirectory.

We'd keep some stuff like bin, docs and some config stuff in the root. I figure it'd also make the top-level directory a bit cleaner; it's a bit crowded at this point.

Any objections...?

User configuration

I'm working on the ability for users to set configuration parameters locally (per project/repository) and I’m wondering about how we might handle the semantics of configuration, internally and (perhaps most importantly) in the CLI.

For context, we may eventually end up having several different types of configuration:

  1. Custom project parameters that are intended to be populated locally by each user of a project. This could for example apply to user-specific keys, flags that individual users can set while developing/debugging etc. and can in general be used to supply secrets (although we will also need to provide a specific secrets storage in the hosted platform).
  2. Custom project parameters that are stored remotely and shared across the team and all environments. For example authentication keys for 3rd party services, that are not specific to individual developers or environments.
  3. Custom project parameters that are stored remotely but are specific to a single development environment (which may or may not be shared across the team). For example authentication keys for 3rd party services that need to be different for individual environments (e.g. a specific key for an analytics service in the shared staging environment).
  4. Built-in local per-project configuration parameters, such as authentication info for the hosted platform, CLI preferences (e.g. defaulting to output JSON or YAML). The names/schemas of these parameters could be specified by plugins.
  5. Built-in project parameters that are stored remotely and shared across the team. Could apply to anything that should be kept secret but would otherwise belong in the garden-project.yml file, for example custom SSL certificates for the ingress endpoint.
  6. Built-in global parameters for the garden framework. This could again apply to authentication info for the hosted platform, CLI preferences (e.g. defaulting to output JSON or YAML) etc. I'm unsure if we want something like that, or if we should stick to just per-project/repo configuration.

Or to frame this another way, configuration parameters may be:

  • Built-in at the framework level
  • Defined by plugins and integrations/apps
  • Custom (i.e. implicitly defined in the project config files via template strings)

And may have the following scopes:

  • Account/Team (we may introduce a distinction between those at some point, for enterprise accounts with multiple teams)
  • Project
  • Environment
  • User/local (should those have a distinction, i.e. remote user parameters and local user parameters?)

So that's all in all quite a few things to consider (and I may be missing something as well?). We won't implement all of the above in the short term, but what I'm grappling with now is how we would like to see this look in our CLI and config files, such that it is easy to understand and elegant, since it would be annoying to change substantially later.

To try and decompose this a bit, I'm going to raise some questions that I've bumped into, discuss each of those, and then try and converge on something that looks pretty good.

Do we want to have a semantic distinction between remotely hosted custom configuration variables and secrets?
Secrets are basically configuration variables, except with a clear indication that they're, well, secret (meaning encrypted remotely etc.). Since I see no reason why you would want non-secret variables on the remote platform, I'm leaning towards having no distinction there, but rather considering just the two "dimensions" listed above.

When referencing these parameters, should we explicitly reference them with a scope (e.g. user.some.key and team.some.key) or implicitly collapse values based on scope (so just write some.key and use the value defined in the most granular scope)? Should we allow both?
There's no cut-and-dried correct answer here. The former has the benefit of being simpler to implement, as well as being more explicit, so it's less likely to cause confusion as to where something is configured. The latter in turn allows more flexible overrides at different levels of scope (e.g. having default values at the account level, but allowing users to override). Allowing both would raise some edge case issues, like having to disallow user, team etc. as top-level names for parameters, and my feeling is I'd rather have it explicitly work one way or the other. That is unless we add another "scope" that's called something like all, combined, collapsed, coalesced or something like that (ideas welcome), so you can explicitly choose to use the most granularly defined value.
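For the "implicitly collapse" option, the resolution logic would be simple enough. A minimal sketch, assuming the scope list from above and a merge order from least to most granular (both assumptions, not a decided design):

```typescript
// Sketch of "collapse by most granular scope". Scope names come from the
// list earlier in this issue; the precedence order is an assumption.
type Scope = "account" | "project" | "environment" | "user"

// Ordered from least to most granular
const SCOPE_PRECEDENCE: Scope[] = ["account", "project", "environment", "user"]

type ConfigStore = Partial<Record<Scope, Record<string, string>>>

// Resolve a key to the value defined in the most granular scope
function resolve(store: ConfigStore, key: string): string | undefined {
  let value: string | undefined
  for (const scope of SCOPE_PRECEDENCE) {
    const v = store[scope]?.[key]
    if (v !== undefined) {
      value = v // later (more granular) scopes override earlier ones
    }
  }
  return value
}

const store: ConfigStore = {
  account: { "some.key": "account-default" },
  user: { "some.key": "user-override" },
}
console.log(resolve(store, "some.key")) // "user-override"
```

The explicit-scope alternative would skip this loop entirely and just index `store[scope][key]` directly, which is part of why it's simpler to implement.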

How should we separate between built-in parameters, plugin parameters and custom/project parameters?
I think each of the above should have its own namespace at the same level, but a follow-on question could be whether other template resolvers (e.g. the one for getting environment variables in the user's environment) are also at the same level, or whether these should fall under a top-level config. prefix. I'm leaning towards the latter, for consistency and clarity, so parameter values could be accessed with ${config.<namespace>.<key>}.
As for the namespace names, the built-in namespace can simply be called garden. Plugins could have their own namespace, which will simply be the name of the plugin. I'm not 100% sure what to call the custom namespace. custom doesn't sound quite right to me, but maybe it's fine, and I can't think of a clear alternative. Another way could be to not have a namespace for custom parameters, and instead use some symbol prefix for built-in/plugin parameters, e.g. ${@garden.some-key}. Looks a bit weird though, because it doesn't look like a normal identifier, no matter which prefix we use (except _, which doesn't work because it makes it look like a private property).

Should there be a distinction between user config values and local config values?
This one is at least fairly easy. We can just start with local and if we see a need for user configuration that is stored remotely per user, we can add it later under a user scope.

Should there be a special indication of namespace and/or scope in the CLI when setting configuration values (e.g. Heroku-style garden config:local set bla=ble as opposed to garden config set local.bla ble)?
This may depend a bit on how we answer the above questions on namespaces/scope, but I'd lean towards "nah" and say keep it simple. If we make this distinction, we'd kinda need it to look similar in template strings for consistency, and I can't think of a solid way to do that or a good reason to.

... Anyway, by now you likely understand why I'm struggling a bit with this. 😖

I'll keep thinking on this and put in some suggestions on how this might look, but thoughts would be appreciated.

Version of plugin modules is always "0000000000"

This is because our current Module.getVersion() code assumes that modules live in a project git tree. We need to add more ways to specify versions, and/or automatically retrieve the version of the plugin and use that instead. Or I suppose we could hash the contents of the plugin directory, but that feels like it would be flimsy. Ideas welcome.
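For reference, the directory-hashing fallback could look something like this (a sketch using only Node built-ins; function names are illustrative, and this is the admittedly flimsy option, e.g. it ignores file modes and symlinks):

```typescript
// Sketch of the "hash the plugin directory" idea: walk the directory,
// hash relative paths + file contents, truncate to a short version string.
import * as crypto from "crypto"
import * as fs from "fs"
import * as path from "path"

function hashDirectory(dir: string): string {
  const hash = crypto.createHash("sha256")
  const walk = (d: string) => {
    for (const entry of fs.readdirSync(d).sort()) {
      const full = path.join(d, entry)
      if (fs.statSync(full).isDirectory()) {
        walk(full)
      } else {
        // include the relative path, so renames also change the version
        hash.update(path.relative(dir, full))
        hash.update(fs.readFileSync(full))
      }
    }
  }
  walk(dir)
  return hash.digest("hex").slice(0, 10)
}
```

Sorting entries keeps the hash stable across platforms, but this still wouldn't capture anything outside the directory (installed dependencies, the plugin's own declared version, etc.), which is why retrieving the plugin version directly seems more robust.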

Running multiple garden commands that trigger builds at the same time is likely to fail

... or cause inconsistent/invalid builds.

This could be resolved using locks at the file-system level, by staging builds differently, or even by using a daemon process to make sure a single task graph runs for the user.

Other concurrent/conflicting tasks, such as tests and deployments, could also fail or cause issues (depending on how the providers are implemented) but conflicting build tasks are most likely to have a negative impact.
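The file-system lock option is probably the simplest to start with. A minimal sketch, assuming a lock file in the project directory (this is not implemented behavior, and it doesn't handle stale locks left behind by crashed processes):

```typescript
// Minimal file-system lock sketch: create the lock file exclusively,
// and fail fast if another garden process already holds it.
import * as fs from "fs"

function acquireLock(lockPath: string): () => void {
  let fd: number
  try {
    // "wx" fails with EEXIST if the file already exists
    fd = fs.openSync(lockPath, "wx")
  } catch (err) {
    if ((err as NodeJS.ErrnoException).code === "EEXIST") {
      throw new Error("Another garden process is already building in this project")
    }
    throw err
  }
  // record the holder's PID, which would help with stale-lock detection later
  fs.writeSync(fd, String(process.pid))
  fs.closeSync(fd)
  // return a release function
  return () => fs.unlinkSync(lockPath)
}
```

A real implementation would also want stale-lock detection (e.g. checking whether the recorded PID is still alive) and probably blocking/waiting semantics instead of failing immediately, which is roughly where the daemon-process idea starts looking attractive.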

Error messages need to be much clearer

There are quite a few places where we emit rather opaque error messages. I'll gather some examples here as I come across them. By the way, we should also format error messages in a nicer and clearer way in general, instead of just spitting out an object, which may involve changing the shape of our error objects.

  • Error when failing to call docker CLI commands doesn't indicate why
  • Error when Kubernetes provider can't be reached should say that specifically
  • When key in template string is not found, it would be good to list which top-level keys are available (because it may not be obvious what's available in the context of each string).
