
stacks's Introduction

Stacks

Heighliner stacks to speed up app dev.

Local Development

# Clone the repo or just do `git pull`
git clone git@github.com:h8r-dev/stacks.git

# Install git hooks
make install-hooks

If you want to live-reload the chain modules, you need to install Go and set up the GOPATH and GOBIN environment variables first. Then run:

# Watch files and develop
make watch
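If Go is installed but the environment variables are not yet configured, a minimal setup (assuming a standard `$HOME/go` workspace; paths are illustrative) looks like:

```shell
# Minimal Go environment setup (assumes Go itself is already installed).
export GOPATH="$HOME/go"       # workspace root
export GOBIN="$GOPATH/bin"     # where `go install` places binaries
export PATH="$PATH:$GOBIN"     # make installed tools runnable
```

Add these lines to your shell profile so `make watch` can find the Go toolchain in every session.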

Layout

  • chain/: This contains the chain (CUE) modules.
  • official-stack/: This contains the official stacks.

Documentation

Repo Structure

This repo provides the following CUE modules:

  • The entire repo can be imported as a CUE module.
  • Each stack can be imported as a CUE module.
  • The cuelib can be imported as a CUE module.

This repo contains the following stacks:

stacks's People

Contributors

92hackers, hongchaodeng, lyzhang1999, vgjm, xinxinh2020, yni9ht, yuyicai, zhangze-github


stacks's Issues

[Feature] Helm repo could be generated automatically, hidden from user.

Background

Currently we must specify a Helm repo in order to deploy apps with a stack. However, the Helm repo is not part of the user's business concern, so it should be hidden from the user, and there should be no need to write a Helm repo section in the stack plan.

Solution

Treat a Helm repo as a deployment best practice: generate one automatically when executing a stack.

[Proposal] How to customize a new stack? From the perspective of a developer

Background

As a community developer, I want to customize a new Stack with the Heighliner tools. What should I do?

Currently, all stacks are created by the Heighliner team, and their source code lives in the h8r-dev/stacks GitHub repo.

What developers want is to own the stack repos they create: Heighliner Stacks is just an engine to them. They create custom chain components in their own stack repo, import the 'h8r-dev/stacks' library, and then build their own stack.

Solution

All chain components used by a stack should be imported directly in the plan.cue file, and the customizable parts of a stack should live alongside plan.cue and be imported from the plan file.
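A hypothetical developer-owned plan might then look like the sketch below. The module paths and component names are illustrative only, not the actual h8r-dev/stacks API:

```cue
package main

import (
	// Illustrative imports: shared chain components from the library,
	// plus custom components kept in the developer's own repo.
	"github.com/h8r-dev/stacks/chain/scm/github"
	mychain "example.com/me/my-stack/chain/custom"
)

plan: {
	// Compose shared and custom chain components into one stack.
	createRepo: github.#CreateRepo & {name: "my-app"}
	deploy:     mychain.#Deploy & {repo: createRepo.output}
}
```

The point of the design is that `h8r-dev/stacks` is consumed as an ordinary CUE dependency, while the stack itself lives in the developer's repo.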

Reorg repo directories

The current directory layout is confusing. We need to reorganize them in the following ways:

  • cuelib/ is the first generation of CUE libraries and has a lot of problems. We later introduced chain/ to provide more structured modules. We should move cuelib/ into chain/ and make it an internal package.
  • We should document the structure of a chain module better. That includes:
    • What the input/output files and definitions mean and what they are used for.
    • Each #Instance should have a comment block describing what it does and how it works.
    • If time permits, we should convert these into website docs automatically.
  • The supply/ package should be renamed to factory or something similar.
    • Its helpers should not be chains under its directory; otherwise we should move them out to top-level chains.
  • tools/* should be moved to top-level chains. Basically, everything in chain/ is a tool to users.

[Terraform]: A K8S cluster is required by Heighliner just to create source code repos.

Background

Heighliner manages resources (currently only GitHub repos) with Terraform, which generates a state file that records resource creation status.

Currently, heighliner saves the state file as a Secret in a K8S cluster, which leads to the following problem:

As developers, to run a stack we have to provide a K8S cluster to heighliner. But in situations such as creating GitHub repos, there is no need for a K8S cluster, yet heighliner requires one.

Solution

  1. When no K8S cluster is provided, heighliner saves the Terraform state file to the user's local disk, the same way output.yaml is handled.
  2. Or, ignoring non-CD cases, keep requiring a K8S cluster to run a stack.
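For option 1, Terraform's built-in `local` backend already covers storing state on disk; a sketch (the path is illustrative, not a path the project actually uses):

```hcl
terraform {
  backend "local" {
    # Store state on the user's local disk instead of a K8S Secret.
    path = ".hln/terraform.tfstate"   # illustrative location
  }
}
```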

Source code size is too big

du -hs code/go-gin 
136M    code/go-gin

It's hard to sync such a big chunk of code into a newly initialized GitHub repo.

[Optimize]: Split the tools installed in the base image into chunks installed by each consumer step.

Background

When running a stack, it downloads a big docker image, which takes a long time and leads to a bad user experience.

Solution

Use Alpine as the base image, and make every consumer step responsible for installing its own dependencies.

Since stack steps run concurrently, the total run time may be reduced.
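A sketch of what a self-contained consumer step could look like in a Dagger plan. The package path follows the `universe.dagger.io` convention; the step name and installed package are illustrative, not the project's actual code:

```cue
package main

import (
	"universe.dagger.io/docker"
)

// Pull a small base image once.
_base: docker.#Pull & {source: "alpine:3.16"}

// Each consumer step installs only its own dependencies.
helmStep: docker.#Run & {
	input: _base.output
	command: {
		name: "sh"
		args: ["-c", "apk add --no-cache helm"]  // illustrative package
	}
}
```

Because each step pulls only a tiny base plus its own packages, the expensive all-in-one image download disappears from the critical path.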

Benchmark

A benchmark test is needed to verify this.

Target metric: first-run stack execution time.

[Feature]: The `docker-publish` GitHub Action workflow should support depending on another workflow.

Background

Currently, every application project repo created by Heighliner includes a GitHub Action workflow file called docker-publish.yml, which is used to build a docker image and push it to the GitHub container registry.

In a real application-level project, there is a bunch of work to do before building the final docker image, such as lint, static checks, and tests.

So docker-publish.yml must be able to depend on the success of other custom workflows.

Solution

GitHub Actions provides the workflow_run event to handle dependencies between multiple workflows; see https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#workflow_run for more details.
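A sketch of how docker-publish.yml could be gated on an upstream workflow via workflow_run. The upstream workflow name "CI" is illustrative; the event and `conclusion` check are standard GitHub Actions syntax:

```yaml
# docker-publish.yml (excerpt)
on:
  workflow_run:
    workflows: ["CI"]        # illustrative: the lint/test workflow's name
    types: [completed]
    branches: [main]

jobs:
  build-and-push:
    # Only build and push when the upstream workflow succeeded.
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    runs-on: ubuntu-latest
```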

[Feature]: How to maintain the template source code of a stack.

Background

Currently, every stack carries its own copy of the code/ dir, which contains a backend project named go-gin and a helm project named helm.

If some file content needs to be updated, we have to modify all of these copies.

Solution

Keep just one copy of the code/ directory: move it out of the Stack dir, and when deploying a stack, copy that code/ dir into the stack temporarily.

[Feature]: Use more powerful storage backend to store terraform state file.

Background

We use Terraform in our stacks, and Terraform generates a state file that must be saved for Terraform to work properly.

Currently, we save the Terraform state file in a K8S Secret, which has a size limit of 1 MB; it should be replaced by a more powerful storage backend in the future.

Solution

Terraform supports etcd, consul, and other storage backends for the state file; the backend documentation can be found at:

https://www.terraform.io/language/settings/backends/local
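For example, a consul backend configuration looks roughly like the sketch below (the address and path are illustrative placeholders):

```hcl
terraform {
  backend "consul" {
    address = "consul.example.com:8500"   # illustrative address
    scheme  = "https"
    path    = "heighliner/terraform-state"
  }
}
```

A remote backend like this removes both the 1 MB Secret limit and the hard dependency on a K8S cluster for state storage.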

gin-next stack

  • frontend (next.js) @92hackers
    • create the project using next.js with APP_NAME (framwork/react/next.cue)
    • set up the GitHub Action (ci/github/action.cue)
    • set up the Dockerfile (framwork/react/next.cue)
  • backend (gin) @yuyicai
    • gin framework layout (generated from a template with APP_NAME)
    • set up the GitHub Action (ci/github/action.cue)
    • set up the Dockerfile (framwork/kratos/kratos.cue)
  • helm @lyzhang1999
    • create the helm chart
    • add services

Does not support multiple KUBECONFIG env

Official notes on using the KUBECONFIG environment variable: see "Setting the KUBECONFIG environment variable".

Local environment settings:

export KUBECONFIG=$KUBECONFIG:$HOME/.kube/work/ysz-dev
export KUBECONFIG=$KUBECONFIG:$HOME/.kube/ni9ht/k3s-sh

Problem description:
When the local KUBECONFIG environment variable contains multiple paths, running dagger do up ./plans fails while fetching the kubeconfig file, with the following error:

[✗] client.commands.kubeconfig                                                                                                                               0.0s

10:59AM FTL failed to execute plan: task failed: client.commands.kubeconfig: exit status 1

Suggested fix:
From the current code, this environment variable is mainly used to read the kubeconfig file contents. Consider using a custom environment variable so the user can specify the kubeconfig file of the cluster the application should be deployed to.
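A sketch of the suggested change in the plan's client section. HLN_KUBECONFIG is a hypothetical variable name that would point at exactly one file, sidestepping KUBECONFIG's colon-separated multi-path semantics:

```cue
client: {
	env: {
		// Hypothetical single-file variable, avoiding the
		// multi-path semantics of KUBECONFIG.
		HLN_KUBECONFIG: string
	}
	commands: kubeconfig: {
		name: "cat"
		args: ["\(env.HLN_KUBECONFIG)"]
		stdout: dagger.#Secret
	}
}
```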

[Feature] add Github Action CI to verify stacks

Currently the gin-vue stack has no CI to verify it on pull requests. We need to set up CI for it. Here are two approaches we can try:

Pure Github Action

There is an Action that can run dagger inside GitHub Actions: https://docs.dagger.io/1201/ci-environment
We just need to add a verify step to it.

Github Action + Self-hosted Runner

We can use GitHub Actions to trigger tests that actually run on our self-hosted runner: https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners

This way we can control the specific binary versions, environment, etc.

Specify custom domain

Currently the domain h8r.site is hardcoded. We should let the user specify it; when a custom domain is provided, we don't need to change /etc/hosts.
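In CUE this could be expressed as a defaulted input; a sketch (the field name is illustrative):

```cue
input: {
	// User-specified ingress domain; falls back to h8r.site when unset.
	domain: string | *"h8r.site"
}
```

CUE's `*` default marker means a plan that never sets `domain` keeps today's behavior, while any user-supplied value overrides it.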

[Feature] Modular, Declarative Stack Design

Problem

Currently, each stack is written as a Dagger plan:

dagger.#Plan & {
  client: {
    filesystem: {
      "...": read: contents: dagger.#FS
      "...": write: contents: actions.up.outputYaml.output
    }
    commands: kubeconfig: {
      name: "cat"
      args: ["\(env.KUBECONFIG)"]
      stdout: dagger.#Secret
    }
    env: {
      KUBECONFIG:   string
      ...
    }
  }

  actions: {
    up: {
      outputYaml: output: string
      
      installIngress: {...}

      installNocalhost: {
        ingressIP: installIngress.output.IP
      }
      ...
    }
  }
}

The problem with this design is that it is hard to compose Stacks. For example, I may want to compose two Stacks into one: one that deploys serverless apps and one that installs middleware such as nginx. Currently there is no way to put two Dagger plans into one Dagger plan.

Proposal

To make Stacks composable, we need to redesign a Stack to be a module. We can run a Stack module using hln, and a Stack can import other Stacks as modules.

Here is what a Stack module would look like:

input: {
  env: {
    KUBECONFIG: string
  }
  commands: {
    kubeconfig: {
      name: "cat"
      args: ["\(env.KUBECONFIG)"]
      stdout: h8r.#Secret
    }
  }
  files: {
    "...": read: contents: h8r.#FS
    "...": write: contents: actions.up.outputYaml.output
  }

  // These are the fields that we will read directly from `app.yaml`
  config: {
    image: string
    deploy: {
      cmd: [...string]
      port: int
    }
  }
}

output: {
  // These are the fields that we will write to `.hln/output.yaml`
  local: {
    ingressIP: string
    ingressPort: int
    ingressHost: string
    ingressURL: string
  }
}

up: {
  installIngress: {
    name: input.config.name
  }
  installNocalhost: {
    ingressIP: installIngress.IP
  }
}

Basically, a stack is a module that has input and output, and does a bunch of stuff under the hood.

How running a Stack works

When we run hln up for a stack, hln basically renders it into a Dagger plan. The input and output will be rendered into client sections, and up will be rendered into actions sections.

A special case is that we can keep input.config as is and fill it with fields from app.yaml.

Reusing Stacks

Let's say you have two stacks with the above format, called serverlessapp and middleware. You can compose them in the following way:

import (
  "serverlessapp"
  "middleware"
)

input: {
  serverlessapp.input
  middleware.input
}

output: {
  url: up.installServerlessApp.output.url
}


up: {
  installMiddleware: middleware.up
  installServerlessApp: {
    wait: installMiddleware.output.ready
    up: serverlessapp.up
    output: {
      url: up.output.url
    }
  }
}

When we run hln up for this plan, only the plan above is rendered into a Dagger plan; both serverlessapp and middleware are consumed as modules.

[Proposal] Add a `go bug`-like bug report mechanism

Background

Many mistakes or wrong configs lead to a failed Stack run, and currently there is no way to see what happened when a user runs a stack, even though that information is very important for helping us improve Stacks.

As the Stack maintainer team, we should introduce a way to help us find out why a user's Stack run failed.

Solution

We could collect the user's environment info plus the error stack or error message, and send it to us.
go bug is a good example that we can learn from; the user decides whether to report or not.
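A minimal sketch of what such a report could gather. Everything here is hypothetical: the HLN_LOG variable and the report layout are illustrative, not an existing hln feature:

```shell
# Collect environment info and the last error output into a report file.
REPORT=$(mktemp)
{
  echo "## Environment"
  uname -a                              # OS and architecture
  echo "## Error"
  tail -n 50 "${HLN_LOG:-/dev/null}"    # hypothetical log location
} > "$REPORT"
# The user then reviews $REPORT and decides whether to send it upstream,
# mirroring how `go bug` pre-fills but never auto-submits a report.
```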

[Problem] yarn registry should be different in different locations.

Background

Currently, we use https://registry.npmmirror.com as the yarn registry in stacks, which is very slow when running stacks outside of China.

Solution

In China, set the yarn registry to https://registry.npmmirror.com.
Outside of China, do not set the yarn registry; let it keep its default value.
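A sketch of the selection logic, assuming a hypothetical REGION hint is available (how the region is detected is left open):

```shell
# Hypothetical sketch: choose a yarn registry from a REGION hint.
if [ "${REGION:-}" = "cn" ]; then
  YARN_REGISTRY="https://registry.npmmirror.com"
else
  YARN_REGISTRY=""   # empty: keep yarn's built-in default registry
fi
```

The stack would then pass `--registry "$YARN_REGISTRY"` only when the variable is non-empty.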

[Bug] image pull error for tag `main`

In the CI workflow, the docker image tag only uses SHA values, not main.
This causes a problem.
When argocd first installs the application, the image tag it pulls is main, but this tag does not actually exist; argocd has to wait for the next sync to trigger a change to the deployment image tag.
This adds extra time waiting for the image to be pulled.
cc @92hackers
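One way to make the `main` tag exist from the very first build is to emit a branch tag alongside the SHA tag. A sketch using docker/metadata-action (a real action; the version pin and image name are illustrative):

```yaml
# Tag with both the branch name (so `main` exists) and the commit SHA.
- uses: docker/metadata-action@v4
  id: meta
  with:
    images: ghcr.io/${{ github.repository }}
    tags: |
      type=ref,event=branch
      type=sha
```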

GitHub ID containing uppercase letters leads to InvalidImageName

Run parameters:
GithubID: Yni9ht
GITHUB_ORG: Yni9ht
ORGANIZATION: Yni9ht
APP_NAME: book-store

Problem description:
After deploying the gin-vue application via dagger do up -p ./plans, the images built from the application's backend and frontend repos are published as ghcr.io/yni9ht/book-store:main, but the image address in the application's Deployment is ghcr.io/Yni9ht/book-store:main, so the image cannot be pulled.
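Container registries require lowercase image names, so the owner segment should be normalized before it is written into the Deployment. A sketch (variable names are illustrative):

```shell
GITHUB_ORG="Yni9ht"
# ghcr.io image names must be lowercase; normalize the owner segment.
IMAGE_OWNER=$(printf '%s' "$GITHUB_ORG" | tr '[:upper:]' '[:lower:]')
IMAGE="ghcr.io/$IMAGE_OWNER/book-store:main"
# IMAGE is now ghcr.io/yni9ht/book-store:main
```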

[Proposal] Splitting the Infra installation from the Stack

Problem

Currently, Infra components are installed together in the Stack, which can cause problems with #204. We plan to split the installation of Infra components from the current Stack; the Infra installation process is responsible for installing Infra components and the Stack only generates the application framework source code and Deploy repository.

Option 1

Split the Stack into an Infra Stack and an Application Stack, with the Infra Stack used exclusively for installing Infra components and making the necessary configurations, so that the required Infra components can be installed dynamically based on the incoming parameters. When executing hln up, the Infra Stack is executed first, followed by the Application Stack (e.g. gin-next). We will first support these Infra components:

  1. Split the Stack (including Argocd) into an Infra Stack and an Application Stack @vgjm
    1. The Infra Stack can run independently, be installed by providing some env inputs, and record a ConfigMap into the cluster for the Application Stack to use.
  2. Update the Infra Stack (Prometheus) and enhance the Application Stack so they can be shared between different applications in the same cluster @92hackers
  3. Update the Infra Stack (Loki) and enhance the Application Stack so they can be shared between different applications in the same cluster @92hackers
  4. Update the Infra Stack (Dapr) and enhance the Application Stack so they can be shared between different applications in the same cluster @92hackers
  5. Update the Infra Stack (Nocalhost) and enhance the Application Stack so they can be shared between different applications in the same cluster @yuyicai
  6. Update the Infra Stack (Argocd) and enhance the Application Stack so they can be shared between different applications in the same cluster @yuyicai
  7. Update the Infra Stack (SealedSecrets) and enhance the Application Stack so they can be shared between different applications in the same cluster @yuyicai
  8. Update the Infra Stack (Jaeger) and enhance the Application Stack so they can be shared between different applications in the same cluster @lyzhang1999
  9. Update the Infra Stack (Istio) and enhance the Application Stack so they can be shared between different applications in the same cluster @lyzhang1999
  10. hln init && up && down && hln status...... @hongchaodeng

Results

When hln up is executed, if an Infra component is not installed, it is installed automatically; if it is already installed, it is skipped and the Application Stack installation is executed.
When a new application is created in the same cluster, the Infra components remain intact and only application-related content is created, solving the problem of #204.

hln

Enhance the hln init command to install infra first; then hln up will run the application stack.

Recommend Projects

  • React photo React

    A declarative, efficient, and flexible JavaScript library for building user interfaces.

  • Vue.js photo Vue.js

    🖖 Vue.js is a progressive, incrementally-adoptable JavaScript framework for building UI on the web.

  • Typescript photo Typescript

    TypeScript is a superset of JavaScript that compiles to clean JavaScript output.

  • TensorFlow photo TensorFlow

    An Open Source Machine Learning Framework for Everyone

  • Django photo Django

    The Web framework for perfectionists with deadlines.

  • D3 photo D3

    Bring data to life with SVG, Canvas and HTML. 📊📈🎉

Recommend Topics

  • javascript

    JavaScript (JS) is a lightweight interpreted programming language with first-class functions.

  • web

    Some thing interesting about web. New door for the world.

  • server

    A server is a program made to process requests and deliver data to clients.

  • Machine learning

    Machine learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

  • Game

    Some thing interesting about game, make everyone happy.

Recommend Org

  • Facebook photo Facebook

    We are working to build community through open source technology. NB: members must have two-factor auth.

  • Microsoft photo Microsoft

    Open source projects and samples from Microsoft.

  • Google photo Google

    Google ❤️ Open Source for everyone.

  • D3 photo D3

    Data-Driven Documents codes.