
pipeline's Issues

Implement simple PipelineRun

Expected Behavior

A user should be able to create a PipelineRun which will look at the Tasks in the referenced Pipeline and for each of those, create a TaskRun.

If #59 is done before this, creating the TaskRun would also trigger running of the Task; if not, nothing would actually be executed, but the TaskRuns would exist.

Out of scope of this issue (coming in later issues):

  • Providing inputs to the Tasks
  • Doing something with produced outputs

Requirements:

  • Test coverage (unit tests + if possible integration tests, may be blocked by #16 )
  • Docs that explain how to create a PipelineRun, with examples
  • Docs that explain how to debug this (i.e. where to look if it goes wrong)

Actual Behavior

If you create a PipelineRun, nothing will happen.

Steps to Reproduce the Problem

  1. Deploy the Pipeline CRD + related CRDs to your cluster (note: there should be docs on how to do this once the kubebuilder migration is complete)
  2. Create a Task like this:
apiVersion: pipeline.knative.dev/v1beta1
kind: Task
metadata:
  name: helloworld-task
  namespace: default
spec:
    buildSpec:
        steps:
            - name: helloworld
              image: busybox
              args: ['echo', 'hello', 'build']
  3. Create a Pipeline like this:
apiVersion: pipeline.knative.dev/v1beta1
kind: Pipeline
metadata:
  name: hello-pipeline
  namespace: default
spec:
    tasks:
        - name: helloworld-task-1
          taskRef:
              name: helloworld-task
        - name: helloworld-task-2
          taskRef:
              name: helloworld-task
        - name: helloworld-task-3
          taskRef:
              name: helloworld-task
  4. Create a PipelineRun like this:
apiVersion: pipeline.knative.dev/v1beta1
kind: PipelineRun
metadata:
  name: hello-pipeline-run-1
  namespace: default
spec:
    pipelineRef:
        name: hello-pipeline
    triggerRef:
        type: manual
  5. Creating the PipelineRun should have created TaskRuns, so we should then be able to see:

a. The status of the PipelineRun should be updated (see example), check with kubectl get pipelineRuns
b. There should be 3 TaskRuns created, each with a unique name and a reference to the PipelineRun (check with kubectl get taskRuns), for example:

apiVersion: pipeline.knative.dev/v1beta1
kind: TaskRun
metadata:
  name: helloworld-task-run-1-1
spec:
    taskRef:
        name: helloworld-task
    trigger:
        triggerRef:
            type: manual

(I left the PipelineParams reference out of the PipelineRun, but if it turns out that we need a ServiceAccount to make this work, we'll need to specify it in the PipelineParams and reference it in the PipelineRun as well)
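The updated PipelineRun status in step 5a might look something like this sketch (the status fields here, such as `conditions` and `taskRuns`, are illustrative assumptions, not a settled schema):

```yaml
apiVersion: pipeline.knative.dev/v1beta1
kind: PipelineRun
metadata:
  name: hello-pipeline-run-1
  namespace: default
spec:
    pipelineRef:
        name: hello-pipeline
    triggerRef:
        type: manual
status:
    # illustrative only: one entry per TaskRun created for the Pipeline's Tasks
    taskRuns:
        - helloworld-task-run-1-1
        - helloworld-task-run-1-2
        - helloworld-task-run-1-3
    conditions:
        - type: Started
          status: "True"
```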

Create integration test demonstrating milestone 1

Expected Behavior

To make sure we work in the same direction as we work towards our first milestone, we should have an integration test (which would be skipped) that attempts to perform the actions we want to demo, which we can review as a team to make sure we are happy with it.

Actual Behavior

The only description of the expected behaviour is the description of milestone 1, but we are all on slightly different pages about what we need to accomplish.

Additional Info

https://github.com/knative/build-pipeline/milestone/1

Add validation to controller for TaskRun CRD

Expected Behavior

When someone creates a TaskRun in a k8s cluster (e.g. with kubectl apply), the controller should make sure every field is valid.

This should include verifying as many fields as possible, e.g. verifying connectivity with any endpoints, if there are any references, verifying that those actually exist in the system, etc.

TaskRun is defined here: https://github.com/knative/build-pipeline/blob/master/pkg/apis/pipeline/v1beta1/taskrun_types.go

Any additional fields included should cause a failure (e.g. if a typo is made when defining a known field).
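As a concrete illustration, validation should reject a TaskRun like the following sketch, where `taskRfe` is a typo for `taskRef` (field names and values here are made up for the example):

```yaml
apiVersion: pipeline.knative.dev/v1beta1
kind: TaskRun
metadata:
  name: bad-task-run
spec:
    taskRfe:             # typo: unknown field, should cause a validation failure
        name: some-task
    trigger:
        triggerRef:
            type: not-a-real-trigger-type   # unknown value, should also fail
```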

Actual Behavior

Some basic validation is applied, e.g. that required fields are present and that fields are in the correct format (e.g. string vs. int), but that's it.

Steps to Reproduce the Problem

  1. Deploy the Pipeline CRDs to your k8s cluster
  2. Create an instance that has invalid fields, e.g. a malformed URL or a typo in one of the fields.
  3. Apply the instance with kubectl apply -f and notice that the controller does not complain

Additional Info

Blocked by #19

Incorporate k8s documentation style guidelines

Expected Behavior

The k8s project has a solid set of documentation guidelines. It would be great if the docs in this project followed them. This means two things:

  1. We should review our existing docs (specifically READMEs) and fix any issues
  2. We should add a link to these standards to our CONTRIBUTING.md

If possible we should consider adding some markdown linting as part of our automated checks.

Actual Behavior

We did not follow any kind of standard when we wrote these :S

Design `image` type

The work for this task is to design this feature and present one or more proposals (before implementing).

Expected Behavior

One of the features we want the Pipeline to have is to have strong typing around concepts that will be used commonly in k8s centric CI/CD pipelines.

One of the concepts we want to have strong typing around is a "container image", which is something that could be produced by a Task as an output, and published to an artifact store, and may be used by subsequent Tasks for testing or deploying.

Some attributes we'd probably like include:

  • name
  • tag
  • digest (could be unknown until the image is built)
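Those attributes might be declared along these lines (a sketch only; the shape of a `type: image` resource has not been designed yet, and the param names are assumptions):

```yaml
apiVersion: pipeline.knative.dev/v1beta1
kind: Resource
metadata:
  name: built-image
spec:
  resources:
    - name: my-app-image
      type: image
      params:
        - name: url
          value: gcr.io/my-project/my-app   # name of the image
        - name: tag
          value: latest
        # digest would be filled in by the controller once the image is built
```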

Actual Behavior

This concept only exists theoretically, but in this example pipeline we have a builtImage attribute on an artifact store binding, declaring what we'd like the built image to be called:

[image: screenshot of the builtImage attribute in the example pipeline]

Additional Info

Propose process for API updates

Expected Behavior

At a certain point we will decide we want to solidify the API and we will want to prevent backwards incompatible changes from being made. We should define a process for introducing changes into the API to minimize disruption to anyone that has integrated with us. For example:

  • Requiring all changes be additive
  • Versioning and standards around what the version increments represent (e.g. semantic versioning)
  • A deprecation process / period

Actual Behavior

As of today (Sept 10, 2018) the API is still under discussion and is far from the point where we'd expect anyone to integrate with us, so it's okay to make backwards incompatible changes.

Add versioning + CHANGELOG

Expected Behavior

Once we have the Pipeline actually working to some extent, and we want to start demoing to people (aka we've hit our first milestone), we should introduce:

Actual Behavior

We have no versioning and no changelog, which is fine b/c this doesn't actually do anything yet.

Additional Info

Blocked on hitting our first milestone.

Create sample exercises for User study

Expected Behavior

Create tasks for the user study which will be used in sample pipelines. Using these tasks, we will ask users to define a new pipeline.

Actual Behavior

These do not exist yet.

Steps to Reproduce the Problem

none

Additional Info

Create tasks for user study.
As of now, things on top of my list are

  • Build with kaniko and push task
  • Make task
  • Integration Test Task
  • Deploy Task

Create pipeline CRD documentation for user study

Expected Behavior

For user study, go through documentation and iterate the docs with someone on the team.

Actual Behavior

The docs are not currently reviewed to check that they are in line with the samples.

Steps to Reproduce the Problem

none

Additional Info

Keep the documentation minimal; not all the details need to be fleshed out, since too much detail might lead to confusion.
Cover only what is defined in the samples, and refer to the samples from the documentation.
This should make sense alongside #3.
This document can then be used as user-facing docs.

Design: Partial Pipeline execution

The work for this task is to design this feature and present one or more proposals (before implementing).

Expected Behavior

If a pipeline has many tasks and takes a long time to run (e.g. tens of minutes, or even hours), and one Task fails, it might be desirable to be able to pick up execution where the Task failed, with different PipelineParams (e.g. from a different git commit), so you can resume the Pipeline without having to rerun the whole thing.

Some ideas for how to implement this:

  1. Fields in a PipelineRun which override which Tasks to run from / refer to a previous PipelineRun from which results should be taken
  2. A tool which makes it easy to create a new Pipeline from an existing one which only runs a subset of the Tasks

It is also worth considering what this could be like via a UI: if one is viewing a Pipeline in a UI, and wants to re-run only a portion of the Pipeline, they probably want the user experience to be as if they were still running the same Pipeline, even if underneath a new Pipeline is created.
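Idea 1 above could look roughly like this (purely hypothetical: `resumeFrom`, `taskName`, and `previousRun` are made-up field names for illustration, not a proposal of the final API):

```yaml
apiVersion: pipeline.knative.dev/v1beta1
kind: PipelineRun
metadata:
  name: hello-pipeline-run-2
spec:
    pipelineRef:
        name: hello-pipeline
    # hypothetical: rerun starting at this Task, taking results of earlier
    # Tasks from a previous PipelineRun
    resumeFrom:
        taskName: helloworld-task-2
        previousRun: hello-pipeline-run-1
```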

Actual Behavior

At the moment, if any Task in a Pipeline fails, your options to rerun the rest of the Pipeline would be:

  1. Run the entire Pipeline again
  2. Create a new Pipeline from the previous one which contains only the Tasks you wish to run

Additional Info

This originally came up in discussion about #39, in the context of whether or not we'd want to always use the same git commit from a source for all Tasks in a Pipeline, or if we wanted sometimes for a Task to always use HEAD. This would allow a user to change a repo, by updating HEAD, between Task executions.

The feature of partial pipeline execution could be an alternative to this.

Implement the github Resource type

Expected Behavior

When a user creates a TaskRun, the TaskRun will contain the actual Resources that the Task is using, and if these Resources are inputs, they should be made available as mounted volumes in the Task's steps.

For example run-kritis-test.yaml contains a kritis github source (note: maybe this section should be called resource now?):

        sources:
            - name: 'kritis'
              type: 'github'
              url:  'github.com/grafeas/kritis'
              branch: 'featureX'
              commit: 'HEAD'

This would mean that the TaskRun controller would need to pull from that github repo and make the contents available in a mounted volume for the steps subsequently executed by the TaskRun's Task.

It would need to use a serviceAccount to do this which would have the correct GitHub credentials.
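For example, the credentials might be wired up through a serviceAccount field on the source, as in this sketch (the field name and value are assumptions; where the serviceAccount should live is still an open question):

```yaml
        sources:
            - name: 'kritis'
              type: 'github'
              url:  'github.com/grafeas/kritis'
              branch: 'featureX'
              commit: 'HEAD'
              # hypothetical: the serviceAccount whose secrets hold GitHub credentials
              serviceAccount: 'github-service-account'
```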

Requirements

  • Implement the logic to fetch a github Resource type, so that it can be made available for the steps in a TaskRun's Task (ideally sharing knative/build's gitToContainer)
  • Docs on how to use a github Resource
  • Test coverage, at least unit test, possibly also integration test if reasonable (may be blocked by #16) (this test coverage could even be added to the knative/build repo if that is where most of the logic lives)

Bonus:

  • Update the TaskRun controller to invoke this logic

Actual Behavior

We have none of this implemented, TaskRun's controller currently just (barely) validates the yaml.

Additional Info

  • If it is possible to share any of this logic with knative/build, that is great! knative/build-pipeline is likely going to depend on knative/build anyway
  • Note that #57 (coming asap) will make some sweeping changes to the repo.
  • TaskRun currently doesn't seem to have any notion of the serviceAccount to use (e.g. in this example), the serviceAccount is currently specified only in the PipelineParams

(Open to redesign: instead of repeating all of the resources and credentials in the TaskRun, the TaskRun could have references to the Resource and PipelineParams CRDs. One downside, though, is that if you look at a TaskRun, you won't know for sure what values were used, since they could have been subsequently updated.)

Add a step to verify vendor code before running unit tests.

Expected Behavior

On a branch, we would want developers to run dep ensure so that unused dependencies are removed.
We can enforce this by running a script in the checks which runs dep ensure and checks whether there are any changes to the vendor dir.
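A minimal sketch of such a check as a CI step (the step name and image are assumptions; the important part is the two commands — `git status --porcelain` is non-empty if dep ensure modified, added, or removed anything under vendor/):

```yaml
# illustrative CI step, not real config for this repo's CI
- name: check-vendor
  image: golang:1.11
  command: ['sh', '-c']
  args:
    - |
      dep ensure
      # fail if vendor/ no longer matches what was committed
      test -z "$(git status --porcelain vendor/)"
```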

Actual Behavior

Right now, developers can easily add unused deps and overlook the libs pulled in, since they tend to just do this:

git add vendor/

Steps to Reproduce the Problem

none

Additional Info

Use knative/serving style controllers

Expected Behavior

Our controllers should be written in the current preferred style, which is the one used by https://github.com/knative/serving.

Bonus

Write some docs on how to write controllers in this style (maybe in https://github.com/knative/pkg).

If we want people to write controllers in this style, we should document how to do that. Currently the only option is to reverse engineer the style from the examples in knative/serving, which without guidance can easily lead to cargo-culting.

Actual Behavior

In #66 we removed kubebuilder and re-did our controllers so that they use the same style as https://github.com/knative/build. However in the review, @imjasonh pointed out that the style in https://github.com/knative/serving is actually the latest and greatest, and @mattmoor pointed us at https://github.com/mattmoor/warm-image as an example.

Additional Info

Spawned from #57

Create sample pipelines for user testing

Expected Behavior

Sample pipelines should be easy to understand and reviewable by people for the upcoming user study.

Actual Behavior

Samples do not exist yet.

Steps to Reproduce the Problem

none

Additional Info

In this task we need to define sample pipelines which include:

  • a simple example of linear pipeline
  • an example pipeline with Fan in
  • an example pipeline with Fan out
  • an example pipeline with implicit dependency

Create a pipeline-params which is easy to use.
These samples are solely for the purpose of the user study and should not be more than a 5 minute read.
Try to create samples that only depict one feature per pipeline. Try using the same source, artifact store, and result store across all examples.
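For instance, the fan-in sample might be sketched like this, where two independent test Tasks both feed a deploy Task (the `prevTasks` field is a made-up illustration of how ordering could be expressed, not a finalized API):

```yaml
apiVersion: pipeline.knative.dev/v1beta1
kind: Pipeline
metadata:
  name: fan-in-sample
spec:
    tasks:
        - name: unit-test
          taskRef:
              name: make
        - name: integration-test
          taskRef:
              name: integration-test-task
        - name: deploy
          taskRef:
              name: deploy-task
          # hypothetical field: deploy fans in, running only after both test Tasks
          prevTasks: ['unit-test', 'integration-test']
```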

Requirements document for Result Store

Expected Behavior

Test Results are first class citizens in the Pipeline CRDs.
We want to flesh out the requirements for the Result Store where these results will be stored.

Actual Behavior

Steps to Reproduce the Problem

Additional Info

Open question: what kinds of Results should be stored?

Design: Notifications

The work for this task is to design this feature and present one or more proposals (before implementing).

Expected Behavior

In many modern pipelines, we want to be able to notify people and external systems about the outcome of Tasks and Pipelines.

Requirements:

  • Must support: slack updates, email updates, github PR commenting
  • Should be possible to perform different actions on failure of a task or a pipeline, or success
  • Must be possible to provide required credentials/auth info

Possibly a notification could be a top level directive in both a Pipeline and a Task, e.g.:

kind: Pipeline
...
spec:
    tasks:
        - name: unit-test-kritis
          taskRef:
              name: make
          inputSourceBindings:
              - inputName: workspace
                sourceKey: kritis
          params:
              - name: makeTarget
                value: test
          notifications:
               failure:
                   ....
               success:
                  .....

Actual Behavior

In the current design, a user would have to implement this as a Task that knows how to report these notifications.

Additional Info

There is some potential overlap with #27 which is about executing Tasks conditionally.

Add step-by-step guides to DEVELOPMENT.md

Expected Behavior

The DEVELOPMENT.md should cover these common actions:

  1. How to start from scratch, setup forks, etc. (e.g. see knative/serving DEVELOPMENT.md)
  2. How to update controllers in your cluster once deployed
  3. How to test changes to a controller
  4. How to test changes to a type

Any other common operations you think might be useful!

Actual Behavior

The DEVELOPMENT.md is pretty sparse.

Additional Info

The less you assume the better :D

Implement working Task that deploys to k8s using helm

Expected Behavior

Once we have implemented a simple helloworld Task (#59) a user should be able to define a Task that can deploy a k8s resource using helm.

This would mean that:

  • Templating is in place that allows for the Task to use the image and registry values specified in the input image resource in the steps of the Task as arguments to the helm commands
  • Any required service account needed for accessing to the cluster is available to the Task
  • Any logic needed to support the cluster type (see #60) is implemented

To try this out manually, you'll need a k8s cluster that has been set up so that you can deploy to it.

Should include:

  • Expanding the docs about how to write a Task that uses a k8s cluster resource, and how to set up the service account correctly
  • Add docs on how to debug this when it doesn't work
  • Test coverage, unit and also integration if possible (see #16)

Actual Behavior

We have no logic currently for even running a task, but #59 will add basic logic. #62 will expand on this but shouldn't block this task.

Steps to Reproduce the Problem

  1. Deploy the Pipeline CRD + related CRDs to your cluster (note: there should be docs on how to do this once the kubebuilder migration is complete)
  2. Create a Resource like this (you can use a public image such as the kritis one in this example or you can use any other image):
apiVersion: pipeline.knative.dev/v1beta1
kind: Resource
metadata:
  name: helm-resources
  namespace: default
spec:
  resources:
    - name: kritis-app-github-resource
      type: github
      params:
      - name: url
        value: https://github.com/grafeas/kritis
      - name: revision
        value: master
    - name: kritis-server
      type: image
      params:
        - name: url
          value: gcr.io/kritis-project
  3. Create a Task like this:
apiVersion: pipeline.knative.dev/v1beta1
kind: Task
metadata:
  name: helm
  namespace: default
spec:
    inputs:
        resources:
            - name: workspace
              type: github
            - name: imageToDeploy
              type: image
        params:
            - name: pathToHelmCharts
              value: string
            - name: helmArgs
              value: string
        cluster:
            - name: clusterToDeployTo
    buildSpec:
        steps:
        - name: deploy
          image: kubernetes-helm
          args: ['deploy', '--path=${pathToHelmChart}', '--set image=${imageToDeploy.Url}.${imageToDeploy.Name}', '${helmArgs}', '--cluster=${clusterToDeployTo.Endpoint}']

(Note I am pretty much making up the helm command here, please replace it with something that actually works and makes sense!)

  4. Create a TaskRun like this:
apiVersion: pipeline.knative.dev/v1beta1
kind: TaskRun
metadata:
  name: helm-run-0
spec:
    taskRef:
        name: helm
    resourceRef:
       name: helm-resources
    trigger:
        triggerRef:
            type: manual
  5. Look at the logs from the running container (note: the best docs we have on this at the moment are some of the steps in the Build quick start) to see that the helm command was successful (or not)
  6. Observe that the status of the TaskRun has been updated appropriately, e.g. see this example which contains:
  • Outcomes of individual steps
  • Started, Completed, and Successful conditions
  7. Look at the cluster itself and verify that the image has been deployed to it

Additional Info

Design: sources

The work for this task is to design this feature and present one or more proposals (before implementing).

Expected Behavior

It should be possible for Pipelines to execute against multiple kinds of "sources".

Exactly what counts as a "source" is a bit murky, but generally we want this to be the same as a knative build CRD source, since ultimately that is how the Source will be used (once it is passed down to a Task as an input).

As far as the knative build CRD is concerned, a source could be:

  • git
  • gcs
  • a custom image

And we would want to support other sources in the future as well, which should be arbitrarily extensible.

Also in #2 @pivotal-nader-ziada pointed out that we might want to be able to use outputs of Tasks as inputs to subsequent Tasks (e.g. mount a built image directly into a subsequent Task as a source).

Requirements

  1. It should be possible to use git, gcs or a custom image as a source
  2. It should be possible to add other kinds of sources (in a first iteration, this could require changing the pipeline crd controller but in the long run it should be possible to do without forking the implementation)
  3. For Tasks and Pipelines that need it, it should be possible to access additional information about sources, e.g. for a git source you might want to use the commit ID, the branch name, the committer, the author, etc.
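By analogy with the git examples elsewhere in this document, a gcs source might look like this sketch (the field names are assumptions, since this type has not been designed yet):

```yaml
        sources:
            - name: 'test-data'
              type: 'gcs'
              # assumed field, by analogy with the github source's url
              location: 'gs://my-bucket/test-data'
```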

Actual Behavior

In our examples we show what a possible github source could look like:

[image: screenshot of a possible github source from the examples]

Additional Info

Though event triggering is outside the scope of the Pipeline CRD, we need to make sure that when an event DOES trigger a pipeline to run (by creating a PipelineRun and probably also a PipelineParams), it is possible to provide all relevant info, so it might be worth exploring what info is contained in events such as github Pull Request events.

See #11 re. ensuring the same sources/inputs are used throughout a Pipeline.

Add validation to controller for Task CRD

Expected Behavior

When someone creates a Task in a k8s cluster (e.g. with kubectl apply), the controller should make sure every field is valid.

This should include verifying as many fields as possible, e.g. verifying connectivity with any endpoints, if there are any references, verifying that those actually exist in the system, etc.

Task is defined here: https://github.com/knative/build-pipeline/blob/master/pkg/apis/pipeline/v1beta1/task_types.go

Any additional fields included should cause a failure (e.g. if a typo is made when defining a known field).

Actual Behavior

Some basic validation is applied, e.g. that required fields are present and that fields are in the correct format (e.g. string vs. int), but that's it.

Steps to Reproduce the Problem

  1. Deploy the Pipeline CRDs to your k8s cluster
  2. Create an instance that has invalid fields, e.g. a malformed URL or a typo in one of the fields.
  3. Apply the instance with kubectl apply -f and notice that the controller does not complain

Additional Info

Blocked by #19

Setup PR testing for build-pipeline unit tests

Expected Behavior

Every PR should have unit tests run against it automatically.

(The long term dream is that we use the Pipeline CRD reference implementation itself to test this repo, but since it is just an API right now, that's a ways off!)

Actual Behavior

We have no automation! Is master broken? WHO KNOWS, IT'S A CRAZY WORLD 🤠

Steps to Reproduce the Problem

  1. Open a PR with broken unit tests
  2. Notice that nothing catches this

To actually run the tests:

make test

(see also the DEVELOPMENT.md)

Additional Info

We would be fine with using something like Travis initially, but it would have to be enabled for the entire knative org, and since the rest of the org is using Prow, we can probably jump on that train.

Configuration for Prow for the other knative projects lives in https://github.com/knative/test-infra/tree/master/ci/prow
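A presubmit entry for this repo might look roughly like this sketch (the job name, image, and container command are assumptions; the real config would follow the conventions in knative/test-infra):

```yaml
# illustrative Prow presubmit sketch, not real config
presubmits:
  knative/build-pipeline:
    - name: pull-knative-build-pipeline-unit-tests
      agent: kubernetes
      always_run: true
      spec:
        containers:
          - image: golang:1.11
            command: ['make']
            args: ['test']
```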

Add `cluster` and `clusterBindings` to types

Expected Behavior

The `cluster` and `clusterBindings` types should be added.

(All fields should be optional)

This wouldn't be expected to actually do anything yet (similar to the rest of the current implementation).

Actual Behavior

We have references to clusters in our example tasks:

https://github.com/knative/build-pipeline/blob/3726a431cee21f9a88bc320339c978cca59a58e5/examples/deploy_tasks.yaml#L17-L18

Clusters defined in our example pipeline params:

https://github.com/knative/build-pipeline/blob/3726a431cee21f9a88bc320339c978cca59a58e5/examples/pipelineparams.yaml#L33-L39

Note that the clusterBinding example is incomplete:
https://github.com/knative/build-pipeline/blob/3726a431cee21f9a88bc320339c978cca59a58e5/examples/pipelines/kritis.yaml#L44-L45

This should be binding a cluster from PipelineParams to a cluster input in the Task, e.g.

          clusterBindings:
              -  inputName: clusterName # refers to the input in the Task
                 clusterName: test # refers to the cluster in the PipelineParams
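The cluster definition side in PipelineParams might look like this sketch (field names and values assumed from the incomplete examples linked above; the `type` field is a guess tied to the cluster-type work in #60):

```yaml
    clusters:
        - name: test                # referenced by clusterBindings.clusterName
          type: gke                 # assumption: cluster type, see #60
          endpoint: 'https://10.10.10.10'   # placeholder endpoint
```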

Additional Info

@barney-s will be working on designs around deployment and k8s cluster interaction for this project

Create step by step walkthrough tutorial

Expected Behavior

When someone wants to ramp up on this project, they should be able to follow along with a tutorial that explains to them how to create their own pipelines and interact with them.

We can initially cover a simple pipeline, e.g. the example kritis pipeline.

Requirements

  • It should be possible to:
    1. Start from no pipeline or tasks at all
    2. Gradually build up a complete set of Pipelines, Tasks, PipelineParams
    3. Manually invoke a Task by creating a TaskRun
    4. Manually invoke a Pipeline by creating a PipelineRun
  • There should be tests that cover this.
  • The tutorial should cover actually deploying these to a k8s cluster using kubectl (e.g. like in the DEVELOPMENT.md).
  • The solution should be available (could be link to https://github.com/knative/build-pipeline/tree/master/examples#example-pipelines if we use those examples as-is)

Initially this would not actually do anything, but as we start adding actual functionality we can build this up over time.

Additional Info

If you are feeling really ambitious, you could also cover the more complex guestbook pipeline which has both fan-in and fan-out.

Design: Ensure same sources/inputs used throughout pipeline

The work for this task is to design this feature and present one or more proposals (before implementing).

Expected Behavior

If you create a PipelineRun which executes against a particular branch/commit/etc., you want to be certain that the same branch/commit etc is used all the way through.

For example if your pipeline runs against master, but between two Tasks in your Pipeline, master is updated, you want subsequent Tasks to continue using the same commit they started using.

Additionally, as pointed out by @pivotal-nader-ziada in #2, we need to be sure that when one Task creates an artifact such as an image, that subsequent steps that use that image must be guaranteed to run against the correct image (e.g. if the image is identified using only tags, the tag could be pushed to by something between the steps - one possible way to circumvent this is to use digests).
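For the image case, the difference between a mutable tag reference and an immutable digest reference can be sketched as (illustrative values):

```yaml
# tag-based reference: mutable, another push to :v1 changes what later Tasks get
image: gcr.io/my-project/my-app:v1

# digest-based reference: immutable, later Tasks are guaranteed the same image
image: gcr.io/my-project/my-app@sha256:4cb4c6f191f145dc3478e87d02de3bc12dd345a1a7f4c9a2a1b86e5b66ea0b5d
```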

Actual Behavior

Nothing ensures this at the moment. It should be possible to construct a Pipeline that meets these criteria, but if the author of the Pipeline makes a mistake, nothing verifies that the same sources and inputs are used throughout.

Additional Info

Concourse solves this problem; the person investigating this design might want to check out how they solve it (e.g. ping @jchesterpivotal :D)

Implement working kaniko build Task

Expected Behavior

Once we have implemented a simple helloworld Task (#59) a user should be able to define a Task that can build and push an image to a registry using Kaniko. This would mean that:

  • Templating is in place that allows for the Task to use the image and registry values specified in the output image resource in the steps of the Task
  • The kaniko image can be used
  • Any required service account needed for pushing to the registry is available to the Task
  • The source to build from is available via the GitHub resource input (#58)

Should include:

  • Expanding the docs about how to write a Task that uses an Image resource, and how to set up the service account correctly
  • Test coverage, unit and also integration if possible (see #16)

Actual Behavior

Once #59 is complete, it will be possible to run a simple helloworld Task, but it won't be possible to use inputs or outputs.

Steps to Reproduce the Problem

  1. Deploy the Pipeline CRD + related CRDs to your cluster (note: there should be docs on how to do this once the kubebuilder migration is complete)
  2. Create a Resource like this (where gcr.io/some-registry is a registry that you/your service account can push to):
apiVersion: pipeline.knative.dev/v1beta1
kind: Resource
metadata:
  name: kaniko-resources
  namespace: default
spec:
  resources:
    - name: guestbook
      type: git
      params:
      - name: url
        value: github.com/kubernetes/examples
      - name: revision
        value: HEAD      
      - name: serviceAccount
        value: githubServiceAccount
    - name: guestbook-image
      type: image
      params:
        - name: url
          value: gcr.io/some-registry
  3. Create a Task like this:
apiVersion: pipeline.knative.dev/v1beta1
kind: Task
metadata:
  name: build-push
  namespace: default
spec:
    inputs:
        resources:
        - name: workspace
          type: github
        params:
            - name: pathToDockerFile
              value: string
    outputs:
        resources:
        - name: imageToBuild
          type: image
    buildSpec:
        steps:
            - name: build-and-push
              image: gcr.io/kaniko-project/executor
              args:
                  - --dockerfile=${DOCKERFILE}
                  - --destination=${IMAGE}

Alternatively you can use the kaniko build template, but that would mean adding support for build templates:

    buildSpec:
        template:
            name: kaniko
            arguments:
                - name: DOCKERFILE
                  value: ${pathToDockerFile}
                - name: REGISTRY
                  value: ${registry}
  4. Create a TaskRun like this:
apiVersion: pipeline.knative.dev/v1beta1
kind: TaskRun
metadata:
  name: kaniko-run-0
spec:
    taskRef:
        name: build-push
    resourceRef:
       name: kaniko-resources
    trigger:
        triggerRef:
            type: manual
  5. Look at the logs from the running container (Note the best docs we have on this at the moment are some of the steps in the Build quick start) to see that the build and push was completed
  6. Observe that the status of the TaskRun has been updated appropriately, e.g. see this example which contains:
  • Outcomes of individual steps
  • Started, Completed, and Successful conditions
  7. Look at the image itself in the registry and verify that it was built and pushed
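
The ${DOCKERFILE} and ${IMAGE} placeholders in the step args above would need to be replaced with the bound param/resource values before the build runs. A minimal Python sketch of that substitution (the helper name apply_params and the example values are invented here; this is not the actual controller code):

```python
import re

def apply_params(args, params):
    """Replace ${NAME} placeholders in step args with bound values."""
    def sub(match):
        name = match.group(1)
        if name not in params:
            raise KeyError(f"no value bound for ${{{name}}}")
        return params[name]
    return [re.sub(r"\$\{(\w+)\}", sub, arg) for arg in args]

# Illustrative values, loosely matching the example resources above
args = ["--dockerfile=${DOCKERFILE}", "--destination=${IMAGE}"]
params = {"DOCKERFILE": "./Dockerfile", "IMAGE": "gcr.io/some-registry/guestbook"}
print(apply_params(args, params))
```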

Additional Info

Blocked by #59 (helloworld task), #58 (GitHub resource)

Reconcile `examples` dir with `config/samples`

Expected Behavior

We should have one clear location for all of our example yamls and it should be tested (and eventually included in our CI - #15).

Actual Behavior

We currently have two sets of examples:

  1. config/samples - these were generated by kubebuilder and we modified them
  2. examples - these were created specifically to demonstrate the proposed pipeline API to the build working group

config/samples is used in our unit tests but examples is not.

Additional Info

There is some overlap with #3 and @tejal29 will need to be consulted re. the plan around those samples so that:

  1. All of our samples live in one place
  2. All of our samples are tested to ensure they continue working

Evaluate kubebuilder for controllers

Expected Behavior

Though our controllers (e.g. pipeline run controller) don't do much, we should be happy with how they are built and prepared to move forward with the skeletons we do have as we start actually implementing functionality.

Actual Behavior

We used kubebuilder to bootstrap our controllers which allowed us to get up and moving pretty quickly. However the controllers it generates depend on a lot of code in controller-tools and controller-runtime.

We should compare a controller created using these libs to one implemented from scratch (e.g. the knative/serving controllers) and make sure we want to keep using kubebuilder in the long run.

Additional Info

n/a

Design: Deployment build-pipeline-resource

The work for this task is to design this feature and present one or more proposals (before implementing).

Expected Behavior

A common expectation of a pipeline is to deploy the created application/function on some cluster.

The deploy resource should satisfy the following requirements:

  • Defined as an output of a Task
  • Can only be used as an output binding; if used as an input it is a no-op
  • Can deploy the application/function using a helm chart
  • Helm chart is provided to the resource
  • Follows the standard resource interface and can be created the same way

An example definition of a pipeline:

kind: Pipeline
...
spec:
    tasks:
        - name: unit-test-kritis
          taskRef:
              name: make
          inputSourceBindings:
              - name: workspace
                key: kritis
          outputSourceBindings:
              - name: deploy-output
                key: deploy-to-test-cluster               
          params:
              - name: makeTarget
                value: test

And an example of a resource:

 ...
apiVersion: pipeline.knative.dev/v1beta1
kind: Resource
metadata:
  name: deploy-to-test-cluster
  namespace: default
spec:  
  name: deploy-to-test-cluster
  type: deploy
  params:
  - name: cluster
    value: kritis-cluster
  ...  

Actual Behavior

We currently have no resource to deploy using a helm chart or otherwise.

Additional Info

Add validation to controller for Pipeline CRD

Expected Behavior

When someone creates a Pipeline in a k8s cluster (e.g. with kubectl apply), the controller should make sure every field is valid.

This should include verifying as many fields as possible, e.g. verifying connectivity with any endpoints, if there are any references, verifying that those actually exist in the system, etc.

Pipeline is defined here: https://github.com/knative/build-pipeline/blob/master/pkg/apis/pipeline/v1beta1/pipeline_types.go

Any additional fields included should cause a failure (e.g. if a typo is made when defining a known field).

Actual Behavior

Some basic validation is applied, e.g. that required fields are present and that fields are in the correct format (e.g. string vs. int), but that's it.

Steps to Reproduce the Problem

  1. Deploy the Pipeline CRDs to your k8s cluster
  2. Create an instance that has invalid fields, e.g. a malformed URL or a typo in one of the fields.
  3. Apply the instance with kubectl apply -f and notice that the controller does not complain
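
The "additional fields should cause a failure" requirement amounts to strict decoding: reject any field the schema doesn't know about. A hedged sketch of the idea (the field set here is illustrative, not the real Pipeline schema):

```python
# Illustrative known-field set; the real schema lives in pipeline_types.go
KNOWN_PIPELINE_FIELDS = {"tasks", "generation"}

def validate_fields(spec, known):
    """Reject any field not in the known set (catches typos in user YAML)."""
    unknown = set(spec) - known
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")

validate_fields({"tasks": []}, KNOWN_PIPELINE_FIELDS)      # passes
try:
    validate_fields({"taks": []}, KNOWN_PIPELINE_FIELDS)   # typo -> rejected
except ValueError as e:
    print(e)
```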

Additional Info

Blocked by #19

Pipeline uses `passedConstraints` to provide correct inputs and outputs

Expected Behavior

If a user creates a Pipeline which has passedConstraints specified, the controller should:

  1. Use them to construct a DAG of tasks (including fan in and fan out)
  2. Ensure that the version of the Resource used by the task is the same as the version used by the Task(s) in the passedConstraint

Should include:

  • Expanding the docs about how to use Resources that are changed between Tasks and how to use passedConstraints to make sure the right versions are used
  • Test coverage, unit and also integration if possible (see #16)
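
Conceptually, passedConstraints give each Task a list of Tasks it must run after, and the controller can derive an execution order with a topological sort. A sketch under that assumption (task names are illustrative):

```python
from graphlib import TopologicalSorter

def execution_order(tasks):
    """tasks: dict of task name -> list of passedConstraints (predecessors)."""
    return list(TopologicalSorter(tasks).static_order())

# Mirrors the build-then-run shape of the example in this issue
deps = {
    "build-hello-world": [],
    "run-hello-world": ["build-hello-world"],  # from passedConstraints
}
print(execution_order(deps))
```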

Actual Behavior

After #61 we will have a simple Pipeline in which either the DAG is flat or all Tasks execute simultaneously; i.e. the passedConstraints are ignored.

Steps to Reproduce the Problem

  1. Deploy the Pipeline CRD + related CRDs to your cluster (note: should be docs on how to do this post kubebuilder)
  2. Create a Resource that brings in the hello-world docker image repo, and declares an image we will be building (gcr.io/my-registry must be changed to a registry that the specified serviceAccounts will have permission to push to):
apiVersion: pipeline.knative.dev/v1beta1
kind: Resource
metadata:
  name: resources
  namespace: default
spec:
  resources:
    - name: hello-world-resource
      type: github
      params:
      - name: url
        value: https://github.com/docker-library/hello-world
      - name: revision
        value: master
    - name: hello-world-image
      type: image
      params:
        - name: url
          value: gcr.io/my-registry
  3. Create two Tasks: the first will build an image (using Kaniko, see #62), the second will run the image:
apiVersion: pipeline.knative.dev/v1beta1
kind: Task
metadata:
  name: build-push
  namespace: default
spec:
    inputs:
        resources:
        - name: workspace
          type: github
        params:
            - name: pathToDockerFile
              value: string
    outputs:
        resources:
        - name: imageToBuild
          type: image
    buildSpec:
      steps:
         - name: build-and-push
           image: gcr.io/kaniko-project/executor
           args:
              - --dockerfile=${DOCKERFILE}
              - --destination=${IMAGE}
apiVersion: pipeline.knative.dev/v1beta1
kind: Task
metadata:
  name: run
  namespace: default
spec:
    inputs:
        resources:
        - name: imageToRun
          type: image
    buildSpec:
      steps:
         - name: run-image
           image: "${imageToRun.url}/${imageToRun.name}"
  4. Create a Pipeline which wires the two Tasks together using passedConstraints:
apiVersion: pipeline.knative.dev/v1beta1
kind: Pipeline
metadata:
  name: hello-world-pipeline
  namespace: default
spec:
    tasks:
        - name: build-hello-world
          taskRef:
              name: build-push
          inputSourceBindings:
          - key: workspace
            resourceRef:
                 name: hello-world-resource
          outputSourceBindings:
          - key: imageToBuild
            resourceRef:
                name: hello-world-image
        - name: run-hello-world
          taskRef:
              name: run
          inputSourceBindings:
          - name: imageToRun
            resourceRef:
                name: hello-world-image
            passedConstraints: ['build-hello-world']
  5. Create a PipelineRun which invokes the Tasks (you'll have to create a PipelineParams as well with your serviceAccount, in this case called hello-world-params):
apiVersion: pipeline.knative.dev/v1beta1
kind: PipelineRun
metadata:
  name: pipeline-run-1
  namespace: default
spec:
    pipelineRef:
        name: hello-world-pipeline
    resourceRef:
        name: hello-world-resource
    pipelineParamsRef:
        name: hello-world-params
    triggerRef:
        type: manual
  6. Creating the PipelineRun should have created TaskRuns with all of the values filled in, so with kubectl get taskRuns you should be able to see conditions updated appropriately

  7. Looking at the Task logs (if this has been implemented, see #59) or at the logs for the running pod if not, you should be able to see that the hello world image was run and output the expected value from the hello world image

  8. You should be able to see the hello-world image built and pushed to your registry.

Additional Info

  • Blocked by #61 (simple PipelineRun) and #62 (build and push using Kaniko)

Design: Conditional execution

The work for this task is to design this feature and present one or more proposals (before implementing).

Expected Behavior

Pipelines need to be able to express Tasks that execute conditionally, e.g.:

  • If a Task fails, do X (e.g. notify slack)
  • If anything in the Pipeline fails, do Y (e.g. notify slack)
  • Regardless of success/failure, do Z (e.g. cleanup)

Additionally (as pointed out by @abayer !) Pipelines may need more complex logic, e.g.:

  • Only run this Task/follow this branch of tasks if we're not on a pull request (e.g. only publish an image, etc., from master branch)

This means we need to be able to express conditional execution generally against the state of the Pipeline. We can start with a specific subset, e.g.:

  • What Tasks have run
  • What Tasks have failed
  • The state of the Resources available to this Task (i.e. where you'd get information such as the branch)
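
To make this concrete, conditions could be modeled as predicates over that state. A rough Python sketch (the condition vocabulary here is invented for illustration, not a proposed API):

```python
def should_run(condition, state):
    """Evaluate a simple condition against pipeline state.

    state: {"failed": set of failed task names,
            "succeeded": set of succeeded task names,
            "branch": current branch}
    Illustrative condition forms: ("always",), ("on-failure", task),
    ("branch-is", branch).
    """
    kind = condition[0]
    if kind == "always":
        return True                              # e.g. cleanup tasks
    if kind == "on-failure":
        return condition[1] in state["failed"]   # e.g. notify slack
    if kind == "branch-is":
        return state["branch"] == condition[1]   # e.g. publish from master only
    raise ValueError(f"unknown condition: {kind}")

state = {"failed": {"unit-tests"}, "succeeded": {"build"}, "branch": "master"}
print(should_run(("on-failure", "unit-tests"), state))  # True: run the notifier
print(should_run(("branch-is", "master"), state))       # True: publish step runs
```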

Actual Behavior

At the moment, execution of a Pipeline will continue to traverse the DAG the user has expressed until either the Pipeline completes or one of the Tasks fails (a step exits with a non-0 code).

Additional Info

For other designs:

This doc has a (very) rough start at expressing some of the requirements for this (note doc visible to members of [email protected]).

Implement Pipeline Templating

Expected Behavior

When a user creates a Pipeline, they should be able to use templating to pass arguments to tasks, using the values that will eventually (via PipelineRun) be provided by the referenced PipelineParams and Resources.

This should include:

  • Unit test coverage
  • Docs on how to use this templating

Actual Behavior

Templating is being designed in #36 but is not implemented.

Steps to Reproduce the Problem

  1. Deploy the Pipeline CRD + related CRDs to your cluster (note: should be docs on how to do this post kubebuilder)
  2. Create a Task like this:
apiVersion: pipeline.knative.dev/v1beta1
kind: Task
metadata:
  name: template-params-task
  namespace: default
spec:
    inputs:
        params:
           - name: stuffToEcho
             value: string
    buildSpec:
        steps:
            - name: helloworld
              image: busybox
              args: ['echo', '${stuffToEcho}']
  3. Create Resources and PipelineParams like this (we're going to use the clusters):
apiVersion: pipeline.knative.dev/v1beta1
kind: Resource
metadata:
  name: template-resources
  namespace: default
spec:
  resources:
  - name: github-resource
    type: git
    params:
    - name: url
      value: https://github.com/grafeas/kritis
    - name: revision
      value: 123456789123456789
  - name: image-resource
    type: image
    params:
    - name: url
      value: gcr.io/somethingsomethingsomething
apiVersion: pipeline.knative.dev/v1beta1
kind: PipelineParams
metadata:
  name: template-params
  namespace: default
spec:
    clusters:
        - name: 'testCluster'
          type: 'gke'
          endpoint: 'https://somethingsomethingsomethingsomething.com'
  4. Create a Pipeline like this that uses templating to pass values from Resources and PipelineParams:
apiVersion: pipeline.knative.dev/v1beta1
kind: Pipeline
metadata:
  name: template-pipeline
  namespace: default
spec:
    tasks:
        - name: template-params-task-image
          taskRef:
              name: template-params-task
          params:
              - name: stuffToEcho
                value: ${resources.image-resource.url}
        - name: template-params-task-github
          taskRef:
              name: template-params-task
          params:
              - name: stuffToEcho
                value: ${resources.github-resource.revision}
        - name: template-params-task-cluster
          taskRef:
              name: template-params-task
          params:
              - name: stuffToEcho
                value: ${pipelineParams.testCluster.endpoint}
  5. Create a PipelineRun like this:
apiVersion: pipeline.knative.dev/v1beta1
kind: PipelineRun
metadata:
  name: template-param-run-1
  namespace: default
spec:
    pipelineRef:
        name: template-pipeline
    resourceRef:
        name: template-resources
    pipelineParamsRef:
        name: template-params
    triggerRef:
        type: manual
  6. Creating the PipelineRun should have created TaskRuns with all of the values filled in, so with kubectl get taskRuns you should be able to see:
  • Conditions updated appropriately
  • params (see this example) filled in with the templated values
  7. Looking at the Task logs (if this has been implemented, see #59) you should be able to see the appropriate values echoed for each task, i.e.:
  • template-params-task-image should echo gcr.io/somethingsomethingsomething
  • template-params-task-github should echo 123456789123456789
  • template-params-task-cluster should echo https://somethingsomethingsomethingsomething.com
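
The ${resources.<name>.<field>} and ${pipelineParams.<name>.<field>} references above can be thought of as dotted-path lookups into a resolution scope. A hedged sketch of that lookup (not the implemented templating engine):

```python
import re

def resolve(template, scope):
    """Resolve ${a.b.c} references by walking nested dicts in scope."""
    def lookup(match):
        value = scope
        for part in match.group(1).split("."):
            value = value[part]
        return str(value)
    return re.sub(r"\$\{([\w.-]+)\}", lookup, template)

# Scope built from the example Resource and PipelineParams above
scope = {
    "resources": {
        "image-resource": {"url": "gcr.io/somethingsomethingsomething"},
        "github-resource": {"revision": "123456789123456789"},
    },
    "pipelineParams": {
        "testCluster": {"endpoint": "https://somethingsomethingsomethingsomething.com"},
    },
}
print(resolve("${resources.image-resource.url}", scope))
```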

Additional Info

Blocked by #36 (design templating) and #61 (simple PipelineRun)

Remove kubebuilder (for now)

Expected Behavior

While looking into #19 we've decided that for now we want to move forward without kubebuilder (watch #19 for more info re. the rationale); we might revisit this later after following up with the kubebuilder folks.

So we would like:

  • build-pipeline's controllers to no longer require kubebuilder
  • Any boilerplate added by kubebuilder that we don't want anymore to be removed
  • Basic skeletons of these controllers to be re-added, but implemented in the same style as the knative/serving and knative/build controllers
  • We should maintain approximately the same level of functionality we currently have, i.e. it is possible to run controllers for all CRDs and perform some very basic validation against them.

We may still want to keep using envtest if it can work without our controllers using controller-runtime, since it's pretty handy to be able to run the controllers in tests without a k8s cluster.

Actual Behavior

Our current controllers were built using kubebuilder v1 and depend on controller-runtime.

Additional Info

See #19 for more.

Implement working "hello world" Task + TaskRun

Expected Behavior

A user should be able to create a "hello world" Task that just echoes "hello world" and run it as many times as they like in their k8s cluster by creating TaskRuns.

Requirements:

  • Test coverage (unit tests + if possible integration tests, may be blocked by #16 )
  • Docs that explain how to run this Task, with examples
  • Docs that explain how to debug this (i.e. where to look if it goes wrong)

Actual Behavior

It is possible to get something like this working with Build only, but currently our TaskRun controller does nothing.

Steps to Reproduce the Problem

  1. Deploy the Pipeline CRD + related CRDs to your cluster (note: should be docs on how to do this post kubebuilder)
  2. Create a task like this:
apiVersion: pipeline.knative.dev/v1beta1
kind: Task
metadata:
  name: helloworld-task
  namespace: default
spec:
    buildSpec:
        steps:
            - name: helloworld
              image: busybox
              args: ['echo', 'hello world']
  3. Create a task run like this:
apiVersion: pipeline.knative.dev/v1beta1
kind: TaskRun
metadata:
  name: helloworld-task-run-0
spec:
    taskRef:
        name: helloworld-task
    trigger:
        triggerRef:
            type: manual
  4. Look at the logs from the running container (Note the best docs we have on this at the moment are some of the steps in the Build quick start) - these should contain "hello world"
  5. Observe that the status of the TaskRun has been updated appropriately, e.g. see this example which contains:
  • Outcomes of individual steps
  • Started, Completed, and Successful conditions

Verify vendor directory

Expected Behavior

Running dep ensure (or whatever our DEVELOPMENT.md says to run) should be a no-op on the repo unless deps have actually changed.

Actual Behavior

When I run dep ensure -v on the repo, I get a whole slew of changes to vendor/

Steps to Reproduce the Problem

  1. Checkout repo
  2. Install go dep
  3. Run dep ensure -v

Additional Info

In the long run we'd like to use a script as part of CI to make sure vendor/ is kept in good shape on master. Potentially we can use:

Add Pipeline Label to PipelineRun and TaskRun

Expected Behavior

The PipelineRun and TaskRun objects should have a label with the pipeline name. It can be called knative.dev/pipeline with a value of the actual pipeline name.

The TaskRun controller can use the Pipeline name in Pod metadata and log messages.
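
The label propagation itself is simple; a sketch (the helper with_pipeline_label is hypothetical, while the label key is the one proposed above):

```python
PIPELINE_LABEL_KEY = "knative.dev/pipeline"

def with_pipeline_label(metadata, pipeline_name):
    """Return PipelineRun/TaskRun metadata with the pipeline label set,
    preserving any labels already present."""
    labels = dict(metadata.get("labels", {}))
    labels[PIPELINE_LABEL_KEY] = pipeline_name
    return {**metadata, "labels": labels}

taskrun_meta = with_pipeline_label({"name": "kaniko-run-0"}, "hello-world-pipeline")
print(taskrun_meta["labels"])
```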

Actual Behavior

Currently there are no labels added

Additional Info

  • The PR to add this functionality should add an integration test to validate labels are propagating correctly
  • Similar example in serving and the Spec for it
  • Should we have any other labels? such as Task name label on the TaskRun?

Define an exercise to create a pipeline from given tasks

Expected Behavior

Define an exercise to create a pipeline from a given Task Catalog. The tasks to be defined in the Catalog are listed in #9.

Actual Behavior

Does not exist yet.

Steps to Reproduce the Problem

none

Additional Info

The task could be as simple as "Create a Build-Test-Deploy pipeline for Kritis" where:

  • the source code is at github location X
  • unit tests can be run using steps Y
  • it needs to be deployed to the Z Artifact Store

Add validation for PipelineRun CRD

Expected Behavior

When someone creates a PipelineRun in a k8s cluster (e.g. with kubectl apply), the controller should make sure every field is valid.

This should include verifying as many fields as possible, e.g. verifying connectivity with any endpoints, if there are any references, verifying that those actually exist in the system, etc.

PipelineRun is defined here: https://github.com/knative/build-pipeline/blob/master/pkg/apis/pipeline/v1beta1/pipelinerun_types.go

Any additional fields included should cause a failure (e.g. if a typo is made when defining a known field).

Actual Behavior

Some basic validation is applied, e.g. that required fields are present and that fields are in the correct format (e.g. string vs. int), but that's it.

Steps to Reproduce the Problem

  1. Deploy the Pipeline CRDs to your k8s cluster
  2. Create an instance that has invalid fields, e.g. a malformed URL or a typo in one of the fields.
  3. Apply the instance with kubectl apply -f and notice that the controller does not complain

Additional Info

Blocked by #19

Create Documentation for Pipeline Parameters

Expected Behavior

For the user study, go through the pipeline documentation and iterate on the docs with someone on the team.

Actual Behavior

The docs are in good shape right now. We just need to sit with someone and see if they can follow along with the docs and samples created in #6.

Steps to Reproduce the Problem

none

Additional Info

Keep the documentation minimal; not all the details need to be fleshed out, since that might lead to confusion.
Cover only what is defined in the samples, and refer to the samples from the documentation.
This document can then be used as user-facing docs.

Add validation to controller for PipelineParams CRD

Expected Behavior

When someone creates a PipelineParams in a k8s cluster (e.g. with kubectl apply), the controller should make sure every field is valid.

This should include verifying as many fields as possible, e.g. verifying connectivity with any endpoints, if there are any references, verifying that those actually exist in the system, etc.

PipelineParams is defined here: https://github.com/knative/build-pipeline/blob/master/pkg/apis/pipeline/v1beta1/pipelineparams_types.go

Any additional fields included should cause a failure (e.g. if a typo is made when defining a known field).

Actual Behavior

Some basic validation is applied, e.g. that required fields are present and that fields are in the correct format (e.g. string vs. int), but that's it.

Steps to Reproduce the Problem

  1. Deploy the Pipeline CRDs to your k8s cluster
  2. Create an instance that has invalid fields, e.g. a malformed URL or a typo in one of the fields.
  3. Apply the instance with kubectl apply -f and notice that the controller does not complain

Additional Info

Blocked by #19

Design + POC: CLI tool for visualizing pipeline

This task is to propose a design and also create a POC tool that demonstrates that design.

Expected Behavior

We would like to have a CLI tool for interacting with the Pipeline CRD, to make it easier for a user to deal with all that YAML. The first feature we want from this tool is to visualize, on the command line, the DAG of a Pipeline.

Actual Behavior

As a Pipeline gets more complex, e.g. with Fan in and Fan out, it can be harder and harder to understand from the yaml alone. Also since the DAG is created by:

  • Implicit associations via input and output binding
  • next tasks
  • prev tasks

the order the Tasks are declared in the Pipeline itself doesn't matter, which could be confusing for someone working with this yaml.
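
As a starting point for the POC, once the DAG is computed the CLI could render it as an indented tree via a depth-first walk. A minimal sketch (task names and output format are invented):

```python
def render_dag(edges, roots):
    """edges: task -> list of downstream tasks; return an indented tree."""
    lines = []
    def walk(node, depth):
        lines.append("  " * depth + node)
        for child in edges.get(node, []):
            walk(child, depth + 1)
    for root in roots:
        walk(root, 0)
    return "\n".join(lines)

# A small fan-out example: build -> test -> {deploy-staging, deploy-docs}
edges = {"build": ["test"], "test": ["deploy-staging", "deploy-docs"]}
print(render_dag(edges, ["build"]))
```

A real tool would also need to draw fan-in (multiple parents) and cycles detection, but even this flat view makes the declaration-order-independent structure visible.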

Run unit tests in prow

Expected Behavior

Currently, we do not run unit tests in prow since there were no go test files.
In #75, I added go tests.
This issue is to run those go tests and any others that exist.

Actual Behavior

Currently no unit tests run

Steps to Reproduce the Problem

Additional Info

Design: Templating

The work for this task is to design this feature and present one or more proposals (before implementing).

Expected Behavior

In our examples, we use the syntax ${something} to reference things, but we don't have a clear model for what can and can't be referenced.

  • Tasks should be able to reference anything from their inputs or outputs
  • Pipelines should be able to reference outputs of Tasks in the Pipeline and a PipelineParams

For example, referencing both parameters and cluster bindings within a Task:
https://github.com/knative/build-pipeline/blob/master/examples/deploy_tasks.yaml#L47

Actual Behavior

This exists in our examples but it's not clear exactly what can and can't be referenced.

Setup continuous integration for build-pipeline

Expected Behavior

In addition to running tests against pull requests, and especially once we get #16 (integration tests) added, we should have our tests running continually to be sure nothing breaks over time (e.g. some kind of issue with a dependency - as much as we want our tests to be hermetic they probably never will be 100%).

Our pull request tests:

We should also update the badges in our README to use these so they won't turn red every time someone's PR breaks 😅

Actual Behavior

We currently have tests which run on PRs only, but no continuous integration.

Additional Info

This is an example of continuous integration for knative/serving:
https://prow.knative.dev/?job=ci-knative-serving-continuous

Add enum like values for all `type` placeholders

Expected Behavior

Whenever a type attribute is used, there should be string aliases for all the possible values so we can see what values should be possible. This should include just the types we would initially expect to implement.

Actual Behavior

There are a bunch of places in our API where we use a type attribute, but it's not clear what kinds of values should actually be used for these, for example https://github.com/knative/build-pipeline/blob/ef113e1d513c67b692eb1096904789be0e9616b6/pkg/apis/pipeline/v1beta1/pipelineparams_types.go#L35

Additional Info

According to k8s API conventions we should not use enums, but should use string constants instead, e.g. see knative/serving ServiceConditionType
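
In Go this would be typed string constants per the k8s conventions; the same idea sketched in Python (the type names listed are illustrative, drawn from the resource types used in these examples):

```python
# String "enums": named constants plus a validation set, per k8s API conventions
RESOURCE_TYPE_GIT = "git"
RESOURCE_TYPE_IMAGE = "image"
RESOURCE_TYPE_CLUSTER = "cluster"  # illustrative; the real set is TBD

KNOWN_RESOURCE_TYPES = {RESOURCE_TYPE_GIT, RESOURCE_TYPE_IMAGE, RESOURCE_TYPE_CLUSTER}

def validate_resource_type(value):
    """Reject type values outside the known set instead of using a real enum."""
    if value not in KNOWN_RESOURCE_TYPES:
        raise ValueError(
            f"unknown resource type {value!r}; expected one of {sorted(KNOWN_RESOURCE_TYPES)}")
    return value

print(validate_resource_type("git"))
```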

Setup PR testing for build-pipeline integration tests

Need to complete #15 before we can tackle this

Expected Behavior

Every PR should have integration tests run against it automatically.

(The long term dream is that we use the Pipeline CRD reference implementation itself to test this repo, but since it is just an API right now, that's a ways off!)

Actual Behavior

Even once #15 is done, we won't have anything to ensure we can actually deploy the CRDs and use them with a running controller.

Steps to Reproduce the Problem

To run the current equivalent of "integration tests", manually walk through the "installing and running" steps in the DEVELOPMENT.md:

# Add/update CRDs in your kubernetes cluster
make install

# Run your controller locally, (stop execution with `ctrl-c`)
make run

# In another terminal, deploy tasks
kubectl apply -f config/samples

Additional Info

We'll need a k8s cluster to deploy to; probably we can make use of boskos.

Whatever we implement should be easy for contributors to trigger manually and not require specific credentials.
