tektoncd / pipeline
A cloud-native Pipeline resource.
Home Page: https://tekton.dev
License: Apache License 2.0
A user should be able to create a PipelineRun which will look at the Tasks in the referenced Pipeline and, for each of those, create a TaskRun.
If #59 is done before this, creation of the TaskRun would also trigger running of the Task; if not, nothing would actually be executed, but the TaskRuns would exist.
Out of scope of this issue (coming in later issues):
Requirements:
If you create a PipelineRun, nothing will happen.
Task like this:
apiVersion: pipeline.knative.dev/v1beta1
kind: Task
metadata:
name: helloworld-task
namespace: default
spec:
buildSpec:
steps:
- name: helloworld
image: busybox
args: ['echo', 'hello', 'build']
Pipeline like this:
apiVersion: pipeline.knative.dev/v1beta1
kind: Pipeline
metadata:
name: hello-pipeline
namespace: default
spec:
tasks:
- name: helloworld-task-1
taskRef:
name: helloworld-task
- name: helloworld-task-2
taskRef:
name: helloworld-task
- name: helloworld-task-3
taskRef:
name: helloworld-task
PipelineRun like this:
apiVersion: pipeline.knative.dev/v1beta1
kind: PipelineRun
metadata:
name: hello-pipeline-run-1
namespace: default
spec:
pipelineRef:
name: hello-pipeline
triggerRef:
type: manual
This should create the TaskRuns, so we should then be able to see:
a. The status of the PipelineRun should be updated (see example); check with kubectl get pipelineRuns
b. There should be 3 TaskRuns created, each with a unique name and a reference to the PipelineRun (check with kubectl get taskRuns), for example:
apiVersion: pipeline.knative.dev/v1beta1
kind: TaskRun
metadata:
name: helloworld-task-run-1-1
spec:
taskRef:
name: helloworldTask
trigger:
triggerRef:
type: manual
(I left the PipelineParams reference out of the PipelineRun, but if it turns out that we need a ServiceAccount to make this work, we'll need to specify it in the PipelineParams and reference it in the PipelineRun as well)
To make sure we work in the same direction as we work towards our first milestone, we should have an integration test (which would be skipped) that attempts to perform the actions we want to demo, which we can review as a team and make sure we are happy with.
The only description of the expected behaviour is the description of milestone 1, but we are all on slightly different pages about what we need to accomplish.
When someone creates a TaskRun in a k8s cluster (e.g. with kubectl apply), the controller should make sure every field is valid.
This should include verifying as many fields as possible, e.g. verifying connectivity with any endpoints, if there are any references, verifying that those actually exist in the system, etc.
TaskRun is defined here: https://github.com/knative/build-pipeline/blob/master/pkg/apis/pipeline/v1beta1/taskrun_types.go
Any additional fields included should cause a failure (e.g. if a typo is made when defining a known field).
Some basic validation is applied, e.g. that required fields are present and that fields are in the correct format (e.g. string vs. int), but that's it.
kubectl apply -f and notice that the controller does not complain.
Blocked by #19
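As an illustration of the unknown-fields requirement, one technique (a sketch only, not the project's actual validation code) is strict JSON decoding: Go's json.Decoder can be told to reject any field the target type doesn't declare, which is exactly what's needed to catch typos in known field names. The taskRunSpec type here is a trimmed-down hypothetical stand-in for the real TaskRunSpec:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// taskRunSpec is a trimmed-down, illustrative stand-in for the real
// TaskRunSpec type; the fields here are hypothetical.
type taskRunSpec struct {
	TaskRef struct {
		Name string `json:"name"`
	} `json:"taskRef"`
}

// validateStrict decodes raw JSON into spec and rejects any field that
// the type does not declare, catching typos in known field names.
func validateStrict(raw []byte, spec interface{}) error {
	dec := json.NewDecoder(bytes.NewReader(raw))
	dec.DisallowUnknownFields()
	return dec.Decode(spec)
}

func main() {
	var spec taskRunSpec
	// "taksRef" is a deliberate typo; strict decoding rejects it.
	err := validateStrict([]byte(`{"taksRef": {"name": "helloworld-task"}}`), &spec)
	fmt.Println("rejected:", err != nil) // prints "rejected: true"
}
```

The same approach would apply to the analogous validation issues for Task, Pipeline, PipelineRun, and PipelineParams.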
The k8s project has a solid set of documentation guidelines. It would be great if the docs in this project followed them. This means two things:
If possible we should consider adding some markdown linting as part of our
We did not follow any kind of standard when we wrote these :S
The work for this task is to design this feature and present one or more proposals (before implementing).
One of the features we want the Pipeline to have is to have strong typing around concepts that will be used commonly in k8s centric CI/CD pipelines.
One of the concepts we want to have strong typing around is a "container image", which is something that could be produced by a Task as an output, and published to an artifact store, and may be used by subsequent Tasks for testing or deploying.
Some attributes we'd probably like include:
This concept only exists theoretically, but in this example pipeline we have a builtImage
attribute on an artifact store binding, declaring what we'd like the built image to be called:
Define a task which will ask users to define a pipeline param
Does not exist.
none
The task can be something as simple as
" Define a github source code repository https://github.com/knative/build-pipeline to Pipeline paramas"
At a certain point we will decide we want to solidify the API and we will want to prevent backwards incompatible changes from being made. We should define a process for introducing changes into the API to minimize disruption to anyone that has integrated with us. For example:
As of today (Sept 10, 2018) the API is still under discussion and is far from the point where we'd expect anyone to integrate with us, so it's okay to make backwards incompatible changes.
Once we have the Pipeline actually working to some extent, and we want to start demoing to people (aka we've hit our first milestone), we should introduce:
We have no versioning and no changelog, which is fine b/c this doesn't actually do anything yet.
Blocked on hitting our first milestone.
Create tasks for user study which will be used in sample pipelines. Using this task, we will ask user to define a new pipeline.
Does not exist.
none
Create tasks for user study.
As of now, things on top of my list are
For user study, go through documentation and iterate the docs with someone on the team.
Docs are currently not reviewed for whether they are in line with the samples.
none
Minimal documentation; not all the details need to be fleshed out, since that might lead to confusion.
Only relevant to what is defined in samples. Refer to samples in documentations.
should make sense with #3
This document can then be used as user facing docs.
The work for this task is to design this feature and present one or more proposals (before implementing).
If a pipeline has many tasks and takes a long time to run (e.g. tens of minutes, or even hours), and one Task fails, it might be desirable to be able to pick up execution where the Task failed, with different PipelineParams (e.g. from a different git commit), so you can resume the Pipeline without having to rerun the whole thing.
Some ideas for how to implement this:
It is also worth considering what this could be like via a UI: if one is viewing a Pipeline in a UI, and wants to re-run only a portion of the Pipeline, they probably want the user experience to be as if they were still running the same Pipeline, even if underneath a new Pipeline is created.
At the moment, if any Task in a Pipeline fails, your options to rerun the rest of the Pipeline would be:
This originally came up in discussion about #39, in the context of whether or not we'd want to always use the same git commit from a source for all Tasks in a Pipeline, or if we wanted sometimes for a Task to always use HEAD. This would allow a user to change a repo, by updating HEAD, between Task executions.
The feature of partial pipeline execution could be an alternative to this.
When a user creates a TaskRun, the TaskRun will contain the actual Resources that the Task is using, and if these Resources are inputs, they should be made available as mounted volumes in the Task's steps.
For example run-kritis-test.yaml contains a kritis github source (note: maybe this section should be called resource now?):
sources:
- name: 'kritis'
type: 'github'
url: 'github.com/grafeas/kritis'
branch: 'featureX'
commit: 'HEAD'
This would mean that the TaskRun controller would need to pull from that github repo and make the contents available in a mounted volume for the steps subsequently executed by the TaskRun's Task.
It would need to use a serviceAccount to do this which would have the correct GitHub credentials.
gitToContainer
)Resource
Bonus:
We have none of this implemented; the TaskRun controller currently just (barely) validates the yaml.
(Open to redesign: instead of repeating all of the resources and credentials in the TaskRun, the TaskRun could have references to the Resource and PipelineParam CRDs. One downside though is that if you look at a TaskRun, you won't know for sure what values were used, since they could have been subsequently updated.)
On a Branch, we would want developers to run dep ensure, so that unused dependencies are removed.
We can do this by running a script in the checks which runs dep ensure and checks if there are any changes to the vendor dir.
Right now, developers can easily add unused deps and overlook the libs pulled in, since they tend to just run
git add vendor/
none
Our controllers should be written in the current preferred style, which is that used by https://github.com/knative/serving.
Write some docs on how to write controllers in this style (maybe in https://github.com/knative/pkg).
If we want people to write controllers in this style, we should document how to do that. Currently the only option is to reverse engineer the style from the examples in knative/serving, which without guidance can easily lead to cargo-culting.
In #66 we removed kubebuilder
and re-did our controllers so that they use the same style as https://github.com/knative/build. However in the review, @imjasonh pointed out that the style in https://github.com/knative/serving is actually the latest and greatest, and @mattmoor pointed us at https://github.com/mattmoor/warm-image as an example.
Spawned from #57
Sample pipelines should be easy to understand and can be reviewed by people for the upcoming user study.
Samples do not exist.
none
In this task we need to define sample pipelines which
Create a pipeline-params which is easy to use.
These samples are solely for the purpose of user study and should not be more than 5 min read.
Try to create samples that only depict one feature per pipeline. Try using the one source, artifact store, result store across all examples.
Test Results are first-class citizens in the Pipeline CRDs.
We want to flesh out the requirements for the Result Store where these results will be stored.
What kind of Results.
The work for this task is to design this feature and present one or more proposals (before implementing).
In many modern pipelines, we want to be able to
Requirements:
Possibly a notification could be a top level directive in both a Pipeline and a Task, e.g.:
kind: Pipeline
...
spec:
tasks:
- name: unit-test-kritis
taskRef:
name: make
inputSourceBindings:
- inputName: workspace
sourceKey: kritis
params:
- name: makeTarget
value: test
notifications:
failure:
....
success:
.....
In the current design, a user would have to implement this as a Task that knows how to report these notifications.
There is some potential overlap with #27, which is about executing Tasks conditionally.
The DEVELOPMENT.md should cover these common actions:
Any other common operations you think might be useful!
The DEVELOPMENT.md is pretty sparse.
The less you assume the better :D
Once we have implemented a simple helloworld Task (#59), a user should be able to define a Task that can deploy a k8s resource using helm.
This would mean that:
To try this out manually, you'll need a k8s cluster that has been set up that you can deploy to.
Should include:
We have no logic currently for even running a task, but #59 will add basic logic. #62 will expand on this but shouldn't block this task.
apiVersion: pipeline.knative.dev/v1beta1
kind: Resource
metadata:
name: helm-resources
namespace: default
spec:
resources:
- name: kritis-app-github-resource
type: github
params:
- name: url
value: https://github.com/grafeas/kritis
- name: revision
value: master
- name: kritis-server
type: image
params:
- name: url
value: gcr.io/kritis-project
apiVersion: pipeline.knative.dev/v1beta1
kind: Task
metadata:
name: helm
namespace: default
spec:
inputs:
resources:
- name: workspace
type: github
- name: imageToDeploy
type: image
params:
- name: pathToHelmCharts
value: string
- name: helmArgs
value: string
cluster:
- name: clusterToDeployTo
buildSpec:
steps:
- name: deploy
image: kubernetes-helm
args: ['deploy', '--path=${pathToHelmChart}', '--set image=${imageToDeploy.Url}.${imageToDeploy.Name}', '${helmArgs}', '--cluster=${clusterToDeployTo.Endpoint}']
(Note I am pretty much making up the helm command here, please replace it with something that actually works and makes sense!)
TaskRun like this:
apiVersion: pipeline.knative.dev/v1beta1
kind: TaskRun
metadata:
name: helm-run-0
spec:
taskRef:
name: helm
resourceRef:
name: helm-resources
trigger:
triggerRef:
type: manual
The status of the TaskRun has been updated appropriately, e.g. see this example which contains:
The work for this task is to design this feature and present one or more proposals (before implementing).
It should be possible for Pipelines to execute against multiple kinds of "sources".
The exact definition of what a "source" is is a bit murky, but generally we want this to be the same as a knative build CRD source, since ultimately that is how the Source will be used (once it is passed down to a Task as an input).
As far as the knative build CRD is concerned, a source could be:
And we would want to support other sources in the future as well, which should be arbitrarily extensible.
Also in #2 @pivotal-nader-ziada pointed out that we might want to be able to use outputs of Tasks as inputs to subsequent Tasks (e.g. mount a built image directly into a subsequent Task as a source).
In our examples we show what a possible github source could look like:
Though event triggering is outside the scope of the Pipeline CRD, we need to make sure that when an event DOES trigger a pipeline to run (by creating a PipelineRun and probably also a PipelineParams), it is possible to provide all relevant info, so it might be worth exploring what info is contained in events such as github Pull Request events.
See #11 re. ensuring the same sources/inputs are used throughout a Pipeline.
When someone creates a Task in a k8s cluster (e.g. with kubectl apply), the controller should make sure every field is valid.
This should include verifying as many fields as possible, e.g. verifying connectivity with any endpoints, if there are any references, verifying that those actually exist in the system, etc.
Task is defined here: https://github.com/knative/build-pipeline/blob/master/pkg/apis/pipeline/v1beta1/task_types.go
Any additional fields included should cause a failure (e.g. if a typo is made when defining a known field).
Some basic validation is applied, e.g. that required fields are present and that fields are in the correct format (e.g. string vs. int), but that's it.
kubectl apply -f and notice that the controller does not complain.
Blocked by #19
Every PR should have unit tests run against it automatically.
(The long term dream is that we use the Pipeline CRD reference implementation itself to test this repo, but since it is just an API right now, that's a ways off!)
We have no automation! Is master broken? WHO KNOWS, IT'S A CRAZY WORLD.
To actually run the tests:
make test
(see also the DEVELOPMENT.md)
We would be fine with using something like Travis initially, but it would have to be enabled for the entire knative
org, and since the rest of the org is using Prow, we can probably jump on that train.
Configuration for Prow for the other knative projects lives in https://github.com/knative/test-infra/tree/master/ci/prow
The following types should be added:
clusters
clusterBindings
cluster inputs
(All should be optional)
This wouldn't be expected to actually do anything yet (similar to the rest of the current implementation).
We have references to clusters in our example tasks:
Clusters defined in our example pipeline params:
Note that the clusterBinding example is incomplete:
https://github.com/knative/build-pipeline/blob/3726a431cee21f9a88bc320339c978cca59a58e5/examples/pipelines/kritis.yaml#L44-L45
This should be binding a cluster
from PipelineParams to a cluster
input in the Task, e.g.
clusterBindings:
- inputName: clusterName # refers to the input in the Task
clusterName: test # refers to the cluster in the PipelineParams
@barney-s will be working on designs around deployment and k8s cluster interaction for this project
When someone wants to ramp up on this project, they should be able to follow along with a tutorial that explains to them how to create their own pipelines and interact with them.
We can initially cover a simple pipeline, e.g. the example kritis pipeline.
kubectl (e.g. like in the DEVELOPMENT.md).
Initially this would not actually do anything, but as we start adding actual functionality we can build this up over time.
If you are feeling really ambitious, you could also cover the more complex guestbook pipeline which has both fan-in and fan-out.
The work for this task is to design this feature and present one or more proposals (before implementing).
If you create a PipelineRun which executes against a particular branch/commit/etc., you want to be certain that the same branch/commit etc is used all the way through.
For example if your pipeline runs against master, but between two Tasks in your Pipeline, master is updated, you want subsequent Tasks to continue using the same commit they started using.
Additionally, as pointed out by @pivotal-nader-ziada in #2, we need to be sure that when one Task creates an artifact such as an image, that subsequent steps that use that image must be guaranteed to run against the correct image (e.g. if the image is identified using only tags, the tag could be pushed to by something between the steps - one possible way to circumvent this is to use digests).
Nothing ensures this at the moment. It should be possible to construct a Pipeline that will meet these criteria, but if the author of the Pipeline makes a mistake, there is nothing verifying that the same sources and inputs are used throughout.
Concourse solves this problem, the person investigating this design might want to check out how they solve this (e.g. ping @jchesterpivotal :D)
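One way to get the image guarantee described above is to reference built images by digest rather than by mutable tag. As a purely hypothetical sketch in the style of the resource examples in this doc (the digest value is a placeholder, not a real image):

resources:
- name: guestbook-image
  type: image
  params:
  - name: url
    # Hypothetical: pinned by digest so later Tasks cannot pick up a
    # different image pushed to the same tag between steps.
    value: gcr.io/some-registry/guestbook@sha256:&lt;digest-of-built-image&gt;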
Once we have implemented a simple helloworld Task (#59), a user should be able to define a Task that can build and push an image to a registry using Kaniko. This would mean that:
Should include:
Once #59 is complete, it will be possible to run simple helloworld Task, but it won't be possible to use inputs or outputs.
gcr.io/some-registry is a registry that you/your service account can push to):
apiVersion: pipeline.knative.dev/v1beta1
kind: Resource
metadata:
name: kaniko-resources
namespace: default
spec:
resources:
- name: guestbook
type: git
params:
- name: url
value: github.com/kubernetes/examples
- name: revision
value: HEAD
- name: serviceAccount
value: githubServiceAccount
- name: guestbook-image
type: image
params:
- name: url
value: gcr.io/some-registry
apiVersion: pipeline.knative.dev/v1beta1
kind: Task
metadata:
name: build-push
namespace: default
spec:
inputs:
resources:
- name: workspace
type: github
params:
- name: pathToDockerFile
value: string
outputs:
resources:
- name: imageToBuild
type: image
buildSpec:
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor
args:
- --dockerfile=${DOCKERFILE}
- --destination=${IMAGE}
(Or alternatively you can use the kaniko build template, but that would mean adding support for build templates:
buildSpec:
template:
name: kaniko
arguments:
- name: DOCKERFILE
value: ${pathToDockerFile}
- name: REGISTRY
value: ${registry}
TaskRun like this:
apiVersion: pipeline.knative.dev/v1beta1
kind: TaskRun
metadata:
name: kaniko-run-0
spec:
taskRef:
name: build-push
resourceRef:
name: kaniko-resources
trigger:
triggerRef:
type: manual
The status of the TaskRun has been updated appropriately, e.g. see this example which contains:
We should have one clear location for all of our example yamls and it should be tested (and eventually included in our CI - #15).
We currently have two sets of examples:
config/samples is used in our unit tests but examples is not.
There is some overlap with #3 and @tejal29 will need to be consulted re. the plan around those samples so that:
Though our controllers (e.g. pipeline run controller) don't do much, we should be happy with how they are built and prepared to move forward with the skeletons we do have as we start actually implementing functionality.
We used kubebuilder to bootstrap our controllers, which allowed us to get up and moving pretty quickly. However the controllers it generates depend on a lot of code in controller-tools and controller-runtime.
We should compare a controller created using these libs to one implemented from scratch (e.g. the knative/serving controllers) and make sure we want to keep using kubebuilder in the long run.
n/a
The work for this task is to design this feature and present one or more proposals (before implementing).
A common expectation of a pipeline is to deploy the created application/function on some cluster.
The deploy resource should satisfy the following requirements:
an example definition of a pipeline
kind: Pipeline
...
spec:
tasks:
- name: unit-test-kritis
taskRef:
name: make
inputSourceBindings:
- name: workspace
key: kritis
outputSourceBindings:
- name: deploy-output
key: deploy-to-test-cluster
params:
- name: makeTarget
value: test
and an example of a resource
...
apiVersion: pipeline.knative.dev/v1beta1
kind: Resource
metadata:
name: deploy-to-test-cluster
namespace: default
spec:
name: deploy-to-test-cluster
type: deploy
params:
- name: cluster
value: kritis-cluster
...
We currently have no resource to deploy using a helm chart or otherwise.
When someone creates a Pipeline in a k8s cluster (e.g. with kubectl apply), the controller should make sure every field is valid.
This should include verifying as many fields as possible, e.g. verifying connectivity with any endpoints, if there are any references, verifying that those actually exist in the system, etc.
Pipeline is defined here: https://github.com/knative/build-pipeline/blob/master/pkg/apis/pipeline/v1beta1/pipeline_types.go
Any additional fields included should cause a failure (e.g. if a typo is made when defining a known field).
Some basic validation is applied, e.g. that required fields are present and that fields are in the correct format (e.g. string vs. int), but that's it.
kubectl apply -f and notice that the controller does not complain.
Blocked by #19
If a user creates a Pipeline which has passedConstraints specified:
1. This should be used to construct a DAG of tasks (including fan in and fan out)
2. Ensure that the version of the Resource used by the task is the same as the version used by the Task(s) in the passedConstraint
Should include: Resources that are changed between Tasks and how to use passedConstraints to make sure the right versions are used.
After #61 we will have a simple Pipeline in which either the DAG is flat or all Tasks execute simultaneously; i.e. the passedConstraints are ignored.
Resource that brings in the hello-world docker image repo, and declares an image we will be building (gcr.io/my-registry must be changed to a registry that the specified serviceAccounts will have permission to push to):
apiVersion: pipeline.knative.dev/v1beta1
kind: Resource
metadata:
name: resources
namespace: default
spec:
resources:
- name: hello-world-resource
type: github
params:
- name: url
value: https://github.com/docker-library/hello-world
- name: revision
value: master
- name: hello-world-image
type: image
params:
- name: url
value: gcr.io/my-registry
apiVersion: pipeline.knative.dev/v1beta1
kind: Task
metadata:
name: build-push
namespace: default
spec:
inputs:
resources:
- name: workspace
type: github
params:
- name: pathToDockerFile
value: string
outputs:
resources:
- name: imageToBuild
type: image
buildSpec:
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor
args:
- --dockerfile=${DOCKERFILE}
- --destination=${IMAGE}
apiVersion: pipeline.knative.dev/v1beta1
kind: Task
metadata:
name: run
namespace: default
spec:
inputs:
resources:
- name: imageToRun
type: image
buildSpec:
steps:
- name: run-image
image: "${imageToRun.url}/${imageToRun.name}"
Pipeline which wires the two Tasks together using passedConstraints:
apiVersion: pipeline.knative.dev/v1beta1
kind: Pipeline
metadata:
name: hello-world-pipeline
namespace: default
spec:
tasks:
- name: build-hello-world
taskRef:
name: build-push
inputSourceBindings:
- key: workspace
resourceRef:
name: hello-world-resource
outputSourceBindings:
- key: imageToBuild
resourceRef:
name: hello-world-image
- name: run-hello-world
taskRef:
name: run
inputSourceBindings:
- name: imageToRun
resourceRef:
name: hello-world-image
passedConstraints: ['build-hello-world']
PipelineRun which invokes the Tasks (you'll have to create a PipelineParams as well with your serviceAccount, in this case called hello-world-params):
apiVersion: pipeline.knative.dev/v1beta1
kind: PipelineRun
metadata:
name: pipeline-run-1
namespace: default
spec:
pipelineRef:
name: hello-world-pipeline
resourceRef:
name: hello-world-resource
pipelineParamsRef:
name: hello-world-params
triggerRef:
type: manual
Creating the PipelineRun should have created TaskRuns with all of the values filled in, so with kubectl get taskRuns
you should be able to see conditions updated appropriately
Looking at the Task logs (if this has been implemented, see #59) or at the logs for the running pod if not, you should be able to see that the hello world image was run and output the expected value from the hello world image
You should be able to see hello-world image
built and pushed to your registry.
The work for this task is to design this feature and present one or more proposals (before implementing).
Pipelines need to be able to express Tasks that execute conditionally, e.g.:
Additionally (as pointed out by @abayer !) Pipelines may need more complex logic, e.g.:
This means we need to be able to express conditional execution generally against the state of the Pipeline. We can start with a specific subset, e.g.:
At the moment, execution of a Pipeline will continue to traverse the DAG the user has expressed until either the Pipeline completes or one of the Tasks fails (a step exits with a non-0 code).
For other designs:
This doc has a (very) rough start at expressing some of the requirements for this (note doc visible to members of [email protected]).
When a user creates a Pipeline, they should be able to use templating to pass arguments to tasks, using the values that will eventually (via PipelineRun) be provided by the referenced PipelineParams and Resources.
This should include:
Templating is being designed in #36 but is not implemented.
Task like this:
apiVersion: pipeline.knative.dev/v1beta1
kind: Task
metadata:
name: template-params-task
namespace: default
spec:
inputs:
params:
- name: stuffToEcho
value: string
buildSpec:
steps:
- name: helloworld
image: busybox
args: ['echo', '${stuffToEcho}']
Resources and PipelineParams like this (we're going to use the clusters):
apiVersion: pipeline.knative.dev/v1beta1
kind: Resource
metadata:
name: template-resources
namespace: default
spec:
resources:
- name: github-resource
type: git
params:
- name: url
value: https://github.com/grafeas/kritis
- name: revision
value: 123456789123456789
- name: image-resource
type: image
params:
- name: url
value: gcr.io/somethingsomethingsomething
apiVersion: pipeline.knative.dev/v1beta1
kind: PipelineParams
metadata:
name: template-params
namespace: default
spec:
clusters:
- name: 'testCluster'
type: 'gke'
endpoint: 'https://somethingsomethingsomethingsomething.com'
Pipeline like this that uses templating to pass values from Resources and PipelineParams:
apiVersion: pipeline.knative.dev/v1beta1
kind: Pipeline
metadata:
name: template-pipeline
namespace: default
spec:
tasks:
- name: template-params-task-image
taskRef:
name: template-params-task
params:
- name: stuffToEcho
value: ${resources.image-resource.url}
- name: template-params-task-github
taskRef:
name: template-params-task
params:
- name: stuffToEcho
value: ${resources.github-resource.revision}
- name: template-params-task-cluster
taskRef:
name: template-params-task
params:
- name: stuffToEcho
value: ${pipelineParams.testCluster.endpoint}
PipelineRun like this:
apiVersion: pipeline.knative.dev/v1beta1
kind: PipelineRun
metadata:
name: template-param-run-1
namespace: default
spec:
pipelineRef:
name: template-pipeline
resourceRef:
name: template-resources
pipelineParamsRef:
name: template-params
triggerRef:
type: manual
With kubectl get taskRuns you should be able to see:
template-params-task-image should echo gcr.io/somethingsomethingsomething
template-params-task-github should echo 123456789123456789
template-params-task-cluster should echo https://somethingsomethingsomethingsomething.com
Blocked by #36 (design templating) and #61 (simple PipelineRun)
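The substitution behaviour expected above could be sketched roughly as follows; this is illustrative only, since the actual templating syntax and semantics are still being designed in #36:

```go
package main

import (
	"fmt"
	"regexp"
)

// placeholder matches expressions like ${resources.image-resource.url}.
var placeholder = regexp.MustCompile(`\$\{([^}]+)\}`)

// substitute replaces every ${key} in s with values[key], leaving any
// unknown key untouched so the problem is visible in the output.
func substitute(s string, values map[string]string) string {
	return placeholder.ReplaceAllStringFunc(s, func(m string) string {
		key := placeholder.FindStringSubmatch(m)[1]
		if v, ok := values[key]; ok {
			return v
		}
		return m
	})
}

func main() {
	values := map[string]string{
		"resources.image-resource.url":       "gcr.io/somethingsomethingsomething",
		"resources.github-resource.revision": "123456789123456789",
	}
	fmt.Println(substitute("${resources.image-resource.url}", values))
	// prints "gcr.io/somethingsomethingsomething"
}
```

The controller would build the values map from the referenced Resources and PipelineParams before creating each TaskRun.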
While looking into #19 we've decided that for now we want to move forward without kubebuilder (watch #19 for more info re. the rationale); we might revisit this later after following up with the kubebuilder folks.
So we would like:
build-pipeline's controllers to no longer require kubebuilder.
We may still want to keep using envtest if it can work without our controllers using controller-runtime, since it's pretty handy to be able to run the controllers in tests without a k8s cluster.
Our current controllers were built using kubebuilder v1 and depend on controller-runtime.
See #19 for more.
Create a separate design doc which explores in detail how references will work between our resources, e.g.:
Documentation:
https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#references-to-related-objects
A user should be able to create a "hello world" Task that just echoes "hello world" and run it as many times as they like in their k8s cluster by creating TaskRuns.
Requirements:
It is possible to get something like this working with Build only, but currently our TaskRun controller does nothing.
apiVersion: pipeline.knative.dev/v1beta1
kind: Task
metadata:
name: helloworld-task
namespace: default
spec:
buildSpec:
steps:
- name: helloworld
image: busybox
args: ['echo', 'hello world']
apiVersion: pipeline.knative.dev/v1beta1
kind: TaskRun
metadata:
name: helloworld-task-run-0
spec:
taskRef:
name: helloworld-task
trigger:
triggerRef:
type: manual
The status of the TaskRun has been updated appropriately, e.g. see this example which contains:
Running dep ensure (or whatever our DEVELOPMENT.md says to run) should be a no-op on the repo unless deps have actually changed.
When I run dep ensure -v on the repo, I get a whole slew of changes to vendor/
dep ensure -v
In the long run we'd like to use a script as part of CI to make sure vendor/ is kept in good shape on master. Potentially we can use:
The PipelineRun and TaskRun objects should have a label with the pipeline name. It can be called knative.dev/pipeline with a value of the actual pipeline name.
The TaskRun controller can use the Pipeline name in Pod metadata and log messages.
Currently there are no labels added
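As a sketch, a TaskRun created for hello-pipeline would then carry metadata like this (assuming the proposed knative.dev/pipeline label key):

metadata:
  name: helloworld-task-run-1-1
  labels:
    # Hypothetical: set by the PipelineRun controller so TaskRuns can be
    # listed by pipeline, e.g. kubectl get taskRuns -l knative.dev/pipeline=hello-pipeline
    knative.dev/pipeline: hello-pipeline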
Define a task to create a pipeline from a given Task Catalog. The tasks to be defined in the Catalog are listed in #9.
Does not exist.
none
The task could be as simple as "Create a Build-Test-Deploy pipeline for Kritis where
When someone creates a PipelineRun in a k8s cluster (e.g. with kubectl apply), the controller should make sure every field is valid.
This should include verifying as many fields as possible, e.g. verifying connectivity with any endpoints, if there are any references, verifying that those actually exist in the system, etc.
PipelineRun is defined here: https://github.com/knative/build-pipeline/blob/master/pkg/apis/pipeline/v1beta1/pipelinerun_types.go
Any additional fields included should cause a failure (e.g. if a typo is made when defining a known field).
Some basic validation is applied, e.g. that required fields are present and that fields are in the correct format (e.g. string vs. int), but that's it.
Run kubectl apply -f and notice that the controller does not complain. Blocked by #19.
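One way a controller can reject unknown fields (so a typo in a known field fails instead of being silently dropped) is Go's encoding/json Decoder with DisallowUnknownFields. The struct below is a hypothetical stand-in for the real PipelineRun spec, shown only to illustrate the strictness being asked for:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// pipelineRunSpec is a minimal stand-in for the real PipelineRun spec;
// field names are illustrative, not the actual API.
type pipelineRunSpec struct {
	PipelineRef struct {
		Name string `json:"name"`
	} `json:"pipelineRef"`
}

// decodeStrict rejects any field not declared on the target struct.
func decodeStrict(data string, out interface{}) error {
	dec := json.NewDecoder(strings.NewReader(data))
	dec.DisallowUnknownFields()
	return dec.Decode(out)
}

func main() {
	var spec pipelineRunSpec
	// "pipelneRef" is a deliberate typo; strict decoding surfaces it as an error.
	err := decodeStrict(`{"pipelneRef": {"name": "hello-pipeline"}}`, &spec)
	fmt.Println(err)
}
```

Reference checks (e.g. that a referenced Pipeline actually exists) would still need separate lookups against the cluster; strict decoding only covers the "no additional fields" requirement.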
For user study, go through pipeline documentation and iterate the docs with someone on the team.
The docs are in good shape right now. We just need to sit with someone and see if they can follow along with the docs and samples created in #6.
Keep the documentation minimal; not all the details need to be fleshed out yet, since too much detail might lead to confusion.
Keep it relevant to what is defined in the samples, and refer to the samples in the documentation.
This document can then be used as user facing docs.
When someone creates a PipelineParams in a k8s cluster (e.g. with kubectl apply), the controller should make sure every field is valid.
This should include verifying as many fields as possible, e.g. verifying connectivity with any endpoints, if there are any references, verifying that those actually exist in the system, etc.
PipelineParams is defined here: https://github.com/knative/build-pipeline/blob/master/pkg/apis/pipeline/v1beta1/pipelineparams_types.go
Any additional fields included should cause a failure (e.g. if a typo is made when defining a known field).
Some basic validation is applied, e.g. that required fields are present and that fields are in the correct format (e.g. string vs. int), but that's it.
Run kubectl apply -f and notice that the controller does not complain. Blocked by #19.
This task is to propose a design and also create a POC tool that demonstrates that design.
We would like to have a CLI tool for interacting with the Pipeline CRD, to make it easier for a user to deal with all that YAML. The first feature we want from this tool is to visualize, on the command line, the DAG of a Pipeline.
As a Pipeline gets more complex, e.g. with fan-in and fan-out, it can be harder and harder to understand from the yaml alone. Also, since the DAG is created by next tasks and prev tasks, the order the Tasks are declared in the Pipeline itself doesn't matter, which could be confusing for someone working with this yaml.
As requested in the build CRD working group, we should compare the features we are looking for in the Pipeline CRD to features provided by existing solutions to explain why we feel there is a need for this project.
Currently, we do not run unit tests in Prow since there were no go test files.
In #75, I added go tests.
This issue is to run the go tests and any others that exist.
Currently no unit tests run
The work for this task is to design this feature and present one or more proposals (before implementing).
In our examples, we use the syntax ${something} to reference things, but we don't have a clear model for what can and can't be referenced.
For example, referencing both parameters and cluster bindings within a Task:
https://github.com/knative/build-pipeline/blob/master/examples/deploy_tasks.yaml#L47
This exists in our examples but it's not clear exactly what can and can't be referenced.
In addition to running tests against pull requests, and especially once we get #16 (integration tests) added, we should have our tests running continually to be sure nothing breaks over time (e.g. some kind of issue with a dependency - as much as we want our tests to be hermetic they probably never will be 100%).
Our pull request tests:
We should also update the badges in our README to use these so they won't turn red every time someone's PR breaks.
We currently have tests which run on PRs only, but no continuous integration.
This is an example of continuous integration for knative/serving:
https://prow.knative.dev/?job=ci-knative-serving-continuous
Whenever a type attribute is used, there should be string aliases for all the possible valid values so we can see what values are expected. This should just include the types we would initially expect to implement.
There are a bunch of places in our API where we use a type attribute, but it's not clear what kinds of values should actually be used for these, for example https://github.com/knative/build-pipeline/blob/ef113e1d513c67b692eb1096904789be0e9616b6/pkg/apis/pipeline/v1beta1/pipelineparams_types.go#L35
According to k8s API conventions we should not use enums, but should use string constants instead, e.g. see knative/serving ServiceConditionType
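Following that convention, the pattern would look roughly like this in Go. The manual trigger value appears in the examples in this repo, but treating it as the initial constant set is an assumption:

```go
package main

import "fmt"

// Per k8s API conventions, model "enums" as a named string type plus
// typed constants rather than an enum type; compare knative/serving's
// ServiceConditionType.
type PipelineTriggerType string

const (
	// PipelineTriggerTypeManual matches the "manual" trigger used in the
	// PipelineRun examples; further values would be added as trigger
	// types are actually implemented.
	PipelineTriggerTypeManual PipelineTriggerType = "manual"
)

func main() {
	t := PipelineTriggerTypeManual
	// A typed constant still serializes/compares as its plain string value.
	fmt.Println(t == "manual") // true
}
```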
The k8s API conventions explain that optional fields should have both the json omitempty directive and the +optional comment.
We are currently using only "omitempty" to indicate which fields are optional, e.g. a Pipeline's Task source bindings: https://github.com/knative/build-pipeline/blob/ef113e1d513c67b692eb1096904789be0e9616b6/pkg/apis/pipeline/v1beta1/pipeline_types.go#L55
Need to complete #15 before we can tackle this
Every PR should have integration tests run against it automatically.
(The long term dream is that we use the Pipeline CRD reference implementation itself to test this repo, but since it is just an API right now, that's a ways off!)
Even once #15 is done, we won't have anything to ensure we can actually deploy the CRDs and use them with a running controller.
To run the current equivalent of "integration tests", manually walk through the "installing and running" steps in the DEVELOPMENT.md:
# Add/update CRDs in your kubernetes cluster
make install
# Run your controller locally, (stop execution with `ctrl-c`)
make run
# In another terminal, deploy tasks
kubectl apply -f config/samples
We'll need a k8s cluster to deploy to, probably we can make use of boskos:
Whatever we implement should be easy for contributors to trigger manually and not require specific credentials.