This document defines Pipelines and their capabilities.
To define a configuration file for a Pipeline resource, you can specify the
following fields:
- Required:
  - `apiVersion` - Specifies the API version, for example `tekton.dev/v1alpha1`.
  - `kind` - Specify the `Pipeline` resource object.
  - `metadata` - Specifies data to uniquely identify the `Pipeline` resource object, for example a `name`.
  - `spec` - Specifies the configuration information for your `Pipeline` resource object. In order for a `Pipeline` to do anything, the spec must include:
    - `tasks` - Specifies which `Tasks` to run and how to run them
- Optional:
  - `resources` - Specifies which `PipelineResources` of which types the `Pipeline` will be using in its `Tasks`
  - `tasks`
    - `resources.inputs` / `resource.outputs`
      - `from` - Used when the content of the `PipelineResource` should come from the output of a previous Pipeline Task
    - `runAfter` - Used when the Pipeline Task should be executed after another Pipeline Task, but there is no output linking required
    - `retries` - Used when you want the Task to be retried if it fails, for example because of a network error or a missing dependency. It does not apply to cancellations.
    - `conditions` - Used when a Task should be executed only if the specified conditions evaluate to true.
In order for a Pipeline to interact with the outside world, it will probably
need PipelineResources which will be given to
Tasks as inputs and outputs.
Your Pipeline must declare the PipelineResources it needs in a resources
section in the spec, giving each a name which will be used to refer to these
PipelineResources in the Tasks.
For example:
```yaml
spec:
  resources:
    - name: my-repo
      type: git
    - name: my-image
      type: image
```

Pipelines can declare input parameters that must be supplied to the Pipeline
during a PipelineRun. Pipeline parameters can be used to replace template
values in PipelineTask parameters' values.
Parameter names are limited to alpha-numeric characters, `-` and `_`, and can
only start with alpha characters and `_`. For example, `fooIs-Bar_` is a valid
parameter name, while `barIsBa$` or `0banana` are not.
Each declared parameter has a `type` field, assumed to be `string` if not provided by the user. The other possible type is `array`, which is useful, for instance, when a dynamic number of string arguments need to be supplied to a task. When the actual parameter value is supplied, its parsed type is validated against the `type` field.
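For example, a `Pipeline` could declare an `array` parameter alongside a `string` one. This is a minimal sketch; the `build-flags` parameter and the flag values are illustrative, not taken from the examples below:

```yaml
spec:
  params:
    - name: context
      type: string            # default type; a plain string value
      default: /some/where/or/other
    - name: build-flags       # illustrative name
      type: array             # the supplied value must be a list of strings
# A PipelineRun would then supply the array value as a list, e.g.:
#   params:
#     - name: build-flags
#       value: ["--no-cache", "--squash"]
```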
The following example shows how Pipelines can be parameterized, and these
parameters can be passed to the Pipeline from a PipelineRun.
Input parameters in the form of `$(params.foo)` are replaced inside of the
PipelineTask parameters' values (see also
variable substitution). As with
variable substitution, the deprecated syntax
`${params.foo}` will be supported until #1170.
The following Pipeline declares an input parameter called 'context', and uses
it in the PipelineTask's parameter. The description and default fields for
a parameter are optional, and if the default field is specified and this
Pipeline is used by a PipelineRun without specifying a value for 'context',
the default value will be used.
```yaml
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: pipeline-with-parameters
spec:
  params:
    - name: context
      type: string
      description: Path to context
      default: /some/where/or/other
  tasks:
    - name: build-skaffold-web
      taskRef:
        name: build-push
      params:
        - name: pathToDockerFile
          value: Dockerfile
        - name: pathToContext
          value: "$(params.context)"
```

The following PipelineRun supplies a value for `context`:
```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: pipelinerun-with-parameters
spec:
  pipelineRef:
    name: pipeline-with-parameters
  params:
    - name: "context"
      value: "/workspace/examples/microservices/leeroy-web"
```

A Pipeline will execute a graph of Tasks (see
ordering for how to express this graph). At a minimum, this
declaration must include a reference to the Task:
```yaml
tasks:
  - name: build-the-image
    taskRef:
      name: build-push
```

Declared PipelineResources can be given to Tasks in
the Pipeline as inputs and outputs, for example:
```yaml
spec:
  tasks:
    - name: build-the-image
      taskRef:
        name: build-push
      resources:
        inputs:
          - name: workspace
            resource: my-repo
        outputs:
          - name: image
            resource: my-image
```

Parameters can also be provided:
```yaml
spec:
  tasks:
    - name: build-skaffold-web
      taskRef:
        name: build-push
      params:
        - name: pathToDockerFile
          value: Dockerfile
        - name: pathToContext
          value: /workspace/examples/microservices/leeroy-web
```

Sometimes you will have Pipeline Tasks that need to take as
input the output of a previous Task, for example, an image built by a previous
Task.
Express this dependency by adding from on PipelineResources
that your Tasks need.
- The (optional) `from` key on an `input source` defines a set of previous `PipelineTasks` (i.e. the named instance of a `Task`) in the `Pipeline`
- When the `from` key is specified on an input source, the version of the resource that is from the defined list of tasks is used
- `from` can support fan in and fan out
- The `from` clause expresses ordering, i.e. the Pipeline Task which provides the `PipelineResource` must run before the Pipeline Task which needs that `PipelineResource` as an input
- The name of the `PipelineResource` must correspond to a `PipelineResource` from the `Task` that the referenced `PipelineTask` gives as an output
For example see this Pipeline spec:
```yaml
- name: build-app
  taskRef:
    name: build-push
  resources:
    outputs:
      - name: image
        resource: my-image
- name: deploy-app
  taskRef:
    name: deploy-kubectl
  resources:
    inputs:
      - name: image
        resource: my-image
        from:
          - build-app
```

The resource `my-image` is expected to be given to the `deploy-app` Task from
the build-app Task. This means that the PipelineResource my-image must
also be declared as an output of build-app.
This also means that the build-app Pipeline Task will run before deploy-app,
regardless of the order they appear in the spec.
Sometimes you will need to have Pipeline Tasks that need to
run in a certain order, but they do not have an explicit
output to input dependency (which is
expressed via from). In this case you can use runAfter to indicate
that a Pipeline Task should be run after one or more previous Pipeline Tasks.
For example see this Pipeline spec:
```yaml
- name: test-app
  taskRef:
    name: make-test
  resources:
    inputs:
      - name: workspace
        resource: my-repo
- name: build-app
  taskRef:
    name: kaniko-build
  runAfter:
    - test-app
  resources:
    inputs:
      - name: workspace
        resource: my-repo
```

In this Pipeline, we want to test the code before we build from it, but there
is no output from test-app, so build-app uses runAfter to indicate that
test-app should run before it, regardless of the order they appear in the
spec.
Sometimes you need a policy for retrying tasks which have problems such as
network errors, missing dependencies or upload problems. Any of those issues must
be reflected as False (corev1.ConditionFalse) within the TaskRun Status
Succeeded Condition. For that reason there is an optional attribute called
retries which declares how many times that task should be retried in case of
failure.
By default and in its absence there are no retries; its value is 0.
```yaml
tasks:
  - name: build-the-image
    retries: 1
    taskRef:
      name: build-push
```

In this example, the Task `build-the-image` will be executed and, if the first run fails, a second one will be triggered. If that one also fails, no further attempts are made: a maximum of two executions.
Sometimes you will need to run tasks only when some conditions are true. The `conditions` field
allows you to list a series of references to `Condition` resources that are evaluated before the task
is run. If all of the conditions evaluate to true, the task is run. If any of the conditions is false,
the task is not run and its `status.ConditionSucceeded` is set to `False` with the reason set to `ConditionCheckFailed`.
However, unlike regular task failures, condition failures do not automatically fail the entire pipeline
run: other tasks that are not dependent on the task (via `from` or `runAfter`) are still run.
```yaml
tasks:
  - name: conditional-task
    taskRef:
      name: build-push
    conditions:
      - conditionRef: my-condition
```

In this example, `my-condition` refers to a `Condition` custom resource. The `build-push`
task will only be executed if the condition evaluates to true.
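For reference, `my-condition` would be defined as its own `Condition` resource whose spec runs a check container; the condition evaluates to true if that container exits successfully. The image and command below are assumptions made for the sake of a minimal sketch:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Condition
metadata:
  name: my-condition
spec:
  check:
    image: alpine                  # assumed image; any container works
    command: ["sh", "-c"]
    args: ["test -d /workspace"]   # condition passes if the command exits 0
```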
The Pipeline Tasks in a Pipeline can be connected and run
in a graph, specifically a Directed Acyclic Graph or DAG. Each of the Pipeline
Tasks is a node, which can be connected with an edge (i.e. a Graph) such that one will run
before another (i.e. Directed), and the execution will eventually complete
(i.e. Acyclic, it will not get caught in infinite loops).
This is done using:
- `from` clauses on the `PipelineResources` needed by a `Task`
- `runAfter` clauses on the Pipeline Tasks
For example see this Pipeline spec:
```yaml
- name: lint-repo
  taskRef:
    name: pylint
  resources:
    inputs:
      - name: workspace
        resource: my-repo
- name: test-app
  taskRef:
    name: make-test
  resources:
    inputs:
      - name: workspace
        resource: my-repo
- name: build-app
  taskRef:
    name: kaniko-build-app
  runAfter:
    - test-app
  resources:
    inputs:
      - name: workspace
        resource: my-repo
    outputs:
      - name: image
        resource: my-app-image
- name: build-frontend
  taskRef:
    name: kaniko-build-frontend
  runAfter:
    - test-app
  resources:
    inputs:
      - name: workspace
        resource: my-repo
    outputs:
      - name: image
        resource: my-frontend-image
- name: deploy-all
  taskRef:
    name: deploy-kubectl
  resources:
    inputs:
      - name: my-app-image
        resource: my-app-image
        from:
          - build-app
      - name: my-frontend-image
        resource: my-frontend-image
        from:
          - build-frontend
```

This will result in the following execution graph:
```
        |            |
        v            v
     test-app    lint-repo
    /        \
   v          v
build-app  build-frontend
    \          /
     v        v
     deploy-all
```
- The `lint-repo` and `test-app` Pipeline Tasks will begin executing simultaneously. (They have no `from` or `runAfter` clauses.)
- Once `test-app` completes, both `build-app` and `build-frontend` will begin executing simultaneously (both `runAfter` `test-app`).
- When both `build-app` and `build-frontend` have completed, `deploy-all` will execute (it requires `PipelineResources` from both Pipeline Tasks).
- The entire `Pipeline` will be finished executing after `lint-repo` and `deploy-all` have completed.
For complete examples, see the examples folder.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License.