CI/CD YAML syntax reference
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
This document lists the configuration options for the GitLab `.gitlab-ci.yml` file. This file is where you define the CI/CD jobs that make up your pipeline.

- If you are already familiar with basic CI/CD concepts, try creating your own `.gitlab-ci.yml` file by following a tutorial that demonstrates a simple or complex pipeline.
- For a collection of examples, see GitLab CI/CD examples.
- To view a large `.gitlab-ci.yml` file used in an enterprise, see the `.gitlab-ci.yml` file for `gitlab`.

When you are editing your `.gitlab-ci.yml` file, you can validate it with the CI Lint tool.
Keywords
A GitLab CI/CD pipeline configuration includes:

- Global keywords that configure pipeline behavior:

  | Keyword | Description |
  |---------|-------------|
  | `default` | Custom default values for job keywords. |
  | `include` | Import configuration from other YAML files. |
  | `stages` | The names and order of the pipeline stages. |
  | `workflow` | Control what types of pipeline run. |

- Header keywords:

  | Keyword | Description |
  |---------|-------------|
  | `spec` | Define specifications for external configuration files. |

- Jobs configured with job keywords:

  | Keyword | Description |
  |---------|-------------|
  | `after_script` | Override a set of commands that are executed after the job. |
  | `allow_failure` | Allow a job to fail. A failed job does not cause the pipeline to fail. |
  | `artifacts` | List of files and directories to attach to a job on success. |
  | `before_script` | Override a set of commands that are executed before the job. |
  | `cache` | List of files that should be cached between subsequent runs. |
  | `coverage` | Code coverage settings for a given job. |
  | `dast_configuration` | Use configuration from DAST profiles on a job level. |
  | `dependencies` | Restrict which artifacts are passed to a specific job by providing a list of jobs to fetch artifacts from. |
  | `environment` | Name of an environment to which the job deploys. |
  | `extends` | Configuration entries that this job inherits from. |
  | `identity` | Authenticate with third party services using identity federation. |
  | `image` | Use Docker images. |
  | `inherit` | Select which global defaults all jobs inherit. |
  | `interruptible` | Defines if a job can be canceled when made redundant by a newer run. |
  | `manual_confirmation` | Define a custom confirmation message for a manual job. |
  | `needs` | Execute jobs earlier than the stage ordering. |
  | `pages` | Upload the result of a job to use with GitLab Pages. |
  | `parallel` | How many instances of a job should be run in parallel. |
  | `release` | Instructs the runner to generate a release object. |
  | `resource_group` | Limit job concurrency. |
  | `retry` | When and how many times a job can be auto-retried in case of a failure. |
  | `rules` | List of conditions to evaluate and determine selected attributes of a job, and whether or not it's created. |
  | `script` | Shell script that is executed by a runner. |
  | `run` | Run configuration that is executed by a runner. |
  | `secrets` | The CI/CD secrets the job needs. |
  | `services` | Use Docker services images. |
  | `stage` | Defines a job stage. |
  | `tags` | List of tags that are used to select a runner. |
  | `timeout` | Define a custom job-level timeout that takes precedence over the project-wide setting. |
  | `trigger` | Defines a downstream pipeline trigger. |
  | `when` | When to run the job. |

- CI/CD variables:

  | Keyword | Description |
  |---------|-------------|
  | Default `variables` | Define default CI/CD variables for all jobs in the pipeline. |
  | Job `variables` | Define CI/CD variables for individual jobs. |

- Deprecated keywords that are no longer recommended for use.
Global keywords
Some keywords are not defined in a job. These keywords control pipeline behavior or import additional pipeline configuration.
default
You can set global defaults for some keywords. Each default keyword is copied to every job that doesn’t already have it defined. If the job already has a keyword defined, that default is not used.
Keyword type: Global keyword.
Supported values: These keywords can have custom defaults:
- `after_script`
- `artifacts`, though due to issue 404563, the nested keyword `artifacts:expire_in` has no effect.
- `before_script`
- `cache`
- `hooks`
- `id_tokens`
- `image`
- `interruptible`
- `retry`
- `services`
- `tags`
- `timeout`, though due to issue 213634 this keyword has no effect.
Example of `default`:
default:
image: ruby:3.0
retry: 2
rspec:
script: bundle exec rspec
rspec 2.7:
image: ruby:2.7
script: bundle exec rspec
In this example:

- `image: ruby:3.0` and `retry: 2` are the default keywords for all jobs in the pipeline.
- The `rspec` job does not have `image` or `retry` defined, so it uses the defaults of `image: ruby:3.0` and `retry: 2`.
- The `rspec 2.7` job does not have `retry` defined, but it does have `image` explicitly defined. It uses the default `retry: 2`, but ignores the default `image` and uses the `image: ruby:2.7` defined in the job.
Additional details:
- Control inheritance of default keywords in jobs with `inherit:default`. A sketch follows this list.
- Global defaults are not passed to downstream pipelines, which run independently of the upstream pipeline that triggered the downstream pipeline.
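A minimal sketch of turning off `default` inheritance for a single job (the job name is a placeholder):

no-defaults-job:
  inherit:
    default: false   # ignore all globally-defined default keywords in this job
  script: echo "Runs without the default image or retry settings."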
include
Use `include` to include external YAML files in your CI/CD configuration. You can split one long `.gitlab-ci.yml` file into multiple files to increase readability, or reduce duplication of the same configuration in multiple places.

You can also store template files in a central repository and include them in projects.

The `include` files are:

- Merged with those in the `.gitlab-ci.yml` file.
- Always evaluated first and then merged with the content of the `.gitlab-ci.yml` file, regardless of the position of the `include` keyword.

The time limit to resolve all files is 30 seconds.

Keyword type: Global keyword.

Supported values: The `include` subkeys:

- `include:component`
- `include:local`
- `include:project`
- `include:remote`
- `include:template`

And optionally:

- `include:inputs`
- `include:rules`
- `include:integrity`
Additional details:
- Only certain CI/CD variables can be used with `include` keywords.
- Use merging to customize and override included CI/CD configurations with local configurations.
- You can override included configuration by having the same job name or global keyword in the `.gitlab-ci.yml` file. The two configurations are merged together, and the configuration in the `.gitlab-ci.yml` file takes precedence over the included configuration.
- If you rerun a:
  - Job, the `include` files are not fetched again. All jobs in a pipeline use the configuration fetched when the pipeline was created. Any changes to the source `include` files do not affect job reruns.
  - Pipeline, the `include` files are fetched again. If they changed after the last pipeline run, the new pipeline uses the changed configuration.
- You can have up to 150 includes per pipeline by default, including nested includes. Additionally:
  - In GitLab 16.0 and later, users on GitLab Self-Managed can change the maximum includes value.
  - In GitLab 15.10 and later, you can have up to 150 includes. In nested includes, the same file can be included multiple times, but duplicated includes count towards the limit.
  - From GitLab 14.9 to GitLab 15.9, you can have up to 100 includes. The same file can be included multiple times in nested includes, but duplicates are ignored.
include:component
Use `include:component` to add a CI/CD component to the pipeline configuration.
Keyword type: Global keyword.
Supported values: The full address of the CI/CD component, formatted as `<fully-qualified-domain-name>/<project-path>/<component-name>@<specific-version>`.

Example of `include:component`:
include:
- component: $CI_SERVER_FQDN/my-org/security-components/secret-detection@1.0
Related topics:
include:local
Use `include:local` to include a file that is in the same repository and branch as the configuration file containing the `include` keyword. Use `include:local` instead of symbolic links.
Keyword type: Global keyword.
Supported values:

A full path relative to the root directory (`/`):

- The YAML file must have the extension `.yml` or `.yaml`.
- You can use `*` and `**` wildcards in the file path.
- You can use certain CI/CD variables.

Example of `include:local`:
include:
- local: '/templates/.gitlab-ci-template.yml'
You can also use shorter syntax to define the path:
include: '.gitlab-ci-production.yml'
Additional details:

- The `.gitlab-ci.yml` file and the local file must be on the same branch.
- You can't include local files through Git submodule paths.
- `include` configuration is always evaluated based on the location of the file containing the `include` keyword, not the project running the pipeline. If a nested `include` is in a configuration file in a different project, `include: local` checks that other project for the file.
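Because `*` and `**` wildcards are supported, a sketch that includes every matching template file (the `/templates` directory is a placeholder):

include:
  - local: '/templates/**/*.yml'   # include all .yml files under /templates, recursively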
include:project
To include files from another private project on the same GitLab instance, use `include:project` and `include:file`.

Keyword type: Global keyword.

Supported values:

- `include:project`: The full GitLab project path.
- `include:file`: A full file path, or array of file paths, relative to the root directory (`/`). The YAML files must have the `.yml` or `.yaml` extension.
- `include:ref`: Optional. The ref to retrieve the file from. Defaults to the `HEAD` of the project when not specified.
- You can use certain CI/CD variables.

Example of `include:project`:
include:
- project: 'my-group/my-project'
file: '/templates/.gitlab-ci-template.yml'
- project: 'my-group/my-subgroup/my-project-2'
file:
- '/templates/.builds.yml'
- '/templates/.tests.yml'
You can also specify a `ref`:
include:
- project: 'my-group/my-project'
ref: main # Git branch
file: '/templates/.gitlab-ci-template.yml'
- project: 'my-group/my-project'
ref: v1.0.0 # Git Tag
file: '/templates/.gitlab-ci-template.yml'
- project: 'my-group/my-project'
ref: 787123b47f14b552955ca2786bc9542ae66fee5b # Git SHA
file: '/templates/.gitlab-ci-template.yml'
Additional details:

- `include` configuration is always evaluated based on the location of the file containing the `include` keyword, not the project running the pipeline. If a nested `include` is in a configuration file in a different project, `include: local` checks that other project for the file.
- When the pipeline starts, the `.gitlab-ci.yml` file configuration included by all methods is evaluated. The configuration is a snapshot in time and persists in the database. GitLab does not reflect any changes to the referenced `.gitlab-ci.yml` file configuration until the next pipeline starts.
- When you include a YAML file from another private project, the user running the pipeline must be a member of both projects and have the appropriate permissions to run pipelines. A `not found or access denied` error may be displayed if the user does not have access to any of the included files.
- Be careful when including another project's CI/CD configuration file. No pipelines or notifications trigger when CI/CD configuration files change. From a security perspective, this is similar to pulling a third-party dependency. For the `ref`, consider:
  - Using a specific SHA hash, which should be the most stable option. Use the full 40-character SHA hash to ensure the desired commit is referenced, because using a short SHA hash for the `ref` might be ambiguous.
  - Applying both protected branch and protected tag rules to the `ref` in the other project. Protected tags and branches are more likely to pass through change management before changing.
include:remote
Use `include:remote` with a full URL to include a file from a different location.

Keyword type: Global keyword.

Supported values:

A public URL accessible by an HTTP/HTTPS `GET` request:

- Authentication with the remote URL is not supported.
- The YAML file must have the extension `.yml` or `.yaml`.
- You can use certain CI/CD variables.

Example of `include:remote`:
include:
- remote: 'https://gitlab.com/example-project/-/raw/main/.gitlab-ci.yml'
Additional details:

- All nested includes are executed without context as a public user, so you can only include public projects or templates. No variables are available in the `include` section of nested includes.
- Be careful when including another project's CI/CD configuration file. No pipelines or notifications trigger when the other project's files change. From a security perspective, this is similar to pulling a third-party dependency. To verify the integrity of the included file, consider using the `integrity` keyword. If you link to another GitLab project you own, consider the use of both protected branches and protected tags to enforce change management rules.
include:template
Use `include:template` to include `.gitlab-ci.yml` templates.

Keyword type: Global keyword.

Supported values:

- All templates can be viewed in `lib/gitlab/ci/templates`. Not all templates are designed to be used with `include:template`, so check template comments before using one.
- You can use certain CI/CD variables.

Example of `include:template`:
# File sourced from the GitLab template collection
include:
- template: Auto-DevOps.gitlab-ci.yml
Multiple `include:template` files:
include:
- template: Android-Fastlane.gitlab-ci.yml
- template: Auto-DevOps.gitlab-ci.yml
Additional details:

- All nested includes are executed without context as a public user, so you can only include public projects or templates. No variables are available in the `include` section of nested includes.
include:inputs
Use `include:inputs` to set the values for input parameters when the included configuration uses `spec:inputs` and is added to the pipeline.

Keyword type: Global keyword.

Supported values: A string, numeric value, or boolean.

Example of `include:inputs`:
include:
- local: 'custom_configuration.yml'
inputs:
website: "My website"
In this example:

- The configuration contained in `custom_configuration.yml` is added to the pipeline, with a `website` input set to a value of `My website` for the included configuration.

Additional details:

- If the included configuration file uses `spec:inputs:type`, the input value must match the defined type.
- If the included configuration file uses `spec:inputs:options`, the input value must match one of the listed options.
Related topics:
include:rules
You can use `rules` with `include` to conditionally include other configuration files.

Keyword type: Global keyword.

Supported values: These `rules` subkeys:

- `rules:if`
- `rules:changes`
- `rules:exists`

Some CI/CD variables are supported.

Example of `include:rules`:
include:
- local: build_jobs.yml
rules:
- if: $INCLUDE_BUILDS == "true"
test-job:
stage: test
script: echo "This is a test job"
In this example, if the `INCLUDE_BUILDS` variable is:

- `true`, the `build_jobs.yml` configuration is included in the pipeline.
- Not `true` or does not exist, the `build_jobs.yml` configuration is not included in the pipeline.
Related topics:

- Examples of using `include` with `rules`.
include:integrity
Use `integrity` with `include:remote` to specify a SHA256 hash of the included remote file. If `integrity` does not match the actual content, the remote file is not processed and the pipeline fails.

Keyword type: Global keyword.

Supported values: Base64-encoded SHA256 hash of the included content.

Example of `include:integrity`:
include:
- remote: 'https://gitlab.com/example-project/-/raw/main/.gitlab-ci.yml'
integrity: 'sha256-L3/GAoKaw0Arw6hDCKeKQlV1QPEgHYxGBHsH4zG1IY8='
stages
Use `stages` to define stages that contain groups of jobs. Use `stage` in a job to configure the job to run in a specific stage.

If `stages` is not defined in the `.gitlab-ci.yml` file, the default pipeline stages are:

- `.pre`
- `build`
- `test`
- `deploy`
- `.post`

The order of the items in `stages` defines the execution order for jobs:

- Jobs in the same stage run in parallel.
- Jobs in the next stage run after the jobs from the previous stage complete successfully.

If a pipeline contains only jobs in the `.pre` or `.post` stages, it does not run. There must be at least one other job in a different stage.
Keyword type: Global keyword.
Example of `stages`:
stages:
- build
- test
- deploy
In this example:

- All jobs in `build` execute in parallel.
- If all jobs in `build` succeed, the `test` jobs execute in parallel.
- If all jobs in `test` succeed, the `deploy` jobs execute in parallel.
- If all jobs in `deploy` succeed, the pipeline is marked as `passed`.

If any job fails, the pipeline is marked as `failed` and jobs in later stages do not start. Jobs in the current stage are not stopped and continue to run.

Additional details:

- If a job does not specify a `stage`, the job is assigned the `test` stage.
- If a stage is defined but no jobs use it, the stage is not visible in the pipeline, which can help compliance pipeline configurations:
  - Stages can be defined in the compliance configuration but remain hidden if not used.
  - The defined stages become visible when developers use them in job definitions.
Related topics:
- To make a job start earlier and ignore the stage order, use the `needs` keyword.
workflow
Use `workflow` to control pipeline behavior.

You can use some predefined CI/CD variables in `workflow` configuration, but not variables that are only defined when jobs start.
Related topics:
workflow:auto_cancel:on_new_commit
Use `workflow:auto_cancel:on_new_commit` to configure the behavior of the auto-cancel redundant pipelines feature.

Supported values:

- `conservative`: Cancel the pipeline, but only if no jobs with `interruptible: false` have started yet. Default when not defined.
- `interruptible`: Cancel only jobs with `interruptible: true`.
- `none`: Do not auto-cancel any jobs.

Example of `workflow:auto_cancel:on_new_commit`:
workflow:
auto_cancel:
on_new_commit: interruptible
job1:
interruptible: true
script: sleep 60
job2:
interruptible: false # Default when not defined.
script: sleep 60
In this example:

- When a new commit is pushed to a branch, GitLab creates a new pipeline and `job1` and `job2` start.
- If a new commit is pushed to the branch before the jobs complete, only `job1` is canceled.
workflow:auto_cancel:on_job_failure
Use `workflow:auto_cancel:on_job_failure` to configure which jobs should be canceled as soon as one job fails.

Supported values:

- `all`: Cancel the pipeline and all running jobs as soon as one job fails.
- `none`: Do not auto-cancel any jobs.

Example of `workflow:auto_cancel:on_job_failure`:
stages: [stage_a, stage_b]
workflow:
auto_cancel:
on_job_failure: all
job1:
stage: stage_a
script: sleep 60
job2:
stage: stage_a
script:
- sleep 30
- exit 1
job3:
stage: stage_b
script:
- sleep 30
In this example, if `job2` fails, `job1` is canceled if it is still running and `job3` does not start.
Related topics:
workflow:name
You can use `name` in `workflow:` to define a name for pipelines. All pipelines are assigned the defined name. Any leading or trailing spaces in the name are removed.

Supported values:

- A string.
- CI/CD variables.
- A combination of both.

Examples of `workflow:name`:

A simple pipeline name with a predefined variable:
workflow:
name: 'Pipeline for branch: $CI_COMMIT_BRANCH'
A configuration with different pipeline names depending on the pipeline conditions:
variables:
PROJECT1_PIPELINE_NAME: 'Default pipeline name' # A default is not required
workflow:
name: '$PROJECT1_PIPELINE_NAME'
rules:
- if: '$CI_MERGE_REQUEST_LABELS =~ /pipeline:run-in-ruby3/'
variables:
PROJECT1_PIPELINE_NAME: 'Ruby 3 pipeline'
- if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
variables:
PROJECT1_PIPELINE_NAME: 'MR pipeline: $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME'
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH # For default branch pipelines, use the default name
Additional details:

- If the name is an empty string, the pipeline is not assigned a name. A name consisting of only CI/CD variables could evaluate to an empty string if all the variables are also empty.
- `workflow:rules:variables` become default variables available in all jobs, including `trigger` jobs which forward variables to downstream pipelines by default. If the downstream pipeline uses the same variable, the variable is overwritten by the upstream variable value. Be sure to either:
  - Use a unique variable name in every project's pipeline configuration, like `PROJECT1_PIPELINE_NAME`.
  - Use `inherit:variables` in the trigger job and list the exact variables you want to forward to the downstream pipeline.
workflow:rules
The `rules` keyword in `workflow` is similar to `rules` defined in jobs, but controls whether or not a whole pipeline is created. When no rules evaluate to true, the pipeline does not run.

Supported values: You can use some of the same keywords as job-level `rules`:

- `rules: if`.
- `rules: changes`.
- `rules: exists`.
- `when`, can only be `always` or `never` when used with `workflow`.
- `variables`.

Example of `workflow:rules`:
workflow:
rules:
- if: $CI_COMMIT_TITLE =~ /-draft$/
when: never
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
In this example, pipelines run if the commit title (first line of the commit message) does not end with `-draft` and the pipeline is for either:

- A merge request.
- The default branch.
Additional details:

- If your rules match both branch pipelines (other than the default branch) and merge request pipelines, duplicate pipelines can occur.
- `start_in`, `allow_failure`, and `needs` are not supported in `workflow:rules`, but do not cause a syntax violation. Though they have no effect, do not use them in `workflow:rules` as it could cause syntax failures in the future. See issue 436473 for more details.
Related topics:
workflow:rules:variables
You can use `variables` in `workflow:rules` to define variables for specific pipeline conditions.

When the condition matches, the variable is created and can be used by all jobs in the pipeline. If the variable is already defined at the top level as a default variable, the `workflow` variable takes precedence and overrides the default variable.

Keyword type: Global keyword.

Supported values: Variable name and value pairs:

- The name can use only numbers, letters, and underscores (`_`).
- The value must be a string.

Example of `workflow:rules:variables`:
variables:
DEPLOY_VARIABLE: "default-deploy"
workflow:
rules:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
variables:
DEPLOY_VARIABLE: "deploy-production" # Override globally-defined DEPLOY_VARIABLE
- if: $CI_COMMIT_BRANCH =~ /feature/
variables:
IS_A_FEATURE: "true" # Define a new variable.
- if: $CI_COMMIT_BRANCH # Run the pipeline in other cases
job1:
variables:
DEPLOY_VARIABLE: "job1-default-deploy"
rules:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
variables: # Override DEPLOY_VARIABLE defined
DEPLOY_VARIABLE: "job1-deploy-production" # at the job level.
- when: on_success # Run the job in other cases
script:
- echo "Run script with $DEPLOY_VARIABLE as an argument"
- echo "Run another script if $IS_A_FEATURE exists"
job2:
script:
- echo "Run script with $DEPLOY_VARIABLE as an argument"
- echo "Run another script if $IS_A_FEATURE exists"
When the branch is the default branch:

- `job1`'s `DEPLOY_VARIABLE` is `job1-deploy-production`.
- `job2`'s `DEPLOY_VARIABLE` is `deploy-production`.

When the branch is `feature`:

- `job1`'s `DEPLOY_VARIABLE` is `job1-default-deploy`, and `IS_A_FEATURE` is `true`.
- `job2`'s `DEPLOY_VARIABLE` is `default-deploy`, and `IS_A_FEATURE` is `true`.

When the branch is something else:

- `job1`'s `DEPLOY_VARIABLE` is `job1-default-deploy`.
- `job2`'s `DEPLOY_VARIABLE` is `default-deploy`.
Additional details:

- `workflow:rules:variables` become default variables available in all jobs, including `trigger` jobs which forward variables to downstream pipelines by default. If the downstream pipeline uses the same variable, the variable is overwritten by the upstream variable value. Be sure to either:
  - Use unique variable names in every project's pipeline configuration, like `PROJECT1_VARIABLE_NAME`.
  - Use `inherit:variables` in the trigger job and list the exact variables you want to forward to the downstream pipeline. A sketch follows this list.
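A minimal sketch of the `inherit:variables` approach in a trigger job (the project path and variable name are placeholders):

trigger-job:
  inherit:
    variables:
      - PROJECT1_VARIABLE_NAME     # forward only this variable to the downstream pipeline
  trigger: my-group/downstream-project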
workflow:rules:auto_cancel
Use `workflow:rules:auto_cancel` to configure the behavior of the `workflow:auto_cancel:on_new_commit` or the `workflow:auto_cancel:on_job_failure` features.

Supported values:

- `on_new_commit`: `workflow:auto_cancel:on_new_commit`
- `on_job_failure`: `workflow:auto_cancel:on_job_failure`

Example of `workflow:rules:auto_cancel`:
workflow:
auto_cancel:
on_new_commit: interruptible
on_job_failure: all
rules:
- if: $CI_COMMIT_REF_PROTECTED == 'true'
auto_cancel:
on_new_commit: none
on_job_failure: none
- when: always # Run the pipeline in other cases
test-job1:
script: sleep 10
interruptible: false
test-job2:
script: sleep 10
interruptible: true
In this example, `workflow:auto_cancel:on_new_commit` is set to `interruptible` and `workflow:auto_cancel:on_job_failure` is set to `all` for all jobs by default. But if a pipeline runs for a protected branch, the rule overrides the default with `on_new_commit: none` and `on_job_failure: none`. For example, if a pipeline is running for:

- A non-protected branch and a new commit is pushed, `test-job1` continues to run and `test-job2` is canceled.
- A protected branch and a new commit is pushed, both `test-job1` and `test-job2` continue to run.
Header keywords
Some keywords must be defined in a header section of a YAML configuration file. The header must be at the top of the file, separated from the rest of the configuration with `---`.

spec

Add a `spec` section to the header of a YAML file to configure the behavior of a pipeline when a configuration is added to the pipeline with the `include` keyword. Specs must be declared at the top of a configuration file, in a header section separated from the rest of the configuration with `---`.
spec:inputs
You can use `spec:inputs` to define inputs for the CI/CD configuration. Use the interpolation format `$[[ inputs.input-id ]]` to reference the values outside of the header section.

Inputs are evaluated and interpolated when the configuration is fetched during pipeline creation. When using `inputs`, interpolation completes before the configuration is merged with the contents of the `.gitlab-ci.yml` file.

Keyword type: Header keyword. `spec` must be declared at the top of the configuration file, in a header section.

Supported values: A hash of strings representing the expected inputs.

Example of `spec:inputs`:
spec:
inputs:
environment:
job-stage:
---
scan-website:
stage: $[[ inputs.job-stage ]]
script: ./scan-website $[[ inputs.environment ]]
Additional details:

- Inputs are mandatory unless you use `spec:inputs:default` to set a default value. Avoid mandatory inputs unless you only use inputs with `include:inputs`.
- Inputs expect strings unless you use `spec:inputs:type` to set a different input type.
- A string containing an interpolation block must not exceed 1 MB.
- The string inside an interpolation block must not exceed 1 KB.
- You can define input values when running a new pipeline.
Related topics:
spec:inputs:default
Inputs are mandatory when included, unless you set a default value with `spec:inputs:default`. Use `default: ''` to have no default value.

Keyword type: Header keyword. `spec` must be declared at the top of the configuration file, in a header section.

Supported values: A string representing the default value, or `''`.

Example of `spec:inputs:default`:
spec:
inputs:
website:
user:
default: 'test-user'
flags:
default: ''
title: The pipeline configuration would follow...
---
In this example:

- `website` is mandatory and must be defined.
- `user` is optional. If not defined, the value is `test-user`.
- `flags` is optional. If not defined, it has no value.

Additional details:

- The pipeline fails with a validation error when the input:
  - Uses both `default` and `options`, but the default value is not one of the listed options.
  - Uses both `default` and `regex`, but the default value does not match the regular expression.
spec:inputs:description
Use `description` to give a description to a specific input. The description does not affect the behavior of the input and is only used to help users of the file understand the input.

Keyword type: Header keyword. `spec` must be declared at the top of the configuration file, in a header section.

Supported values: A string representing the description.

Example of `spec:inputs:description`:
spec:
inputs:
flags:
description: 'Sample description of the `flags` input details.'
title: The pipeline configuration would follow...
---
spec:inputs:options
Inputs can use `options` to specify a list of allowed values for an input. The limit is 50 options per input.

Keyword type: Header keyword. `spec` must be declared at the top of the configuration file, in a header section.

Supported values: An array of input options.

Example of `spec:inputs:options`:
spec:
inputs:
environment:
options:
- development
- staging
- production
title: The pipeline configuration would follow...
---
In this example:

- `environment` is mandatory and must be defined with one of the values in the list.

Additional details:

- The pipeline fails with a validation error when the input value does not match one of the listed options.
spec:inputs:regex
Use `spec:inputs:regex` to specify a regular expression that the input must match.

Keyword type: Header keyword. `spec` must be declared at the top of the configuration file, in a header section.

Supported values: Must be a regular expression.

Example of `spec:inputs:regex`:
spec:
inputs:
version:
regex: ^v\d\.\d+(\.\d+)?$
title: The pipeline configuration would follow...
---
In this example, inputs of `v1.0` or `v1.2.3` match the regular expression and pass validation. An input of `v1.A.B` does not match the regular expression and fails validation.

Additional details:

- `inputs:regex` can only be used with a `type` of `string`, not `number` or `boolean`.
- Do not enclose the regular expression with the `/` character. For example, use `regex.*`, not `/regex.*/`.
- `inputs:regex` uses RE2 to parse regular expressions.
- Validation of the input against the regular expression happens before variable expansion. If the input text includes a variable name, the raw value of the input (the variable name) is validated, not the variable value.
spec:inputs:type
By default, inputs expect strings. Use `spec:inputs:type` to set a different required type for inputs.

Keyword type: Header keyword. `spec` must be declared at the top of the configuration file, in a header section.

Supported values: Can be one of:

- `array`, to accept an array of inputs.
- `string`, to accept string inputs (default when not defined).
- `number`, to only accept numeric inputs.
- `boolean`, to only accept `true` or `false` inputs.

Example of `spec:inputs:type`:
spec:
inputs:
job_name:
website:
type: string
port:
type: number
available:
type: boolean
array_input:
type: array
title: The pipeline configuration would follow...
---
Job keywords
The following topics explain how to use keywords to configure CI/CD pipelines.
after_script
Use `after_script` to define an array of commands to run last, after a job's `before_script` and `script` sections complete. `after_script` commands also run when:

- The job is canceled while the `before_script` or `script` sections are still running.
- The job fails with failure type of `script_failure`, but not other failure types.
Keyword type: Job keyword. You can use it only as part of a job or in the `default` section.
Supported values: An array including:
- Single line commands.
- Long commands split over multiple lines.
- YAML anchors.
CI/CD variables are supported.
Example of `after_script`:
job:
script:
- echo "An example script section."
after_script:
- echo "Execute this command after the `script` section completes."
Additional details:

Scripts you specify in `after_script` execute in a new shell, separate from any `before_script` or `script` commands. As a result, they:

- Have the current working directory set back to the default (according to the variables which define how the runner processes Git requests).
- Don't have access to changes done by commands defined in the `before_script` or `script`, including:
  - Command aliases and variables exported in `script` scripts.
  - Changes outside of the working tree (depending on the runner executor), like software installed by a `before_script` or `script` script.
- Have a separate timeout. For GitLab Runner 16.4 and later, this defaults to 5 minutes, and can be configured with the `RUNNER_AFTER_SCRIPT_TIMEOUT` variable. In GitLab 16.3 and earlier, the timeout is hard-coded to 5 minutes.
- Don't affect the job's exit code. If the `script` section succeeds and the `after_script` times out or fails, the job exits with code `0` (`Job Succeeded`).
- There is a known issue with using CI/CD job tokens with `after_script`. You can use a job token for authentication in `after_script` commands, but the token immediately becomes invalid if the job is canceled. See the related issue for more details.
For jobs that time out:

- `after_script` commands do not execute by default.
- You can configure timeout values to ensure `after_script` runs by setting appropriate `RUNNER_SCRIPT_TIMEOUT` and `RUNNER_AFTER_SCRIPT_TIMEOUT` values that don't exceed the job's timeout. A sketch follows this list.
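A minimal sketch of budgeting the two timeouts inside a job's overall timeout (the job name, commands, and timeout values are illustrative):

job-with-cleanup:
  timeout: 30m
  variables:
    RUNNER_SCRIPT_TIMEOUT: 20m         # budget for before_script and script
    RUNNER_AFTER_SCRIPT_TIMEOUT: 5m    # budget for after_script; 20m + 5m stays under the 30m job timeout
  script:
    - ./run-tests.sh                   # placeholder command
  after_script:
    - ./upload-logs.sh                 # placeholder cleanup command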
Related topics:

- Use `after_script` with `default` to define a default array of commands that should run after all jobs.
- You can configure a job to skip `after_script` commands if the job is canceled.
- You can ignore non-zero exit codes.
- Use color codes with `after_script` to make job logs easier to review.
- Create custom collapsible sections to simplify job log output.
- You can ignore errors in `after_script`.
allow_failure
Use `allow_failure` to determine whether a pipeline should continue running when a job fails.

- To let the pipeline continue running subsequent jobs, use `allow_failure: true`.
- To stop the pipeline from running subsequent jobs, use `allow_failure: false`.

When jobs are allowed to fail (`allow_failure: true`), an orange warning indicates that a job failed. However, the pipeline is successful and the associated commit is marked as passed with no warnings.

This same warning is displayed when:

- All other jobs in the stage are successful.
- All other jobs in the pipeline are successful.

The default value for `allow_failure` is:

- `true` for manual jobs.
- `false` for jobs that use `when: manual` inside `rules`.
- `false` in all other cases.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:

- `true` or `false`.

Example of `allow_failure`:
job1:
stage: test
script:
- execute_script_1
job2:
stage: test
script:
- execute_script_2
allow_failure: true
job3:
stage: deploy
script:
- deploy_to_staging
environment: staging
In this example, `job1` and `job2` run in parallel:

- If `job1` fails, jobs in the `deploy` stage do not start.
- If `job2` fails, jobs in the `deploy` stage can still start.
Additional details:

- You can use `allow_failure` as a subkey of `rules`.
- If `allow_failure: true` is set, the job is always considered successful, and later jobs with `when: on_failure` don't start if this job fails.
- You can use `allow_failure: false` with a manual job to create a blocking manual job. A blocked pipeline does not run any jobs in later stages until the manual job is started and completes successfully. A sketch follows this list.
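A minimal sketch of a blocking manual job (the job name and script are placeholders):

deploy-production:
  stage: deploy
  script:
    - ./deploy.sh production   # placeholder deployment command
  when: manual
  allow_failure: false         # the pipeline blocks here until this job runs and succeeds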
allow_failure:exit_codes
Use `allow_failure:exit_codes` to control when a job should be allowed to fail. The job is `allow_failure: true` for any of the listed exit codes, and `allow_failure: false` for any other exit code.

Keyword type: Job keyword. You can use it only as part of a job.

Supported values:

- A single exit code.
- An array of exit codes.

Example of `allow_failure:exit_codes`:
test_job_1:
script:
- echo "Run a script that results in exit code 1. This job fails."
- exit 1
allow_failure:
exit_codes: 137
test_job_2:
script:
- echo "Run a script that results in exit code 137. This job is allowed to fail."
- exit 137
allow_failure:
exit_codes:
- 137
- 255
artifacts
Use `artifacts` to specify which files to save as job artifacts. Job artifacts are a list of files and directories that are attached to the job when it succeeds, fails, or always.

The artifacts are sent to GitLab after the job finishes. They are available for download in the GitLab UI if the size is smaller than the maximum artifact size.

By default, jobs in later stages automatically download all the artifacts created by jobs in earlier stages. You can control artifact download behavior in jobs with `dependencies`.

When using the `needs` keyword, jobs can only download artifacts from the jobs defined in the `needs` configuration.

Job artifacts are only collected for successful jobs by default, and artifacts are restored after caches.
artifacts:paths
Paths are relative to the project directory (`$CI_PROJECT_DIR`) and can't directly link outside it.

Keyword type: Job keyword. You can use it only as part of a job or in the `default` section.

Supported values:

- An array of file paths, relative to the project directory.
- You can use wildcards that use glob patterns and `doublestar.Glob` patterns.
- For GitLab Pages jobs:
  - In GitLab 17.10 and later, the `pages.publish` path is automatically appended to `artifacts:paths`, so you don't need to specify it again.
  - In GitLab 17.10 and later, when the `pages.publish` path is not specified, the `public` directory is automatically appended to `artifacts:paths`.

CI/CD variables are supported.

Example of `artifacts:paths`:
job:
artifacts:
paths:
- binaries/
- .config
This example creates an artifact with `.config` and all the files in the `binaries` directory.

Additional details:

- If not used with `artifacts:name`, the artifacts file is named `artifacts`, which becomes `artifacts.zip` when downloaded.
Related topics:
- To restrict which jobs a specific job fetches artifacts from, see `dependencies`.
- Create job artifacts.
artifacts:exclude
Use `artifacts:exclude` to prevent files from being added to an artifacts archive.

Keyword type: Job keyword. You can use it only as part of a job or in the `default` section.

Supported values:

- An array of file paths, relative to the project directory.
- You can use wildcards that use glob or `doublestar.PathMatch` patterns.

Example of `artifacts:exclude`:
artifacts:
paths:
- binaries/
exclude:
- binaries/**/*.o
This example stores all files in `binaries/`, but not `*.o` files located in subdirectories of `binaries/`.

Additional details:

- `artifacts:exclude` paths are not searched recursively.
- Files matched by `artifacts:untracked` can be excluded using `artifacts:exclude` too.
Related topics:
artifacts:expire_in
Use `expire_in` to specify how long job artifacts are stored before they expire and are deleted. The `expire_in` setting does not affect:

- Artifacts from the latest job, unless keeping the latest job artifacts is disabled at the project level or instance-wide.

After their expiry, artifacts are deleted hourly by default (using a cron job), and are not accessible anymore.

Keyword type: Job keyword. You can use it only as part of a job or in the `default` section.

Supported values: The expiry time. If no unit is provided, the time is in seconds. Valid values include:

- `'42'`
- `42 seconds`
- `3 mins 4 sec`
- `2 hrs 20 min`
- `2h20min`
- `6 mos 1 day`
- `47 yrs 6 mos and 4d`
- `3 weeks and 2 days`
- `never`

Example of `artifacts:expire_in`:
job:
artifacts:
expire_in: 1 week
Additional details:

- The expiration time period begins when the artifact is uploaded and stored on GitLab. If the expiry time is not defined, it defaults to the instance-wide setting.
- To override the expiration date and protect artifacts from being automatically deleted:
  - Select Keep on the job page.
  - Set the value of `expire_in` to `never`.
- If the expiry time is too short, jobs in later stages of a long pipeline might try to fetch expired artifacts from earlier jobs. If the artifacts are expired, jobs that try to fetch them fail with a `could not retrieve the needed artifacts` error. Set the expiry time to be longer, or use `dependencies` in later jobs to ensure they don't try to fetch expired artifacts.
- `artifacts:expire_in` doesn't affect GitLab Pages deployments. To configure Pages deployments' expiry, use `pages.expire_in`.
artifacts:expose_as
Use the `artifacts:expose_as` keyword to expose job artifacts in the merge request UI.

Keyword type: Job keyword. You can use it only as part of a job or in the `default` section.

Supported values:

- The name to display in the merge request UI for the artifacts download link. Must be combined with `artifacts:paths`.

Example of `artifacts:expose_as`:
test:
script: ["echo 'test' > file.txt"]
artifacts:
expose_as: 'artifact 1'
paths: ['file.txt']
Additional details:

- Artifacts are saved, but do not display in the UI if the `artifacts:paths` values:
  - Use CI/CD variables.
  - Define a directory, but do not end with `/`. For example, `directory/` works with `artifacts:expose_as`, but `directory` does not.
  - Start with `./`. For example, `file` works with `artifacts:expose_as`, but `./file` does not.
- A maximum of 10 job artifacts per merge request can be exposed.
- Glob patterns are unsupported.
- If a directory is specified and there is more than one file in the directory, the link is to the job artifacts browser.
- If GitLab Pages is enabled, GitLab automatically renders the artifact when it is a single file with one of these extensions:
  - `.html` or `.htm`
  - `.txt`
  - `.json`
  - `.xml`
  - `.log`

Related topics:
artifacts:name
Use the `artifacts:name` keyword to define the name of the created artifacts archive. You can specify a unique name for every archive. If not defined, the default name is `artifacts`, which becomes `artifacts.zip` when downloaded.

Keyword type: Job keyword. You can use it only as part of a job or in the `default` section.

Supported values:

- The name of the artifacts archive. CI/CD variables are supported. Must be combined with `artifacts:paths`.

Example of `artifacts:name`:
To create an archive with a name of the current job:
job:
artifacts:
name: "job1-artifacts-file"
paths:
- binaries/
Related topics:
artifacts:public
`artifacts:public` is now superseded by `artifacts:access`, which has more options.

Use `artifacts:public` to determine whether the job artifacts should be publicly available.

When `artifacts:public` is `true` (default), the artifacts in public pipelines are available for download by anonymous, guest, and reporter users. To deny read access to artifacts in public pipelines for anonymous, guest, and reporter users, set `artifacts:public` to `false`:

Keyword type: Job keyword. You can use it only as part of a job or in the `default` section.

Supported values:

- `true` (default if not defined) or `false`.

Example of `artifacts:public`:
job:
artifacts:
public: false
artifacts:access
Use `artifacts:access` to determine who can access the job artifacts from the GitLab UI or API. This option does not prevent you from forwarding artifacts to downstream pipelines.

You cannot use `artifacts:public` and `artifacts:access` in the same job.

Keyword type: Job keyword. You can use it only as part of a job.

Supported values:

- `all` (default): Artifacts in a job in public pipelines are available for download by anyone, including anonymous, guest, and reporter users.
- `developer`: Artifacts in the job are only available for download by users with the Developer role or higher.
- `maintainer`: Artifacts in the job are only available for download by users with the Maintainer role or higher.
- `none`: Artifacts in the job are not available for download by anyone.

Example of `artifacts:access`:
job:
artifacts:
access: 'developer'
Additional details:

- `artifacts:access` affects all `artifacts:reports` too, so you can also restrict access to artifacts for reports.
artifacts:reports
Use `artifacts:reports` to collect artifacts generated by included templates in jobs.

Keyword type: Job keyword. You can use it only as part of a job or in the `default` section.

Supported values:

- See the list of available artifacts reports types.

Example of `artifacts:reports`:
rspec:
stage: test
script:
- bundle install
- rspec --format RspecJunitFormatter --out rspec.xml
artifacts:
reports:
junit: rspec.xml
Additional details:

- Combining reports in parent pipelines using artifacts from child pipelines is not supported. Track progress on adding support in the related issue.
- To be able to browse and download the report output files, include the `artifacts:paths` keyword. This uploads and stores the artifact twice.
- Artifacts created for `artifacts: reports` are always uploaded, regardless of the job results (success or failure). You can use `artifacts:expire_in` to set an expiration date for the artifacts.
artifacts:untracked
Use `artifacts:untracked` to add all Git untracked files as artifacts (along with the paths defined in `artifacts:paths`). `artifacts:untracked` ignores configuration in the repository's `.gitignore`, so matching artifacts in `.gitignore` are included.

Keyword type: Job keyword. You can use it only as part of a job or in the `default` section.

Supported values:

- `true` or `false` (default if not defined).

Example of `artifacts:untracked`:
Save all Git untracked files:
job:
artifacts:
untracked: true
Related topics:
artifacts:when
Use `artifacts:when` to upload artifacts on job failure or despite the failure.

Keyword type: Job keyword. You can use it only as part of a job or in the `default` section.

Supported values:

- `on_success` (default): Upload artifacts only when the job succeeds.
- `on_failure`: Upload artifacts only when the job fails.
- `always`: Always upload artifacts (except when jobs time out). For example, when uploading artifacts required to troubleshoot failing tests.

Example of `artifacts:when`:
job:
artifacts:
when: on_failure
Additional details:

- The artifacts created for `artifacts:reports` are always uploaded, regardless of the job results (success or failure). `artifacts:when` does not change this behavior.
before_script
Use `before_script` to define an array of commands that should run before each job's `script` commands, but after artifacts are restored.

Keyword type: Job keyword. You can use it only as part of a job or in the `default` section.

Supported values: An array including:

- Single line commands.
- Long commands split over multiple lines.
- YAML anchors.

CI/CD variables are supported.

Example of `before_script`:
job:
before_script:
- echo "Execute this command before any 'script:' commands."
script:
- echo "This command executes after the job's 'before_script' commands."
Additional details:

- Scripts you specify in `before_script` are concatenated with any scripts you specify in the main `script`. The combined scripts execute together in a single shell.
- Using `before_script` at the top level, but not in the `default` section, is deprecated.

Related topics:

- Use `before_script` with `default` to define a default array of commands that should run before the `script` commands in all jobs.
- You can ignore non-zero exit codes.
- Use color codes with `before_script` to make job logs easier to review.
- Create custom collapsible sections to simplify job log output.
cache
Use `cache` to specify a list of files and directories to cache between jobs. You can only use paths that are in the local working copy.

Caches are:

- Shared between pipelines and jobs.
- By default, not shared between protected and unprotected branches.
- Restored before artifacts.
- Limited to a maximum of four different caches.

You can disable caching for specific jobs, for example to override a default cache that is defined globally. A sketch follows below.

For more information about caches, see Caching in GitLab CI/CD.
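A minimal sketch of disabling a globally-defined default cache for one job (the job names and paths are placeholders):

default:
  cache:               # default cache inherited by all jobs
    key: shared-cache
    paths:
      - vendor/

no-cache-job:
  cache: []            # override: this job uses no cache at all
  script:
    - echo "This job does not download or upload any cache."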
cache:paths
Use the `cache:paths` keyword to choose which files or directories to cache.

Keyword type: Job keyword. You can use it only as part of a job or in the `default` section.

Supported values:

- An array of paths relative to the project directory (`$CI_PROJECT_DIR`). You can use wildcards that use glob and `doublestar.Glob` patterns.

CI/CD variables are supported.

Example of `cache:paths`:

Cache all files in `binaries` that end in `.apk` and the `.config` file:
rspec:
script:
- echo "This job uses a cache."
cache:
key: binaries-cache
paths:
- binaries/*.apk
- .config
Additional details:

- The `cache:paths` keyword includes files even if they are untracked or in your `.gitignore` file.

Related topics:

- See the common `cache` use cases for more `cache:paths` examples.
cache:key
Use the `cache:key` keyword to give each cache a unique identifying key. All jobs that use the same cache key use the same cache, including in different pipelines.

If not set, the default key is `default`. All jobs with the `cache` keyword but no `cache:key` share the `default` cache.

Must be used with `cache: paths`, or nothing is cached.

Keyword type: Job keyword. You can use it only as part of a job or in the `default` section.

Supported values:

- A string.
- A predefined CI/CD variable.
- A combination of both.

Example of `cache:key`:
cache-job:
script:
- echo "This job uses a cache."
cache:
key: binaries-cache-$CI_COMMIT_REF_SLUG
paths:
- binaries/
Additional details:

- If you use Windows Batch to run your shell scripts you must replace `$` with `%`. For example: `key: %CI_COMMIT_REF_SLUG%`.
- The `cache:key` value can't contain:
  - The `/` character, or the equivalent URI-encoded `%2F`.
  - Only the `.` character (any number), or the equivalent URI-encoded `%2E`.
- The cache is shared between jobs, so if you're using different paths for different jobs, you should also set a different `cache:key`. Otherwise cache content can be overwritten.

Related topics:

- You can specify a fallback cache key to use if the specified `cache:key` is not found.
- You can use multiple cache keys in a single job.
- See the common `cache` use cases for more `cache:key` examples.
cache:key:files
Use `cache:key:files` to generate a new cache key when the content of the specified files changes. If the content remains unchanged, the cache key remains consistent across branches and pipelines. You can reuse caches and rebuild them less often, which speeds up subsequent pipeline runs.

Keyword type: Job keyword. You can use it only as part of a job or in the `default` section.

Supported values:

- An array of up to two file paths or patterns.

CI/CD variables are not supported.

Example of `cache:key:files`:
cache-job:
script:
- echo "This job uses a cache."
cache:
key:
files:
- Gemfile.lock
- package.json
paths:
- vendor/ruby
- node_modules
This example creates a cache for Ruby and Node.js dependencies. The cache is tied to the current versions of the `Gemfile.lock` and `package.json` files. When one of these files changes, a new cache key is computed and a new cache is created. Any future job runs that use the same `Gemfile.lock` and `package.json` with `cache:key:files` use the new cache, instead of rebuilding the dependencies.

Additional details:

- The cache `key` is a SHA computed from the content of the listed files. If a file doesn't exist, it's ignored in the key calculation. If none of the specified files exist, the fallback key is `default`.
- Wildcard patterns like `**/package.json` can be used. An issue exists to increase the number of paths or patterns allowed for a cache key.
cache:key:files_commits
Use `cache:key:files_commits` to generate a new cache key when the latest commit changes for the specified files. `cache:key:files_commits` cache keys change whenever the specified files have a new commit, even if the file content remains identical.

Keyword type: Job keyword. You can use it only as part of a job or in the `default` section.

Supported values:

- An array of up to two file paths or patterns.

Example of `cache:key:files_commits`:
cache-job:
script:
- echo "This job uses a commit-based cache."
cache:
key:
files_commits:
- package.json
- yarn.lock
paths:
- node_modules
This example creates a cache based on the commit history of `package.json` and `yarn.lock`. If the commit history changes for these files, a new cache key is computed and a new cache is created.

Additional details:

- The cache `key` is a SHA computed from the most recent commit for each specified file.
- If a file doesn't exist, it's ignored in the key calculation.
- If none of the specified files exist, the fallback key is `default`.
- Cannot be used together with `cache:key:files` in the same cache configuration.
cache:key:prefix
Use `cache:key:prefix` to combine a prefix with the SHA computed for `cache:key:files`.

Keyword type: Job keyword. You can use it only as part of a job or in the `default` section.

Supported values:

- A string.
- A predefined CI/CD variable.
- A combination of both.

Example of `cache:key:prefix`:
rspec:
script:
- echo "This rspec job uses a cache."
cache:
key:
files:
- Gemfile.lock
prefix: $CI_JOB_NAME
paths:
- vendor/ruby
For example, adding a `prefix` of `$CI_JOB_NAME` causes the key to look like `rspec-feef9576d21ee9b6a32e30c5c79d0a0ceb68d1e5`. If a branch changes `Gemfile.lock`, that branch has a new SHA checksum for `cache:key:files`. A new cache key is generated, and a new cache is created for that key. If `Gemfile.lock` is not found, the prefix is added to `default`, so the key in the example would be `rspec-default`.

Additional details:

- If no file in `cache:key:files` is changed in any commits, the prefix is added to the `default` key.
cache:untracked
Use `untracked: true` to cache all files that are untracked in your Git repository. Untracked files include files that are:

- Ignored due to `.gitignore` configuration.
- Created, but not added to the checkout with `git add`.

Caching untracked files can create unexpectedly large caches if the job downloads:

- Dependencies, like gems or node modules, which are usually untracked.
- Artifacts from a different job. Files extracted from the artifacts are untracked by default.

Keyword type: Job keyword. You can use it only as part of a job or in the `default` section.

Supported values:

- `true` or `false` (default).

Example of `cache:untracked`:
rspec:
script: test
cache:
untracked: true
Additional details:

- You can combine `cache:untracked` with `cache:paths` to cache all untracked files, as well as files in the configured paths. Use `cache:paths` to cache any specific files, including tracked files, or files that are outside of the working directory, and use `cache: untracked` to also cache all untracked files. For example:

  rspec:
    script: test
    cache:
      untracked: true
      paths:
        - binaries/

  In this example, the job caches all untracked files in the repository, as well as all the files in `binaries/`. If there are untracked files in `binaries/`, they are covered by both keywords.
cache:unprotect
Use `cache:unprotect` to set a cache to be shared between protected and unprotected branches.

When set to `true`, users without access to protected branches can read and write to cache keys used by protected branches.

Keyword type: Job keyword. You can use it only as part of a job or in the `default` section.

Supported values:

- `true` or `false` (default).

Example of `cache:unprotect`:
rspec:
script: test
cache:
unprotect: true
cache:when
Use `cache:when` to define when to save the cache, based on the status of the job. Must be used with `cache: paths`, or nothing is cached.

Keyword type: Job keyword. You can use it only as part of a job or in the `default` section.

Supported values:

- `on_success` (default): Save the cache only when the job succeeds.
- `on_failure`: Save the cache only when the job fails.
- `always`: Always save the cache.

Example of `cache:when`:
rspec:
script: rspec
cache:
paths:
- rspec/
when: 'always'
This example stores the cache whether or not the job fails or succeeds.
cache:policy
To change the upload and download behavior of a cache, use the `cache:policy` keyword. By default, the job downloads the cache when the job starts, and uploads changes to the cache when the job ends. This caching style is the `pull-push` policy (default).

To set a job to only download the cache when the job starts, but never upload changes when the job finishes, use `cache:policy:pull`.

To set a job to only upload a cache when the job finishes, but never download the cache when the job starts, use `cache:policy:push`.

Use the `pull` policy when you have many jobs executing in parallel that use the same cache. This policy speeds up job execution and reduces load on the cache server. You can use a job with the `push` policy to build the cache.

Must be used with `cache: paths`, or nothing is cached.

Keyword type: Job keyword. You can use it only as part of a job or in the `default` section.

Supported values:

- `pull`
- `push`
- `pull-push` (default)
- CI/CD variables.

Example of `cache:policy`:
prepare-dependencies-job:
stage: build
cache:
key: gems
paths:
- vendor/bundle
policy: push
script:
- echo "This job only downloads dependencies and builds the cache."
- echo "Downloading dependencies..."
faster-test-job:
stage: test
cache:
key: gems
paths:
- vendor/bundle
policy: pull
script:
- echo "This job script uses the cache, but does not update it."
- echo "Running tests..."
Related topics:
cache:fallback_keys
Use `cache:fallback_keys` to specify a list of keys to try to restore cache from if there is no cache found for the `cache:key`. Caches are retrieved in the order specified in the `fallback_keys` section.

Keyword type: Job keyword. You can use it only as part of a job or in the `default` section.

Supported values:

- An array of cache keys.

Example of `cache:fallback_keys`:
rspec:
script: rspec
cache:
key: gems-$CI_COMMIT_REF_SLUG
paths:
- rspec/
fallback_keys:
- gems
when: 'always'
coverage
Use `coverage` with a custom regular expression to configure how code coverage is extracted from the job output. The coverage is shown in the UI if at least one line in the job output matches the regular expression.

To extract the code coverage value from the match, GitLab uses this smaller regular expression: `\d+(?:\.\d+)?`.

Supported values:

- An RE2 regular expression. Must start and end with `/`. Must match the coverage number. May match surrounding text as well, so you don't need to use a regular expression character group to capture the exact number. Because it uses RE2 syntax, all groups must be non-capturing.

Example of `coverage`:
job1:
script: rspec
coverage: '/Code coverage: \d+(?:\.\d+)?/'
In this example:

- GitLab checks the job log for a match with the regular expression. A line like `Code coverage: 67.89% of lines covered` would match.
- GitLab then checks the matched fragment to find a match to the regular expression `\d+(?:\.\d+)?`. The sample regex can match a code coverage of `67.89`.
Additional details:
- You can find regex examples in Code Coverage.
- If there is more than one matched line in the job output, the last line is used (the first result of reverse search).
- If there are multiple matches in a single line, the last match is searched for the coverage number.
- If there are multiple coverage numbers found in the matched fragment, the first number is used.
- Leading zeros are removed.
- Coverage output from child pipelines is not recorded or displayed. Check the related issue for more details.
dast_configuration
- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
Use the dast_configuration
keyword to specify a site profile and scanner profile to be used in a
CI/CD configuration. Both profiles must first have been created in the project. The job’s stage must
be dast
.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: One each of `site_profile` and `scanner_profile`.

- Use `site_profile` to specify the site profile to be used in the job.
- Use `scanner_profile` to specify the scanner profile to be used in the job.
Example of dast_configuration
:
stages:
- build
- dast
include:
- template: DAST.gitlab-ci.yml
dast:
dast_configuration:
site_profile: "Example Co"
scanner_profile: "Quick Passive Test"
In this example, the dast
job extends the dast
configuration added with the include
keyword
to select a specific site profile and scanner profile.
Additional details:
- Settings contained in either a site profile or scanner profile take precedence over those contained in the DAST template.
dependencies
Use the dependencies
keyword to define a list of specific jobs to fetch artifacts
from. The specified jobs must all be in earlier stages. You can also set a job to download no artifacts at all.
When dependencies
is not defined in a job, all jobs in earlier stages are considered dependent
and the job fetches all artifacts from those jobs.
To fetch artifacts from a job in the same stage, you must use needs:artifacts
.
You should not combine dependencies
with needs
in the same job.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- The names of jobs to fetch artifacts from.
- An empty array (`[]`), to configure the job to not download any artifacts (see the second example below).
Example of dependencies
:
build osx:
stage: build
script: make build:osx
artifacts:
paths:
- binaries/
build linux:
stage: build
script: make build:linux
artifacts:
paths:
- binaries/
test osx:
stage: test
script: make test:osx
dependencies:
- build osx
test linux:
stage: test
script: make test:linux
dependencies:
- build linux
deploy:
stage: deploy
script: make deploy
environment: production
In this example, two jobs have artifacts: build osx
and build linux
. When test osx
is executed,
the artifacts from build osx
are downloaded and extracted in the context of the build.
The same thing happens for test linux
and artifacts from build linux
.
The deploy
job downloads artifacts from all previous jobs because of
the stage precedence.
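To build on the supported values above, a job that should fetch no artifacts at all sets an empty array. A minimal sketch (the job name is illustrative):

```yaml
lint:
  stage: test
  script: make lint
  # An empty array means this job downloads no artifacts from earlier stages.
  dependencies: []
```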
Additional details:
- The job status does not matter. If a job fails or it’s a manual job that isn’t triggered, no error occurs.
- If the artifacts of a dependent job are expired or deleted, then the job fails.
environment
Use environment
to define the environment that a job deploys to.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: The name of the environment the job deploys to, in one of these formats:
- Plain text, including letters, digits, spaces, and these characters: `-`, `_`, `/`, `$`, `{`, `}`.
- CI/CD variables, including predefined, project, group, instance, or variables defined in the `.gitlab-ci.yml` file. You can't use variables defined in a `script` section.
Example of environment
:
deploy to production:
stage: deploy
script: git push production HEAD:main
environment: production
Additional details:
- If you specify an
environment
and no environment with that name exists, an environment is created.
environment:name
Set a name for an environment.
Common environment names are qa
, staging
, and production
, but you can use any name.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: The name of the environment the job deploys to, in one of these formats:
- Plain text, including letters, digits, spaces, and these characters: `-`, `_`, `/`, `$`, `{`, `}`.
- CI/CD variables, including predefined, project, group, instance, or variables defined in the `.gitlab-ci.yml` file. You can't use variables defined in a `script` section.
Example of environment:name
:
deploy to production:
stage: deploy
script: git push production HEAD:main
environment:
name: production
environment:url
Set a URL for an environment.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: A single URL, in one of these formats:
- Plain text, like `https://prod.example.com`.
- CI/CD variables, including predefined, project, group, instance, or variables defined in the `.gitlab-ci.yml` file. You can't use variables defined in a `script` section.
Example of environment:url
:
deploy to production:
stage: deploy
script: git push production HEAD:main
environment:
name: production
url: https://prod.example.com
Additional details:
- After the job completes, you can access the URL by selecting a button in the merge request, environment, or deployment pages.
environment:on_stop
Use the `on_stop` keyword, defined under `environment`, to close (stop) an environment. It declares a different job that runs to close the environment.
Keyword type: Job keyword. You can use it only as part of a job.
Additional details:
- See
environment:action
for more details and an example.
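A minimal sketch of how a deploy job pairs with its stop job; the job names, scripts, and environment name are illustrative:

```yaml
deploy_review:
  stage: deploy
  script: make deploy-review-app
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    # Names the job that runs to close this environment.
    on_stop: stop_review

stop_review:
  stage: deploy
  script: make delete-review-app
  when: manual
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
```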
environment:action
Use the action
keyword to specify how the job interacts with the environment.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: One of the following keywords:
| Value | Description |
|-------|-------------|
| `start` | Default value. Indicates that the job starts the environment. The deployment is created after the job starts. |
| `prepare` | Indicates that the job is only preparing the environment. It does not trigger deployments. Read more about preparing environments. |
| `stop` | Indicates that the job stops an environment. Read more about stopping an environment. |
| `verify` | Indicates that the job is only verifying the environment. It does not trigger deployments. Read more about verifying environments. |
| `access` | Indicates that the job is only accessing the environment. It does not trigger deployments. Read more about accessing environments. |
Example of environment:action
:
stop_review_app:
stage: deploy
variables:
GIT_STRATEGY: none
script: make delete-app
when: manual
environment:
name: review/$CI_COMMIT_REF_SLUG
action: stop
environment:auto_stop_in
The auto_stop_in
keyword specifies the lifetime of the environment. When an environment expires, GitLab
automatically stops it.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: A period of time written in natural language. For example, these are all equivalent:

- `168 hours`
- `7 days`
- `one week`

You can also use `never` to prevent the environment from being stopped automatically.

CI/CD variables are supported.
Example of environment:auto_stop_in
:
review_app:
script: deploy-review-app
environment:
name: review/$CI_COMMIT_REF_SLUG
auto_stop_in: 1 day
When the environment for review_app
is created, the environment’s lifetime is set to 1 day
.
Every time the review app is deployed, that lifetime is also reset to 1 day
.
The auto_stop_in
keyword can be used for all environment actions except stop
.
Some actions can be used to reset the scheduled stop time for the environment. For more information, see
Access an environment for preparation or verification purposes.
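For a long-lived environment that should never be stopped automatically, a minimal sketch using the `never` value (the job name and script are illustrative):

```yaml
deploy production:
  stage: deploy
  script: make deploy-production
  environment:
    name: production
    # Prevents GitLab from scheduling an automatic stop.
    auto_stop_in: never
```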
environment:kubernetes
Use the kubernetes
keyword to configure the dashboard for Kubernetes
and GitLab-managed Kubernetes resources for an environment.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- `agent`: A string specifying the GitLab agent for Kubernetes. The format is `path/to/agent/project:agent-name`. If the agent is connected to the project running the pipeline, use `$CI_PROJECT_PATH:agent-name`.
- `dashboard:namespace`: A string representing the Kubernetes namespace where the environment is deployed. The namespace must be set together with the `agent` keyword. The top-level `namespace` keyword is deprecated.
- `dashboard:flux_resource_path`: A string representing the full path to the Flux resource, such as a `HelmRelease`. The Flux resource must be set together with the `agent` and `dashboard:namespace` keywords. The top-level `flux_resource_path` keyword is deprecated.
- `managed_resources`: A hash with the `enabled` keyword to configure the GitLab-managed Kubernetes resources for the environment.
  - `managed_resources:enabled`: A boolean value indicating whether GitLab-managed Kubernetes resources are enabled for the environment.
- `dashboard`: A hash with the `dashboard:namespace` and `dashboard:flux_resource_path` keywords to configure the dashboard for Kubernetes for the environment.
Example of environment:kubernetes
:
deploy:
stage: deploy
script: make deploy-app
environment:
name: production
kubernetes:
agent: path/to/agent/project:agent-name
dashboard:
namespace: my-namespace
flux_resource_path: helm.toolkit.fluxcd.io/v2/namespaces/flux-system/helmreleases/helm-release-resource
Example of environment:kubernetes
when disabling managed resources:
deploy:
stage: deploy
script: make deploy-app
environment:
name: production
kubernetes:
agent: path/to/agent/project:agent-name
managed_resources:
enabled: false
dashboard:
namespace: my-namespace
flux_resource_path: helm.toolkit.fluxcd.io/v2/namespaces/flux-system/helmreleases/helm-release-resource
This configuration:
- Sets up the `deploy` job to deploy to the `production` environment.
- Associates the agent named `agent-name` with the environment.
- Configures the dashboard for Kubernetes for an environment with the namespace `my-namespace` and the `flux_resource_path` set to `helm.toolkit.fluxcd.io/v2/namespaces/flux-system/helmreleases/helm-release-resource`.
Additional details:
- To use the dashboard, you must install the GitLab agent for Kubernetes and configure `user_access` for the environment's project or its parent group.
- The user running the job must be authorized to access the cluster agent. Otherwise, the dashboard ignores the `agent`, `namespace`, and `flux_resource_path` attributes.
- If you only want to set the `agent`, you do not have to set the `namespace`, and cannot set `flux_resource_path`. However, this configuration lists all namespaces in a cluster in the dashboard for Kubernetes. A sketch of this agent-only configuration follows this list.
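A minimal sketch of the agent-only configuration described in the last item above (the agent path is illustrative):

```yaml
deploy:
  stage: deploy
  script: make deploy-app
  environment:
    name: production
    kubernetes:
      # Only the agent is set: no namespace or Flux resource, so the
      # dashboard for Kubernetes lists all namespaces in the cluster.
      agent: path/to/agent/project:agent-name
```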
environment:deployment_tier
Use the deployment_tier
keyword to specify the tier of the deployment environment.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values: One of the following:
- `production`
- `staging`
- `testing`
- `development`
- `other`
Example of environment:deployment_tier
:
deploy:
script: echo
environment:
name: customer-portal
deployment_tier: production
Additional details:
- Environments created from this job definition are assigned a tier based on this value.
- Existing environments don’t have their tier updated if this value is added later. Existing environments must have their tier updated via the Environments API.
Dynamic environments
Use CI/CD variables to dynamically name environments.
For example:
deploy as review app:
stage: deploy
script: make deploy
environment:
name: review/$CI_COMMIT_REF_SLUG
url: https://$CI_ENVIRONMENT_SLUG.example.com/
The deploy as review app
job is marked as a deployment to dynamically
create the review/$CI_COMMIT_REF_SLUG
environment. $CI_COMMIT_REF_SLUG
is a CI/CD variable set by the runner. The
$CI_ENVIRONMENT_SLUG
variable is based on the environment name, but suitable
for inclusion in URLs. If the deploy as review app
job runs in a branch named
pow
, this environment would be accessible with a URL like https://review-pow.example.com/
.
The common use case is to create dynamic environments for branches and use them as review apps. You can see an example that uses review apps at https://gitlab.com/gitlab-examples/review-apps-nginx/.
extends
Use extends
to reuse configuration sections. It’s an alternative to YAML anchors
and is a little more flexible and readable.
Keyword type: Job keyword. You can use it only as part of a job.
Supported values:
- The name of another job in the pipeline.
- A list (array) of names of other jobs in the pipeline.
Example of extends
:
.tests:
stage: test
image: ruby:3.0
rspec:
extends: .tests
script: rake rspec
rubocop:
extends: .tests
script: bundle exec rubocop
In this example, the rspec
job uses the configuration from the .tests
template job.
When creating the pipeline, GitLab:
- Performs a reverse deep merge based on the keys.
- Merges the `.tests` content with the `rspec` job.
- Doesn't merge the values of the keys.
The combined configuration is equivalent to these jobs:
rspec:
stage: test
image: ruby:3.0
script: rake rspec
rubocop:
stage: test
image: ruby:3.0
script: bundle exec rubocop
Additional details:
- You can use multiple parents for `extends` (see the sketch after this list).
- The `extends` keyword supports up to eleven levels of inheritance, but you should avoid using more than three levels.
- In the previous example, `.tests` is a hidden job, but you can extend configuration from regular jobs as well.
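A minimal sketch of `extends` with multiple parents (the hidden job names are illustrative):

```yaml
.test-stage:
  stage: test

.ruby-image:
  image: ruby:3.0

rspec:
  # Both parents are merged into this job.
  extends:
    - .test-stage
    - .ruby-image
  script: rake rspec
```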
Related topics:
- Reuse configuration sections by using `extends`.
- Use `extends` to reuse configuration from included configuration files.
hooks
Use hooks
to specify lists of commands to execute on the runner
at certain stages of job execution, like before retrieving the Git repository.
Keyword type: Job keyword. You can use it only as part of a job or in the
default
section.
Supported values:
- A hash of hooks and their commands. Available hooks: `pre_get_sources_script`.
hooks:pre_get_sources_script
Use hooks:pre_get_sources_script
to specify a list of commands to execute on the runner
before cloning the Git repository and any submodules.
For example, you can use it to:
- Adjust the Git configuration.
- Export tracing variables.
Supported values: An array including:
- Single line commands.
- Long commands split over multiple lines.
- YAML anchors.
CI/CD variables are supported.
Example of hooks:pre_get_sources_script
:
job1:
hooks:
pre_get_sources_script:
- echo 'hello job1 pre_get_sources_script'
script: echo 'hello job1 script'
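For instance, to adjust the Git configuration before the repository is cloned, a sketch along these lines could be used (the specific setting is illustrative):

```yaml
job_with_git_config:
  hooks:
    pre_get_sources_script:
      # Runs on the runner before the repository and submodules are
      # fetched; the config key shown here is only an example.
      - git config --global http.postBuffer 524288000
  script: make build
```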
identity
- Tier: Free, Premium, Ultimate
- Offering: GitLab.com
- Status: Beta
This feature is in beta.
Use identity
to authenticate with third party services using identity federation.
Keyword type: Job keyword. You can use it only as part of a job or in the default:
section.
Supported values: An identifier. Supported providers:

- `google_cloud`: Google Cloud. Must be configured with the Google Cloud IAM integration.
Example of identity
:
job_with_workload_identity:
identity: google_cloud
script:
- gcloud compute instances list
id_tokens
Use id_tokens
to create JSON web tokens (JWT) to authenticate with third party services. All
JWTs created this way support OIDC authentication. The required aud
sub-keyword is used to configure the aud
claim for the JWT.
Supported values:
- Token names with their `aud` claims. `aud` supports:
  - A single string.
  - An array of strings.
  - CI/CD variables.
Example of id_tokens
:
job_with_id_tokens:
id_tokens:
ID_TOKEN_1:
aud: https://vault.example.com
ID_TOKEN_2:
aud:
- https://gcp.com
- https://aws.com
SIGSTORE_ID_TOKEN:
aud: sigstore
script:
- command_to_authenticate_with_vault $ID_TOKEN_1
- command_to_authenticate_with_aws $ID_TOKEN_2
- command_to_authenticate_with_gcp $ID_TOKEN_2
image
Use image
to specify a Docker image that the job runs in.
Keyword type: Job keyword. You can use it only as part of a job or in the
default
section.
Supported values: The name of the image, including the registry path if needed, in one of these formats:
- `<image-name>` (same as using `<image-name>` with the `latest` tag)
- `<image-name>:<tag>`
- `<image-name>@<digest>`
- CI/CD variables