- Keywords
- Global keywords
- Job keywords
- Deprecated keywords
Keyword reference for the .gitlab-ci.yml file
This document lists the configuration options for your GitLab .gitlab-ci.yml file.
- For a quick introduction to GitLab CI/CD, follow the quick start guide.
- For a collection of examples, see GitLab CI/CD Examples.
- To view a large .gitlab-ci.yml file used in an enterprise, see the .gitlab-ci.yml file for gitlab.
When you are editing your .gitlab-ci.yml file, you can validate it with the
CI Lint tool.
If you are editing this page, make sure you follow the CI/CD YAML reference style guide.
Keywords
A GitLab CI/CD pipeline configuration includes:
- Global keywords that configure pipeline behavior:
  - default: Custom default values for job keywords.
  - include: Import configuration from other YAML files.
  - stages: The names and order of the pipeline stages.
  - variables: Define CI/CD variables for all jobs in the pipeline.
  - workflow: Control what types of pipeline run.
- Jobs configured with job keywords:
  - after_script: Override a set of commands that are executed after the job.
  - allow_failure: Allow a job to fail. A failed job does not cause the pipeline to fail.
  - artifacts: List of files and directories to attach to a job on success.
  - before_script: Override a set of commands that are executed before the job.
  - cache: List of files that should be cached between subsequent runs.
  - coverage: Code coverage settings for a given job.
  - dast_configuration: Use configuration from DAST profiles on a job level.
  - dependencies: Restrict which artifacts are passed to a specific job by providing a list of jobs to fetch artifacts from.
  - environment: Name of an environment to which the job deploys.
  - except: Control when jobs are not created.
  - extends: Configuration entries that this job inherits from.
  - image: Use Docker images.
  - inherit: Select which global defaults all jobs inherit.
  - interruptible: Defines if a job can be canceled when made redundant by a newer run.
  - needs: Execute jobs earlier than the stage ordering.
  - only: Control when jobs are created.
  - pages: Upload the result of a job to use with GitLab Pages.
  - parallel: How many instances of a job should be run in parallel.
  - release: Instructs the runner to generate a release object.
  - resource_group: Limit job concurrency.
  - retry: When and how many times a job can be auto-retried in case of a failure.
  - rules: List of conditions to evaluate and determine selected attributes of a job, and whether or not it's created.
  - script: Shell script that is executed by a runner.
  - secrets: The CI/CD secrets the job needs.
  - services: Use Docker services images.
  - stage: Defines a job stage.
  - tags: List of tags that are used to select a runner.
  - timeout: Define a custom job-level timeout that takes precedence over the project-wide setting.
  - trigger: Defines a downstream pipeline trigger.
  - variables: Define job variables on a job level.
  - when: When to run the job.
Global keywords
Some keywords are not defined in a job. These keywords control pipeline behavior or import additional pipeline configuration.
default
You can set global defaults for some keywords. Jobs that do not define one or more
of the listed keywords use the value defined in the default section.
Keyword type: Global keyword.
Possible inputs: These keywords can have custom defaults:
Example of default:
default:
image: ruby:3.0
rspec:
script: bundle exec rspec
rspec 2.7:
image: ruby:2.7
script: bundle exec rspec
In this example, ruby:3.0 is the default image value for all jobs in the pipeline.
The rspec 2.7 job does not use the default, because it overrides the default with
a job-specific image section.
Additional details:
- When the pipeline is created, each default is copied to all jobs that don’t have that keyword defined.
- If a job already has one of the keywords configured, the configuration in the job takes precedence and is not replaced by the default.
- Control inheritance of default keywords in jobs with inherit:default.
include
Use include to include external YAML files in your CI/CD configuration. You can split one long
.gitlab-ci.yml file into multiple files to increase readability, or reduce duplication
of the same configuration in multiple places. You can also store template files in a central repository and include them in projects.
The included files are merged with the configuration in the .gitlab-ci.yml file.
You can nest up to 100 includes. The same file can be included multiple times in nested includes, but duplicates are ignored.
The time limit to resolve all include files is 30 seconds.
Keyword type: Global keyword.
Possible inputs: The include subkeys: local, file, remote, and template.
Additional details:
Related topics:
include:local
Use include:local to include a file that is in the same repository as the .gitlab-ci.yml file.
Keyword type: Global keyword.
Possible inputs:
Example of include:local. You can also use shorter syntax to define the path:
Additional details:
Including multiple files from the same project
To include files from another private project on the same GitLab instance,
use include:file. You can use include:file in combination with include:project only.
Keyword type: Global keyword.
Possible inputs:
Example of include:file. You can also specify a ref. You can include multiple files from the same project:
Additional details:
include:remote
Use include:remote with a full URL to include a file from a different location.
Keyword type: Global keyword.
Possible inputs:
Example of Additional details:
include:template
Use include:template to include GitLab's .gitlab-ci.yml templates.
Keyword type: Global keyword.
Possible inputs:
Example of include:template. Multiple include:template files:
Additional details:
stages
Use stages to define stages that contain groups of jobs. Use stage in a job to configure the job to run in a specific stage.
If stages is not defined in the .gitlab-ci.yml file, the default pipeline stages are build, test, and deploy.
The order of the items in stages defines the execution order for jobs.
Keyword type: Global keyword.
Example of stages. In this example:
If any job fails, the pipeline is marked as failed and jobs in later stages do not start.
Additional details:
Related topics:
workflow
Use workflow to control pipeline behavior.
Related topics:
workflow:rules
The rules keyword in workflow is similar to rules defined in jobs, but controls whether or not a whole pipeline is created.
When no rules evaluate to true, the pipeline does not run.
Possible inputs: You can use some of the same keywords as job-level rules.
Example of workflow:rules. In this example, pipelines run if the commit title (first line of the commit message) does not end with -draft and the pipeline is for a merge request or the default branch.
Additional details:
Related topics:
You can use variables in workflow:rules to define variables for specific pipeline conditions.
When the condition matches, the variable is created and can be used by all jobs
in the pipeline. If the variable is already defined at the global level, the workflow variable takes precedence and overrides the global variable.
Keyword type: Global keyword.
Possible inputs: Variable name and value pairs:
Example of workflow:rules:variables.
When the branch is the default branch:
When the branch is feature:
When the branch is something else:
The following topics explain how to use keywords to configure CI/CD pipelines.
after_script
Use after_script to define an array of commands that run after each job, including failed jobs.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs: An array including:
Example of after_script.
Additional details: Scripts you specify in after_script execute in a new shell, separate from any before_script or script commands. If a job times out or is cancelled, the after_script commands do not execute.
allow_failure
Use allow_failure to determine whether a pipeline should continue running when a job fails. When jobs are allowed to fail (allow_failure: true), an orange warning indicates that a job failed. However, the pipeline is successful and the associated commit is marked as passed with no warnings. This same warning is displayed when:
The default value for allow_failure is true for manual jobs, false for manual jobs that also use rules, and false in all other cases.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of allow_failure. In this example, if job2 fails, the pipeline still succeeds because the job is allowed to fail.
Additional details:
allow_failure:exit_codes
Use allow_failure:exit_codes to control when a job should be allowed to fail.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of allow_failure:exit_codes.
artifacts
Use artifacts to specify which files to save as job artifacts. The artifacts are sent to GitLab after the job finishes. They are
available for download in the GitLab UI if the size is smaller than
the maximum artifact size.
By default, jobs in later stages automatically download all the artifacts created
by jobs in earlier stages. You can control artifact download behavior in jobs with
dependencies. When using the needs keyword, jobs can only download artifacts from the jobs defined in the needs configuration.
Job artifacts are only collected for successful jobs by default, and
artifacts are restored after caches.
artifacts:exclude
Use artifacts:exclude to prevent files from being added to an artifacts archive.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
Example of artifacts:exclude. This example stores all files in binaries/, but not *.o files located in subdirectories of binaries/.
Additional details:
Related topics:
artifacts:expire_in
Use expire_in to specify how long job artifacts are stored before they expire and are deleted.
After their expiry, artifacts are deleted hourly by default (using a cron job), and are not
accessible anymore.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs: The expiry time. If no unit is provided, the time is in seconds.
Valid values include:
Example of Additional details:
artifacts:expose_as
Use the artifacts:expose_as keyword to expose job artifacts in the merge request UI.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
Example of Additional details:
Related topics:
artifacts:name
Use the artifacts:name keyword to define the name of the created artifacts archive. If not defined, the default name is artifacts, which becomes artifacts.zip when downloaded.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
Example of To create an archive with a name of the current job:
Related topics:
artifacts:paths
Use artifacts:paths to choose which files or directories become job artifacts.
Paths are relative to the project directory ($CI_PROJECT_DIR) and can't directly link outside it.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
Example of artifacts:paths. This example creates an artifact with .config and all the files in the binaries directory.
Additional details:
Related topics:
artifacts:public
Use artifacts:public to determine whether the job artifacts should be publicly available.
When artifacts:public is true (default), the artifacts in public pipelines are available for download by anonymous and guest users.
To deny read access for anonymous and guest users to artifacts in public
pipelines, set artifacts:public to false.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
Example of artifacts:public.
artifacts:reports
Use artifacts:reports to collect test reports, code quality reports, and security reports generated by included templates in jobs.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
Example of Additional details:
artifacts:untracked
Use artifacts:untracked to add all Git untracked files as artifacts (along with the paths defined in artifacts:paths).
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
Example of Save all Git untracked files:
Related topics:
artifacts:when
Use artifacts:when to upload artifacts on job failure or despite the failure.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
Example of artifacts:when.
before_script
Use before_script to define an array of commands that should run before each job's script commands, but after artifacts are restored.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs: An array including:
Example of Additional details:
Related topics:
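A minimal sketch of how before_script interacts with script; the job name, image defaults, and commands here are illustrative, not from the original:

```yaml
default:
  before_script:
    - echo "Runs before the script commands of every job that doesn't override it."

job:
  before_script:
    - echo "Overrides the default before_script for this job only."
  script:
    - echo "Main commands run after before_script completes, in the same shell."
```

Because before_script and script run in the same shell, variables exported in before_script are available to the script commands.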
cache
Use cache to specify a list of files and directories to cache between jobs.
Caching is shared between pipelines and jobs. Caches are restored before artifacts.
Learn more about caches in Caching in GitLab CI/CD.
cache:paths
Use the cache:paths keyword to choose which files or directories to cache.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
Example of cache:paths. Cache all files in binaries that end in .apk, and the .config file:
Related topics:
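A hedged sketch of cache:paths combined with a key; the job, install command, and paths are illustrative:

```yaml
rspec:
  script:
    - bundle install --path vendor/ruby   # illustrative install command
    - bundle exec rspec
  cache:
    key: ruby-cache      # jobs that use the same key share this cache
    paths:
      - vendor/ruby/     # cache the installed gems between pipeline runs
```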
cache:key
Use the cache:key keyword to give each cache a unique identifying key. All jobs that use the same cache key use the same cache, including in different pipelines.
If not set, the default key is default.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
Example of Additional details:
The Related topics:
cache:key:files
Use the cache:key:files keyword to generate a new key when one or two specific files change.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
Example of This example creates a cache for Ruby and Node.js dependencies. The cache
is tied to the current versions of the Additional details:
cache:key:prefix
Use cache:key:prefix to combine a prefix with the SHA computed for cache:key:files.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
Example of For example, adding a Additional details:
cache:untracked
Use untracked: true to cache all files that are untracked in your Git repository.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
Example of Additional details:
You can combine
cache:when
Use cache:when to define when to save the cache, based on the status of the job.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
Example of This example stores the cache whether or not the job fails or succeeds.
cache:policy
To change the upload and download behavior of a cache, use the cache:policy keyword.
To set a job to only download the cache when the job starts, but never upload changes
when the job finishes, set cache:policy to pull.
To set a job to only upload a cache when the job finishes, but never download the
cache when the job starts, set cache:policy to push.
Use the default pull-push policy to download the cache when the job starts and upload changes when the job ends.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
Example of Use To extract the code coverage value from the match, GitLab uses
this smaller regular expression: Possible inputs:
Example of In this example:
Additional details:
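As a sketch, a job whose output is assumed to print a line like "Code coverage: 87.5" could extract that value with a coverage regular expression; the job and command are illustrative:

```yaml
test:
  script:
    - rspec   # assumed to print a line such as "Code coverage: 87.5"
  coverage: '/Code coverage: \d+\.\d+/'
```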
dast_configuration
Use the dast_configuration keyword to specify site profile and scanner profile configurations on a job level.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs: One each of site_profile and scanner_profile.
Example of dast_configuration.
Additional details:
Related topics:
dependencies
Use the dependencies keyword to define a list of jobs to fetch artifacts from. You can also set a job to download no artifacts at all.
If you do not use dependencies, all artifacts from previous stages are passed to each job.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of In this example, two jobs have artifacts: The Additional details:
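A sketch of dependencies controlling artifact downloads; the job names, paths, and commands are illustrative:

```yaml
build:linux:
  stage: build
  script:
    - mkdir -p binaries && echo "app" > binaries/app   # illustrative build output
  artifacts:
    paths:
      - binaries/

test:linux:
  stage: test
  dependencies:
    - build:linux     # fetch artifacts only from build:linux
  script:
    - test -f binaries/app

deploy:
  stage: deploy
  dependencies: []    # download no artifacts at all
  script:
    - echo "Deploy without artifacts"
```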
environment
Use environment to define the environment that a job deploys to.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs: The name of the environment the job deploys to, in one of these
formats:
Example of Additional details:
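A sketch of a deploy job with an environment; the script and URL are illustrative:

```yaml
deploy-staging:
  stage: deploy
  script:
    - ./deploy-to-staging.sh          # illustrative deploy command
  environment:
    name: staging
    url: https://staging.example.com  # shown in the GitLab environments UI
```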
environment:name
Set a name for an environment. Common environment names are qa, staging, and production.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs: The name of the environment the job deploys to, in one of these
formats:
Example of environment:name.
environment:url
Set a URL for an environment.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs: A single URL, in one of these formats:
Example of Additional details:
environment:on_stop
Closing (stopping) environments can be achieved with the on_stop keyword defined under environment. It declares a different job that runs to close the environment.
Keyword type: Job keyword. You can use it only as part of a job.
Additional details:
environment:action
Use the action keyword to specify how the job interacts with the environment.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs: One of the following keywords:
Example of
environment:auto_stop_in
The auto_stop_in keyword specifies the lifetime of the environment. When an environment expires, GitLab automatically stops it.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs: A period of time written in natural language. For example,
these are all equivalent:
Example of When the environment for Related topics:
environment:kubernetes
Use the kubernetes keyword to configure deployments to a Kubernetes cluster that is associated with your project.
Keyword type: Job keyword. You can use it only as part of a job.
Example of This configuration sets up the Additional details:
Related topics:
environment:deployment_tier
Use the deployment_tier keyword to specify the tier of the deployment environment.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs: One of the following:
Example of Related topics:
Use CI/CD variables to dynamically name environments.
For example:
The common use case is to create dynamic environments for branches and use them
as Review Apps. You can see an example that uses Review Apps at
https://gitlab.com/gitlab-examples/review-apps-nginx/.
extends
Use extends to reuse configuration sections. It's an alternative to YAML anchors and is a little more flexible and readable.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of extends. In this example, the rspec job uses the configuration from the .tests template job. The result is this merged configuration:
Additional details:
Related topics:
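A sketch of extends with a hidden template job; the names, stage, and image are illustrative:

```yaml
.tests:            # hidden job: starts with a dot, so it never runs on its own
  stage: test
  image: ruby:3.0

rspec:
  extends: .tests  # inherits stage and image from .tests
  script: bundle exec rspec
```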
image
Use image to specify a Docker image that the job runs in.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs: The name of the image, including the registry path if needed, in one of these formats:
Example of In this example, the Related topics:
image:name
The name of the Docker image that the job runs in. Similar to image used by itself.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs: The name of the image, including the registry path if needed, in one of these formats:
Example of Related topics:
Command or script to execute as the container’s entry point.
When the Docker container is created, the entrypoint is translated to the Docker --entrypoint option.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
Example of Related topics:
inherit:default
Use inherit to control inheritance of default keywords and variables. Use inherit:default to control the inheritance of default keywords.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of Additional details:
inherit:variables
Use inherit:variables to control the inheritance of global variables keywords.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of Additional details:
interruptible
Use interruptible if a job should be canceled when a newer pipeline starts before the job completes.
This keyword is used with the automatic cancellation of redundant pipelines
feature. When enabled, a running job with interruptible: true can be cancelled when a new pipeline starts on the same branch. You can't cancel subsequent jobs after a job with interruptible: false starts.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
Example of In this example, a new pipeline causes a running pipeline to be:
Additional details:
needs
Use needs to execute jobs out-of-order. Relationships between jobs that use needs can be visualized as a directed acyclic graph.
You can ignore stage ordering and run some jobs without waiting for others to complete.
Jobs in multiple stages can run concurrently.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of This example creates four paths of execution:
Additional details:
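A sketch of needs creating independent paths of execution; the job names and commands are illustrative:

```yaml
linux:build:
  stage: build
  script: echo "Building for Linux..."

mac:build:
  stage: build
  script: echo "Building for macOS..."

linux:test:
  stage: test
  needs: ["linux:build"]   # starts as soon as linux:build finishes
  script: echo "Testing on Linux..."

mac:test:
  stage: test
  needs: ["mac:build"]     # runs independently of the Linux path
  script: echo "Testing on macOS..."
```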
needs:artifacts
When a job uses needs, it no longer downloads all artifacts from previous stages by default, because jobs with needs can start before earlier stages complete. Use needs:artifacts to control when artifacts are downloaded in jobs that use needs.
Keyword type: Job keyword. You can use it only as part of a job. Must be used with needs:job.
Possible inputs:
Example of In this example:
Additional details:
needs:project
Use needs:project to download artifacts from up to five jobs in other pipelines.
If there is a pipeline running for the specified ref, a job with needs:project does not wait for the pipeline to complete. Instead, artifacts are downloaded from the latest successful run of the specified job.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Examples of needs:project. In GitLab 13.3 and later, you can use CI/CD variables in needs:project.
Additional details:
Related topics:
A child pipeline can download artifacts from a job in
its parent pipeline or another child pipeline in the same parent-child pipeline hierarchy.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of Parent pipeline ( Child pipeline ( In this example, the Additional details:
To need a job that sometimes does not exist in the pipeline, add optional: true to the needs configuration. If not defined, optional: false is the default.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of In this example:
You can mirror the pipeline status from an upstream pipeline to a bridge job by
using the needs:pipeline keyword. The latest pipeline status from the default branch is replicated to the bridge job.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of Additional details:
only / except
You can use only and except to control when to add jobs to pipelines.
Four keywords can be used with only and except: refs, variables, changes, and kubernetes.
See specify when jobs run with only and except for more details and examples.
Use the only:refs and except:refs keywords to control when to add jobs to a pipeline based on branch names or pipeline types.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs: An array including any number of:
The following keywords:
Example of Additional details:
If a job does not use only, except, or rules, then only is set to branches and tags by default.
Use the only:variables or except:variables keywords to control when to add jobs to a pipeline, based on the status of CI/CD variables.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of Related topics:
Use the changes keyword with only to run a job, or with except to skip a job, when a Git push event modifies a file.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs: An array including any number of:
Example of Additional details:
Related topics:
Use only:kubernetes or except:kubernetes to control if jobs are added to the pipeline when the Kubernetes service is active in the project.
Keyword type: Job-specific. You can use it only as part of a job.
Possible inputs:
Example of only:kubernetes. In this example, the job only runs when the Kubernetes service is active in the project.
pages
Use pages to define a GitLab Pages job that uploads static content to GitLab. The content is then published as a website.
Keyword type: Job name.
Example of pages. This example moves all files from the root of the project to the public/ directory.
Additional details:
You must:
parallel
Use parallel to run a job multiple times in parallel in a single pipeline.
Multiple runners must exist, or a single runner must be configured to run multiple jobs concurrently.
Parallel jobs are named sequentially from job_name 1/N to job_name N/N.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of parallel. This example creates 5 jobs that run in parallel, named test 1/5 to test 5/5.
Additional details:
Related topics:
parallel:matrix
Use parallel:matrix to run a job multiple times in parallel in a single pipeline, but with different variable values for each instance of the job.
Multiple runners must exist, or a single runner must be configured to run multiple jobs concurrently.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs: An array of hashes of variables:
Example of parallel:matrix. The example generates 10 parallel deploystacks jobs.
Related topics:
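A smaller sketch of parallel:matrix; the variable names, values, and command are illustrative:

```yaml
deploystacks:
  stage: deploy
  script:
    - bin/deploy $PROVIDER $STACK       # illustrative deploy command
  parallel:
    matrix:
      - PROVIDER: aws
        STACK: [monitoring, app1, app2] # 3 jobs for aws
      - PROVIDER: gcp
        STACK: [data, processing]       # 2 more jobs, 5 in total
```

Each generated job gets its own combination of PROVIDER and STACK values.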
release
Use release to create a release. The release job must have access to the release-cli,
which must be in the $PATH. If you use the Docker executor,
you can use an image from the GitLab Container Registry that contains the release-cli.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs: The release subkeys.
Example of release. This example creates a release:
Additional details:
All release jobs, except trigger jobs, must include the script keyword.
Related topics:
Required. The Git tag for the release.
If the tag does not exist in the project yet, it is created at the same time as the release.
New tags use the SHA associated with the pipeline.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of To create a release when a new tag is added to the project:
To create a release and a new tag at the same time, your rules or only configuration should not configure the job to run only for new tags.
release:name
The release name. If omitted, it is populated with the value of release:tag_name.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of release:name.
release:description
The long description of the release.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of release:description.
release:ref
The ref for the release, if the release tag doesn't exist yet.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
release:milestones
The title of each milestone the release is associated with.
release:released_at
The date and time when the release is ready.
Possible inputs:
Example of Additional details:
release:assets:links
Use release:assets:links to include asset links in the release. Requires a version of release-cli that supports asset links.
Example of release:assets:links:
resource_group
Use resource_group to create a resource group that ensures a job is mutually exclusive across different pipelines for the same project.
For example, if multiple jobs that belong to the same resource group are queued simultaneously,
only one of the jobs starts. The other jobs wait until the resource_group is free.
Resource groups behave similar to semaphores in other programming languages.
You can define multiple resource groups per environment. For example,
when deploying to physical devices, you might have multiple physical devices. Each device
can be deployed to, but only one deployment can occur per device at any given time.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of resource_group. In this example, two deploy jobs in two separate pipelines can never run at the same time.
Related topics:
retry
Use retry to configure how many times a job is retried if it fails. If not defined, jobs are not retried.
When a job fails, the job is processed up to two more times, until it succeeds or
reaches the maximum number of retries.
By default, all failure types cause the job to be retried. Use retry:when to select which failures to retry on.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
Example of retry.
retry:when
Use retry:when with retry:max to retry jobs for only specific failure cases.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
Example of retry:when. If there is a failure other than a runner system failure, the job is not retried.
Related topics:
You can specify the number of retry attempts for certain stages of job execution
using variables.
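A sketch combining retry:max with retry:when; the job and command are illustrative:

```yaml
test:
  script: rspec
  retry:
    max: 2                        # retry up to two times...
    when: runner_system_failure   # ...but only for runner system failures
```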
rules
Use rules to include or exclude jobs in pipelines.
Rules are evaluated when the pipeline is created, and evaluated in order
until the first match. When a match is found, the job is either included or excluded from the pipeline,
depending on the configuration.
You cannot use dotenv variables created in job scripts in rules, because rules are evaluated before any jobs run.
You can combine multiple keywords together for complex rules.
The job is added to the pipeline:
The job is not added to the pipeline:
You can use multiple rules keywords, like if, changes, and exists, in the same rule. The rule evaluates to true only when all included keywords evaluate to true.
rules:if
Use rules:if clauses to specify when to add a job to a pipeline.
Keyword type: Job-specific and pipeline-specific. You can use it as part of a job
to configure the job behavior, or with workflow to configure the pipeline behavior.
Possible inputs:
Example of Additional details:
Related topics:
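A sketch of rules with if, when, and allow_failure; the conditions follow common patterns and the job is illustrative:

```yaml
job:
  script: echo "Hello, Rules!"
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
      when: never          # never add the job to scheduled pipelines
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      when: manual         # manual job in merge request pipelines
      allow_failure: true  # pipeline continues even if the job isn't triggered
    - when: on_success     # added normally in all remaining cases
```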
rules:changes
Use rules:changes to specify when to add a job to a pipeline by checking for changes to specific files.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of Additional details:
Related topics:
rules:exists
Use exists to run a job when certain files exist in the repository.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of Additional details:
rules:allow_failure
Use allow_failure: true in rules to allow a job to fail without stopping the pipeline. You can also use allow_failure: true with a manual job.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of rules:allow_failure. If the rule matches, then the job is a manual job with allow_failure: true.
Additional details:
rules:variables
Use variables in rules to define variables for specific conditions.
Keyword type: Job-specific. You can use it only as part of a job.
Possible inputs:
Example of rules:variables.
script
Use script to specify commands for the runner to execute.
All jobs except trigger jobs require a script keyword.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs: An array including:
Example of Additional details:
Related topics:
secrets
Use secrets to specify CI/CD secrets to retrieve from an external secrets provider and make available in the job as CI/CD variables.
secrets:vault
Use secrets:vault to specify secrets provided by a HashiCorp Vault. This keyword must be used with secrets.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of To specify all details explicitly and use the You can shorten this syntax. With the short syntax, To specify a custom secrets engine path in the short syntax, add a suffix that starts with
secrets:file
Use secrets:file to configure the secret to be stored as either a file or variable type CI/CD variable. By default, the secret is passed to the job as a file type CI/CD variable: the value of the secret is stored in a file and the variable contains the path to that file. If your software can't use file type CI/CD variables, set file: false to store the secret value directly in the variable.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of Additional details:
services
Use services to specify an additional Docker image to run scripts in. The services image is linked to the image specified in the image keyword.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs: The name of the services image, including the registry path if needed, in one of these formats:
Example of services. In this example, the job launches a Ruby container. Then, from that container, the job launches
another container that's running PostgreSQL. Then the job runs scripts
in that container.
Related topics:
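A sketch of a job that uses a PostgreSQL service container; the image tags, variables, and commands are illustrative:

```yaml
test-job:
  image: ruby:3.0
  services:
    - postgres:14   # extra container, reachable from the job at host "postgres"
  variables:
    POSTGRES_PASSWORD: insecure-example   # consumed by the postgres service image
  script:
    - bundle exec rake db:create   # illustrative command that talks to the service
```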
stage
Use stage to define which stage a job runs in. Jobs in the same stage can execute in parallel.
If stage is not defined, the job uses the test stage by default.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs: An array including any number of stage names. Stage names can be:
Example of Additional details:
stage: .pre
Use the .pre stage to make a job run at the start of a pipeline. .pre jobs execute before all other pipeline jobs.
You must have a job in at least one stage other than .pre and .post.
Keyword type: You can only use it with a job's stage keyword.
Example of stage: .pre:
stage: .post
Use the .post stage to make a job run at the end of a pipeline. .post jobs execute after all other pipeline jobs.
You must have a job in at least one stage other than .pre and .post.
Keyword type: You can only use it with a job's stage keyword.
Example of stage: .post.
tags
Use tags to select a specific runner from the list of all runners that are available for the project. When you register a runner, you can specify the runner's tags, for
example ruby, postgres, or development.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
Example of tags. In this example, only runners with both the ruby and postgres tags can run the job.
Additional details:
Related topics:
timeout
Use timeout to configure a timeout for a specific job. If not defined, the project-level timeout is used.
The job-level timeout can be longer than the project-level timeout,
but can't be longer than the runner's timeout.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs: A period of time written in natural language. For example, these are all equivalent:
Example of timeout.
trigger
Use trigger to declare that a job is a trigger job that starts a downstream pipeline, either a multi-project pipeline or a child pipeline.
Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of trigger for a multi-project pipeline.
Example of trigger for a child pipeline.
Additional details:
Related topics:
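Two hedged sketches of trigger jobs; the project path and file name are illustrative:

```yaml
# Trigger a multi-project (downstream) pipeline:
staging:
  stage: deploy
  trigger: my-group/my-deployment-project   # illustrative project path

# Trigger a child pipeline from a file in the same project:
child-pipeline:
  trigger:
    include: path/to/child-pipeline.yml     # illustrative file path
```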
trigger:strategy
Use trigger:strategy to force the trigger job to wait for the downstream pipeline to complete before it is marked as success.
This behavior is different than the default, which is for the trigger job to be marked as success as soon as the downstream pipeline is created.
This setting makes your pipeline execution linear rather than parallel.
Example of trigger:strategy. In this example, jobs from subsequent stages wait for the triggered pipeline to
successfully complete before starting.
trigger:forward
Use trigger:forward to specify what to forward to the downstream pipeline.
Possible inputs:
Example of trigger:forward. Run this pipeline manually, with a CI/CD variable set, to see how forwarding behaves.
variables
CI/CD variables are configurable values that are passed to jobs.
Use variables to create custom variables. Variables are always available in script, before_script, and after_script commands.
Keyword type: Global and job keyword. You can use it at the global level,
and also at the job level.
If you define variables at the global level, each variable is copied to every job configuration when the pipeline is created.
Possible inputs: Variable name and value pairs:
Examples of variables.
Additional details:
Related topics:
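A sketch of global and job-level variables; the names and values are illustrative:

```yaml
variables:
  GLOBAL_VAR: "available in every job"

job:
  variables:
    JOB_VAR: "available only in this job"
  script:
    - echo "$GLOBAL_VAR and $JOB_VAR"
```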
variables:description
Use the description keyword to define a pipeline-level (global) variable that is prefilled when running a pipeline manually. Must be used with value, for the variable value.
Keyword type: Global keyword. You cannot set job-level variables to be pre-filled when you run a pipeline manually.
Possible inputs:
Example of Use Keyword type: Job keyword. You can use it only as part of a job.
Possible inputs:
Example of In this example, the script:
Additional details:
Related topics:
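A sketch of two common when values; the jobs, stages, and commands are illustrative:

```yaml
cleanup_job:
  stage: cleanup
  script: ./cleanup.sh   # illustrative cleanup command
  when: always           # runs regardless of earlier job failures

deploy_job:
  stage: deploy
  script: ./deploy.sh
  when: manual           # runs only when started from the UI or API
```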
The following keywords are deprecated.
Defining image, services, cache, before_script, and after_script as global keywords is deprecated. Use default instead.
include
Use include to include external YAML files in your CI/CD configuration.
You can split one long .gitlab-ci.yml file into multiple files to increase readability,
or reduce duplication of the same configuration in multiple places.
The include files are:
- Merged with those in the .gitlab-ci.yml file.
- Always evaluated first and merged with the content of the .gitlab-ci.yml file,
regardless of the position of the include keyword.
Use these include subkeys: local, file, remote, and template.
The included files are combined with the configuration in the .gitlab-ci.yml file. The two configurations are merged together, and the
configuration in the .gitlab-ci.yml file takes precedence over the included configuration.
include:local
Use include:local to include a file that is in the same repository as the .gitlab-ci.yml file.
Use include:local instead of symbolic links.
Possible inputs:
- A full path relative to the root directory (/).
- The YAML file must have the extension .yml or .yaml.
- You can use * and ** wildcards in the file path.
include:local:
include:
- local: '/templates/.gitlab-ci-template.yml'
include: '.gitlab-ci-production.yml'
The .gitlab-ci.yml file and the local file must be on the same branch.
include:file
To include files from another private project on the same GitLab instance, use include:file. You can use include:file in combination with include:project only.
Possible inputs: A full path, relative to the root directory (/). The YAML file must have the
extension .yml or .yaml.
include:file:
include:
- project: 'my-group/my-project'
file: '/templates/.gitlab-ci-template.yml'
You can also specify a ref. If you do not specify a value, the ref defaults to the HEAD of the project:
include:
- project: 'my-group/my-project'
ref: main
file: '/templates/.gitlab-ci-template.yml'
- project: 'my-group/my-project'
ref: v1.0.0
file: '/templates/.gitlab-ci-template.yml'
- project: 'my-group/my-project'
ref: 787123b47f14b552955ca2786bc9542ae66fee5b # Git SHA
file: '/templates/.gitlab-ci-template.yml'
include:
- project: 'my-group/my-project'
ref: main
file:
- '/templates/.builds.yml'
- '/templates/.tests.yml'
Additional details:
- You can have local (relative to the target project), project, remote, or template includes.
- The .gitlab-ci.yml file configuration included by all methods is evaluated.
The configuration is a snapshot in time and persists in the database. GitLab does not reflect any changes to
the referenced .gitlab-ci.yml file configuration until the next pipeline starts.
- A not found or access denied error may be displayed if the user does not have access to any of the included files.
include:remote
Use include:remote with a full URL to include a file from a different location.
The file must be publicly accessible by an HTTP/HTTPS GET request. Authentication with the
remote URL is not supported. The YAML file must have the extension .yml or .yaml.
include:remote:
include:
- remote: 'https://gitlab.com/example-project/-/raw/main/.gitlab-ci.yml'
include:template
Use include:template to include GitLab's .gitlab-ci.yml templates.
include:template:
# File sourced from the GitLab template collection
include:
- template: Auto-DevOps.gitlab-ci.yml
Multiple include:template files:
include:
- template: Android-Fastlane.gitlab-ci.yml
- template: Auto-DevOps.gitlab-ci.yml
project, remote, or template includes.
stages
Use stages to define stages that contain groups of jobs. Use stage
in a job to configure the job to run in a specific stage.
If stages is not defined in the .gitlab-ci.yml file, the default pipeline stages are:
The order of the items in stages defines the execution order for jobs:
stages:
stages:
- build
- test
- deploy
In this example:
- Jobs in build execute in parallel.
- If all jobs in build succeed, the test jobs execute in parallel.
- If all jobs in test succeed, the deploy jobs execute in parallel.
- If all jobs in deploy succeed, the pipeline is marked as passed.
If any job fails, the pipeline is marked as failed and jobs in later stages do not
start. Jobs in the current stage are not stopped and continue to run.
If a job does not specify a stage, the job is assigned the test stage.
To make a job start earlier and ignore the stage order, use the needs keyword.
workflow
Use workflow to control pipeline behavior.
workflow:rules
The rules keyword in workflow is similar to rules defined in jobs,
but controls whether or not a whole pipeline is created.
Possible inputs: You can use some of the same keywords as job-level rules:
- rules: if.
- rules: changes.
- rules: exists.
- when, can only be always or never when used with workflow.
- variables.
workflow:rules:
workflow:
rules:
- if: $CI_COMMIT_TITLE =~ /-draft$/
when: never
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
In this example, pipelines run if the commit title (first line of the commit message) does not end with -draft
and the pipeline is for either:
- A merge request.
- The default branch.
Related topics:
- workflow:rules templates to import a preconfigured workflow: rules entry.
- Common if clauses for workflow:rules.
- Use rules to run merge request pipelines.
workflow:rules:variables
Use variables in workflow:rules to define variables for
specific pipeline conditions.
When the condition matches, the variable is created and can be used by all jobs in the pipeline. If the variable is already defined at the global level, the workflow
variable takes precedence and overrides the global variable.
The variable name can use only numbers, letters, and underscores (_).
Example of workflow:rules:variables:
variables:
DEPLOY_VARIABLE: "default-deploy"
workflow:
rules:
- if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
variables:
DEPLOY_VARIABLE: "deploy-production" # Override globally-defined DEPLOY_VARIABLE
- if: $CI_COMMIT_REF_NAME =~ /feature/
variables:
IS_A_FEATURE: "true" # Define a new variable.
- when: always # Run the pipeline in other cases
job1:
variables:
DEPLOY_VARIABLE: "job1-default-deploy"
rules:
- if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
variables: # Override DEPLOY_VARIABLE defined
DEPLOY_VARIABLE: "job1-deploy-production" # at the job level.
- when: on_success # Run the job in other cases
script:
- echo "Run script with $DEPLOY_VARIABLE as an argument"
- echo "Run another script if $IS_A_FEATURE exists"
job2:
script:
- echo "Run script with $DEPLOY_VARIABLE as an argument"
- echo "Run another script if $IS_A_FEATURE exists"
When the branch is the default branch:
- job1's DEPLOY_VARIABLE is job1-deploy-production.
- job2's DEPLOY_VARIABLE is deploy-production.
When the branch is feature:
- job1's DEPLOY_VARIABLE is job1-default-deploy, and IS_A_FEATURE is true.
- job2's DEPLOY_VARIABLE is default-deploy, and IS_A_FEATURE is true.
When the branch is something else:
- job1's DEPLOY_VARIABLE is job1-default-deploy.
- job2's DEPLOY_VARIABLE is default-deploy.
Job keywords
after_script
Use after_script to define an array of commands that run after each job, including failed jobs.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Example of after_script:
job:
script:
- echo "An example script section."
after_script:
- echo "Execute this command after the `script` section completes."
Commands you specify in after_script execute in a new shell, separate from any
before_script or script commands. As a result, they:
- Have the current working directory set back to the default.
- Have no access to changes done by commands defined in the before_script or script,
including:
  - Command aliases and variables exported in script scripts.
  - Changes outside of the working tree (depending on the runner executor), like software installed by a before_script or script script.
- If the script section succeeds and the
after_script times out or fails, the job exits with code 0 (Job Succeeded).
- If a job times out or is cancelled, the after_script commands do not execute.
Related topics:
- Use after_script with default
to define a default array of commands that should run after all jobs.
- Use color codes in after_script output
to make job logs easier to review.
allow_failure
Use allow_failure to determine whether a pipeline should continue running when a job fails.
To let the pipeline continue running subsequent jobs, use allow_failure: true.
To stop the pipeline from running subsequent jobs, use allow_failure: false.
When jobs are allowed to fail (allow_failure: true), an orange warning
indicates that a job failed. However, the pipeline is successful and the associated commit
is marked as passed with no warnings.
The default value for allow_failure is:
- true for manual jobs.
- false for manual jobs that also use rules.
- false in all other cases.
Possible inputs: true or false.
Example of allow_failure:
job1:
stage: test
script:
- execute_script_1
job2:
stage: test
script:
- execute_script_2
allow_failure: true
job3:
stage: deploy
script:
- deploy_to_staging
In this example, job1 and job2 run in parallel:
- If job1 fails, jobs in the deploy stage do not start.
- If job2 fails, jobs in the deploy stage can still start.
Additional details:
- You can use allow_failure as a subkey of rules.
- You can use allow_failure: false with a manual job to create a blocking manual job.
A blocked pipeline does not run any jobs in later stages until the manual job
is started and completes successfully.
allow_failure:exit_codes
Use allow_failure:exit_codes to control when a job should be
allowed to fail. The job is allow_failure: true for any of the listed exit codes,
and allow_failure: false for any other exit code.
Example of allow_failure:exit_codes:
test_job_1:
script:
- echo "Run a script that results in exit code 1. This job fails."
- exit 1
allow_failure:
exit_codes: 137
test_job_2:
script:
- echo "Run a script that results in exit code 137. This job is allowed to fail."
- exit 137
allow_failure:
exit_codes:
- 137
- 255
artifacts
Use artifacts to specify which files to save as job artifacts.
Job artifacts are a list of files and directories that are
attached to the job when it succeeds, fails, or always.
By default, jobs in later stages automatically download artifacts created by jobs in earlier stages. You can control which artifacts are passed to a job with dependencies.
When using the needs keyword, jobs can only download
artifacts from the jobs defined in the needs configuration.
artifacts:exclude
Use artifacts:exclude to prevent files from being added to an artifacts archive.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs: An array of paths, relative to the project directory. Wildcards that use glob or doublestar.PathMatch patterns.
artifacts:exclude:
artifacts:
paths:
- binaries/
exclude:
- binaries/**/*.o
This example stores all files in binaries/, but not *.o files located in
subdirectories of binaries/.
Unlike artifacts:paths, artifacts:exclude paths are not searched recursively.
Files matched by artifacts:untracked can be excluded using
artifacts:exclude too.
artifacts:expire_in
Use expire_in to specify how long job artifacts are stored before
they expire and are deleted. The expire_in setting does not affect artifacts
from the most recent successful job, unless keeping the latest job artifacts is disabled.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs: The expiry time. If no unit is provided, the time is in seconds.
Valid values include:
'42'
42 seconds
3 mins 4 sec
2 hrs 20 min
2h20min
6 mos 1 day
47 yrs 6 mos and 4d
3 weeks and 2 days
never
Example of artifacts:expire_in:
job:
artifacts:
expire_in: 1 week
To prevent artifacts from expiring, set the value to never.
artifacts:expose_as
Use the artifacts:expose_as keyword to
expose job artifacts in the merge request UI.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs: The name to display in the merge request UI for the artifacts download link.
Must be combined with artifacts:paths.
Example of artifacts:expose_as:
test:
script: ["echo 'test' > file.txt"]
artifacts:
expose_as: 'artifact 1'
paths: ['file.txt']
If artifacts:paths uses CI/CD variables, the artifacts do not display in the UI.
Files are rendered in the browser if the artifact is a single file with one of these extensions:
.html or .htm
.txt
.json
.xml
.log
artifacts:name
Use the artifacts:name keyword to define the name of the created artifacts
archive. You can specify a unique name for every archive.
If not defined, the default name is
artifacts, which becomes artifacts.zip when downloaded.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs: The name of the artifacts archive. Can use CI/CD variables. Must be combined with
artifacts:paths.
Example of artifacts:name:
job:
artifacts:
name: "job1-artifacts-file"
paths:
- binaries/
artifacts:paths
$CI_PROJECT_DIR) and can’t directly
link outside it.
default section.
doublestar.Glob.
artifacts:paths:
job:
artifacts:
paths:
- binaries/
- .config
This example creates an artifact with .config and all the files in the binaries directory.
If there is no artifacts:name defined, the artifacts file
is named artifacts, which becomes artifacts.zip when downloaded.
To restrict which artifacts are passed to a specific job, see dependencies.
artifacts:public
non_public_artifacts. On
GitLab.com, this feature is not available.artifacts:public to determine whether the job artifacts should be
publicly available.
artifacts:public is true (default), the artifacts in
public pipelines are available for download by anonymous and guest users.
artifacts:public to false:
default section.
true (default if not defined) or false.
artifacts:paths:
job:
artifacts:
public: false
artifacts:reports
Use artifacts:reports to collect artifacts generated by
included templates in jobs.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Example of artifacts:reports:
rspec:
stage: test
script:
- bundle install
- rspec --format RspecJunitFormatter --out rspec.xml
artifacts:
reports:
junit: rspec.xml
If you also want the ability to browse the report output files, include the
artifacts:paths keyword. Note that this uploads and stores the artifact twice.
You can use artifacts:expire_in to set up an expiration
date for artifacts reports.
artifacts:untracked
Use artifacts:untracked to add all Git untracked files as artifacts (along
with the paths defined in artifacts:paths). artifacts:untracked ignores configuration
in the repository’s .gitignore, so matching artifacts in .gitignore are included.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs: true or false (default if not defined).
Example of artifacts:untracked:
job:
artifacts:
untracked: true
artifacts:when
Use artifacts:when to upload artifacts on job failure or despite the
failure.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
on_success (default): Upload artifacts only when the job succeeds.
on_failure: Upload artifacts only when the job fails.
always: Always upload artifacts (except when jobs time out). For example, when
uploading artifacts required to troubleshoot failing tests.
Example of artifacts:when:
job:
artifacts:
when: on_failure
before_script
Use before_script to define an array of commands that should run before each job’s
script commands, but after artifacts are restored.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Example of before_script:
job:
before_script:
- echo "Execute this command before any 'script:' commands."
script:
- echo "This command executes after the job's 'before_script' commands."
Scripts you specify in before_script are concatenated with any scripts you specify
in the main script. The combined scripts execute together in a single shell.
You can use before_script with default
to define a default array of commands that should run before the script commands in all jobs.
You can also write before_script commands as multiline commands
to make job logs easier to review.
cache
Use cache to specify a list of files and directories to
cache between jobs. You can only use paths that are in the local working copy.
cache:paths
Use the cache:paths keyword to choose which files or directories to cache.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs: An array of paths relative to the project directory ($CI_PROJECT_DIR).
You can use wildcards that use glob or doublestar.Glob patterns.
Example of cache:paths:
Cache all files in binaries that end in .apk and the .config file:
rspec:
script:
- echo "This job uses a cache."
cache:
key: binaries-cache
paths:
- binaries/*.apk
- .config
cache use cases for more
cache:paths examples.
cache:key
Use the cache:key keyword to give each cache a unique identifying key. All jobs
that use the same cache key use the same cache, including in different pipelines.
If not set, the default key is default. All jobs with the cache keyword but
no cache:key share the default cache.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Example of cache:key:
cache-job:
script:
- echo "This job uses a cache."
cache:
key: binaries-cache-$CI_COMMIT_REF_SLUG
paths:
- binaries/
If you use the Windows Batch shell to run your scripts, replace $ with %. For example: key: %CI_COMMIT_REF_SLUG%
The cache:key value can’t contain:
The / character, or the equivalent URI-encoded %2F.
A value made only of the . character (any number), or the equivalent URI-encoded %2E.
If your cache uses different paths in different jobs, you should also set a different cache:key.
Otherwise cache content can be overwritten.
You can specify a fallback cache to use when the cache:key is not found.
See the common cache use cases for more
cache:key examples.
cache:key:files
Use the cache:key:files keyword to generate a new key when one or two specific files
change. cache:key:files lets you reuse some caches, and rebuild them less often,
which speeds up subsequent pipeline runs.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Example of cache:key:files:
cache-job:
script:
- echo "This job uses a cache."
cache:
key:
files:
- Gemfile.lock
- package.json
paths:
- vendor/ruby
- node_modules
This example creates a cache for Ruby and Node.js dependencies. The cache is tied
to the current versions of the Gemfile.lock and package.json files. When one of
these files changes, a new cache key is computed and a new cache is created. Any future
job runs that use the same Gemfile.lock and package.json with cache:key:files
use the new cache, instead of rebuilding the dependencies.
The key is a SHA computed from the most recent commits
that changed each listed file.
If neither file is changed in any commits, the fallback key is default.
cache:key:prefix
Use cache:key:prefix to combine a prefix with the SHA computed for cache:key:files.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Example of cache:key:prefix:
rspec:
script:
- echo "This rspec job uses a cache."
cache:
key:
files:
- Gemfile.lock
prefix: $CI_JOB_NAME
paths:
- vendor/ruby
A prefix of $CI_JOB_NAME causes the key to look like rspec-feef9576d21ee9b6a32e30c5c79d0a0ceb68d1e5.
If a branch changes Gemfile.lock, that branch has a new SHA checksum for cache:key:files.
A new cache key is generated, and a new cache is created for that key. If Gemfile.lock
is not found, the prefix is added to default, so the key in the example would be rspec-default.
If no file listed in cache:key:files is changed in any commits, the prefix is added to the default key.
cache:untracked
Use untracked: true to cache all files that are untracked in your Git repository:
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs: true or false (default).
Example of cache:untracked:
rspec:
script: test
cache:
untracked: true
You can combine cache:untracked with cache:paths to cache all untracked files
as well as files in the configured paths. For example:
rspec:
script: test
cache:
untracked: true
paths:
- binaries/
cache:when
Use cache:when to define when to save the cache, based on the status of the job.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
on_success (default): Save the cache only when the job succeeds.
on_failure: Save the cache only when the job fails.
always: Always save the cache.
Example of cache:when:
rspec:
script: rspec
cache:
paths:
- rspec/
when: 'always'
cache:policy
To change the upload and download behavior of a cache, use the cache:policy keyword.
By default, the job downloads the cache when the job starts, and uploads changes
to the cache when the job ends. This caching style is the pull-push policy (default).
To set a job to only download the cache when the job starts, but never upload changes
when the job finishes, use cache:policy:pull.
To set a job to only upload a cache when the job finishes, but never download the cache
when the job starts, use cache:policy:push.
Use the pull policy when you have many jobs executing in parallel that use the same cache.
This policy speeds up job execution and reduces load on the cache server. You can
use a job with the push policy to build the cache.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs:
pull
push
pull-push (default)
Example of cache:policy:
prepare-dependencies-job:
stage: build
cache:
key: gems
paths:
- vendor/bundle
policy: push
script:
- echo "This job only downloads dependencies and builds the cache."
- echo "Downloading dependencies..."
faster-test-job:
stage: test
cache:
key: gems
paths:
- vendor/bundle
policy: pull
script:
- echo "This job script uses the cache, but does not update it."
- echo "Running tests..."
coverage
Use coverage with a custom regular expression to configure how code coverage
is extracted from the job output. The coverage is shown in the UI if at least one
line in the job output matches the regular expression.
Possible inputs: A regular expression. Must start and end with
/. Must match the coverage number.
May match surrounding text as well, so you don’t need to use a regular expression character group
to capture the exact number.
Example of coverage:
job1:
script: rspec
coverage: '/Code coverage: \d+\.\d+/'
In this example, a line like Code coverage: 67.89% of lines covered would match.
The coverage number is the first occurrence of the pattern \d+(\.\d+)? in the matched line.
The sample matching line above gives a code coverage of 67.89.
Coverage regular expressions set in .gitlab-ci.yml take precedence over coverage regular expressions set in the
GitLab UI.
dast_configuration
Use the dast_configuration keyword to specify a site profile and scanner profile to be used in a
CI/CD configuration. Both profiles must first have been created in the project. The job’s stage must
be dast.
Possible inputs: One each of site_profile and scanner_profile.
Use site_profile to specify the site profile to be used in the job.
Use scanner_profile to specify the scanner profile to be used in the job.
Example of dast_configuration:
stages:
- build
- dast
include:
- template: DAST.gitlab-ci.yml
dast:
dast_configuration:
site_profile: "Example Co"
scanner_profile: "Quick Passive Test"
In this example, the dast job extends the dast configuration added with the include keyword
to select a specific site profile and scanner profile.
dependencies
Use the dependencies keyword to define a list of jobs to fetch artifacts from.
You can also set a job to download no artifacts at all.
If you do not use dependencies, all artifacts from previous stages are passed to each job.
Possible inputs: The names of jobs to fetch artifacts from, or an empty array
([]), to configure the job to not download any artifacts.
Example of dependencies:
build osx:
stage: build
script: make build:osx
artifacts:
paths:
- binaries/
build linux:
stage: build
script: make build:linux
artifacts:
paths:
- binaries/
test osx:
stage: test
script: make test:osx
dependencies:
- build osx
test linux:
stage: test
script: make test:linux
dependencies:
- build linux
deploy:
stage: deploy
script: make deploy
In this example, two jobs have artifacts: build osx and build linux. When test osx is executed,
the artifacts from build osx are downloaded and extracted in the context of the build.
The same thing happens for test linux and artifacts from build linux.
The deploy job downloads artifacts from all previous jobs because of
the stage precedence.
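To skip downloading artifacts entirely, pass an empty array (the job name here is illustrative):

```yaml
notify:
  stage: deploy
  script: echo "This job needs no artifacts."
  dependencies: []
```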
environment
Use environment to define the environment that a job deploys to.
Possible inputs: The name of the environment the job deploys to, in one of these formats:
Plain text, including letters, digits, spaces, and these characters: -, _, /, $, {, }.
CI/CD variables, including predefined, project, group, instance, or variables defined in the
.gitlab-ci.yml file. You can’t use variables defined in a script section.
Example of environment:
deploy to production:
stage: deploy
script: git push production HEAD:main
environment: production
If you specify an environment and no environment with that name exists, an environment is
created.
environment:name
Set a name for an environment. Common environment names are
qa, staging, and production, but you can use any name.
Possible inputs: The name of the environment the job deploys to, in one of these formats:
Plain text, including letters, digits, spaces, and these characters: -, _, /, $, {, }.
CI/CD variables, including predefined, project, group, instance, or variables defined in the
.gitlab-ci.yml file. You can’t use variables defined in a script section.
Example of environment:name:
deploy to production:
stage: deploy
script: git push production HEAD:main
environment:
name: production
environment:url
Set a URL for an environment.
Possible inputs: A single URL, in one of these formats:
Plain text, like https://prod.example.com.
CI/CD variables, including predefined, project, group, instance, or variables defined in the
.gitlab-ci.yml file. You can’t use variables defined in a script section.
Example of environment:url:
deploy to production:
stage: deploy
script: git push production HEAD:main
environment:
name: production
url: https://prod.example.com
environment:on_stop
Closing (stopping) environments can be achieved with the on_stop keyword
defined under environment. It declares a different job that runs to close the
environment.
See environment:action for more details and an example.
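A minimal sketch of the deploy side, assuming a stop job named stop_review_app exists (like the one shown in the environment:action example):

```yaml
review_app:
  stage: deploy
  script: make deploy-app
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    on_stop: stop_review_app  # runs to close the environment
```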
environment:action
Use the action keyword to specify jobs that prepare, start, or stop environments.
Possible inputs:
start: Default value. Indicates that the job starts the environment. The deployment is created after the job starts.
prepare: Indicates that the job is only preparing the environment. It does not trigger deployments. Read more about preparing environments.
stop: Indicates that the job stops a deployment. For more detail, read Stop an environment.
Example of environment:action:
stop_review_app:
stage: deploy
variables:
GIT_STRATEGY: none
script: make delete-app
when: manual
environment:
name: review/$CI_COMMIT_REF_SLUG
action: stop
environment:auto_stop_in
The auto_stop_in keyword specifies the lifetime of the environment. When an environment expires, GitLab
automatically stops it.
Possible inputs: A period of time written in natural language. For example, these are all equivalent:
168 hours
7 days
one week
Example of environment:auto_stop_in:
review_app:
script: deploy-review-app
environment:
name: review/$CI_COMMIT_REF_SLUG
auto_stop_in: 1 day
When the environment for review_app is created, the environment’s lifetime is set to 1 day.
Every time the review app is deployed, that lifetime is also reset to 1 day.
environment:kubernetes
Use the kubernetes keyword to configure deployments to a
Kubernetes cluster that is associated with your project.
Example of environment:kubernetes:
deploy:
stage: deploy
script: make deploy-app
environment:
name: production
kubernetes:
namespace: production
This configuration sets up the deploy job to deploy to the production
environment, using the production
Kubernetes namespace.
environment:deployment_tier
Use the deployment_tier keyword to specify the tier of the deployment environment.
Possible inputs: One of the following:
production
staging
testing
development
other
Example of environment:deployment_tier:
deploy:
script: echo
environment:
name: customer-portal
deployment_tier: production
Dynamic environments
Use CI/CD variables to dynamically name environments. For example:
deploy as review app:
stage: deploy
script: make deploy
environment:
name: review/$CI_COMMIT_REF_SLUG
url: https://$CI_ENVIRONMENT_SLUG.example.com/
The deploy as review app job is marked as a deployment to dynamically
create the review/$CI_COMMIT_REF_SLUG environment. $CI_COMMIT_REF_SLUG
is a CI/CD variable set by the runner. The
$CI_ENVIRONMENT_SLUG variable is based on the environment name, but suitable
for inclusion in URLs. If the deploy as review app job runs in a branch named
pow, this environment would be accessible with a URL like https://review-pow.example.com/.
extends
Use extends to reuse configuration sections. It’s an alternative to YAML anchors
and is a little more flexible and readable.
Example of extends:
.tests:
script: rake test
stage: test
only:
refs:
- branches
rspec:
extends: .tests
script: rake rspec
only:
variables:
- $RSPEC
In this example, the rspec job uses the configuration from the .tests template job.
When creating the pipeline, GitLab:
Performs a reverse deep merge based on the keys.
Merges the .tests content with the rspec job.
Doesn’t merge the values of the keys.
The result is this rspec job:
rspec:
script: rake rspec
stage: test
only:
refs:
- branches
variables:
- $RSPEC
The extends keyword supports up to eleven levels of inheritance, but you should
avoid using more than three levels.
In the example above, .tests is a hidden job,
but you can extend configuration from regular jobs as well.
You can also use extends to reuse configuration from included configuration files.
image
Use image to specify a Docker image that the job runs in.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs: The name of the image, including the registry path if needed, in one of these formats:
<image-name> (Same as using <image-name> with the latest tag)
<image-name>:<tag>
<image-name>@<digest>
Example of image:
default:
image: ruby:3.0
rspec:
script: bundle exec rspec
rspec 2.7:
image: registry.example.com/my-group/my-project/ruby:2.7
script: bundle exec rspec
In this example, the ruby:3.0 image is the default for all jobs in the pipeline.
The rspec 2.7 job does not use the default, because it overrides the default with
a job-specific image section.
image:name
The name of the Docker image that the job runs in. Similar to image used by itself.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs: The name of the image, including the registry path if needed, in one of these formats:
<image-name> (Same as using <image-name> with the latest tag)
<image-name>:<tag>
<image-name>@<digest>
Example of image:name:
image:
name: "registry.example.com/my/image:latest"
image:entrypoint
Command or script to execute as the container’s entry point. The
entrypoint is translated to the Docker --entrypoint option.
The syntax is similar to the Dockerfile ENTRYPOINT directive,
where each shell token is a separate string in the array.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Example of image:entrypoint:
image:
name: super/sql:experimental
entrypoint: [""]
inherit
Use inherit to control inheritance of default keywords and variables.
inherit:default
Use inherit:default to control the inheritance of default keywords.
Possible inputs: true (default) or false to enable or disable the inheritance of all default keywords,
or a list of specific default keywords to inherit.
Example of inherit:default:
default:
retry: 2
image: ruby:3.0
interruptible: true
job1:
script: echo "This job does not inherit any default keywords."
inherit:
default: false
job2:
script: echo "This job inherits only the two listed default keywords. It does not inherit 'interruptible'."
inherit:
default:
- retry
- image
You can also list default keywords to inherit on one line: default: [keyword1, keyword2]
inherit:variables
Use inherit:variables to control the inheritance of global variables keywords.
Possible inputs: true (default) or false to enable or disable the inheritance of all global variables,
or a list of specific variables to inherit.
Example of inherit:variables:
variables:
VARIABLE1: "This is variable 1"
VARIABLE2: "This is variable 2"
VARIABLE3: "This is variable 3"
job1:
script: echo "This job does not inherit any global variables."
inherit:
variables: false
job2:
script: echo "This job inherits only the two listed global variables. It does not inherit 'VARIABLE3'."
inherit:
variables:
- VARIABLE1
- VARIABLE2
You can also list global variables to inherit on one line: variables: [VARIABLE1, VARIABLE2]
interruptible
Use interruptible if a job should be canceled when a newer pipeline starts before the job completes.
A job with interruptible: true can be cancelled when
a new pipeline starts on the same branch.
You can’t cancel subsequent jobs after a job with interruptible: false starts.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs: true or false (default).
Example of interruptible:
stages:
- stage1
- stage2
- stage3
step-1:
stage: stage1
script:
- echo "Can be canceled."
interruptible: true
step-2:
stage: stage2
script:
- echo "Can not be canceled."
step-3:
stage: stage3
script:
- echo "Because step-2 can not be canceled, this step can never be canceled, even though it's set as interruptible."
interruptible: true
In this example, a new pipeline causes a running pipeline to be:
Canceled, if only step-1 is running or pending.
Not canceled, once step-2 starts.
Set interruptible: true only if the job can be safely canceled after it has started,
like a build job. Deployment jobs usually shouldn’t be cancelled, to prevent partial deployments.
To completely cancel a running pipeline, all jobs must have interruptible: true,
or interruptible: false jobs must not have started.
needs
Use needs to execute jobs out-of-order. Relationships between jobs
that use needs can be visualized as a directed acyclic graph.
Possible inputs: An array of jobs, or an empty array
([]), to set the job to start as soon as the pipeline is created.
Example of needs:
linux:build:
stage: build
script: echo "Building linux..."
mac:build:
stage: build
script: echo "Building mac..."
lint:
stage: test
needs: []
script: echo "Linting..."
linux:rspec:
stage: test
needs: ["linux:build"]
script: echo "Running rspec on linux..."
mac:rspec:
stage: test
needs: ["mac:build"]
script: echo "Running rspec on mac..."
production:
stage: deploy
script: echo "Running production..."
In this example:
The lint job runs immediately without waiting for the build stage
to complete because it has no needs (needs: []).
The linux:rspec job runs as soon as the linux:build
job finishes, without waiting for mac:build to finish.
The mac:rspec job runs as soon as the mac:build
job finishes, without waiting for linux:build to finish.
The production job runs as soon as all previous jobs finish:
linux:build, linux:rspec, mac:build, mac:rspec.
The maximum number of jobs in the needs array is limited.
If needs refers to a job that uses the parallel keyword,
it depends on all jobs created in parallel, not just one job. It also downloads
artifacts from all the parallel jobs by default. If the artifacts have the same
name, they overwrite each other and only the last one downloaded is saved.
Stages must be explicitly defined for all jobs that use the needs keyword, or are referenced
in a job’s needs section.
If needs refers to a job that might not be added to
a pipeline because of only, except, or rules, the pipeline might fail to create.
needs:artifacts
When a job uses needs, it no longer downloads all artifacts from previous stages
by default, because jobs with needs can start before earlier stages complete. With
needs you can only download artifacts from the jobs listed in the needs configuration.
Use artifacts: true (default) or artifacts: false to control when artifacts are
downloaded in jobs that use needs.
Keyword type: Job keyword. You can use it only as part of a job. Must be used with needs:job.
Possible inputs: true (default) or false.
Example of needs:artifacts:
test-job1:
stage: test
needs:
- job: build_job1
artifacts: true
test-job2:
stage: test
needs:
- job: build_job2
artifacts: false
test-job3:
needs:
- job: build_job1
artifacts: true
- job: build_job2
- build_job3
In this example:
The test-job1 job downloads the build_job1 artifacts.
The test-job2 job does not download the build_job2 artifacts.
The test-job3 job downloads the artifacts from all three build_jobs, because
artifacts is true, or defaults to true, for all three needed jobs.
You can’t combine the dependencies keyword
with needs to control artifact downloads in jobs.
needs:project
Use needs:project to download artifacts from up to five jobs in other pipelines.
The artifacts are downloaded from the latest successful pipeline for the specified ref.
If there is a pipeline running for the specified ref, a job with needs:project
does not wait for the pipeline to complete. Instead, the job downloads the artifact
from the latest pipeline that completed successfully.
needs:project must be used with job, ref, and artifacts.
Possible inputs:
needs:project: A full project path, including namespace and group.
job: The job to download artifacts from.
ref: The ref to download artifacts from.
artifacts: Must be true to download artifacts.
Example of needs:project:
build_job:
stage: build
script:
- ls -lhR
needs:
- project: namespace/group/project-name
job: build-1
ref: main
artifacts: true
In this example, build_job downloads the artifacts from the latest successful build-1 job
on the main branch in the group/project-name project.
You can use CI/CD variables in needs:project,
for example:
build_job:
stage: build
script:
- ls -lhR
needs:
- project: $CI_PROJECT_PATH
job: $DEPENDENCY_JOB_NAME
ref: $ARTIFACTS_DOWNLOAD_REF
artifacts: true
To download artifacts from a different pipeline in the current project, set project
to be the same as the current project, but use a different ref than the current pipeline.
Concurrent pipelines running on the same ref could override the artifacts.
You can’t use needs:project in the same job as trigger.
When you use needs:project to download artifacts from another pipeline, the job does not wait for
the needed job to complete. Directed acyclic graph
behavior is limited to jobs in the same pipeline. Make sure that the needed job in the other
pipeline completes before the job that needs it tries to download the artifacts.
You can’t download artifacts from jobs that run in parallel.
To download artifacts between parent-child pipelines, use
needs:pipeline:job.
needs:pipeline:job
Possible inputs:
needs:pipeline: A pipeline ID. Must be a pipeline present in the same parent-child pipeline hierarchy.
job: The job to download artifacts from.
Example of needs:pipeline:job:
Parent pipeline (.gitlab-ci.yml):
create-artifact:
stage: build
script: echo "sample artifact" > artifact.txt
artifacts:
paths: [artifact.txt]
child-pipeline:
stage: test
trigger:
include: child.yml
strategy: depend
variables:
PARENT_PIPELINE_ID: $CI_PIPELINE_ID
Child pipeline (child.yml):
use-artifact:
script: cat artifact.txt
needs:
- pipeline: $PARENT_PIPELINE_ID
job: create-artifact
In this example, the create-artifact job in the parent pipeline creates some artifacts.
The child-pipeline job triggers a child pipeline, and passes the CI_PIPELINE_ID
variable to the child pipeline as a new PARENT_PIPELINE_ID variable. The child pipeline
can use that variable in needs:pipeline to download artifacts from the parent pipeline.
The pipeline attribute does not accept the current pipeline ID ($CI_PIPELINE_ID).
To download artifacts from a job in the current pipeline, use needs:artifacts.
needs:optional
To need a job that sometimes does not exist in the pipeline, add optional: true
to the needs configuration. If not defined, optional: false is the default.
Jobs that use rules, only, or except might
not always exist in a pipeline. When the pipeline is created, GitLab checks the needs
relationships before starting it. Without optional: true, needs relationships that
point to a job that does not exist stop the pipeline from starting and cause a pipeline
error similar to:
'job1' job needs 'job2' job, but it was not added to the pipeline
Possible inputs:
job: The job to make optional.
optional: true or false (default).
Example of needs:optional:
build:
stage: build
rules:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
rspec:
stage: test
needs:
- job: build
optional: true
In this example, when the pipeline runs on the default branch, the build job exists in the pipeline, and the rspec
job waits for it to complete before starting.
On other branches, the build job does not exist in the pipeline.
The rspec job runs immediately (similar to needs: []) because its needs
relationship to the build job is optional.
needs:pipeline
You can mirror the pipeline status from an upstream pipeline to a bridge job by using
the needs:pipeline keyword. The latest pipeline status from the default branch is
replicated to the bridge job.
Possible inputs: A full project path, including namespace and group. If the project is in the
same group or namespace, you can omit them from the project
keyword. For example: project: group/project-name or project: project-name.
Example of needs:pipeline:
upstream_bridge:
stage: test
needs:
pipeline: other/project
If you add the job keyword to needs:pipeline, the job no longer mirrors the
pipeline status. The behavior changes to needs:pipeline:job.
only / except
only and except are not being actively developed. rules is the preferred
keyword to control when to add jobs to pipelines.
You can use only and except to control when to add jobs to pipelines.
Use only to define when a job runs.
Use except to define when a job does not run.
Four keywords can be used with only and except:
refs, variables, changes, and kubernetes.
See specify when jobs run with only and except
for more details and examples.
only:refs / except:refs
Use the only:refs and except:refs keywords to control when to add jobs to a
pipeline based on branch names or pipeline types.
Possible inputs: An array including any number of:
Branch names, for example main or my-feature-branch.
Regular expressions that match branch names, for example /^feature-.*/.
The following keywords:
api: For pipelines triggered by the pipelines API.
branches: When the Git reference for a pipeline is a branch.
chat: For pipelines created by using a GitLab ChatOps command.
external: When you use CI services other than GitLab.
external_pull_requests: When an external pull request on GitHub is created or updated (See Pipelines for external pull requests).
merge_requests: For pipelines created when a merge request is created or updated. Enables merge request pipelines, merged results pipelines, and merge trains.
pipelines: For multi-project pipelines created by using the API with CI_JOB_TOKEN, or the trigger keyword.
pushes: For pipelines triggered by a git push event, including for branches and tags.
schedules: For scheduled pipelines.
tags: When the Git reference for a pipeline is a tag.
triggers: For pipelines created by using a trigger token.
web: For pipelines created by selecting Run pipeline in the GitLab UI, from the project’s CI/CD > Pipelines section.
Example of only:refs and except:refs:
job1:
script: echo
only:
- main
- /^issue-.*$/
- merge_requests
job2:
script: echo
except:
- main
- /^stable-branch.*$/
- schedules
Scheduled pipelines run on specific branches, so jobs configured with only: branches
run on scheduled pipelines too. Add except: schedules to prevent jobs with only: branches
from running on scheduled pipelines.
only or except used without any other keywords are equivalent to only: refs
or except: refs. For example, the following two job configurations have the same
behavior:
job1:
script: echo
only:
- branches
job2:
script: echo
only:
refs:
- branches
If a job does not use only, except, or rules, then only is set to branches
and tags by default.
For example, job1 and job2 are equivalent:
job1:
script: echo "test"
job2:
script: echo "test"
only:
- branches
- tags
only:variables / except:variables
Use the only:variables or except:variables keywords to control when to add jobs
to a pipeline, based on the status of CI/CD variables.
Example of only:variables:
deploy:
script: cap staging deploy
only:
variables:
- $RELEASE == "staging"
- $STAGING
only:changes / except:changes
Use the changes keyword with only to run a job, or with except to skip a job,
when a Git push event modifies a file.
Use changes in pipelines with the following refs:
branches
external_pull_requests
merge_requests (see additional details about using only:changes with merge request pipelines)
Possible inputs: An array including any number of:
Paths to files.
Wildcard paths for single directories, for example path/to/directory/*, or a directory
and all its subdirectories, for example path/to/directory/**/*.
Wildcard paths for files with the same extension or multiple extensions, for example
path/to/directory/*.{rb,py,sh}.
See the Ruby fnmatch documentation
for the supported syntax list.
Wildcard paths to files in the root directory, or all directories, wrapped in double quotes.
For example "*.json" or "**/*.json".
Example of only:changes:
docker build:
script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
only:
refs:
- branches
changes:
- Dockerfile
- docker/scripts/*
- dockerfiles/**/*
- more_scripts/*.{rb,py,sh}
- "**/*.json"
changes resolves to true if any of the matching files are changed (an OR operation).
For pipelines with refs other than branches, external_pull_requests, or merge_requests,
changes can’t determine if a given file is new or old and always returns true.
If you use only: changes with other refs, jobs ignore the changes and always run.
If you use except: changes with other refs, jobs ignore the changes and never run.
See the related topics for more only: changes and except: changes examples.
If you use changes with only allow merge requests to be merged if the pipeline succeeds,
you should also use only:merge_requests.
Jobs or pipelines can run unexpectedly when using only: changes.
only:kubernetes / except:kubernetes
Use only:kubernetes or except:kubernetes to control if jobs are added to the pipeline
when the Kubernetes service is active in the project.
Possible inputs: The kubernetes strategy accepts only the active keyword.
Example of only:kubernetes:
deploy:
only:
kubernetes: active
In this example, the deploy job runs only when the Kubernetes service is active
in the project.
pages
Use pages to define a GitLab Pages job that
uploads static content to GitLab. The content is then published as a website.
Example of pages:
pages:
stage: deploy
script:
- mkdir .public
- cp -r * .public
- mv .public public
artifacts:
paths:
- public
rules:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
This example moves all files from the root of the project to the public/ directory.
The .public workaround is so cp does not also copy public/ to itself in an infinite loop.
To use this keyword, the static content must be in a public/ directory, and you must
define artifacts with a path to the public/ directory.
parallel
Use parallel to run a job multiple times in parallel in a single pipeline.
Parallel jobs are named sequentially from job_name 1/N to job_name N/N.
Possible inputs: A numeric value from 2 to 50.
Example of parallel:
test:
script: rspec
parallel: 5
This example creates 5 jobs that run in parallel, named test 1/5 to test 5/5.
Every parallel job has a CI_NODE_INDEX and CI_NODE_TOTAL
predefined CI/CD variable set.
parallel:matrix
Use parallel:matrix to run a job multiple times in parallel in a single pipeline,
but with different variable values for each instance of the job.
Possible inputs: An array of hashes of variables. The variable names can use only numbers,
letters, and underscores (_).
Example of parallel:matrix:
deploystacks:
stage: deploy
script:
- bin/deploy
parallel:
matrix:
- PROVIDER: aws
STACK:
- monitoring
- app1
- app2
- PROVIDER: ovh
STACK: [monitoring, backup, app]
- PROVIDER: [gcp, vultr]
STACK: [data, processing]
This example generates 10 parallel deploystacks jobs, each with different values
for PROVIDER and STACK:
deploystacks: [aws, monitoring]
deploystacks: [aws, app1]
deploystacks: [aws, app2]
deploystacks: [ovh, monitoring]
deploystacks: [ovh, backup]
deploystacks: [ovh, app]
deploystacks: [gcp, data]
deploystacks: [gcp, processing]
deploystacks: [vultr, data]
deploystacks: [vultr, processing]
release
Use release to create a release.
The release job must have access to the release-cli,
which must be in the $PATH.
If you use the Docker executor, you can use this image from the GitLab container registry:
registry.gitlab.com/gitlab-org/release-cli:latest
Possible inputs: The release subkeys:
tag_name
name (optional)
description
ref (optional)
milestones (optional)
released_at (optional)
assets:links (optional)
Example of the release keyword:
release_job:
stage: release
image: registry.gitlab.com/gitlab-org/release-cli:latest
rules:
- if: $CI_COMMIT_TAG # Run this job when a tag is created manually
script:
- echo "Running the release job."
release:
tag_name: $CI_COMMIT_TAG
name: 'Release $CI_COMMIT_TAG'
description: 'Release created using the release-cli.'
All jobs that use the release keyword, except trigger jobs, must include the script keyword. A release
job can use the output from script commands. If you don’t need the script, you can use a placeholder:
script:
  - echo "release job"
The release section executes after the script keyword and before the after_script.
A release is created only if the job’s main script succeeds.
If the release already exists, it is not updated and the job with the release keyword fails.
If you use the Shell executor or similar, install release-cli on the server where the runner is registered.
See also the CI/CD examples of the release keyword.
release:tag_name
Required. The Git tag for the release.
To create a release when a new tag is added to the project:
Use the $CI_COMMIT_TAG CI/CD variable as the tag_name.
Use rules:if or only: tags to configure
the job to run only for new tags.
job:
script: echo "Running the release job for the new tag."
release:
tag_name: $CI_COMMIT_TAG
description: 'Release description'
rules:
- if: $CI_COMMIT_TAG
To create a release and a new tag at the same time, your rules or only
should not configure the job to run only for new tags. A semantic versioning example:
job:
script: echo "Running the release job and creating a new tag."
release:
tag_name: ${MAJOR}_${MINOR}_${REVISION}
description: 'Release description'
rules:
- if: $CI_PIPELINE_SOURCE == "schedule"
release:name
The release name. If omitted, it is populated with the value of release: tag_name.
Example of release:name:
release_job:
stage: release
release:
name: 'Release $CI_COMMIT_TAG'
release:description
The long description of the release.
Possible inputs: A string with the long description, or the path to a file that contains
the description. The file location must be relative to the project directory ($CI_PROJECT_DIR).
If the file is a symbolic link, it must be in the $CI_PROJECT_DIR.
The ./path/to/file and filename can’t contain spaces.
Example of release:description:
job:
release:
tag_name: ${MAJOR}_${MINOR}_${REVISION}
description: './path/to/CHANGELOG.md'
release:ref
The ref for the release, if the release: tag_name doesn’t exist yet.
Possible inputs: A commit SHA, another tag name, or a branch name.
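A minimal sketch (the tag, ref, and job name are illustrative):

```yaml
release_job:
  stage: release
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  script:
    - echo "Creating release v1.2.3 from an explicit ref."
  release:
    tag_name: 'v1.2.3'
    description: 'Release v1.2.3'
    ref: 'main'  # used to create the tag, because v1.2.3 does not exist yet
```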
release:milestones
The title of each milestone the release is associated with.
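For example (the milestone titles and job name are illustrative):

```yaml
release_job:
  stage: release
  script:
    - echo "Creating a release linked to milestones."
  release:
    tag_name: $CI_COMMIT_TAG
    description: 'Release associated with two milestones.'
    milestones:
      - 'm1'
      - 'm2'
```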
release:released_at
The date and time when the release is ready. Must be enclosed in quotes and expressed
in ISO 8601 format.
Example of release:released_at:
released_at: '2021-03-15T08:00:00Z'
release:assets:links
Use release:assets:links to include asset links in the release.
Requires release-cli version v0.4.0 or later.
Example of release:assets:links:
assets:
links:
- name: 'asset1'
url: 'https://example.com/assets/1'
- name: 'asset2'
url: 'https://example.com/assets/2'
filepath: '/pretty/url/1' # optional
link_type: 'other' # optional
resource_group
Use resource_group to create a resource group that
ensures a job is mutually exclusive across different pipelines for the same project.
If multiple jobs that belong to the same resource group are queued simultaneously, only one of the jobs starts. The other jobs wait until the resource_group is free.
Possible inputs: Only letters, digits, -, _, /, $, {, }, ., and spaces.
It can't start or end with /.
Example of resource_group:
deploy-to-production:
script: deploy
resource_group: production
In this example, two deploy-to-production jobs in two separate pipelines can never run at the same time. As a result,
you can ensure that concurrent deployments never happen to the production environment.
retry
Use retry to configure how many times a job is retried if it fails.
If not defined, defaults to 0 and jobs do not retry.
By default, all failure types cause the job to be retried. Use retry:when
to select which failures to retry on.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs: 0 (default), 1, or 2.
Example of retry:
test:
script: rspec
retry: 2
retry:when
Use retry:when with retry:max to retry jobs for only specific failure cases.
retry:max is the maximum number of retries, like retry, and can be
0, 1, or 2.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs: A single failure type, or an array of one or more failure types:
always: Retry on any failure (default).
unknown_failure: Retry when the failure reason is unknown.
script_failure: Retry when the script failed.
api_failure: Retry on API failure.
stuck_or_timeout_failure: Retry when the job got stuck or timed out.
runner_system_failure: Retry if there is a runner system failure (for example, job setup failed).
runner_unsupported: Retry if the runner is unsupported.
stale_schedule: Retry if a delayed job could not be executed.
job_execution_timeout: Retry if the script exceeded the maximum execution time set for the job.
archived_failure: Retry if the job is archived and can’t be run.
unmet_prerequisites: Retry if the job failed to complete prerequisite tasks.
scheduler_failure: Retry if the scheduler failed to assign the job to a runner.
data_integrity_failure: Retry if there is a structural integrity problem detected.
Example of retry:when (single failure type):
test:
script: rspec
retry:
max: 2
when: runner_system_failure
Example of retry:when (array of failure types):
test:
script: rspec
retry:
max: 2
when:
- runner_system_failure
- stuck_or_timeout_failure
rules
Use rules to include or exclude jobs in pipelines.
rules replaces only/except and they can't be used together
in the same job. If you configure one job to use both keywords, GitLab returns
a key may not be used with rules error.
rules accepts an array of rules defined with:
if
changes
exists
allow_failure
variables
when
The job is added to the pipeline:
- If an if, changes, or exists rule matches and also has when: on_success (default),
when: delayed, or when: always.
- If a rule is reached that is only when: on_success, when: delayed, or when: always.
The job is not added to the pipeline:
- If no rules match.
- If a rule matches and has when: never.
You can use !reference tags to reuse rules configuration
in different jobs.
rules:if
Use rules:if clauses to specify when to add a job to a pipeline:
- If an if statement is true, add the job to the pipeline.
- If an if statement is true, but it's combined with when: never, do not add the job to the pipeline.
- If no if statements are true, do not add the job to the pipeline.
if clauses are evaluated based on the values of predefined CI/CD variables
or custom CI/CD variables.
You can also use rules:if with workflow to configure the pipeline behavior.
Example of rules:if:
job:
script: echo "Hello, Rules!"
rules:
- if: '$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ /^feature/ && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME != $CI_DEFAULT_BRANCH'
when: never
- if: '$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ /^feature/'
when: manual
allow_failure: true
- if: '$CI_MERGE_REQUEST_SOURCE_BRANCH_NAME'
If a rule matches and has no when defined, the rule uses the when
defined for the job, which defaults to on_success if not defined.
You can define when once per rule, or once at the job-level,
which applies to all rules. You can't mix when at the job-level with when in rules.
The when configuration in rules takes precedence over when at the job-level.
Unlike variables in script
sections, variables in rules expressions are always formatted as $VARIABLE.
You can use rules:if with include to conditionally include other configuration files.
For more information, see common if expressions for rules
and how to use rules to run merge request pipelines.
rules:changes
Use rules:changes to specify when to add a job to a pipeline by checking for changes
to specific files.
You should use rules: changes only with branch pipelines or merge request pipelines.
You can use rules: changes with other pipeline types, but rules: changes always
evaluates to true when there is no Git push event. Tag pipelines, scheduled pipelines, manual pipelines,
and so on do not have a Git push event associated with them. A rules: changes job
is always added to those pipelines if there is no if that limits the job to
branch or merge request pipelines.
Example of rules:changes:
docker build:
script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
rules:
- if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
changes:
- Dockerfile
when: manual
allow_failure: true
In this example:
- If the pipeline is a merge request pipeline, check Dockerfile for changes.
- If Dockerfile has changed, add the job to the pipeline as a manual job, and the pipeline
continues running even if the job is not triggered (allow_failure: true).
- If Dockerfile has not changed, do not add the job to any pipeline (same as when: never).
rules: changes works the same way as only: changes and except: changes.
You can use when: never to implement a rule similar to except:changes.
changes resolves to true if any of the matching files are changed (an OR operation).
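For example, a sketch of an except:changes-style rule that skips the job in merge request pipelines when Dockerfile changed, but runs it in all other merge request pipelines:

```yaml
docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
      changes:
        - Dockerfile
      when: never                 # skip the job when Dockerfile changed
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
```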
rules:exists
Use exists to run a job when certain files exist in the repository.
Paths are relative to the project directory ($CI_PROJECT_DIR)
and can't directly link outside it. File paths can use glob patterns.
Example of rules:exists:
job:
script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
rules:
- exists:
- Dockerfile
In this example, job runs if a Dockerfile exists anywhere in the repository.
Glob patterns are interpreted with Ruby fnmatch, with the flags File::FNM_PATHNAME | File::FNM_DOTMATCH | File::FNM_EXTGLOB.
For performance reasons, GitLab performs a maximum of 10,000 checks against exists patterns or
file paths. After the 10,000th check, rules with patterned globs always match.
In other words, the exists rule always assumes a match in projects with more
than 10,000 files.
exists resolves to true if any of the listed files are found (an OR operation).
rules:allow_failure
Use allow_failure: true in rules to allow a job to fail
without stopping the pipeline.
allow_failure: true with a manual job. The pipeline continues
running without waiting for the result of the manual job. allow_failure: false
combined with when: manual in rules causes the pipeline to wait for the manual
job to run before continuing.
Possible inputs: true or false. Defaults to false if not defined.
Example of rules:allow_failure:
job:
script: echo "Hello, Rules!"
rules:
- if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH'
when: manual
allow_failure: true
If the rule matches, then the job is a manual job with allow_failure: true.
The rule-level rules:allow_failure overrides the job-level allow_failure,
and only applies when the specific rule triggers the job.
rules:variables
Use variables in rules to define variables for specific conditions.
Possible inputs: A hash of variables in the format VARIABLE-NAME: value.
Example of rules:variables:
job:
variables:
DEPLOY_VARIABLE: "default-deploy"
rules:
- if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
variables: # Override DEPLOY_VARIABLE defined
DEPLOY_VARIABLE: "deploy-production" # at the job level.
- if: $CI_COMMIT_REF_NAME =~ /feature/
variables:
IS_A_FEATURE: "true" # Define a new variable.
script:
- echo "Run script with $DEPLOY_VARIABLE as an argument"
- echo "Run another script if $IS_A_FEATURE exists"
script
Use script to specify commands for the runner to execute.
All jobs except trigger jobs require a script keyword.
Example of script:
job1:
script: "bundle exec rspec"
job2:
script:
- uname -a
- bundle exec rspec
When you use special characters in script, you must use single quotes (') or double quotes (").
You can use color codes with script
to make job logs easier to review.
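For example, a command that contains a colon followed by a space must be wrapped in quotes so the YAML parser does not read it as a key-value pair (the job name and token variable here are hypothetical placeholders):

```yaml
api_job:
  script:
    # Without the outer single quotes, "PRIVATE-TOKEN: ..." would be
    # parsed as a YAML mapping and break the configuration.
    - 'curl --header "PRIVATE-TOKEN: $API_TOKEN" "https://gitlab.example.com/api/v4/projects"'
```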
secrets
Use secrets to specify CI/CD secrets to:
- Retrieve from an external secrets provider.
- Make available in jobs as CI/CD variables (file type by default).
This keyword must be used with secrets:vault.
secrets:vault
Use secrets:vault to specify secrets provided by a HashiCorp Vault server.
Possible inputs:
- engine:name: Name of the secrets engine.
- engine:path: Path to the secrets engine.
- path: Path to the secret.
- field: Name of the field where the password is stored.
Example of secrets:vault:
job:
secrets:
DATABASE_PASSWORD: # Store the path to the secret in this CI/CD variable
vault: # Translates to secret: `ops/data/production/db`, field: `password`
engine:
name: kv-v2
path: ops
path: production/db
field: password
You can shorten this syntax. With the short syntax, engine:name and engine:path
both default to kv-v2:
job:
secrets:
DATABASE_PASSWORD: # Store the path to the secret in this CI/CD variable
vault: production/db/password # Translates to secret: `kv-v2/data/production/db`, field: `password`
To specify a custom secrets engine path in the short syntax, add a suffix that starts with @:
job:
secrets:
DATABASE_PASSWORD: # Store the path to the secret in this CI/CD variable
vault: production/db/password@ops # Translates to secret: `ops/data/production/db`, field: `password`
secrets:file
Use secrets:file to configure the secret to be stored as either a
file or variable type CI/CD variable.
By default, the secret is passed to the job as a file type CI/CD variable. The value
of the secret is stored in the file and the variable contains the path to the file.
If your software can't use file type CI/CD variables, set file: false to store
the secret value directly in the variable.
Possible inputs: true (default) or false.
Example of secrets:file:
job:
secrets:
DATABASE_PASSWORD:
vault: production/db/password@ops
file: false
The file keyword is a setting for the CI/CD variable and must be nested under
the CI/CD variable name, not in the vault section.
services
Use services to specify an additional Docker image to run scripts in. The services image is linked
to the image specified in the image keyword.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs: The name of the services image, including the registry path if needed, in one of these formats:
- <image-name> (same as using <image-name> with the latest tag)
- <image-name>:<tag>
- <image-name>@<digest>
Example of services:
default:
image:
name: ruby:2.6
entrypoint: ["/bin/bash"]
services:
- name: my-postgres:11.7
alias: db-postgres
entrypoint: ["/usr/local/bin/db-postgres"]
command: ["start"]
before_script:
- bundle install
test:
script:
- bundle exec rake spec
Learn more about available settings for services
and how to define services in the .gitlab-ci.yml file.
stage
Use stage to define which stage a job runs in. Jobs in the same
stage can execute in parallel (see Additional details).
If stage is not defined, the job uses the test stage by default.
Example of stage:
stages:
- build
- test
- deploy
job1:
stage: build
script:
- echo "This job compiles code."
job2:
stage: test
script:
- echo "This job tests the compiled code. It runs when the build stage completes."
job3:
script:
- echo "This job also runs in the test stage".
job4:
stage: deploy
script:
- echo "This job deploys the code. It runs when the test stage completes."
Jobs can run in parallel if they run on different runners. If you have only one runner,
jobs can run in parallel if the runner's concurrent setting
is greater than 1.
stage: .pre
Use the .pre stage to make a job run at the start of a pipeline. .pre is
always the first stage in a pipeline. User-defined stages execute after .pre.
You do not have to define .pre in stages.
You must have a job in at least one stage other than .pre or .post.
Keyword type: You can only use it with a job's stage keyword.
Example of stage: .pre:
stages:
- build
- test
job1:
stage: build
script:
- echo "This job runs in the build stage."
first-job:
stage: .pre
script:
- echo "This job runs in the .pre stage, before all other stages."
job2:
stage: test
script:
- echo "This job runs in the test stage."
stage: .post
Use the .post stage to make a job run at the end of a pipeline. .post
is always the last stage in a pipeline. User-defined stages execute before .post.
You do not have to define .post in stages.
You must have a job in at least one stage other than .pre or .post.
Keyword type: You can only use it with a job's stage keyword.
Example of stage: .post:
stages:
- build
- test
job1:
stage: build
script:
- echo "This job runs in the build stage."
last-job:
stage: .post
script:
- echo "This job runs in the .post stage, after all other stages."
job2:
stage: test
script:
- echo "This job runs in the test stage."
tags
Use tags to select a specific runner from the list of all runners that are
available for the project.
When you register a runner, you can specify the runner's tags, for
example ruby, postgres, or development. To pick up and run a job, a runner must
be assigned every tag listed in the job.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Example of tags:
job:
tags:
- ruby
- postgres
In this example, only runners with both the ruby and postgres tags can run the job.
timeout
Use timeout to configure a timeout for a specific job. If the job runs for longer
than the timeout, the job fails.
The job-level timeout can be longer than the project-level timeout, but can't be longer than the runner's timeout.
Keyword type: Job keyword. You can use it only as part of a job or in the default section.
Possible inputs: A period of time written in natural language. For example, these are all equivalent:
- 3600 seconds
- 60 minutes
- one hour
Example of timeout:
build:
script: build.sh
timeout: 3 hours 30 minutes
test:
script: rspec
timeout: 3h 30m
trigger
Use trigger to start a downstream pipeline that is either:
- A multi-project pipeline.
- A child pipeline.
Example of trigger for multi-project pipeline:
rspec:
stage: test
script: bundle exec rspec
staging:
stage: deploy
trigger: my/deployment
Example of trigger for child pipelines:
trigger_job:
trigger:
include: path/to/child-pipeline.yml
Jobs with trigger can only use a limited set of keywords.
For example, you can't run commands with script, before_script,
or after_script. Also, environment is not supported with trigger.
In GitLab 13.5 and later, you can use when:manual in the same job as trigger. In GitLab 13.4 and
earlier, using them together causes the error jobs:#{job-name} when should be on_success, on_failure or always.
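For GitLab 13.5 and later, a manually-triggered downstream pipeline can be sketched like this (the child pipeline path is a placeholder):

```yaml
trigger_job:
  trigger:
    include: path/to/child-pipeline.yml   # placeholder path
  when: manual                            # the downstream pipeline starts only when triggered manually
```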
To force a rebuild of a specific branch, tag, or commit, you can use an API call with a trigger token. The trigger token is different than the trigger keyword.
trigger:strategy
Use trigger:strategy to force the trigger job to wait for the downstream pipeline to complete
before it is marked as success.
This behavior is different than the default, which is for the trigger job to be marked as
success as soon as the downstream pipeline is created.
Example of trigger:strategy:
trigger_job:
trigger:
include: path/to/child-pipeline.yml
strategy: depend
trigger:forward
Introduced behind a feature flag named ci_trigger_forward_variables.
The feature is not ready for production use.
Use trigger:forward to specify what to forward to the downstream pipeline. You can control
what is forwarded to both parent-child pipelines
and multi-project pipelines.
Possible inputs:
- yaml_variables: true (default), or false. When true, variables defined
in the trigger job are passed to downstream pipelines.
- pipeline_variables: true or false (default). When true, manual pipeline variables and scheduled pipeline variables
are passed to downstream pipelines.
Example of trigger:forward:
Run this pipeline manually, with the CI/CD variable MYVAR = my value:
variables: # default variables for each job
VAR: value
# Default behavior:
# - VAR is passed to the child
# - MYVAR is not passed to the child
child1:
trigger:
include: .child-pipeline.yml
# Forward pipeline variables:
# - VAR is passed to the child
# - MYVAR is passed to the child
child2:
trigger:
include: .child-pipeline.yml
forward:
pipeline_variables: true
# Do not forward YAML variables:
# - VAR is not passed to the child
# - MYVAR is not passed to the child
child3:
trigger:
include: .child-pipeline.yml
forward:
yaml_variables: false
variables
Use variables to create custom variables.
Variables are always available in script, before_script, and after_script commands.
You can also use variables as inputs in some job keywords.
If you define variables at the global level, each variable is copied to
every job configuration when the pipeline is created. If the job already has that
variable defined, the job-level variable takes precedence.
Variable names can use only numbers, letters, and underscores (_). In some shells,
the first character must be a letter.
Example of variables:
variables:
DEPLOY_SITE: "https://example.com/"
deploy_job:
stage: deploy
script:
- deploy-script --url $DEPLOY_SITE --path "/"
deploy_review_job:
stage: deploy
variables:
REVIEW_PATH: "/review"
script:
- deploy-review-script --url $DEPLOY_SITE --path $REVIEW_PATH
variables:description
Use the description keyword to define a pipeline-level (global) variable that is prefilled
when running a pipeline manually.
Must be used with value, for the variable value.
Example of variables:description:
variables:
DEPLOY_ENVIRONMENT:
value: "staging"
description: "The deployment target. Change this variable to 'canary' or 'production' if needed."
when
Use when to configure the conditions for when jobs run. If not defined in a job,
the default value is when: on_success.
Possible inputs:
- on_success (default): Run the job only when all jobs in earlier stages succeed
or have allow_failure: true.
- manual: Run the job only when triggered manually.
- always: Run the job regardless of the status of jobs in earlier stages.
- on_failure: Run the job only when at least one job in an earlier stage fails.
- delayed: Delay the execution of a job
for a specified duration.
- never: Don't run the job.
Example of when:
stages:
- build
- cleanup_build
- test
- deploy
- cleanup
build_job:
stage: build
script:
- make build
cleanup_build_job:
stage: cleanup_build
script:
- cleanup build when failed
when: on_failure
test_job:
stage: test
script:
- make test
deploy_job:
stage: deploy
script:
- make deploy
when: manual
cleanup_job:
stage: cleanup
script:
- cleanup after jobs
when: always
In this example, the script:
- Runs cleanup_build_job only when build_job fails.
- Always runs cleanup_job as the last step in the pipeline regardless of
success or failure.
- Runs deploy_job when you run it manually in the GitLab UI.
In GitLab 13.5 and later, you can use when:manual in the same job as trigger. In GitLab 13.4 and
earlier, using them together causes the error jobs:#{job-name} when should be on_success, on_failure or always.
The default behavior of allow_failure changes to true with when: manual.
However, if you use when: manual with rules, allow_failure defaults
to false.
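For example, a sketch of a manual deployment job that blocks the pipeline until someone runs it, because allow_failure defaults to false when when: manual is used inside rules:

```yaml
deploy_job:
  script: make deploy
  rules:
    - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
      when: manual    # allow_failure defaults to false here,
                      # so the pipeline waits for this job to run
```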
when can be used with rules for more dynamic job control.
when can be used with workflow to control when a pipeline can start.
Deprecated keywords
The following keywords are deprecated.
Globally-defined types
The types keyword is deprecated. Use stages instead.
Job-defined type
The type keyword is deprecated. Use stage instead.
Globally-defined image, services, cache, before_script, after_script
Defining image, services, cache, before_script, and
after_script globally is deprecated. Support could be removed
from a future release.
Use default instead. For example:
default:
image: ruby:3.0
services:
- docker:dind
cache:
paths: [vendor/]
before_script:
- bundle config set path vendor/bundle
- bundle install
after_script:
- rm -rf tmp/