Control how jobs run

DETAILS: Tier: Free, Premium, Ultimate Offering: Self-managed, GitLab Dedicated

When a new pipeline starts, GitLab checks the pipeline configuration to determine which jobs should run in that pipeline. You can configure jobs to run depending on factors like the status of variables, or the pipeline type.

To configure a job to be included or excluded from certain pipelines, use rules.

Use needs to configure a job to run as soon as the earlier jobs it depends on finish running.

Create a job that must be run manually

You can require that a job doesn't run unless a user starts it. This is called a manual job. You might want to use a manual job for something like deploying to production.

To specify a job as manual, add when: manual to the job in the .gitlab-ci.yml file.
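For example, a minimal manual job could look like the following sketch (the job name deploy_review is illustrative):

```yaml
# Illustrative job name; any job becomes manual with when: manual
deploy_review:
  stage: deploy
  script:
    - echo "Deploying review app..."
  when: manual
```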

By default, manual jobs display as skipped when the pipeline starts.

You can use protected branches to more strictly protect manual deployments from being run by unauthorized users.

Types of manual jobs

Manual jobs can be either optional or blocking.

In optional manual jobs:

  • allow_failure is true, which is the default setting for jobs that have when: manual and no rules, or when: manual defined outside of rules.
  • The status does not contribute to the overall pipeline status. A pipeline can succeed even if all of its manual jobs fail.

In blocking manual jobs:

  • allow_failure is false, which is the default setting for jobs that have when: manual defined inside rules.
  • The pipeline stops at the stage where the job is defined. To let the pipeline continue running, run the manual job.
  • Merge requests in projects with Pipelines must succeed enabled can't be merged with a blocked pipeline.
  • The pipeline shows a status of blocked.
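The difference can be sketched as follows (job names and the branch condition are illustrative):

```yaml
# Optional: allow_failure defaults to true for plain when: manual,
# so the pipeline can succeed even if this job never runs
optional_deploy:
  script: echo "optional manual job"
  when: manual

# Blocking: when: manual inside rules defaults allow_failure to false,
# so the pipeline shows "blocked" at this stage until the job runs
blocking_deploy:
  script: echo "blocking manual job"
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual
```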

When using manual jobs in triggered pipelines with strategy: depend, the type of manual job can affect the trigger job's status while the pipeline runs.
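For example, a trigger job like the following sketch mirrors the downstream pipeline's status, so a blocking manual job in the child pipeline keeps the trigger job in a running or blocked state:

```yaml
# Illustrative trigger job; the child pipeline path is an example value
trigger_child:
  trigger:
    include: path/to/child-pipeline.yml
    strategy: depend
```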

Run a manual job

To run a manual job, you must have permission to merge to the assigned branch:

  1. Go to the pipeline, job, environment, or deployment view.
  2. Next to the manual job, select Run ({play}).

You can also add custom CI/CD variables when running a manual job.

Add a confirmation dialog for manual jobs

Use manual_confirmation with when: manual to add a confirmation dialog for manual jobs. The confirmation dialog helps to prevent accidental deployments or deletions, especially for sensitive jobs like those that deploy to production.

Users are prompted to confirm the action before the manual job runs, which provides an additional layer of safety and control.
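For example (the job name and confirmation message are illustrative):

```yaml
# Illustrative job; the confirmation message is free text shown in the dialog
delete_environment:
  stage: deploy
  script: echo "Deleting production resources..."
  when: manual
  manual_confirmation: 'Are you sure you want to delete this environment?'
```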

Protect manual jobs

DETAILS: Tier: Premium, Ultimate Offering: Self-managed, GitLab Dedicated

Use protected environments to define a list of users authorized to run a manual job. You can authorize only the users associated with a protected environment to trigger manual jobs, which can:

  • More precisely limit who can deploy to an environment.
  • Block a pipeline until an approved user "approves" it.

To protect a manual job:

  1. Add an environment to the job. For example:

     deploy_prod:
       stage: deploy
       script:
         - echo "Deploy to production server"
       environment:
         name: production
       when: manual
  2. In the protected environments settings, select the environment (production in this example) and add the users, roles, or groups that are authorized to trigger the manual job to the Allowed to Deploy list. Only users in this list, and GitLab administrators (who can always use protected environments), can trigger this manual job.

You can use protected environments with blocking manual jobs to have a list of users allowed to approve later pipeline stages. Add allow_failure: false to the protected manual job and the pipeline's next stages only run after the manual job is triggered by authorized users.
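Combining both, a blocking protected manual job could look like this sketch, assuming production is configured as a protected environment:

```yaml
# Only users in the protected environment's Allowed to Deploy list can run
# this job; allow_failure: false blocks later stages until they do
deploy_prod:
  stage: deploy
  script:
    - echo "Deploy to production server"
  environment:
    name: production
  when: manual
  allow_failure: false
```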

Run a job after a delay

Use when: delayed to execute scripts after a waiting period, or if you want to avoid jobs immediately entering the pending state.

You can set the period with the start_in keyword. The value of start_in is an elapsed time in seconds, unless a unit is provided. The minimum is one second, and the maximum is one week. Examples of valid values include:

  • '5' (a value with no unit must be surrounded by single quotes)
  • 5 seconds
  • 30 minutes
  • 1 day
  • 1 week

When a stage includes a delayed job, the pipeline doesn't progress until the delayed job finishes. You can use this keyword to insert delays between different stages.

The timer of a delayed job starts immediately after the previous stage completes. Similar to other types of jobs, a delayed job's timer doesn't start unless the previous stage passes.

The following example creates a job named timed rollout 10% that is executed 30 minutes after the previous stage completes:

timed rollout 10%:
  stage: deploy
  script: echo 'Rolling out 10% ...'
  when: delayed
  start_in: 30 minutes
  environment: production

To stop the active timer of a delayed job, select Unschedule ({time-out}). This job can no longer be scheduled to run automatically. You can, however, execute the job manually.

To start a delayed job manually, select Unschedule ({time-out}) to stop the delay timer and then select Run ({play}). GitLab Runner starts the job soon after.

Parallelize large jobs

To split a large job into multiple smaller jobs that run in parallel, use the parallel keyword in your .gitlab-ci.yml file.

Different languages and test suites have different methods to enable parallelization. For example, use Semaphore Test Boosters and RSpec to run Ruby tests in parallel:

# Gemfile
source 'https://rubygems.org'

gem 'rspec'
gem 'semaphore_test_boosters'

Then configure the job in the .gitlab-ci.yml file:

test:
  parallel: 3
  script:
    - bundle
    - bundle exec rspec_booster --job $CI_NODE_INDEX/$CI_NODE_TOTAL

You can then go to the Jobs tab of a new pipeline build and see your RSpec job split into three separate jobs.

WARNING: Test Boosters reports usage statistics to the author.

Run a one-dimensional matrix of parallel jobs

You can create a one-dimensional matrix of parallel jobs:

deploystacks:
  stage: deploy
  script:
    - bin/deploy
  parallel:
    matrix:
      - PROVIDER: [aws, ovh, gcp, vultr]
  environment: production/$PROVIDER

You can also create a multi-dimensional matrix.

Run a matrix of parallel trigger jobs

You can run a trigger job multiple times in parallel in a single pipeline, but with different variable values for each instance of the job.

deploystacks:
  stage: deploy
  trigger:
    include: path/to/child-pipeline.yml
  parallel:
    matrix:
      - PROVIDER: aws
        STACK: [monitoring, app1]
      - PROVIDER: ovh
        STACK: [monitoring, backup]
      - PROVIDER: [gcp, vultr]
        STACK: [data]

This example generates 6 parallel deploystacks trigger jobs, each with different values for PROVIDER and STACK, and they create 6 different child pipelines with those variables.

deploystacks: [aws, monitoring]
deploystacks: [aws, app1]
deploystacks: [ovh, monitoring]
deploystacks: [ovh, backup]
deploystacks: [gcp, data]
deploystacks: [vultr, data]

Select different runner tags for each parallel matrix job

You can use variables defined in parallel: matrix with the tags keyword for dynamic runner selection:

deploystacks:
  stage: deploy
  parallel:
    matrix:
      - PROVIDER: aws
        STACK: [monitoring, app1]
      - PROVIDER: gcp
        STACK: [data]
  tags:
    - ${PROVIDER}-${STACK}
  environment: $PROVIDER/$STACK

Fetch artifacts from a parallel:matrix job

You can fetch artifacts from a job created with parallel:matrix by using the dependencies keyword. Use the job name as the value for dependencies as a string in the form:

<job_name> [<matrix argument 1>, <matrix argument 2>, ... <matrix argument N>]

For example, to fetch the artifacts from the job with a RUBY_VERSION of 2.7 and a PROVIDER of aws:

ruby:
  image: ruby:${RUBY_VERSION}
  parallel:
    matrix:
      - RUBY_VERSION: ["2.5", "2.6", "2.7", "3.0", "3.1"]
        PROVIDER: [aws, gcp]
  script: bundle install

deploy:
  image: ruby:2.7
  stage: deploy
  dependencies:
    - "ruby: [2.7, aws]"
  script: echo hello
  environment: production

Quotes around the dependencies entry are required.

Specify a parallelized job using needs with multiple parallelized jobs

You can use variables defined in needs:parallel:matrix with multiple parallelized jobs.

For example:

linux:build:
  stage: build
  script: echo "Building linux..."
  parallel:
    matrix:
      - PROVIDER: aws
        STACK:
          - monitoring
          - app1
          - app2

mac:build:
  stage: build
  script: echo "Building mac..."
  parallel:
    matrix:
      - PROVIDER: [gcp, vultr]
        STACK: [data, processing]

linux:rspec:
  stage: test
  needs:
    - job: linux:build
      parallel:
        matrix:
          - PROVIDER: aws
            STACK: app1
  script: echo "Running rspec on linux..."

mac:rspec:
  stage: test
  needs:
    - job: mac:build
      parallel:
        matrix:
          - PROVIDER: [gcp, vultr]
            STACK: [data]
  script: echo "Running rspec on mac..."

production:
  stage: deploy
  script: echo "Running production..."
  environment: production

This example generates several jobs. The parallel jobs each have different values for PROVIDER and STACK.

  • 3 parallel linux:build jobs:
    • linux:build: [aws, monitoring]
    • linux:build: [aws, app1]
    • linux:build: [aws, app2]
  • 4 parallel mac:build jobs:
    • mac:build: [gcp, data]
    • mac:build: [gcp, processing]
    • mac:build: [vultr, data]
    • mac:build: [vultr, processing]
  • A linux:rspec job.
  • A mac:rspec job.
  • A production job.

The jobs have three paths of execution:

  • Linux path: The linux:rspec job runs as soon as the linux:build: [aws, app1] job finishes, without waiting for mac:build to finish.
  • macOS path: The mac:rspec job runs as soon as the mac:build: [gcp, data] and mac:build: [vultr, data] jobs finish, without waiting for linux:build to finish.
  • The production job runs as soon as all previous jobs finish.