GitLab DevSecOps Part 11: Scheduled Pipelines for Production Code


Author
Romano Roth
I believe the next competitive edge isn’t AI itself, it’s the organisation around it. As Chief AI Officer at Zühlke, I work with C-level leaders to build enterprises that sense, decide, and adapt continuously. 20+ years turning this conviction into practice.

Over ten sessions we wired six security tools into a GitLab pipeline that fires on every commit and every Merge Request. So are we done? Not quite. Code in production sits there for weeks or months, and during that time researchers keep finding new CVEs in the dependencies you are already shipping. In Part 11 of the GitLab DevSecOps series, Patrick Steger and I add a scheduled pipeline so the production branch gets re-scanned automatically — without anyone having to push a commit.

Why a Commit-Triggered Pipeline Is Not Enough
#

Every job we built so far runs when something changes. That covers new code, but it does not cover what happens to old code as the world around it changes. A library you pulled in three months ago might have been clean back then and have a critical CVE today. The unit tests will not tell you. The static analyzer will not tell you. The only thing that catches it is re-running the dependency scanner against the same unchanged code.

That is what scheduled pipelines are for: re-execute the security checks on the branch that contains your production release, on a schedule, so newly disclosed vulnerabilities surface even when development has moved on.

What to Run — and What to Skip
#

A scheduled run does not need the entire pipeline. Plenty of jobs are pure waste in this context. Unit tests against unchanged code will give the same answer in three months as they do today. The same is true for SAST against code that has not moved. Re-running them adds minutes and noise, nothing else.

The two jobs that earn their keep on a schedule are the ones that look outside your repo:

  • SCA / Dependency Scanning. Walks the dependency tree, matches every library and version against the latest CVE database. This is the whole reason we are doing this.
  • Container Scanning. The Docker image you shipped to production may have picked up new OS-level vulnerabilities since you built it. Same idea, different layer.

Two jobs, run on a schedule, against the production branch. That is enough for the vast majority of teams.

Setting It Up in GitLab
#

The mechanics are simple, with one trick. GitLab's scheduled pipelines run the same .gitlab-ci.yml as every other pipeline, so each job needs a way to know whether it is allowed to run in scheduled mode. We do that with a variable.

The recipe:

  1. Make sure you have a distinct release branch — the one that mirrors what is in production.
  2. In GitLab, go to CI/CD → Schedules and create a new schedule. Pick the cron interval (daily is a good default, weekly is fine for many teams), select your production release branch, and add a variable named SCAN_ONLY set to any value (we use true).
  3. In .gitlab-ci.yml, add a rules: block to every job you do not want to run on the scheduled pass. The rule says: if SCAN_ONLY is defined, never run this job. The SCA and container-scanning jobs get no such rule, so they keep running.

Save the schedule. From here on, GitLab kicks off the pipeline on the cron you defined, on the branch you picked, with SCAN_ONLY set — and only the security jobs you care about execute.
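Putting the recipe together, a minimal sketch of the pipeline file could look like the following. The job names, stage, and script are illustrative assumptions; the security template paths are GitLab's standard includes, and the SCAN_ONLY gating follows the rule described above:

```yaml
# Sketch of a .gitlab-ci.yml gated for scheduled scans.
# Assumptions: job names, stages, and scripts are placeholders.
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml
  - template: Security/Container-Scanning.gitlab-ci.yml

unit-tests:
  stage: test
  script: ./run-tests.sh        # placeholder test command
  rules:
    # Skip this job on the scheduled scan-only pass.
    - if: $SCAN_ONLY
      when: never
    - when: on_success

# Override the template's SAST job to add the same gate.
sast:
  rules:
    - if: $SCAN_ONLY
      when: never
    - when: on_success

# dependency_scanning and container_scanning get no SCAN_ONLY rule,
# so they also run when the schedule fires with SCAN_ONLY set.
```

With this layout, a normal commit-triggered pipeline runs everything, while the scheduled pipeline (which sets SCAN_ONLY in its variables) executes only the jobs without the gate.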

Why the Variable Is the Key
#

You could solve this with two pipeline files, but then you have two pipelines to keep in sync. As soon as you add a new tool, you have to remember to add it in both places. Using one pipeline file with a SCAN_ONLY rule on every irrelevant job keeps everything in one place. The schedule decides what runs by setting the variable; every job decides for itself whether it cares.

It also makes the intent obvious when someone reads the pipeline. A job with if: $SCAN_ONLY and when: never is self-documenting: “this is not part of the scheduled scan.”
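Concretely, that self-documenting gate on a single job reads as follows (the job name is just an example):

```yaml
unit-tests:
  rules:
    - if: $SCAN_ONLY   # set only by the scheduled scan
      when: never      # ...so this job is not part of it
    - when: on_success # run normally everywhere else
```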

Are We Secure Now?
#

If you have everything we built across the series — SAST, secret detection, SCA, container scanning, DAST, vulnerability management, Merge Request gating, and now a scheduled re-scan of production — you are on the best path GitLab makes available without bolting on third-party tools. You will still find issues. You will still need to triage. But you will find them when they are cheap, instead of when an auditor or an incident hands them to you.

In the next session, we wrap the series with the recommendations Patrick and I would actually give a team starting this from scratch.

Key Takeaways
#

  1. Commit triggers do not catch new CVEs in old code. A library that was clean three months ago can be critical today. You need a schedule to find that.

  2. Run only what changes between runs. SCA and container scanning earn their keep on a schedule. Unit tests and SAST against unchanged code are noise.

  3. One pipeline file, one variable. Set SCAN_ONLY from the schedule, gate irrelevant jobs with a rules: block. No second pipeline to keep in sync.

  4. Pick a real production branch. The schedule is only as useful as the branch it points at. Make sure that branch actually mirrors what is in production.

  5. Daily or weekly is enough. You do not need to scan every six minutes. Pick a cadence that matches how fast you can react to a new finding.

  6. Scheduled scanning closes the loop. Pipelines on commit catch what you wrote. Schedules catch what the world wrote about your dependencies. You need both.