GitHub DevSecOps Part 11: Scheduled Pipelines for Production Code

Author
Romano Roth
I believe the next competitive edge isn’t AI itself, it’s the organisation around it. As Chief AI Officer at Zühlke, I work with C-level leaders to build enterprises that sense, decide, and adapt continuously. 20+ years turning this conviction into practice.

Across ten sessions we wired security checks into a GitHub Actions pipeline that fires on every commit and every Pull Request. That covers code we are actively changing. It does not cover the code that is already running in production while researchers keep finding new CVEs in the libraries it uses. In Part 11 of the GitHub DevSecOps series, Patrick Steger and I add a scheduled workflow that re-scans the production branch — and we run straight into a GitHub limitation worth knowing about up front.

Why Commit Triggers Are Not Enough
#

Every check we built — SAST, SCA, container scanning, DAST, the lot — runs when somebody pushes code. That model assumes the risk surface only changes when the code changes. It does not. A library you pulled in two months ago may have been clean then and carry a critical CVE today. Nothing in your repo moved, and yet your application is now vulnerable.

The fix is to run the security tests on a regular schedule against the branch that holds your production code. New CVEs in old dependencies surface within a day instead of whenever the next hotfix forces a build.

Pick the Right Jobs for the Schedule
#

A scheduled run is not a re-run of the entire pipeline. The jobs that earn their keep are the ones whose results can change without the code changing — anything based on Software Composition Analysis. Concretely, in our pipeline:

  • Build stays in. SCA needs the resolved dependency graph the build produces.
  • SCA / Dependency Scanning stays in. This is the whole reason for the schedule.
  • Container Image Scan stays in. The base image you shipped may have new OS-level CVEs.
  • License Compliance comes out. Licenses do not change while you sleep.
  • DAST comes out. We are not redeploying to a test environment to scan it.

Three jobs, on a cron, against production. That is the shape.

The GitHub Limitation: Default Branch Only
#

In GitHub Actions you add on: schedule: with a cron expression to a workflow file, and the workflow runs on that schedule. Easy. Then you read the small print: the schedule always runs against the default branch, on the latest commit on that branch.
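As a minimal sketch (the workflow and job names here are placeholders, not the exact files from the video):

```yaml
name: Scheduled Security Scan

on:
  schedule:
    # POSIX cron syntax, evaluated in UTC: fires at 03:00 every day.
    - cron: '0 3 * * *'

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      # A scheduled run always checks out the latest commit on the
      # default branch; there is no branch filter to point it elsewhere.
      - uses: actions/checkout@v4
```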

That is a real constraint. If you keep main as your development trunk and ship from a release branch, the schedule will not touch the release branch. The workarounds — using GitHub hooks, references, or external triggers — exist, but they require real custom code. Patrick and I keep things simple in this video and accept the implication: if you want scheduled scans against production, your production branch has to be the default branch. That has knock-on effects (Pull Requests default there, merges go there) so handle it deliberately.

Building the Scheduled Workflow
#

Rather than crowd the existing main pipeline with conditionals, we create a second workflow file. We copy main-pipeline.yml, paste it as schedule.yml, and rename the workflow to “Schedule Main CI/CD Pipeline.”

The trigger swaps from on: push (and friends) to on: schedule with a cron expression. For the demo we set every six minutes so we can see runs back-to-back; in real life you would pick daily or weekly. Then we strip out the jobs we do not need for a scheduled scan: license compliance and DAST go. Build, SCA, and container image scan stay.
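Put together, the trimmed file looks something like this. The job names and step contents are placeholders standing in for whatever the main pipeline defines; the demo cadence of every six minutes is shown in the cron expression:

```yaml
# schedule.yml: copied from main-pipeline.yml, then trimmed
name: Schedule Main CI/CD Pipeline

on:
  schedule:
    - cron: '*/6 * * * *'   # demo cadence; in real life pick e.g. '0 3 * * *' (daily, UTC)

jobs:
  build:                    # stays: SCA needs the resolved dependency graph
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # build steps copied from main-pipeline.yml

  sca:                      # stays: the whole reason for the schedule
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # dependency-scanning steps copied from main-pipeline.yml

  container-scan:           # stays: the shipped base image may carry new OS-level CVEs
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # image-scanning steps copied from main-pipeline.yml

# license-compliance and dast jobs deleted: not useful on a schedule
```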

Commit, push, and we are back in the Actions tab. The next normal pipeline run gets cancelled. A few minutes later the scheduled run kicks off, and the only jobs in the run are the ones we kept. Six minutes later it runs again. The mechanism works.

Two Workflows, Two Responsibilities
#

The two-file approach has a real advantage over packing everything into one workflow with if: conditions: each file has a single responsibility. main-pipeline.yml is what runs on developer activity. schedule.yml is what runs on a clock. New job? You decide which file it belongs in. That clarity is worth the small duplication.

The downside is that the two files can drift. If you add a new SCA-style scanner to the main pipeline, remember to add it to the scheduled one too. A short comment at the top of each file pointing at the other goes a long way.
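A sketch of such a cross-reference comment (the job names echo the ones kept for the schedule):

```yaml
# main-pipeline.yml
# NOTE: schedule.yml mirrors the build, SCA, and container image scan
# jobs from this file. If you add another SCA-style scanner here,
# add it to schedule.yml as well.
```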

Are We Secure Now?
#

With everything built across the series — SAST, secret detection, SCA, container scanning, DAST, Pull Request gating, and now a scheduled re-scan of production — you are about as far as the GitHub-native tooling takes you without significant custom work. You will still find vulnerabilities. You will still need to triage them. But you will find them on a clock instead of by accident.

In the next and last session, Patrick and I lay out our recommendations for a team running DevSecOps on GitHub.

Key Takeaways
#

  1. Commit triggers do not catch CVEs in unchanged code. New vulnerabilities in your dependencies surface every day. A schedule is the only way to see them without redeploying.

  2. Schedule only the SCA-style jobs. Build, SCA, and container scanning earn their keep on a cron. License compliance and DAST do not.

  3. GitHub schedules only run on the default branch. Plan for it. If your production branch is not your default branch, schedules will not see it without serious workarounds.

  4. Use a second workflow file, not conditionals. main-pipeline.yml for commits, schedule.yml for cron. Each file has one purpose; new jobs go in the right place.

  5. Cron interval is a real-life decision. Every six minutes is a demo cadence. Pick daily or weekly based on how fast your team can act on a new finding.

  6. GitHub’s defaults push you toward main as production. That has consequences for Pull Requests, merges, and human error. Make the choice consciously, not by accident.