We have already wired SAST, secret detection, and software composition analysis into the GitLab pipeline. Those checks cover the source code and its dependencies — but the artifact we actually ship is a container image. Operating system packages, the base image, and everything copied in along the way can carry vulnerabilities of their own. In Part 6 of our series, Patrick Steger and I add container scanning to the pipeline, build a Docker image from the jar we compiled earlier, and push it through Trivy and Grype.
## What Container Scanning Does
Container scanning looks for known vulnerabilities inside your container image. It scans every file in the image — OS packages, libraries, the application binaries — against vulnerability databases. GitLab uses two open source tools under the hood: Trivy and Grype. The feature is part of GitLab Ultimate, and it works for Linux images only; Windows containers are not supported yet.
A heads-up Patrick raises early: container scanning will overlap with the SCA results we already produced. If you have an Ultimate license, GitLab can filter those duplicates out for you. Without Ultimate, expect to see the same library show up in both reports.
## Enabling the Job
As with the previous stages, the scanning job itself is one include line. We pull Container-Scanning.gitlab-ci.yml from GitLab’s templates and add it to the existing list of imports. The catch this time is that the job needs an actual image to scan — and we do not have one yet.
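As a sketch, the include list might look like the following. The template name `Container-Scanning.gitlab-ci.yml` is from this series; the other template entries shown are assumptions standing in for whatever your pipeline already imports:

```yaml
# .gitlab-ci.yml (excerpt) — the first three entries are placeholders
# for the templates added in earlier parts of the series.
include:
  - template: SAST.gitlab-ci.yml
  - template: Secret-Detection.gitlab-ci.yml
  - template: Dependency-Scanning.gitlab-ci.yml
  - template: Container-Scanning.gitlab-ci.yml   # new in this part
```

One line of YAML enables the job; the real work is giving it an image to scan.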
## Building the Image First
Before container scanning can run, we have to produce a container. We start from the Dockerfile that came with the project template, but we change it. The original ran mvn package inside the image and rebuilt the whole application from scratch. That is wasteful — we already compiled the jar in the build job earlier in the pipeline. So we strip the Maven step out and use COPY target to pull the prebuilt jar into the image. The last entry in the Dockerfile runs that jar. The base image is JDK 13.
Then we extend the pipeline with a build_image job. We define a variable CONTAINER_TEST_IMAGE for the image name in the GitLab registry, log in to Docker, build the image, and push it. Because we need the compiled jar, we add a needs: statement at the bottom that makes build_image depend on the earlier build job. Without it, the COPY target line would have nothing to copy.
To run Docker commands inside a GitLab job, we need Docker available. We add docker:dind (Docker-in-Docker) as a service in the services: section of build_image. The service runs a Docker daemon alongside the job, so the Docker CLI inside the runner has something to talk to.
Finally, we point the container scanning job at the image we just built. The template exposes a predefined variable called DOCKER_IMAGE — we set it to CONTAINER_TEST_IMAGE in the global variables section. That tells the scanner exactly which image to pull from the registry and analyse.
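Putting the pieces together, the build_image job could look like the sketch below. The stage name, the job name `build` in needs:, and the tag in CONTAINER_TEST_IMAGE are assumptions; `CI_REGISTRY_*` are GitLab's predefined registry variables:

```yaml
variables:
  # Image name in the GitLab registry (tag scheme is an assumption)
  CONTAINER_TEST_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  # Tell the container scanning template which image to pull and analyse
  DOCKER_IMAGE: $CONTAINER_TEST_IMAGE

build_image:
  stage: build            # hypothetical stage name
  image: docker:latest
  services:
    - docker:dind         # provides the Docker daemon for the job
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE
  needs:
    - build               # the job that produced target/*.jar
```

Without the needs: entry, build_image could start before the jar exists and the COPY in the Dockerfile would fail.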
## So Many Images
It is worth pausing to count Docker images at this point. Each build job runs inside its own image. Now we have created a new image that holds our application. And that new application image is the one container scanning will analyse. Three different roles for “Docker image”, and the pipeline has them all.
## Findings — and a Green Pipeline
The pipeline runs. The container scanning job goes green. Then we open it and find vulnerabilities. Why is the pipeline not red?
GitLab considers a job successful if it finished its work — in this case, “I scanned the image and produced a report.” Findings on their own do not break the pipeline. From a pure security standpoint Patrick would prefer a hard fail on any finding, but in practice you almost always have at least one CVE somewhere, and a pipeline that is permanently red is a pipeline nobody trusts. A reasonable middle ground would be failing only on high or critical severity, but GitLab does not currently offer that switch out of the box. The mitigation Patrick points to is merge request approval rules — we can require sign-off from specific reviewers when a change introduces new vulnerabilities. We will dig into that in a later session.
You can review the findings in two places. First, the Security tab on the pipeline run, filtered by tool — pick “Container Scanning”. Second, in Security & Compliance → Vulnerability Report in the project’s governance area, again filterable by tool.
## Key Takeaways
- **Container scanning checks the artifact you actually ship.** SAST and SCA cover source and dependencies. Container scanning covers the image: OS packages, base image, copied binaries. Skip it and you ship blind.
- **Reuse the build, do not rebuild in the Dockerfile.** Drop `mvn package` from the Dockerfile and `COPY target` the jar that the pipeline already produced. Anything else duplicates work and fragments the artifact you tested.
- **Wire dependencies explicitly with `needs:`.** Container scanning needs an image. The image-build job needs the compiled jar. State those dependencies in the YAML or the order will surprise you.
- **Docker-in-Docker is the price of building images in CI.** Add `docker:dind` as a service on the `build_image` job. Without it, the Docker CLI inside the runner has nothing to talk to.
- **A green pipeline with vulnerabilities is by design.** GitLab marks the job successful if the scan ran. Severity-based pipeline failure is not built in. Use merge request approval rules to keep new findings from sneaking in.
- **Expect overlap with SCA, and have a plan.** Container scanning will rediscover application-library CVEs that SCA already flagged. Ultimate filters duplicates; without Ultimate, agree upfront on which report owns which finding so the team is not arguing in two places.
