Week 10 - DevOps¶
Topic¶
Auditing a third-party container image for supply-chain risk (manual layer inspection with Dive, automated CVE scanning with Trivy), and migrating the existing MinIO and inventory API containers from raw docker run to a single Docker Compose stack — without losing the data they currently hold.
Company Requests¶
Ticket #1001: Audit the consultancy image
"We've started using a Docker image from an external consultancy team for the FizzOps inventory project. Compliance flagged it for review before we run it anywhere. Audit the image's layers manually — there have been a few high-profile supply-chain attacks lately and we don't want to be next. Specifically, identify any layer that introduces something it shouldn't."
Ticket #1002: Run an automated CVE scan
"On top of the manual audit, run a vulnerability scanner against the same image and report at least one CRITICAL CVE. We need this for the security review docs."
Ticket #1003: Consolidate Docker workloads with Compose
"Operations is tired of `docker run` commands buried in shell history. Our MinIO and inventory API containers should be managed declaratively. Move them into a single `docker-compose.yml`. The named volume holding our backup data and the inventory bind-mount must NOT be lost in the migration — those have real data."
Image hosting
The consultancy image lives at `registry.hpc.ut.ee/public/lab10-consultancy:latest`. It is purely an audit artifact — do not run any container from it. The image is built on a deliberately old base so Trivy will report several CRITICAL CVEs, and contains a single planted file in one layer for Dive to find.
Scoring Checks¶
- Check 10.1: The malicious layer's SHA256 digest is recorded in `/home/centos/lab10/malicious_layer.txt`.
    - Method: SSH into the VM, read the file, compare against the expected layer digest.
    - Expected: file exists; content (with the optional `sha256:` prefix and any whitespace stripped) matches the expected digest. Unique prefixes of at least 12 hex characters are accepted.
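The acceptance rule is small enough to sketch in shell. This is a hypothetical illustration of the logic, not the grader's actual code; both the expected digest and the example answer below are placeholder values:

```shell
# Hypothetical sketch of the Check 10.1 acceptance rule; all values are placeholders.
expected="0000000000000000000000000000000000000000000000000000000000000000"
printf 'sha256:000000000000\n' > /tmp/malicious_layer.txt   # example answer file
recorded=$(tr -d '[:space:]' < /tmp/malicious_layer.txt)
recorded=${recorded#sha256:}            # the sha256: prefix is optional
case "$expected" in
  "$recorded"*)                         # answer must be a prefix of the expected digest
    [ "${#recorded}" -ge 12 ] && echo accepted ;;
esac
```

With the placeholder answer above this prints `accepted`; an answer shorter than 12 characters, or one that is not a prefix of the expected digest, prints nothing.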
- Check 10.2: A CRITICAL CVE-ID is recorded in `/home/centos/lab10/malicious_cve.txt`.
    - Method: SSH-based; read the file, validate the format, check membership in a curated allowlist of CRITICAL CVEs that Trivy reports against the image.
    - Expected: file exists; content matches `CVE-YYYY-NNNNN`; the CVE-ID is on the accepted list.
- Check 10.3: The MinIO container is managed by Docker Compose.
    - Method: SSH; locate the MinIO container; check its labels for `com.docker.compose.project`.
    - Expected: MinIO container running with the Compose project label set.
- Check 10.4: The inventory API container is managed by Docker Compose.
    - Method: SSH; locate a container with `/data/inventory` mounted; check its labels for `com.docker.compose.project`.
    - Expected: Inventory container running with the Compose project label set.
Week 9 checks still apply
The week-9 checks (Lab09_03, Lab09_04, Lab09_05, Lab09_06, Lab09_07) continue to run and serve as regression checks for the migration. If your S3 endpoint stops answering, your restic snapshots are missing, or the inventory API stops accepting bearer-token auth after Task 4, those will fail first — fix them before worrying about the new Lab10 checks.
Tasks¶
Task 1: Prepare the audit environment¶
Create a working directory for this week's artefacts and install the two image-audit tools plus the Docker Compose plugin.
Complete
- Create `/home/centos/lab10/`. This is where you'll write the layer-SHA file, the CVE-ID file, and the Compose project.
- Install Dive (interactive layer audit).
- Install Trivy (automated CVE scan).
- Install the Docker Compose plugin (`docker-compose-plugin`).
Reference: Technologies: Dive, Technologies: Trivy, Technologies: Docker Compose
Task 2: Audit the consultancy image with Dive¶
The consultancy team has provided a container image. Before running anything from it, audit its layers manually and identify the layer that introduces a clearly out-of-place file.
Complete
- Pull the image:

  ```shell
  sudo docker pull registry.hpc.ut.ee/public/lab10-consultancy:latest
  ```

- Open it in Dive:

  ```shell
  sudo dive registry.hpc.ut.ee/public/lab10-consultancy:latest
  ```

- Walk the layer list (`↑`/`↓`) top-to-bottom. Switch to the file-tree panel with `Tab` and use `Ctrl+U` to hide unmodified files. The remaining files are exactly what the selected layer added or changed.

- Find the layer whose `Command:` field copies a clearly suspicious file into a system path. Note the layer's `Digest:` field — the `sha256:...` value at the top of the layer details panel.

- Write that digest to `/home/centos/lab10/malicious_layer.txt`. Both the full digest and a unique prefix of at least 12 hex characters are accepted.
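As a sketch, you can record the digest and sanity-check its format in one go. The digest below is a placeholder (substitute the `Digest:` value you noted in Dive), and `LAB10_DIR` is a hypothetical variable used here only so the snippet also runs outside the VM:

```shell
# Placeholder digest; substitute the Digest: value you noted in Dive.
dir=${LAB10_DIR:-/tmp}                  # on the VM: LAB10_DIR=/home/centos/lab10
printf 'sha256:0000000000000000000000000000000000000000000000000000000000000000\n' \
  > "$dir/malicious_layer.txt"
# Self-check: optional sha256: prefix plus 12-64 lowercase hex characters.
grep -Eq '^(sha256:)?[0-9a-f]{12,64}$' "$dir/malicious_layer.txt" && echo format-ok
```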
Reference: SOP: Container Operations — Audit an Image's Layers with Dive, Technologies: Dive
Task 3: Scan the consultancy image with Trivy¶
In parallel to the manual audit, run an automated vulnerability scan against the same image. The base image is deliberately old, so several CRITICAL CVEs will show up.
Complete
- Run a CRITICAL-only scan:

  ```shell
  trivy image --severity CRITICAL --no-progress \
    registry.hpc.ut.ee/public/lab10-consultancy:latest
  ```

  The first run will download the vulnerability database (~50–100 MB). This is normal.

- Pick any one CRITICAL CVE from the output (the `Vulnerability` column).

- Write its CVE-ID (`CVE-YYYY-NNNNN`) to `/home/centos/lab10/malicious_cve.txt`. One CVE-ID on a single line, no extra text.
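A quick way to write the file and verify the format matches what the check expects. `CVE-2021-44228` is only an illustrative CVE-ID (pick one that Trivy actually reported against this image), and `LAB10_DIR` is a hypothetical variable used so the snippet also runs outside the VM:

```shell
dir=${LAB10_DIR:-/tmp}                  # on the VM: LAB10_DIR=/home/centos/lab10
printf 'CVE-2021-44228\n' > "$dir/malicious_cve.txt"   # illustrative CVE-ID only
# Self-check: exactly one CVE-ID on the line, nothing else.
grep -Eq '^CVE-[0-9]{4}-[0-9]{4,}$' "$dir/malicious_cve.txt" && echo format-ok
```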
Why is the manual audit needed if Trivy already finds CVEs?
Trivy looks up packages and dependencies against a public vulnerability database. It cannot detect purposeful malicious code that someone has planted in their own image — there's no CVE for "ACME Consultancy embedded a backdoor in their Dockerfile". Tools like Dive let you see exactly what the image actually contains, which is the only way to catch supply-chain attacks like the XZ backdoor (CVE-2024-3094) before they hit your systems.
Reference: SOP: Container Operations — Scan an Image for Vulnerabilities with Trivy, Technologies: Trivy
Task 4: Migrate MinIO and the inventory API to Docker Compose¶
The two containers from week 9 (MinIO + inventory API) are currently managed by long `docker run` commands. Move them into a single `docker-compose.yml` so the stack can be brought up or down with one command. The migration must preserve the existing named volume backing MinIO and the `/data/inventory` bind mount used by the inventory API.
Read this before touching anything
The existing MinIO container holds your week-9 backup data inside a named Docker volume. If you create a new (empty) volume with the same name in Compose without marking it `external: true`, your backups will be silently orphaned. Always validate with `sudo docker compose config` and check `sudo docker volume ls` before stopping the old container.
Complete
- Inventory the current containers. For each one, run `sudo docker inspect <container>` and note: image, port mappings, volume mounts (named volumes vs bind mounts), environment variables, and restart policy.

- Write `/home/centos/lab10/docker-compose.yml`. The file should:
    - Define a `minio` service that uses the same image, port mappings (still bound to ********* for the Apache proxy), env vars, and named volume mount as your week-9 container.
    - Define an `inventory` service that uses the same image you built in week 9, with the `/data/inventory` bind mount and the localhost-bound port the Apache proxy expects.
    - Declare the pre-existing MinIO named volume at the top level with `external: true`, so Compose reuses it instead of creating an empty new one.
    - Set `restart: unless-stopped` on both services.
- Validate the YAML before doing anything destructive:

  ```shell
  cd /home/centos/lab10
  sudo docker compose config
  ```

- Stop and remove the old `docker run`-managed containers (named volumes survive `docker rm`):

  ```shell
  sudo docker stop <minio-container> <inventory-container>
  sudo docker rm <minio-container> <inventory-container>
  ```

  Use the names from `sudo docker ps`.

- Bring the stack up:

  ```shell
  sudo docker compose up -d
  sudo docker compose ps
  ```

- Verify everything still works end-to-end:
    - `https://s3.<vm_name>.sysadm.ee/minio/health/live` returns HTTP 200 (week-9 check `Lab09_04`).
    - `restic snapshots` against the bucket still lists your existing snapshots (week-9 check `Lab09_05`).
    - Inventory API still answers with the bearer token (week-9 check `Lab09_07`).
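For orientation, here is a minimal sketch of what such a file could look like. Every image tag, port, environment variable, and volume name below is a placeholder — copy the real values from your `docker inspect` output, not from here:

```yaml
# Sketch only: all images, ports, env vars, and names are placeholders,
# not the values your week-9 containers actually use.
services:
  minio:
    image: <minio-image-from-week-9>
    ports:
      - "<bind-address>:9000:9000"      # keep the binding the Apache proxy expects
    environment:
      MINIO_ROOT_USER: "<user>"
      MINIO_ROOT_PASSWORD: "<password>"
    volumes:
      - <existing-volume-name>:/data    # the named volume holding your backups
    restart: unless-stopped

  inventory:
    image: <inventory-image-from-week-9>
    ports:
      - "<bind-address>:<port>:<port>"
    volumes:
      - /data/inventory:/data/inventory # bind mount, preserved as-is
    restart: unless-stopped

volumes:
  <existing-volume-name>:
    external: true                      # reuse the existing volume; never recreate it
```

The `external: true` declaration is the load-bearing line: without it, Compose creates a fresh empty volume under its own project-prefixed name and your backups are orphaned.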
Compose vs. raw docker run — what changes operationally?
Compose doesn't add capabilities — anything you can do with `docker compose up` you can also do with a long enough `docker run`. What it adds is a checked-in description of "the production deployment".
Consider:
- You join a team. The deployment is 12 manual `docker run` commands buried in someone's bash history. How do you know what's running?
- You need to roll back after a bad deploy. With raw `docker run`, you stop/rm/run each container by hand. With Compose, `git checkout HEAD~1 -- docker-compose.yml && docker compose up -d`.
- You need to spin up a staging copy. With raw `docker run`, copy and edit a wiki page. With Compose, `cp docker-compose.yml docker-compose.staging.yml`.
Reference: SOP: Container Operations — Migrate Containers from docker run to Docker Compose, Technologies: Docker Compose, Concepts: Container Orchestration
Ansible Tips¶
This section covers tips for automating the tasks in this lab with Ansible.
Audit Tools Installation¶
Both Dive and Trivy install idempotently:

```yaml
- name: Install Dive
  dnf:
    name: "https://github.com/wagoodman/dive/releases/download/v0.13.1/dive_0.13.1_linux_amd64.rpm"
    state: present
    disable_gpg_check: yes

- name: Add Trivy repository
  yum_repository:
    name: trivy
    description: Trivy repository
    baseurl: "https://aquasecurity.github.io/trivy-repo/rpm/releases/$basearch/"
    gpgcheck: yes
    gpgkey: https://aquasecurity.github.io/trivy-repo/rpm/public.key

- name: Install Trivy and Compose plugin
  dnf:
    name:
      - trivy
      - docker-compose-plugin
    state: present
```
Compose File Deployment¶
Don't try to manage `docker compose up` from Ansible — it's not idempotent. Instead, deploy the supporting files and run Compose manually once. Same pattern as week 9's "build the supporting files, run the actual `docker run` once":
```yaml
- name: Create lab10 project directory
  file:
    path: /home/centos/lab10
    state: directory
    owner: centos
    group: centos
    mode: '0755'

- name: Deploy docker-compose.yml
  copy:
    src: files/docker-compose.yml
    dest: /home/centos/lab10/docker-compose.yml
    owner: centos
    group: centos
    mode: '0644'
```
After Ansible has deployed the file, run `sudo docker compose up -d` from the project directory once.
Automation Limits¶
The Dive and Trivy audit steps are one-off security reviews — they don't need to be automated. Record the layer SHA and CVE-ID by hand. In a production setup, Trivy is usually invoked as a CI gate against every built image, but that's outside the scope of this lab.
Useful Modules¶
- `dnf` — install Dive (from a URL), Trivy, `docker-compose-plugin`
- `yum_repository` — declare the Trivy RPM repo
- `copy` / `template` — deploy `docker-compose.yml` from a fixed file or Jinja2 template
- `file` — create the project directory
dnf— install Dive (from a URL), Trivy,docker-compose-pluginyum_repository— declare the Trivy RPM repocopy/template— deploydocker-compose.ymlfrom a fixed file or Jinja2 templatefile— create the project directory