# Container Operations

## Prerequisites

- `/etc/docker/daemon.json` created with UT-approved network configuration
- Docker installed (`docker-ce`, `docker-ce-cli`, `containerd.io`) and daemon active
## Procedure: Install Docker Safely

**When to use:** Setting up Docker on a UT cloud VM for the first time.

**Steps:**

1. Create the daemon configuration before installing or starting Docker:

    ```bash
    sudo mkdir -p /etc/docker
    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "bip": "192.168.67.1/24",
      "fixed-cidr": "192.168.67.0/24",
      "storage-driver": "overlay2",
      "mtu": 1400,
      "default-address-pools": [
        { "base": "192.168.167.1/24", "size": 24 },
        { "base": "192.168.168.1/24", "size": 24 },
        { "base": "192.168.169.1/24", "size": 24 },
        { "base": "192.168.170.1/24", "size": 24 },
        { "base": "192.168.171.1/24", "size": 24 },
        { "base": "192.168.172.1/24", "size": 24 },
        { "base": "192.168.173.1/24", "size": 24 },
        { "base": "192.168.174.1/24", "size": 24 }
      ]
    }
    EOF
    ```

2. Add the Docker repository and install the packages:

    ```bash
    sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    sudo dnf install docker-ce docker-ce-cli containerd.io
    ```

3. Start and enable Docker:

    ```bash
    sudo systemctl enable --now docker
    ```

4. Verify the network configuration is correct:

    ```bash
    ip -4 addr show docker0   # Should show 192.168.67.1/24
    ```

5. Run a test container:

    ```bash
    sudo docker run --rm registry.hpc.ut.ee/mirror/library/hello-world
    ```

    Expected output contains: `Hello from Docker!`
**Troubleshooting:**

- If `docker0` shows an unexpected IP (e.g., `172.17.0.1`): stop Docker, fix `daemon.json`, then run `sudo systemctl restart docker`.
- If the VM loses SSH connectivity after starting Docker without `daemon.json`: you must access the VM via the ETAIS console.
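Because a malformed `daemon.json` stops `dockerd` from starting at all, it can help to validate the file before any restart. A minimal sketch using Python's stdlib JSON parser (the function name is illustrative, not part of Docker):

```shell
# validate_daemon_json: check a daemon.json for JSON syntax errors before
# restarting Docker -- a malformed file prevents dockerd from starting.
# Defaults to the standard location; pass another path to override.
validate_daemon_json() {
    file="${1:-/etc/docker/daemon.json}"
    if python3 -m json.tool "$file" > /dev/null 2>&1; then
        echo "OK: $file is valid JSON"
    else
        echo "ERROR: $file is not valid JSON" >&2
        return 1
    fi
}
```

Typical use: `validate_daemon_json && sudo systemctl restart docker`, so a broken edit never takes the daemon down.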
## Procedure: Pull and Run a Container

**When to use:** Launching a new application instance.

**Steps:**

1. Pull the image:

    ```bash
    docker pull registry.hpc.ut.ee/mirror/library/nginx:latest
    ```

2. Run the container (background mode):

    ```bash
    docker run -d --name my-nginx -p 8080:80 nginx:latest
    ```

3. Verify:

    ```bash
    docker ps
    ```

**Troubleshooting:**

- "Bind for 0.0.0.0:8080 failed: port is already allocated": choose a different host port (e.g., `-p 8081:80`).
## Procedure: Build an Image from a Dockerfile

**When to use:** Creating a custom image with your application code.

**Steps:**

1. Create a `Dockerfile`:

    ```dockerfile
    FROM registry.hpc.ut.ee/mirror/library/python:3.9
    COPY . /app
    WORKDIR /app
    RUN pip install -r requirements.txt
    CMD ["python", "app.py"]
    ```

2. Build the image:

    ```bash
    docker build -t myapp:v1 .
    ```

**Troubleshooting:**

- "COPY failed": ensure the source files exist in the build context (the directory where you run the build).
## Procedure: Inspect a Running Container

**When to use:** Debugging configuration or networking issues.

**Steps:**

1. View JSON metadata:

    ```bash
    docker inspect my-nginx
    ```

2. Filter specific info (e.g., IP address):

    ```bash
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-nginx
    ```

**Troubleshooting:**

- "No such object": check the container name with `docker ps -a`.
## Procedure: View Container Logs

**When to use:** Application is crashing or behaving incorrectly.

**Steps:**

1. View all logs:

    ```bash
    docker logs my-nginx
    ```

2. Follow logs in real-time:

    ```bash
    docker logs -f my-nginx
    ```

**Troubleshooting:**

- Logs are empty: ensure the application writes to stdout/stderr, not a file.
## Procedure: Execute a Command Inside a Container

**When to use:** Manual troubleshooting, checking files, or database administration inside a container.

**Steps:**

1. Open a shell:

    ```bash
    docker exec -it my-nginx /bin/bash
    ```

    (Use `/bin/sh` if bash is not available, e.g., Alpine images.)

2. Run a single command:

    ```bash
    docker exec my-nginx cat /etc/nginx/nginx.conf
    ```

**Troubleshooting:**

- "exec failed": the container might be stopped or crashing.
## Procedure: Publish a Container Port

**When to use:** Exposing an internal service to the host network.

**Steps:**

1. Map the port at runtime:

    ```bash
    docker run -p <host_port>:<container_port> <image>
    ```

2. Example (host 8080 -> container 80):

    ```bash
    docker run -p 8080:80 nginx
    ```

**Troubleshooting:**

- Service unreachable: check the host firewall and ensure the container is listening on `0.0.0.0`, not `127.0.0.1`.
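To act on the troubleshooting point above, `sudo ss -tln` lists listening sockets with their bind addresses. A small helper that filters one port from saved `ss -tln` output (an illustrative sketch, not a standard tool):

```shell
# bind_scope: given a file containing `ss -tln` output and a port number,
# print the local address the listener on that port is bound to.
# 0.0.0.0:<port> means all interfaces; 127.0.0.1:<port> means loopback only.
bind_scope() {
    awk -v port="$2" '$4 ~ (":" port "$") { print $4 }' "$1"
}
```

Typical use: `sudo ss -tln > /tmp/ss-out` then `bind_scope /tmp/ss-out 8080`.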
## Procedure: Run a Container with a Named Volume

**When to use:** Deploying a service that needs persistent storage managed by Docker (databases, object stores, etc.).

**Steps:**

1. Create the named volume:

    ```bash
    sudo docker volume create myservice-data
    ```

2. Run the container, mounting the volume:

    ```bash
    sudo docker run -d \
      --name myservice \
      --restart unless-stopped \
      -v myservice-data:/var/lib/myservice \
      myservice-image
    ```

    The volume data lives at `/var/lib/docker/volumes/myservice-data/_data` on the host.

3. Inspect the volume:

    ```bash
    sudo docker volume inspect myservice-data
    sudo docker volume ls
    ```

**Troubleshooting:**

- If data is missing after container recreation: confirm you used the same volume name and mount path.
- Named volumes are not deleted by `docker rm`; use `docker volume rm myservice-data` to delete them explicitly.
## Procedure: Keep a Container Running on Boot

**When to use:** Making a container survive system reboots or process crashes without wrapping it in a systemd unit.

**Steps:**

1. Set a restart policy at run time:

    ```bash
    sudo docker run -d --name myapp --restart unless-stopped myimage
    ```

    `unless-stopped` restarts on crash and on host reboot, but stays stopped if you explicitly ran `docker stop`.

2. Or update an existing container:

    ```bash
    sudo docker update --restart unless-stopped myapp
    ```

3. Verify the policy was applied:

    ```bash
    sudo docker inspect myapp | grep -A1 RestartPolicy
    ```

    Expected: `"Name": "unless-stopped"`

4. Confirm Docker itself starts on boot (it should already be enabled):

    ```bash
    sudo systemctl is-enabled docker
    ```

**Troubleshooting:**

- Container does not start after reboot: check `docker ps -a` for exit codes and `docker logs myapp` for errors.
- `docker inspect` shows `"Name": "no"`: the restart policy was not set. Run `docker update` again.
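The `grep -A1` check above depends on how the JSON happens to be formatted. When auditing saved `docker inspect` dumps (for example, collected from several hosts), a structured query is more robust. A sketch using `jq`; the function name and dump-file workflow are illustrative:

```shell
# restart_policy_from_dump: extract the restart policy from a saved
# `docker inspect <container>` JSON dump. docker inspect returns a JSON
# array of one object per container, hence the [0] index.
restart_policy_from_dump() {
    jq -r '.[0].HostConfig.RestartPolicy.Name' "$1"
}
```

Typical use: `sudo docker inspect myapp > myapp.json` then `restart_policy_from_dump myapp.json`, which prints e.g. `unless-stopped`.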
## Procedure: Containerise an Existing Service

**When to use:** Replacing a process managed by systemd with an equivalent Docker container.

**Steps:**

1. Stop and disable the existing systemd service:

    ```bash
    sudo systemctl stop myservice
    sudo systemctl disable myservice
    ```

2. Write a `Dockerfile` for the application. A Dockerfile needs to:

    - Choose a suitable base image (`FROM`)
    - Set a working directory inside the image (`WORKDIR`)
    - Copy application files and any dependency manifest into the image (`COPY`)
    - Install dependencies at build time (`RUN`)
    - Document the port the app listens on (`EXPOSE`)
    - Set the command that starts the application (`CMD`)

    Check the existing systemd unit file (`ExecStart=` line) for the exact command and any CLI arguments — include those in `CMD`. See Technologies: Docker — Dockerfile Reference for a full explanation of each instruction.

3. Build the image (run from the directory containing the `Dockerfile`):

    ```bash
    sudo docker build -t myapp:latest .
    ```

4. Run the container, bind-mounting any host paths the application must read/write:

    ```bash
    sudo docker run -d \
      --name myapp \
      --restart unless-stopped \
      -p 127.0.0.1:5000:5000 \
      -v /data/myapp:/data/myapp \
      myapp:latest
    ```

    Binding to `127.0.0.1:5000` ensures the port is only accessible locally (via a reverse proxy), not directly from the network.

5. Verify the application responds:

    ```bash
    curl http://127.0.0.1:5000/
    ```

**Troubleshooting:**

- Container exits immediately: check `sudo docker logs myapp` for the error. Common causes: missing files in the image, wrong `CMD`, or a port already in use.
- Old systemd service still running and occupying the port: `systemctl stop` it before starting the container.
## Procedure: Back Up Files to S3 with Restic

**When to use:** Creating encrypted, deduplicated backups of host files into an S3-compatible bucket (e.g., MinIO).

**Steps:**

1. Install restic:

    ```bash
    sudo dnf install restic
    ```

2. Set up environment variables (put these in a script or `~root/.bashrc` to avoid retyping):

    ```bash
    export AWS_ACCESS_KEY_ID="<minio-root-user>"
    export AWS_SECRET_ACCESS_KEY="<minio-root-password>"
    export RESTIC_REPOSITORY="s3:http://127.0.0.1:9000/inventory-backup"
    export RESTIC_PASSWORD="<backup-encryption-password>"
    ```

    `RESTIC_PASSWORD` encrypts the backup data — do not lose it, or you cannot restore.

3. Initialise the restic repository (one time only):

    ```bash
    sudo -E restic init
    ```

    Expected output contains: `created restic repository`

4. Run the first backup:

    ```bash
    sudo -E restic backup /data/inventory
    ```

5. Verify snapshots:

    ```bash
    sudo -E restic snapshots
    ```

6. Optional — automate with cron (runs daily at 02:00):

    ```bash
    sudo crontab -e
    # Add:
    # 0 2 * * * AWS_ACCESS_KEY_ID=<user> AWS_SECRET_ACCESS_KEY=<pass> RESTIC_REPOSITORY=s3:http://127.0.0.1:9000/inventory-backup RESTIC_PASSWORD=<pass> restic backup /data/inventory
    ```

**Troubleshooting:**

- `Fatal: unable to open config file: Stat: ...`: the repository has not been initialised. Run `restic init` first.
- `Fatal: wrong password or no key found`: `RESTIC_PASSWORD` does not match the password used during `restic init`.
- Connection refused: MinIO is not running. Check `sudo docker ps`.
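The crontab line in step 6 embeds the credentials directly, where anyone who can read the crontab sees them. A sketch of a wrapper that sources them from a root-only env file instead (the file path and `run_backup` function name are illustrative; the second argument exists so the restic binary can be stubbed for a dry run):

```shell
# run_backup: source restic credentials from a root-only env file
# (chmod 600), then run the backup. Pass a different binary as the
# second argument to dry-run, e.g. `run_backup /root/restic-env echo`.
run_backup() {
    env_file="${1:-/root/restic-env}"
    restic_bin="${2:-restic}"
    . "$env_file"              # exports the AWS_* and RESTIC_* variables
    "$restic_bin" backup /data/inventory
}
```

With this, the cron entry shrinks to a single script invocation and no secrets appear in `crontab -l`.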
## Procedure: Audit an Image's Layers with Dive

**When to use:** Manually inspecting a third-party image before deploying it. Trivy can find known CVEs but cannot detect custom-planted malicious files — Dive lets you walk every filesystem change a layer introduces.

**Steps:**

1. Make sure the image is in the local cache:

    ```bash
    sudo docker pull <image>
    ```

2. Open the image in Dive:

    ```bash
    sudo dive <image>
    ```

3. Walk the layer list (left panel) top-to-bottom. For each layer, read the `Command:` line — that's the Dockerfile instruction that produced it. Use `Tab` to switch to the file tree (right panel) and `Ctrl+U` to hide files unmodified in the selected layer. The remaining files are exactly what that layer added or changed.

4. When you find a suspicious file (unexpected scripts in `/usr/local/bin/`, `/opt/`, `/etc/cron.d/`, …), note the layer's `Digest:` field. The digest is the SHA256 content hash of the layer and uniquely identifies it across pulls. Record it if you need to reference the layer later.

5. Quit Dive with `Q` or `Ctrl+C`.

**Troubleshooting:**

- `unable to get image manifest`: pull the image first.
- `permission denied while trying to connect to the Docker daemon socket`: run with `sudo`.
- Terminal looks broken / wrapped: resize the window. Dive is dense and needs ~120 columns.
## Procedure: Scan an Image for Vulnerabilities with Trivy

**When to use:** Automated CVE scan of an image before deployment, alongside manual audit. Catches known vulnerabilities in OS packages and language dependencies.

**Steps:**

1. Make sure Trivy is installed (see Technologies: Trivy — Installation).

2. Run a CRITICAL-only scan against the image:

    ```bash
    trivy image --severity CRITICAL --no-progress <image>
    ```

    The first run downloads the vulnerability database (~50–100 MB). Subsequent scans hit the cache.

3. Read the table column-by-column:

    - **Library** — the affected package
    - **Vulnerability** — the CVE-ID (record this if you need to file or track it)
    - **Status** — `fixed` means a patched upstream version exists; `affected` means it's unpatched
    - **Installed / Fixed Version** — what's in your image vs. what fixes it

4. For each CRITICAL CVE, decide: rebuild on a newer base image, upgrade the offending package, or accept the risk and document why.

5. (Optional) If you need a clean machine-readable list of CVE-IDs:

    ```bash
    trivy image --severity CRITICAL --scanners vuln --format json <image> \
      | jq -r '.Results[].Vulnerabilities[]?.VulnerabilityID' | sort -u
    ```

**Troubleshooting:**

- `unable to download db`: check that `ghcr.io` is reachable from the VM.
- `OS version is no longer supported by the distribution`: expected on deliberately old base images. Trivy still reports CVEs; the warning just means upstream stopped patching.
- Lots of HIGH/MEDIUM noise: filter with `--severity CRITICAL` or `--severity CRITICAL,HIGH`.
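For unattended use, the JSON output from step 5 can also drive a pass/fail gate (e.g., in a CI pipeline). A sketch; the function name and report-file workflow are illustrative, and it assumes a report saved with `trivy image --format json -o report.json <image>`:

```shell
# fail_on_critical: return non-zero if a saved Trivy JSON report contains
# any CRITICAL findings, so a pipeline step can block the deployment.
fail_on_critical() {
    count=$(jq '[.Results[]?.Vulnerabilities[]? | select(.Severity == "CRITICAL")] | length' "$1")
    if [ "$count" -gt 0 ]; then
        echo "FAIL: $count CRITICAL vulnerabilities"
        return 1
    fi
    echo "PASS: no CRITICAL vulnerabilities"
}
```

The `[]?` guards make the count come out as 0 (rather than a jq error) on images where a result has no vulnerability list at all.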
## Procedure: Migrate Containers from docker run to Docker Compose

**When to use:** Consolidating multiple existing `docker run`-managed containers into a single declarative `docker-compose.yml`, without losing data in named volumes or breaking existing reverse-proxy setups.

**Prerequisites:** `docker-compose-plugin` installed (see Technologies: Docker Compose — Installation).

**Steps:**

1. Inventory the existing containers. For each one, note the image, port mappings, volume mounts (named volumes vs bind mounts), environment variables, and restart policy:

    ```bash
    sudo docker inspect <container>
    ```

2. Create a project directory and write `docker-compose.yml` describing every service. The translation from `docker run` flags to Compose keys:

    | `docker run` flag | Compose key |
    |---|---|
    | `--name foo` | service key (`foo:`) |
    | `-p 8080:80` | `ports: ["8080:80"]` |
    | `-v data:/var/lib/foo` (named volume) | `volumes: [data:/var/lib/foo]` + top-level `volumes: data:` |
    | `-v /host/path:/container/path` (bind) | `volumes: [/host/path:/container/path]` |
    | `-e KEY=VALUE` | `environment: { KEY: VALUE }` |
    | `--restart unless-stopped` | `restart: unless-stopped` |
    | `--network mynet` | `networks: [mynet]` + top-level `networks: mynet:` |

3. Crucially, mark pre-existing named volumes and networks as `external: true` so Compose reuses them instead of creating empty new ones:

    ```yaml
    volumes:
      minio-data:
        external: true

    services:
      minio:
        image: registry.hpc.ut.ee/mirror/minio/minio:latest
        volumes:
          - minio-data:/data
    ```

    If you skip this, Compose creates a brand-new volume with a project-prefixed name (e.g. `lab10_minio-data`) and your old data is silently orphaned.

4. Validate the file before doing anything destructive:

    ```bash
    sudo docker compose config
    ```

    This parses the YAML, interpolates env vars, and prints the merged result. Any error here means the file would not start.

5. Stop and remove the old `docker run`-managed containers (named volumes survive `docker rm`):

    ```bash
    sudo docker stop <name1> <name2> ...
    sudo docker rm <name1> <name2> ...
    ```

6. Bring the stack up with Compose:

    ```bash
    sudo docker compose up -d
    ```

7. Verify each service is running and Compose-managed:

    ```bash
    sudo docker compose ps
    sudo docker inspect <container> | grep com.docker.compose.project
    ```

    Compose-managed containers always have `com.docker.compose.project=<project name>` in their labels.

**Troubleshooting:**

- Existing data missing after `docker compose up`: the named volume was not declared `external: true`. Fix the YAML, `docker compose down`, then `docker compose up -d`. The original volume should still be present (`sudo docker volume ls`) and intact.
- Reverse proxy can no longer reach the service: confirm the new Compose container binds to the same host port and address as the old one. Compose does not automatically inherit the bind address from the previous container.
- `external network not found`: the network was removed when you cleaned up. Recreate it manually (`docker network create <name>`) or remove `external: true` and let Compose create it.
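Putting the translation table together, a hypothetical container started with `docker run -d --name foo --restart unless-stopped -p 8080:80 -e KEY=VALUE -v data:/var/lib/foo myimage` might translate to the following Compose file (a sketch; the service, image, and volume names are illustrative):

```yaml
services:
  foo:
    image: myimage
    restart: unless-stopped
    ports:
      - "8080:80"
    environment:
      KEY: VALUE
    volumes:
      - data:/var/lib/foo

volumes:
  data:
    external: true   # reuse the pre-existing named volume, do not create a new one
```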
## Quick Reference

| Action | Command |
|---|---|
| Run (detached, restart) | `sudo docker run -d --restart unless-stopped image` |
| List Running | `sudo docker ps` |
| List All | `sudo docker ps -a` |
| Logs | `sudo docker logs <name>` |
| Follow Logs | `sudo docker logs -f <name>` |
| Shell | `sudo docker exec -it <name> sh` |
| Stop | `sudo docker stop <name>` |
| Remove | `sudo docker rm <name>` |
| Build | `sudo docker build -t name .` |
| Named Volume Create | `sudo docker volume create vol-name` |
| Named Volume List | `sudo docker volume ls` |
| Set Restart Policy | `sudo docker update --restart unless-stopped <name>` |
| Prune | `sudo docker system prune` |
| Audit image layers | `sudo dive <image>` |
| Scan image for CVEs | `trivy image --severity CRITICAL <image>` |
| Compose stack up | `sudo docker compose up -d` |
| Compose stack down | `sudo docker compose down` |
| Compose validate | `sudo docker compose config` |
## Related Documentation

- Technologies: Docker, Docker Compose, Dive, Trivy, MinIO
- Concepts: Containers, Container Orchestration