# Docker

## What Is It?
Docker is a containerization platform that packages applications and their dependencies into portable, isolated containers. It uses images as immutable templates and provides networking, volume management, and a build system (Dockerfiles).
## Installation

**Warning — configure networking BEFORE starting Docker.** On University of Tartu cloud VMs, Docker's default bridge network (172.17.0.0/16) conflicts with the UT internal network. If you start Docker without first creating `daemon.json`, your VM will lose network connectivity and can only be recovered via console. Create `/etc/docker/daemon.json` first — then install and start Docker.
```shell
# 1. Create the daemon configuration FIRST
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "bip": "192.168.67.1/24",
  "fixed-cidr": "192.168.67.0/24",
  "storage-driver": "overlay2",
  "mtu": 1400,
  "default-address-pools": [
    { "base": "192.168.167.1/24", "size": 24 },
    { "base": "192.168.168.1/24", "size": 24 },
    { "base": "192.168.169.1/24", "size": 24 },
    { "base": "192.168.170.1/24", "size": 24 },
    { "base": "192.168.171.1/24", "size": 24 },
    { "base": "192.168.172.1/24", "size": 24 },
    { "base": "192.168.173.1/24", "size": 24 },
    { "base": "192.168.174.1/24", "size": 24 }
  ]
}
EOF

# 2. Add Docker repository
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# 3. Install Docker
sudo dnf install docker-ce docker-ce-cli containerd.io

# 4. Start and enable Docker
sudo systemctl enable --now docker

# 5. Verify
sudo docker run --rm registry.hpc.ut.ee/mirror/library/hello-world
```
**Warning — use `sudo`, not the `docker` group.** Adding a user to the `docker` group gives that user unrestricted root access to the host (they can mount `/etc` into a container and modify system files). Use `sudo docker` instead.
## Key Files and Directories

| Path | Purpose |
|---|---|
| `/etc/docker/daemon.json` | Docker daemon configuration — networking, storage driver, MTU |
| `/var/lib/docker/volumes/` | Named volume data storage |
| `/var/lib/docker/` | Images, containers, and all runtime data |
| `Dockerfile` | Image build instructions |
## Configuration

Docker is configured via the daemon configuration file and per-image build instructions.

### Minimal Working Configuration

Daemon config (`/etc/docker/daemon.json`) — the configuration required on UT cloud VMs:
```json
{
  "bip": "192.168.67.1/24",
  "fixed-cidr": "192.168.67.0/24",
  "storage-driver": "overlay2",
  "mtu": 1400,
  "default-address-pools": [
    { "base": "192.168.167.1/24", "size": 24 },
    { "base": "192.168.168.1/24", "size": 24 },
    { "base": "192.168.169.1/24", "size": 24 },
    { "base": "192.168.170.1/24", "size": 24 },
    { "base": "192.168.171.1/24", "size": 24 },
    { "base": "192.168.172.1/24", "size": 24 },
    { "base": "192.168.173.1/24", "size": 24 },
    { "base": "192.168.174.1/24", "size": 24 }
  ]
}
```
- `bip` — IP and subnet for the `docker0` bridge. Must not conflict with the host or cloud network.
- `fixed-cidr` — subnet from which container IPs are allocated on the default bridge.
- `mtu` — set to 1400 to stay within UT cloud packet size limits.
- `storage-driver` — `overlay2` is the recommended driver on modern Linux.
- `default-address-pools` — address ranges for user-defined Docker networks.
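Before writing `daemon.json`, the chosen `bip` and pool ranges can be sanity-checked against the host network. A minimal sketch using Python's standard `ipaddress` module — the `172.17.64.0/18` "internal network" range below is purely illustrative, not the actual UT range:

```python
import ipaddress

def overlaps(cidr_a: str, cidr_b: str) -> bool:
    """Return True if two CIDR ranges share any addresses."""
    # strict=False accepts ranges written with host bits set, e.g. 192.168.67.1/24
    return ipaddress.ip_network(cidr_a, strict=False).overlaps(
        ipaddress.ip_network(cidr_b, strict=False)
    )

# Docker's stock default bridge clashes with a hypothetical internal range:
print(overlaps("172.17.0.0/16", "172.17.64.0/18"))    # True
# The replacement bridge range from daemon.json does not:
print(overlaps("192.168.67.1/24", "172.17.64.0/18"))  # False
```

Running this check for every range in `default-address-pools` against your VM's actual subnets catches the conflict before Docker takes the network down.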
**Dockerfile** — a text file that describes how to build a container image, instruction by instruction. Each instruction creates a layer in the image.

A Dockerfile is built with `docker build -t myimage:tag /path/to/directory/`. Docker reads the Dockerfile in that directory and runs each instruction in order.
### Dockerfile Reference

**`FROM <image>[:<tag>]`** — Always the first instruction. Sets the base image — everything else builds on top of it. Choose a minimal, trusted image. `python:3.11-slim` is a small Debian-based Python image; `alpine`-based images are even smaller but use `apk` instead of `apt`.

```dockerfile
FROM python:3.11-slim
```

**`WORKDIR /path`** — Sets the working directory inside the image for all subsequent instructions (`COPY`, `RUN`, `CMD`). If the directory does not exist, Docker creates it. Use an absolute path.

```dockerfile
WORKDIR /app
```

**`COPY <src> <dest>`** — Copies files from the build context (the directory you pass to `docker build`) into the image. `<src>` is relative to the build context; `<dest>` is relative to `WORKDIR` (or absolute). Copy dependency files before source code so Docker can cache the `RUN pip install` layer and only re-run it when `requirements.txt` actually changes.

```dockerfile
COPY requirements.txt .     # copies into WORKDIR
COPY app.py /app/app.py     # explicit destination
```

**`RUN <command>`** — Executes a shell command at build time and commits the result as a new layer. Used to install packages, compile code, or set up configuration. Chain related commands with `&&` to keep them in a single layer and reduce image size.

```dockerfile
RUN pip install --no-cache-dir -r requirements.txt
RUN dnf install -y curl && dnf clean all
```

**`EXPOSE <port>`** — Documents which network port the container process listens on. This is informational only — it does not publish the port to the host. Actual publishing happens with `-p` at `docker run`.

```dockerfile
EXPOSE 5000
```

**`CMD ["executable", "arg1", "arg2"]`** — Sets the default command that runs when a container starts. There can only be one `CMD`. Use the JSON array form (exec form) rather than a shell string to avoid signal-handling issues. `CMD` can be overridden at runtime: `docker run myimage python other_script.py`.

```dockerfile
CMD ["python", "app.py"]
CMD ["python", "app.py", "--storage", "/data/inventory/storage.db"]
```

**`ENV <key>=<value>`** — Sets environment variables that are available both at build time and at runtime.

```dockerfile
ENV PYTHONUNBUFFERED=1
```
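Putting these instructions together, a complete Dockerfile for a small Python application might look like the following sketch — the application layout (`requirements.txt` plus an `app.py` listening on port 5000) is illustrative:

```dockerfile
FROM python:3.11-slim
WORKDIR /app
ENV PYTHONUNBUFFERED=1
# Dependencies first, so this layer stays cached until requirements.txt changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Source code last — edits here do not invalidate the dependency layer
COPY app.py .
EXPOSE 5000
CMD ["python", "app.py"]
```

Build it from the directory containing the Dockerfile with `docker build -t myimage:latest .`.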
## Important Directives

- **Port mapping (`-p`)** — Maps a host port to a container port: `-p 8080:80` makes the container's port 80 accessible on the host's port 8080.
- **Detached mode (`-d`)** — Runs the container in the background.
- **Naming (`--name`)** — Assigns a human-readable name to the container instead of a random ID.
- **Volume mounts (`-v`)** — Two forms exist: bind mounts (`-v /host/path:/container/path`) mount a specific host directory, and named volumes (`-v volume-name:/container/path`) let Docker manage the storage under `/var/lib/docker/volumes/`. Named volumes are preferred for service data because the path is stable and Docker manages the lifecycle.
- **Named volumes (`--mount` or `-v name:path`)** — `docker volume create mydata` creates a named volume. Use `-v mydata:/app/data` to mount it. List with `docker volume ls`, inspect with `docker volume inspect mydata`.
- **Environment variables (`-e`)** — Pass configuration into the container: `-e MYSQL_ROOT_PASSWORD=secret`.
- **Networks (`--network`)** — Attach the container to a specific Docker network for inter-container communication.
- **Restart policy (`--restart`)** — Controls what happens when a container exits or the host reboots. Set at run time (`--restart unless-stopped`) or updated later (`docker update --restart unless-stopped myapp`). Docker's restart policy is the correct way to keep containers running persistently — do not wrap containers in systemd units.
| Policy | Behaviour |
|---|---|
| `no` | Never restart (default) |
| `on-failure` | Restart only on non-zero exit |
| `always` | Always restart, including after host reboot |
| `unless-stopped` | Restart unless explicitly stopped with `docker stop` |
## Common Commands

```shell
# Run a container (detached, named, with port mapping)
docker run -d --name myapp -p 8080:80 nginx

# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# Stop / start / restart a container
docker stop myapp
docker start myapp
docker restart myapp

# Remove a container
docker rm myapp

# View container logs
docker logs myapp
docker logs -f myapp    # Follow (tail) logs

# Execute a command inside a running container
docker exec -it myapp /bin/bash

# Inspect container details (IP, mounts, config)
docker inspect myapp

# List images
docker image ls

# Build an image from a Dockerfile
docker build -t myimage:latest .

# Remove an image
docker rmi myimage:latest

# Pull an image from a registry
docker pull nginx:latest

# View disk usage
docker system df

# Clean up unused resources
docker system prune
```
## Logging and Debugging

- **Container logs** — `docker logs <container>` shows stdout/stderr output from the container process.
- **Follow logs** — `docker logs -f <container>` for real-time log tailing.
- **Inspect** — `docker inspect <container>` shows full configuration including network settings, mounts, and environment variables.
- **Exec into container** — `docker exec -it <container> /bin/sh` opens an interactive shell inside a running container for debugging.
- **Events** — `docker events` streams real-time events from the Docker daemon.
- **Resource usage** — `docker stats` shows live CPU, memory, and network usage per container.
Troubleshooting checklist:

1. `docker ps -a` — is the container running or has it exited?
2. `docker logs <container>` — any application errors?
3. `docker inspect <container>` — correct network/port/volume config?
4. `ss -tulpn | grep <port>` — is the port mapped on the host?
5. `docker exec -it <container> /bin/sh` — can you reach the application from inside?
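The host-side port check from the troubleshooting list can also be scripted. A small sketch using Python's standard `socket` module — the host and port are illustrative, matching the `-p 8080:80` example used elsewhere on this page:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    # Check whether the host side of a `-p 8080:80` mapping is reachable
    print("port 8080 open:", port_open("127.0.0.1", 8080))
```

A `True` here but no response from the application usually means the port is mapped correctly and the problem is inside the container; a `False` points at the `-p` mapping or a firewall.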
## Security Considerations

- **Use `sudo docker`, not the docker group** — Any user in the `docker` group can trivially become root on the host (by mounting `/etc` into a container). Use `sudo docker` to keep auditability and limit access.
- **Network conflicts** — Docker creates virtual networks. The `bip` and `default-address-pools` settings must not overlap with your host network or cloud VPC ranges, or you will lose network connectivity.
- **Image provenance** — Only pull images from trusted registries. Use a local cache registry (e.g. `registry.hpc.ut.ee/mirror`) to avoid rate limits and verify image sources.
- **Non-root containers** — Where possible, run container processes as a non-root user (use the `USER` directive in the Dockerfile).
- **Read-only filesystems** — Use the `--read-only` flag for containers that do not need to write to their filesystem.
- **Limit resources** — Use the `--memory` and `--cpus` flags to prevent a single container from consuming all host resources.
- **Do not store secrets in images** — Use environment variables or Docker secrets at runtime, not baked into the image.
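Inside the container, the application then reads such values at startup rather than hard-coding them. A minimal sketch in Python — the variable name `MYSQL_ROOT_PASSWORD` matches the `-e` example above, and `get_required_env` is an illustrative helper, not a library function:

```python
import os

def get_required_env(name: str) -> str:
    """Read a required setting from the environment, failing fast if unset."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"required environment variable {name} is not set")
    return value

# Typical use at container startup; the value is supplied at run time via
# `docker run -e MYSQL_ROOT_PASSWORD=...` and never appears in the image:
# password = get_required_env("MYSQL_ROOT_PASSWORD")
```

Failing fast on a missing variable surfaces misconfiguration in `docker logs` immediately, instead of the application limping along with an empty credential.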
## Further Reading

### Related Documentation

- Concepts: Containers
- SOPs: Container Operations