Week 09 - Containers¶
Topic¶
Installing Docker and safely configuring it for the UT cloud environment, deploying a self-hosted S3 object storage server (MinIO) with Docker named volumes and TLS, configuring encrypted backups of the inventory API data using restic, and replacing the existing Python-based inventory service with a Dockerised container.
Company Requests¶
Ticket #901: Platform Modernisation
"The development team keeps complaining that deployments are inconsistent across environments. We need Docker installed on the server so we can start running services as containers. Make sure the setup is production-ready and does not break existing network connectivity. Important: the cloud network team has confirmed that any Docker bridge network using the 172.16.0.0/12 range will conflict with VPN and cause the VM to lose connectivity. Configure Docker's network before starting the daemon for the first time."
Ticket #902: Backup Object Storage
"IT policy now requires that critical application data be backed up to object storage. Set up a self-hosted S3 server and configure daily encrypted backups of the inventory API's data directory to it. The S3 endpoint must be accessible over HTTPS at
s3.<vm_name>.sysadm.ee. The compliance team requires that backups be verifiable — an on-call engineer must be able to confirm a backup exists and is recent without decrypting it."
Ticket #903: Containerise the Inventory API
"The inventory API has been running as a raw Python process managed by systemd. Package it as a Docker image and migrate it to run as a container. The existing Apache proxy and bearer-token authentication must continue to work without changes."
DNS and TLS
This lab adds a new subdomain (s3.<vm_name>.sysadm.ee). Add the DNS record in Week 05 and obtain a certificate from Vault (Week 07). If you used a wildcard certificate (*.<vm_name>.sysadm.ee), it already covers this subdomain.
Scoring Checks¶
- Check 9.1: Docker is installed and the daemon is running.
    - Method: SSH into the VM and check `systemctl is-active docker`.
    - Expected: `docker` is active.
- Check 9.2: Docker bridge networking is configured to the UT-approved IP range.
    - Method: SSH into the VM and inspect `/etc/docker/daemon.json` and the `docker0` interface.
    - Expected: `daemon.json` exists and `docker0` is in the `192.168.67.0/24` range.
- Check 9.3: MinIO container is running with a Docker named volume.
    - Method: SSH into the VM and run `docker inspect` on the MinIO container.
    - Expected: MinIO is running and its `/data` directory is backed by a named volume (not a bind mount).
- Check 9.4: S3 endpoint is reachable over HTTPS.
    - Method: DNS resolution of `s3.<vm_name>.sysadm.ee` followed by `curl https://s3.<vm_name>.sysadm.ee/minio/health/live`.
    - Expected: DNS resolves, TLS handshake succeeds, HTTP 200 response.
- Check 9.5: Restic backup exists in the `inventory-backup` bucket.
    - Method: SSH into the VM and check the MinIO volume for restic repository data.
    - Expected: The `inventory-backup` bucket exists and contains at least one restic snapshot.
- Check 9.6: Inventory API is running from a Docker container.
    - Method: SSH into the VM and look for a running container with `/data/inventory` bind-mounted.
    - Expected: A container with `/data/inventory` mounted is running, and the old systemd service is stopped.
- Check 9.7: Inventory API still accepts bearer token authentication.
    - Method: HTTP request to `inventory.<vm_name>.sysadm.ee/api/v1/inventory`, first without and then with `Authorization: Bearer 845e6732f32b81dd778972703474ccbb`.
    - Expected: 401/403 without credentials, 200 with the correct token.
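The checks above can be run by hand as an informal self-check. This is a sketch, not the grader: the container name `minio` is an assumption (use whatever name you gave your MinIO container), check 9.5 assumes the restic environment variables from Task 3 are exported, and `<vm_name>` is your VM's name.

```shell
# 9.1: daemon running — expect "active"
systemctl is-active docker
# 9.2: bridge in the UT-approved range
ip -4 addr show docker0 | grep '192\.168\.67\.'
# 9.3: MinIO mounts — expect a named volume on /data ("minio" is an assumed name)
docker inspect --format '{{ .Mounts }}' minio
# 9.4: health endpoint — expect 200
curl -s -o /dev/null -w '%{http_code}\n' https://s3.<vm_name>.sysadm.ee/minio/health/live
# 9.5: at least one snapshot (needs RESTIC_* and AWS_* variables exported)
restic snapshots
# 9.7: expect 200 with the token (and 401/403 without the -H line)
curl -s -o /dev/null -w '%{http_code}\n' \
  -H 'Authorization: Bearer 845e6732f32b81dd778972703474ccbb' \
  https://inventory.<vm_name>.sysadm.ee/api/v1/inventory
```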
Tasks¶
Task 1: Install Docker¶
Before starting Docker for the first time, you must configure its bridge network. Docker's default address range conflicts with UT cloud infrastructure — starting Docker without the configuration file will make your VM unreachable over SSH.
Create /etc/docker/daemon.json BEFORE starting Docker
If you start Docker without the daemon configuration, your VM will lose network connectivity and can only be recovered via the ETAIS console. Confirm you have your root password set before proceeding.
Complete
- Create `/etc/docker/daemon.json` with the UT-approved network configuration (see the reference below for the exact content).
- Add the Docker repository and install `docker-ce`, `docker-ce-cli`, and `containerd.io`.
- Enable and start the `docker` service and verify the `docker0` bridge is at the expected address.
- Confirm Docker is working by running the hello-world container from `registry.hpc.ut.ee/mirror/library/hello-world`.
Use registry.hpc.ut.ee/mirror when pulling Docker images.
Pulling images directly from Docker Hub is subject to rate limiting, which is quickly reached due to the large number of users within the university network. To avoid these limitations, prepend registry.hpc.ut.ee/mirror/library/ to the image name when pulling images (example: docker run registry.hpc.ut.ee/mirror/library/nginx).
Reference: SOP: Container Operations — Install Docker Safely, Technologies: Docker
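As a minimal sketch of the safe ordering (config file first, daemon last), the steps might look like the following. The `daemon.json` content here is a subset of the full configuration shown in the Ansible Tips section — use the complete version from the reference.

```shell
# 1. Write the network config BEFORE Docker ever starts (minimal subset shown).
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "bip": "192.168.67.1/24",
  "fixed-cidr": "192.168.67.0/24"
}
EOF

# 2. Add the Docker repo and install the packages.
sudo curl -fsSL -o /etc/yum.repos.d/docker-ce.repo \
  https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y docker-ce docker-ce-cli containerd.io

# 3. Only now start the daemon, then verify the bridge address.
sudo systemctl enable --now docker
ip -4 addr show docker0          # should show 192.168.67.1/24

# 4. Smoke test via the university mirror.
docker run registry.hpc.ut.ee/mirror/library/hello-world
```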
Task 2: Deploy the MinIO S3 Server¶
MinIO exposes an S3-compatible API on port 9000 and a management console on port 9001. Both ports should be bound to localhost only and served through the existing Apache setup over HTTPS. Store MinIO's data in a Docker named volume so Docker manages the storage lifecycle.
Complete
- Create a named Docker volume and start a MinIO container with a restart policy and both ports bound to `127.0.0.1`.
- Install the `mc` (MinIO Client) command-line tool and register your MinIO instance as an alias named `sysadm`. Use `mc` to create a bucket named `inventory-backup` and confirm it appears in the bucket list.
- Add a DNS `A` record for `s3.<vm_name>.sysadm.ee` (same procedure as Week 05).
- Obtain a TLS certificate for `s3.<vm_name>.sysadm.ee` from Vault (same as Week 07). A wildcard certificate already covers this subdomain.
- Create an Apache HTTPS virtual host that reverse-proxies requests to MinIO on port 9000. The MinIO documentation lists the required proxy directives.
- Verify the endpoint is healthy: `https://s3.<vm_name>.sysadm.ee/minio/health/live` should return HTTP 200.
Optional: MinIO web console
MinIO's browser UI runs on port 9001. It is not part of the S3 protocol — it is a MinIO-specific management interface and is not scored. You can reach it by SSH port-forwarding (ssh -L 9001:127.0.0.1:9001 ...) or by creating an optional Apache virtual host for it.
If you skip this optional step, you can only access the S3 server with command-line tools, not from a browser: when you open the S3 endpoint in a browser, MinIO tries to redirect you to the console UI.
Reference: Technologies: MinIO, SOP: Container Operations — Run a Container with a Named Volume, SOP: Web Server Management — Set Up a Reverse Proxy, SOP: Certificate Management
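The container and `mc` steps above might be sketched as follows. The volume name `minio-data`, container name `minio`, credentials, and the mirror path for the MinIO image are all assumptions — adapt them to your setup and to the mirror's actual layout.

```shell
# Named volume so Docker manages the storage lifecycle (check 9.3).
docker volume create minio-data

# Both ports bound to localhost only; image path is an assumption.
docker run -d --name minio --restart unless-stopped \
  -p 127.0.0.1:9000:9000 \
  -p 127.0.0.1:9001:9001 \
  -v minio-data:/data \
  -e MINIO_ROOT_USER=admin \
  -e MINIO_ROOT_PASSWORD='change-me' \
  registry.hpc.ut.ee/mirror/minio/minio server /data --console-address ":9001"

# Register the alias, create the bucket, confirm it exists.
mc alias set sysadm http://127.0.0.1:9000 admin 'change-me'
mc mb sysadm/inventory-backup
mc ls sysadm
```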
Task 3: Configure Restic Backups to S3¶
Restic is a backup tool that stores encrypted, deduplicated snapshots and speaks the S3 protocol natively — no extra adapter needed to use it with MinIO.
Complete
- Install `restic` (`dnf install restic`).
- Configure the four required environment variables: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `RESTIC_REPOSITORY` (pointing to `s3:http://127.0.0.1:9000/inventory-backup`), and `RESTIC_PASSWORD`.
- Initialise the restic repository, run the first backup of `/data/inventory`, and verify a snapshot was created with `restic snapshots`.
Do not lose RESTIC_PASSWORD
All backup data is encrypted with this password. If you lose it, the backups cannot be restored.
Reference: SOP: Container Operations — Back Up Files to S3 with Restic
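A sketch of the restic workflow, assuming the MinIO credentials from Task 2 (the values shown are placeholders):

```shell
# The four variables restic needs for an S3-backed repository.
export AWS_ACCESS_KEY_ID=admin
export AWS_SECRET_ACCESS_KEY='change-me'
export RESTIC_REPOSITORY='s3:http://127.0.0.1:9000/inventory-backup'
export RESTIC_PASSWORD='choose-a-strong-password'

restic init                     # one-time repository initialisation
restic backup /data/inventory   # first backup
restic snapshots                # verify a snapshot exists
```

For the compliance requirement — confirming a backup exists and is recent *without* decrypting it — an on-call engineer who has only the MinIO credentials can list the repository's snapshot objects and check their timestamps, e.g. `mc ls --recursive sysadm/inventory-backup/snapshots/`.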
Task 4: Containerise the Inventory API¶
The inventory API source was deployed in Week 04 as a Python process under systemd. Package it as a Docker image, replace the systemd service with a container using Docker's restart policy, and keep /data/inventory bind-mounted so the existing data persists unchanged.
Complete
- Write a `Dockerfile` for the inventory API using `python:3.11-slim` as the base image. Install Flask, copy `app.py`, expose port 5000, and set the start command. Check the existing systemd unit (`/etc/systemd/system/inventory.service`) to see the exact command and arguments the app is started with.
- Build the image with `docker build`.
- Stop and disable the existing inventory API systemd service.
- Run the container with `--restart unless-stopped`, bind-mount `/data/inventory`, and bind the port to `127.0.0.1:5000` so the existing Apache reverse proxy continues to work without changes.
- Verify the API still responds through Apache with the bearer token (`845e6732f32b81dd778972703474ccbb`).
Reference: SOP: Container Operations — Containerise an Existing Service, Technologies: Docker
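The Dockerfile from the first step might look like this sketch. The start command here is an assumption — copy the real command and arguments from `ExecStart` in `/etc/systemd/system/inventory.service`.

```dockerfile
# Sketch only — verify the start command against the Week 04 systemd unit.
FROM python:3.11-slim
WORKDIR /app
RUN pip install --no-cache-dir flask
COPY app.py .
EXPOSE 5000
CMD ["python", "app.py"]
```

A matching build-and-run sketch (image and container names are placeholders): `docker build -t inventory-api .` followed by `docker run -d --name inventory-api --restart unless-stopped -p 127.0.0.1:5000:5000 -v /data/inventory:/data/inventory inventory-api`.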
Systemd service vs. Docker container — what changed operationally?
You migrated the inventory API from a systemd unit to a Docker container.
Consider these operational scenarios. Which approach (systemd, docker container) handles each better, and why?
- The service crashes at 3am.
- A developer pushes a broken version of `app.py`.
- You need to check what version of Flask the service is running.
- You need to see the last 100 lines of application output.
Ansible Tips¶
This section covers tips for automating the tasks in this lab with Ansible.
Docker Installation¶
Automate the daemon configuration and package installation:
```yaml
- name: Create Docker configuration directory
  file:
    path: /etc/docker
    state: directory
    mode: '0755'

- name: Deploy daemon.json
  copy:
    dest: /etc/docker/daemon.json
    content: |
      {
        "bip": "192.168.67.1/24",
        "fixed-cidr": "192.168.67.0/24",
        "storage-driver": "overlay2",
        "mtu": 1400,
        "default-address-pools": [
          { "base": "192.168.167.0/24", "size": 24 },
          { "base": "192.168.168.0/24", "size": 24 }
        ]
      }
  notify: restart docker   # requires a matching "restart docker" handler in your playbook

- name: Add Docker repository
  get_url:
    url: https://download.docker.com/linux/centos/docker-ce.repo
    dest: /etc/yum.repos.d/docker-ce.repo

- name: Install Docker packages
  dnf:
    name:
      - docker-ce
      - docker-ce-cli
      - containerd.io
    state: present

- name: Enable and start Docker
  systemd:
    name: docker
    enabled: yes
    state: started
```
Image Builds and Container Runs¶
Image builds and docker run calls are not idempotent — Ansible will re-run them every time. A practical pattern is to use Ansible to deploy all supporting files (Dockerfiles, config files, Apache virtual hosts, daemon.json), then run the docker build and docker run commands manually once. Use the command or community.docker.docker_container module only if you are comfortable managing idempotency explicitly.
Containerised Inventory API¶
Automate the setup files and stopping the old service:
```yaml
- name: Deploy inventory API Dockerfile
  copy:
    src: files/inventory-api/
    dest: /opt/inventory-api/

- name: Stop and disable old inventory service
  systemd:
    name: inventory        # adjust to match your Week 04 service name
    enabled: no
    state: stopped
  ignore_errors: yes       # service may not exist on a fresh run
```
Restic Backup Cron Job¶
```yaml
- name: Schedule restic backup
  cron:
    name: "Restic backup of inventory"
    minute: "0"
    hour: "2"
    job: >-
      AWS_ACCESS_KEY_ID={{ minio_user }}
      AWS_SECRET_ACCESS_KEY={{ minio_password }}
      RESTIC_REPOSITORY=s3:http://127.0.0.1:9000/inventory-backup
      RESTIC_PASSWORD={{ restic_password }}
      restic backup /data/inventory
```
Store `minio_password` and `restic_password` in Ansible Vault:

```shell
ansible-vault encrypt_string 'your-password-here' --name minio_password
```
Useful Modules¶
- `file`/`copy`/`template` — deploy Dockerfiles, Apache configs, `daemon.json`
- `dnf` — install Docker packages and restic
- `systemd` — enable/start Docker, stop/disable old services
- `get_url` — download the Docker repo file or `mc` binary
- `cron` — schedule restic backups