Week 08 - Filesystems

Topic

Creating and managing local filesystems (partitioning, formatting, mounting), setting up NFS and Samba network shares, and migrating the inventory API to persistent disk-backed storage.

Company Requests

Ticket #801: Persistent Inventory Storage

"The warehouse API is currently storing inventory data in memory — every time the service restarts, all data is lost. Ops has provisioned a dedicated 1 GB disk for this VM. Create a filesystem on it, mount it at /data, and deploy the updated version of the inventory API that writes its database to /data/inventory/storage.db."

Ticket #802: NFS File Share

"The infrastructure team needs a network-accessible file share for internal tooling. Set up an NFS server on your VM exporting /data/nfs, and verify it works by mounting it locally at /mnt/nfs."

Ticket #803: Samba File Share

"The Windows side of the team also needs access to a shared folder. Set up a Samba share at /data/samba (share name: sysadm). The scoring user must be able to authenticate and access the share. Verify it works by mounting it locally at /mnt/samba."

Object Storage (S3)

Object storage (S3-compatible) was introduced in this week's lecture but the practical setup will be covered in Week 09 (Containers), where we use it together with Docker.

Scoring Checks

  • Check 8.1: /data is mounted as an XFS filesystem.
    • Method: The scoring server logs in via SSH and checks /proc/mounts for an XFS entry at /data.
    • Expected: XFS filesystem is mounted at /data.
  • Check 8.2: The inventory API storage file exists and has content.
    • Method: The scoring server logs in via SSH and checks that /data/inventory/storage.db exists and is non-empty.
    • Expected: File exists and contains data (at least one API request has been made). A WARNING is returned if the file exists but is empty.
  • Check 8.3: NFS port 2049 is open and reachable.
    • Method: The scoring server opens a TCP connection to your VM on port 2049.
    • Expected: Connection succeeds.
  • Check 8.4: NFS server exports /data/nfs.
    • Method: The scoring server logs in via SSH and runs showmount -e localhost to verify the path is exported.
    • Expected: /data/nfs appears in the export list.
  • Check 8.5: /mnt/nfs is mounted.
    • Method: The scoring server logs in via SSH and checks /proc/mounts for a mount at /mnt/nfs.
    • Expected: /mnt/nfs is mounted.
  • Check 8.6: Samba port 445 is open and reachable.
    • Method: The scoring server opens a TCP connection to your VM on port 445.
    • Expected: Connection succeeds.
  • Check 8.7: The Samba [sysadm] share is accessible to the scoring user.
    • Method: The scoring server runs smbclient -L //<ip> -U scoring%<password> and checks that a share named sysadm appears.
    • Expected: Share sysadm is listed.
  • Check 8.8: /mnt/samba is mounted.
    • Method: The scoring server logs in via SSH and checks /proc/mounts for a mount at /mnt/samba.
    • Expected: /mnt/samba is mounted.

Tasks

Task 1: Create and Mount a Data Filesystem

Your VM has a second disk (typically /dev/vdb or /dev/sdb) that was provisioned but not yet partitioned or formatted. Create a single primary partition on it, format it as XFS, and mount it persistently at /data. Also create a /data/inventory subdirectory for the next task.

Complete

Partition and format the disk, mount it at /data via /etc/fstab, and verify the mount is active. Refer to the SOP below for the exact procedure.

After the mount is in place, run systemctl daemon-reload so systemd picks up the new fstab entry, and restorecon -Rv /data to apply the correct SELinux context to the freshly formatted filesystem. Without the latter, services writing files under /data may be denied access even if the file permissions look correct.

Do not format the wrong disk

Identify the correct disk with lsblk before running any destructive commands. Use df -h to confirm which disk your OS is on. Formatting the OS disk will destroy your system.
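A minimal sketch of the procedure, assuming the data disk is /dev/vdb — that device name is an assumption, so verify it with lsblk first and refer to the SOP for the authoritative steps:

```shell
# Confirm which disk is the empty 1 GB data disk before anything destructive
lsblk
df -h

# Create a single partition spanning the disk, then format it as XFS
sudo parted --script /dev/vdb mklabel gpt mkpart primary xfs 0% 100%
sudo mkfs.xfs /dev/vdb1

# Mount persistently by UUID so the fstab entry survives device renaming
sudo mkdir -p /data
echo "UUID=$(sudo blkid -s UUID -o value /dev/vdb1) /data xfs defaults 0 0" | sudo tee -a /etc/fstab
sudo systemctl daemon-reload
sudo mount /data

# Apply the correct SELinux context and create the directory for Task 2
sudo restorecon -Rv /data
sudo mkdir -p /data/inventory
```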

Reference: SOP: Filesystem Management, Concepts: Filesystems

Task 2: Update the Inventory API

The warehouse team has released a new version of the inventory API that persists its data to disk instead of keeping it in memory. Pull the update from the company repository and reconfigure the service.

Complete

The new version of the API accepts a --storage flag specifying a file path for the database. Update the ExecStart line in your existing systemd unit to pass --storage /data/inventory/storage.db, then reload and restart the service. Make a test request to the API to confirm the storage file is created on disk.

Also ensure that the user running the inventory service has write access to /data/inventory. The directory is owned by root after you create it — check the User= directive in your systemd unit and set ownership on the directory accordingly.
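A sketch of the reconfiguration. The unit name inventory-api.service, the service user inventory, and the API's port are placeholders — check your actual unit file for the real values:

```shell
# Give the service user write access to the storage directory
sudo chown inventory:inventory /data/inventory

# Edit the unit and extend ExecStart, e.g.:
#   ExecStart=/usr/local/bin/inventory-api --storage /data/inventory/storage.db
sudo systemctl edit --full inventory-api.service

sudo systemctl daemon-reload
sudo systemctl restart inventory-api.service

# Make a test request (port and path are assumptions — use your API's real ones),
# then confirm the storage file was created and is non-empty
curl -s http://localhost:8080/
ls -l /data/inventory/storage.db
```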

Reference: SOP: Service Management

Task 3: Set Up an NFS Server

Set up your VM as an NFS server and export /data/nfs. Mount the export locally at /mnt/nfs to verify everything works end-to-end, and make both the export and the mount persistent across reboots.

Complete

Install nfs-utils, configure /etc/exports to export /data/nfs, start the server, open the required firewall services (nfs, mountd, rpc-bind), and mount the share locally at /mnt/nfs. Add both the export and the local mount to the appropriate config files so they survive a reboot.
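The steps above can be sketched as follows, assuming a RHEL-family VM with firewalld (which the firewall-service names imply) and a localhost-only export — widen the client specification to match your environment:

```shell
sudo dnf install -y nfs-utils
sudo mkdir -p /data/nfs /mnt/nfs

# Export to localhost only for this exercise
echo "/data/nfs 127.0.0.1(rw,sync)" | sudo tee -a /etc/exports

sudo systemctl enable --now nfs-server
sudo exportfs -ra

# Open the firewall services the ticket names
sudo firewall-cmd --permanent --add-service={nfs,mountd,rpc-bind}
sudo firewall-cmd --reload

# Persistent local mount, then verify end-to-end
echo "127.0.0.1:/data/nfs /mnt/nfs nfs defaults 0 0" | sudo tee -a /etc/fstab
sudo mount /mnt/nfs
showmount -e localhost
```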

Reference: Technologies: NFS, SOP: Filesystem Management

Task 4: Set Up a Samba Server

Set up your VM as a Samba server and share /data/samba. Mount the share locally at /mnt/samba to verify it works, and make everything persistent.

Complete

Install the Samba packages, configure the share in /etc/samba/smb.conf, set the correct permissions and SELinux context on the directory, and start the smb and nmb services. Open the samba firewall service and mount the share locally at /mnt/samba.

Key requirements the scoring server checks for:

  • Share name must be sysadm
  • The scoring user must be in samba_group and have a Samba password set to 2daysuperadmin
  • The shared directory path is /data/samba
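A sketch covering the requirements above, again assuming a RHEL-family VM with SELinux and firewalld; the smb.conf share options shown are a minimal working set, not the only valid configuration:

```shell
sudo dnf install -y samba samba-client cifs-utils
sudo mkdir -p /data/samba /mnt/samba

# Group membership and Samba password for the scoring user (per the requirements)
sudo groupadd -f samba_group
sudo usermod -aG samba_group scoring
printf '2daysuperadmin\n2daysuperadmin\n' | sudo smbpasswd -s -a scoring

# Permissions and a persistent SELinux context on the shared directory
sudo chgrp samba_group /data/samba
sudo chmod 2775 /data/samba
sudo semanage fcontext -a -t samba_share_t "/data/samba(/.*)?"
sudo restorecon -Rv /data/samba

# Minimal share definition appended to /etc/samba/smb.conf
sudo tee -a /etc/samba/smb.conf <<'EOF'
[sysadm]
    path = /data/samba
    valid users = @samba_group
    writable = yes
EOF

sudo systemctl enable --now smb nmb
sudo firewall-cmd --permanent --add-service=samba
sudo firewall-cmd --reload

# Verify with a local CIFS mount as the scoring user
sudo mount -t cifs //127.0.0.1/sysadm /mnt/samba -o username=scoring,password=2daysuperadmin
```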

Reference: Technologies: Samba, SOP: Filesystem Management


Ansible Tips

Filesystem Modules and Their Limits

Ansible ships with modules for all three steps of the filesystem workflow covered in this lab:

  • community.general.parted — create and manage partition tables
  • community.general.filesystem — format a partition with a given filesystem type
  • ansible.posix.mount — manage /etc/fstab entries and mount state

The parted and filesystem modules are potentially destructive — a misconfigured playbook re-run could repartition or reformat a disk that already has data. They can be used safely, but only with explicit guards that prevent them from running when the partition or filesystem already exists.

The simplest guard is a stat check on the partition device before running either module:

- name: Check if partition already exists
  stat:
    path: /dev/vdb1
  register: partition_stat

- name: Create partition on /dev/vdb
  community.general.parted:
    device: /dev/vdb
    number: 1
    state: present
  when: not partition_stat.stat.exists

- name: Create XFS filesystem on /dev/vdb1
  community.general.filesystem:
    fstype: xfs
    dev: /dev/vdb1
  when: not partition_stat.stat.exists

community.general.filesystem also has a force parameter that defaults to false, meaning it will refuse to reformat a device that already contains a filesystem — an additional safety net on top of the when guard.

Once the disk is partitioned and formatted, ansible.posix.mount handles the persistent mount and is safe to run unconditionally on every playbook run:

- name: Mount /data filesystem
  ansible.posix.mount:
    path: /data
    src: "UUID={{ data_disk_uuid }}"
    fstype: xfs
    opts: defaults
    state: mounted

state: mounted both adds the entry to /etc/fstab and mounts it immediately. Use state: present if you only want to write the fstab entry without mounting.

Handlers for NFS and Samba

Use handlers to reload NFS exports or restart Samba only when the config actually changes — not on every playbook run:

# In tasks:
- name: Configure /etc/exports
  copy:
    ...
  notify: reload nfs exports

# In handlers:
- name: reload nfs exports
  command: exportfs -ra

The same pattern applies to Samba — notify a restart smb handler from any task that modifies smb.conf.
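The notified Samba handler might look like this (a sketch — the handler name must match what your tasks notify):

```yaml
# In handlers:
- name: restart smb
  ansible.builtin.systemd:
    name: smb
    state: restarted
```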

SELinux Contexts

For Samba shares, use community.general.sefcontext to set a persistent SELinux file context rule, then trigger restorecon via a handler. This is more robust than a one-shot chcon, which does not survive a filesystem relabel:

- name: Set SELinux context on Samba share
  community.general.sefcontext:
    target: "/data/samba(/.*)?"
    setype: samba_share_t
    state: present
  notify: restorecon /data/samba
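The notified handler is not shown above; a minimal sketch:

```yaml
# In handlers — runs restorecon only when the context rule actually changed
- name: restorecon /data/samba
  ansible.builtin.command: restorecon -Rv /data/samba
```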

Useful Modules

  • ansible.posix.mount — manage fstab entries and mount state (safe to automate)
  • community.general.filesystem — create filesystems — use with caution, not in regular playbooks
  • community.general.parted — manage partitions — use with caution, not in regular playbooks
  • community.general.sefcontext — set persistent SELinux file context rules
  • ansible.posix.firewalld — manage firewalld services and ports
  • file — create directories, set permissions and ownership
  • template — deploy config files with variables (e.g., smb.conf.j2)
  • systemd — manage services