
Development Workstation — your laptop has an undo button for everything.

You're about to run apt upgrade on 347 packages. GNOME might break. Your Python environment might break. Your kernel might break. On a normal workstation, you cross your fingers and hope. On a kldload workstation, you take a snapshot first and roll back if it breaks. Every project gets its own dataset with its own quota. Docker images live on ZFS. Your development environment is versioned, isolated, and recoverable.

The recipe

Step 1: Install the desktop profile

# Boot the kldload USB, pick desktop profile
# You get GNOME + ZFS on root + all kldload tools

# After install, verify your datasets
zfs list -o name,used,avail,mountpoint
# NAME                          USED  AVAIL  MOUNTPOINT
# rpool                         12G   450G   none
# rpool/ROOT/kldload-node       8.2G  450G   /
# rpool/home                    3.1G  450G   /home

Step 2: Snapshot before system upgrades

# Before any system upgrade, snapshot everything
ksnap pre-upgrade

# That's equivalent to:
zfs snapshot -r rpool/ROOT/kldload-node@pre-upgrade-$(date +%Y%m%d-%H%M)

# Now upgrade fearlessly
kpkg update
kpkg upgrade

# GNOME broke? Kernel panic? Roll it back.
ksnap rollback pre-upgrade

# Reboot. You're exactly where you were before the upgrade.
# Try again tomorrow when the fix lands.
Every system upgrade is a save point. If the game crashes, load the save. No reinstalls, no recovery USBs, no weekend rebuilds.
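What ksnap does here can be sketched in plain shell. This is a hypothetical reimplementation, not the real tool; the ROOT_DS name and the DRY_RUN switch are assumptions, and dry-run mode just prints the zfs command so the sketch runs on any machine.

```shell
# Hypothetical sketch of what `ksnap pre-upgrade` does under the hood.
ROOT_DS="rpool/ROOT/kldload-node"   # assumed root dataset name

run() {
    # DRY_RUN=1 prints the command instead of executing it.
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

pre_upgrade_snapshot() {
    stamp=$(date +%Y%m%d-%H%M)
    run zfs snapshot -r "${ROOT_DS}@pre-upgrade-${stamp}"
}

DRY_RUN=1
pre_upgrade_snapshot
```

The timestamp in the snapshot name is what makes repeated pre-upgrade snapshots coexist instead of colliding.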

Step 3: Per-project datasets with quotas

# Create a dataset for each project
kdir /home/dev/projects/webapp
kdir /home/dev/projects/ml-pipeline
kdir /home/dev/projects/firmware

# Set quotas so one project can't eat all your disk
zfs set quota=50G rpool/home/dev/projects/webapp
zfs set quota=100G rpool/home/dev/projects/ml-pipeline
zfs set quota=20G rpool/home/dev/projects/firmware

# Each project has independent snapshots
zfs snapshot rpool/home/dev/projects/webapp@before-refactor
# ... refactor goes badly ...
zfs rollback rpool/home/dev/projects/webapp@before-refactor

# Check usage per project
zfs list -o name,used,quota -r rpool/home/dev/projects
# NAME                                    USED  QUOTA
# rpool/home/dev/projects/webapp          12G   50G
# rpool/home/dev/projects/ml-pipeline     34G   100G
# rpool/home/dev/projects/firmware        2.1G  20G
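Creating several project datasets by hand gets repetitive; a loop can drive it. The sketch below echoes the zfs commands rather than running them, and it assumes kdir maps /home/dev/projects/NAME onto rpool/home/dev/projects/NAME (drop the echo and run as root to execute for real).

```shell
# Dry-run sketch: one loop for datasets and quotas.
make_projects() {
    for spec in webapp:50G ml-pipeline:100G firmware:20G; do
        name=${spec%%:*}     # part before the colon
        quota=${spec#*:}     # part after the colon
        echo zfs create "rpool/home/dev/projects/${name}"
        echo zfs set "quota=${quota}" "rpool/home/dev/projects/${name}"
    done
}
make_projects
```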

Step 4: Docker on ZFS

# Docker already uses ZFS as its storage driver on kldload
docker info | grep "Storage Driver"
# Storage Driver: zfs

# Each image layer is a ZFS dataset
zfs list -r rpool/var/lib/docker
# Every layer shares common blocks via copy-on-write
# 10 containers from the same base image? One copy of the base on disk.

# Snapshot your entire Docker state before experiments
zfs snapshot -r rpool/var/lib/docker@before-experiment

# Pull some images, run some containers, make a mess
docker pull postgres:16
docker pull redis:7
docker run -d --name testdb postgres:16

# Changed your mind? Stop the daemon first, then roll it all back.
docker stop testdb && docker rm testdb
systemctl stop docker
zfs rollback -r rpool/var/lib/docker@before-experiment
systemctl start docker
Docker on ZFS means every container image is a dataset. Layers share blocks. Snapshots cover the entire Docker state. Roll back Docker itself like rolling back a file.
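The space savings from copy-on-write are easy to estimate with back-of-envelope arithmetic (the numbers below are illustrative, not measurements):

```shell
# With plain copies, each container would carry its own base layer.
# With ZFS clones, the base is stored once and containers only pay
# for the blocks they actually change.
base_mb=800         # shared base image layer (illustrative)
delta_mb=50         # unique writable data per container (illustrative)
containers=10

naive=$(( containers * (base_mb + delta_mb) ))
cow=$(( base_mb + containers * delta_mb ))
echo "plain copies: ${naive} MB"
echo "ZFS CoW:      ${cow} MB"
# prints: plain copies: 8500 MB / ZFS CoW: 1300 MB
```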

Step 5: Automated snapshot schedule

# Install Sanoid for automatic snapshots
kpkg install sanoid

# Configure snapshot policy for your workstation
cat > /etc/sanoid/sanoid.conf <<'SANOID'
[rpool/ROOT/kldload-node]
  use_template = workstation
  recursive = yes

[rpool/home]
  use_template = workstation
  recursive = yes

[template_workstation]
  autosnap = yes
  autoprune = yes
  frequently = 4
  hourly = 24
  daily = 14
  weekly = 4
  monthly = 3
SANOID

systemctl enable --now sanoid.timer

# Now you have automatic snapshots every 15 minutes
# Accidentally deleted a file 20 minutes ago?
ls /home/dev/.zfs/snapshot/
# autosnap_2026-03-23_14:00  autosnap_2026-03-23_14:15  ...

# Just copy it back
cp /home/dev/.zfs/snapshot/autosnap_2026-03-23_14:00/important-file.py /home/dev/
The .zfs/snapshot directory is a time machine. Every snapshot is a read-only view of your files at that moment. Browse it like any directory.
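Hunting down which snapshot still holds a deleted file is just a loop over directories. The sketch below simulates the .zfs/snapshot layout in a temp dir so it runs anywhere; on a real system you would point snapdir at /home/dev/.zfs/snapshot instead.

```shell
# Simulated snapshot layout (real path: /home/dev/.zfs/snapshot).
snapdir=$(mktemp -d)
mkdir -p "$snapdir/autosnap_2026-03-23_14:00" \
         "$snapdir/autosnap_2026-03-23_14:15"
touch "$snapdir/autosnap_2026-03-23_14:00/important-file.py"

# Walk every snapshot and report which ones contain the file.
find_file() {
    for snap in "$snapdir"/*/; do
        if [ -e "${snap}important-file.py" ]; then
            echo "found in $(basename "$snap")"
        fi
    done
}
find_file
```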

Step 6: Replicate your workstation to a backup server

# Your workstation is now valuable. Back it up.
# Replicate to a NAS, a server, or another kldload box.

# Initial sync
syncoid --recursive rpool/home backup-nas:tank/workstations/$(hostname)/home

# Ongoing — add to cron
cat > /etc/cron.d/syncoid-backup <<'CRON'
0 */4 * * * root syncoid --recursive --no-sync-snap rpool/home backup-nas:tank/workstations/$(hostname)/home >> /var/log/syncoid.log 2>&1
CRON

# Laptop stolen? Disk dies? Buy new hardware, install kldload,
# receive the backup, boot. Everything is back.
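The restore direction can be sketched the same way. syncoid is a wrapper around zfs send/receive, so both forms below do the same job; hostnames, dataset paths, and the @latest snapshot name are placeholders, and echo keeps this a dry run.

```shell
# Restore sketch for a fresh install (placeholders throughout).
SRC="backup-nas:tank/workstations/old-laptop/home"

restore_cmds() {
    echo "syncoid --recursive ${SRC} rpool/home"
    # Roughly equivalent low-level form, as one replication stream:
    echo "ssh backup-nas 'zfs send -R tank/workstations/old-laptop/home@latest' | zfs receive -F rpool/home"
}
restore_cmds
```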

The developer workflow

Fearless upgrades

Snapshot before every system update. If it breaks, rollback in seconds. No more waiting six months to upgrade because you're afraid of breaking your toolchain.

Project isolation

Each project is a ZFS dataset with its own quota, snapshots, and replication. One runaway node_modules can't fill your disk.

Docker without the mess

Docker on ZFS means image layers share blocks and snapshots cover your entire Docker state. Roll back docker pull mistakes instantly.

Time-travel file recovery

Sanoid takes snapshots every 15 minutes. Deleted a file? It's in .zfs/snapshot/. Browse, copy, done. No special recovery tools needed.