Backup & Disaster Recovery — because snapshots are not backups.
Say it again: SNAPSHOTS ARE NOT BACKUPS.
Snapshots protect against accidental deletion and logical corruption.
They do NOT protect against hardware failure, pool corruption, or site-wide disasters.
If the pool dies, all snapshots die with it.
You need zfs send/recv to a separate system. Here's how.
The recipe
Step 1: Set up the backup target
# On the backup server (another kldload box, TrueNAS, or any ZFS system)
zfs create tank/backups
# Create per-host datasets
zfs create tank/backups/web-prod-01
zfs create tank/backups/db-prod-01
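Syncoid (installed in the next step) replicates over SSH, so the production host needs non-interactive key access to the backup server. A minimal sketch, assuming root-to-root SSH is acceptable in your environment:

```shell
# On the production server: a dedicated key for replication
ssh-keygen -t ed25519 -f /root/.ssh/syncoid -N '' -C 'syncoid replication'

# Authorize it on the backup server
ssh-copy-id -i /root/.ssh/syncoid.pub root@backup-server

# Verify non-interactive access works before wiring up cron
ssh -i /root/.ssh/syncoid root@backup-server zfs list tank/backups
```

A more locked-down alternative is a dedicated replication user with ZFS permission delegation (zfs allow), but root keeps the examples below simple.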
Step 2: Install Sanoid + Syncoid
# On the production server
kpkg install sanoid
# Configure snapshot policy
cat > /etc/sanoid/sanoid.conf <<'SANOID'
[rpool/ROOT/kldload-node]
use_template = production
recursive = yes
[rpool/home]
use_template = production
recursive = yes
[rpool/srv]
use_template = production
recursive = yes
[template_production]
autosnap = yes
autoprune = yes
hourly = 24
daily = 30
weekly = 8
monthly = 12
yearly = 2
SANOID
# Enable the timer
systemctl enable --now sanoid.timer
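Before trusting the timer, a dry run confirms the config parses and the policy matches the datasets you expect; sanoid's --readonly flag simulates a run without creating or pruning anything:

```shell
# Simulate a full sanoid run (no snapshots created or destroyed)
sanoid --cron --verbose --readonly

# Confirm the timer is actually scheduled
systemctl list-timers 'sanoid*'

# After the first hour, autosnap_* snapshots should start appearing
zfs list -t snapshot -o name,creation rpool/home | tail
```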
Step 3: Automated replication with Syncoid
# One-time initial sync (full send)
syncoid --recursive rpool/home backup-server:tank/backups/web-prod-01/home
# Cron job for ongoing incremental replication
# Every hour, only changed blocks are sent
cat > /etc/cron.d/syncoid-backup <<'CRON'
0 * * * * root syncoid --recursive --no-sync-snap rpool/home backup-server:tank/backups/$(hostname)/home >> /var/log/syncoid.log 2>&1
15 * * * * root syncoid --recursive --no-sync-snap rpool/srv backup-server:tank/backups/$(hostname)/srv >> /var/log/syncoid.log 2>&1
30 * * * * root syncoid --recursive --no-sync-snap rpool/ROOT backup-server:tank/backups/$(hostname)/ROOT >> /var/log/syncoid.log 2>&1
CRON
First sync sends everything. Every sync after that sends only the blocks that changed. A 500GB dataset with 2GB of daily changes? The hourly backup takes seconds, not hours.
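You can price an incremental before sending it: zfs send -n -v does a dry run and prints the estimated stream size. The snapshot names below are illustrative; substitute two that actually exist on your system:

```shell
# Dry run: estimate the delta between two snapshots without sending anything
zfs send -n -v -i rpool/home@autosnap_2024-01-01_00:00:02_daily \
                  rpool/home@autosnap_2024-01-02_00:00:02_daily
# zfs reports a line like "total estimated size is 2.1G"
```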
Step 4: Encrypted offsite replication
# Replicate encrypted datasets without decrypting
# The backup server stores ciphertext — can't read your data
syncoid --recursive --sendoptions="w" \
rpool/home offsite-server:tank/offsite/$(hostname)/home
# Or send to cloud storage (S3-compatible)
zfs send -w rpool/home@daily-$(date +%Y%m%d) | \
aws s3 cp - s3://my-bucket/backups/$(hostname)/home-$(date +%Y%m%d).zfs
# Even AWS can't read your data — keys never leave your machine
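Restore is the same pipe reversed. The object name below is an example following the upload pattern above; the received dataset stays ciphertext until you load the original key:

```shell
# Pull the raw (still-encrypted) stream back and receive it into a new dataset
# (object name is an example — match whatever the upload above produced)
aws s3 cp s3://my-bucket/backups/web-prod-01/home-20240115.zfs - | \
    zfs recv rpool/home-restored

# The data is unreadable until the original encryption key is loaded
zfs load-key rpool/home-restored
zfs mount rpool/home-restored
```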
Step 5: Disaster recovery runbook
# SCENARIO: Production server is dead. Recover from backup.
# 1. Boot a new kldload system (USB or VM)
# 2. Create the pool
zpool create -o ashift=12 rpool /dev/vda2
# 3. Receive the backup ("@latest" is a placeholder — use the newest
#    snapshot name, found with: zfs list -t snapshot)
ssh backup-server "zfs send -R tank/backups/web-prod-01/ROOT@latest" | \
zfs recv -F rpool/ROOT
ssh backup-server "zfs send -R tank/backups/web-prod-01/home@latest" | \
zfs recv -F rpool/home
ssh backup-server "zfs send -R tank/backups/web-prod-01/srv@latest" | \
zfs recv -F rpool/srv
# 4. Install bootloader
krecovery reinstall-bootloader /dev/vda
# 5. Reboot — you're back online
# Data loss = time since last syncoid run (worst case: 1 hour)
Server dies at 3 AM. Spin up a new one. Pull the backup. Boot it. Back online before breakfast. That's ZFS replication.
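A backup you have never restored is a hope, not a plan. A periodic drill on the backup server proves the replicated data is actually readable; a sketch, assuming the dataset names used above:

```shell
#!/bin/sh
# Monthly restore drill: clone the newest replicated snapshot and check
# that it mounts and lists. Clones are copy-on-write, so this costs
# almost no space and no network.
set -eu

SRC=tank/backups/web-prod-01/home
SCRATCH=tank/restore-test

# Newest snapshot on the backup copy
LATEST=$(zfs list -H -t snapshot -o name -s creation "$SRC" | tail -1)

zfs clone "$LATEST" "$SCRATCH"
ls "$(zfs get -H -o value mountpoint "$SCRATCH")" > /dev/null \
    && echo "restore drill OK: $LATEST"

zfs destroy "$SCRATCH"
```

Run it from cron on the backup server and alert on a nonzero exit; a drill that fails is a backup that would have failed at 3 AM.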
The 3-2-1 rule with ZFS
3 copies
Production + local backup server + offsite (cloud or remote site). Sanoid manages snapshots. Syncoid replicates.
2 media types
NVMe/SSD on production. HDD on backup server. Or cloud object storage for offsite. Different failure modes.
1 offsite
Encrypted zfs send -w to a remote location. Building burns down? Data survives. Keys stay with you.
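The missing piece of any 3-2-1 setup is knowing when it silently stops. Sanoid ships Nagios-style checks, and snapshot ages on the backup server tell you whether replication has stalled; the threshold below is an assumption:

```shell
# On production: sanoid's built-in health checks (exit nonzero on problems)
sanoid --monitor-snapshots   # are snapshots as fresh as the policy demands?
sanoid --monitor-health      # pool health in check form

# On the backup server: warn if the newest received snapshot is stale
NEWEST=$(zfs list -H -p -t snapshot -o creation -s creation \
    tank/backups/web-prod-01/home | tail -1)
AGE=$(( $(date +%s) - NEWEST ))
[ "$AGE" -lt 7200 ] || echo "WARNING: last replication was ${AGE}s ago"
```

Wire both into cron or whatever monitoring you already run. A backup chain that alerts on staleness fails loudly instead of quietly.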