ZFS on root. Every distro. Automatic. Identical.
Every kldload install creates the same ZFS dataset hierarchy. The pool layout, compression settings, snapshot policies, and mount points are identical across Debian, CentOS, Rocky, and RHEL. Switch distros and your ZFS muscle memory transfers completely.
Dataset layout
rpool                   none
rpool/ROOT/<host>       /
rpool/home              /home
rpool/root              /root
rpool/srv               /srv
rpool/var               /var
rpool/var/log           /var/log
rpool/var/cache         /var/cache
rpool/var/tmp           /var/tmp
On rollback, /home, /var/log, and /srv stay untouched. Your user data, your logs, and your service data survive the rollback.
This is deliberate. Rolling back the OS shouldn't destroy your work.
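As a sketch of what that separation buys you (dataset names follow the layout above; the @pre-upgrade snapshot name is hypothetical):

```shell
# Snapshot the root dataset before a risky change
zfs snapshot rpool/ROOT/<host>@pre-upgrade

# ...upgrade goes wrong...

# Roll the OS back. rpool/home, rpool/var/log, and rpool/srv are
# separate datasets, so this rollback never touches them.
zfs rollback -r rpool/ROOT/<host>@pre-upgrade
```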
Pool properties
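The exact values the installer sets can be read off any running system; this sketch uses standard OpenZFS commands and property names:

```shell
# Properties explicitly set (source "local") on the pool's datasets
zfs get -r -s local compression,atime,mountpoint,canmount rpool

# Pool-level properties and enabled features
zpool get all rpool
```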
ZFS encryption
AES-256-GCM — native ZFS encryption
Optional at install time. Not LUKS, not dm-crypt — native ZFS encryption.
Per-dataset. Hardware-accelerated on modern CPUs (AES-NI).
Passphrase entered at ZFSBootMenu before the OS loads.
Encrypted snapshots and replication work transparently (zfs send -w sends raw ciphertext).
Overhead: 5–15% for sequential I/O; negligible for random I/O.
Recovery: none. Forget the passphrase, lose the data. This is by design.
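A quick way to confirm which datasets are encrypted and whether their keys are loaded, using standard OpenZFS properties (a sketch, not installer output):

```shell
# Cipher and key status per dataset; encrypted datasets report
# encryption=aes-256-gcm and keystatus=available once unlocked
zfs get -r encryption,keystatus rpool
```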
Replication
ZFS send/recv — block-level replication
Replicate datasets to any ZFS target: another kldload node, TrueNAS, any Linux with OpenZFS. Initial full send, then incremental (only changed blocks). Over SSH, over WireGuard, over anything.
# Full replication
zfs send -R rpool/srv@snap | ssh backup "zfs recv -F backup/srv"
# Incremental (only changes since last sync)
zfs send -R -I @old rpool/srv@new | ssh backup "zfs recv backup/srv"
# Encrypted replication (ciphertext only, receiver can't read)
zfs send -w rpool/srv@snap | ssh backup "zfs recv backup/srv"
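The incremental pattern is easy to wrap in a script. In this sketch, plan_sync, DATASET, TARGET, and REMOTE are hypothetical names; the function only prints the pipeline so the logic can be checked without a pool. Drop the echos to execute for real:

```shell
DATASET="rpool/srv"
TARGET="backup/srv"
REMOTE="backup"

# Print the commands for an incremental sync from @$1 to @$2
plan_sync() {
  old="$1"; new="$2"
  echo "zfs snapshot ${DATASET}@${new}"
  # -I replicates every intermediate snapshot between old and new
  echo "zfs send -R -I @${old} ${DATASET}@${new} | ssh ${REMOTE} 'zfs recv ${TARGET}'"
}

plan_sync nightly-01 nightly-02
```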