Migrating to kldloadOS
Every migration follows the same pattern: back up the old system, install kldloadOS, restore the data. The details change depending on what you are migrating from and what you are migrating. This guide covers all of it.
The migration philosophy: you are not "converting" your existing system. You are building a clean, ZFS-rooted system and moving your data and config into it. Clean installs are always more reliable than in-place upgrades. kldloadOS makes clean installs fast enough that this is the right call every time.
From ext4/btrfs to ZFS
The idea
You cannot convert an ext4 or btrfs filesystem to ZFS in place. You need a second disk (or enough free space for a second partition): install kldloadOS there, then move your data over. Think of it like moving house — you cannot renovate the foundation while living in the building.
Step 1 — Inventory your data
# What filesystems do you have?
df -hT
# How much data?
du -sh /home /var/lib /opt /srv /etc 2>/dev/null
# What services are running?
systemctl list-units --type=service --state=running
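The inventory is only useful if it survives the reinstall. A small sketch (the /mnt/backup path is illustrative; use whatever your backup drive is mounted at) that writes each answer to a file you can diff against the new system during verification:

```shell
# Save the inventory to the backup drive so you can compare against the
# new system later (adjust /mnt/backup to your actual mount point)
mkdir -p /mnt/backup/inventory
df -hT > /mnt/backup/inventory/filesystems.txt
du -sh /home /var/lib /opt /srv /etc 2>/dev/null > /mnt/backup/inventory/sizes.txt
systemctl list-units --type=service --state=running --no-legend \
    | awk '{print $1}' | sort > /mnt/backup/inventory/services.txt
```

After migration, `diff /mnt/backup/inventory/services.txt <(systemctl list-units --type=service --state=running --no-legend | awk '{print $1}' | sort)` shows exactly which services are missing.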
Step 2 — Full backup to external drive
# Mount your backup drive
mount /dev/sdb1 /mnt/backup
# Back up everything that matters
rsync -aAXv --progress \
--exclude='/dev/*' \
--exclude='/proc/*' \
--exclude='/sys/*' \
--exclude='/tmp/*' \
--exclude='/run/*' \
--exclude='/mnt/*' \
--exclude='/media/*' \
--exclude='/lost+found' \
/ /mnt/backup/old-system/
# Back up the package list for reference
rpm -qa --qf '%{NAME}\n' | sort > /mnt/backup/old-system/rpm-packages.txt
# or for Debian:
dpkg --get-selections | awk '{print $1}' > /mnt/backup/old-system/deb-packages.txt
Step 3 — Install kldloadOS
# Boot the kldloadOS USB
# Select your target distro (CentOS, Debian, or RHEL)
# Install to the disk — ZFS on root is automatic
# Reboot into your new system
Step 4 — Restore data
# Mount the backup drive on the new system
mount /dev/sdb1 /mnt/backup
# Restore home directories
rsync -aAXv /mnt/backup/old-system/home/ /home/
# Restore application data
rsync -aAXv /mnt/backup/old-system/var/lib/postgresql/ /var/lib/postgresql/
rsync -aAXv /mnt/backup/old-system/opt/ /opt/
rsync -aAXv /mnt/backup/old-system/srv/ /srv/
# Restore select configs (NOT all of /etc — cherry-pick what you need)
cp -a /mnt/backup/old-system/etc/ssh/ssh_host_* /etc/ssh/
cp -a /mnt/backup/old-system/etc/crontab /etc/crontab
cp -a /mnt/backup/old-system/etc/fstab.custom /etc/ # if you had custom mounts
From bare metal
Side-by-side installation
If your server has two disks (or free space for a second partition), you can install kldloadOS alongside the existing system and migrate data live — no downtime for the backup step. Boot the kldloadOS USB, install to the second disk, then rsync from the old disk while both are mounted.
Two-disk migration (preferred)
# Boot kldloadOS USB
# Install to /dev/sdb (leave /dev/sda alone — that is your old system)
# Reboot into kldloadOS on /dev/sdb
# Mount the old disk
mkdir -p /mnt/old
mount /dev/sda2 /mnt/old # adjust partition number
# Rsync data directly — no external backup needed
rsync -aAXv --progress /mnt/old/home/ /home/
rsync -aAXv --progress /mnt/old/var/lib/ /var/lib/
rsync -aAXv --progress /mnt/old/etc/ssh/ssh_host_* /etc/ssh/
# When satisfied, wipe the old disk and mirror it into your ZFS pool
wipefs -a /dev/sda
sgdisk -R /dev/sda /dev/sdb # replicate the partition layout from the new disk
sgdisk -G /dev/sda # randomize GUIDs after the copy
zpool attach rpool /dev/sdb2 /dev/sda2 # mirror the root pool
# Now you have ZFS mirrored across both disks
Single-disk migration
# You must back up to an external drive first (see the ext4/btrfs section above)
# Then install kldloadOS to the same disk (destructive — wipes the disk)
# Then restore from backup
From VMs
VM migration pattern
Do not try to convert a VM image to ZFS. Create a fresh kldloadOS VM, then move your data into it. The VM is disposable — your data is what matters.
From VMware / VirtualBox / Hyper-V
# On the OLD VM — export your data
tar czf /tmp/app-data.tar.gz /home /var/lib/myapp /etc/myapp /srv
scp /tmp/app-data.tar.gz user@new-host:/tmp/
# On the NEW kldloadOS VM — import
cd /
tar xzf /tmp/app-data.tar.gz
# Reinstall your application packages
dnf install -y postgresql-server nginx # CentOS/RHEL
# or
apt install -y postgresql nginx # Debian
From Proxmox
# On the Proxmox host — take a backup
vzdump 100 --storage local --mode snapshot
# Create a new kldloadOS VM in Proxmox
# Boot the kldloadOS ISO, install
# Then restore data from the old VM backup or rsync from the old VM directly:
rsync -aAXv --progress root@old-vm:/home/ /home/
rsync -aAXv --progress root@old-vm:/var/lib/ /var/lib/
rsync -aAXv --progress root@old-vm:/srv/ /srv/
From KVM / libvirt
# List your existing VMs
virsh list --all
# Create a new kldloadOS VM (or use deploy.sh kvm-deploy)
# Boot, install kldloadOS
# Mount the old VM disk image to copy data out
guestmount -a /var/lib/libvirt/images/old-vm.qcow2 -m /dev/sda2 /mnt/old
rsync -aAXv /mnt/old/home/ /home/
guestunmount /mnt/old
From another distro
Distro-to-distro migration
kldloadOS supports CentOS Stream 9, Debian 13 (Trixie), and RHEL 9. If you are on an older version of one of these (or Ubuntu), you are effectively doing a major version upgrade plus moving to ZFS. Do them together with a clean install.
CentOS 7/8 to kldloadOS CentOS 9
# CentOS 7/8 is EOL. Do NOT attempt an in-place upgrade.
# Inventory what you have installed
rpm -qa --qf '%{NAME}\n' | sort > /tmp/old-packages.txt
# Back up your data and config to external or network storage
# (not /tmp — it is often tmpfs and will not survive the reinstall)
rsync -aAXv /home /var/lib /etc /srv /opt /mnt/backup/migration-backup/
# Install kldloadOS (select CentOS Stream 9)
# Restore data (see steps above)
# Reinstall packages that still exist in CentOS 9
# Compare your old package list against dnf:
while read pkg; do
dnf list available "$pkg" &>/dev/null && echo "$pkg"
done < /tmp/old-packages.txt > /tmp/installable.txt
dnf install -y $(cat /tmp/installable.txt)
Debian 11/12 to kldloadOS Debian 13
# Same pattern — back up data, install fresh, restore
dpkg --get-selections | awk '{print $1}' > /tmp/old-packages.txt
# Back up data to external or network storage (not /tmp)
tar czf /mnt/backup/migration-data.tar.gz /home /var/lib /etc /srv
# Install kldloadOS (select Debian)
# Restore data
# Reinstall packages:
while read pkg; do
apt-cache show "$pkg" &>/dev/null && echo "$pkg"
done < /tmp/old-packages.txt > /tmp/installable.txt
xargs apt install -y < /tmp/installable.txt
Ubuntu to kldloadOS
# Ubuntu packages are mostly compatible with Debian.
# Choose kldloadOS Debian 13 as your target.
# Note what Ubuntu-specific PPAs you use
ls /etc/apt/sources.list.d/
# Back up your data (same pattern)
# Install kldloadOS Debian 13
# Most packages from Ubuntu main/universe exist in Debian.
# PPA packages (like some NVIDIA drivers) will need manual reinstallation.
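Before leaving Ubuntu, it helps to know exactly which installed packages came from PPAs rather than the main archive, since those are the ones with no Debian counterpart. A sketch to run on the old Ubuntu system (slow on large systems, and matching the Launchpad hostname is a heuristic, not a guarantee):

```shell
# List installed packages whose apt origin is a Launchpad PPA
# (heuristic: the policy output mentions ppa.launchpad.net)
dpkg-query -W -f='${Package}\n' | while read -r pkg; do
    apt-cache policy "$pkg" 2>/dev/null | grep -q 'ppa.launchpad.net' && echo "$pkg"
done > /tmp/ppa-packages.txt
```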
Database migration
PostgreSQL on ZFS
ZFS is excellent for databases. The key is setting the right recordsize for your workload. PostgreSQL uses 8K pages, so recordsize=8k on the data dataset eliminates read amplification. WAL logs get their own dataset with recordsize=64k for sequential write throughput.
PostgreSQL — pg_dump and pg_restore
# === ON THE OLD SERVER ===
# Dump all databases as plain SQL (simple; restores with psql)
pg_dumpall -U postgres > /tmp/all-databases.sql
# Or dump a specific database in custom format (faster restore)
pg_dump -U postgres -Fc mydb > /tmp/mydb.dump
# Transfer to new server
scp /tmp/mydb.dump root@new-server:/tmp/
# === ON THE NEW kldloadOS SERVER ===
# Create ZFS datasets with optimal recordsize
# (keep primarycache at the default 'all': PostgreSQL leans on the ARC for reads)
zfs create -o recordsize=8k -o logbias=throughput rpool/postgres-data
zfs create -o recordsize=64k rpool/postgres-wal
# Install PostgreSQL
dnf install -y postgresql-server # CentOS/RHEL
# or
apt install -y postgresql # Debian
# Point PostgreSQL at the ZFS datasets
mkdir -p /rpool/postgres-data/pgdata /rpool/postgres-wal/pg_wal
chown postgres:postgres /rpool/postgres-data/pgdata /rpool/postgres-wal/pg_wal
# Initialize with custom WAL directory
sudo -u postgres initdb \
-D /rpool/postgres-data/pgdata \
--waldir=/rpool/postgres-wal/pg_wal
# Update postgresql.conf
cat >> /rpool/postgres-data/pgdata/postgresql.conf <<'PGCONF'
full_page_writes = off # ZFS is copy-on-write, no torn pages
wal_init_zero = off # zero-filling new WAL segments is wasted work on COW
wal_recycle = off # recycling WAL segments defeats copy-on-write
PGCONF
# Point the service at the new data directory and start it
# (the stock CentOS/RHEL unit defaults to /var/lib/pgsql/data)
mkdir -p /etc/systemd/system/postgresql.service.d
cat > /etc/systemd/system/postgresql.service.d/pgdata.conf <<'EOF'
[Service]
Environment=PGDATA=/rpool/postgres-data/pgdata
EOF
systemctl daemon-reload
systemctl start postgresql
# Restore all databases
sudo -u postgres psql < /tmp/all-databases.sql
# Or restore a specific database
sudo -u postgres createdb mydb
pg_restore -U postgres -d mydb /tmp/mydb.dump
MySQL / MariaDB
# === ON THE OLD SERVER ===
mysqldump --all-databases --single-transaction > /tmp/all-mysql.sql
scp /tmp/all-mysql.sql root@new-server:/tmp/
# === ON THE NEW kldloadOS SERVER ===
zfs create -o recordsize=16k -o primarycache=metadata rpool/mysql-data
dnf install -y mariadb-server # CentOS/RHEL
# Point datadir at ZFS
systemctl stop mariadb
rsync -aAXv /var/lib/mysql/ /rpool/mysql-data/
sed -i 's|datadir=.*|datadir=/rpool/mysql-data|' /etc/my.cnf.d/mariadb-server.cnf
systemctl start mariadb
mysql < /tmp/all-mysql.sql
Docker migration
Docker on ZFS
kldloadOS can run Docker with the ZFS storage driver. Every container layer and volume becomes a ZFS dataset. You get snapshots, checksums, and compression for free. The migration path: export your images and volumes from the old system, install kldloadOS, configure the ZFS Docker driver, import everything.
Export from the old system
# Save all images
mkdir -p /tmp/docker-images
docker images --format '{{.Repository}}:{{.Tag}}' | while read -r img; do
filename=$(echo "$img" | tr '/:' '_')
docker save "$img" -o "/tmp/docker-images/${filename}.tar"
done
# Back up named volumes
docker volume ls --format '{{.Name}}' | while read vol; do
docker run --rm -v "${vol}:/data" -v /tmp/docker-volumes:/backup \
alpine tar czf "/backup/${vol}.tar.gz" -C /data .
done
# Export compose files (flatten the path into the filename so files that
# are all named docker-compose.yml do not overwrite each other)
mkdir -p /tmp/docker-compose
find / -name 'docker-compose*.yml' 2>/dev/null | while read -r f; do
cp "$f" "/tmp/docker-compose/$(echo "$f" | tr '/' '_')"
done
# Transfer everything
rsync -aAXv /tmp/docker-images/ root@new-server:/tmp/docker-images/
rsync -aAXv /tmp/docker-volumes/ root@new-server:/tmp/docker-volumes/
rsync -aAXv /tmp/docker-compose/ root@new-server:/tmp/docker-compose/
Import on kldloadOS
# Create ZFS dataset for Docker
zfs create -o mountpoint=/var/lib/docker rpool/docker
# Install Docker (assumes the upstream Docker CE repository is already configured)
dnf install -y docker-ce docker-ce-cli containerd.io # CentOS/RHEL
# or
apt install -y docker-ce docker-ce-cli containerd.io # Debian
# Configure ZFS storage driver
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
"storage-driver": "zfs"
}
EOF
systemctl enable --now docker
# Load images
for img in /tmp/docker-images/*.tar; do
docker load -i "$img"
done
# Restore volumes
for vol in /tmp/docker-volumes/*.tar.gz; do
volname=$(basename "$vol" .tar.gz)
docker volume create "$volname"
docker run --rm -v "${volname}:/data" -v /tmp/docker-volumes:/backup \
alpine tar xzf "/backup/${volname}.tar.gz" -C /data
done
# Start your compose stacks ('docker compose up -d' reads only
# docker-compose.yml; pass -f explicitly for each other exported file)
cd /tmp/docker-compose
docker compose up -d
Config migration
Do not copy all of /etc
Copying the entire /etc from an old system to a new one will break things. Different distro versions have different config formats, different default users, different systemd units. Cherry-pick the configs you actually customized. Leave everything else at the new system defaults.
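If you are unsure which files you actually customized, the package manager can tell you: it records the shipped checksum of every config file. A sketch of both variants (exact output format varies by version, so treat the results as a starting list, not gospel):

```shell
# CentOS/RHEL: rpm -V flags modified files; a '5' in the first column
# means the file's checksum differs from the packaged original
rpm -Va 2>/dev/null | awk '$1 ~ /5/ {print $NF}' | grep '^/etc' | sort -u
# Debian: debsums (apt install debsums) lists changed conffiles directly
debsums -ce 2>/dev/null
```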
User accounts
# On the old system — extract human users (UID >= 1000)
awk -F: '$3 >= 1000 && $3 < 65534' /etc/passwd > /tmp/users-passwd.txt
# /etc/shadow has no UID field, so match by the usernames collected above
awk -F: 'NR==FNR {u[$1]; next} $1 in u' /tmp/users-passwd.txt /etc/shadow > /tmp/users-shadow.txt
# /etc/group: field 3 is the GID
awk -F: '$3 >= 1000 && $3 < 65534' /etc/group > /tmp/users-group.txt
# On the new system — append (do NOT overwrite)
cat /tmp/users-passwd.txt >> /etc/passwd
cat /tmp/users-shadow.txt >> /etc/shadow
cat /tmp/users-group.txt >> /etc/group
# Restore home directories
rsync -aAXv /mnt/backup/home/ /home/
SSH keys
# Host keys — keeps the server fingerprint the same (no "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED")
cp -a /mnt/backup/etc/ssh/ssh_host_* /etc/ssh/
chmod 600 /etc/ssh/ssh_host_*_key
chmod 644 /etc/ssh/ssh_host_*_key.pub
systemctl restart sshd
# User keys come with home directory restore
# Authorized keys are in ~/.ssh/authorized_keys (already restored with /home)
Cron jobs
# System crontab
cp /mnt/backup/etc/crontab /etc/crontab
# User crontabs
cp -a /mnt/backup/var/spool/cron/crontabs/* /var/spool/cron/crontabs/ 2>/dev/null # Debian
cp -a /mnt/backup/var/spool/cron/* /var/spool/cron/ 2>/dev/null # CentOS/RHEL
# Systemd timers (modern replacement for cron)
ls /mnt/backup/etc/systemd/system/*.timer
# Copy any custom timer/service pairs you created
cp /mnt/backup/etc/systemd/system/my-backup.* /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now my-backup.timer
Firewall rules
# kldloadOS uses nftables by default
# If coming from iptables — export and translate
iptables-save > /tmp/old-iptables.rules
iptables-restore-translate -f /tmp/old-iptables.rules > /tmp/new-nftables.rules
# Review the translated rules, then apply:
nft -f /tmp/new-nftables.rules
# If coming from firewalld — it still works on kldloadOS CentOS
cp -a /mnt/backup/etc/firewalld/ /etc/firewalld/
systemctl restart firewalld
# If coming from ufw (Ubuntu) — translate to nftables manually
# ufw rules are in /etc/ufw/user.rules — read and recreate with nft
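As an illustration of that manual translation, a rule like `ufw allow 22/tcp` becomes a single nftables rule. The `inet filter`/`input` table and chain names below are assumptions about the default ruleset; check yours with `nft list ruleset` and substitute accordingly:

```shell
# ufw "allow 22/tcp" -> nftables equivalent (table/chain names assumed)
nft add rule inet filter input tcp dport 22 accept
# confirm the rule landed where you expected
nft list chain inet filter input
```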
DNS / network cutover
Zero-downtime switchover
If the old server and new server are on the same network, the cleanest cutover is: bring up the new server with a temporary IP, verify everything works, then swap IPs. Total downtime: the time it takes to change two IP addresses (seconds, not minutes).
Same IP switchover
# === PREPARATION (both servers running) ===
# New server uses a temporary IP during setup
# Old server: 192.168.1.100 (the production IP)
# New server: 192.168.1.200 (temporary)
# Do all your data migration while both are up
rsync -aAXv root@192.168.1.100:/home/ /home/
rsync -aAXv root@192.168.1.100:/var/lib/ /var/lib/
# === CUTOVER (brief downtime) ===
# On the OLD server — remove the production IP
# (nmcli removes a specific address with the '-' property prefix)
nmcli con mod ens18 -ipv4.addresses "192.168.1.100/24"
nmcli con down ens18
# On the NEW server — take the production IP
nmcli con mod ens18 ipv4.addresses "192.168.1.100/24"
nmcli con mod ens18 ipv4.gateway "192.168.1.1"
nmcli con up ens18
# Send a gratuitous ARP to update the network
arping -U -c 3 -I ens18 192.168.1.100
# === DONE — clients reconnect automatically ===
Same hostname
# Set the hostname on the new system
hostnamectl set-hostname myserver.example.com
# If using DNS — update the A record to point to the new IP
# (or keep the same IP as above, and DNS stays the same)
# If using /etc/hosts on other machines — no change needed if IP is the same
DNS cutover with TTL trick
# 48 hours BEFORE migration — lower the DNS TTL
# In your DNS provider, change the TTL from 3600 (1 hour) to 60 (1 minute)
# Wait 48 hours for the old TTL to expire everywhere
# During migration — update the DNS A record to the new IP
# Clients will pick up the change within 60 seconds
# After migration — raise TTL back to 3600
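You can confirm the lowered TTL has actually propagated before cutting over: the second column of dig's answer section is the remaining TTL in seconds (example.com stands in for your record; querying a public resolver as well shows what cached clients see):

```shell
# The second column of the answer is the TTL countdown in seconds; once the
# old TTL has expired everywhere it should never exceed the new 60s value
dig +noall +answer example.com A
dig +noall +answer example.com A @1.1.1.1 | awk '{print "TTL remaining:", $2, "s"}'
```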
Verification checklist
What to check after migration
Do not declare victory until you have verified everything. This checklist is the difference between a smooth migration and a 3am page.
System basics
# ZFS pool is healthy
zpool status
# Snapshots exist (your rollback baseline)
zfs list -t snapshot | head -20
# Correct hostname and IP
hostnamectl
ip addr show
# Time is synced
timedatectl
# Disk space is reasonable
zfs list
df -h
Services
# All expected services are running
systemctl list-units --type=service --state=running
# Check specific critical services
systemctl status sshd
systemctl status postgresql # if applicable
systemctl status docker # if applicable
systemctl status nginx # if applicable
# No failed units
systemctl --failed
Data integrity
# Run a ZFS scrub to verify all checksums
zpool scrub rpool
zpool status rpool # watch for errors
# Verify database connectivity
sudo -u postgres psql -c "SELECT count(*) FROM pg_database;"
# Verify file counts match
find /home -type f | wc -l # compare with old system
find /var/lib/myapp -type f | wc -l
Networking
# Can reach the internet
ping -c 3 1.1.1.1
# DNS resolution works
dig example.com
# Other hosts can reach this server
# (from another machine)
ssh root@myserver.example.com "hostname"
# Firewall rules are active
nft list ruleset | head -40
# or
firewall-cmd --list-all
Backups and replication
# If using sanoid/syncoid — verify snapshot policy is active
systemctl status sanoid.timer
sanoid --monitor-snapshots # Nagios-style check that snapshots are current
# ZFS replication is working (if configured): snapshots appear on the target
ssh remote-server zfs list -t snapshot -o name backup/data
# Take a post-migration snapshot as your baseline
zfs snapshot -r rpool@post-migration-$(date +%Y%m%d)
Final sign-off
# Reboot and verify everything comes back
reboot
# After reboot — check that all services started automatically
systemctl --failed
zpool status
docker ps # if using Docker
systemctl status postgresql # if using PostgreSQL