kldload | your Linux construction kit

Zero to Hero — from git clone to production, every command spelled out.

This is the complete, end-to-end walkthrough. Every command. Every output. Every file. You start with nothing. You end with a custom ZFS-on-root Linux image deployed to production — cloud, on-prem, or bare metal. Follow it top to bottom. No steps skipped. No "exercise left to the reader."

Impatient? Start here.

git clone https://github.com/kldload/kldload.git
cd kldload
cp kldload.env.example kldload.env
source kldload.env
bash builder/container-build.sh

Five lines. Come back in 10 minutes. You'll have a bootable ISO with ZFS on root.
The rest of this page is for people who want to understand what just happened.

Phase 1: Get the source

Step 1.1: Clone the repo

# Start here. Everything else follows from this.
git clone https://github.com/kldload/kldload.git
cd kldload

# What you now have:
ls
# builder/          — Dockerfile + build-iso.sh (builds the ISO)
# build/            — darksite config (package lists for offline repos)
# deploy.sh         — orchestrator (build, spawn, release)
# live-build/       — chroot overlay (everything baked into the ISO)
# profiles/         — server.yaml, desktop.yaml
# kldload.env.example — template for your local config (copy to kldload.env, which is gitignored and never committed)

Step 1.2: Understand the directory structure

kldload/
├── builder/
│   ├── Dockerfile              # CentOS Stream 9 build container
│   ├── build-iso.sh            # THE build script — creates the ISO
│   └── container-build.sh      # Launches Docker, runs build-iso.sh inside
│
├── build/darksite/
│   └── config/package-sets/    # Package lists for offline repos
│       ├── target-base.txt     # Core packages (89 packages)
│       ├── target-server.txt   # Server profile additions
│       └── target-desktop.txt  # Desktop profile additions (GNOME, Firefox)
│
├── live-build/config/
│   ├── includes.chroot/        # Files baked into the live ISO
│   │   ├── usr/sbin/kldload-install-target   # Main installer
│   │   ├── usr/lib/kldload-installer/lib/    # Installer libraries
│   │   │   ├── bootstrap.sh    # Package install (dnf + apt paths)
│   │   │   ├── bootloader.sh   # ZFSBootMenu + initramfs
│   │   │   ├── storage-zfs.sh  # ZFS pool + dataset creation
│   │   │   └── profiles.sh     # Profile-specific packages
│   │   ├── usr/local/bin/      # CLI tools (kst, ksnap, kbe, kdf...)
│   │   └── usr/local/share/kldload-webui/    # Web UI frontend
│   └── output/                 # Built ISOs land here
│
└── deploy.sh                   # Build + deploy orchestrator
Every file is readable bash or HTML. No compiled binaries. No vendor SDKs. cat anything and understand it.

Phase 2: Configure your build

Step 2.1: Create your environment file

# Write the config directly (or copy kldload.env.example and edit it)
cat > kldload.env <<'EOF'
# kldload build configuration
PROFILE=desktop              # desktop or server
DISTRO=centos                # centos, debian, or rhel

# RHEL only (leave blank for CentOS/Debian)
RHEL_ACTIVATION_KEY=
RHEL_ORG_ID=

# Proxmox deployment (optional)
PROXMOX_HOST=10.100.10.225
PROXMOX_NODE=fiend
PROXMOX_TOKEN_ID=root@pam!mytoken
PROXMOX_TOKEN_SECRET=your-token-here

# VM defaults
VM_MEMORY=8192
VM_CORES=4
VM_DISK_GB=40
EOF

# This file is gitignored — your credentials never leave your machine
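If you want proof rather than a promise, `git check-ignore` tells you whether git would ever pick the file up. Here is the idiom demonstrated in a throwaway repo so it runs anywhere; in the real checkout you would simply run `git check-ignore kldload.env` from the repo root:

```shell
# Throwaway repo to demonstrate the check; the pattern is identical in the kldload checkout
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
echo "kldload.env" > .gitignore
touch kldload.env
# check-ignore prints the path and exits 0 when the file is ignored
git check-ignore kldload.env && echo "ignored: credentials stay local"
# kldload.env
# ignored: credentials stay local
cd / && rm -rf "$tmp"
```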

Step 2.2: Customize the package list (optional)

# Want to add packages to the offline repo?
# Edit the package sets:
vim build/darksite/config/package-sets/target-base.txt

# Add your packages — one per line
echo "nginx" >> build/darksite/config/package-sets/target-server.txt
echo "postgresql-server" >> build/darksite/config/package-sets/target-server.txt

# These get downloaded and baked into the ISO's offline repo
# Available for install without internet on the target
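Package lists accumulate duplicates after a few rounds of `echo >>`. `sort -u` keeps them clean; here is the idiom on a scratch file (run the same command against the real `build/darksite/config/package-sets/target-server.txt`):

```shell
# Demo on a scratch file: sort in place and drop the duplicate nginx entry
f=$(mktemp)
printf 'nginx\npostgresql-server\nnginx\n' > "$f"
sort -u -o "$f" "$f"
cat "$f"
# nginx
# postgresql-server
rm -f "$f"
```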

Step 2.3: Add a custom postinstall script (optional)

# Create a postinstall.sh that runs after the base install
mkdir -p live-build/config/includes.chroot/root/darksite
cat > live-build/config/includes.chroot/root/darksite/postinstall.sh <<'POSTINSTALL'
#!/bin/bash
set -euo pipefail

echo "Running custom postinstall..."

# Install your application (CentOS/RHEL shown; use apt-get on Debian targets)
dnf install -y nginx postgresql-server
postgresql-setup --initdb
systemctl enable nginx postgresql

# Configure your app
cat > /etc/nginx/conf.d/myapp.conf <<NGINX
server {
    listen 80;
    server_name _;
    root /srv/myapp;
}
NGINX

# Create ZFS dataset for your app data
zfs create -o compression=zstd rpool/srv/myapp

# Snapshot the clean state
zfs snapshot rpool/ROOT/$(hostname)@post-install

echo "Custom postinstall complete"
POSTINSTALL
chmod +x live-build/config/includes.chroot/root/darksite/postinstall.sh
postinstall.sh is your hook into the installed system. It runs as root with full access. Install packages, configure services, create datasets — whatever you need.
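A 10-minute ISO rebuild is an expensive way to find a typo, so syntax-check the hook before you build. `bash -n` parses without executing; here is the idiom on a scratch script (point it at your real `postinstall.sh` in practice):

```shell
# Write a script via heredoc, then parse it without running it
s=$(mktemp)
cat > "$s" <<'EOF'
#!/bin/bash
set -euo pipefail
echo "hello from postinstall"
EOF
bash -n "$s" && echo "syntax OK"
# syntax OK
rm -f "$s"
```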

Phase 3: Build the ISO

Step 3.1: Build the Docker builder image (first time only)

# This creates the build environment — CentOS Stream 9 with lorax, squashfs, xorriso
docker build -t kldload-live-builder:latest -f builder/Dockerfile builder/

# Takes ~2 minutes. Only needed once (or after Dockerfile changes).
# Output:
# Successfully tagged kldload-live-builder:latest

Step 3.2: Build the ISO

# Source your config
source kldload.env

# Build
bash builder/container-build.sh

# What happens inside:
# 1. Downloads all RPMs for offline repo (~974 packages)
# 2. Creates CentOS rootfs via dnf --installroot
# 3. Installs GNOME, Firefox, ZFS, WireGuard, tools
# 4. Builds ZFS DKMS kernel module
# 5. Copies your custom scripts and webui
# 6. Creates squashfs + ISO with EFI boot
# 7. Generates SHA256 checksum
#
# Takes ~10 minutes. Output:
# ISO: live-build/output/kldload-free-centos-amd64-20260321.iso
# Size: 2.2G
# SHA256: 1080f7917d61aabe3c6fd6aeac4...

Step 3.3: Verify the ISO

# Check the ISO exists and has the ZFS module
ls -lh live-build/output/*.iso
# -rw-r--r--. 1 root root 2.2G kldload-free-centos-amd64-20260321.iso

# Verify checksum
cd live-build/output
sha256sum -c kldload-free-centos-amd64-20260321.iso.sha256
# kldload-free-centos-amd64-20260321.iso: OK

# Verify ZFS module is inside (optional but recommended; loop mounts need root)
mkdir -p /tmp/verify && sudo mount -o loop,ro *.iso /tmp/verify
unsquashfs -l /tmp/verify/LiveOS/squashfs.img | grep zfs.ko
# squashfs-root/usr/lib/modules/5.14.0-687.el9.x86_64/extra/zfs.ko.xz
sudo umount /tmp/verify && rm -rf /tmp/verify
Always verify. The build script now dies if the ZFS module is missing, but trust your own eyes. That's the kldload way.
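The `sha256sum -c` pattern works for any artifact you ship, not just the ISO. A self-contained generate-then-verify demo on a scratch file (substitute your ISO path in practice):

```shell
# Generate a checksum file, then verify against it
f=$(mktemp)
echo "pretend ISO contents" > "$f"
sha256sum "$f" > "$f.sha256"
sha256sum -c "$f.sha256"    # prints "<file>: OK" and exits 0 on a match
rm -f "$f" "$f.sha256"
```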

Phase 4: Deploy locally (KVM / bare metal)

Option A: Write to USB (bare metal)

# One-liner: download, burn, eject — replace /dev/sdX with your USB device
curl -L -o /tmp/kldload.iso https://dl.kldload.com/kldload-free-latest.iso && \
  dd if=/tmp/kldload.iso of=/dev/sdX bs=4M status=progress oflag=sync conv=fsync && \
  sync && eject /dev/sdX

# Or if you already have the ISO:
dd if=kldload-free-*.iso of=/dev/sdX bs=4M status=progress oflag=sync conv=fsync && sync

# Boot from USB. GNOME desktop loads. Web UI opens in Firefox.
# Pick your distro (CentOS/Debian/RHEL/Rocky). Pick your profile. Install. Done.

Option B: Deploy to KVM (local VM)

# Create a VM disk
qemu-img create -f qcow2 /var/lib/libvirt/images/my-kldload.qcow2 40G

# Copy ISO to libvirt images
cp live-build/output/kldload-free-centos-amd64-20260321.iso \
   /var/lib/libvirt/images/

# Create and boot the VM
virt-install \
  --name my-kldload \
  --ram 8192 \
  --vcpus 4 \
  --cpu host-passthrough \
  --os-variant centos-stream9 \
  --machine q35 \
  --boot uefi,loader.secure=no,cdrom,hd \
  --disk /var/lib/libvirt/images/my-kldload.qcow2,bus=virtio \
  --cdrom /var/lib/libvirt/images/kldload-free-centos-amd64-20260321.iso \
  --network default \
  --graphics vnc,listen=0.0.0.0 \
  --noautoconsole

# Connect via VNC or virt-manager
# GNOME desktop loads → webui opens → install → VM powers off

# Eject ISO and boot from disk
virsh change-media my-kldload sda --eject --config
virsh start my-kldload

# You now have a ZFS-on-root CentOS VM

Phase 5: Deploy to Proxmox

Step 5.1: Upload ISO to Proxmox

# SCP the ISO to Proxmox storage
scp live-build/output/kldload-free-centos-amd64-20260321.iso \
    root@10.100.10.225:/var/lib/vz/template/iso/

# Or use the Proxmox API
curl -k -X POST "https://10.100.10.225:8006/api2/json/nodes/fiend/storage/local/upload" \
  -H "Authorization: PVEAPIToken=root@pam!mytoken=YOUR-TOKEN" \
  -F "content=iso" \
  -F "filename=@live-build/output/kldload-free-centos-amd64-20260321.iso"

Step 5.2: Create VM via API

# Create the VM
pvesh create /nodes/fiend/qemu \
  --vmid 100 \
  --name kldload-node-01 \
  --memory 8192 \
  --cores 4 \
  --cpu host \
  --machine q35 \
  --bios ovmf \
  --efidisk0 local-lvm:1 \
  --scsi0 local-lvm:40 \
  --scsihw virtio-scsi-single \
  --ide2 local:iso/kldload-free-centos-amd64-20260321.iso,media=cdrom \
  --net0 virtio,bridge=vmbr0 \
  --boot order=ide2 \
  --agent enabled=1

# Start the VM
pvesh create /nodes/fiend/qemu/100/status/start

# Open the console in Proxmox web UI
# Install via the kldload webui
# VM powers off when done

# Remove ISO and boot from disk
pvesh set /nodes/fiend/qemu/100/config --delete ide2
pvesh set /nodes/fiend/qemu/100/config --boot order=scsi0
pvesh create /nodes/fiend/qemu/100/status/start

Phase 6: Deploy to cloud (AWS / Azure / GCP)

Step 6.1: Convert ISO to cloud image

# First, install to a qcow2 disk using QEMU (headless)
qemu-img create -f qcow2 kldload-cloud.qcow2 40G

# Boot the ISO and install to the disk
# (if the ISO boots EFI-only, add OVMF firmware, e.g. -bios /usr/share/OVMF/OVMF.fd)
qemu-system-x86_64 \
  -m 4096 \
  -cpu host \
  -enable-kvm \
  -drive file=kldload-cloud.qcow2,format=qcow2,if=virtio \
  -cdrom live-build/output/kldload-free-centos-amd64-20260321.iso \
  -boot d \
  -vnc :1 \
  -daemonize

# Connect via VNC (localhost:5901), install, wait for poweroff
# The qcow2 is now a bootable ZFS-on-root disk image. Confirm before uploading:
qemu-img info kldload-cloud.qcow2

Step 6.2: Upload to AWS (AMI)

# Convert qcow2 to raw
qemu-img convert -f qcow2 -O raw kldload-cloud.qcow2 kldload-cloud.raw

# Upload to S3
aws s3 cp kldload-cloud.raw s3://my-bucket/kldload-cloud.raw

# Import as AMI
aws ec2 import-image \
  --description "kldload ZFS-on-root CentOS 9" \
  --disk-containers "Description=kldload,Format=raw,UserBucket={S3Bucket=my-bucket,S3Key=kldload-cloud.raw}"

# Wait for import (takes 10-30 min)
aws ec2 describe-import-image-tasks --import-task-ids import-ami-XXXXX

# Once complete, launch an instance
aws ec2 run-instances \
  --image-id ami-XXXXX \
  --instance-type m5.large \
  --key-name my-key \
  --security-group-ids sg-XXXXX \
  --subnet-id subnet-XXXXX

# SSH in — ZFS on root, in AWS
ssh admin@ec2-XX-XX-XX-XX.compute.amazonaws.com
sudo kst

Step 6.3: Upload to Azure (VHD)

# Convert qcow2 to VHD (fixed size, required by Azure)
qemu-img convert -f qcow2 -O vpc -o subformat=fixed,force_size \
  kldload-cloud.qcow2 kldload-cloud.vhd

# Upload to Azure blob storage
az storage blob upload \
  --account-name mystorageaccount \
  --container-name images \
  --type page \
  --file kldload-cloud.vhd \
  --name kldload-cloud.vhd

# Create managed image
az image create \
  --name kldload-zfs \
  --resource-group mygroup \
  --source "https://mystorageaccount.blob.core.windows.net/images/kldload-cloud.vhd" \
  --os-type Linux

# Create VM from image
az vm create \
  --name kldload-node-01 \
  --resource-group mygroup \
  --image kldload-zfs \
  --size Standard_D4s_v3 \
  --admin-username azureuser \
  --ssh-key-values ~/.ssh/id_ed25519.pub

Step 6.4: Upload to GCP

# Convert qcow2 to raw, then package as a gzipped tar
# (GCP requires the file inside to be named disk.raw, GNU tar format; -S keeps it sparse)
qemu-img convert -f qcow2 -O raw kldload-cloud.qcow2 disk.raw
tar --format=oldgnu -Sczf kldload-cloud.tar.gz disk.raw

# Upload to GCS
gsutil cp kldload-cloud.tar.gz gs://my-bucket/

# Create image
gcloud compute images create kldload-zfs \
  --source-uri gs://my-bucket/kldload-cloud.tar.gz \
  --guest-os-features UEFI_COMPATIBLE

# Create instance
gcloud compute instances create kldload-node-01 \
  --image kldload-zfs \
  --machine-type n2-standard-4 \
  --zone us-west1-a

Phase 7: Deploy with Terraform

Step 7.1: Terraform for AWS

# main.tf — deploy kldload instances to AWS
provider "aws" {
  region = "us-west-2"
}

# Reference your imported AMI
data "aws_ami" "kldload" {
  most_recent = true
  owners      = ["self"]
  filter {
    name   = "name"
    values = ["kldload-zfs-*"]
  }
}

# Deploy instances
resource "aws_instance" "kldload_node" {
  count         = 3
  ami           = data.aws_ami.kldload.id
  instance_type = "m5.large"
  key_name      = "my-key"

  tags = {
    Name = "kldload-node-${count.index + 1}"
  }
}

output "instance_ips" {
  value = aws_instance.kldload_node[*].public_ip
}
# Deploy
terraform init
terraform plan
terraform apply

# Output:
# instance_ips = [
#   "54.201.123.45",
#   "54.201.123.46",
#   "54.201.123.47",
# ]

# SSH into any node
ssh admin@54.201.123.45
sudo kst
# Pool: rpool ONLINE
# ZFS on root. In AWS. From your custom image.

Step 7.2: Terraform for Proxmox

# main.tf — deploy kldload to Proxmox
terraform {
  required_providers {
    proxmox = {
      source  = "telmate/proxmox"
      version = "~> 3.0"
    }
  }
}

variable "proxmox_token" {
  type      = string
  sensitive = true
}

provider "proxmox" {
  pm_api_url          = "https://10.100.10.225:8006/api2/json"
  pm_api_token_id     = "root@pam!mytoken"
  pm_api_token_secret = var.proxmox_token
  pm_tls_insecure     = true
}

resource "proxmox_vm_qemu" "kldload_node" {
  count       = 3
  name        = "kldload-node-${count.index + 1}"
  target_node = "fiend"
  iso         = "local:iso/kldload-free-centos-amd64-20260321.iso"
  cores       = 4
  memory      = 8192
  machine     = "q35"
  bios        = "ovmf"

  disk {
    storage = "local-lvm"
    size    = "40G"
    type    = "scsi"
  }

  network {
    bridge = "vmbr0"
    model  = "virtio"
  }
}

Phase 8: Post-deploy operations

Step 8.1: Verify ZFS on every node

# SSH into each node and verify
ssh admin@node-1 "sudo kst"

# Expected output:
# kldload (build abc1234)
# Pool    • rpool ONLINE (No known data errors)
# Root    1.54G used / 34.6G available (compression: 1.79x)
# Snapshots 0 total
# Boot envs 1 available
# Services ✓ sshd ✓ zfs-zed ✓ sanoid.timer

Step 8.2: Take a golden snapshot

# On each node — snapshot the clean installed state
sudo ksnap
# Snapshot: rpool/ROOT/kldload-node@manual-20260321-070000
# Snapshot: rpool/home@manual-20260321-070000
# Snapshot: rpool/srv@manual-20260321-070000

# This is your rollback point. Anything goes wrong, come back here.
sudo kbe list
# NAME                          CREATED              USED
# rpool/ROOT/kldload-node       2026-03-21 07:00     1.54G

Step 8.3: Set up replication

# Install sanoid/syncoid (already included)
# Configure automatic replication to a backup server

# Initial full send
sudo zfs snapshot -r rpool@backup-base
sudo zfs send -R rpool@backup-base | ssh backup-server "sudo zfs recv -F tank/backups/node-1"

# Cron for hourly incremental
echo '0 * * * * root syncoid --recursive --no-sync-snap rpool backup-server:tank/backups/$(hostname) >> /var/log/syncoid.log 2>&1' | \
  sudo tee /etc/cron.d/kldload-backup

# You now have hourly block-level backups. Only changed blocks are sent.
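The single quotes around that cron line are load-bearing: they stop your interactive shell from expanding `$(hostname)` at install time, so the literal text lands in `/etc/cron.d` and cron's `/bin/sh` expands it on every run. A quick demo of the difference:

```shell
# Single quotes: $(hostname) survives as literal text, expanded later by cron's shell
echo 'backup-server:tank/backups/$(hostname)'
# backup-server:tank/backups/$(hostname)

# Double quotes would bake in THIS machine's hostname at install time instead
echo "expanded now: $(hostname)"
```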

Step 8.4: Upgrade with rollback safety

# Safe upgrade — snapshot before, upgrade, rollback if broken
sudo kupgrade

# What kupgrade does:
# 1. kbe create pre-upgrade-20260321
# 2. dnf upgrade -y (or apt upgrade)
# 3. dkms rebuild zfs for new kernel (if kernel changed)
# 4. update-initramfs / dracut rebuild
# 5. Verify ZFS module loads

# If something breaks after reboot:
# 1. Reboot
# 2. At ZFSBootMenu, select pre-upgrade snapshot
# 3. Boot. You're back. 15 seconds.
# Or from a running system:
sudo kbe rollback pre-upgrade-20260321

Phase 9: Scale it

Step 9.1: Clone for fleet deployment

# You have one working node. Now make 10 more.

# On Proxmox — clone the VM
for i in $(seq 2 10); do
  pvesh create /nodes/fiend/qemu/100/clone \
    --newid $((100 + i)) \
    --name "kldload-node-$(printf '%02d' $i)" \
    --full true
  pvesh create /nodes/fiend/qemu/$((100 + i))/status/start
done

# On KVM — clone from ZFS snapshot
for i in $(seq 2 10); do
  sudo zfs clone rpool/vms/golden@base rpool/vms/node-$(printf '%02d' $i)
  # Each clone: instant, zero extra space
done

# On AWS — launch more instances from the same AMI
aws ec2 run-instances \
  --image-id ami-XXXXX \
  --count 10 \
  --instance-type m5.large
One image. Any platform. Any scale. Same ZFS. Same tools. Same rollback. That's kldload.

Step 9.2: Wire up WireGuard mesh

# On node-1 (hub)
# Note: "sudo cat > file" fails — the redirect runs as your user. Use sudo tee.
wg genkey | sudo tee /etc/wireguard/wg1.key | wg pubkey | sudo tee /etc/wireguard/wg1.pub
sudo chmod 600 /etc/wireguard/wg1.key
sudo tee /etc/wireguard/wg1.conf >/dev/null <<WG
[Interface]
Address = 10.78.0.1/16
ListenPort = 51821
PrivateKey = $(sudo cat /etc/wireguard/wg1.key)
WG
sudo systemctl enable --now wg-quick@wg1

# On each additional node
wg genkey | sudo tee /etc/wireguard/wg1.key | wg pubkey | sudo tee /etc/wireguard/wg1.pub
sudo chmod 600 /etc/wireguard/wg1.key
sudo tee /etc/wireguard/wg1.conf >/dev/null <<WG
[Interface]
Address = 10.78.0.N/32
PrivateKey = $(sudo cat /etc/wireguard/wg1.key)

[Peer]
PublicKey = HUB_PUBLIC_KEY_HERE
Endpoint = HUB_IP:51821
AllowedIPs = 10.78.0.0/16
PersistentKeepalive = 25
WG
sudo systemctl enable --now wg-quick@wg1

# Add each node as a peer on the hub
sudo wg set wg1 peer NODE_PUBLIC_KEY allowed-ips 10.78.0.N/32

# All nodes can now communicate over encrypted WireGuard mesh
ping 10.78.0.2  # from hub to node-2

Phase 10: What you have now

You started with git clone. You now have:

A custom Linux image with ZFS on root, boot environments, automatic snapshots, 30+ CLI tools, and your own packages — deployed to bare metal, KVM, Proxmox, AWS, Azure, or GCP. Every node is identical. Every node can roll back a bad upgrade in 15 seconds. Every node replicates to a backup server hourly. The whole thing is auditable bash scripts you can read and modify.

You built the image once. You deployed it everywhere. Same artifact. Every platform. Every time.

Total time from git clone to production: ~30 minutes. 10 minutes to build. 5 minutes to install. 5 minutes to verify. 10 minutes to deploy to cloud/Proxmox/fleet. The rest of your career to appreciate not having to do it again.