kldload + Packer + Terraform
kldload produces disk images. Packer customizes them. Terraform deploys them. Use kldload Core as your Packer base image — you get ZFS on root with zero manual setup. Then your existing cloud workflow handles the rest.
Without kldload: manual ZFS setup (2 hours) → Packer → Terraform
With kldload: kldload Core (2 minutes) → Packer → Terraform
Your workflow doesn't change. Just the input image.
Where kldload fits
There are tools in this space. Good ones. But they each solve a different slice of the problem.
| Approach | Bare metal | ZFS | WireGuard | eBPF | GPU | Single node | No infra needed |
|---|---|---|---|---|---|---|---|
| kldload | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Cloud image builders | — | — | — | — | — | ✓ | ✓ |
| Network provisioners | ✓ | — | — | — | — | — | — |
| PXE-based tools | ✓ | — | — | — | — | — | — |
| Lifecycle managers | ✓ | — | — | — | — | — | — |
The gap
Cloud image builders make VM images but can’t touch bare metal. Network provisioners and PXE tools handle bare metal but require dedicated infrastructure — DHCP servers, databases, workflow engines — just to install an OS. Lifecycle managers are built for enterprise datacenters with hundreds of nodes.
None of them are built for the person with one machine, three servers, or a small production cluster: someone who wants a production-quality stack running on real hardware without standing up provisioning infrastructure first.
The Pipeline
# 1. Build a kldload Core image
./deploy.sh clean && ./deploy.sh builder-image
PROFILE=desktop ./deploy.sh build

# 2. Install Core profile to a VM (or use unattended)
virt-install --name kldload-base --ram 4096 --vcpus 4 \
  --disk size=40,format=qcow2 --cdrom kldload-free-*.iso \
  --os-variant centos-stream9 --boot uefi --noautoconsole

# 3. Export after install
kexport qcow2   # KVM / Proxmox / OpenStack
kexport raw     # AWS (import-image)
kexport vhd     # Azure / Hyper-V
kexport vmdk    # VMware ESXi
Packer — Add Your App Layer
# kldload-base.pkr.hcl
source "qemu" "kldload" {
  disk_image   = true
  iso_url      = "kldload-base.qcow2"  # your kldload Core export
  format       = "qcow2"
  ssh_username = "admin"
  qemuargs     = [["-bios", "/usr/share/OVMF/OVMF_CODE.fd"]]
}

build {
  sources = ["source.qemu.kldload"]

  provisioner "shell" {
    inline = [
      # ZFS is already on root — recursive snapshot before changes
      "sudo zfs snapshot -r rpool@pre-packer",

      # Install your app
      "sudo apt-get update",
      "sudo apt-get install -y nginx postgresql redis",
      "sudo systemctl enable nginx postgresql redis",

      # Create ZFS datasets for your data (-p creates the rpool/srv parent)
      "sudo zfs create -p -o mountpoint=/srv/app rpool/srv/app",
      "sudo zfs create -p -o mountpoint=/srv/db -o recordsize=8k rpool/srv/db",

      # Snapshot after — instant rollback point
      "sudo zfs snapshot -r rpool@post-packer",
    ]
  }
}
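With the template saved as kldload-base.pkr.hcl, the build itself is the stock Packer workflow. This sketch assumes the HashiCorp QEMU plugin is available (run packer init first if the template declares it in a required_plugins block):

```shell
# Check the template, then build the customized image.
packer validate kldload-base.pkr.hcl
packer build kldload-base.pkr.hcl

# By default the qemu builder writes the result under ./output-kldload/
```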
Terraform — Deploy the Fleet
AWS
Export raw → upload to S3 → aws ec2 import-image → launch with Terraform. UEFI boot mode, gp3 EBS.
resource "aws_instance" "app" {
  count         = 3
  ami           = data.aws_ami.kldload.id
  instance_type = "t3.medium"
}
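The upload-and-import step before the Terraform run can be sketched with the AWS CLI. Bucket and file names are placeholders, and import-image requires the vmimport service role to be configured in the account:

```shell
# Upload the raw disk export to S3 (bucket name is a placeholder).
aws s3 cp kldload-base.raw s3://my-images/kldload-base.raw

# Import as an AMI. UEFI boot mode matches the image's firmware.
aws ec2 import-image \
  --platform Linux \
  --boot-mode uefi \
  --disk-containers "Format=raw,UserBucket={S3Bucket=my-images,S3Key=kldload-base.raw}"

# Poll until the task reports "completed"; it then shows the new AMI ID.
aws ec2 describe-import-image-tasks
```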
Azure
Export VHD → upload as page blob → az image create → deploy with Terraform.
resource "azurerm_linux_virtual_machine" "app" {
  source_image_id = azurerm_image.kldload.id
  size            = "Standard_B2ms"
}
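The Azure side of the pipeline looks like this with the az CLI; storage-account, container, and resource-group names are placeholders, and Gen2 is used because the image boots UEFI:

```shell
# Upload the VHD as a page blob (names are placeholders).
az storage blob upload \
  --account-name mystorageacct --container-name images \
  --name kldload-base.vhd --file kldload-base.vhd --type page

# Register a Gen2 (UEFI) managed image from the blob.
az image create \
  --resource-group rg-images --name kldload-core \
  --os-type Linux --hyper-v-generation V2 \
  --source https://mystorageacct.blob.core.windows.net/images/kldload-base.vhd
```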
Proxmox
qcow2 directly → template → clone with Terraform Proxmox provider. Full CoW cloning on ZFS storage.
resource "proxmox_vm_qemu" "app" {
  count  = 3
  clone  = "kldload-core-template"
  cores  = 4
  memory = 4096
}
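Creating the template the provider clones from takes a few qm commands on the Proxmox host. The VM ID (9000) and storage name (local-zfs) below are assumptions; adjust them to your node:

```shell
# Create an empty UEFI VM shell (ID 9000 is arbitrary).
qm create 9000 --name kldload-core-template --memory 4096 --cores 4 \
  --net0 virtio,bridge=vmbr0 --bios ovmf --scsihw virtio-scsi-pci

# Import the qcow2 export onto ZFS-backed storage and attach it.
qm importdisk 9000 kldload-base.qcow2 local-zfs
qm set 9000 --scsi0 local-zfs:vm-9000-disk-0 --boot order=scsi0

# Convert to a template; clones from it are CoW on ZFS storage.
qm template 9000
```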
What You Get For Free
None of this requires configuration. kldload Core does it during the 2-minute install:
✓ ZFS on root
Proper ashift, lz4 compression, acltype, xattr=sa. Not bolted on.
✓ Boot environments
ZFSBootMenu. Roll back any upgrade in 30 seconds.
✓ Dataset hierarchy
Separate /home, /var/log, /srv — independent snapshots per path.
✓ DKMS + initramfs
ZFS module built for the installed kernel. Initramfs configured. Hostid set.
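The snapshots baked in during the Packer step ship with every deployed instance, so rollback is available at runtime with no extra setup. A sketch, using the snapshot names from the Packer example (note that zfs rollback operates per dataset; for the root filesystem itself, ZFSBootMenu can boot directly from a snapshot instead):

```shell
# List the rollback points baked into the image.
zfs list -t snapshot -o name,used,creation

# Roll one dataset back to its pre-app-layer state
# (-r also discards any snapshots newer than pre-packer on it).
sudo zfs rollback -r rpool/srv/app@pre-packer
```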