Frequently Asked Questions
Philosophy, use cases, architecture decisions, and the questions people ask before they burn the ISO.
Philosophy
What is kldload, really?
kldload downloads official packages from upstream sources — CentOS, Debian, RHEL, Rocky — and assembles them into a single bootable ISO with ZFS on root. Nothing patched, nothing forked. The ISO is a build artifact, not a distribution. What comes out is your distro, not ours.
Four distros. Three profiles — Desktop (GNOME + tools), Server (headless + tools), Core (just ZFS, nothing else). Custom pool layouts via the installer or manual shell. Export to qcow2, VHD, VMDK, OVA. One USB. Zero internet.
Why build this instead of just installing a distro normally?
kldload isn't a distro — it's a distro build tool. You pick the distro (CentOS, Debian, RHEL, Rocky). kldload handles the parts that every distro gets wrong or makes painful: ZFS on root, kernel module compilation, DKMS signing, boot environment creation, and offline package mirroring.
kldload does the boring parts reproducibly, in ~90 seconds, on any distro. And because it's an image factory, you build once and export to any format — qcow2, raw, VHD, USB, bare metal.
What does "100% bash" mean?
The installer, the firstboot scripts, the snapshot system, the boot environment manager, the darksite builder — all of it is plain bash. No Go binary to cross-compile. No Python virtualenv to manage. No Ruby gems. No compiled operator to debug at 2am when a node won't boot.
Bash scripts are readable by any Linux admin without prior kldload knowledge. You can read every line of what ran on your machine. The Web UI server is the only exception — a single Python3 file with no dependencies beyond the standard library and websockets.
Who is kldload for?
People who run real infrastructure and are tired of the accidental complexity that accumulates around it:
Homelab operators who want a proper, reproducible system without cloud pricing.
Small infrastructure teams who need a reliable base for bare-metal servers, KVM hypervisors, or mixed workloads.
Security-conscious environments that need air-gap installs, encrypted ZFS, and no surprise outbound connections.
Anyone who has been burned by a botched upgrade with no rollback path.
Is this opinionated?
ZFS on root is the one opinion. That’s what makes boot environments and reliable rollbacks possible. Everything else is your choice.
Three profiles: Desktop and Server add quality-of-life tools (snapshots, universal package manager, web UI). Core gives you just ZFS on root with a stock distro — nothing added, nothing modified. You can even drop to a shell and build your own pool layout. The distro, the profile, the configuration management — all your call.
Why ZFS
Why ZFS instead of ext4 / btrfs / LVM?
Three things no other Linux filesystem gives you together: atomic snapshots, boot environments, and end-to-end checksumming.
Atomic snapshots mean a snapshot before every upgrade is zero-cost and takes under a second. If the upgrade breaks something, you roll back in 30 seconds without a rescue USB. Boot environments mean each OS state is a separate dataset that ZFSBootMenu can present as a boot option. Checksumming means silent data corruption is detected and (with redundancy) automatically repaired.
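The snapshot-then-upgrade workflow reduces to two ZFS commands. A minimal manual sketch — dataset and snapshot names are illustrative, and kldload's own tooling automates this:

```shell
# Take a zero-cost, atomic snapshot of the root dataset before upgrading.
zfs snapshot rpool/ROOT/default@pre-upgrade

# ...run the upgrade. If it breaks something, roll the dataset back
# to the snapshot and reboot into the pre-upgrade state:
zfs rollback rpool/ROOT/default@pre-upgrade
```

Because ZFS snapshots are copy-on-write, the snapshot itself consumes no space until the upgrade starts changing blocks.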
Does ZFS use a lot of RAM?
The ZFS ARC (Adaptive Replacement Cache) is dynamic — it uses free RAM and shrinks under memory pressure. On a 4 GB machine it will use ~1 GB. On a 64 GB machine it might use 20 GB. This is a feature, not a bug: unused RAM is wasted RAM.
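If you do want to cap the ARC — say, on a KVM host where guests need the RAM — ZFS exposes the zfs_arc_max module parameter. A sketch using the standard OpenZFS mechanism:

```shell
# Cap the ARC at 4 GiB (value in bytes), applied at module load time:
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf

# Or apply the same cap at runtime, no reboot needed:
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
```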
How do boot environments work in practice?
Each boot environment is a ZFS snapshot or clone of rpool/ROOT/default. ZFSBootMenu presents them at boot as a menu. You pick the one you want and the system boots into it — full OS state, exactly as it was when the snapshot was taken.
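Under the hood, creating a boot environment is roughly a snapshot plus a clone. A hand-rolled sketch — kldload's boot environment manager wraps this, and the names here are illustrative:

```shell
# Snapshot the current root, then clone the snapshot into a sibling dataset.
zfs snapshot rpool/ROOT/default@2024-06-01
zfs clone rpool/ROOT/default@2024-06-01 rpool/ROOT/pre-kernel-update

# ZFSBootMenu scans datasets under rpool/ROOT and lists both as boot options.
```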
Installation
How long does a full install take?
On modern hardware with NVMe: 60–90 seconds for CentOS (DNF from local darksite), 2–3 minutes for Debian (debootstrap from embedded APT mirror). On spinning disks, add a minute or two. The ZFS pool creation and bootloader install are nearly instant.
One ISO really installs both CentOS and Debian?
Yes. The live ISO boots a CentOS environment with a web UI. You pick your target distro from the card selector. Two completely separate bootstrap paths run underneath:
CentOS/RHEL/Rocky: dnf --installroot from the RPM darksite at /root/darksite/rpm/
Debian: debootstrap from the APT darksite served on localhost:3142
Both darksites are baked into the ISO. Both paths share the same ZFS storage setup, ZFSBootMenu bootloader, and user/network configuration. Two package managers, one installer.
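The two bootstrap paths reduce to roughly these commands. Flags are simplified — the real installer layers on repo configuration, GPG handling, and profile package sets:

```shell
# CentOS/RHEL/Rocky path: populate the new root from the local RPM darksite.
dnf --installroot=/mnt --releasever=9 \
    --repofrompath=darksite,/root/darksite/rpm/ \
    install @core

# Debian path: debootstrap against the APT darksite on localhost:3142.
debootstrap trixie /mnt http://localhost:3142/debian
```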
What about RHEL?
RHEL installs require a Red Hat subscription. When you select RHEL in the installer, you enter your activation key and org ID. The installer uses subscription-manager to register and pull packages from the Red Hat CDN. This is the only install path that requires internet access.
Can I install non-interactively?
Yes. Pass an answers file: kldload-install-target --config /path/to/answers.env. The answers file is a shell env file with variables like KLDLOAD_DISK, KLDLOAD_HOSTNAME, KLDLOAD_DISTRO, KLDLOAD_PROFILE, etc.
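A hypothetical answers.env — the variable names come from the installer, but the values here are purely illustrative:

```shell
# Write a sample answers file, then source it the way the installer would.
cat > /tmp/answers.env <<'EOF'
KLDLOAD_DISK="/dev/nvme0n1"
KLDLOAD_HOSTNAME="node01"
KLDLOAD_DISTRO="debian"
KLDLOAD_PROFILE="server"
EOF

# The installer sources the file as a plain shell env file:
. /tmp/answers.env
echo "Installing ${KLDLOAD_DISTRO}/${KLDLOAD_PROFILE} on ${KLDLOAD_DISK} as ${KLDLOAD_HOSTNAME}"
```

Then kldload-install-target --config /tmp/answers.env runs the whole install with no prompts.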
Air-Gap & Offline
What exactly is the "darksite"?
A complete local package repository baked into the ISO. kldload ships two:
RPM darksite (~900 packages) at /root/darksite/rpm/ — for CentOS/RHEL installs
APT darksite (~2,700 packages) at /root/darksite/debian/apt/ — for Debian installs, served on localhost:3142
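For the Debian path, the installed system's APT just gets pointed at the local darksite during bootstrap. A sketch of what such a generated sources entry could look like — the exact path, suite, and options are assumptions, not the installer's literal output:

```shell
# Hypothetical sources entry aimed at the darksite on localhost:3142:
echo "deb [trusted=yes] http://localhost:3142/debian trixie main" \
    > /mnt/etc/apt/sources.list.d/darksite.list
```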
Air-gapped deployment for a secured facility?
Yes — air-gap is a first-class scenario, not an afterthought. Walk into a data center with no internet, plug in the USB key, pick your distro and profile, walk out with a running ZFS system. The ISO is the entire deployment payload.
vs. Everything Else
kldload vs. Packer
Packer builds one image per platform per template. It needs a running hypervisor or cloud account. kldload builds one ISO that installs any distro to any target — bare metal, KVM, Proxmox, cloud — from a single USB stick with zero internet.
Where kldload wins: One artifact, many targets. Air-gap native. No infrastructure to build. Bare metal first. ZFS on root by default. Interactive or unattended. 100% auditable bash. kexport outputs qcow2, raw, VHD, VMDK, and OVA from any running system. AWS AMI import via aws ec2 import-image from the raw/VHD export.
Where Packer wins: Native cloud image output (no import step). Mature plugin ecosystem. Terraform/Vault integration. Multi-provisioner support (Ansible, Chef, Puppet). Parallel builds. Windows support.
Feature comparison
| Capability | kldload | Packer | Terraform |
|---|---|---|---|
| Multi-distro from one artifact | ✓ | ✗ | — |
| Air-gap / offline install | ✓ | ✗ | ✗ |
| Bare metal from USB | ✓ | ✗ | ✗ |
| ZFS on root + boot environments | ✓ | ✗ | ✗ |
| Interactive web UI installer | ✓ | ✗ | ✗ |
| Unattended / answers file | ✓ | ✓ | ✓ |
| Export to qcow2 / raw / VHD / VMDK / OVA | ✓ kexport | ✓ | — |
| Cloud AMI / GCP image import | ✓ via export | ✓ native | ✓ |
| No infrastructure to build | ✓ | ✗ | ✗ |
| 100% auditable (no compiled bins) | ✓ | ✗ | ✗ |
| Terraform / Vault integration | ✗ | ✓ | ✓ |
| Multi-cloud provisioning | ✗ | ✓ | ✓ |
| Windows support | ✗ | ✓ | ✓ |
| Parallel multi-platform builds | ✗ | ✓ | — |
kldload vs. Proxmox VE
Proxmox is a hypervisor appliance. Its OS layer is not something you are meant to modify or replicate as a general-purpose server. kldload is a general-purpose installer that can set up a KVM hypervisor, a desktop workstation, a headless server, or anything else — on any distro.
kldload vs. NixOS
NixOS is declarative and reproducible at the cost of learning an entirely new configuration language. kldload is reproducible through a different mechanism: a fixed ISO, deterministic ZFS layout, and idempotent firstboot scripts. Every standard Linux skill transfers directly. No new language required.
kldload vs. Ubuntu Server
Ubuntu Server ships with ext4 by default, no boot environments, no snapshot policy, and no offline installer. Getting from a fresh Ubuntu install to what kldload gives you out of the box takes hours of manual configuration. Also: no snap.
Use Cases
Daily driver desktop?
Yes. The desktop profile installs GNOME on either CentOS or Debian. It is a standard Linux desktop with ZFS on root. Boot environments mean you can upgrade packages fearlessly — a bad update is a 30-second rollback at next boot.
Edge deployments — remote sites, kiosks?
The air-gap installer makes kldload well-suited to remote deployments. Ship a USB key, boot it, done. No internet, no PXE server, no provisioning infrastructure required at the remote site.
Miscellaneous
Open source?
Yes. BSD-3-Clause. The build system, installer, snapshot tools, boot environment manager, firstboot scripts, and Web UI are all open source. The ISO is built entirely from upstream distro packages and open-source components.
What architectures?
AMD64 (x86_64). The installer code is architecture-agnostic — ARM64 support is planned for a future release.
Can I customize the ISO for my organisation?
Yes. Add package sets to build/darksite/config/package-sets/, modify the installer libraries, add your own firstboot scripts, and rebuild. ./deploy.sh full handles the full pipeline. Fork the repo, read the scripts, modify them. That's the point.
Known Issues — RC-1 Beta
Tested & Working
| Target | Desktop | Server | Core |
|---|---|---|---|
| CentOS Stream 9 | ✓ | ✓ | ✓ |
| Debian 13 (Trixie) | ✓ | ✓ | ✓ |
| Rocky Linux 9 | ✓ | ✓ | ✓ |
| RHEL 9 | ✓ | ✓ | ✓ |
| RHEL 10 | ✗ | ✗ | ✗ |
| CentOS 10 / Rocky 10 | ? | ? | ? |
Secure Boot
KVM VMs fail to boot with Secure Boot enabled. The ZFS module on the live ISO is not signed with an enrolled MOK key. Workaround: Disable Secure Boot in VM firmware settings. Bare metal with Secure Boot disabled works fine.
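To confirm Secure Boot is the culprit from the live environment, mokutil is the usual tool (assuming it is present on the ISO):

```shell
# Reports "SecureBoot enabled" or "SecureBoot disabled" on EFI systems.
mokutil --sb-state

# If the unsigned ZFS module was rejected, the kernel log usually says why:
dmesg | grep -i -E 'zfs|lockdown|verification'
```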
RHEL 10
The version selector offers RHEL 10 but installs fail. The ISO only ships redhat-release-9.7 and some Red Hat Developer subscriptions don't serve RHEL 10 content. Use RHEL 9.
RHEL install speed
RHEL installs are slower than CentOS or Debian because packages come from the Red Hat CDN over the internet. CentOS and Debian install entirely from the offline darksite, which is why they finish in roughly 90 seconds and 2–3 minutes respectively.
ZFS encryption
ZFS encryption (AES-256-GCM) is not fully tested. The UI toggle and backend code exist, but passphrase-at-boot, key management, and Clevis/TPM sealing have not been validated across all distros.
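For reference, the underlying OpenZFS feature the toggle drives looks like this — a manual sketch with illustrative names, and exactly the path that is not yet fully validated:

```shell
# Create a dataset encrypted with AES-256-GCM, unlocked by a
# passphrase prompt at mount (i.e., boot) time:
zfs create -o encryption=aes-256-gcm \
           -o keyformat=passphrase \
           -o keylocation=prompt \
           rpool/ROOT/default
```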
Image export (kexport)
kexport uses qemu-img convert for all five formats (qcow2, raw, VHD, VMDK, OVA) but exported images have not been fully validated booting on all target hypervisors.
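The conversions themselves boil down to qemu-img invocations; equivalent manual commands (file names illustrative):

```shell
# Convert a raw disk image to qcow2 and VMDK:
qemu-img convert -f raw -O qcow2 disk.raw disk.qcow2
qemu-img convert -f raw -O vmdk  disk.raw disk.vmdk

# VHD output uses qemu-img's "vpc" format name:
qemu-img convert -f raw -O vpc   disk.raw disk.vhd
```

An OVA is then just a tar archive bundling an OVF descriptor with the VMDK.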
Pool Designer
The Pool Designer is experimental. It visualizes topologies and generates zpool create commands but does not yet drive the actual install. Use the Core profile’s manual storage mode for custom layouts.
Cross-distro verification
Full verification of all OS + profile + version combinations is ongoing. The compatibility matrix above reflects confirmed working installs. Untested combinations may have package differences or repo issues. Report issues on GitHub.