
Live Workloads — XMPP cluster accepting connections on first boot.

This is the pattern that changes how you think about deployment. Not "install the OS, then SSH in and set up the application." Instead: the application is part of the image. When the machine boots for the first time, the service is already running, the database is initialized, TLS certificates are provisioned, and connections are being accepted. First boot = production ready.

We'll use ejabberd (XMPP messaging) as the example because it's real — a clustered, PostgreSQL-backed, TLS-encrypted messaging platform with WebSocket support, file uploads, and multi-user chat. This isn't a toy. This is what Slack alternatives are built on.

The architecture

What boots on first power-on

ejabberd
XMPP server — C2S (port 5222), S2S (port 5269), WebSocket, BOSH, admin console, REST API
PostgreSQL
Message archive (MAM), user database, MUC history — streaming replication ready
NGINX
TLS termination, reverse proxy for admin + WebSocket, Let's Encrypt via Cloudflare DNS
Certbot
Automatic TLS certificate provisioning and renewal via Cloudflare DNS challenge
Four services. Zero manual configuration. Boot the machine, point DNS, start chatting.
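One way to confirm that claim on a freshly booted machine is to probe the listeners directly. A minimal sketch using bash's built-in /dev/tcp (no extra tools needed on a fresh image); the ports are the ones from the table above plus 443 for NGINX:

```shell
#!/bin/bash
# first-boot-check.sh — probe the ports this image should be serving
port_open() {                       # usage: port_open HOST PORT
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

for port in 5222 5269 5280 443; do
  if port_open 127.0.0.1 "$port"; then
    echo "port ${port}: listening"
  else
    echo "port ${port}: NOT listening"
  fi
done
```

If any port reports NOT listening right after boot, the postinstall baked into the image did not complete cleanly.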

The postinstall.sh

Bake this into your kldload image

#!/bin/bash
# postinstall-xmpp.sh — complete XMPP cluster, ready on first boot
set -euo pipefail

# ── Configuration (baked at build time or from env) ──
EJAB_DOMAIN="${EJAB_DOMAIN:-chat.example.com}"
ADMIN_USER="admin"
# "|| true" keeps pipefail from tripping on tr's SIGPIPE when head exits early
ADMIN_PASS="${ADMIN_PASS:-$(tr -dc A-Za-z0-9 </dev/urandom | head -c 24 || true)}"
DB_NAME="ejabberd_db"
DB_USER="ejabberd_user"
DB_PASS="$(tr -dc A-Za-z0-9 </dev/urandom | head -c 32 || true)"
CF_EMAIL="${CF_API_EMAIL:-}"
CF_KEY="${CF_API_KEY:-}"

# ── Install packages ──
kpkg install ejabberd nginx certbot python3-certbot-dns-cloudflare \
    postgresql postgresql-contrib \
    erlang-base erlang-dev

# ── PostgreSQL ──
systemctl enable --now postgresql

sudo -u postgres psql <<SQL
CREATE USER ${DB_USER} WITH ENCRYPTED PASSWORD '${DB_PASS}';
CREATE DATABASE ${DB_NAME} OWNER ${DB_USER};
SQL

# For clustering — enable streaming replication
# (the glob resolves the versioned config dir, e.g. /etc/postgresql/15/main)
for PG_CONF in /etc/postgresql/*/main/postgresql.conf; do
  sed -i "s/^#listen_addresses.*/listen_addresses = '*'/" "$PG_CONF"
  sed -i "s/^#wal_level.*/wal_level = replica/" "$PG_CONF"
  sed -i "s/^#max_wal_senders.*/max_wal_senders = 10/" "$PG_CONF"
done
systemctl restart postgresql

# ── ejabberd configuration ──
cat > /etc/ejabberd/ejabberd.yml <<YAML
hosts:
  - "${EJAB_DOMAIN}"

# Certs are issued by certbot below; ejabberd warns (but starts) if they
# are missing on first run
certfiles:
  - "/etc/letsencrypt/live/${EJAB_DOMAIN}/fullchain.pem"
  - "/etc/letsencrypt/live/${EJAB_DOMAIN}/privkey.pem"

auth_method: sql
sql_type: pgsql
sql_server: "127.0.0.1"
sql_database: "${DB_NAME}"
sql_username: "${DB_USER}"
sql_password: "${DB_PASS}"

listen:
  - port: 5222
    module: ejabberd_c2s
    starttls: true
    max_stanza_size: 262144
  - port: 5269
    module: ejabberd_s2s_in
    tls: true
  - port: 5280
    module: ejabberd_http
    request_handlers:
      "/admin": ejabberd_web_admin
      "/websocket": ejabberd_http_ws
      "/bosh": mod_bosh
  - port: 5281
    module: ejabberd_http
    request_handlers:
      "/api": mod_http_api
  - port: 5443
    module: ejabberd_http
    tls: true
    request_handlers:
      "/upload": mod_http_upload

modules:
  mod_mam:
    db_type: sql
    default: always
  mod_muc:
    access_create: muc_create
    default_room_options:
      mam: true
      public: true
  mod_http_upload:
    put_url: "https://@HOST@:5443/upload"
  mod_push: {}
  mod_push_keepalive: {}
  mod_roster:
    versioning: true
  mod_stream_mgmt:
    resend_on_timeout: if_offline
YAML

# ── TLS certificate (Cloudflare DNS challenge) ──
if [[ -n "$CF_EMAIL" && -n "$CF_KEY" ]]; then
  mkdir -p /root/.secrets
  cat > /root/.secrets/cf.ini <<CF
dns_cloudflare_email = ${CF_EMAIL}
dns_cloudflare_api_key = ${CF_KEY}
CF
  chmod 600 /root/.secrets/cf.ini

  certbot certonly --dns-cloudflare \
    --dns-cloudflare-credentials /root/.secrets/cf.ini \
    --non-interactive --agree-tos \
    -d "${EJAB_DOMAIN}" -m "${CF_EMAIL}"
fi

# ── NGINX reverse proxy ──
cat > /etc/nginx/sites-available/ejabberd <<NGX
server {
    listen 80;
    server_name ${EJAB_DOMAIN};
    return 301 https://\$host\$request_uri;
}
server {
    listen 443 ssl;
    server_name ${EJAB_DOMAIN};
    ssl_certificate /etc/letsencrypt/live/${EJAB_DOMAIN}/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/${EJAB_DOMAIN}/privkey.pem;

    location /admin { proxy_pass http://127.0.0.1:5280/admin; }
    location /websocket {
        proxy_pass http://127.0.0.1:5280/websocket;
        proxy_http_version 1.1;
        proxy_set_header Upgrade \$http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
NGX
ln -sf /etc/nginx/sites-available/ejabberd /etc/nginx/sites-enabled/
systemctl enable --now nginx

# ── Start ejabberd and create admin ──
systemctl enable --now ejabberd
ejabberdctl started   # blocks until the node is fully up, unlike a fixed sleep
ejabberdctl register "${ADMIN_USER}" "${EJAB_DOMAIN}" "${ADMIN_PASS}"

# ── Auto-renew certs ──
# Runs daily; the deploy hook fires only when a cert was actually renewed
# (restart ejabberd because its unit has no reload action)
echo '0 2 * * * root certbot renew --quiet --deploy-hook "systemctl reload nginx && systemctl restart ejabberd"' \
    > /etc/cron.d/cert-renew

# ── Snapshot the working state ──
ksnap

echo "XMPP ready at ${EJAB_DOMAIN}"
echo "Admin: ${ADMIN_USER}@${EJAB_DOMAIN} / ${ADMIN_PASS}"
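Once DNS points at the box, a hedged smoke test from any machine with OpenSSL 1.1.1+ checks that STARTTLS on the client port actually negotiates (the domain here is a placeholder for your EJAB_DOMAIN):

```shell
# Verify STARTTLS on the client-to-server port; a successful run prints
# the negotiated TLS version and the server certificate subject.
DOMAIN="chat.example.com"
timeout 10 openssl s_client -connect "${DOMAIN}:5222" -starttls xmpp \
  -xmpphost "${DOMAIN}" -brief </dev/null || echo "5222 not reachable yet"
```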

Clustering: add more nodes

Scale from 1 to N nodes

# Node 1 is already running (the postinstall above)
# For nodes 2-N, add this to their postinstall:

# Point ejabberd at node 1's PostgreSQL instead of the local one
DB_HOST="10.100.10.101"  # node 1's IP
sed -i "s/^sql_server: .*/sql_server: \"${DB_HOST}\"/" /etc/ejabberd/ejabberd.yml

# Nodes must share the same Erlang cookie to cluster; copy it from node 1:
#   scp node-1:/var/lib/ejabberd/.erlang.cookie /var/lib/ejabberd/.erlang.cookie
# Then, after ejabberd starts, join the cluster
ejabberdctl join_cluster "ejabberd@node-1.example.com"

# That's it. ejabberd handles message routing between nodes.
# Users connected to node-2 can message users on node-1.
# MUC rooms span all nodes. MAM archive is shared via PostgreSQL.
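A hypothetical health check for the cluster, built on ejabberdctl's list_cluster command (which prints one Erlang node name per line); the expected node count is an assumption you set per deployment:

```shell
# cluster-check.sh — run on any node; warns if the cluster has shrunk
EXPECTED_NODES=2
if command -v ejabberdctl >/dev/null 2>&1; then
  COUNT="$(ejabberdctl list_cluster | wc -l)"
  if [ "$COUNT" -ge "$EXPECTED_NODES" ]; then
    echo "cluster OK (${COUNT} nodes)"
  else
    echo "cluster degraded: ${COUNT}/${EXPECTED_NODES} nodes"
  fi
fi
```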

PostgreSQL replication for the database

# On master (node 1) — already configured by postinstall
# wal_level=replica, max_wal_senders=10

# Create a replication user and allow it from the node subnet in pg_hba.conf
sudo -u postgres psql -c \
  "CREATE USER replicator WITH REPLICATION ENCRYPTED PASSWORD 'replpass';"
echo "host replication replicator 10.100.10.0/24 scram-sha-256" \
  >> /etc/postgresql/15/main/pg_hba.conf
systemctl reload postgresql

# On replica (node 2)
systemctl stop postgresql
rm -rf /var/lib/postgresql/15/main/*
sudo -u postgres env PGPASSWORD=replpass \
  pg_basebackup -h 10.100.10.101 -D /var/lib/postgresql/15/main \
  -U replicator --wal-method=stream -R
# -R writes standby.signal and primary_conninfo, so no manual setup needed
systemctl start postgresql

# Replica is now streaming from master.
# If master dies, promote replica: pg_ctl promote
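To watch replication health after the replica comes up, two stock catalog queries (run via psql as the postgres user):

```sql
-- On the master: connected standbys and how far behind they are, in bytes
SELECT client_addr, state, sync_state,
       pg_wal_lsn_diff(sent_lsn, replay_lsn) AS replay_lag_bytes
  FROM pg_stat_replication;

-- On the replica: returns true while it is still a standby
SELECT pg_is_in_recovery();
```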

Why this pattern matters

First boot = production

The application isn't installed after the OS. It's part of the image. When the machine boots, the service is already running. No SSH. No Ansible. No "now install the application." The image IS the application.

Snapshot before changes

Update ejabberd? ksnap first. If the update breaks something, kbe rollback. The entire application stack — ejabberd, PostgreSQL data, NGINX config, TLS certs — rolls back together. Because it's all on ZFS.

Clone for testing

kclone the entire server. Test a config change on the clone. If it works, apply to production. If it breaks, destroy the clone. Zero-cost copy-on-write. Zero risk to production.

Replicate for DR

zfs send the entire XMPP server to a backup machine. If the primary dies, zfs recv + boot = back online. Block-level replication. Not "restore from backup." Instant.
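A minimal sketch of that replication loop; the dataset and backup host names are examples, not the kit's actual layout, and the target needs SSH access with zfs recv rights:

```shell
# dr-sync.sh — replicate the whole server to a standby machine
POOL="rpool/ROOT/default"            # example dataset; adjust to your pool
TARGET="backup-host"                 # assumption: passwordless SSH as root
SNAP="${POOL}@dr-$(date +%Y%m%d)"

if command -v zfs >/dev/null 2>&1; then
  zfs snapshot -r "$SNAP"                            # recursive, atomic
  zfs send -R "$SNAP" | ssh "$TARGET" zfs recv -Fu "$POOL"
fi
```

For repeated runs you would switch to incremental sends (zfs send -I between the previous and current snapshot) so only changed blocks cross the wire.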

This pattern works for any workload. XMPP is the example. The pattern is universal. GitLab, Nextcloud, Matrix/Synapse, Keycloak, Grafana, Jenkins — any application that runs on Linux can be baked into a kldload image and be production-ready on first boot. The postinstall.sh is your recipe. ZFS is your safety net. Build it once. Deploy it everywhere. Roll back anything.