
WireGuard Networking — Mesh, Peer-to-Peer & Multi-Site

This guide goes beyond point-to-point tunnels. It covers every WireGuard topology you will actually use in production — from two-node direct connections to full mesh fleets with ZFS replication running through the tunnels. Every config is complete and copy-pasteable. No fragments.

Why WireGuard? It is in the Linux kernel. It is fast — near line-rate on modern hardware. The config is a single file. The codebase is ~4,000 lines (versus ~100,000 for OpenVPN). It does one thing and does it well: encrypted point-to-point tunnels. Everything else — mesh, hub-spoke, multi-site — is just how you arrange those tunnels.


Peer-to-peer — two nodes, direct connection

The simplest topology

Two machines, one tunnel between them. Each machine has one peer. This is the building block for everything else on this page. Master this and the rest is just repeating the pattern.

analogy: two tin cans connected by a string. Simple, direct, private.

Generate keys on both nodes

# Run this on EACH node
wg genkey | tee /etc/wireguard/private.key | wg pubkey > /etc/wireguard/public.key
chmod 600 /etc/wireguard/private.key
cat /etc/wireguard/public.key   # you will need this for the other node
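A WireGuard key is 32 random bytes, base64-encoded to exactly 44 characters ending in "=". A quick shape check catches truncated copy-paste before you spend time debugging handshakes. This is a hypothetical helper (not part of the wg toolset); it validates the format only, not that the key is a valid Curve25519 key:

```shell
# valid_wg_key KEY — true if KEY has the shape of a WireGuard key:
# 42 base64 chars, a 43rd char carrying only 4 bits, then '='
valid_wg_key() {
  [[ $1 =~ ^[A-Za-z0-9+/]{42}[AEIMQUYcgkosw048]=$ ]]
}

# Example: valid_wg_key "$(cat /etc/wireguard/public.key)" || echo "malformed"
```

Useful in provisioning scripts: a key that fails this check was mangled somewhere between `wg genkey` and the config file.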

Node A — /etc/wireguard/wg0.conf

[Interface]
Address    = 10.99.0.1/24
ListenPort = 51820
PrivateKey = <node-A-private-key>

[Peer]
PublicKey  = <node-B-public-key>
AllowedIPs = 10.99.0.2/32
Endpoint   = 203.0.113.20:51820
PersistentKeepalive = 25

Node B — /etc/wireguard/wg0.conf

[Interface]
Address    = 10.99.0.2/24
ListenPort = 51820
PrivateKey = <node-B-private-key>

[Peer]
PublicKey  = <node-A-public-key>
AllowedIPs = 10.99.0.1/32
Endpoint   = 198.51.100.10:51820
PersistentKeepalive = 25

Bring it up and test

# On both nodes
systemctl enable --now wg-quick@wg0

# Verify the tunnel
wg show
# Look for: "latest handshake" — if it appears, the tunnel is up

# Test connectivity
ping -c 3 10.99.0.2    # from Node A
ping -c 3 10.99.0.1    # from Node B

NAT traversal

When one side is behind NAT

If Node B is behind a home router or NAT, it cannot receive incoming connections. The fix: Node B sets PersistentKeepalive = 25 and connects outward to Node A. Node A does not need an Endpoint for Node B — WireGuard learns it from the incoming handshake. Node A must have a public IP or port forward on UDP 51820.

analogy: if you cannot ring their doorbell, you call them and ask them to come outside. The keepalive is you calling back every 25 seconds to keep the line open.
# Node A (public IP) — /etc/wireguard/wg0.conf
[Interface]
Address    = 10.99.0.1/24
ListenPort = 51820
PrivateKey = <node-A-private-key>

[Peer]
# Node B — no Endpoint needed, it will connect to us
PublicKey  = <node-B-public-key>
AllowedIPs = 10.99.0.2/32

# Node B (behind NAT) — /etc/wireguard/wg0.conf
[Interface]
Address    = 10.99.0.2/24
PrivateKey = <node-B-private-key>
# No ListenPort needed — WireGuard picks a random port at startup

[Peer]
PublicKey  = <node-A-public-key>
AllowedIPs = 10.99.0.1/32
Endpoint   = 198.51.100.10:51820
PersistentKeepalive = 25

Hub and spoke — the road warrior VPN

One server, many clients

The hub has a public IP and acts as the gateway. Clients connect to the hub and can optionally reach each other through it. This is the classic VPN pattern — remote workers connecting to the office, or laptops connecting to a home lab.

analogy: an airport hub. All flights go through the central hub. Passengers (packets) transfer there to reach other destinations.

Hub — /etc/wireguard/wg0.conf

[Interface]
Address    = 10.99.0.1/24
ListenPort = 51820
PrivateKey = <hub-private-key>

# Enable forwarding so clients can reach each other and the LAN.
# Assumes the inet filter/nat tables and chains already exist (see the
# nftables section below). Note: "nft flush ruleset" clears the ENTIRE
# firewall, not just these rules; acceptable on a dedicated VPN host only.
PostUp   = sysctl -w net.ipv4.ip_forward=1; nft add rule inet filter forward iifname "wg0" accept; nft add rule inet nat postrouting oifname "eth0" masquerade
PostDown = nft flush ruleset

[Peer]
# Client 1 — laptop
PublicKey  = <client-1-pubkey>
AllowedIPs = 10.99.0.10/32

[Peer]
# Client 2 — phone
PublicKey  = <client-2-pubkey>
AllowedIPs = 10.99.0.11/32

[Peer]
# Client 3 — remote office
PublicKey  = <client-3-pubkey>
AllowedIPs = 10.99.0.12/32

Client — /etc/wireguard/wg0.conf

[Interface]
Address    = 10.99.0.10/24
PrivateKey = <client-1-private-key>
# DNS points at the hub (assumes a resolver runs there; omit otherwise)
DNS        = 10.99.0.1

[Peer]
PublicKey  = <hub-pubkey>
Endpoint   = 198.51.100.10:51820
# Route only the tunnel subnet through WireGuard (split tunnel)
AllowedIPs = 10.99.0.0/24
# Or route ALL traffic through the hub (full tunnel):
# AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25

AllowedIPs routing explained

AllowedIPs is a routing table, not a firewall

AllowedIPs does two things: (1) it tells WireGuard which source IPs to accept from this peer, and (2) it tells the kernel which destination IPs to route through this peer. AllowedIPs = 10.99.0.0/24 means "accept packets from this peer if they come from 10.99.0.0/24, and send packets to 10.99.0.0/24 through this peer." AllowedIPs = 0.0.0.0/0 means "route everything through this peer" — that is a full tunnel.

analogy: AllowedIPs is the address on the envelope. It tells the post office which truck (peer) to put the letter on.
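The routing half of this can be modeled in a few lines of bash. The sketch below implements longest-prefix match, the same rule the kernel applies when a destination matches more than one peer's AllowedIPs (the function names are made up for illustration, and the source-IP acceptance check is not modeled):

```shell
# Convert dotted-quad IPv4 to a 32-bit integer
ip_to_int() {
  local IFS=.
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# pick_peer DEST "peer-name cidr" ... — print the peer whose AllowedIPs
# entry has the longest prefix containing DEST (like the kernel's choice)
pick_peer() {
  local dest best="" best_len=-1 entry name cidr net len mask
  dest=$(ip_to_int "$1"); shift
  for entry in "$@"; do
    name=${entry%% *}; cidr=${entry##* }
    net=${cidr%/*}; len=${cidr#*/}
    mask=$(( len == 0 ? 0 : (0xFFFFFFFF << (32 - len)) & 0xFFFFFFFF ))
    if (( (dest & mask) == ($(ip_to_int "$net") & mask) && len > best_len )); then
      best=$name; best_len=$len
    fi
  done
  echo "$best"
}

pick_peer 10.99.0.2 "node-b 10.99.0.2/32" "hub 0.0.0.0/0"   # -> node-b (/32 wins)
pick_peer 8.8.8.8   "node-b 10.99.0.2/32" "hub 0.0.0.0/0"   # -> hub (default route)
```

This is why a full-tunnel client can still keep a direct /32 route to another peer: the more specific AllowedIPs entry always wins over 0.0.0.0/0.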

Full mesh — every node connects to every other node

No single point of failure

In a full mesh, every node has a direct tunnel to every other node. There is no hub. If any node goes down, the rest still communicate directly. The cost: N nodes need N-1 peer entries each. For 5 nodes, that is 20 peer entries total. For 16 nodes, 240. This is where scripting the config generation pays off.

analogy: a group chat where everyone has everyone else's phone number. No switchboard operator needed.
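The peer-count math above generalizes: each of N nodes carries N-1 peer entries, so the fleet holds N*(N-1) in total. A throwaway helper for sizing a planned mesh:

```shell
# Total [Peer] entries across an N-node full mesh
mesh_entries() { echo $(( $1 * ($1 - 1) )); }

mesh_entries 5     # 20
mesh_entries 16    # 240
mesh_entries 50    # 2450 — past this scale, consider hub-spoke instead
```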

Example: 4-node full mesh

Node     Public IP      Tunnel IP
node-1   198.51.100.1   10.99.0.1
node-2   198.51.100.2   10.99.0.2
node-3   203.0.113.1    10.99.0.3
node-4   203.0.113.2    10.99.0.4

Node 1 — /etc/wireguard/wg0.conf

[Interface]
Address    = 10.99.0.1/24
ListenPort = 51820
PrivateKey = <node-1-private-key>

[Peer]
# node-2
PublicKey  = <node-2-pubkey>
AllowedIPs = 10.99.0.2/32
Endpoint   = 198.51.100.2:51820
PersistentKeepalive = 25

[Peer]
# node-3
PublicKey  = <node-3-pubkey>
AllowedIPs = 10.99.0.3/32
Endpoint   = 203.0.113.1:51820
PersistentKeepalive = 25

[Peer]
# node-4
PublicKey  = <node-4-pubkey>
AllowedIPs = 10.99.0.4/32
Endpoint   = 203.0.113.2:51820
PersistentKeepalive = 25

(Nodes 2, 3, and 4 follow the same pattern — list every other node as a peer.)

Script to generate N-node mesh configs

#!/bin/bash
# generate-mesh.sh — generates wg0.conf for N nodes
# Usage: ./generate-mesh.sh nodes.txt
# nodes.txt format: one line per node — "name public_ip tunnel_ip"
#
# Example nodes.txt:
#   node-1 198.51.100.1 10.99.0.1
#   node-2 198.51.100.2 10.99.0.2
#   node-3 203.0.113.1  10.99.0.3
#   node-4 203.0.113.2  10.99.0.4

NODES_FILE="$1"
OUTDIR="./mesh-configs"
mkdir -p "$OUTDIR"

# Generate keys for all nodes
declare -A PRIVKEYS PUBKEYS
while read -r name pub_ip tun_ip; do
  priv=$(wg genkey)
  pub=$(echo "$priv" | wg pubkey)
  PRIVKEYS[$name]="$priv"
  PUBKEYS[$name]="$pub"
done < "$NODES_FILE"

# Generate config for each node
while read -r name pub_ip tun_ip; do
  conf="$OUTDIR/${name}.conf"
  cat > "$conf" <<EOF
[Interface]
Address    = ${tun_ip}/24
ListenPort = 51820
PrivateKey = ${PRIVKEYS[$name]}
EOF
  chmod 600 "$conf"

  # Append a [Peer] block for every other node
  while read -r peer_name peer_pub_ip peer_tun_ip; do
    [ "$peer_name" = "$name" ] && continue
    cat >> "$conf" <<EOF

[Peer]
# ${peer_name}
PublicKey  = ${PUBKEYS[$peer_name]}
AllowedIPs = ${peer_tun_ip}/32
Endpoint   = ${peer_pub_ip}:51820
PersistentKeepalive = 25
EOF
  done < "$NODES_FILE"
done < "$NODES_FILE"

echo "Configs written to $OUTDIR (they contain private keys; handle with care)"
# Usage
chmod +x generate-mesh.sh
./generate-mesh.sh nodes.txt
# Output: mesh-configs/node-1.conf, mesh-configs/node-2.conf, etc.

# Deploy to each node
for node in node-1 node-2 node-3 node-4; do
  scp "mesh-configs/${node}.conf" "root@${node}:/etc/wireguard/wg0.conf"
  ssh "root@${node}" "systemctl enable --now wg-quick@wg0"
done

Multi-site — connecting offices with subnet routing

Site-to-site VPN

Three offices, each with its own LAN subnet. The WireGuard nodes at each site act as gateways — they route traffic between the LANs through the tunnels. Machines on each LAN do not need WireGuard installed — they just need a route pointing at their site's WireGuard gateway.

analogy: three islands connected by bridges. The bridges (tunnels) connect the islands (LANs). Islanders do not need to know about the bridges — they just follow the road signs (routes).

Network layout

Site      LAN Subnet       WG Gateway    Tunnel IP   Public IP
HQ        192.168.1.0/24   192.168.1.1   10.99.0.1   198.51.100.1
Branch A  192.168.2.0/24   192.168.2.1   10.99.0.2   198.51.100.2
Branch B  192.168.3.0/24   192.168.3.1   10.99.0.3   203.0.113.1

HQ gateway — /etc/wireguard/wg0.conf

[Interface]
Address    = 10.99.0.1/24
ListenPort = 51820
PrivateKey = <hq-private-key>

PostUp   = sysctl -w net.ipv4.ip_forward=1
PostDown = sysctl -w net.ipv4.ip_forward=0

[Peer]
# Branch A — route their entire LAN through this peer
PublicKey  = <branch-a-pubkey>
AllowedIPs = 10.99.0.2/32, 192.168.2.0/24
Endpoint   = 198.51.100.2:51820
PersistentKeepalive = 25

[Peer]
# Branch B
PublicKey  = <branch-b-pubkey>
AllowedIPs = 10.99.0.3/32, 192.168.3.0/24
Endpoint   = 203.0.113.1:51820
PersistentKeepalive = 25

Branch A gateway — /etc/wireguard/wg0.conf

[Interface]
Address    = 10.99.0.2/24
ListenPort = 51820
PrivateKey = <branch-a-private-key>

PostUp   = sysctl -w net.ipv4.ip_forward=1
PostDown = sysctl -w net.ipv4.ip_forward=0

[Peer]
# HQ — route HQ LAN + Branch B LAN through HQ
PublicKey  = <hq-pubkey>
AllowedIPs = 10.99.0.1/32, 192.168.1.0/24, 10.99.0.3/32, 192.168.3.0/24
Endpoint   = 198.51.100.1:51820
PersistentKeepalive = 25

Branch B gateway — /etc/wireguard/wg0.conf

[Interface]
Address    = 10.99.0.3/24
ListenPort = 51820
PrivateKey = <branch-b-private-key>

PostUp   = sysctl -w net.ipv4.ip_forward=1
PostDown = sysctl -w net.ipv4.ip_forward=0

[Peer]
# HQ — route HQ LAN + Branch A LAN through HQ
PublicKey  = <hq-pubkey>
AllowedIPs = 10.99.0.1/32, 192.168.1.0/24, 10.99.0.2/32, 192.168.2.0/24
Endpoint   = 198.51.100.1:51820
PersistentKeepalive = 25

LAN routing — tell other machines about the tunnels

# On each LAN, machines need a route to the remote subnets.
# Option 1: add a static route on each machine
ip route add 192.168.2.0/24 via 192.168.1.1   # on HQ LAN machines
ip route add 192.168.3.0/24 via 192.168.1.1   # on HQ LAN machines

# Option 2: set the WG gateway as the default gateway (simpler)
# Or configure the routes on your DHCP server / router
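With more than a couple of sites, generating the static-route commands from the site table beats typing them by hand. A sketch (the `SITES` format and the helper name are made up for illustration; the subnets and gateways match the layout table above):

```shell
# Site table: name:lan_subnet:wg_gateway
SITES="hq:192.168.1.0/24:192.168.1.1
branch-a:192.168.2.0/24:192.168.2.1
branch-b:192.168.3.0/24:192.168.3.1"

# routes_for_site NAME — print the route commands a LAN host at NAME needs
routes_for_site() {
  local me=$1 gw
  gw=$(echo "$SITES" | awk -F: -v me="$me" '$1 == me { print $3 }')
  echo "$SITES" | awk -F: -v me="$me" -v gw="$gw" \
    '$1 != me { printf "ip route add %s via %s\n", $2, gw }'
}

routes_for_site hq
# ip route add 192.168.2.0/24 via 192.168.1.1
# ip route add 192.168.3.0/24 via 192.168.1.1
```

Pipe the output into your config management, or paste it into the DHCP server's static-route option.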

Split tunneling

# To only route specific traffic through the tunnel (not everything),
# use precise AllowedIPs on the client side.
#
# Example: only route traffic to the remote LAN through the tunnel
AllowedIPs = 192.168.1.0/24, 10.99.0.0/24
# Internet traffic goes direct — not through the tunnel

WireGuard + ZFS replication

Encrypted offsite backup

syncoid sends ZFS snapshots over SSH. If you run SSH through a WireGuard tunnel, your replication traffic is encrypted twice — once by WireGuard, once by SSH. More importantly, your ZFS replication endpoint is not exposed to the public internet. Only machines inside the tunnel can reach it.

analogy: sending your backup tapes in an armored truck (WireGuard) through a private highway (the tunnel) to a vault (the remote ZFS pool).

Setup

# Assumptions:
# - Primary server: 10.99.0.1 (WireGuard tunnel IP)
# - Backup server:  10.99.0.2 (WireGuard tunnel IP)
# - WireGuard tunnel is already up (see peer-to-peer section above)
# - Both servers run kldloadOS with sanoid/syncoid installed

# On the PRIMARY server — configure syncoid to replicate over the tunnel
# Use the tunnel IP, not the public IP
syncoid --recursive \
  rpool/data \
  root@10.99.0.2:backup/data

# Automate with a systemd timer
cat > /etc/systemd/system/zfs-replicate.service <<'EOF'
[Unit]
Description=ZFS replication to offsite backup
After=wg-quick@wg0.service
Requires=wg-quick@wg0.service

[Service]
Type=oneshot
ExecStart=/usr/sbin/syncoid --recursive --no-privilege-elevation rpool/data root@10.99.0.2:backup/data
EOF

cat > /etc/systemd/system/zfs-replicate.timer <<'EOF'
[Unit]
Description=Run ZFS replication every hour

[Timer]
OnCalendar=hourly
Persistent=true
RandomizedDelaySec=300

[Install]
WantedBy=timers.target
EOF

systemctl daemon-reload
systemctl enable --now zfs-replicate.timer

Verify replication

# On the backup server — check that snapshots are arriving
zfs list -t snapshot -r backup/data | tail -5

# Verify data integrity
zpool scrub backup
zpool status backup

WireGuard + kldload fleet

The four WireGuard planes

kldloadOS cluster mode uses four separate WireGuard interfaces to isolate different types of traffic. Each plane has its own subnet, port, and purpose. This is not academic overengineering — it means a compromised monitoring agent cannot reach your storage network, and a noisy data transfer cannot starve your control plane.

analogy: four separate highway systems in the same city. Emergency vehicles, delivery trucks, commuter cars, and city buses each get their own roads. A traffic jam on one does not affect the others.

Interface  Subnet        Port   Purpose
wg0        10.77.0.0/16  51820  Enrollment — new nodes join the fleet here
wg1        10.78.0.0/16  51821  Management — SSH, Salt, control commands
wg2        10.79.0.0/16  51822  Backend — metrics, monitoring, Prometheus scrapes
wg3        10.80.0.0/16  51823  Storage — ZFS replication, Kubernetes overlay, iSCSI

How auto-mesh works on firstboot

# When a kldloadOS node boots in cluster mode:
# 1. It generates keys for all 4 interfaces
# 2. It connects to the cluster manager over wg0 (enrollment plane)
# 3. The cluster manager distributes peer configs for wg1, wg2, wg3
# 4. The node brings up all 4 interfaces
# 5. Full mesh is established — every node can reach every other node on all 4 planes

# You can inspect the mesh at any time:
for iface in wg0 wg1 wg2 wg3; do
  echo "=== $iface ==="
  wg show "$iface"
  echo
done

Manual 4-plane setup

# If you want the same isolation without cluster mode, create 4 configs:

for i in 0 1 2 3; do
  port=$((51820 + i))
  subnet=$((77 + i))
  wg genkey | tee "/etc/wireguard/wg${i}-private.key" | wg pubkey > "/etc/wireguard/wg${i}-public.key"
  chmod 600 "/etc/wireguard/wg${i}-private.key"

  cat > "/etc/wireguard/wg${i}.conf" <<EOF
[Interface]
# Host octet (.0.1 here) must be unique per node on each plane
Address    = 10.${subnet}.0.1/16
ListenPort = ${port}
PrivateKey = $(cat "/etc/wireguard/wg${i}-private.key")
EOF
  chmod 600 "/etc/wireguard/wg${i}.conf"
done

# Then add [Peer] sections for every remote node on each plane,
# following the same pattern as the full-mesh example above.

Firewall integration — nftables rules for WireGuard

Per-interface access control

WireGuard interfaces are normal Linux network interfaces. nftables can filter traffic on them just like eth0 or br0. The key insight: use iifname (input interface name) and oifname (output interface name) to write rules that apply only to specific WireGuard tunnels.

analogy: different doors to your building. The front door (wg0) lets in visitors. The loading dock (wg3) lets in deliveries. Each door has its own security guard with its own rulebook.

Basic nftables rules for WireGuard

# /etc/nftables.conf

table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;

    # Allow established connections
    ct state established,related accept

    # Allow loopback
    iifname "lo" accept

    # Allow WireGuard UDP port from anywhere
    udp dport 51820 accept

    # Allow SSH only from the management tunnel (wg1)
    iifname "wg1" tcp dport 22 accept

    # Allow Prometheus scrapes only from the monitoring tunnel (wg2)
    iifname "wg2" tcp dport 9090 accept
    iifname "wg2" tcp dport 9100 accept

    # Allow ZFS replication (SSH) only from the storage tunnel (wg3)
    iifname "wg3" tcp dport 22 accept

    # Allow ICMP (ping) on all WireGuard interfaces
    iifname "wg*" icmp type echo-request accept

    # Drop everything else
    log prefix "nft-drop: " drop
  }

  chain forward {
    type filter hook forward priority 0; policy drop;

    # Allow forwarding between WireGuard interfaces (for site-to-site routing)
    iifname "wg0" oifname "eth0" accept
    iifname "eth0" oifname "wg0" ct state established,related accept

    # Allow inter-site traffic
    iifname "wg0" oifname "wg0" accept
  }
}

table inet nat {
  chain postrouting {
    type nat hook postrouting priority 100;
    # Masquerade WireGuard traffic going out to the LAN
    oifname "eth0" masquerade
  }
}
# Apply the rules
nft -f /etc/nftables.conf

# Make them persistent
systemctl enable nftables

# Verify
nft list ruleset

Troubleshooting

The debugging checklist

WireGuard is silent by design. There are no connection logs, no handshake messages, no error output. If it is not working, you diagnose by checking the handshake timestamp and working backward through the possible failure points.

analogy: WireGuard is like a locked mailbox. If no mail appears, you check: is the address right? Is the key the right shape? Is the mailbox even there?

Step-by-step diagnosis

# 1. Is the interface up?
ip link show wg0
# Should say "UP" — if not:
wg-quick up wg0

# 2. Is there a handshake?
wg show wg0
# Look for "latest handshake: X seconds ago"
# If it says "none" — the tunnel is NOT working. Continue debugging.

# 3. Is the firewall blocking UDP?
# On the RECEIVING end:
nft list ruleset | grep 51820
# Make sure UDP 51820 is allowed inbound
# Quick test — temporarily open everything:
nft flush ruleset
# Then try again. If it works, your firewall is the problem.

# 4. Is the endpoint reachable?
# From the side that CANNOT handshake:
nc -zuv 198.51.100.10 51820
# Caveat: UDP probes are unreliable. "open"/"succeeded" only means no ICMP
# port-unreachable came back; a firewall that silently drops still shows open.

# 5. Are the keys correct?
# The PublicKey in Node A's [Peer] section must be the one DERIVED FROM
# Node B's private key (and vice versa). Verify without regenerating:
wg pubkey < /etc/wireguard/private.key
# Compare the output with the PublicKey the other node has configured for you

# 6. Is the kernel module loaded?
lsmod | grep wireguard
# If not:
modprobe wireguard

# 7. Check system logs
journalctl -u wg-quick@wg0 --no-pager -n 50
dmesg | grep wireguard

Handshake age

# A healthy tunnel shows a recent handshake:
wg show wg0
# peer: abc123...
#   endpoint: 198.51.100.10:51820
#   allowed ips: 10.99.0.2/32
#   latest handshake: 12 seconds ago     <-- GOOD
#   transfer: 1.24 MiB received, 3.48 MiB sent

# If "latest handshake" is missing or says "none":
# - The peer has never successfully connected
# - Check firewall, endpoint, and keys

# If "latest handshake" is more than 2-3 minutes old:
# - The peer may be offline
# - PersistentKeepalive may not be set (needed behind NAT)
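This check is easy to automate: `wg show wg0 dump` prints machine-readable output, one interface line followed by one tab-separated line per peer, where the fifth field is the last-handshake Unix time (0 if never). A sketch that flags stale peers; the function name and the 180-second threshold are arbitrary choices:

```shell
# check_handshakes [NOW] — reads `wg show <iface> dump` on stdin and reports
# each peer's handshake age. Peer lines have 8 tab-separated fields; field 5
# is the last-handshake epoch. NOW defaults to the current time.
check_handshakes() {
  local now=${1:-$(date +%s)}
  awk -v now="$now" -F'\t' 'NF == 8 {
    age = ($5 == 0) ? -1 : now - $5
    if (age < 0)        printf "%s: no handshake ever\n", substr($1, 1, 8)
    else if (age > 180) printf "%s: stale (%ds ago)\n",  substr($1, 1, 8), age
    else                printf "%s: ok (%ds ago)\n",     substr($1, 1, 8), age
  }'
}

# Live use:
#   wg show wg0 dump | check_handshakes
```

Drop it in a cron job or a Prometheus textfile collector and you have basic tunnel monitoring for free.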

MTU issues

# WireGuard adds 60 bytes of overhead (IPv4) or 80 bytes (IPv6)
# Default MTU is 1420, which works for most networks
# If you are seeing packet fragmentation or slow transfers:

# Check current MTU
ip link show wg0 | grep mtu

# Lower the MTU if needed (common behind PPPoE or other encapsulation)
# In wg0.conf:
# [Interface]
# MTU = 1380

# Or set it live:
ip link set wg0 mtu 1380

# Test with a specific packet size (payload + 28 bytes of ICMP/IP headers)
ping -c 3 -s 1392 -M do 10.99.0.2   # 1392 + 28 = 1420, the interface MTU
# If this fails but smaller sizes work, you have an MTU problem
# (note: -s 1400 would exceed a 1420 MTU and fail locally, proving nothing)
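The arithmetic from the comments above, as a helper: tunnel MTU = path MTU minus 60 bytes over an IPv4 underlay (20 IP + 8 UDP + 32 WireGuard) or 80 over IPv6 (40 + 8 + 32). The helper name is made up for illustration:

```shell
# wg_mtu PATH_MTU [4|6] — safe tunnel MTU for the given underlay family
wg_mtu() {
  local overhead=60                  # IPv4: 20 (IP) + 8 (UDP) + 32 (WG)
  [ "${2:-4}" = 6 ] && overhead=80   # IPv6: 40 + 8 + 32
  echo $(( $1 - overhead ))
}

wg_mtu 1500       # 1440 — plain Ethernet, IPv4 underlay
wg_mtu 1492       # 1432 — PPPoE
wg_mtu 1500 6     # 1420 — IPv6 underlay (WireGuard's conservative default)
```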

Performance

Near line-rate, in the kernel

WireGuard runs in the Linux kernel, not in userspace. On modern hardware it typically adds under 5% overhead at 1-10 Gbps (somewhat more at 25 Gbps; see the results table below). On a 10 Gbps link, expect 9+ Gbps through the tunnel. It uses ChaCha20-Poly1305, which is extremely fast on CPUs without AES-NI (such as many ARM chips), and competitive with AES-GCM on CPUs that do have AES-NI.

analogy: the difference between a toll booth that stops every car (OpenVPN in userspace) and an electronic toll tag that reads at highway speed (WireGuard in the kernel).

Benchmarking with iperf3

# Install iperf3 (already in kldloadOS darksite)
dnf install -y iperf3   # CentOS/RHEL
apt install -y iperf3    # Debian

# === BASELINE — no tunnel ===
# On Node B (server):
iperf3 -s

# On Node A (client):
iperf3 -c 198.51.100.20 -t 30 -P 4
# Note the bandwidth

# === THROUGH THE TUNNEL ===
# On Node B (server):
iperf3 -s -B 10.99.0.2

# On Node A (client):
iperf3 -c 10.99.0.2 -t 30 -P 4
# Compare with baseline — expect less than 5% drop on modern hardware

Typical results on kldloadOS

Link     Bare metal  WireGuard tunnel  Overhead
1 Gbps   940 Mbps    920 Mbps          ~2%
10 Gbps  9.41 Gbps   9.12 Gbps         ~3%
25 Gbps  23.7 Gbps   22.1 Gbps         ~7%
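The overhead figures are just (bare - tunnel) / bare. A one-liner to compute it for your own iperf3 runs (hypothetical helper; feed both numbers in the same unit):

```shell
# overhead_pct BARE TUNNEL — percent of throughput lost to the tunnel
overhead_pct() {
  awk -v b="$1" -v t="$2" 'BEGIN { printf "%.1f%%\n", (b - t) * 100 / b }'
}

overhead_pct 940 920      # 2.1%
overhead_pct 9410 9120    # 3.1%
```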

CPU matters more than link speed. Kernel WireGuard spreads encryption work across cores, but a single flow is still largely serviced by one core, so per-flow throughput can cap out before the link does. Benchmark with parallel streams (the -P 4 above) to see aggregate capacity; on most modern x86 server cores, even a single flow reaches multi-gigabit rates.