ZFS Wiki
Pool Design & VDEV Layout — the decision you can't undo.
The layout of your pool determines performance, redundancy, and scalability.
Once a ZFS pool is created, you cannot change a vdev's type or parity level
without destroying and rebuilding the entire pool. This is the most important decision you'll make.
Get it right the first time.
VDEV types
Mirror
Best for high IOPS. Two (or more) disks with identical data. Every mirror vdev can serve reads independently. Use for: VMs, databases, high-traffic workloads. Expandable by adding more mirror pairs.
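A minimal sketch of a mirrored pool (the pool name `tank` and device IDs are placeholders):

```shell
# Two mirror pairs; each pair serves reads independently.
# Device IDs are placeholders; prefer /dev/disk/by-id over sdX names.
zpool create tank \
  mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# Expand later by adding another pair:
zpool add tank mirror /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6
```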
RAIDZ1
Single-parity striping. Survives one disk failure. Good for: bulk storage, archives, media. Terrible for random writes. Disks cannot be added to an existing RAIDZ vdev before OpenZFS 2.3; newer releases support single-disk RAIDZ expansion, though previously written data keeps its old data-to-parity ratio.
RAIDZ2
Double-parity. Survives two disk failures. Recommended for large arrays where resilver times are long. Best balance of space efficiency and safety for bulk storage.
RAIDZ3
Triple-parity. Survives three disk failures. For very large arrays (20+ disks) where resilver can take days and the risk of a second failure during resilver is real.
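As a sketch, RAIDZ pool creation differs only in the keyword and disk count (pool name `bulk` and device IDs are placeholders):

```shell
# 6-disk RAIDZ2: each stripe holds 4 disks of data + 2 of parity.
zpool create bulk raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
  /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6

# raidz1 and raidz3 use the same syntax with one / three parity disks.
```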
Special VDEV
SSD-based vdev that stores metadata and small files. Dramatically accelerates database workloads, container storage, and anything metadata-heavy. Must be mirrored — losing this vdev loses the pool.
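A sketch of adding one (pool name and NVMe device paths are placeholders; `special_small_blocks` is optional and routes small data blocks to the SSDs as well):

```shell
# Special vdevs hold pool metadata; they MUST be redundant, hence the mirror.
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

# Optionally send data blocks <= 64K to the special vdev too:
zfs set special_small_blocks=64K tank
```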
SLOG
Separate ZFS Intent Log. Accelerates synchronous writes (databases, NFS, VM storage). Should be an enterprise SSD (ideally NVMe) with power-loss protection. Not a general write cache — asynchronous writes never touch it.
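A sketch of attaching one (pool name and devices are placeholders):

```shell
# Add a mirrored SLOG; mirroring avoids losing in-flight sync writes
# if one log device dies at the wrong moment.
zpool add tank log mirror /dev/nvme2n1 /dev/nvme3n1
```

Only workloads that issue synchronous writes benefit; everything else bypasses the log device entirely.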
L2ARC
SSD-based read cache that extends ARC beyond RAM. Useful when RAM is limited but you need fast reads. Does NOT improve writes.
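A sketch of adding one (pool name and device are placeholders):

```shell
# Cache vdevs need no redundancy: losing one only costs cached reads.
zpool add tank cache /dev/nvme4n1
```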
The RAIDZ random write penalty
Why RAIDZ is terrible for databases and VMs
RAIDZ excels at sequential workloads — large blocks written contiguously. Media streaming, archives, backups.
But for random writes (databases, VMs, email servers), RAIDZ hits the read-modify-write penalty:
if a full stripe isn't written, ZFS must read the old data and parity blocks, compute new parity, and write back.
This multiplies the physical I/O behind each logical write and kills latency.
Example: A PostgreSQL server on RAIDZ2 performing frequent 8KB updates.
Each write touches multiple disks inefficiently, causing IOPS bottlenecks. The same workload on mirrors
scales linearly — each mirror pair serves requests independently.
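If the database must live on RAIDZ anyway, matching the dataset's recordsize to the database page size softens (but does not remove) the penalty. A sketch, assuming a pool named `tank` and PostgreSQL's 8K pages:

```shell
# Default recordsize is 128K; an 8K update then rewrites part of a 128K block.
# An 8K recordsize lets each page update map to exactly one block.
zfs create -o recordsize=8K tank/pgdata
```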
Rule of thumb: RAIDZ for throughput. Mirrors for IOPS.
Common pitfalls
Choosing RAIDZ for VMs or databases
This is the #1 mistake. RAIDZ has terrible random write performance. Use mirrors for anything that needs IOPS.
Not using special vdevs for metadata
Metadata-heavy workloads (containers, small files, databases) crawl without an SSD special vdev. One mirrored SSD pair changes everything.
Adding single disks instead of vdevs
Historically you could not add a single disk to an existing RAIDZ vdev — you had to add an entirely new vdev, ideally of the same shape. OpenZFS 2.3 introduced RAIDZ expansion (one disk at a time), but existing data keeps its old data-to-parity ratio until rewritten. Plan for expansion from day one.
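A sketch of both expansion paths (pool, vdev label, and device IDs are placeholders; the `zpool attach` form requires OpenZFS 2.3+ RAIDZ expansion):

```shell
# Classic path: grow the pool by adding a whole new vdev of the same shape.
zpool add bulk raidz2 \
  /dev/disk/by-id/ata-NEW1 /dev/disk/by-id/ata-NEW2 \
  /dev/disk/by-id/ata-NEW3 /dev/disk/by-id/ata-NEW4 \
  /dev/disk/by-id/ata-NEW5 /dev/disk/by-id/ata-NEW6

# OpenZFS 2.3+ path: widen an existing raidz vdev by a single disk.
zpool attach bulk raidz2-0 /dev/disk/by-id/ata-NEW7
```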
Mixing vdev sizes
ZFS distributes writes across vdevs in proportion to each vdev's free space. Mismatched vdev sizes create unbalanced performance. Keep all vdevs the same size and type.
Best practices
VMs & databases
Mirrored vdevs. 6 disks = 3 mirror pairs. Linear IOPS scaling. Easy expansion by adding more pairs.
Bulk storage
RAIDZ2 or RAIDZ3. Optimize for capacity and sequential throughput. Accept the random write penalty.
Mixed workloads
Mirrors for data + special vdev (mirrored SSDs) for metadata. Best of both worlds.
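A sketch of such a pool in one command (pool name and device IDs are placeholders):

```shell
# Data on mirror pairs for IOPS, metadata on mirrored SSDs.
zpool create tank \
  mirror /dev/disk/by-id/ata-D1 /dev/disk/by-id/ata-D2 \
  mirror /dev/disk/by-id/ata-D3 /dev/disk/by-id/ata-D4 \
  special mirror /dev/disk/by-id/nvme-S1 /dev/disk/by-id/nvme-S2
```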
Expansion planning
Mirrors expand by adding pairs. RAIDZ expands by adding full vdevs. Plan your growth path before creating the pool.