
Memory & ARC — the engine that makes ZFS fast.

ARC (Adaptive Replacement Cache) is ZFS's in-memory read cache. It's one of ZFS's biggest strengths — and one of the most common sources of problems when misconfigured. Understanding how ARC works is essential to getting good performance.

How ARC works

MFU — Most Frequently Used

Stores data that is accessed repeatedly over time. Database indexes, frequently-read configs, hot files. This is the "long-term memory" of ARC.

MRU — Most Recently Used

Stores data that was just accessed. Scan reads, recently opened files. This is the "short-term memory." Data graduates to MFU if accessed again.
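On Linux, the current split between the two lists can be inspected through the kstats OpenZFS exposes. A minimal sketch, assuming the ZFS module is loaded; `size`, `mru_size`, and `mfu_size` are standard arcstats fields, though the exact set varies by OpenZFS version:

```shell
# Print total ARC size and the MRU/MFU split in GiB.
# /proc/spl/kstat/zfs/arcstats has three columns: name, type, data.
awk '$1 == "size" || $1 == "mru_size" || $1 == "mfu_size" {
    printf "%-10s %8.2f GiB\n", $1, $3 / (1024 ^ 3)
}' /proc/spl/kstat/zfs/arcstats
```

If MFU stays large relative to MRU over time, your working set is stable and ARC is doing its job.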

L2ARC — SSD extension

When RAM isn't enough, L2ARC caches blocks evicted from ARC on a fast SSD. Extends read performance beyond physical memory. Read cache only — does not improve writes, and its index itself consumes some RAM.
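Adding an L2ARC device is a one-liner; the pool name `tank` and the device path below are placeholders for your own pool and SSD:

```shell
# Attach an SSD as a cache (L2ARC) device to pool "tank" (placeholder names).
zpool add tank cache /dev/disk/by-id/nvme-example-cache

# The device now shows up under a "cache" heading:
zpool status tank
```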

SLOG — Write intent log

Not a write cache. Stores the intent of synchronous writes so the application can continue without waiting for the data to hit the main pool. Critical for databases and NFS.
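A SLOG is added the same way, using the `log` vdev type. Because losing the SLOG can lose the most recent synchronous writes, mirroring it is common practice. `tank` and the device paths are placeholders:

```shell
# Add a mirrored log (SLOG) vdev to pool "tank" (placeholder names).
zpool add tank log mirror \
    /dev/disk/by-id/nvme-slog-a /dev/disk/by-id/nvme-slog-b
```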

The Linux problem

Linux aggressively reclaims ARC memory

On FreeBSD, ARC and the kernel cooperate gracefully. On Linux, the kernel treats ARC memory as reclaimable cache and can evict it under memory pressure. This causes unpredictable performance drops — ZFS has cached your hot data, the kernel throws it away, and your next read hits disk.

Fix: Set ARC limits manually.

# Set maximum ARC size to 8GB, effective immediately
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max

# Make persistent across reboots (note: ">" overwrites /etc/modprobe.d/zfs.conf;
# append with ">>" or edit the file if it already holds other options)
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
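The magic number is just 8GB expressed in bytes. A quick sanity check, so you can derive the value for any cap:

```shell
# Convert an ARC cap in GiB to the byte value zfs_arc_max expects.
GIB=8
echo $((GIB * 1024 * 1024 * 1024))   # prints 8589934592
```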
On FreeBSD: ARC manages itself. On Linux: you manage ARC, or the kernel manages it badly.

Memory guidelines

8GB RAM
Minimum for ZFS. Set ARC max to 4GB. Basic file serving and light workloads.
16GB RAM
Comfortable for most workloads. Set ARC max to 8-10GB. VMs, databases, containers.
32GB+ RAM
Performance territory. Let ARC use 16-24GB. Large database caching, heavy file serving.
Deduplication
1-2GB of RAM per TB of deduped data. This is why dedup is a trap. 10TB of storage needs 10-20GB of RAM just for the dedup table.
Do NOT enable deduplication unless absolutely necessary. Use LZ4 compression instead. It gives you real space savings with near-zero CPU overhead. Dedup sounds great on paper and destroys performance in practice. The RAM requirement alone makes it impractical for most deployments.
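Turning on LZ4 takes one command; `compression=lz4` is a standard dataset property, and `tank` below is a placeholder for your own pool or dataset:

```shell
# Enable LZ4 compression on a dataset ("tank" is a placeholder).
zfs set compression=lz4 tank

# Compression applies to newly written data; check the achieved ratio later:
zfs get compressratio tank
```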