The conversation nobody is having.
Here's a question universities used to teach and mostly don't anymore: what is a kernel, and why does its architecture matter?
The kernel is the one piece of software that talks directly to your hardware. Every program you run, every file you save, every packet you send — it all goes through the kernel. It's the translator between your code and the metal. How you design that translator changes everything about what your system can do.
Three architectures, three philosophies
Monolithic Kernel (Linux)
Everything runs in one address space. The filesystem, the network stack, the device drivers, the scheduler — they're all inside the kernel. They can call each other directly. No overhead. Blazing fast.
The tradeoff: if any component crashes or gets compromised, it takes down the entire kernel. There is no isolation. A bug in a WiFi driver can corrupt memory used by the filesystem. A kernel exploit owns everything.
Microkernel (QNX, L4, Mach)
The kernel does almost nothing — just memory management, scheduling, and message passing. Everything else (filesystems, drivers, networking) runs as separate processes in userspace. They communicate through IPC (inter-process communication).
The tradeoff: rock-solid isolation, at a price. A crashed driver gets restarted without affecting anything else. But every operation that crosses a boundary costs an IPC call, and IPC is inherently slower than a direct function call.
Hybrid Kernel (Windows NT, macOS XNU)
A pragmatic middle ground. Critical stuff runs in kernel space for performance. Less critical stuff runs in userspace for safety. The boundary is a design choice, not a dogma.
The reality: most production systems end up here, even if they started somewhere else. macOS's XNU combines Mach (microkernel) with BSD (monolithic) components. Windows NT runs drivers in kernel space despite its microkernel heritage. Purity yields to pragmatism.
Why this matters right now
People deploy thousands of containers on a shared Linux kernel and call it "isolated." It's not. Containers use namespaces and cgroups — they're policy, not isolation. Every container shares the same monolithic kernel. A kernel exploit in one container owns them all. That's not a bug. That's the architecture.
This doesn't mean Linux is bad. It means you should understand what it is and what it isn't. Linux is phenomenal for performance, hardware support, and ecosystem. But if your threat model requires real isolation, you need hardware boundaries (VMs) or a different architecture. Knowing the difference is the whole game.
And here's the beautiful irony: eBPF — the technology that lets you safely inject code into a running Linux kernel — is essentially the monolithic kernel admitting it needs microkernel ideas. eBPF programs are verified, sandboxed, and can't crash the kernel. It's safe, modular code running inside a monolithic system. The architectures are converging. Understanding both helps you see where they're going.