
Buffer Pool Manager


What is a Buffer Pool?

A buffer pool is a fixed-size region of memory that a database engine uses to cache disk pages. Instead of reading from disk on every query, the database keeps frequently accessed pages in the buffer pool, dramatically reducing I/O.

Core structure

The buffer pool is divided into frames — fixed-size slots, each capable of holding one disk page (typically 4 KB or 8 KB). A page table (implemented as a hash map) maps each page_id to the frame that currently holds it. This allows O(1) lookup to check whether a page is already in memory.
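These two structures can be sketched as follows (a minimal, hypothetical layout; the names `Frame` and `BufferPool` are illustrative, not from any particular engine):

```python
from dataclasses import dataclass, field
from typing import Optional

PAGE_SIZE = 4096  # bytes per page/frame (a common default)

@dataclass
class Frame:
    page_id: Optional[int] = None   # which disk page this frame holds, if any
    data: bytearray = field(default_factory=lambda: bytearray(PAGE_SIZE))
    pin_count: int = 0              # threads currently using this page
    dirty: bool = False             # modified since it was read from disk?

class BufferPool:
    def __init__(self, num_frames: int):
        self.frames = [Frame() for _ in range(num_frames)]
        # page_id -> frame index; the hash map gives O(1) residency checks
        self.page_table: dict[int, int] = {}
```

The page table holds frame *indices* rather than frame objects so that reassigning a frame to a new page only requires updating one mapping.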

Fetch operation

When a query needs a page:

  1. Check the page table — if the page_id maps to a frame, it is a buffer pool hit. Return a pointer to that frame.
  2. Buffer pool miss — the page is not in memory. Pick a victim frame (using a replacement policy), write the victim back to disk if its dirty flag is set, then read the requested page from disk into that frame.
  3. Update the page table — remove the old mapping, add the new one.
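The three steps above can be condensed into a single fetch routine. This is a hypothetical minimal sketch: the disk is simulated with a dict, and the victim policy is deliberately naive (real engines use LRU or clock-sweep):

```python
# Simulated disk: page_id -> page contents
disk = {pid: f"contents of page {pid}" for pid in range(100)}

class Frame:
    def __init__(self):
        self.page_id = None   # disk page currently held, if any
        self.data = None
        self.pin_count = 0
        self.dirty = False

class BufferPool:
    def __init__(self, num_frames):
        self.frames = [Frame() for _ in range(num_frames)]
        self.page_table = {}  # page_id -> frame index

    def _pick_victim(self):
        # Simplest possible policy: first unpinned frame.
        for i, f in enumerate(self.frames):
            if f.pin_count == 0:
                return i
        raise RuntimeError("all frames pinned")

    def fetch(self, page_id):
        # 1. Hit: the page table already maps page_id to a frame.
        if page_id in self.page_table:
            frame = self.frames[self.page_table[page_id]]
            frame.pin_count += 1
            return frame
        # 2. Miss: pick a victim, writing it back first if dirty.
        idx = self._pick_victim()
        victim = self.frames[idx]
        if victim.page_id is not None:
            if victim.dirty:
                disk[victim.page_id] = victim.data   # write-back
            del self.page_table[victim.page_id]      # 3. remove old mapping
        victim.page_id = page_id
        victim.data = disk[page_id]                  # read from "disk"
        victim.dirty = False
        victim.pin_count = 1
        self.page_table[page_id] = idx               # 3. add new mapping
        return victim
```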

Pin count

Each frame maintains a pin count — the number of threads currently using that page. A pinned page (pin count > 0) must never be evicted. When a thread finishes with a page, it calls unpin, decrementing the count. Only frames with pin count 0 are candidates for eviction.
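The pin/unpin protocol looks roughly like this (a hypothetical sketch; the lock reflects that the counter is shared across threads):

```python
import threading

class Frame:
    def __init__(self):
        self.pin_count = 0
        self._lock = threading.Lock()   # pin_count is touched by many threads

    def pin(self):
        with self._lock:
            self.pin_count += 1         # caller now holds a reference

    def unpin(self):
        with self._lock:
            assert self.pin_count > 0, "unpin without matching pin"
            self.pin_count -= 1         # evictable once this reaches 0

    def evictable(self):
        return self.pin_count == 0
```

Every fetch must eventually be paired with an unpin; a leaked pin permanently removes that frame from the eviction pool.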

Dirty flag

When a page is modified in memory, its frame is marked dirty. Before evicting a dirty frame, the buffer pool must write the page back to disk. Clean frames can be evicted immediately since the disk already has the latest copy.
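The eviction-time check can be sketched as a small helper (hypothetical; `disk` stands in for the storage layer):

```python
def evict(frame, disk):
    """Free a frame for reuse, preserving durability."""
    if frame["dirty"]:
        disk[frame["page_id"]] = frame["data"]  # write-back before reuse
    # A clean frame needs no I/O: disk already has the latest copy.
    frame.update(page_id=None, data=None, dirty=False)
```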

Why not use the OS page cache?

Operating systems provide their own virtual-memory page cache, but databases manage their own buffer pool because:

  • The DB knows access patterns (e.g., sequential scans vs. index lookups) and can choose smarter replacement policies.
  • The DB needs to control write ordering for crash recovery (WAL protocol).
  • The OS uses a generic LRU policy that can be badly polluted by sequential scans.
  • The DB can pin pages and manage concurrency at a finer granularity.

Operation      Time complexity    Notes
Fetch (hit)    O(1)               Hash map lookup
Fetch (miss)   O(1) amortized     Plus disk I/O
Unpin          O(1)               Decrement counter
Flush          O(1)               Write page to disk

Real-World Example: PostgreSQL Shared Buffers


PostgreSQL's shared_buffers setting controls the size of its buffer pool. A typical production configuration sets this to 25% of available RAM (e.g., 8 GB on a 32 GB machine).

When you run a query like SELECT * FROM users WHERE id = 42:

  1. The executor determines it needs page 7 of the users table.
  2. It calls the buffer manager to fetch page 7.
  3. If page 7 is already in shared buffers (a hit), PostgreSQL returns a pointer immediately — no disk I/O at all.
  4. If it is a miss, PostgreSQL picks a victim buffer using its clock-sweep algorithm, writes back the victim if dirty, reads page 7 from disk, and pins it.
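The clock-sweep victim search can be approximated as below. This is a simplified sketch: each buffer carries a usage count that is bumped on access (PostgreSQL caps it at 5), and the sweep hand decrements counts until it finds an unpinned buffer at zero. The real implementation also consults a free list and handles the all-pinned case; this sketch assumes at least one unpinned buffer exists.

```python
class Buffer:
    def __init__(self):
        self.usage_count = 0  # bumped on each access, capped (5 in PostgreSQL)
        self.pin_count = 0

def clock_sweep(buffers, hand):
    """Advance the clock hand until a victim buffer is found."""
    while True:
        b = buffers[hand]
        if b.pin_count == 0 and b.usage_count == 0:
            return hand                      # victim found
        if b.pin_count == 0:
            b.usage_count -= 1               # give it one more chance
        hand = (hand + 1) % len(buffers)
```

Frequently accessed buffers accumulate high usage counts and survive several sweeps, so the policy approximates LRU without maintaining an ordered list.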

Why this matters for performance:

  • A buffer pool hit takes ~100 nanoseconds (memory access).
  • A disk read takes ~5-10 milliseconds (HDD) or ~100 microseconds (SSD).
  • That is a 1,000x to 100,000x difference. A well-tuned buffer pool keeps the hot working set in memory, making most queries disk-free.
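The latency gap translates directly into average access time. A quick back-of-the-envelope calculation, using the illustrative SSD figures above, shows that even a 99% hit rate leaves the average around 1.1 microseconds, roughly 11x slower than pure memory, because misses dominate the sum:

```python
HIT_NS = 100        # ~100 ns: buffer pool hit (memory access)
MISS_NS = 100_000   # ~100 us: SSD read on a miss

def effective_access_ns(hit_ratio):
    """Average page-access latency for a given buffer pool hit ratio."""
    return hit_ratio * HIT_NS + (1 - hit_ratio) * MISS_NS

for r in (0.90, 0.99, 0.999):
    print(f"hit ratio {r:.1%}: {effective_access_ns(r):,.0f} ns")
```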

MySQL's InnoDB uses a similar design with its buffer pool, typically configured to 70-80% of available memory. It additionally splits its LRU list into young and old sublists to resist scan pollution.
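The idea behind the split list can be sketched as midpoint-insertion LRU (a hypothetical simplification of InnoDB's scheme; the 3/8 old-sublist fraction matches InnoDB's default, other details are elided):

```python
from collections import deque

class MidpointLRU:
    """New pages enter the 'old' sublist; only a second access promotes
    them to 'young', so a one-pass scan cannot flush the hot set."""

    def __init__(self, capacity, old_fraction=3/8):
        self.capacity = capacity
        self.old_cap = max(1, int(capacity * old_fraction))
        self.young = deque()   # hot pages, MRU at the left
        self.old = deque()     # newly read pages, evicted first

    def access(self, page_id):
        if page_id in self.young:
            self.young.remove(page_id)
            self.young.appendleft(page_id)   # refresh MRU position
        elif page_id in self.old:
            self.old.remove(page_id)
            self.young.appendleft(page_id)   # second touch: promote
        else:
            self.old.appendleft(page_id)     # first touch: old sublist
        self._rebalance()

    def _rebalance(self):
        while len(self.young) + len(self.old) > self.capacity:
            (self.old or self.young).pop()   # evict LRU, old list first
        while len(self.old) > self.old_cap:
            self.old.pop()                   # old-sublist overflow
```

A sequential scan churns only the small old sublist, while pages touched more than once stay protected in the young sublist.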

Buffer Pool Architecture

[Diagram: a 4-frame buffer pool with its page table (pg5 → frame 0, pg12 → frame 1, pg3 → frame 2, pg9 → frame 3). Fetching page 5 is a hit in frame 0: return a pointer and increment its pin count. Fetching page 20 is a miss: pick victim frame 1 (pin = 0), flush dirty page 12 to disk, read page 20 from disk into frame 1, and update the page table by removing pg12 and adding pg20.]
