Replication Lag

Understanding Replication Lag

Replication lag is the delay between a write being committed on the leader and that write becoming visible on a follower. In an ideal system this delay would be zero, but in practice it ranges from milliseconds to minutes depending on system conditions.

Why Lag Occurs

  1. Network delay: Data must travel over the network from leader to follower. Even in the same datacenter, this adds milliseconds. Across datacenters, it can be tens or hundreds of milliseconds.
  2. Follower under load: If a follower is processing many read queries, it may fall behind in applying replication events. CPU saturation on the follower is a common culprit.
  3. Large transactions: A single transaction that modifies millions of rows generates a massive replication event. The follower must apply the entire transaction atomically, which takes time.
  4. WAL shipping delays: In WAL-based replication, the follower must process WAL segments sequentially. If compaction, vacuum, or checkpointing is running, WAL replay slows down.
  5. Schema changes (DDL): An ALTER TABLE on a large table can block replication for minutes while the follower reconstructs the table.
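The second cause above (a follower that cannot keep up with the leader's write rate) can be sketched as a simple backlog model. The rates here are illustrative, not measurements:

```python
# Minimal sketch: lag grows whenever the leader's write rate exceeds
# the follower's apply rate. Rates are hypothetical events/second.

def simulate_lag(write_rate, apply_rate, seconds):
    """Return the follower's pending-event backlog after each second."""
    backlog = 0
    history = []
    for _ in range(seconds):
        backlog += write_rate                # events produced by the leader
        backlog -= min(backlog, apply_rate)  # events the follower applies
        history.append(backlog)
    return history

# An overloaded follower (applies 800/s against 1000 writes/s) falls
# further behind every second; a healthy one stays caught up.
print(simulate_lag(1000, 800, 5))   # [200, 400, 600, 800, 1000]
print(simulate_lag(1000, 1200, 5))  # [0, 0, 0, 0, 0]
```

The key property: lag is unbounded whenever the sustained write rate exceeds the apply rate, which is why a temporarily CPU-saturated follower can accumulate minutes of lag in seconds.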

Effects of Lag

  • Stale reads: A user writes data, then reads from a follower that hasn't caught up yet, and doesn't see their own write.
  • Monotonic read violations: A user makes two successive reads routed to different followers. The second follower is further behind, so the user sees data "go backward in time."
  • Causal ordering violations: User A posts a message, User B replies. If the reply is replicated before the original message, readers on that follower see the reply without context.
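The monotonic-read violation above can be demonstrated with two followers holding different prefixes of the leader's log. All names and values here are illustrative:

```python
# Sketch of a monotonic-read violation: two followers hold different
# prefixes of the leader's replication log, and a user's successive
# reads happen to be routed to different followers.

leader_log = ["post:1", "post:2", "post:3"]
follower_a = leader_log[:3]   # fully caught up
follower_b = leader_log[:1]   # two events behind

def latest(replica):
    """Return the most recent event visible on a replica."""
    return replica[-1] if replica else None

first_read = latest(follower_a)   # "post:3"
second_read = latest(follower_b)  # "post:1" -- data went "backward"
print(first_read, second_read)
```

Monotonic-read guarantees are usually enforced by pinning each user to one replica (e.g. hashing the user ID) so successive reads never move to a replica that is further behind.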

Monitoring Lag

PostgreSQL exposes replication lag via the pg_stat_replication view. Key columns include:

  • sent_lsn vs replay_lsn: the byte difference between these positions tells you how far behind the follower is.
  • replay_lag: an interval representing the time lag.

MySQL reports the equivalent as Seconds_Behind_Master in SHOW SLAVE STATUS (Seconds_Behind_Source in SHOW REPLICA STATUS on recent versions).
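The byte lag between two LSNs can be computed by hand: a PostgreSQL LSN such as '0/3000060' is a 64-bit WAL position written as two hex halves. This sketch mirrors what the server-side pg_wal_lsn_diff() function does:

```python
# Compute byte lag from two PostgreSQL LSN strings, as reported by
# pg_stat_replication. An LSN 'X/Y' encodes (X << 32) | Y.

def lsn_to_int(lsn: str) -> int:
    """Parse a textual LSN like '0/3000060' into a 64-bit integer."""
    high, low = lsn.split("/")
    return (int(high, 16) << 32) | int(low, 16)

def lag_bytes(sent_lsn: str, replay_lsn: str) -> int:
    """Bytes of WAL the follower has received but not yet replayed."""
    return lsn_to_int(sent_lsn) - lsn_to_int(replay_lsn)

print(lag_bytes("0/3000060", "0/3000000"))  # 96 bytes behind
```

Byte lag and time lag answer different questions: a large byte lag with small time lag indicates a bursty but healthy follower, while a growing time lag indicates the follower cannot keep up.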

Mitigations

  • Read from leader for recent writes: If the user just modified their profile, route subsequent reads to the leader for a few seconds.
  • Causal consistency protocols: Track logical timestamps and ensure the follower has caught up past the required timestamp before serving the read.
  • Parallel replication: Apply replication events across multiple worker threads on the follower, reducing lag caused by sequential processing.
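The first mitigation can be sketched as a routing function with a sticky window after each write. The window length and names are illustrative:

```python
# Sketch of read-your-writes via sticky routing: after a user writes,
# route that user's reads to the leader for a short window. The 5s
# window is a hypothetical tuning choice, not a recommendation.

import time

STICKY_WINDOW = 5.0  # seconds to keep reading from the leader
last_write_at = {}   # user_id -> timestamp of their latest write

def record_write(user_id, now=None):
    last_write_at[user_id] = now if now is not None else time.time()

def choose_replica(user_id, now=None):
    now = now if now is not None else time.time()
    if now - last_write_at.get(user_id, -float("inf")) < STICKY_WINDOW:
        return "leader"    # recent writer: avoid stale followers
    return "follower"      # everyone else: spread read load

record_write("alice", now=100.0)
print(choose_replica("alice", now=102.0))  # "leader" (within window)
print(choose_replica("alice", now=110.0))  # "follower"
print(choose_replica("bob", now=102.0))    # "follower" (never wrote)
```

The trade-off: the window must exceed worst-case lag to actually guarantee read-your-writes, but every extra second of stickiness shifts read load back onto the leader.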

Real-Life: Social Media Timeline Lag

Consider a social media platform with leader–follower replication:

  • User posts a photo: The write goes to the leader. The leader commits it and starts replicating to followers.
  • User immediately refreshes their profile: The read is routed to a nearby follower. If replication lag is 500ms and the user refreshes within that window, the photo does not appear. The user thinks their upload failed and tries again.
  • Friend sees a comment before the post: User A writes a post, User B replies. If the follower serving User C has received the reply but not the original post, User C sees a reply to a non-existent post.

How companies handle this:

  • Facebook/Meta: Uses read-after-write tokens. When a write completes, the response includes a token with the leader's LSN. Subsequent reads carry this token, and the serving follower waits until it has caught up to that LSN before responding.
  • Amazon DynamoDB: Returns a LastEvaluatedKey that includes version information, enabling consistent pagination even across replicas.
  • GitHub: For repository pushes, the UI routes the user to the primary datacenter for a brief window after pushing, ensuring they see their own commits.
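The read-after-write token pattern described above can be sketched as follows. The classes and values are illustrative, not any company's real API:

```python
# Sketch of an LSN-token flow: the write response carries the leader's
# commit position, and a follower serves the read only once its
# replayed position has caught up past that token.

class Follower:
    def __init__(self):
        self.replayed_lsn = 0
        self.data = {}

    def apply(self, lsn, key, value):
        """Apply one replication event and advance the replay position."""
        self.data[key] = value
        self.replayed_lsn = lsn

    def read(self, key, min_lsn):
        """Serve the read only if caught up to the caller's token."""
        if self.replayed_lsn < min_lsn:
            return None  # not caught up: caller retries or waits
        return self.data.get(key)

f = Follower()
token = 42                     # commit LSN returned by the leader's write
print(f.read("photo", token))  # None -- follower is still behind
f.apply(42, "photo", "cat.jpg")
print(f.read("photo", token))  # cat.jpg -- safe to serve now
```

In a real deployment the follower typically blocks (with a timeout) rather than returning immediately, and the token rides along in a cookie or request header.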

Replication Lag Timeline

At t0 the leader commits a write. Follower F1 applies it with 50ms of lag; follower F2, under load, applies it with 2s of lag. A read routed to F2 within that 2s window is stale (the write is not yet visible on F2), while the same read from F1 succeeds.