FIFO Caching: When Insertion Order Is the Right Policy


FIFO evicts the oldest-inserted item regardless of how recently or frequently it was accessed. That's usually a weakness, but for streaming buffers, network queues, and scenarios where items are consumed exactly once, FIFO's predictable eviction order is a feature, not a bug.


What FIFO does and doesn't track

FIFO (First In, First Out) evicts the item that has been in the cache the longest, regardless of how recently it was accessed or how many times it has been used. There is no access tracking, no frequency counter, no recency bookkeeping — just insertion order.

This makes FIFO deterministic and cheap. Given the same sequence of insertions and evictions, you always know exactly which item will be evicted next.

FIFO eviction mechanics


FIFO maintains an ordered queue of cache keys by insertion time. When capacity is exceeded, the front of the queue (oldest insertion) is evicted. Updating an existing key's value does not change its position in the eviction queue.

Prerequisites

  • queue data structures
  • hash maps

Key Points

  • Eviction target is always the earliest-inserted key, regardless of access history.
  • No per-item metadata beyond the insertion order queue. O(1) insert, O(1) evict.
  • Updating a key's value does not promote it in the eviction queue (unlike LRU).
  • Predictable eviction order enables reasoning about cache lifetime of specific items.

Implementation

A hash map for O(1) lookups combined with a deque for O(1) queue operations:

from collections import deque
import threading

class FIFOCache:
    def __init__(self, capacity: int):
        self.cache = {}           # key → value
        self.order = deque()      # insertion order of keys
        self.capacity = capacity
        self.lock = threading.Lock()

    def get(self, key: int) -> int:
        with self.lock:
            return self.cache.get(key, -1)

    def put(self, key: int, value: int) -> None:
        with self.lock:
            if key in self.cache:
                # Update value, but do NOT change eviction order
                self.cache[key] = value
            else:
                if len(self.cache) >= self.capacity:
                    oldest_key = self.order.popleft()
                    del self.cache[oldest_key]
                self.cache[key] = value
                self.order.append(key)

Note that put on an existing key updates the value but leaves the key's position in the eviction queue unchanged. This is correct FIFO semantics: insertion time is what matters, not when the value was last written.
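A quick check of this behavior. The class below is a compact, single-threaded version of the FIFOCache above (lock omitted for brevity), so the snippet is self-contained:

```python
from collections import deque

# Compact single-threaded version of the FIFOCache shown above (no lock)
class FIFOCache:
    def __init__(self, capacity: int):
        self.cache = {}           # key → value
        self.order = deque()      # insertion order of keys
        self.capacity = capacity

    def get(self, key):
        return self.cache.get(key, -1)

    def put(self, key, value):
        if key in self.cache:
            self.cache[key] = value          # update only; queue untouched
            return
        if len(self.cache) >= self.capacity:
            del self.cache[self.order.popleft()]
        self.cache[key] = value
        self.order.append(key)

cache = FIFOCache(capacity=2)
cache.put(1, 100)   # insertion order: [1]
cache.put(2, 200)   # insertion order: [1, 2]
cache.put(1, 111)   # value updated; order still [1, 2]
cache.put(3, 300)   # full: evicts key 1 (oldest insertion), not key 2
assert cache.get(1) == -1     # key 1 evicted despite the recent write
assert cache.get(2) == 200
assert cache.get(3) == 300
```

An LRU cache would have promoted key 1 on the second put and evicted key 2 instead; FIFO keeps the original insertion order.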

When FIFO is the right choice

FIFO's predictability is useful when items have a natural expiration tied to insertion order rather than usage pattern.

Streaming data buffers: a sliding window of sensor readings, log events, or time-series data. You want the most recent N data points. The oldest reading is the one to drop — not the least-recently-queried one. FIFO's behavior aligns exactly with the semantic requirement.

# Temperature sensor reading buffer — keep last 1000 readings
sensor_buffer = FIFOCache(capacity=1000)
for reading in sensor_stream:
    sensor_buffer.put(reading.timestamp, reading.value)

Network packet queues: router buffers under congestion drop packets in arrival order. FIFO is fair and avoids the complexity of priority schemes when no QoS differentiation is needed.

Request deduplication with short TTLs: if you're deduplicating requests within a 60-second window by request ID, FIFO naturally evicts the oldest IDs first. As long as your cache capacity covers the expected request volume in 60 seconds, the eviction order aligns with the deduplication window.
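A minimal sketch of this pattern. The `FIFOSet` class and the request IDs here are illustrative (not from the article); it stores only membership, not values, since deduplication needs no payload:

```python
from collections import deque

class FIFOSet:
    """Capacity-bounded set with FIFO eviction, for request deduplication.

    Size the capacity to cover the expected request volume within one
    deduplication window (e.g. 60 seconds of traffic).
    """
    def __init__(self, capacity: int):
        self.items = set()
        self.order = deque()
        self.capacity = capacity

    def add(self, item) -> bool:
        """Return True if item is new (process it), False if a duplicate."""
        if item in self.items:
            return False
        if len(self.items) >= self.capacity:
            self.items.discard(self.order.popleft())
        self.items.add(item)
        self.order.append(item)
        return True

seen = FIFOSet(capacity=3)
results = [seen.add(rid) for rid in ["a", "b", "a", "c", "d", "a"]]
# "a" and "b" are new; the second "a" is a duplicate; "c" and "d" are new.
# Inserting "d" evicted "a" (oldest), so the final "a" counts as new again.
assert results == [True, True, False, True, True, True]
```

The final `True` illustrates the caveat in the text: once capacity is smaller than the window's request volume, IDs age out early and duplicates slip through.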

Replay buffers in testing: storing the last N API requests or database queries for replay-based testing. Oldest entries age out in insertion order, which matches the expected behavior.

When FIFO causes problems

Stable hot items: a frequently accessed item inserted early in the cache's lifetime will eventually be evicted by FIFO regardless of ongoing access. LRU or LFU would keep it; FIFO evicts it because it's old.

Cache capacity: 3
Insertions: A, B, C
Now: user reads A repeatedly → A has highest utility

Insert D: FIFO evicts A (oldest), even though A is actively used

This is the classic FIFO failure mode for general-purpose caching. It's why LRU is the default for most application caches.

Thrashing with cyclic access patterns: if your working set is just larger than cache capacity and accessed cyclically (A→B→C→D→A→B→...), FIFO will miss on every access. Each item is evicted just before it's accessed again.
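This failure mode is easy to reproduce with a small simulation (a sketch; `fifo_hits` is a helper written for this example, not part of the article's cache class):

```python
from collections import deque

def fifo_hits(accesses, capacity: int) -> int:
    """Count cache hits for FIFO eviction over an access sequence."""
    cache, order, hits = set(), deque(), 0
    for key in accesses:
        if key in cache:
            hits += 1
        else:                                  # miss: insert, evicting oldest
            if len(cache) >= capacity:
                cache.discard(order.popleft())
            cache.add(key)
            order.append(key)
    return hits

# Working set of 4 items cycled through a cache of capacity 3:
cycle = ["A", "B", "C", "D"] * 10              # 40 accesses
assert fifo_hits(cycle, capacity=3) == 0       # every access misses
assert fifo_hits(cycle, capacity=4) == 36      # 4 cold misses, then all hits
```

One extra slot of capacity takes the hit rate from 0% to 90%: with capacity 3, each item is evicted exactly one step before the cycle returns to it.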

FIFO anomaly: more cache space can mean more misses

FIFO (and other non-stack algorithms) can exhibit Bélády's anomaly: adding more cache capacity increases miss rate for certain access patterns.

Example with access sequence: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5

With cache size 3: 9 misses
With cache size 4: 10 misses

This happens because increasing cache size changes which items are evicted and when, and for specific access patterns this produces more misses.

LRU does not exhibit Bélády's anomaly — it is a "stack algorithm," meaning the set of items in a cache of size N is always a subset of the items in a cache of size N+1. Adding capacity never hurts hit rate with LRU.

In practice, Bélády's anomaly is rare with real workloads but is a reason to benchmark cache hit rates when increasing FIFO cache capacity.
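The anomaly can be verified directly by simulating the access sequence above (a sketch; `fifo_misses` is a helper written for this example):

```python
from collections import deque

def fifo_misses(accesses, capacity: int) -> int:
    """Count cache misses for FIFO eviction over an access sequence."""
    cache, order, misses = set(), deque(), 0
    for key in accesses:
        if key not in cache:
            misses += 1
            if len(cache) >= capacity:
                cache.discard(order.popleft())
            cache.add(key)
            order.append(key)
    return misses

seq = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
assert fifo_misses(seq, capacity=3) == 9
assert fifo_misses(seq, capacity=4) == 10    # more capacity, more misses
```

With capacity 4, the longer residence times shift the eviction schedule so that keys 1 and 2 are gone by the time they are re-accessed, costing two extra misses that the smaller cache avoided.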

FIFO vs LRU in practice

FIFO is simpler to implement and has lower overhead — no access-time updates on reads, no doubly linked list reordering. For high-throughput caches where read operations happen under lock, eliminating the list update on every get reduces contention.

However, LRU's hit rate advantage is significant for most application workloads. The cost of maintaining recency order (an O(1) operation with a hash map + doubly linked list) is worth it for caches where usage pattern determines item value.

Use FIFO when:

  • Items have natural expiration by age (streaming data, time-windowed buffers)
  • All items have equal expected future utility regardless of access history
  • Implementation simplicity and overhead minimization matter more than optimal hit rate
  • Items are consumed exactly once (more of a queue than a cache)

Use LRU when:

  • Access frequency or recency predicts future utility
  • Protecting actively-used items from eviction is important
  • Hit rate matters more than implementation simplicity

A session store uses FIFO eviction with capacity for 100,000 sessions. The application has 80,000 active users. A deployment pipeline runs integration tests that create 30,000 test sessions. After the test run completes and test sessions are cleaned up, active users start reporting being logged out. What happened?


Test sessions are created in bulk at the start of the integration test run. The 80,000 active user sessions were already in the cache before the test run started.

  • A. The cleanup of 30,000 test sessions caused a memory leak that evicted user sessions
    Incorrect. Deleting entries from a cache does not cause evictions. It frees space.
  • B. During test session insertion, FIFO evicted the 30,000 oldest user sessions to make room — those users were logged out
    Correct! The cache held 80,000 sessions (at or near capacity). Inserting 30,000 test sessions required evicting 30,000 entries — FIFO evicts the oldest, which are the earliest-logged-in user sessions. Those users' sessions were gone before the test cleanup happened. Fix: use a separate cache namespace or key prefix for test sessions, or increase cache capacity to accommodate test load without evicting production sessions.
  • C. FIFO cannot handle concurrent writes from test and production traffic
    Incorrect. The implementation shown uses a lock for thread safety. Concurrent writes are handled correctly. The issue is the eviction policy, not concurrency.
  • D. The sessions that were evicted had expired TTLs
    Incorrect. Basic FIFO eviction doesn't use TTLs — it evicts by insertion order. If TTL-based expiry were involved, the outcome would be different.

Hint: 80,000 sessions + 30,000 new test sessions = 110,000. Cache holds 100,000. Which sessions get evicted?