/kæʃ koʊˈhɪərəns/
noun … “Keeping multiple caches in sync.”
Cache Coherency is the property that multiple copies of data held in different caches reflect the same value at any given time. In multiprocessor or multi-core systems, each CPU may have its own cache, and maintaining coherency prevents processors from operating on stale or conflicting data. Cache coherency is critical for correctness in concurrent programs and high-performance systems.
Key characteristics of Cache Coherency include:
- Write propagation: changes to a cached value must propagate to other caches or main memory.
- Transaction serialization: read and write operations appear in a consistent order across processors.
- Protocols: hardware or software protocols like MESI (Modified, Exclusive, Shared, Invalid) manage coherency efficiently.
- Latency vs. correctness: strict coherency ensures correctness but can introduce delays; relaxed models trade consistency for performance.
- Multi-level consideration: coherency must be maintained across all cache levels (L1, L2, L3) and sometimes across multiple systems in distributed memory setups.
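The MESI protocol mentioned above can be illustrated with a toy snooping-bus model. This is a minimal sketch, not a hardware-accurate implementation: the `Bus`, `Cache`, and `State` names are invented for illustration, the Exclusive state is omitted for brevity (read misses load lines as Shared), and writes simply invalidate all peer copies before taking the Modified state.

```python
from enum import Enum

class State(Enum):
    MODIFIED = "M"   # this cache holds the only, dirty copy
    EXCLUSIVE = "E"  # unused in this sketch; misses load as SHARED
    SHARED = "S"     # clean copy, possibly also held by peers
    INVALID = "I"    # line holds no valid data

class Bus:
    """Shared bus: caches snoop each other's reads and invalidations."""
    def __init__(self, memory):
        self.memory = memory
        self.caches = []

    def broadcast_invalidate(self, sender, address):
        for cache in self.caches:
            if cache is not sender:
                cache.invalidate(address)

    def fetch(self, sender, address):
        # Snoop: if a peer holds a MODIFIED copy, it must write back
        # and downgrade to SHARED before memory is read.
        for cache in self.caches:
            if cache is not sender:
                line = cache.lines.get(address)
                if line and line[0] is State.MODIFIED:
                    self.memory[address] = line[1]
                    cache.lines[address] = (State.SHARED, line[1])
        return self.memory[address]

class Cache:
    def __init__(self, bus):
        self.bus = bus
        self.lines = {}  # address -> (State, value)
        bus.caches.append(self)

    def invalidate(self, address):
        if address in self.lines:
            state, value = self.lines[address]
            if state is State.MODIFIED:
                self.bus.memory[address] = value  # write back dirty data
            self.lines[address] = (State.INVALID, None)

    def read(self, address):
        state, value = self.lines.get(address, (State.INVALID, None))
        if state is State.INVALID:
            value = self.bus.fetch(self, address)  # miss: snoop, then fill
            self.lines[address] = (State.SHARED, value)
        return value

    def write(self, address, value):
        # Gain exclusive ownership: invalidate all other copies first.
        self.bus.broadcast_invalidate(self, address)
        self.lines[address] = (State.MODIFIED, value)

memory = {0x10: 0}
bus = Bus(memory)
c1, c2 = Cache(bus), Cache(bus)

c1.write(0x10, 42)      # c1's line becomes MODIFIED; peers invalidated
print(c2.read(0x10))    # 42: the snoop forces c1 to write back and share
```

Real hardware performs these transitions per cache line and entirely in the coherence controller, but the state changes follow the same pattern: a write invalidates remote copies, and a read miss forces a dirty owner to write back.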
Workflow example: In a multi-core system:
Core1.cache.write(address, 42)
Core2.cache.read(address) -- Protocol ensures Core2 sees 42 or waits until propagation completes
Memory[address] = 42 -- Main memory updated after caches synchronize
Here, a write to Core1’s cache is propagated according to the coherency protocol so that Core2 and main memory remain consistent.
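The workflow above can be traced with a toy two-core model. This is an illustrative sketch, assuming a write-through, write-invalidate scheme; the `Core` class, its fields, and the address value are invented for the example.

```python
main_memory = {0x2A: 0}

class Core:
    def __init__(self, name):
        self.name = name
        self.cache = {}     # private cache: address -> value
        self.peers = []     # other cores sharing the bus

    def write(self, address, value):
        self.cache[address] = value
        main_memory[address] = value       # write-through to main memory
        for peer in self.peers:
            peer.cache.pop(address, None)  # invalidate stale peer copies

    def read(self, address):
        if address not in self.cache:      # miss: refill from main memory
            self.cache[address] = main_memory[address]
        return self.cache[address]

core1, core2 = Core("Core1"), Core("Core2")
core1.peers.append(core2)
core2.peers.append(core1)

core2.read(0x2A)          # Core2 caches the old value, 0
core1.write(0x2A, 42)     # write-through + invalidation of Core2's copy
print(core2.read(0x2A))   # 42: the miss refills from the updated memory
```

Without the invalidation step in `write`, Core2 would keep returning its stale cached 0, which is exactly the hazard a coherency protocol exists to prevent.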
Conceptually, Cache Coherency is like multiple chefs sharing copies of a recipe: when one chef updates an ingredient or instruction, all other chefs must see the same update to avoid cooking conflicting dishes.
See Cache, CPU, Multiprocessing, Memory, Concurrency.