/ˌdiː diː ˈɑːr/
n. — “DDR: SDRAM that figured out both clock edges work, doubling bandwidth while pretending SDR wasn't embarrassingly slow.”
DDR (Double Data Rate) SDRAM transfers data on both rising and falling edges of the clock—hence "double"—effectively doubling bandwidth over single-edge SDR without jacking clock frequencies into EMI hell. Built on DRAM cells with the same leaky-cap shenanigans, DDR pipelines activate-CAS-precharge sequences through source-synchronous strobes (DQS) that track data bursts, while commands stay single-pumped on CK. This pin-efficient trick scales from DDR1's humble beginnings to DDR5's channelized madness, fueling CPUs, servers, and spawning the GDDR graphics mutants.
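The "double" is just arithmetic: two transfers per I/O clock cycle. A minimal sketch of the peak-bandwidth math for a standard 64-bit channel (the DDR4-3200 numbers are an illustrative example, not a spec table):

```python
def peak_bandwidth_gbs(io_clock_mhz: float, bus_bits: int = 64) -> float:
    """Peak GB/s for one channel: two transfers per I/O clock edge pair."""
    transfers_per_sec = 2 * io_clock_mhz * 1e6  # the "double" in DDR
    return transfers_per_sec * (bus_bits / 8) / 1e9

# DDR4-3200 runs its I/O clock at 1600 MHz -> 3200 MT/s on a 64-bit bus
print(peak_bandwidth_gbs(1600))  # ~25.6 GB/s per channel
```

Swap in 3200 MHz for a DDR5-6400 part and the same formula lands on ~51.2 GB/s, which is why marketing quotes MT/s rather than the half-as-impressive clock.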
Key characteristics and concepts include:
- Source-synchronous DQS strobe per data group, latching bursts without CK skew murder, because global clocks hate >1GHz pretensions.
- Prefetch depth growing across generations (2n→4n→8n, held at 8n for DDR4 with bank groups picking up the slack, then 16n in DDR5), bursting sequential column data to pins so CPUs pretend random access isn't pathological.
- On-Die Termination (ODT) and command fly-by topologies taming reflections across DIMM daisy chains, mocking point-to-point wiring fantasies.
- Progressive voltage drops (2.5V→1.8V→1.5V→1.2V→1.1V across DDR1–DDR5) and bank group partitioning to squeeze more parallelism from finite pins without thermal apocalypse.
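The prefetch bullet above hides the core insight: the DRAM array's internal frequency has stayed roughly flat, so each generation widens the internal fetch and serializes it faster at the pins. A toy model of that trade (the 200 MHz core clock is an illustrative assumption, roughly matching base speed grades):

```python
CORE_CLOCK_MHZ = 200  # assumed flat internal array frequency (illustrative)

def pin_data_rate_mts(prefetch: int, core_clock_mhz: float = CORE_CLOCK_MHZ) -> float:
    """MT/s at the pins: one internal fetch of `prefetch` words per core cycle."""
    return core_clock_mhz * prefetch

# Base speed grades fall out of widening the prefetch alone:
for gen, prefetch in [("DDR", 2), ("DDR2", 4), ("DDR3", 8), ("DDR5", 16)]:
    print(gen, pin_data_rate_mts(prefetch), "MT/s")
```

Which is also why DDR4 needed bank groups instead of 16n: a wider prefetch forces longer minimum bursts, and past 8n the burst exceeds a typical 64-byte cache line on a 64-bit channel—DDR5 dodged that by splitting the DIMM into two independent 32-bit channels.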
Under a memory controller's baton, DDR endures rank interleaving for concurrency, page-open policies for row hit bliss, and ODT juggling to fake multi-drop scalability while refresh demons steal cycles.
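The page-open policy trade-off can be sketched in a few lines: a row hit costs only CAS latency, a row conflict pays precharge plus activate first. The cycle counts are illustrative assumptions, not JEDEC timings:

```python
tRP, tRCD, CL = 14, 14, 14  # precharge, activate, CAS latency (cycles; illustrative)

def open_page_cycles(row_stream):
    """Total cycles for a row-address stream to one bank under an open-page policy."""
    open_row, total = None, 0
    for row in row_stream:
        if row == open_row:      # row hit: column access only
            total += CL
        elif open_row is None:   # bank idle: activate, then read
            total += tRCD + CL
        else:                    # row conflict: precharge, activate, read
            total += tRP + tRCD + CL
        open_row = row
    return total

# Locality pays: hammering one row beats thrashing between two.
print(open_page_cycles([1, 1, 1, 1]))  # 70 cycles
print(open_page_cycles([1, 2, 1, 2]))  # 154 cycles
```

A closed-page policy would precharge eagerly after every access, trading away the hit case to shave the conflict penalty—which is why controllers ship adaptive policies instead of picking one.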
An intuition anchor is to picture DDR as SDRAM with a split personality: commands saunter along single-pumped like proper Victorians, but data works both edges of the clock for twice the throughput—leaving single-rate fossils coughing in the dust.