ECC

/ˌiː siː ˈsiː/

n. — "Extra bits catching flipped data before it corrupts your server."

ECC (Error Correcting Code) memory adds 8 check bits per 64 data bits using Hamming codes to detect/correct single-bit errors and detect multi-bit faults in DRAM, standard for servers/workstations where cosmic rays or voltage noise flip bits during long-running workloads. Unlike consumer DDR, ECC modules carry one extra chip per eight data chips (9 total for an x8 rank) with controller support for SECDED (single error correction, double error detection); separately, DDR5 mandates on-die ECC that scrubs internal cell errors invisibly to the system.

Key characteristics and concepts include:

  • Hamming(72,64) SECDED encoding: 8 check bits per 64 data bits (7 Hamming bits plus overall parity), correcting 1-bit flips and detecting 2-bit errors via syndrome decoding.
  • Server ECC RDIMMs vs consumer non-parity DIMMs: 72-bit vs 64-bit data paths (9 chips vs 8), with ~1-2% performance overhead from controller check logic.
  • On-die ECC in DDR5/LPDDR5X corrects single-bit errors inside the die (8 check bits per 128 data bits), invisible to the memory controller.
  • Critical for financial/scientific workloads where a single bit-flip can corrupt a million-dollar trade or ruin a physics result.

In server memory traffic, the controller writes 64 data bits + 8 ECC bits (DDR4's 72-bit bus; DDR5 RDIMMs split this into two 40-bit sub-channels), and readback recomputes the syndrome: if non-zero, the bad bit is flipped back and a CE (correctable error) is logged, while an uncorrectable DE can halt the system.
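
To make the syndrome mechanics concrete, here is a minimal Python sketch of a toy SECDED code: a Hamming(7,4) plus one overall parity bit rather than the real (72,64) geometry, but the encode/syndrome/correct flow is the same one the controller runs in silicon.

```python
# Toy SECDED: Hamming(7,4) plus an overall parity bit. Real DDR ECC uses
# a (72,64) code, but the mechanics are identical: encode adds check bits,
# decode recomputes a syndrome, a non-zero syndrome locates (and flips back)
# a single-bit error, and overall parity separates CE from DE.

def encode(data4):
    """4 data bits -> 8-bit codeword: c[1..7] Hamming, c[0] overall parity."""
    c = [0] * 8
    c[3], c[5], c[6], c[7] = data4
    c[1] = c[3] ^ c[5] ^ c[7]        # covers positions with bit0 set
    c[2] = c[3] ^ c[6] ^ c[7]        # covers positions with bit1 set
    c[4] = c[5] ^ c[6] ^ c[7]        # covers positions with bit2 set
    c[0] = sum(c[1:]) % 2            # overall parity enables double detection
    return c

def decode(c):
    """Return (status, data4). status: 'ok' | 'CE corrected' | 'DE uncorrectable'."""
    syndrome = 0
    for pos in range(1, 8):
        if c[pos]:
            syndrome ^= pos          # XOR of set positions = error location
    parity_even = sum(c) % 2 == 0
    if syndrome == 0 and parity_even:
        status = "ok"
    elif not parity_even:            # odd overall weight: single-bit error
        if syndrome:
            c[syndrome] ^= 1         # CE: flip the located bit back
        status = "CE corrected"
    else:                            # syndrome set but parity even: two flips
        status = "DE uncorrectable"  # without c[0] we would mis-correct here
    return status, [c[3], c[5], c[6], c[7]]

word = encode([1, 0, 1, 1])
word[6] ^= 1                         # cosmic ray flips one bit
print(decode(word))                  # ('CE corrected', [1, 0, 1, 1])
```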

An intuition anchor is to picture ECC as spellcheck for binary: single typos auto-fixed, double typos flagged for manual review—keeping server ledgers pristine while consumer RAM gambles on cosmic ray roulette.

tRP

/tiː ɑːr ˈpiː/

n. — "Row close-to-next-open delay—DRAM's precharge housekeeping timer."

tRP (Row Precharge time) measures the minimum clock cycles required to complete a precharge (PRE) command and ready a DRAM bank for new row activation, typically 10-18 cycles terminating the open-page state before the next ACT command. Third in the timing quartet (CL-tRCD-tRP-tRAS), tRP is paid on row conflicts when controllers swap pages, combining with tRCD and CL for the full row-miss penalty while DDR prefetch masks the cost on sequential hits. It holds ~12-15ns across generations despite clock inflation, critical for random access where row thrashing murders bandwidth.

Key characteristics and concepts include:

  • Row conflict penalty = tRP + tRCD + CL, versus pure CL for page hits—controllers chase spatial locality to dodge this tax.
  • All-bank precharge (PREab) closes every bank at once, governed by tRPab (slightly longer than per-bank tRP), used before refresh or power-down sequences.
  • Per-bank vs all-bank precharge split into separate timings (tRPpb/tRPab) in LPDDR generations, reflecting internal timing variations.
  • Stays ~13-14ns constant (e.g., tRP=15 × 0.938ns ≈ 14ns @DDR4-2133), mocking the MT/s race while dominating random-access benchmarks.

In a DDR5 random stream, PRE row47 closes the page (tRP=36 cycles=12ns), ACT row128 (tRCD=36), CAS col3 (CL=36): a full 108-cycle row miss vs a 36-cycle page hit, repeated across 32 banks while the scheduler hunts locality.
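
The tax is simple to tally; a quick Python back-of-envelope using the DDR5-6000 timings assumed above (36-36-36, with one clock = 2000/6000 ns):

```python
# Row-miss vs page-hit cost at assumed DDR5-6000 timings (36-36-36).
MT_S = 6000
tCK_ns = 2000 / MT_S                 # 0.333 ns per clock
tRP, tRCD, CL = 36, 36, 36

page_hit = CL                        # row already open: CAS, then CL to data
row_miss = tRP + tRCD + CL           # PRE, re-ACT, CAS: the full penalty

print(f"page hit: {page_hit} cycles = {page_hit * tCK_ns:.1f} ns")   # 12.0 ns
print(f"row miss: {row_miss} cycles = {row_miss * tCK_ns:.1f} ns")   # 36.0 ns
```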

An intuition anchor is to picture tRP as kitchen cleanup after serving from a stocked counter: the PRE command wipes the surfaces (sense amps discharge), and tRP waits for them to dry before restocking; rushed cleanup leaves residue, slow cleanup idles hungry customers.

tRCD

/tiː ɑːr siː ˈdiː/

n. — "Row activation to CAS delay—DRAM's 'kitchen ready' timer."

tRCD (Row address to Column address Delay) measures the minimum clock cycles between row activation (ACT) and a CAS read/write command in DRAM, typically 10-18 cycles during which sense amplifiers stabilize the open page before column access. Listed as the second timing parameter (CL-tRCD-tRP-tRAS), tRCD governs random access latency (=tRCD+CL) while DDR prefetch hides sequential sins, holding roughly constant at ~13-15ns across generations despite clock inflation.

Key characteristics and concepts include:

  • Critical path for row miss → first data: ACT waits tRCD, then CAS waits CL—total random latency benchmark.
  • Separate read/write values (tRCDRD/tRCDWR) on some DDR4 platforms and in GDDR, reflecting DQS strobe vs command timing differences.
  • Bank interleaving hides one tRCD while others process, essential for GDDR shader streams.
  • True latency (ns) = cycles × (2000/MT/s), staying ~12-15ns from DDR1 (tRCD=3 × 5ns @DDR-400) to DDR5 (tRCD=36 × 0.357ns @DDR5-5600).

In DDR5 random access, ACT row47 (tRCD=36 cycles=12ns), CAS col3 (CL=36=12ns), data via DQS—repeat across 32 banks while controller chases row hits to dodge full tRCD+CL penalty.
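
The "cycles inflate, nanoseconds don't" story checks out with the cycles × (2000/MT/s) formula; a sketch with typical-ish tRCD picks per generation (illustrative values, not a survey):

```python
# Absolute tRCD barely moves across generations as cycle counts balloon.
# True latency (ns) = cycles * (2000 / MT/s), since one clock = 2000/MT/s ns.
parts = [
    ("DDR-400",    400,  3),         # illustrative tRCD cycle counts
    ("DDR2-800",   800,  6),
    ("DDR3-1600", 1600, 11),
    ("DDR4-2400", 2400, 17),
    ("DDR5-5600", 5600, 36),
]
for name, mts, trcd in parts:
    print(f"{name:>9}: tRCD={trcd:2} cycles -> {trcd * 2000 / mts:5.2f} ns")
```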

An intuition anchor is to picture tRCD as kitchen prep after ordering: row activation stocks counters (sense amps stable), tRCD waits for organization before waiter (CAS) grabs your plate—rushed prep burns food, idle prep wastes time.

page

/peɪdʒ/

n. — "Open row's data latched in sense amps, primed for fast CAS column grabs."

Page is the open row state in DRAM after row activation dumps thousands of cells onto sense amplifiers, creating a cache where subsequent CAS commands access columns with minimal latency instead of full row cycles. Row hits keep the page open for rapid sequential CAS bursts, while conflicts force precharge + new activation, crippling throughput as controllers predict spatial locality across DDR banks.

Key characteristics and concepts include:

  • One open page per bank: CAS to the same page = instant column decode vs precharge + activation + CAS for conflicts.
  • Page-mode chaining multiple CAS cycles while row stays active, classic DRAM speed trick.
  • Controllers favor open-page policies betting sequential access stays within active page.
  • tRAS floors page lifetime (the minimum active window before precharge; refresh deadlines bound the maximum), balancing cell restore against greedy open-page policies.

In DDR4 streaming, activate row47 opens page, CAS col3/7/15 grab columns (row hit), precharge closes, activate row128 (row miss)—repeat while banks hide latency by parallel page juggling.
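
A sketch of the controller's bet, assuming a single bank under an open-page policy and a made-up row stream; it only classifies each access as page hit, page miss (bank idle), or row conflict:

```python
# Toy open-page bookkeeping for one bank: classify each access.
def classify(rows):
    open_row, stats = None, {"hit": 0, "miss": 0, "conflict": 0}
    for row in rows:
        if open_row is None:
            stats["miss"] += 1       # bank precharged: ACT + CAS
        elif open_row == row:
            stats["hit"] += 1        # page open: CAS only, minimal latency
        else:
            stats["conflict"] += 1   # wrong page: PRE + ACT + CAS
        open_row = row               # open-page policy: leave the row open
    return stats

# Sequential traffic stays on row 47; the stray row 128 pays twice.
print(classify([47, 47, 47, 128, 47]))
# {'hit': 2, 'miss': 1, 'conflict': 2}
```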

An intuition anchor is to picture DRAM page as a restaurant counter stocked after kitchen opens pantry: CAS grabs specific items instantly while counter stays loaded—closing/re-stocking wastes time servers hate.

row

/roʊ/

n. — "DRAM's horizontal data platter that must activate before CAS can serve column snacks."

Row activation (ACT command) in DRAM dumps an entire row's worth of capacitors (~1K-16K cells) onto sense amplifiers via wordline assertion, opening the page for subsequent CAS column reads/writes after the tRCD latency (row-to-column delay). With tRCD budgets of 10-18 clock cycles, row hits skip re-activation for instant CAS while conflicts force tRP precharge + a new ACT, crippling bandwidth as controllers chase spatial locality across DDR banks.

Key characteristics and concepts include:

  • Wordline assertion connects entire row (~8KB) to bitlines, sense amps latch charge differences—tRCD waits for stable voltages before CAS releases column data.
  • Row hit policy keeps hot rows open for back-to-back CAS, row conflict closes (tRP) then reopens (tRCD+CAS)—classic latency vs throughput war.
  • Bank-level parallelism hides one row cycle while others cook, critical for GDDR shader traffic pretending random access exists.
  • tRAS (row active time) sets the minimum a row must stay open before precharge (refresh deadlines bound the maximum), balancing cell restore against greedy open-page policies.

In a DDR4 stream, ACT row0 (tRCD=13), CAS col47 (CL=16), CAS col128 (row hit), PRE row0 (tRP=13), ACT row42 (tRCD), CAS col3—repeat across 16 banks while controller predicts the next winning row.
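
Tallying that exact trace in Python, assuming the stated timings (tRCD=13, CL=16, tRP=13) and counting only those three delays (a real controller also honors tRAS, tCCD, refresh, and friends):

```python
# Cycle cost of the ACT/CAS/PRE trace above, tRCD/CL/tRP only.
tRCD, CL, tRP = 13, 16, 13
trace = [
    ("ACT row0",   tRCD),            # open the row, wait for sense amps
    ("CAS col47",  CL),              # first data out
    ("CAS col128", CL),              # row hit: no re-activation needed
    ("PRE row0",   tRP),             # close the page
    ("ACT row42",  tRCD),            # open the next row
    ("CAS col3",   CL),
]
print(sum(cycles for _, cycles in trace))   # 87 cycles for this snippet
```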

An intuition anchor is to picture DRAM row as a restaurant kitchen: ACT swings open the pantry door dumping ingredients onto counters (sense amps), CAS grabs specific shelves—leaving the pantry open risks spoilage (tRAS), closing/reopening wastes time (row conflict).

CAS

/kæs/

n. — "Clock cycles DRAM waits after row activation before coughing up column data."

CAS (Column Address Strobe) latency measures the clock cycles between a DRAM read command and data appearing on the pins after row activation opens the page, typically CL=2-3 for DDR1, 14-22 for DDR4, 32-40+ for DDR5, where bigger numbers mostly reflect faster clocks rather than slower chips. The primary DDR timing parameter advertised on spec sheets (CL16-18-18-36 etc.), CAS dominates random-access latency while prefetch depth hides sequential sins, with true latency (ns) = CL × (2000/MT/s).

Key characteristics and concepts include:

  • Critical path for random column reads from open pages, where lower CL wins 1:1 latency battles but higher MT/s often wins real-world wars.
  • Posted CAS (AL) in DDR2+ lets the controller issue CAS right after ACT while the device delays it internally (RL = AL + CL), easing command-bus scheduling rather than shrinking latency.
  • tCL governs read-to-data-valid window, while tCWL (write CAS) lags reads by 0-2 cycles for write leveling calibration bliss.
  • Absolute latency stays ~10-15ns across DDR generations despite CL inflation, because physics hates free speed lunch.

In a DDR5 controller beat, ACT opens the row (tRCD), CAS fires, and data bursts via DQS strobes CL=40 cycles (~13ns) later; repeat millions of times per second while bank groups hide conflicts and refresh steals cycles.
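
The "lower CL vs higher MT/s" battle from the list above is pure arithmetic; a sketch comparing two hypothetical kits via CL × (2000/MT/s):

```python
# Lower CL vs faster clock: the nanosecond value is what matters.
def cas_ns(cl, mts):
    return cl * 2000 / mts           # one clock period = 2000/MT/s ns

for name, cl, mts in [("DDR4-3200 CL16", 16, 3200),
                      ("DDR5-6000 CL40", 40, 6000)]:
    print(f"{name}: {cas_ns(cl, mts):.2f} ns to first data")
# DDR4-3200 CL16: 10.00 ns  <- wins the pure-latency battle
# DDR5-6000 CL40: 13.33 ns  <- but its bandwidth wins real-world wars
```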

An intuition anchor is to picture CAS as the DRAM receptionist: row activation dials the extension, CAS is hold music duration before data transfer connects—faster receptionists beat longer hold times, even if the office clock sped up.

GDDR6X

/ˌdʒiː ˌdiː ˌdiː ˈɑːr sɪks ɛks/

n. — “GDDR6X: Nvidia's PAM4 fever dream that crams 21Gbps/pin by pretending analog noise doesn't hate multi-level signaling.”

GDDR6X (Graphics Double Data Rate 6 eXtreme) is Micron's proprietary graphics SGRAM using 4-level PAM4 signaling to double per-pin bits-per-symbol vs standard GDDR6, hitting 19-24Gbps/pin (at only 9.5-12GBaud symbol rates) on GPU PCBs for RTX 30/40-series flagships. Trading GDDR6's clean NRZ for PAM4's four voltage eyes (00/01/10/11), GDDR6X shrinks burst length to BL8 while delivering identical 32B/channel transfers, but demands heroic ODT, training, and error correction to combat PAM4's signal-to-noise massacre. This bandwidth beast enables 1+TB/s on 384-bit buses but guzzles power, with yields wobbling like a drunk toddler.

Key characteristics and concepts include:

  • PAM4 signaling transmitting 2 bits/symbol vs NRZ's 1 bit, theoretically doubling throughput at half the clock—until ISI/noise/JEP137 makes engineers cry.
  • BL8 bursts (vs GDDR6 BL16) matching 32B/channel throughput, with per-pin rates 19-24Gbps turning 384-bit buses into 1.15TB/s monsters.
  • 16n prefetch + dual 16-bit channels per die like GDDR6, but PAM4 complexity demands CA training, write leveling, and per-lane deskew wizardry.
  • 1.35V operation with noticeably higher power draw than GDDR6 (a few watts per device, tens of watts across a flagship board), because bandwidth isn't free and thermals hate NVIDIA's ambitions.

In an RTX workload, GDDR6X feeds ray-tracing BVHs, 4K textures, and DLSS frames as wide PAM4 firehoses, with GPU controllers fighting eye closure and bit errors to sustain 1TB/s+—until GDDR7 mercifully brings PAM3 sanity.
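
The headline figures are one multiplication deep; a sketch of the bus math (per-pin rates and bus width from above):

```python
# Peak bandwidth = pins * per-pin data rate; PAM4's trick is hitting the
# Gbps figure at half the symbol rate of NRZ (2 bits per symbol).
def peak_gb_per_s(bus_bits, gbps_per_pin):
    return bus_bits * gbps_per_pin / 8       # bits -> bytes

print(peak_gb_per_s(384, 21))    # 1008.0 GB/s: ~1TB/s on a 384-bit bus
print(peak_gb_per_s(384, 24))    # 1152.0 GB/s: the 1.15TB/s ceiling above
print(21 / 2)                    # 10.5 GBaud symbol rate at 21Gbps/pin
```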

An intuition anchor is to picture GDDR6X as GDDR6 that snorted PAM4: twice the bits per symbol sounds brilliant until analog reality slaps you with four squished eyes instead of two clean ones, yet somehow squeezes 50% more bandwidth for flagship GPUs.

DDR2

/ˌdiː diː ˈɑːr tuː/

n. — "DDR2: DDR1's gym-rat sequel that halved voltage to 1.8V and pretended 4n prefetch made it bandwidth royalty."

DDR2 (Double Data Rate 2) SDRAM drops DDR's 2.5V bloat to 1.8V with 4n prefetch (double DDR1's 2n wimpiness), 400-1066 MT/s data rates, 4-8 banks, Posted CAS, and on-die termination (ODT) to mock multi-DIMM signal reflections, while the internal array clock runs at half the I/O clock (a quarter of the data rate) for power savings. Building on DRAM foundations, DDR2 introduces AL (additive latency), write latency = read latency - 1, and the RDQS strobe for x8 devices, fueling mid-2000s Core 2 Duo glory before DDR3's fly-by wizardry.

Key characteristics and concepts include:

  • 4n prefetch architecture bursting four column words per read command to the pins, pretending DDR1's 2n was cute at 533-1066MT/s.
  • Internal clock at half I/O rate + SSTL_18 signaling slashing power 40% vs DDR1, because 2.5V was so early-2000s.
  • Posted CAS (AL) and OCD driver calibration pretending memory controllers aren't timing-juggling clowns.
  • 4-8 banks + ODT faking concurrency across 240-pin DIMMs, with burst lengths of 4 or 8 for sequential bliss.

In a classic dual-channel Northbridge mambo, DDR2 interleaves rank accesses while ODT tames stubs, chasing row-hit policies and AL tricks to sustain 6.4GB/s (PC2-6400) until DDR3 fly-by nuked the topology.
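
The clock-domain split is the whole 4n trick; a sketch of the DDR2-800 arithmetic (PC2-6400 numbers from above):

```python
# DDR2-800 clock domains: slow core, 2x I/O clock, data on both edges.
data_rate_mts  = 800
io_clock_mhz   = data_rate_mts / 2         # 400 MHz, data on both edges
core_clock_mhz = io_clock_mhz / 2          # 200 MHz array clock
prefetch       = data_rate_mts / core_clock_mhz
bandwidth_gbs  = data_rate_mts * 8 / 1000  # 64-bit DIMM = 8 bytes/beat

print(prefetch)       # 4.0 -> four words fetched per slow array access
print(bandwidth_gbs)  # 6.4 -> PC2-6400's rating; dual channel doubles it
```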

An intuition anchor is to picture DDR2 as DDR1 that discovered the gym and diet: double prefetch muscles, voltage cut for stamina, ODT tattoos—strutting twice the bandwidth while mocking its predecessor's power-guzzling flab.

DDR1

/ˌdiː diː ˈɑːr wʌn/

n. — “DDR1: the plucky SDRAM pioneer that discovered both clock edges work, kickstarting the DDR dynasty before anyone cared.”

DDR1 (Double Data Rate 1) SDRAM introduced dual-edge data transfers at 2.5-2.6V with 2n prefetch architecture, 100-200MHz clocks (200-400MT/s effective), and source-synchronous DQS strobes to double bandwidth over SDR without clock frequency insanity. Building on DRAM leaky cells with four banks and SSTL_2 I/O, DDR1 pipelines activate-CAS bursts through DLL-aligned interfaces while commands single-pump on CK edges, launching PC-2100/PC-2700 DIMMs that fueled early 2000s Pentium 4s before DDR2 voltage diets arrived.

Key characteristics and concepts include:

  • 2n prefetch bursting two column words per activate to pins, mocking SDR's single-edge stinginess at the cost of doubled internal pipelining.
  • Single DQS strobe per data group (edge-aligned reads, center-aligned writes), pretending global CK distribution doesn't suck at >133MHz.
  • Four banks with no bank groups, motherboard-resistor termination (no ODT yet), and 2.5V SSTL_2 pretending signal integrity scales beyond two DIMMs/channel.
  • CAS 2-3 latencies with burst lengths 2/4/8, auto-precharge options, and 7.8µs refresh intervals mocking async DRAM's timing anarchy.

In a classic Northbridge controller waltz, DDR1 interleaves rank accesses across stubby multi-DIMM channels, chasing row-hit bliss while DLLs fight CK-DQS skew, delivering 4.2GB/s of dual-channel PC-2100 glory until DDR2's prefetch muscles flexed harder.
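
Dual-edge math in one sketch (PC-xxxx module names are just MT/s × 8 bytes, rounded by marketing):

```python
# DDR1: same clock as SDR, twice the transfers per cycle.
for clock_mhz in (100, 133, 166, 200):
    mts = clock_mhz * 2              # data on rising AND falling edges
    mb_s = mts * 8                   # 64-bit module moves 8 bytes per beat
    print(f"{clock_mhz} MHz -> DDR-{mts} -> ~PC-{mb_s} ({mb_s / 1000:.1f} GB/s)")
# 133 MHz -> DDR-266 -> ~PC-2128 (2.1 GB/s), sold as PC-2100
```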

An intuition anchor is to picture DDR1 as SDRAM's rebellious teenager: data on *both* clock edges instead of just rising like a prude, birthing modern GDDR mutants while sipping 2.5V like it owned the place.

DDR5

/ˌdiː diː ˈɑːr fɪv/

n. — “DDR5: DDR4 that split the channel in two and pretended PMIC wizardry fixed everything.”

DDR5 (Double Data Rate 5) SDRAM goes rogue at 1.1V with dual 32-bit sub-channels per 64-bit module, on-DIMM PMIC, 16n/32n prefetch, 32 banks in 8 groups, and Decision Feedback Equalization to blast 4800-8800+ MT/s while mocking DDR4's single-channel monotony. Building on DRAM leaky buckets, DDR5 mandates on-die ECC, two-channel-per-DIMM architecture, internal VREF generators, and same-bank refresh to fake error-free operation at terabyte-scale densities. This beast powers 2020s desktops, servers, and AI farms before DDR6 whispers sweet nothings.

Key characteristics and concepts include:

  • Dual sub-channels (2x32-bit) per DIMM with independent timing, doubling efficiency while PMIC serves clean power—bye-bye DDR4 voltage droop drama.
  • On-die ECC scrubbing single-bit flukes within the chip, pretending density scaling doesn't summon cosmic rays.
  • 32 banks across 8 groups with same-bank refresh, slashing tRFC penalties so controllers juggle like deranged clowns.
  • DFE equalization + internal VREF/DCA pretending >6400MT/s eye diagrams aren't suicide pacts with physics.

In a modern IMC symphony, DDR5 interleaves sub-channel traffic across PMIC-stabilized rails, chasing row hit nirvana while on-die ECC mops bit-flips and refresh daemons target single banks—delivering AI/data-center bliss until DDR6 crashes the party.
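
Sub-channel math, sketched for an assumed DDR5-6000 module (the split changes scheduling granularity, not peak bytes):

```python
# DDR5 splits each 64-bit DIMM into two independent 32-bit sub-channels;
# peak bytes/s are unchanged, but independent command streams double.
mts = 6000                            # assumed DDR5-6000 module
dimm_peak_gbs = mts * 8 / 1000        # 64 data bits = 8 bytes per beat
per_sub_gbs   = mts * 4 / 1000        # each sub-channel moves 4 bytes/beat

print(dimm_peak_gbs)                  # 48.0 GB/s per DIMM
print(2, "x", per_sub_gbs)            # 2 x 24.0 GB/s, scheduled separately
# BL16 per sub-channel: 16 beats * 4 B = 64 B, exactly one cache line.
```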

An intuition anchor is to picture DDR5 as DDR4 that cloned itself: two sub-channels per DIMM, personal power butler (PMIC), and error-cleaning janitor (on-die ECC), mocking single-channel ancestors while sipping less juice.