/ˌdʒiː ˌdiː ˌdiː ˈɑːr sɪks/
n. — “GDDR6: because GDDR5 wasn’t quite fast enough to pretend modern GPUs don’t starve for bandwidth.”
GDDR6 (Graphics Double Data Rate 6) is the high-performance graphics DRAM generation that succeeded GDDR5, delivering dramatically higher per-pin data rates for dedicated video memory on contemporary graphics cards. Mounted as synchronous graphics RAM (SGRAM) directly on the graphics card’s PCB, GDDR6 interfaces with the GPU over wide, high-speed buses optimized for massive sequential bursts rather than low-latency random access. This design choice sustains the throughput demands of thousands of parallel shader cores processing textures, geometry, and ray-tracing structures in real time.
Key characteristics and concepts include:
- Extreme per-pin transfer rates via high-speed NRZ signaling and a deep 16n prefetch, turning each internal access into a bandwidth tsunami that dwarfs GDDR5 and earlier DDR-family pretenders. (PAM4 signaling is the party trick of GDDR6X, not baseline GDDR6.)
- Two independent 16-bit channels per device that interleave traffic from shader armies, hiding latency behind sheer parallelism while pretending random access patterns don’t exist.
- Ubiquitous on mid-range to flagship GPUs, where bus width (192-bit to 384-bit and beyond) multiplies sky-high pin rates into hundreds of gigabytes per second, flirting with a full terabyte per second on the widest configurations (see the arithmetic sketch after this list)—until GDDR6X or HBM rudely interrupts.
- Lower operating voltage (nominally 1.35 V versus GDDR5’s 1.5 V) plus carefully tuned power delivery and signal integrity that somehow keep this speed demon stable without melting consumer-grade boards.
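To make the bandwidth bullet concrete, here is a minimal back-of-the-envelope sketch. The 16 Gb/s per-pin rate and 256-bit bus are illustrative assumptions, not a specific card’s spec: peak bandwidth is simply pin rate times bus width, and each 16-bit channel’s 16n prefetch yields a 32-byte burst per access.

```cuda
// Host-only arithmetic sketch; the per-pin rate and bus width below are
// assumed example values, not taken from any particular GPU.
#include <cstdio>

int main() {
    const double pin_rate_gbps  = 16.0;  // assumed per-pin data rate, Gbit/s
    const int    bus_width_bits = 256;   // assumed GPU memory bus width

    // Peak theoretical bandwidth in GB/s: every pin moves pin_rate_gbps
    // gigabits per second, and there are bus_width_bits pins.
    double peak_gbs = pin_rate_gbps * bus_width_bits / 8.0;

    // Access granularity per channel: 16-bit channel * 16n prefetch = 256 bits.
    int burst_bytes = 16 * 16 / 8;

    printf("peak bandwidth  : %.0f GB/s\n", peak_gbs);      // 16 * 256 / 8 = 512
    printf("burst per access: %d bytes per channel\n", burst_bytes);
    return 0;
}
```

Run the same arithmetic with a 384-bit bus at 20 Gb/s and it lands at 960 GB/s, which is where the terabyte-per-second bragging starts.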
In a real-world rendering pipeline, a GPU hammers GDDR6 with coalesced bursts of vertex data, massive textures, and BVH structures for ray tracing, keeping execution units saturated so frames hit target rates without the polite stuttering of lesser memories.
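As a hedged illustration of what “coalesced bursts” mean in practice, here is a minimal CUDA sketch, not any engine’s actual pipeline code: adjacent threads read adjacent float4 vertices, so each warp’s load becomes one contiguous burst of the kind GDDR6 streams best. The kernel name, buffer size, and scale factor are hypothetical placeholders.

```cuda
// Minimal coalesced-access sketch (hypothetical kernel and sizes).
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale_vertices(const float4* __restrict__ in,
                               float4* __restrict__ out,
                               int n, float s) {
    // Consecutive threads touch consecutive 16-byte elements, so one warp
    // issues one contiguous 512-byte burst instead of 32 scattered accesses.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float4 v = in[i];
        out[i] = make_float4(v.x * s, v.y * s, v.z * s, v.w);
    }
}

int main() {
    const int n = 1 << 24;  // ~16M vertices, ~256 MiB per buffer (assumed)
    float4 *in, *out;
    cudaMalloc((void**)&in,  n * sizeof(float4));
    cudaMalloc((void**)&out, n * sizeof(float4));

    scale_vertices<<<(n + 255) / 256, 256>>>(in, out, n, 2.0f);
    cudaDeviceSynchronize();

    printf("streamed %zu bytes through device memory\n", 2 * n * sizeof(float4));
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```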
An intuition anchor is to see GDDR6 as the graphics memory equivalent of a firehose mocking a garden sprinkler: it doesn’t finesse single drops but blasts entire oceans of pixels to feed the GPU’s endless thirst.