/ˌdʒiː ˌdiː ˌdiː ˈɑːr/
n. — “GDDR: graphics memory that sneers at DDR's pedestrian pace while force-feeding GPUs the bandwidth they pretend not to crave.”
GDDR (Graphics Double Data Rate) is the family of high-bandwidth synchronous graphics RAM (SGRAM) specialized for dedicated video memory on graphics cards, distinct from general-purpose system DDR despite sharing double-data-rate signaling roots. Chips from the GDDR lineage (GDDR3 through GDDR7, plus the GDDR5X/GDDR6X offshoots) mount directly on the graphics card's PCB, wired to the GPU via wide, high-speed buses that prioritize massive parallel bursts for textures, frame buffers, and shaders over low-latency random pokes. Each generational leap cranks per-pin rates, prefetch depth, and channel smarts to sustain terabyte-per-second throughput, mocking DDR's balanced-but-bland compromises.
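To make the arithmetic concrete, here is a minimal sketch of the peak-bandwidth calculation, assuming an illustrative 21 Gbps per-pin rate on a 384-bit bus (hypothetical round figures, not quoted from any particular part):

```cuda
// Hypothetical peak-bandwidth arithmetic; the per-pin rate and bus width
// below are illustrative assumptions, not datasheet values.
#include <stdio.h>

int main(void) {
    const double gbps_per_pin   = 21.0;  // assumed per-pin data rate (Gbit/s)
    const int    bus_width_bits = 384;   // assumed memory bus width (pins)
    // Peak bandwidth = per-pin rate x bus width, converted from Gbit/s to GB/s.
    double peak_GBps = gbps_per_pin * bus_width_bits / 8.0;
    printf("Peak theoretical bandwidth: %.0f GB/s\n", peak_GBps);  // ~1008 GB/s
    return 0;
}
```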
Key characteristics and concepts include:
- Graphics-first tuning that swaps DDR's latency obsession for raw throughput via deep prefetch and burst modes, turning shader hordes into bandwidth gluttons.
- Progressive signaling wizardry: NRZ from GDDR3 through GDDR6, PAM4 in GDDR6X, PAM3 in GDDR7, pushing 20+ Gbps per pin so 384-bit buses clear 1 TB/s without apology.
- Family evolution from GDDR3's modest 1-2 GT/s to GDDR7 parts arriving around 32 GT/s with headroom past 40, each generation outpacing its system-DDR contemporary while staying cheap enough for consumer GPUs (see the sketch after this list).
- Tight on-board integration with power/thermal tricks to keep screaming data rates stable, unlike DDR's cushy mainboard suburbs.
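The sketch promised above: the same peak-bandwidth arithmetic applied across generations, using rough, generation-typical per-pin rates (assumptions, not datasheet values) on a constant 384-bit bus so the scaling is easy to eyeball:

```cuda
// Rough, generation-typical per-pin rates (assumptions, not datasheet values)
// scaled onto a common 384-bit bus to show how the same arithmetic compounds
// across the family.
#include <stdio.h>

int main(void) {
    struct Gen { const char *name; double gbps_per_pin; };
    const struct Gen gens[] = {
        {"GDDR3", 2.0},    // assumed ~2 Gbit/s per pin
        {"GDDR5", 8.0},    // assumed ~8 Gbit/s per pin
        {"GDDR6", 16.0},   // assumed ~16 Gbit/s per pin
        {"GDDR7", 32.0},   // assumed ~32 Gbit/s per pin
    };
    const int bus_width_bits = 384;  // bus width held constant for comparison
    for (int i = 0; i < 4; i++) {
        double peak_GBps = gens[i].gbps_per_pin * bus_width_bits / 8.0;
        printf("%-6s %5.1f Gbps/pin -> %7.1f GB/s peak\n",
               gens[i].name, gens[i].gbps_per_pin, peak_GBps);
    }
    return 0;
}
```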
In any GPU pipeline, GDDR endures endless coalesced barrages of geometry, ray data, and AI weights as wide sequential floods, saturating channels to hide latency behind brute-force parallelism that would choke vanilla DDR in seconds.
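A minimal CUDA sketch of what those coalesced floods look like from the GPU's side, assuming a CUDA-capable device; the kernel names, buffer size, and stride are illustrative choices, not drawn from any particular pipeline:

```cuda
// Contrasts the coalesced, sequential access pattern GDDR is built to stream
// against a strided pattern that fragments bursts. Names and sizes are
// illustrative assumptions.
#include <cuda_runtime.h>
#include <stdio.h>

// Adjacent threads read adjacent words: each warp turns into one wide burst
// that a GDDR channel can service at full rate.
__global__ void coalescedCopy(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];
}

// Adjacent threads read words far apart: each warp scatters into many partial
// bursts, wasting most of the data every burst fetches.
__global__ void stridedCopy(const float *in, float *out, int n, int stride) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = (int)(((long long)i * stride) % n);  // scatter reads across the buffer
    if (i < n) out[i] = in[j];
}

int main(void) {
    const int n = 1 << 24;  // 16M floats (~64 MB); contents irrelevant for a bandwidth demo
    float *in, *out;
    cudaMalloc(&in,  n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));
    dim3 block(256), grid((n + 255) / 256);
    coalescedCopy<<<grid, block>>>(in, out, n);      // streams wide bursts
    stridedCopy<<<grid, block>>>(in, out, n, 1024);  // fragments bursts
    cudaDeviceSynchronize();
    cudaFree(in); cudaFree(out);
    return 0;
}
```

On real hardware, profiling typically shows the coalesced kernel running near the card's peak memory bandwidth while the strided one wastes most of each burst it triggers; exact ratios depend on the GPU and its memory controller.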
An intuition anchor is to see GDDR as the graphics card's industrial fire main: not sipping delicately like DDR's kitchen faucet, but hosing GPU factories with data floods so relentless they render bandwidth complaints delightfully obsolete.