/ˌdʒiː ˌdiː ˌdiː ˈɑːr sɪks ɛks/
n. — “GDDR6X: Nvidia's PAM4 fever dream that crams 21 Gbps/pin by pretending analog noise doesn't hate multi-level signaling.”
GDDR6X (Graphics Double Data Rate 6 eXtreme) is Micron's proprietary graphics SGRAM, co-developed with Nvidia, using four-level PAM4 signaling to double the bits carried per pin-transfer vs standard GDDR6's NRZ, hitting 19-24 Gbps/pin (9.5-12 GBaud, since each symbol carries 2 bits) on GPU PCBs for RTX 30/40-series flagships. Trading GDDR6's clean NRZ for PAM4's four voltage levels, i.e. three cramped eyes stacked in one swing, GDDR6X shrinks burst length to BL8 while delivering identical 32B/channel transfers, but demands heroic ODT, link training, and CRC-based error detection to combat PAM4's signal-to-noise massacre. This bandwidth beast enables 1+ TB/s on 384-bit buses but guzzles power and, early on, yielded like a drunk toddler.
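A quick sanity check of those numbers, as a minimal Python sketch (the bus widths and per-pin rates are the publicly quoted figures; the helper name is ours):

```python
# Back-of-the-envelope GDDR6X bandwidth and access-granularity math.

def bus_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Aggregate bandwidth in GB/s: one data bit per pin-transfer, 8 bits/byte."""
    return bus_width_bits * pin_rate_gbps / 8

# 384-bit bus at quoted GDDR6X per-pin rates:
for rate in (19, 21, 24):
    print(f"{rate} Gbps/pin x 384-bit = {bus_bandwidth_gbs(384, rate):.0f} GB/s")
# 19 -> 912 GB/s, 21 -> 1008 GB/s (~1 TB/s), 24 -> 1152 GB/s (~1.15 TB/s)

# Access granularity per 16-bit channel stays 32 bytes in both generations:
gddr6_bits  = 16 * 16 * 1   # 16 pins x BL16 x 1 bit/symbol (NRZ)  = 256 bits
gddr6x_bits = 16 * 8 * 2    # 16 pins x BL8  x 2 bits/symbol (PAM4) = 256 bits
assert gddr6_bits == gddr6x_bits == 32 * 8
```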
Key characteristics and concepts include:
- PAM4 signaling transmitting 2 bits/symbol vs NRZ's 1 bit, theoretically doubling throughput at the same symbol rate, until ISI/noise/jitter make engineers cry (see the PAM4 sketch after this list).
- BL8 bursts (vs GDDR6's BL16) preserving the 32B/channel access granularity, with per-pin rates of 19-24 Gbps turning 384-bit buses into up-to-1.15 TB/s monsters, per the math above.
- 16n prefetch + dual 16-bit channels per die like GDDR6, but PAM4 complexity demands CA training, write training, and per-lane deskew wizardry (deskew sketch below).
- 1.35 V operation with higher absolute power draw than GDDR6 (a few watts per device, tens of watts across a flagship's memory subsystem; rough math below), because bandwidth isn't free and thermals hate Nvidia's ambitions.
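To make the eye-closure point concrete, here's a toy PAM4 encode/decode in Python. A sketch only: the Gray-coded level map and normalized voltages below are generic PAM4 conventions, not Micron's proprietary GDDR6X coding.

```python
import math

# Toy PAM4 encode/decode: 2 bits per symbol across four voltage levels.
# The Gray-coded map (adjacent levels differ in one bit, so a single-level
# slicer error corrupts only one bit) is common PAM4 practice; Micron's
# actual GDDR6X line coding is proprietary.
LEVELS = {0b00: 0.0, 0b01: 1/3, 0b11: 2/3, 0b10: 1.0}  # normalized swing
GRAY_BY_LEVEL = [0b00, 0b01, 0b11, 0b10]
THRESHOLDS = (1/6, 1/2, 5/6)                           # slicer midpoints

def encode(data: bytes) -> list[float]:
    """Split each byte into four 2-bit symbols, MSB pair first."""
    return [LEVELS[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0)]

def decode_symbol(v: float) -> int:
    """Slice a received voltage back to its 2-bit code."""
    level = sum(v > t for t in THRESHOLDS)             # 0, 1, 2, or 3
    return GRAY_BY_LEVEL[level]

symbols = encode(b"\xB4")            # 0b10_11_01_00 -> 1.0, 2/3, 1/3, 0.0
assert [decode_symbol(v) for v in symbols] == [0b10, 0b11, 0b01, 0b00]

# The catch: each eye is ~1/3 of the full swing, i.e. a ~9.5 dB SNR
# penalty versus NRZ at the same amplitude.
print(f"PAM4 eye penalty vs NRZ: {20 * math.log10(3):.1f} dB")
```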
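And the training bullet, sketched. Real GDDR6X training sequences are controller- and vendor-specific, so this `train_lane` helper is hypothetical, but the shape is the classic one: sweep a lane's delay taps, find the widest run of passing taps, park the sampler in the middle.

```python
# Hypothetical per-lane deskew: sweep delay taps, center on the passing
# window. Real controller training (CA training, read/write eye scans)
# follows this shape with hardware-specific patterns and tap counts.

def train_lane(passes_at_tap, num_taps: int = 64) -> int:
    """Return the delay tap at the center of the longest passing run."""
    best_start, best_len, start = 0, 0, None
    for tap in range(num_taps):
        if passes_at_tap(tap):
            start = tap if start is None else start
            if tap - start + 1 > best_len:
                best_start, best_len = start, tap - start + 1
        else:
            start = None
    if best_len == 0:
        raise RuntimeError("no passing window: lane untrainable")
    return best_start + best_len // 2

# Fake lane whose data eye is open only for taps 20..43:
print(train_lane(lambda tap: 20 <= tap <= 43))  # -> 32, mid-window
```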
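Finally, the power math. Micron cited energy-per-bit figures around 7.25 pJ/bit for GDDR6X at launch; treating that as a rough constant (it varies with speed bin, temperature, and access pattern) yields per-device and subsystem estimates in the tens-of-watts ballpark:

```python
# Order-of-magnitude GDDR6X device power from energy-per-bit.
PJ_PER_BIT = 7.25   # Micron's launch-era figure; treat as approximate

def device_power_w(pins: int, pin_rate_gbps: float) -> float:
    """Watts = (bits/s) * (J/bit)."""
    return pins * pin_rate_gbps * 1e9 * PJ_PER_BIT * 1e-12

# A x32 device at 21 Gbps/pin:
print(f"{device_power_w(32, 21):.1f} W per device")             # ~4.9 W
# Twelve such devices filling a 384-bit bus:
print(f"{12 * device_power_w(32, 21):.0f} W memory subsystem")  # ~58 W
```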
In an RTX workload, GDDR6X feeds ray-tracing BVHs, 4K textures, and DLSS frames as wide PAM4 firehoses, with GPU controllers fighting eye closure and bit errors to sustain 1 TB/s+—until GDDR7 mercifully brings PAM3 sanity.
An intuition anchor is to picture GDDR6X as GDDR6 that snorted PAM4: twice the bits per symbol sounds brilliant until analog reality slaps you with three squished eyes instead of one clean one, yet somehow squeezes 50% more bandwidth for flagship GPUs.