Samsung announced its next-generation memory technology for high-performance graphics cards: GDDR6W. Claiming to offer twice the capacity and performance of conventional GDDR6 memory, GDDR6W would be comparable to HBM2E in absolute performance and will allow for an overall graphics memory bandwidth of 1.4TB/s. For reference, Nvidia’s beastly GeForce RTX 4090 currently achieves 1TB/s with GDDR6X.
Samsung sees the new technology as key to enabling “immersive metaverse experiences”. However, it is unlikely to allow for more memory bandwidth or faster graphics cards in the short term. More on that in a moment.
Samsung says the new memory specification increases bandwidth per pin to 22 Gbps from GDDR6’s maximum of 16 Gbps (GDDR6X hits 21 Gbps per pin). However, GDDR6W doubles the overall bandwidth per memory package, from GDDR6’s 24 Gbps to 48 Gbps, primarily by doubling the number of pins on each memory package.
Chip capacity has also doubled, from 16Gb to 32Gb per package. Samsung has achieved all of this while maintaining the same physical footprint as GDDR6 and GDDR6X. Perhaps most impressive of all, this was done by dual-stacking memory dies within the package while reducing the overall package height by 36%.
It should be noted that the overall memory bandwidth claim of 1.4TB/s is for a 512-bit memory interface with eight GDDR6W packages totaling 32GB of graphics memory. A 24GB RTX 4090 card uses 12 GDDR6X packages on a 384-bit bus. With the same bus width, GDDR6X would be only slightly slower than the new GDDR6W standard.
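These headline figures are straightforward arithmetic: multiply the per-pin data rate by the bus width, then divide by eight to convert bits to bytes. A quick sketch using the numbers above (the function name is ours, for illustration):

```python
def total_bandwidth_gb_s(per_pin_gbps: float, bus_width_bits: int) -> float:
    """Total memory bandwidth in GB/s: per-pin rate (Gbps) x bus width / 8 bits per byte."""
    return per_pin_gbps * bus_width_bits / 8

# GDDR6W at 22 Gbps per pin on a 512-bit interface (eight 64-pin packages)
print(total_bandwidth_gb_s(22, 512))  # 1408.0 GB/s, i.e. ~1.4 TB/s

# GDDR6X at 21 Gbps per pin on the RTX 4090's 384-bit bus
print(total_bandwidth_gb_s(21, 384))  # 1008.0 GB/s, i.e. ~1 TB/s
```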
So the critical point to note here is that Samsung is making direct performance comparisons with GDDR6 rather than GDDR6X, probably because GDDR6X is produced only by Micron.
Where GDDR6W has a clear advantage, however, is in capacity. With double the capacity of GDDR6 and GDDR6X, only half the number of chips are needed for a given amount of total memory, opening up the possibility for graphics cards with even more VRAM.
Using the RTX 4090 comparison, had the card been GDDR6W-based, it would have needed only six memory packages instead of 12 to achieve the same 24GB capacity and 1TB/s bandwidth.
So here’s the takeaway: with GDDR6W you get double the performance (actually a little more than double, thanks to that 22Gbps per pin versus 21Gbps for GDDR6X) and double the capacity per memory package. That means, for a given capacity, you only need half the number of packages. Ultimately, the actual memory bandwidth available to the GPU is essentially the same as GDDR6X at a given capacity.
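That trade-off can be sketched numerically: halving the package count at a given capacity also halves the bus width, so the doubled per-package bandwidth roughly cancels out. The per-package figures below are inferred from the article’s 4090 example (2GB per GDDR6X package, 32 pins; 4GB per GDDR6W package, 64 pins):

```python
def memory_config(packages: int, gb_per_package: int, pins_per_package: int,
                  per_pin_gbps: float) -> tuple[int, float]:
    """Return (capacity in GB, total bandwidth in GB/s) for a memory configuration."""
    capacity = packages * gb_per_package
    bandwidth = packages * pins_per_package * per_pin_gbps / 8
    return capacity, bandwidth

# RTX 4090 as shipped: twelve 2GB GDDR6X packages at 21 Gbps per pin
print(memory_config(12, 2, 32, 21))  # (24, 1008.0)

# Hypothetical GDDR6W build: six 4GB packages at 22 Gbps per pin
print(memory_config(6, 4, 64, 22))   # (24, 1056.0) -- same capacity, near-identical bandwidth
```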
What GDDR6W really offers, then, is the ability to use fewer packages to achieve the same capacity and performance, most likely at a lower cost. Or to increase capacity and performance to levels never seen before. Look at a 24GB RTX 4090 card, for example: there’s very little space around the GPU for more memory packages.
A 48GB RTX 4090 with double the memory capacity just wouldn’t be possible with GDDR6X, even if that much memory would almost certainly be very silly. The point is that GDDR6W opens up possibilities for the future. It also looks particularly interesting for laptops, where fewer memory packages will always be a good thing.
Nvidia currently favors Micron’s GDDR6X while AMD sticks with GDDR6 for its latest graphics cards. Neither has indicated any intention to jump on Samsung’s new GDDR6W technology. Indeed, it’s not entirely clear whether existing GPUs, including Nvidia’s latest RTX 40 series and AMD’s new Radeon RX 7000, support GDDR6W. However, we believe supporting GDDR6W is likely to amount to a new packaging technology for GDDR6 rather than a new memory technology per se. Watch this space…