Micron Technology this week confirmed that it had begun mass production of GDDR5X memory. As revealed last week, the first graphics card to use the new type of graphics DRAM will be NVIDIA’s upcoming GeForce GTX 1080 graphics adapter powered by the company’s new high-performance GPU based on its Pascal architecture.

Micron’s first production GDDR5X chips (or G5X, as NVIDIA calls them) will operate at 10 Gb/s and will enable memory bandwidth of up to 320 GB/s for the GeForce GTX 1080, which is only a little less than the bandwidth of NVIDIA’s current-generation flagship GeForce GTX Titan X/980 Ti, which use a much wider memory bus. NVIDIA’s GeForce GTX 1080 video cards are expected to hit the market on May 27, 2016, and presumably Micron has been helping NVIDIA stockpile memory chips for the launch for some time now.
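As a sanity check on the headline figure, peak memory bandwidth follows directly from the per-pin data rate and the bus width. A minimal sketch (the formula is the standard one; the specific numbers come from this article):

```python
def peak_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin rate (Gb/s) x bus width (bits) / 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

# GeForce GTX 1080: 10 Gb/s GDDR5X on a 256-bit bus
print(peak_bandwidth_gbs(10, 256))  # 320.0 GB/s
# GeForce GTX 980 Ti: 7 Gb/s GDDR5 on a 384-bit bus
print(peak_bandwidth_gbs(7, 384))   # 336.0 GB/s
```

This is why a 256-bit GDDR5X card can sit within a few percent of a 384-bit GDDR5 card: the ~43% higher per-pin rate nearly offsets the 50% wider bus.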

NVIDIA GPU Specification Comparison
                              GTX 1080        GTX 1070     GTX 980 Ti    GTX 980       GTX 780
TFLOPs (FMA)                  9 TFLOPs        6.5 TFLOPs   5.6 TFLOPs    5 TFLOPs      4.1 TFLOPs
Memory Clock                  10Gbps GDDR5X   GDDR5        7Gbps GDDR5   7Gbps GDDR5   6Gbps GDDR5
Memory Bus Width              256-bit         ?            384-bit       256-bit       384-bit
VRAM                          8 GB            8 GB         6 GB          4 GB          3 GB
VRAM Bandwidth                320 GB/s        ?            336 GB/s      224 GB/s      288 GB/s
Est. VRAM Power Consumption   ~20 W           ?            ~31.5 W       ~20 W         ?
TDP                           180 W           ?            250 W         165 W         250 W
GPU                           "GP104"         "GP104"      GM200         GM204         GK110
Manufacturing Process         TSMC 16nm       TSMC 16nm    TSMC 28nm     TSMC 28nm     TSMC 28nm
Launch Date                   05/27/2016      06/10/2016   05/31/2015    09/18/2014    05/23/2013

Earlier this year Micron began to sample GDDR5X chips rated to operate at 10 Gb/s, 11 Gb/s and 12 Gb/s in quad data rate (QDR) mode with a 16n prefetch. However, it looks like NVIDIA has decided to be conservative and run the chips only at the minimum data rate.
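The move from GDDR5's DDR signaling (8n prefetch) to GDDR5X's QDR signaling (16n prefetch) means each pin transfers four bits per WCK clock cycle instead of two. A rough sketch of how the per-pin rate decomposes; note the WCK frequencies below are inferred from the rated data rates, not stated in the article:

```python
def per_pin_rate_gbps(wck_ghz: float, bits_per_wck: int) -> float:
    """Per-pin data rate in Gb/s: WCK clock (GHz) x bits transferred per WCK cycle."""
    return wck_ghz * bits_per_wck

# GDDR5 (DDR, 2 bits per WCK cycle): 7 Gb/s implies a 3.5 GHz WCK
print(per_pin_rate_gbps(3.5, 2))  # 7.0
# GDDR5X (QDR, 4 bits per WCK cycle): 10 Gb/s from only a 2.5 GHz WCK
print(per_pin_rate_gbps(2.5, 4))  # 10.0
```

The point of QDR is visible here: GDDR5X reaches a higher per-pin rate while actually running a slower I/O clock than top-bin GDDR5.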

As reported, Micron’s first GDDR5X memory ICs (integrated circuits) feature 8 Gb (1 GB) capacity, sport a 32-bit interface, and use a 1.35 V supply and I/O voltage as well as a 1.8 V pump voltage (Vpp). The chips come in 190-ball BGA packages measuring 14×10 mm, so they will take up a little less space on graphics cards than GDDR5 ICs.
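Those per-chip figures also explain the GTX 1080's memory configuration: a 256-bit bus built from 32-bit ICs requires eight chips, and eight 8 Gb ICs yield the card's 8 GB. A quick check (numbers from the article):

```python
BUS_WIDTH_BITS = 256     # GTX 1080 memory bus width
CHIP_IO_BITS = 32        # per-IC interface width
CHIP_CAPACITY_GBIT = 8   # per-IC capacity in gigabits

chips = BUS_WIDTH_BITS // CHIP_IO_BITS        # ICs needed to fill the bus
total_gb = chips * CHIP_CAPACITY_GBIT // 8    # gigabits -> gigabytes
print(chips, total_gb)  # 8 chips, 8 GB
```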

The announcement by Micron indicates that the company will be the only supplier of GDDR5X memory for NVIDIA’s GeForce GTX 1080 graphics adapters, at least initially. Another important takeaway is that GDDR5X is real: it is in mass production now, and it can indeed replace GDDR5 as a cost-efficient solution for gaming graphics cards. How affordable is GDDR5X? It should not be too expensive - particularly as it is designed as an alternative to more complex technologies such as HBM - but this early in the game it is definitely a premium product over tried-and-true (and widely available) GDDR5.

Source: Micron

59 Comments

  • gurok - Thursday, May 12, 2016 - link

    What's SDRRAM? I think you mean just SDRAM -- synchronous dynamic RAM.
  • extide - Friday, May 13, 2016 - link

    It's not the tick and the tock, it's the rise and the fall of the signal. With Single Data Rate, you send on every rise or every fall, with DDR you send on the rise and the fall.
  • willis936 - Thursday, May 12, 2016 - link

    My guess would be because the implementer can choose to run GDDR5X memory in either DDR or QDR mode.
  • Yojimbo - Thursday, May 12, 2016 - link

    I wonder how the 1080 gets by with such relatively low RAM bandwidth (in terms of its ratio to performance, compared with Maxwell GPUs)? Did they improve their compression algorithms significantly again, or is it something else? NVIDIA also seems to be claiming a lower real-world performance gain than peak FLOPS gain for Pascal over Maxwell (~25% for performance, so I've seen claimed somewhere, versus >50% for FLOPS), which is odd (if true). Is that because of an aggressive boost clock? Otherwise it seems to imply an architecture that isn't fed as efficiently. What would be the beneficial trade-off that could lead to that? Overall power efficiency? Die size efficiency?
  • jasonelmore - Thursday, May 12, 2016 - link

    Double the register size and much higher clocks.
  • Yojimbo - Thursday, May 12, 2016 - link

    What question were you trying to answer there? I asked more than one. Were you referring to the RAM bandwidth? What does core clock have to do with that? Does register size help alleviate need for RAM bandwidth? If so, how?
  • dragonsqrrl - Thursday, May 12, 2016 - link

    Larger registers and caches can help reduce memory bottlenecks (the GPU doesn't have to access main memory as often).
  • Yojimbo - Sunday, May 15, 2016 - link

    I just found this on videocardz: http://cdn.videocardz.com/1/2016/05/NVIDIA-GeForce... NVIDIA is claiming a 1.7 times bandwidth improvement with Pascal versus Maxwell due to fourth generation delta color compression.
  • Yojimbo - Sunday, May 15, 2016 - link

    http://cdn.videocardz.com/1/2016/05/NVIDIA-GeForce...
  • Yojimbo - Sunday, May 15, 2016 - link

    Sorry, the color compression only accounts for 20% improvement in the bandwidth. The other ~40% is from faster DRAM.
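That breakdown checks out against the spec table above, assuming the comparison baseline is the GTX 980 (224 GB/s) rather than the 980 Ti; the multipliers compound rather than add:

```python
baseline_gbs = 224.0   # GTX 980 raw bandwidth, GB/s (from the table above)
raw_gbs = 320.0        # GTX 1080 raw bandwidth, GB/s
dram_gain = raw_gbs / baseline_gbs   # ~1.43x from faster DRAM (the "~40%")
compression_gain = 1.2               # 4th-gen delta color compression (the "20%")
print(round(dram_gain * compression_gain, 2))  # 1.71 -- NVIDIA's "1.7x" claim
```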
