This past week, Micron quietly added its GDDR5X memory chips to its product catalog and revealed that the DRAM devices are currently sampling to partners. The company also disclosed the specifications of the chips it is currently shipping to those partners, which could enter mass production later this summer. As it turns out, the first samples, though running at much higher data rates than GDDR5, will not reach the maximum data rates initially laid out in the GDDR5X specification.

The first GDDR5X memory chips from Micron are marked MT58K256M32JA, feature an 8 Gb (1 GB) capacity, and are rated to run at 10 Gb/s, 11 Gb/s and 12 Gb/s in quad data rate (QDR) mode with a 16n prefetch. The chips use a 1.35 V supply and I/O voltage as well as a 1.8 V pump voltage (Vpp). Micron’s GDDR5X memory devices sport 32-bit interfaces and come in 190-ball BGA packages measuring 14×10 mm. The GDDR5X DRAMs are reportedly manufactured using the 20 nm process technology that Micron has been using for over a year now.
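To put those ratings in perspective, here is a minimal back-of-the-envelope sketch (plain arithmetic on the figures quoted above, not additional Micron data) of the peak bandwidth a single 32-bit chip delivers at each rated data rate:

    # Peak bandwidth of a single GDDR5X chip: per-pin data rate times the
    # 32-bit chip interface, divided by 8 bits per byte.
    INTERFACE_WIDTH_BITS = 32

    for rate_gbps in (10, 11, 12):  # rated per-pin data rates, in Gb/s
        chip_bw_gbs = rate_gbps * INTERFACE_WIDTH_BITS / 8
        print(f"{rate_gbps} Gb/s per pin -> {chip_bw_gbs:.0f} GB/s per chip")
    # 10 Gb/s -> 40 GB/s, 11 Gb/s -> 44 GB/s, 12 Gb/s -> 48 GB/s per chip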

The GDDR5X memory standard, as you might remember from our previous reports, is largely based on the GDDR5 specification, but it makes three crucial improvements: significantly higher data rates (up to 14 Gb/s per pin, with the potential for up to 16 Gb/s per pin), higher and more flexible chip capacities (4 Gb, 6 Gb, 8 Gb, 12 Gb and 16 Gb capacities are supported), and better energy efficiency thanks to lower supply and I/O voltages.

The first samples of GDDR5X memory chips fully leverage the key architectural enhancements of the specification, including quad data rate (QDR) signaling, which doubles the amount of data transferred per cycle over the memory bus compared to GDDR5 and allows the use of a wider 16n prefetch architecture that enables up to 512-bit (64 byte) array read or write accesses. However, the maximum data rates of Micron's sample chips are below those initially advertised, possibly because of a conservative approach taken by Micron and its partners.
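The access granularity falls straight out of the prefetch depth and the 32-bit chip interface; the short sketch below simply spells out that arithmetic, using GDDR5's 8n prefetch as the baseline:

    # Access granularity = prefetch depth x interface width (in bits) / 8.
    def access_granularity_bytes(prefetch_n, interface_bits=32):
        return prefetch_n * interface_bits // 8

    print(access_granularity_bytes(8))   # GDDR5  (8n prefetch):  32 bytes (256 bits) per access
    print(access_granularity_bytes(16))  # GDDR5X (16n prefetch): 64 bytes (512 bits) per access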

The addition of GDDR5X samples to Micron’s parts catalog has three important implications. First, the initial development of Micron’s GDDR5X memory chips is officially complete, and the company has achieved its key goal of increasing performance over GDDR5 without increasing power consumption. Second, one or more of Micron's customers are already testing processors with GDDR5X memory controllers, which means that certain future GPUs from companies like AMD or NVIDIA support GDDR5X and already exist in silicon. Third, the initial GDDR5X lineup from Micron will consist of moderately clocked ICs.

GPU Memory Math

| | AMD Radeon R9 Fury X | AMD Radeon R9 290X | NVIDIA GeForce GTX 980 Ti | NVIDIA GeForce GTX 960 | GDDR5X 256-bit interface | GDDR5X 128-bit interface |
|---|---|---|---|---|---|---|
| Total Capacity | 4 GB | 4 GB | 6 GB | 2 GB | 8 GB | 4 GB |
| B/W Per Pin | 1 Gb/s | 5 Gb/s | 7 Gb/s | 7 Gb/s | 12 Gb/s or 10 Gb/s | 12 Gb/s or 10 Gb/s |
| Chip Capacity | 8 Gb | 2 Gb | 4 Gb | 4 Gb | 8 Gb | 8 Gb |
| No. of Chips/Stacks | 4 | 16 | 12 | 4 | 8 | 4 |
| B/W Per Chip/Stack | 128 GB/s | 20 GB/s | 28 GB/s | 28 GB/s | 48 GB/s or 40 GB/s | 48 GB/s or 40 GB/s |
| Bus Width | 4096-bit | 512-bit | 384-bit | 128-bit | 256-bit | 128-bit |
| Total B/W | 512 GB/s | 320 GB/s | 336 GB/s | 112 GB/s | 384 GB/s or 320 GB/s | 192 GB/s or 160 GB/s |
| Estimated DRAM Power Consumption | 14.6 W | 30 W | 31.5 W | 10 W | 20 W | 10 W |

Thanks to GDDR5X memory chips with 10 Gb/s – 12 Gb/s data rates, developers of graphics cards will be able to increase the peak bandwidth of 256-bit memory sub-systems to 320 GB/s – 384 GB/s. That is an impressive achievement, because this amount of bandwidth is comparable to that of AMD’s Radeon R9 290/390 or NVIDIA’s GeForce GTX 980 Ti/Titan X graphics adapters. The latter use 512-bit and 384-bit memory interfaces, respectively, which are quite expensive and intricate to implement.
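The math behind those figures is straightforward; the sketch below simply reproduces the table's totals from bus width and per-pin data rate, using nothing beyond the numbers already quoted:

    # Peak memory bandwidth = bus width (bits) x per-pin data rate (Gb/s) / 8.
    def total_bw_gbs(bus_width_bits, rate_gbps):
        return bus_width_bits * rate_gbps / 8

    print(total_bw_gbs(256, 10))  # GDDR5X, 256-bit @ 10 Gb/s -> 320 GB/s
    print(total_bw_gbs(256, 12))  # GDDR5X, 256-bit @ 12 Gb/s -> 384 GB/s
    print(total_bw_gbs(512, 5))   # Radeon R9 290X, 512-bit GDDR5 @ 5 Gb/s -> 320 GB/s
    print(total_bw_gbs(384, 7))   # GeForce GTX 980 Ti, 384-bit GDDR5 @ 7 Gb/s -> 336 GB/s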

Micron originally promised to start sampling its GDDR5X with customers in Q1, and the company has formally delivered on that promise. What remains to be seen is when GPU designers plan to roll out their GDDR5X-supporting processors. Micron says it is set to start mass production of the new memory this summer, which hopefully means we will be seeing graphics cards featuring GDDR5X before the end of the year.

Source: Micron

Comments

  • Khenglish - Tuesday, March 29, 2016 - link

    These are my thoughts too. I honestly don't see the hbm costs ever coming down much. It will always need an interposer. The interposer is a slab of silicon with just the interconnect stack and no logic. This is something fabs already know how to make very well, so if it's expensive to make now, it will stay expensive. I see hbm staying as the top end only memory solution being used on what today are 384-bit and 512-bit cards, with gddr5x taking the spot of what today is gddr5 128-bit to 256-bit cards.
  • BurntMyBacon - Wednesday, March 30, 2016 - link

    @Khenglish: "I honestly don't see the hbm costs ever coming down much. It will always need an interposer. The interposer is a slab of silicon with just the interconnect stack and no logic. This is something fabs already know how to make very well, so if it's expensive to make now, it will stay expensive."

    Five points of interest. 1) The interposer size is currently at the max limit that the reticle can handle on the current fabrication process. 2) There isn't a large demand for it yet so economies of scale haven't kicked in. 3) They are still looking for return on investment to cover research costs. 4) High end "premium" items often carry an extra "premium tax" that doesn't follow it to commodity items. 5) Smaller chips can use smaller interposers.

    I think there is room in there for the price to drop some. A little more of the price will be hidden by the savings from less complicated board layout and fabrication. The rest of the cost will need to be justified by performance. Cost may never get low enough for the lowest end discrete cards, but it is also uncertain whether there will be much of a market for sub-$100 cards for much longer given IGP progression. What market remains will not likely be looking for the bandwidth of HBM anyways. The bigger question on my mind is whether (or how soon) HBM will become cost effective for the mainstream market.
  • Azix - Wednesday, March 30, 2016 - link

    wasn't the interposer supposed to cost something like $4?
  • beginner99 - Wednesday, March 30, 2016 - link

    Yeah, in that area. Of course it depends on the exact size. But it's far from $30; much cheaper.
  • ltcommanderdata - Tuesday, March 29, 2016 - link

    GDDR5X seems like a good candidate for the PS4K to increase bandwidth without widening the bus.
  • Lolimaster - Tuesday, March 29, 2016 - link

    PS4 needs GPU power, not only bandwidth, to deliver 4K properly.
  • andrewaggb - Wednesday, March 30, 2016 - link

    I think gddr5x+14nm die shrink+more gcn cores might allow the PS4 to run games in 1080p at 60 fps. You could probably run simple games in 4k, and certainly the user interface and netflix and whatnot in 4k.

    I think gaming in 4k on a console is years away.
  • III-V - Tuesday, March 29, 2016 - link

    The efficiency boost is impressive. It's a shame this didn't come to market sooner, but it'll be great to fill the gap between DDR and HBM-equipped GPUs.
  • haukionkannel - Tuesday, March 29, 2016 - link

    So low end will use ddr as before.
    Middle range will use gddr5 as before
    highend may use gddr5 or gddr5+
    superhighend may use gddr5, gddr5+ or HBM...
  • DanNeely - Tuesday, March 29, 2016 - link

    Except on rebadged cards (where sadly GDDR5 may linger for a few years), GDDR5X will probably displace GDDR5 over a single generation; it's very close to being a drop-in replacement, and the higher bandwidth per chip should allow midrange cards to drop in price due to the simpler PCBs allowed by narrower buses.
