This week Samsung announced that it has started mass production of its GDDR6 memory chips for next-generation graphics cards and other applications. The new chips will be available in 16 Gb densities and will feature an interface speed significantly higher than what the fastest GDDR5 and GDDR5X ICs can offer.

GDDR6 is a next-generation specialized DRAM standard that will be supported by all three leading memory makers. Over time, the industry will introduce a great variety of GDDR6 ICs for different applications, performance levels, and price points. What Samsung is announcing this week is its first 16 Gb GDDR6 IC, which features an 18 Gbps per-pin data transfer rate and offers up to 72 GB/s of bandwidth per chip. A 256-bit memory subsystem built from such DRAMs will have a combined memory bandwidth of 576 GB/s, whereas a 384-bit memory subsystem will hit 864 GB/s, outperforming existing HBM2-based 1.7 Gbps/3072-bit memory subsystems that offer up to 652.8 GB/s. The added expense with GDDR6 will be in the power budget, much like current GDDR5/5X technology.

GPU Memory Math: GDDR6 vs. HBM2 vs. GDDR5X

| | Theoretical GDDR6 256-bit sub-system | Theoretical GDDR6 384-bit sub-system | NVIDIA Titan V (HBM2) | NVIDIA Titan Xp | NVIDIA GeForce GTX 1080 Ti | NVIDIA GeForce GTX 1080 |
|---|---|---|---|---|---|---|
| Total Capacity | 16 GB | 24 GB | 12 GB | 12 GB | 11 GB | 8 GB |
| B/W Per Pin | 18 Gb/s | 18 Gb/s | 1.7 Gb/s | 11.4 Gb/s | 11 Gb/s | 11 Gb/s |
| Chip Capacity | 2 GB (16 Gb) | 2 GB (16 Gb) | 4 GB (32 Gb) | 1 GB (8 Gb) | 1 GB (8 Gb) | 1 GB (8 Gb) |
| No. Chips/KGSDs | 8 | 12 | 3 | 12 | 11 | 8 |
| B/W Per Chip/Stack | 72 GB/s | 72 GB/s | 217.6 GB/s | 45.6 GB/s | 44 GB/s | 44 GB/s |
| Bus Width | 256-bit | 384-bit | 3072-bit | 384-bit | 352-bit | 256-bit |
| Total B/W | 576 GB/s | 864 GB/s | 652.8 GB/s | 547.7 GB/s | 484 GB/s | 352 GB/s |
| DRAM Voltage | 1.35 V | 1.35 V | 1.2 V (?) | 1.35 V | 1.35 V | 1.35 V |
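For those who want to verify the figures above, the arithmetic is straightforward: peak theoretical bandwidth is the per-pin data rate multiplied by the bus width, divided by eight bits per byte. A minimal Python sketch of the calculation (the names and numbers simply mirror the table):

```python
# Peak theoretical bandwidth = per-pin data rate (Gb/s) * bus width (bits) / 8.
# Names and figures below mirror the table above.

def total_bandwidth(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak theoretical memory bandwidth in GB/s."""
    return pin_rate_gbps * bus_width_bits / 8

configs = {
    "Theoretical GDDR6 256-bit":  (18.0, 256),   # 576.0 GB/s
    "Theoretical GDDR6 384-bit":  (18.0, 384),   # 864.0 GB/s
    "NVIDIA Titan V (HBM2)":      (1.7, 3072),   # 652.8 GB/s
    "NVIDIA GeForce GTX 1080 Ti": (11.0, 352),   # 484.0 GB/s
}

for name, (rate, width) in configs.items():
    print(f"{name}: {total_bandwidth(rate, width):.1f} GB/s")
```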

The new GDDR6 architecture enables Samsung to support new, higher data transfer rates with non-esoteric memory form factors. To increase the interface speed, GDDR6 memory was redesigned both internally and externally. While details about the new standard will be covered in a separate article, two key things about the new memory tech are that GDDR6 features a x8/x16 per-channel I/O configuration, and that each chip now has two channels. By contrast, GDDR5/GDDR5X ICs feature a x16/x32 I/O config and one channel per chip. So while GDDR6 chips physically continue to feature a 16-/32-bit wide bus, it works differently than in prior generations, as it is split into two channels.
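As a rough illustration of that reorganization, the sketch below models a chip as a set of channels. The 8 Gbps GDDR5 data rate used here is just an example figure for comparison, not something from Samsung's announcement:

```python
from dataclasses import dataclass

@dataclass
class DramChip:
    channels: int          # independent channels per chip
    channel_width: int     # I/O pins per channel
    pin_rate_gbps: float   # data rate per pin in Gb/s

    @property
    def bus_width(self) -> int:
        # Physical bus width of the chip: channels * pins per channel
        return self.channels * self.channel_width

    @property
    def chip_bandwidth(self) -> float:
        # Peak per-chip bandwidth in GB/s
        return self.pin_rate_gbps * self.bus_width / 8

# GDDR5: one x32 channel per chip (8 Gbps is an example data rate).
gddr5 = DramChip(channels=1, channel_width=32, pin_rate_gbps=8.0)
# GDDR6: two independent x16 channels over the same 32-bit physical bus.
gddr6 = DramChip(channels=2, channel_width=16, pin_rate_gbps=18.0)

print(gddr5.bus_width, gddr5.chip_bandwidth)  # 32 32.0
print(gddr6.bus_width, gddr6.chip_bandwidth)  # 32 72.0
```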

In addition to higher performance, Samsung’s GDDR6 16 Gb chips operate at 1.35 V, down roughly 13% from the 1.55 V required by high-performance GDDR5 ICs (e.g., 9 Gbps, 10 Gbps, etc.). According to Samsung, the lowered voltage enables it to reduce the energy consumption of GDDR6 components by 35% compared to ultra-fast GDDR5 chips, an improvement it attributes to its new low-power circuit design. Meanwhile, based on what we know from Micron and SK Hynix, their GDDR6 DRAMs will also operate at 1.35 V.
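A quick first-order estimate shows why the voltage drop alone does not account for the full 35%: CMOS dynamic power scales roughly with the square of voltage, which yields about a 24% saving, with the low-power circuit design presumably contributing the rest. This is a back-of-the-envelope approximation, not Samsung's own breakdown:

```python
# First-order CMOS estimate: dynamic power scales roughly with voltage squared.
v_gddr5 = 1.55  # V, high-performance GDDR5
v_gddr6 = 1.35  # V, Samsung's GDDR6

voltage_only_savings = 1 - (v_gddr6 / v_gddr5) ** 2
print(f"Savings from the voltage drop alone: ~{voltage_only_savings:.0%}")  # ~24%
# Samsung quotes ~35% lower energy consumption, so the remaining gap would
# come from its new low-power circuit design and other changes.
```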

Samsung uses one of its 10 nm-class process technologies to produce its GDDR6 components. The company claims that its 16 Gb ICs bring about a 30% manufacturing productivity gain compared to its 8 Gb GDDR5 chips made using its 20 nm process technology. For Samsung, a productivity gain typically means an increase in the number of chips per wafer, so the company has managed to make its 16 Gb ICs smaller than its previous-gen 8 Gb ICs. The company does not elaborate on how it achieved this, but it looks like the new chips are not only made using a finer process technology, but also have other advantages over their predecessors, such as a new DRAM cell structure or an optimized architecture.
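To see why a chips-per-wafer gain implies a smaller die, consider the standard dies-per-wafer approximation. The die areas below are hypothetical placeholders, since Samsung has not disclosed actual die sizes; they are chosen only to show how a modest shrink translates into roughly 30% more chips per 300 mm wafer:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Standard approximation: gross dies on a round wafer, minus edge loss."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

# Hypothetical die areas (Samsung has not disclosed real figures):
old_8gb_die_mm2 = 60.0   # 8 Gb GDDR5 on 20 nm
new_16gb_die_mm2 = 46.0  # 16 Gb GDDR6 on 10 nm-class

old_count = dies_per_wafer(old_8gb_die_mm2)   # ~1092 chips per wafer
new_count = dies_per_wafer(new_16gb_die_mm2)  # ~1438 chips per wafer
print(f"Chips-per-wafer gain: ~{new_count / old_count - 1:.0%}")  # ~32%
```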

Samsung’s 16 Gb GDDR6 chips come in FBGA180 packages, just like all industry-standard GDDR6 memory components from other manufacturers.

Samsung did not disclose when it plans to ship its GDDR6 DRAMs commercially, but since it has already started mass production, it is highly likely that the company’s clients are ready to build products featuring the new memory.

Source: Samsung


27 Comments


  • PerterLustig - Thursday, January 18, 2018 - link

    Happy to hear about Samsung officially mass producing 16Gb density ICs of GDDR6 :) So Volta GV104 could be announced at Computex this year.

    Question is how many ICs Jensen will put on the "castrated" GV104 (XX70)?
    Since 64 ROPs are a given for the GV104, my bet is one IC disabled and therefore 14 GB VRAM with a 224-bit SI (56 ROPs).

    Good times ahead fellas!
  • PeachNCream - Thursday, January 18, 2018 - link

    Good times until the *coin miners drive GPU prices well above the already crazy MSRP and we end up in shortage situations like we're in right now.
  • PerterLustig - Thursday, January 18, 2018 - link

    The price climb in Europe is maybe 5%. I paid 780€ at release for the GTX 1080 Ti and the price is now around 750€, so I struggle to see the problem.

    It's basically FUD and I don't care.

    You take Pascal launch prices and you get the "next-gen" prices: not cheap, but far from a rip-off.
  • PeachNCream - Thursday, January 18, 2018 - link

    Prices are not a mere 5% above MSRP here in the US. For example, the MSRP for a 1080 is $599 and the current retail price on Amazon is $750 to $1,000+. Those price trends extend down the Pascal product stack all the way to the 1060, so if a buyer here is interested in a new GPU, the 1050 is the highest-end model that sells close to MSRP.
  • plopke - Thursday, January 18, 2018 - link

    I am not sure where to begin if you think there is no market shortage on some particular cards in Europe. And to top it off you say it is all FUD and you do not care... Some prices are slowly becoming stable again. And the general rule was/is: the higher end you buy, the less you see of the mining hype.

    So yes, the 1080 Ti has been immune to most of the miner hype, but that is like living in your own bubble and projecting it onto the entire market... very naive.
  • npz - Thursday, January 18, 2018 - link

    The prices swing WILDLY. I bought a 1080 Ti for right around $700. And in November 2017 I bought some Vega 64s for a mere $539. But PeachNCream is right: currently, at least in the US, since all of the stock was depleted during the Black Friday to Dec 2017 season, prices have absolutely skyrocketed. I don't think it was miners either. Gamers obviously took advantage of good prices while they lasted at the end of last year.
  • PeachNCream - Thursday, January 18, 2018 - link

    I was being a *tiny* bit tongue-in-cheek when I accused miners of being the only reason for GPU shortages and the currently nutty prices. They're kind of an easy group to scapegoat, but yeah, there are gamers who also helped deplete the supply.
  • Manch - Friday, January 19, 2018 - link

    Depends on the market as far as actual fluctuation goes. AMD cards tend to be more popular for mining, especially here in Europe, and are also way better for Ethereum, which is one of the more popular coins atm. They're far more power efficient in that regard and your ROI period is much shorter. In the States, where electricity is about 1/6th the cost of Germany's and about 1/3 the cost of the UK's/France's, anything that puts out a decent hash rate will be used if one can't get their hands on the other. So while it may be a paltry 5% for you, it's a much larger bump over MSRP for others.
  • Opencg - Saturday, January 20, 2018 - link

    You also have to remember that the price/performance ratio is not progressing as much now that chip makers are hitting hard obstacles in reducing transistor size. Yes, new cards are offering more performance, but they are also costing more to make. Expect top-end CPU and GPU prices to continue to rise. Eventually only architecture improvements will offer more performance for the price. And the market is clearly showing that people are willing to pay more for more performance, so there is little pressure to drop prices on top-end cards. The good news is that it helps pay for further research that could make things better. The bad news is that without some unforeseen breakthroughs we are looking at a future of monopolies (since architecture becomes the main route to improvement), and most consumers will stop seeing benefits to upgrading unless they wait longer or are forced to by artificial obsolescence.
  • boeush - Thursday, January 18, 2018 - link

    "so the company has managed to make its 16 Gb ICs smaller than its previous-gen 8 Gb ICs. The company does not elaborate on its achievement, but it looks like the new chips are not only made using a thinner process technology, but have other advantages over predecessors, such as a new DRAM cell structure, or an optimized architecture."


    Maybe my math is naive or somehow wrong, but to a first approximation, going from a 20 nm-class process down to a 10 nm-class one increases the number of circuit elements per unit area by roughly a factor of 4.

    So if the density went up by 4x, but the memory capacity only went from 8 Gb to 16 Gb (2x), then why wouldn't we expect roughly 2x the number of dies per wafer even with no other changes?
