Samsung has introduced the industry’s first memory compliant with the HBM2E specification. The company’s new Flashbolt memory stacks increase per-pin performance by 33% and double both per-die and per-package capacity. Samsung introduced its HBM2E DRAMs at GTC, a fitting venue since NVIDIA is one of the biggest HBM2 consumers thanks to its popular GV100 processor.

Samsung’s Flashbolt KGSDs (known good stacked dies) are based on eight 16-Gb memory dies interconnected using TSVs (through-silicon vias) in an 8-Hi stack configuration, for 16 GB per stack. Every Flashbolt package features a 1024-bit bus with a 3.2 Gb/s data rate per pin, thus offering up to 410 GB/s of bandwidth per KGSD.

Samsung positions its Flashbolt KGSDs for next-gen datacenter, HPC, AI/ML, and graphics applications. By pairing four Flashbolt stacks with a processor featuring a 4096-bit memory interface, developers can get 64 GB of memory with 1.64 TB/s of peak bandwidth, a great advantage for capacity- and bandwidth-hungry chips. With two KGSDs they get 32 GB of DRAM with 820 GB/s of peak bandwidth.
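The peak-bandwidth figures quoted above follow directly from the bus width and per-pin data rate. A minimal sketch of that arithmetic (the helper function below is illustrative, not anything Samsung publishes):

```python
def hbm_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: bus width (bits) x per-pin rate (Gb/s) / 8 bits per byte."""
    return bus_width_bits * pin_rate_gbps / 8

per_stack = hbm_bandwidth_gbs(1024, 3.2)  # one Flashbolt KGSD: ~410 GB/s
four_stacks = per_stack * 4               # 4096-bit interface: ~1.64 TB/s
two_stacks = per_stack * 2                # 2048-bit interface: ~820 GB/s

print(per_stack, four_stacks, two_stacks)
```

The same formula reproduces the per-stack numbers in the table below (e.g. 1024 bits × 2.4 Gb/s ÷ 8 = 307.2 GB/s for Aquabolt).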

Samsung's HBM2 Memory Comparison

                              Flashbolt    Aquabolt     Flarebolt
Total Capacity                16 GB        8 GB         8 GB        4 GB        8 GB        4 GB
Bandwidth Per Pin             3.2 Gb/s     2.4 Gb/s     2 Gb/s      2 Gb/s      1.6 Gb/s    1.6 Gb/s
DRAM ICs per Stack            8            8            8           4           8           4
DRAM IC Process Technology    ?            20 nm        20 nm       20 nm       20 nm       20 nm
Effective Bus Width           1024-bit     1024-bit     1024-bit    1024-bit    1024-bit    1024-bit
Voltage                       ?            1.2 V        1.35 V      1.35 V      1.2 V       1.2 V
Bandwidth per Stack           410 GB/s     307.2 GB/s   256 GB/s    256 GB/s    204.8 GB/s  204.8 GB/s

To increase the DRAM transfer speed per pin to 3.2 Gb/s, Samsung probably had to employ various methods to reduce clock interference between the 5,000+ TSVs and ensure clean signaling, yet the company does not discuss this in its current announcement. Last year the company did disclose some of the tricks its Aquabolt HBM2 DRAMs use to reach 2.4 Gb/s per pin, so most of those methods have likely evolved further in the case of Flashbolt.

In fact, Samsung’s announcement does not state that the company has started mass production of its Flashbolt HBM2E memory, so it looks like the company has finished development of the technology but is not yet ready to ship such chips in mass quantities.

Source: Samsung

25 Comments

  • Ryan Smith - Wednesday, March 20, 2019 - link

    The previous 2.4gbps/pin "Aquabolt" and 2.0gbps/pin "Flarebolt" HBM2 could get you over 1TB/sec in a 4 stack configuration.

    AMD's Radeon Instinct MI60 in fact does just that, its 2gbps memory clock giving it 1TB/sec on the dot.
    Reply
  • SaturnusDK - Wednesday, March 20, 2019 - link

    Same as Radeon VII so we already have it on a gaming GPU as well. Reply
  • ksec - Thursday, March 21, 2019 - link

    Thanks. Aren't really keeping up with these tech as they are mostly out of my budget :/ Reply
  • TheJian - Wednesday, March 27, 2019 - link

    For the love of GOD AMD (and my stock price...ROFL), DO NOT PUT THIS ON A CONSUMER CARD! You will blow your income (NET) yet again. Unless you can explain how it makes card X FASTER, and PROVE IT with REAL benchmarks smoking NVDA, don't waste your time killing your new cards. So far HBM has been absolutely useless to gamers and has killed every card AMD put it on for gamers. Quit doing it. HBM has its use cases, but gaming isn't one of them. It only raises prices, and kills margins, oh and usually causes shortages too thus killing any chance of your new card's profit anyway. If you can't sell it (because you can't make enough of it...LOL), you can't profit from it. This is not rocket science here. Leave HBM to cards well over $1000 (just a reference here, you get the point, $1299+?). Reply
  • mdriftmeyer - Thursday, August 08, 2019 - link

    Boy these comments about just another Vega variant didn't age well, and yes big Navi will soon be upon us and blowing the doors off of Vega/Vega II, not to mention Nvidia.

    How's that ROME love so far? Intel's getting spanked.
    Reply