The next generation of IBM's FlashSystem storage appliances will adopt a new architecture, replacing capacitor-backed DRAM write caches with magnetoresistive RAM (MRAM). MRAM is among the fastest and highest-endurance nonvolatile memory technologies currently available, but its density is severely limited compared to NAND flash memory or Intel's 3D XPoint memory. Until recently, most applications of MRAM were in embedded systems, where it could replace small flash chips or battery-backed DRAM and SRAM buffers.

Everspin, the only current supplier of discrete MRAM chips, has been pushing capacities higher: 256Mb chips are available now, and 1Gb chips will begin sampling by the end of this year. This increase is enabled by Everspin's migration from in-house manufacturing on wafers fabbed at Freescale to a partnership with GlobalFoundries, which makes the MRAM on its 22nm FD-SOI process. These capacity increases have made MRAM more attractive for systems that handle a high volume of data, even though the chips remain far too small to serve as primary storage.

IBM's existing FlashSystem appliances have used custom-form-factor SSDs with FPGA-based controllers and a system-wide power loss protection design. The new system switches to a standard 2.5" U.2 drive form factor, which requires implementing power loss protection on a per-drive basis. IBM found it impractical to include enough supercapacitors to keep the FPGA controller running long enough to flush a DRAM write cache, but the inherent nonvolatility of MRAM eliminates the need for bulky supercaps altogether.

The new SSDs for IBM's FlashSystem offer a usable storage capacity of up to 19.2TB of 64-layer 3D TLC NAND flash memory, making them some of the highest-capacity TLC-based drives. The drives use a 20-channel NAND interface and a four-lane PCIe 4.0 host interface that can operate in dual-port 2+2 mode. IBM's controller also provides optional transparent compression with a typical 3:1 compression ratio, plus FIPS 140-compliant encryption. The new FlashSystem drives will be on display this week at Flash Memory Summit and will begin shipping to customers this month.
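As a rough back-of-envelope, if the 19.2TB figure is usable physical capacity before compression is applied (the article does not specify, so treat this as an assumption), the effective capacity at the typical ratio works out as follows:

```python
# Back-of-envelope effective capacity for a transparently compressing SSD.
# Assumption (not confirmed above): 19.2TB is usable physical capacity
# *before* compression; actual ratios vary with workload compressibility.
usable_tb = 19.2
compression_ratio = 3.0  # "typical 3:1" per IBM

effective_tb = usable_tb * compression_ratio
print(f"Effective capacity at {compression_ratio:.0f}:1 -> {effective_tb:.1f} TB")
# -> Effective capacity at 3:1 -> 57.6 TB
```

If instead 19.2TB were the post-compression figure, the physical flash onboard would be roughly a third of that; only the drive datasheet can settle which reading is correct.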


15 Comments


  • a1exh - Wednesday, August 08, 2018 - link

    That was exactly what I was thinking. A traditional map table for a 20TB drive would take approx 32GB of DRAM.
  • abufrejoval - Wednesday, August 08, 2018 - link

    But would you store a map table in MRAM? AFAIK you can reconstruct the map table from the on-flash structures: yes, it would take time, but enterprise hardware typically doesn't get powered off until it's going to the knackers, and for the rare power-fail event the reconstruction time may not be critical enough to justify keeping it there. I would have thought that MRAM here is used for crucial in-transit data and the base data structures required to reconstruct any map.

    That's one more reason I don't think this use case is an ideal demonstrator for MRAM, which to my knowledge tried to focus more on embedded SRAM replacement in logic, where complex logic state, say in a massive DSP or SoC, could be persisted very quickly, so sleep/wakeup times could be minimized and the energy expense of the transition reduced, lowering the cost of sleeping. That's IoT, ultra-embedded etc., but not high-end ent
  • abufrejoval - Wednesday, August 08, 2018 - link

    erprise, where juice is rarely such a determining factor (hit the wrong key, want edit!)
  • phoenix_rizzen - Wednesday, August 08, 2018 - link

    Is that 19 TB of physical storage space, with 3x compression on top, for a theoretical storage capability of 57 TB?

    Or is that 19 TB after 3x compression, meaning there's a little less than 7 TB of physical storage onboard?

    The text is a little confusing.
  • apriest - Wednesday, August 08, 2018 - link

    I was wondering the same! It reads more like a tape drive label. :-P
