After 11 years of partnership, SanDisk and Toshiba's timing could not have been better. Just seven days after losing the NAND process-node crown to Intel and Micron (IMFT), they announced that 19nm NAND samples are rolling off their fabs as we speak. This one-upmanship is normal for the industry, but the announcement coming so soon after IMFT's 20nm announcement was a surprise to many. Like IMFT, they expect to begin mass production during the second half of this year, with their latest NAND finding homes in the next wave of tablets, smartphones, and SSDs.

Toshiba NAND on display on the Kingston SSDNow V+100
 
Anand and I had the opportunity to speak with representatives from Toshiba and SanDisk, who sounded quite optimistic about the competitiveness of their 19nm products. Like IMFT, they expect endurance similar to that of their 24nm products, around 3,000 program/erase cycles, thanks to ever-improving ECC and wear-leveling algorithms that waste as few p/e cycles as possible. SanDisk and Toshiba did not publish a die size for this iteration, but did state a decrease of approximately 25%. Their 24nm die size was published as 151mm², so this iteration should come in at about 113mm², besting IMFT's 118mm² at 20nm. Page and block sizes remain at their 24nm levels of 8KB per page and 256 pages per block.
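As a sanity check on those figures, the stated ~25% shrink can be applied to the published 24nm die size. The 19nm number below is our extrapolation, not an official figure:

```python
# Extrapolate the 19nm die size from the published 24nm figure and the
# stated ~25% area reduction. The result is an estimate, not a spec.
die_24nm_mm2 = 151.0   # published 24nm die size
shrink = 0.25          # "approximately 25%" decrease per SanDisk/Toshiba

die_19nm_mm2 = die_24nm_mm2 * (1 - shrink)
print(f"Estimated 19nm die size: {die_19nm_mm2:.1f} mm^2")  # ~113.2 mm^2

# For comparison, IMFT's 20nm die is 118 mm^2
assert die_19nm_mm2 < 118.0
```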
 
While the benefits for SSDs are the main focus of our NAND coverage, consumer electronics are where the vast majority of readers interact with NAND products. Flash memory is a key component of smartphones, tablets, media players, and set-top boxes, in addition to USB drives and memory cards. Different calibers of NAND go into these products: the fastest (SLC) into enterprise-class SSDs, mainstream NAND (2-bit MLC) into consumer and enthusiast SSDs, phones, and tablets, and slower parts (3-bit TLC and 4-bit 16LC) into products like jump drives and SD cards. Drawing from Anand's SSD Anthology, we recall that the functional difference between SLC and MLC is the amount of data stored per NAND cell: 1 bit or 2 bits. Adding that extra bit to SLC to make MLC decreases speed and reliability, and the same holds true in producing 3-bit NAND. With controllers and ECC algorithms improving, it is likely that 3-bit NAND produced in this venture will end up in more consumer electronics devices, perhaps not your next SSD, but certainly some tablets and smartphones (note that there are 3-bit SSDs floating around in beta form). The benefit for consumers is larger capacity with little price increase and relatively little impact on performance and reliability, since consumer electronics devices rarely tax the limits of NAND speed and are retired long before even a conservative estimate of how long the NAND inside should last.
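The capacity math behind those cell types is straightforward: for a fixed number of cells, capacity scales linearly with bits per cell, which is why TLC is attractive despite its speed and endurance penalties. A quick sketch, using a purely illustrative cell count:

```python
# For a fixed cell count, die capacity scales linearly with bits per cell.
# The cell count below is illustrative, not any real die's figure.
CELLS = 2**33  # 8 Gi cells (hypothetical)

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("16LC", 4)]:
    gib = CELLS * bits / 8 / 2**30  # total bits -> bytes -> GiB
    print(f"{name}: {bits} bit(s)/cell -> {gib:.0f} GiB")
# SLC: 1 GiB, MLC: 2 GiB, TLC: 3 GiB, 16LC: 4 GiB
```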

 

 
Within the consumer electronics space, however, NAND can be configured in various ways. Above is a block diagram of your average SSD: the SATA interface leads to the controller, which uses DRAM to coordinate operations and reads/writes to/from the supply of MLC NAND. Some consumer electronics devices use this exact configuration, as in the Acer Iconia W500 Windows tablet. Other devices use a similar configuration in an embedded format, with controller and NAND in a single package soldered to the device's PCB. Still others use raw NAND and the device's own silicon to handle reads and writes, as in all iOS devices. In nearly all of these configurations, volatile memory (DRAM) is used as a cache for pending operations (Intel and SandForce excluded; they cache in SRAM), which means that a sudden loss of power could lose data that was waiting to be written to NAND. SanDisk-Toshiba NAND has learned a new trick, apparently present even in the currently shipping 24nm products, that eliminates this risk while also adding a few new options for OEMs.
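The power-loss risk a volatile write cache introduces can be illustrated with a toy model. The class and sizes below are entirely hypothetical, not any real firmware:

```python
# Toy model of a volatile write cache: writes land in DRAM first and are
# flushed to NAND later. A power cut before the flush loses whatever was
# still buffered. Names are illustrative only.
class CachedDrive:
    def __init__(self):
        self.nand = {}   # durable storage (survives power loss)
        self.dram = {}   # volatile write cache

    def write(self, lba, data):
        self.dram[lba] = data        # fast: buffered in DRAM

    def flush(self):
        self.nand.update(self.dram)  # slow: committed to NAND
        self.dram.clear()

    def power_loss(self):
        self.dram.clear()            # volatile contents vanish

drive = CachedDrive()
drive.write(0, b"committed")
drive.flush()
drive.write(1, b"in flight")
drive.power_loss()          # power cut before the second flush
print(sorted(drive.nand))   # -> [0]: only the flushed write survived
```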
 

 
In the block diagram above, gleaned from SanDisk marketing and discussions, DRAM is abandoned in favor of designating a portion of NAND as cache. Data is stored here initially, to be written to MLC during time the controller would otherwise be idle. This area designated as cache isn't just regular MLC NAND, however; it's what was described to us as "pseudo-SLC": NAND treated at the hardware level like SLC but with the narrow voltage deltas of MLC. The benefits are obvious: an increase in speed and reliability, the former of which should mitigate any speed lost from not using DRAM. Further, it seems this pseudo-SLC feature can fill other roles. In its iSSD products, intended for larger consumer electronics devices, SanDisk uses pseudo-SLC NAND as in the diagram above. A conversation with Toshiba, however, revealed that other OEMs will use pseudo-SLC as speedy, reliable storage for firmware (OS) and system files. The technology is manipulated at the block level, and with each block's capacity halved by treating it as SLC, OEMs have a 1MB granularity with which to designate this SLC space for use as cache or as an OS partition.
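That 1MB granularity follows directly from the published geometry: 8KB pages times 256 pages gives a 2MB MLC block, and halving its capacity for SLC treatment yields 1MB per block. A back-of-the-envelope check (the 512MB cache size at the end is a hypothetical example, not a quoted spec):

```python
# Derive the pseudo-SLC granularity from the published geometry:
# an MLC block is 8 KB/page * 256 pages = 2 MB; treating it as SLC
# halves its capacity to 1 MB, matching the 1MB granularity quoted to us.
PAGE_KB = 8
PAGES_PER_BLOCK = 256

mlc_block_mb = PAGE_KB * PAGES_PER_BLOCK / 1024   # 2.0 MB
slc_block_mb = mlc_block_mb / 2                   # 1.0 MB as pseudo-SLC
print(mlc_block_mb, slc_block_mb)                 # -> 2.0 1.0

# e.g. a hypothetical 512MB pseudo-SLC cache would consume 512 blocks
cache_mb = 512
print(cache_mb / slc_block_mb, "blocks")          # -> 512.0 blocks
```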
 

 
In their respective press releases, SanDisk and Toshiba both highlight the smartphone and tablet spaces as motivating factors in their drive toward smaller process nodes. Indeed, this has been the focus of much of their marketing of late. Their innovations in ECC and this pseudo-SLC technology are tailor-made for this space, and though no specific devices were named, we are no doubt benefiting from their implementation already.

The iPad 2 Mainboard featuring Toshiba NAND, courtesy of iFixit
 
There are costs to this approach, the main one being . . . well, cost. DRAM is cheap relative to NAND, so sacrificing several blocks of NAND is more expensive than adding 512MB of RAM. Further, though faster than MLC, writes to SLC NAND are slower than they would be to DRAM, so a performance penalty is paid. With cost and speed the driving factors in the consumer SSD space, SanDisk-Toshiba's new trick seems ill-suited to the PC market. Enterprise-class SSDs could benefit from the data resilience, but those deployments typically have power redundancies that give their SSDs enough time to complete any pending writes before shutting down. Only in the consumer electronics space is there a strong and growing market for just-fast-enough, incredibly reliable NAND.

35 Comments


  • marc1000 - Thursday, May 05, 2011 - link

    the point is: toshiba already has 18nm in the works. Intel just announced 22nm, but as this news shows, they are NOT "the only ball on the park".
  • sheh - Thursday, May 05, 2011 - link

    CPUs and flash are different things.
  • Iketh - Thursday, May 05, 2011 - link

    IN the park... really, you should never post again
  • jjj - Thursday, May 05, 2011 - link

    do they rotate the space designated as cache? otherwise i imagine the cache would die rather fast.
    also any idea if the size of the "cache" can be dynamically adjusted by the OS?
  • JasonInofuentes - Thursday, May 05, 2011 - link

    We asked just these questions when speaking with the reps. To cover the longevity issue, remember that the pseudo-SLC cache really is a lot like SLC, so in addition to its speed, its reliability goes up significantly; nothing specific was quoted, but certainly something approaching an order of magnitude (closer to 30,000 than 3,000 p/e cycles). The physical blocks designated pseudo-SLC remain so unless rewritten by software.

    Further, the best-case scenario in terms of reliability is that it is used as a firmware partition, in which case it will go through very few cycles over the life of the device. The worst case is that it is used as cache, and even then, since its primary role is as a random write cache, assuming a 512MB cache it would still take an awful lot of random data written (in a typical use scenario) before writing to the entire cache once, let alone 30,000 times.

    Based on our discussions, the size of the cache is determined at the controller level, so while the OS must be aware of this arrangement it is not in control of it. Great questions, keep 'em coming. If we don't know, we'll ask!
  • jjj - Thursday, May 05, 2011 - link

    Thanks for the reply.
    Any chance they are willing to talk about BiCS and 3D Read/Write?
  • FunBunny2 - Thursday, May 05, 2011 - link

    Since the post is about "different kind of NAND", have you gotten any information on what Unity Semiconductor was/is doing? Or any other "better than NAND" replacement?
  • JasonInofuentes - Thursday, May 05, 2011 - link

    @jjj and @FunBunny2, We will get back to you on those questions. Thanks for the replies.
  • sheh - Thursday, May 05, 2011 - link

    "...they expect to see similar endurance as their 24nm products, around 3,000 program/erase cycles. This is owing to ever improving ECC and wear leveling algorithms..."

    Aren't P/E cycles the raw device value, with wear leveling and error correction coming afterwards?
  • tygrus - Friday, May 06, 2011 - link

    Are they using the same MLC cells but treating them like SLC in firmware? Write '1' as '11' (actually the default after erase) and '0' as '00', then read back the '11' and '10' of an MLC cell as a '1' and the '01' and '00' as a '0'. That allows the tight tolerances of MLC to be relaxed, so the two extremes can still be read even after the signal has drifted toward the middle. What would normally fail as MLC could still be read correctly.

    I suppose it's more flexible than adding SLC cells and trying to identify them to the controller or adding SLC chips to the array of channels which might limit bandwidth and cost more.

    Maybe they could put faster/wider SLC (but fewer blocks) in the controller to use as a write cache to combine writes before erasing an MLC block.

    They could also slow down the erase (lower voltage but takes longer) to reduce wear and electron migration/burying, so as to increase reliability and the number of erase cycles a block can last. With enough over-provisioning, garbage collection (cleared blocks ready for use), and TRIM to keep more free blocks, you can mitigate the slow erase. Sustained sequential/random write would be a smaller percentage of the old peak, e.g. 20-25% of peak write, 50% of old sustained write.
