Total Bytes Written & Spare Area

90K p/e cycles seems a bit high, and I can't find any Intel documentation that actually quotes that number; it's just what I heard at the 710 briefing in San Francisco. Luckily Intel has another metric it likes to use: total bytes written.

You don't get TBW for client drives, but for enterprise drives Intel will tell you exactly how many terabytes or petabytes of random 4KB or 8KB data you can write to the drive. These values are "up to" figures, of course, as actual lifespan will depend on the specific workload.

Intel SSD Endurance Comparison

                     X25-E 32GB   X25-E 64GB   710 100GB   710 200GB   710 300GB
4KB Random Writes    1.0 PB       2.0 PB       500 TB      1.0 PB      1.1 PB
w/ +20% Spare Area   -            -            900 TB      1.5 PB      1.5 PB

Doing the math, these values work out to about 5K writes per cell (~5243); however, that's assuming no write amplification. Performing 100% random writes across all LBAs for a full petabyte of data is going to generate some serious write amplification. The controller in the 710 tends to see write amplification of around 12x for 4KB random writes, which would put the rated cycle count at just under 63,000.
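
If you want to sanity-check that math, here's a quick back-of-the-envelope sketch. It assumes the 1.0 PB rating is counted in binary units (2^50 bytes), which is the assumption that produces the ~5243 figure quoted above:

```python
# Implied p/e cycles from the 200GB 710's rated 1.0 PB of 4KB random writes.
# Assumes binary units and the ~12x write amplification discussed above.
rated_writes = 2**50                 # 1.0 PB rating, in bytes
user_capacity = 200 * 2**30          # 200GB of user-addressable space, in bytes
write_amplification = 12             # typical for 4KB random writes on the 710

cycles_no_wa = rated_writes / user_capacity
cycles_with_wa = cycles_no_wa * write_amplification

print(round(cycles_no_wa))      # ~5243 writes per cell, ignoring write amplification
print(round(cycles_with_wa))    # ~62915, i.e. just under 63K rated cycles
```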

There's just one problem. The 200GB 710 I'm basing these calculations on doesn't actually have 200GB of NAND on-board; it has 320GB.

Opening up the 710 that Intel sent me, I found a total of 20 NAND packages on-board. This isn't surprising, as Intel's controllers have always supported 10 parallel NAND channels; in this case the 710 uses two packages per channel and interleaves requests to them. Each NAND package, however, has 128Gbit (16GB) of NAND inside in the form of 2 x 8GB 25nm MLC-HET die. Multiply all of that out and you get 320GB of NAND inside this 200GB drive.

Of course 200GB is defined as 200,000,000,000 bytes, so actual binary storage capacity is 186.3GiB. This is absolutely insane: over 41% of the NAND on the 710's PCB is set aside as spare area. We have never reviewed an SSD with anywhere near this much spare area before.
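
A rough sketch of that math, using the package count from the teardown above and the 200 billion byte user capacity:

```python
# Raw NAND vs. user-visible capacity on the 200GB 710.
packages = 20                 # two per channel across 10 channels
die_per_package = 2
die_capacity_gib = 8          # 8GB 25nm MLC-HET die
raw_nand_gib = packages * die_per_package * die_capacity_gib

user_capacity_gib = 200e9 / 2**30     # 200,000,000,000 bytes expressed in GiB
spare_fraction = 1 - user_capacity_gib / raw_nand_gib

print(raw_nand_gib)                   # 320 GiB of NAND on the PCB
print(round(user_capacity_gib, 1))    # 186.3 GiB exposed to the user
print(f"{spare_fraction:.1%}")        # ~41.8% of the NAND held as spare area
```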

If we run the p/e count with 320GB as the actual amount of NAND available, it works out to just under 40K p/e cycles per cell. The significant spare area on the 710 increases the drive's projected lifespan by 55%! Intel even recommends setting aside another 20% of the drive if you need a longer lifespan; an extra 20% of spare area will give you another 50% increase in total bytes written. Keep in mind that tinkering with spare area helps by reducing write amplification; it doesn't magically make the NAND cells last longer.
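
Re-running the earlier estimate against the full 320GB of NAND, and applying Intel's own +20% spare area figures from the table above, looks roughly like this (same back-of-the-envelope assumptions as before):

```python
# Same cycle estimate, this time against all 320GiB of NAND on the PCB.
rated_writes = 2**50                  # 1.0 PB rating for the 200GB 710
raw_nand = 320 * 2**30                # bytes of NAND actually on board
write_amplification = 12

print(round(rated_writes / raw_nand * write_amplification))   # ~39322, just under 40K cycles

# Intel's table: an extra 20% of spare area lifts the 200GB 710's rating
# from 1.0 PB to 1.5 PB of 4KB random writes.
print((1.5 - 1.0) / 1.0)              # 0.5, i.e. a 50% increase in total bytes written
```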

If we believe Intel's specifications, MLC-HET actually sounds pretty decent. You get endurance in the realm of the X25-E but at significantly lower cost and with more reasonable capacity options.

Thankfully we don't need to just take Intel's word for it; we can measure for ourselves. For the past couple of years Intel has included a couple of counters in the SMART data of its SSDs. SMART attribute E2h gives you an accurate count of how much wear your current workload is putting on the drive's NAND. To measure it, all you need to do is reset the workload timer (E4h) and run your workload on the drive for at least 60 minutes. Afterwards, take the raw value in E2h and divide by 1024 to get the percentage of wear your workload put on the drive's NAND. I used smartmontools to reset E4h before running a 60 minute loop of our SQL benchmarks on the drive, simulating about a day of our stats DB workload.

Once the workloads finished looping I measured 0.0145% wear on the drive for a day of our stats DB workload. That works out to 5.3% of wear per year, or around 18.9 years before the NAND is done for. Long before the 710 died due to NAND wear running our stats DB, I'd probably be able to carry more storage than this in my pocket.
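
The extrapolation itself is trivial once you have the E2h reading. Here's a minimal sketch; the raw value would come from something like `smartctl -A /dev/sdX` (smartmontools), and the 14.85 used below is simply a hypothetical reading that corresponds to the 0.0145% we measured:

```python
# Projecting NAND lifespan from Intel's workload media wear indicator (E2h).
# The 14.85 is a hypothetical raw E2h reading matching our measured 0.0145%.
e2_raw_after_one_day = 14.85

wear_per_day = e2_raw_after_one_day / 1024    # Intel: raw E2h / 1024 = % NAND wear
wear_per_year = wear_per_day * 365
years_until_worn_out = 100 / wear_per_year

print(f"{wear_per_day:.4f}% wear per day")      # 0.0145%
print(f"{wear_per_year:.1f}% wear per year")    # ~5.3%
print(f"{years_until_worn_out:.1f} years")      # ~18.9 years of this workload
```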

For comparison I ran the same test on an Intel SSD 320 and ended up with a much shorter 4.6 year lifespan. Our stats DB does much more than just these benchmarks, however; chances are we'd see failure much sooner than 4.6 years on the 320. An even heavier workload would quickly favor the 710's MLC-HET NAND.

But what about performance? SLC write speeds are much higher than MLC, but Intel's MLC performance has come a long way since the old X25-E. Let's get to the benchmarks.

The Test

We're still building up our Enterprise Bench data, so forgive the lack of comparison results here. We've included a number of consumer drives simply as a reference point.

CPU: Intel Core i7 2600K running at 3.4GHz (Turbo & EIST Disabled)
Motherboard: Intel H67 Motherboard
Chipset: Intel H67
Chipset Drivers: Intel 9.1.1.1015 + Intel RST 10.2
Memory: Qimonda DDR3-1333 4 x 1GB (7-7-7-20)
Video Card: eVGA GeForce GTX 285
Video Drivers: NVIDIA ForceWare 190.38 64-bit
Desktop Resolution: 1920 x 1200
OS: Windows 7 x64

Random Read/Write Speed

Comments

  • AnnonymousCoward - Sunday, October 2, 2011 - link

    Use a CD; that's mechanical storage.

    Flash is definitely a poor choice, since charge is slowly lost. I'm not sure about magnetic hard drives.
  • ggathagan - Monday, October 3, 2011 - link

    If you use optical media, better have it a manufactured ROM disk, as opposed to writable media.
    Even the best consumer optical media is only good for around 10 years.
  • James5mith - Monday, October 3, 2011 - link

    One thing Intel and the rest of the SSD crowd have so far failed to address is the limitations of SATA for enterprise workloads. It doesn't matter how fast your drive is; on a SATA interface it will tank in truly heavy mixed I/O environments. Native SAS SSDs are needed to move HDD form factor SSDs forward in the industry.

    That being said, in my research for an in-house project, I discovered Smart Modular. Their Optimus SAS SSDs are up to 1.6TB/drive, and at that capacity are still below $3/GB.

    To be fair, I have not gotten any of them in house to test yet, so I can't say how well they will do. But knowing the difference moving from SATA to SAS drives makes in an array, I would have to imagine it will be significant when I do.

    Not to mention, from a pure density standpoint, 1.6TB 2.5" SSDs can pack more density per rack unit than 3TB 3.5" drives. Once you get to the 4TB 3.5" drives, the crown returns to slow spinning media though.
  • jhh - Monday, October 3, 2011 - link

    A conventional HDD wouldn't fare much better. The typical service life of an enterprise HDD is 5 years after a storage period of no more than 6 months. Beyond that point, the MTBF numbers aren't valid. That doesn't mean the drive will immediately fail. But, conventional HDDs aren't meant to be stored for a long period of time. The lubricating medium (grease or oil) will start moving away from where it is supposed to be, so that parts which were supposed to be lubricated won't be lubricated. The lubricant may also start oxidizing, becoming less of a lubricant. I'm not sure that a drive sitting around for 5 years will be any worse than a drive which was operating for 5 years, but I wouldn't count on either to be around for 20 years, let alone 50.

    If you want to be sure those baby pictures will still be around to show their grandkids, best to print the pictures, as we have never developed a better archival storage device. Better yet, paint it in some obscure cave in France. If you want the time capsule information to be available, print it on archival paper with archival inks.
  • rikmorgan - Tuesday, October 11, 2011 - link

    I see an article in EE Times saying Samsung is producing NAND flash on their new 20nm fab now and expects to begin running it at 10nm next year. Just passing it on, don't know how newsworthy it is. EE Times 10/10/2011 edition.
  • mayaw - Saturday, November 5, 2011 - link

    Because I do video graphics, and it's really the only thing that slows my system down for any type of reference number on tests. Even with a 10,000 RPM drive I have 12GB of memory, and I upgrade my processor every 3-4 years when I can, but I really never think about the hard drive. But now I am seriously thinking about getting one of these.
    Oh, and by the way guys, I think the collar of his shirt is stretched out because his head is so big and it has to stretch itself out because of the big brain he has in it, unlike people who are arguing about someone wearing a t-shirt in a hard drive review.
