Original Link: http://www.anandtech.com/show/6433/intel-ssd-dc-s3700-200gb-review



Update: I'll be answering questions about the S3700 live from this year's SC12 conference. Head over here to get your questions answered!

When Intel arrived on the scene with its first SSD, it touted superiority in controller, firmware and NAND as the reason it was able to so significantly outperform the competition. Slow but steady improvements to the design followed, and until 2010 Intel held our recommendation for the best SSD on the market. The long awaited X25-M G3 ended up being based on the same 3Gbps SATA controller as the previous two drives, just sold under a new brand. Intel changed its tune, claiming that the controller (or who made it) wasn't as important as firmware and NAND.

Then came the 510, Intel's first 6Gbps SATA drive...based on a Marvell controller. Its follow-on, the Intel SSD 520, used a SandForce controller. With the release of the Intel SSD 330, Intel had almost completely moved to third party SSD controllers. Intel still claimed proprietary firmware; in the case of the SandForce based drives, however, Intel never appeared to have access to the firmware source code. Rather, its custom firmware was the result of Intel's validation work and SandForce integrating changes into a custom branch maintained specifically for Intel. Intel increasingly looked like a NAND and validation house, giving it a small edge over the competition. Meanwhile, Samsung aggressively went after Intel's consumer SSD business with the SSD 830/840 Pro, while others attempted to pursue Intel's enterprise customers.

This is hardly a fall from grace, but Intel hasn't been able to lead the market it helped establish in 2008 - 2009. The SSD division at Intel is a growing one. Unlike the CPU architecture group, the NAND solutions group just hasn't been around for that long. Growing pains are still evident, and Intel management isn't too keen on investing heavily there. Despite the extremely positive impact on the industry, storage has always been a greatly commoditized market. Just as Intel is in no rush to sell $20 smartphone SoCs, it's similarly in no hurry to dominate the consumer storage market.

I had been hearing rumors of an Intel designed 6Gbps SATA controller for a while. Work on the project began years ago, but the scope of Intel's true next-generation SATA SSD controller changed many times along the way. What started as a client focused controller eventually morphed into an enterprise specific design, with its scope and feature set reinvented many times over. It's typical of any new company or group. Often the only way to learn focus is to pay the penalty for not being focused, and that usually happens when you're really late with a product. Intel's NSG had yet to come into its own; it hadn't yet found its ideal development/validation/release cadence. If you look at how long it took the CPU folks to get to tick-tock, it's still very early to expect the same from NSG.


The true 3rd generation Intel SATA SSD controller

Today all of that becomes moot as Intel releases its first brand new SSD controller in 5 years. This controller has been built from the ground up rather than as an evolution of a previous generation. It corrects a lot of the flaws of the original design and removes many constraints. Finally, this new controller brings a completely new performance focus. For the past couple of years we've seen controllers quickly saturate 6Gbps SATA and slowly raise the bar for random IO performance. With its first 6Gbps SATA controller, Intel significantly improves performance along both traditional vectors, but it also adds an entirely new one: performance consistency. All SSDs see their performance degrade over time, but with its new controller Intel wanted to deliver steady state performance that's far more consistent than the competition's.

I originally thought that we wouldn't see much innovation in the non-PCIe SSD space until SATA Express. It turns out I was wrong. Codenamed Taylorsville, it's time to review Intel's SSD DC S3700.



Inside the Drive

I already went over the S3700's underlying architecture, including the shift from a B-tree indirection table to a direct mapped flat indirection table which helped enable this increase in performance consistency. I'll point you at that article for more details as to what's going on underneath the hood. For high level drive details, the excerpts below should give you most of what you need.

The S3700 comes in four capacities (100, 200, 400 and 800GB) and two form factors (2.5" and 1.8"). The 1.8" version is only available at 200GB and 400GB capacities. Intel sees market potential for a 1.8" enterprise SSD thanks to the increasing popularity of blade and micro servers. The new controller supports 8 NAND channels, down from 10 in the previous design as Intel had difficulty hitting customer requested capacity points at the highest performance while populating all 10 channels. 6Gbps SATA and AES-256 are both supported by the new controller.

The S3700's chassis is 7.5mm thick and held together by four screws. The PCB isn't screwed into the chassis; instead, Intel uses three plastic spacers to keep the board in place once the drive is put together. Along one edge of the drive Intel places two 35V 47µF capacitors, enough to allow the controller to commit any data (and most non-data) to NAND in the event of a power failure. The capacitors in the S3700 are periodically tested by the controller; in the event that they fail, the controller disables all write buffering and throws a SMART error flag. Intel moved away from surface mount capacitors with the S3700 to reduce PCB real estate, which as you can see is very limited on the 2.5" drive. The S3700 supports operation on either the 12V rail, the 5V rail or both - a first for Intel. Power consumption is rated at up to 6W under active load (peak power consumption can hit 8.2W), which is quite high and will keep the S3700 from being a good fit for a notebook.

The S3700 is a replacement to the Intel SSD 710 (the 710 will be EOLed sometime next year), and thus uses Intel's 25nm HET-MLC (High Endurance Technology) NAND. The S3700 is rated for full 10 drive writes per day (4KB random writes) for 5 years.

Intel SSD DC S3700 Endurance (4KB Random Writes, 100% LBA)
  100GB 200GB 400GB 800GB
Rated Endurance 10DW x 5 years 10DW x 5 years 10DW x 5 years 10DW x 5 years
Endurance in PB 1.825 PB 3.65 PB 7.3 PB 14.6 PB
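The endurance figures above follow directly from the drive-writes-per-day spec; a quick sketch of the arithmetic:

```python
# Rated endurance implied by a 10 drive-writes-per-day (DWPD) spec over a
# 5 year warranty; this simply reproduces the table above.
def rated_endurance_pb(capacity_gb, dwpd=10, years=5):
    total_gb_written = capacity_gb * dwpd * 365 * years
    return total_gb_written / 1e6  # decimal GB -> PB

for cap in (100, 200, 400, 800):
    print(f"{cap}GB drive: {rated_endurance_pb(cap)} PB")
# 100GB -> 1.825 PB, 200GB -> 3.65 PB, 400GB -> 7.3 PB, 800GB -> 14.6 PB
```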

That's the worst case endurance for the drive; if your workload isn't purely random you can expect even more writes out of the S3700 (around 2x that for a sequential workload). Intel sent us a 200GB sample which comes equipped with 264GB of 25nm Intel HET-MLC NAND. Formatted capacity of the drive is 186GB in Windows (78GiB technically), giving you a total of 78GB of spare area for wear leveling, block recycling, redundancy and bad block replacement. Note that the percentage of overprovisioning on the S3700 is tangibly less than on the 710:

Intel HET-MLC SSD Overprovisioning Comparison
  Advertised Capacity Total NAND on-board Formatted Capacity in Windows MSRP
Intel SSD 710 200GB 320GB 186GB $800
Intel SSD DC S3700 200GB 264GB 186GB $470

Intel is able to guarantee longer endurance on the S3700 compared to the 710, with less spare area and built using the same 25nm HET-MLC NAND technology. The key difference here is the maturity of the process and firmware/controller. Both have improved to the point where Intel is able to do more with less.
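To put numbers on "more with less", here's the spare area of each drive relative to its total NAND, computed from the table above (advertised capacity used as the user-visible portion; the exact figures shift a bit depending on GB vs. GiB accounting):

```python
# Rough spare-area comparison from the overprovisioning table above.
def spare_area_pct(total_nand_gb, advertised_gb):
    return 100.0 * (total_nand_gb - advertised_gb) / total_nand_gb

print(f"SSD 710:  {spare_area_pct(320, 200):.1f}%")   # ~37.5%
print(f"DC S3700: {spare_area_pct(264, 200):.1f}%")   # ~24.2%
```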

 

Because of the odd amount of NAND on board, there are 14 x 16GB, 1 x 32GB and 1 x 8GB NAND packages on this 200GB PCB. Each package contains between one and four 8GB NAND die. Note that this is Intel's first SSD to use BGA mounted NAND devices. The controller itself is also BGA mounted; the underfill from previous generations is gone. The 8-channel controller is paired with 256MB of DDR3-1333 DRAM (the second pad is for a second DRAM used on the 800GB drive to reach 1GB of total DRAM capacity). Intel does error correction on all memories (NAND, SRAM and DRAM) in the S3700.

Pricing is much more reasonable than the Intel SSD 710. While the 710 debuted at around $6.30/GB, the Intel SSD DC S3700 is priced at $2.35/GB. It's still more expensive than a consumer drive, but the S3700 launches at the most affordable cost per GB of any Intel enterprise SSD. A non-HET version would likely be well into affordable territory for high-end desktop users. The S3700 is sampling to customers now, however widespread availability won't be here until the end of the year/beginning of Q1 2013.

Intel SSD DC S3700 Pricing (MSRP)
  100GB 200GB 400GB 800GB
Price $235 $470 $940 $1880

The S3700's performance is much greater than any previous generation Intel enterprise SATA SSD:

Enterprise SSD Comparison
  Intel SSD DC S3700 Intel SSD 710 Intel X25-E Intel SSD 320
Capacities 100 / 200 / 400 / 800GB 100 / 200 / 300GB 32 / 64GB 80 / 120 / 160 / 300 / 600GB
NAND 25nm HET MLC 25nm HET MLC 50nm SLC 25nm MLC
Max Sequential Performance (Reads/Writes) 500 / 460 MBps 270 / 210 MBps 250 / 170 MBps 270 / 220 MBps
Max Random Performance (Reads/Writes) 76K / 36K IOPS 38.5K / 2.7K IOPS 35K / 3.3K IOPS 39.5K / 600 IOPS
Endurance (Max Data Written) 1.83 - 14.6PB 500TB - 1.5PB 1 - 2PB 5 - 60TB
Encryption AES-256 AES-128 - AES-128
Power Safe Write Cache Y Y N Y

Intel is also promising performance consistency with its S3700. At steady state Intel claims the S3700 won't vary its IOPS by more than 10 - 15% for the life of the drive. Most capacities won't see more than a 10% variance in IO latency (or performance) at steady state. Intel has never offered this sort of a guarantee before because its drives would vary quite a bit in terms of IO latency.

Intel also claims to be able to service 99.9% of all 4KB random IOs (QD1) in less than 500µs:

The lower latency operation is possible through the use of a flat indirection table (and some really well done firmware) in the new controller. What happens now is there's a giant array with each location in the array mapped to a specific portion of NAND. The array isn't dynamically created and, since it's a 1:1 mapping, searches, inserts and updates are all very fast. The other benefit of being 1:1 mapped with physical NAND is that there's no need to defragment the table, which immediately cuts down the amount of work the controller has to do. Drives based on this new controller only have to keep the NAND defragmented (the old controller needed to defragment both logical and physical space).
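As an illustrative sketch (not Intel's actual firmware; the names and structure here are my own), a direct-mapped indirection table behaves like this:

```python
# Minimal sketch of a flat (direct-mapped) indirection table as described above.
# Each logical page index maps 1:1 to an entry holding its physical NAND
# location; lookups and updates are O(1) array accesses, so the table itself
# never needs to be searched, rebalanced or defragmented.
class FlatIndirectionTable:
    def __init__(self, num_logical_pages):
        # one entry per logical page, preallocated up front (hence the DRAM cost)
        self.table = [None] * num_logical_pages

    def lookup(self, logical_page):
        # read path: a single array index
        return self.table[logical_page]

    def update(self, logical_page, nand_loc):
        # write path: point the logical page at its new physical NAND location
        self.table[logical_page] = nand_loc

t = FlatIndirectionTable(num_logical_pages=1024)
t.update(42, ("die0", "block7", "page3"))
print(t.lookup(42))
```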

The downside to all of this is the DRAM area required by the new flat indirection table. The old binary tree was very space efficient, while the new array is just huge. It requires a large amount of DRAM depending on the capacity of the drive. In its largest implementation (800GB), Intel needs a full 1GB of DRAM to store the indirection table. By my calculations, the table itself should require roughly 100MB of DRAM per 100GB of storage space on the drive itself. There's a bit of space left over after you account for the new indirection table. That area is reserved for a cache of the controller's firmware so it doesn't have to read from slow flash to access it.
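That ~100MB per 100GB figure checks out if you assume (my assumption; Intel hasn't published the entry format) one 4-byte physical address per 4KB logical page:

```python
# Back-of-the-envelope check of the "~100MB of DRAM per 100GB" estimate,
# assuming a 4-byte table entry per 4KB logical page (illustrative only).
def table_size_mb(capacity_gb, page_kb=4, entry_bytes=4):
    entries = capacity_gb * 1e9 / (page_kb * 1024)
    return entries * entry_bytes / 1e6

print(f"{table_size_mb(100):.0f} MB per 100GB")      # ~98 MB
print(f"{table_size_mb(800):.0f} MB for 800GB")      # ~781 MB, fits in 1GB DRAM
```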

Once again, there's no user data stored in the external DRAM. The indirection table itself is physically stored in NAND (just cached in DRAM), and there are two large capacitors on-board to push any updates to non-volatile storage in the event of power loss.

It sounds like a simple change, but building this new architecture took quite a bit of work. With a drive in hand, we can put Intel's claims to the test. And there's one obvious place to start...



Consistent Performance: A Reality?

IO latency isn't an issue for most operations. Reads are almost guaranteed to complete relatively consistently so long as you have a decent controller. Similarly, sequential writes are very predictable and tend to be the easiest case to optimize for. You will see differences between drives when looking at read and sequential write latency, but the differences honestly won't be that dramatic. The chart below shows just how consistently random read performance is over time on Intel's SSD 710:

Enterprise Iometer - 4KB Random Read

It's the highly random write workloads that really trip up most controllers. You have to properly map random LBAs to physical NAND locations and regularly defragment the NAND to ensure that performance always remains high. All of this happens in real time, while insane amounts of data (and fragments) are flying at the drive. The drive has to balance the present need for high performance against the future need of not leaving the NAND in a fragmented state, all while keeping wear even across all NAND die. It requires a fast controller, a good underlying firmware architecture, and experience.

For the past few years we've been using 4KB random write performance as a measure of how advanced a controller is. Big multi-user (or virtualized) enterprise workloads almost always look fully random at a distance, which is why we use steady state random write performance as a key metric. The results below take hours to generate per drive and are truly an indication of steady state performance for 4KB random writes (100% LBA space):

Enterprise Iometer - 4KB Random Write

Most modern enterprise drives do a good job here. Drives from a few years ago couldn't cope with prolonged random write workloads, however the latest controllers have no issues sustaining high double or even triple digit transfer rates under heavy 4KB random writes. The graph above depicts steady state performance, which is honestly quite good for many of these enterprise drives. Intel's SSD DC S3700 obviously does very well here, but by looking at this graph alone we're really not doing the drive justice.

The next chart looks at max IO latency during this steady state 4KB random write workload:

Enterprise Iometer - 4KB Random Write

I had to pull Micron's P400e out of this graph because its worst case latency was too high to be used without a logarithmic scale. The 710 actually boasts higher max latency than even a 15K RPM mechanical hard drive! If you remember back to the really bad JMicron drives from 5 years ago, they would deliver max latency well over 1 second at much lighter workloads. There are significant differences between the drives if you look at worst case latency, but the average latency for most of these drives tends to be ok:

Enterprise Iometer - 4KB Random Write

The reason we haven't bothered with max IO latency in the past is because of what you see above. Average latency values are all pretty decent, indicating that despite the occasional high latency IO operation, things normally operate quite smoothly. Even though some of these max latencies can temporarily make an SSD slower than a hard drive, the overall SSD experience is still good enough to justify the move. The graphs above give us a couple of data points, but what we don't have a good idea of is just how different the S3700's behavior is over time. As is usually the case with big shifts in SSD architecture, we need a new way of looking at things.

To generate the data below I took a freshly secure erased SSD and filled it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next I kicked off a 4KB random write workload at a queue depth of 32 using incompressible data. I ran the test for just over half an hour, nowhere near as long as our steady state tests, but enough to give me a good look at drive behavior once all spare area filled up.

I recorded instantaneous IOPS every second for the duration of the test, then plotted IOPS vs. time to generate the scatter plots below. Within each set, every graph uses the same scale for easy comparison. The first two sets use a log scale, while the last set uses a linear scale that tops out at 40K IOPS for better visualization of differences between drives.
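The per-second IOPS numbers come from differencing a cumulative completion counter once a second; a minimal sketch of that bookkeeping (the actual logging was done with Iometer, so this is just an illustration of the method):

```python
# Derive per-second "instantaneous IOPS" from a running completed-IO counter
# sampled once per second.
def iops_per_second(completion_samples):
    """completion_samples: cumulative completed-IO count, one sample per second."""
    return [b - a for a, b in zip(completion_samples, completion_samples[1:])]

samples = [0, 36000, 71500, 106800]      # cumulative IOs at t = 0, 1, 2, 3s
print(iops_per_second(samples))          # [36000, 35500, 35300]
```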

The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).

The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.


Here we get an immediate feel for how all of these drives/controllers behave under heavy random writes. Although nearly all of the drives here are great average performers, variability among some is extremely high. The more variability, the less robust the drive's defrag/garbage collection algorithm is. If you click on the Intel SSD 710 you'll get a good feel for what's happening in these controllers.

The 710 has a very well defined performance trend. You see significant clustering of data points around what ends up being the drive's average performance over time. Performance is nice and tight early on while the 710 chews through its tremendous spare area, but even then you see occasional dips in IO performance. It's not super frequent, but it happens enough to be noticeable. Each one of those dots that appears significantly below the majority of the data points marks a point where the Intel controller had to go off and defrag the drive (and/or indirection table) in order to keep performance high. You can tell that Intel's controller is always working, which helps keep performance somewhat clustered, but over time the controller seems to bite off more than it can chew and thus performance degrades.

The old X25-E looks almost consistent here, but that's partly because its performance is so low by modern standards. The other half of the X25-E story is that its performance is more consistent than most of the other drives. SLC NAND, especially at the 50nm generation, had very low program/erase latencies.

Note that the SandForce based Intel SSD 330 does very well here. I did use incompressible data both for the initial fill as well as for the random workload, however if you look at the tail end of the 330's curve you begin to see the drive lose its cool. SandForce's controller manages to do a good job, relatively speaking.

Moving down the list, Toshiba's SAS controller actually does a fairly good job here - once again thanks in part to its use of SLC NAND. Turning to Micron's P400e however, we have the other end of the spectrum. If you look at the overall performance curve there's a very tight distribution of IOPS as the P400e is able to quickly burn through clean blocks in its spare area. As most workloads won't see several minutes of sustained, high queue depth random writes, Micron's approach keeps performance for everyone else quite high. The controller doesn't appear to do a lot of clean up as it goes along, which is why it hits a performance wall once all spare area is consumed. The performance after that point is hugely variable. Some transactions complete at respectable rates while others effectively deliver dozens of IOPS.

Samsung's SSD 840 Pro, although a high-end client drive, gives us good insight into the next generation of Samsung enterprise drives. The 840 Pro's performance distribution isn't anywhere near as bad as the P400e's, but there's still at least an order of magnitude difference between the worst case and best case performance while running our test. I suspect that throwing more spare NAND at the problem would greatly help the 840 Pro.

Finally we get to the new Intel drive. Although the scale on this graph paints an ok picture for many drives at a high level, one look at Intel's SSD DC S3700 puts it all in perspective. While all drives show a scatter plot of IOPS, the S3700 draws a line. If your experience with SSDs is great minus the occasional drop in performance, it looks like the S3700 doesn't know how to hiccup. At a high level we see the same sort of behavior here as with the other drives: performance starts out very high as spare area is filled, and there's a clear drop in performance as the S3700 adjusts to its new state (devoid of a large pool of clean blocks), but as we march towards steady state the S3700's performance is downright predictable.

The next set of charts looks at the steady state (for most drives) portion of the curve. Here we'll get better visibility into how these drives will perform over the long run.


The source data is the same; we're just focusing on a different part of the graph. Once again the S3700's plot is effectively a line. The rest of the drives, including Intel's SSD 330, start to really show just how much their performance can vary from one transaction to the next.

The final set of graphs abandons the log scale entirely and just looks at a linear scale that tops out at 40K IOPS. We're also only looking at steady state (or close to it) performance here:


At this scale we can see the amount of variance the S3700 has to deal with. The highest value during this period for the S3700 is 34865 IOPS and the lowest value is 29150, a difference of just over 17%. Intel claims in its datasheets that you'll only see a variance of around 10 - 15%. We're a bit higher than that, but not by much. Looking at the S3700's graph you can pinpoint the frequency and duration of the new controller's big defrag/GC routines. Roughly every 150 seconds there's a tangible drop in performance where the controller spends some of its cycles reorganizing data in the physical space. The cleanup is short by comparison, roughly 30 - 40 seconds. These times will likely change depending on the workload.

The 710 looks like an absolute mess by comparison. There's no clear cleanup period, likely because defragmenting both the indirection table and the physical NAND keeps the controller constantly scrambling to restore performance. The 330 looks similarly scrambled. In fact, none of the other drives deliver the consistency of the S3700. I'm not going to go out on a limb and say that Intel's new controller is the only one that's this well behaved, but at least compared to this small sample set of controllers and drives its performance appears to be quite unique.
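For reference, the spread between those min and max samples works out slightly differently depending on the baseline you divide by (the arithmetic here is mine, not Intel's):

```python
# Three obvious ways to express the same min/max IOPS spread as a percentage.
lo, hi = 29150, 34865          # worst and best 1-second IOPS samples
spread = hi - lo               # 5715 IOPS
print(f"vs max:  {100 * spread / hi:.1f}%")          # ~16.4%
print(f"vs mean: {200 * spread / (hi + lo):.1f}%")   # ~17.9%
print(f"vs min:  {100 * spread / lo:.1f}%")          # ~19.6%
```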

Implications for Enterprise & Client Users

The obvious impact from the S3700's performance in enterprise is that the drive should be much better behaved in large RAID arrays. With < 20% variation between min and max IOPS under steady state conditions (compared to gaps of 2x - 10x for older drives), overall performance should just improve. Heavily virtualized environments tend to look fully random in nature and they'll see big improvements in moving to the S3700. Even single drive environments will show much better behavior if the workload is random write heavy.

On the client side the improvements offered by the new S3700 controller are more subtle. Intel told me that it had to give up max performance in pursuit of consistency. We see that a bit if you look at our client focused Storage Bench suites. Samsung's SSD 840 Pro retains the overall performance crown (admittedly Intel's S3700 firmware is more optimized for enterprise workloads), but when it comes to IO consistency the S3700 can't be beat. For a client usage model (e.g. in a desktop or notebook) the impact of all of this is simply going to be fewer hiccups in normal use. Light workloads likely won't see any benefit especially if they're mostly read heavy, but the multitasking power users in the audience will likely appreciate the focus on consistency. I know through personal experience that I often encounter unexpectedly high IO latency at least several times a day on my personal machine (Samsung SSD 830). I would be willing to give up a bit of performance to get a more consistent experience. Although the S3700 isn't targeted at client systems, it's very likely that a future version will ship with non-HET 25nm MLC NAND (similar to the Intel SSD 710 vs 320). Given that the HET version of the drive is priced at $2.35 per GB, it's quite likely that we'll see Intel SSD 320 pricing (or perhaps a bit higher) for a lower endurance version.



Sequential Read/Write Speed

Similar to our other Enterprise Iometer tests, queue depths are much higher in our sequential benchmarks. To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 32. The results reported are in average MB/s over the entire test length.

Enterprise Iometer - 128KB Sequential Read

With Intel's first in-house 6Gbps controller, the S3700 shows a huge improvement in sequential IO compared to the outgoing 710. There are definitely still drives that put up better numbers if we look at all of our consumer test data (mainly when it comes to sequential write performance), but Intel is at least in the realm of the top performers and does very well compared to many current enterprise SSDs.

Enterprise Iometer - 128KB Sequential Write



Enterprise Storage Bench - Oracle Swingbench

We begin with a popular benchmark from our server reviews: the Oracle Swingbench. This is a pretty typical OLTP workload that focuses on servers with a light to medium workload of 100 - 150 concurrent users. The database size is fairly small at 10GB, however the workload is absolutely brutal.

Swingbench consists of over 1.28 million read IOs and 3.55 million writes. The read/write GB ratio is nearly 1:1 (reads are larger than writes on average). Parallelism in this workload comes through aggregating IOs, as 88% of the operations in this benchmark are 8KB or smaller. This test is actually something we use in our CPU reviews, so its average queue depth is only 1.33.

Oracle Swingbench - Average Data Rate

The S3700's only performance blemish is here in our Swingbench test. Like many other new drives we've looked at (e.g. Intel's SSD 910, Micron's P320h), the S3700's performance here is a regression compared to previous drives. I asked Intel about this and it appears that largely unaligned smaller-than-4KB accesses are slower on the new controller compared to the outgoing 710. My guess is that given how common 4KB accesses are, most controller vendors picked it as the optimization point for their next-gen drives. I'm not entirely sure how many enterprise applications exist that fall into this behavior pattern but it's worth pointing out in case you have a unique workload that features a lot of < 4KB unaligned accesses.

Update: I have some more clarification as to what's going on here. There are two components to the Swingbench test we're running here: the database itself, and the redo log. The redo log stores all changes that are made to the database, which allows the database to be reconstructed in the event of a failure. In good DB design, these two would exist on separate storage systems, but in order to increase IO we combined them both for this test. Accesses to the DB end up being 8KB and random in nature, a definite strong suit of the S3700 as we've already shown. The redo log however consists of a bunch of 1KB - 1.5KB, QD1, sequential accesses. The S3700, like many of the newer controllers we've tested, isn't optimized for low queue depth, sub-4KB, sequential workloads like this. 

Remember that with 25nm NAND, the page size grew to 8KB. Having tons of small IOs like this creates extra tracking overhead, which can require additional DRAM to maintain full performance. Intel views this type of scenario as unlikely (apparently the latest versions of Oracle allow you to force the redo log to use 4KB sectors instead of 512B sectors, which would make these transfers 8KB - 12KB in size), and thus simply didn't optimize for it with the S3700's firmware. Intel did confirm that, should customer feedback indicate this is a fairly common scenario, it would be willing to consider alternate solutions to improve performance here, but at this point it seems like it doesn't matter for the vast majority of use cases.

Oracle Swingbench - Disk Busy Time

Oracle Swingbench - Average Service Time



Enterprise Storage Bench - Microsoft SQL UpdateDailyStats

Our next two tests are taken from our own internal infrastructure. We do a lot of statistics tracking at AnandTech - we record traffic data to all articles as well as aggregate traffic for the entire site (including forums) on a daily basis. We also keep track of a running total of traffic for the month. Our first benchmark is a trace of the MS SQL process that does all of the daily and monthly stats processing for the site. We run this process once a day as it puts a fairly high load on our DB server. Then again, we don't have a beefy SSD array in there yet :)

The UpdateDailyStats procedure is mostly reads (3:1 ratio of GB reads to writes) with 431K read operations and 179K write ops. Average queue depth is 4.2 and only 34% of all IOs are issued at a queue depth of 1. The transfer size breakdown is as follows:

AnandTech Enterprise Storage Bench MS SQL UpdateDaily Stats IO Breakdown
IO Size % of Total
8KB 21%
64KB 35%
128KB 35%

Microsoft SQL UpdateDailyStats - Average Data Rate

Our own AnandTech server workloads don't play to any of the S3700's weaknesses, however. The S3700 ends up delivering better performance here than the SandForce based Intel SSD 520 and, in fact, any other enterprise SATA SSD we've tested.

Microsoft SQL UpdateDailyStats - Disk Busy Time

Microsoft SQL UpdateDailyStats - Average Service Time



Enterprise Storage Bench - Microsoft SQL WeeklyMaintenance

Our final enterprise storage bench test once again comes from our own internal databases. We're looking at the stats DB again, but this time we're running a trace of our Weekly Maintenance procedure. This procedure runs a consistency check on the 30GB database followed by a rebuild of the indexes on all tables to eliminate fragmentation. As its name implies, we run this procedure weekly against our stats DB.

The read:write ratio here remains around 3:1 but we're dealing with far more operations: approximately 1.8M reads and 1M writes. Average queue depth is up to 5.43.

Microsoft SQL WeeklyMaintenance - Average Data Rate

The S3700 continues to be the fastest enterprise SATA SSD we've tested. In the past you had to choose between performance (the 520) and endurance (the 710); with the S3700, Intel offers both.

Microsoft SQL WeeklyMaintenance - Disk Busy Time

Microsoft SQL WeeklyMaintenance - Average Service Time



AnandTech Storage Bench 2011

Although the S3700 isn't a client focused drive, I was curious to see how it would perform in our client Storage Bench suites.

Two years ago we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the majority of what we have today in terms of SSD benchmarks.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.

Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.

Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.

1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.

2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.

The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:

AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown
IO Size % of Total
4KB 28%
16KB 10%
32KB 10%
64KB 4%

Only 42% of all operations are sequential, the rest range from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.
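Stats like these fall out of a simple pass over the recorded trace. As a rough sketch (assuming a simplified per-IO record of (offset, size) in bytes; the actual trace capture and playback tooling isn't public):

```python
from collections import Counter

def summarize_trace(ops):
    """Summarize a list of hypothetical IO records as (offset, size) tuples.

    Returns the IO size distribution (percent of total operations) and the
    fraction of perfectly sequential IOs. An IO is counted as sequential
    here only if it starts exactly where the previous IO ended.
    """
    total = len(ops)
    sizes = Counter(size for _, size in ops)
    sequential = sum(
        1 for prev, cur in zip(ops, ops[1:])
        if cur[0] == prev[0] + prev[1]
    )
    size_pct = {s: 100.0 * n / total for s, n in sizes.items()}
    seq_pct = 100.0 * sequential / (total - 1) if total > 1 else 0.0
    return size_pct, seq_pct

# Tiny illustrative trace: three 4KB IOs and one 16KB IO.
ops = [(0, 4096), (4096, 4096), (1_000_000, 16384), (0, 4096)]
size_pct, seq_pct = summarize_trace(ops)
```

Classifying "pseudo-random" IO, as the breakdown above does, would additionally need a distance threshold between consecutive offsets rather than this strict equality test.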

Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
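How the three headline metrics relate can be sketched as follows, assuming a hypothetical per-IO record of (bytes transferred, service time, is_read); a faster drive moves the same bytes in less busy time, which is exactly what the disk busy graphs show:

```python
def playback_stats(ios):
    """Compute Storage Bench-style metrics from hypothetical IO records.

    Each record is (bytes_transferred, service_time_seconds, is_read).
    Average MB/s is total bytes moved divided by the time the disk was
    actually busy servicing IOs, not wall-clock time.
    """
    busy_time = sum(t for _, t, _ in ios)
    read_bytes = sum(b for b, _, is_read in ios if is_read)
    write_bytes = sum(b for b, _, is_read in ios if not is_read)
    total_bytes = read_bytes + write_bytes
    avg_mbps = (total_bytes / 1e6) / busy_time if busy_time else 0.0
    return {"busy_s": busy_time, "avg_mbps": avg_mbps,
            "read_mb": read_bytes / 1e6, "write_mb": write_bytes / 1e6}

# One 1MB read taking 0.5s, one 2MB write taking 1.0s.
stats = playback_stats([(1_000_000, 0.5, True), (2_000_000, 1.0, False)])
```

Breaking the totals out into reads and writes, as described above, keeps a write-heavy trace from hiding a drive's read performance.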

There's also a new light workload for 2011. This is a far more reasonable, typical every day use case benchmark. Lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB but it's still multiple times more write intensive than what we were running last year.

As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.

The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests. All of the older tests are still run on our X58 platform.

AnandTech Storage Bench 2011 - Heavy Workload

We'll start out by looking at average data rate throughout our new heavy workload test:

Heavy Workload 2011 - Average Data Rate

Nothing seems capable of reaching the 840 Pro's performance level, but the S3700 ends up doing very well. If it were released today as a consumer drive, it would be the fastest Intel had ever shipped.

AnandTech Storage Bench 2011 - Light Workload

Our new light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio better mimics a typical light workload (although even lighter workloads would be far more read centric).

The I/O breakdown is similar to the heavy workload at small IOs, however you'll notice that there are far fewer large IO transfers:

AnandTech Storage Bench 2011 - Light Workload IO Breakdown
IO Size % of Total
4KB 27%
16KB 8%
32KB 6%
64KB 5%

Despite the reduction in large IOs, over 60% of all operations are perfectly sequential. Average queue depth is a lighter 2.2029 IOs.

Light Workload 2011 - Average Data Rate

The S3700's performance in our light workload isn't as impressive, falling slightly behind the 520. Compared to Intel's last drive made fully in house, the S3700 does provide a healthy improvement. Samsung still owns the top of the client performance charts.



Power Consumption

Intel's SSD DC S3700 can draw power from either the 12V or the 5V rail. Most 2.5" SATA drives pull from the 5V rail, however the S3700 seems to prefer 12V if both are attached. The S3700 was the only drive I tested that drew any current on the 12V rail (and none on 5V); everything else was exclusively 5V. If your server supports it, Intel claims the S3700 can pull from both rails simultaneously.

As I alluded to in our S3700 architecture analysis piece, the Intel SSD DC S3700 draws quite a bit of power under load. It's easily the most power hungry Intel enterprise SSD, and it's the worst offender in all but one of our tests. The added power consumption isn't significant enough to be a problem in the datacenter, but it will likely keep the S3700 (or a non-HET derivative) from being a good fit for notebooks. Desktop use is another matter entirely, but for mobile use the amazing idle power consumption of Samsung's SSD 840 Pro controller remains a far better fit. I suspect what we're seeing here is the result of Intel's focus on the enterprise market: the S3700 controller was just not built for mainstream mobile client use.

Enterprise SSD Power Consumption - Idle

Enterprise SSD Power Consumption - 128KB Sequential Writes QD32

Enterprise SSD Power Consumption - 4KB Random Writes QD32



Final Words

Intel is back, but with an enterprise focus. The S3700 is one of the most exciting drives I've had the opportunity to test. Its architecture is more than just a better, faster evolution of the designs that came before it; the S3700 is truly something new.

I view the evolution of "affordable" SSDs as falling across three distinct eras. In the first era we saw most companies focusing on sequential IO performance. These drives gave us better-than-HDD read/write speeds but were often plagued by insane costs or horrible pausing/stuttering due to a lack of focus on random IO. In the second era, most controller vendors woke up to the fact that random IO mattered and built drives to deliver the highest possible IOPS. I believe Intel's SSD DC S3700 marks the beginning of the third era in SSD evolution, with a focus on consistent, predictable performance. At least compared to the drives/controllers used in this article, Intel's S3700 appears to be ahead of the curve. I suspect the next major eras will be the transition to PCIe based interfaces, followed by the move away from NAND altogether (there may be another distinct period in between those two as well).

The S3700 is an extremely exciting SSD. For its intended market, the S3700 continues Intel's push to reduce the cost of enterprise storage. The days of spending thousands of dollars on a relatively small amount of NAND for use in a server are over. Intel is guaranteeing 1 - 14PB of write endurance on the S3700, which should be more than enough for the vast majority of workloads (even overkill for many). Performance and IO consistency are both very good, at least when dealing with 4KB random writes. With the exception of smaller-than-4KB, unaligned IO performance, the S3700 is among the fastest enterprise SATA SSDs we've tested. It's a clear improvement over all of Intel's previous drives.
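The 1 - 14PB range scales with drive capacity. As a back-of-the-envelope check, assuming the 10 full drive writes per day over a 5-year warranty that Intel quotes for the S3700 family (figures from Intel's spec sheet, not measured here):

```python
def rated_endurance_pb(capacity_gb, dwpd=10, years=5):
    """Rated write endurance in PB, assuming Intel's quoted rating of
    10 full drive writes per day (DWPD) over the 5-year warranty.
    (Spec-sheet assumption, not a measured result.)"""
    return capacity_gb * dwpd * 365 * years / 1e6

# An 800GB drive works out to 14.6PB and a 100GB drive to ~1.8PB,
# in line with the 1 - 14PB range quoted above.
```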

With the exception of power consumption, the S3700's controller is the true third generation successor to Intel's X25-M G2. It focuses on a new aspect of performance that, until we move to SATA Express, should be the target for all high-end SSD controllers going forward. Depending on the application, consistency of performance can be just as important as absolute performance itself. I can't stress this enough: we are largely saturating 6Gbps SATA and random IO is already more than good enough for many workloads; delivering a more consistent experience should be everyone's target going forward.

It's good to see Intel back in the saddle with a competitive home grown controller; my only complaint is that I wish we could have the same technology applied across the entire market. Although less profitable, the consumer SSD space is in need of more high quality competitors that push the industry forward. Without Intel aggressively playing in both the consumer and enterprise spaces, I worry that we'll see a race to the bottom, with manufacturers focusing on reducing cost without necessarily prioritizing innovation. The IO consistency offered by the S3700 is something I know I wish I had in my notebook. I frequently encounter hiccups in performance that I do my best to combat by throwing additional spare area at the problem, but a more fundamental architectural solution would be ideal.

 
