Original Link: http://www.anandtech.com/show/5817/the-intel-ssd-330-review-60gb-120gb-180gb
The Intel SSD 330 Review (60GB, 120GB, 180GB)
by Anand Lal Shimpi on August 1, 2012 12:01 AM EST
Earlier this year Intel introduced its second SandForce based SSD: the Intel SSD 330. While Intel had previously reserved the 5xx line for 3rd party controllers, the 330 marks the first time Intel has used something other than its own branded controller in a mainstream or 3-series drive.
I don't doubt that I'll eventually get the story of how we got here. Apparently there's a good one behind why Intel's sequential write speed was capped at 100MB/s in the early days of the X25-M's controller. Regardless of how, this is where we are today: every new Intel SSD, with the exception of the high-end PCIe solution, is now powered by SandForce's SF-2281 controller and not Intel's own silicon.
The firmware is of course a collaboration between Intel and SandForce, although it's not clear if Intel ever had access to the firmware's source code or not. The result is a solution that performs a little differently than a standard SandForce drive, but should be less prone to compatibility/stability/reliability issues that have plagued SandForce drives for the past year. The latter is difficult to quantify.
We have seen examples of better behavior from Intel's SF-2281 firmware internally, and even wrote about one in our original Cherryville review. Despite Intel's best efforts, a small number of issues are starting to be reported in the wild. The number of publicly reported problems is very low, but it's impossible to say whether that's a function of time or of a truly superior design. I'm still comfortable saying Intel's SandForce drives are good and likely better tested than any other SF drive, but as with any SSD, there can be issues depending on your system configuration. For what it's worth, even Intel's own controllers have had issues.
The Intel SSD 330
The 330 is available in four capacities: 60GB, 120GB, 180GB and 240GB. The limited launch capacities are a bit odd when you consider the Intel SSD 320 was available from 40GB all the way up to 600GB. Given the 330 uses the same controller as the 520, anyone who needs a larger drive can always buy the 520 instead.
Architecturally the 330 and 520 are identical. They both use the same SF-2281 6Gbps controller, and they both use Intel's 25nm MLC NAND. In fact, if you look at the 330's PCB you'll see the same layout as the 520 and the Cherryville codename silkscreened onto the board. Despite the latter, Intel's SSD 330 is technically codenamed Maple Crest.
The similarities don't end there either. If you haven't updated to the latest Intel SSD Toolbox, the 330 is actually detected by the software as a 520:
Updating to the latest version fixes the misidentification:
The 330 and 520 are very similar drives. The 330's primary differentiation comes from its use of cheaper, lower endurance MLC NAND. I'll get to the math behind why this isn't an issue at all for most users shortly. Conceptually, the 330 vs. the 520 is very similar to Kingston's HyperX 3K vs. regular HyperX drive. Just like with frequency binning for CPUs, there's endurance binning for NAND. Lower endurance parts are more plentiful (and thus cheaper) while the highest endurance parts will be sold for a premium (e.g. MLC-HET). If Intel does its job right, most of the stuff in the middle should be very good. And if it does its job really well, even the lower endurance parts should be more than good enough.
Intel's SSD 330 also carries a different firmware version from the 520: 300i vs. 400i. The firmware changes are likely minor in nature, however one notable change is the loss of Intel's E2/E3/E4 SMART attributes for quick endurance testing. As I mentioned in our look at Intel SSDs in the enterprise, you can use these attributes to determine write amplification and estimate NAND longevity for a given workload. Intel views these as enterprise features, and since the 330 is positioned exclusively as a client drive, it loses them. You still have an accurate count of total host writes vs. NAND writes, as well as Intel's media wear indicator that tells you what percentage of p/e cycles you have exhausted. I suspect this is more of Intel's famous forced segmentation at work than a true delineation between client and datacenter drives. Depending on the server and workload, the 330 could be just fine.
Using cheaper NAND allows Intel to be a little more aggressive on the 330's pricing without sacrificing margins. We turn to our Newegg pricing table once more to see where this puts the 330 in the grand scheme of things:
| SSD Price Comparison | 60/64GB | 120/128GB | 180GB | 240/256GB | 480/512GB |
|---|---|---|---|---|---|
| Intel SSD 330 | $70 | $100 | $160 | - | - |
| Intel SSD 520 | $90 | $125 | $190 | $255 | $500 |
| OCZ Vertex 3 | $70 | $95 | - | $200 | $530 |
| OCZ Vertex 4 | $75 | $115 | - | $210 | $550 |
| Plextor M3 Pro | - | $180 | - | $280 | $680 |
| Samsung SSD 830 | $128 | $143 | - | $282 | $700 |
Although SSD pricing is extremely volatile, Intel's SSD 330 tends to be among the cheaper solutions. The 60GB drive is just as cheap as the competition at $70, and the 120GB model is only $5 more than the cheapest alternative here. The 180GB drive is an interesting point below $200 if you need just a little more capacity than a 120GB drive would afford you. You pay a small price per GB penalty (~6%) but if you need capacity at a specific budget, it works. The newly announced 240GB drives were either backordered or not listed at many vendors.
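The price-per-GB tradeoff is easy to check. Here's a quick sketch using the street prices from the table above (nothing here is Intel data, just arithmetic):

```python
# Price-per-GB comparison for the Intel SSD 330 lineup (prices from the table above).
prices = {60: 70, 120: 100, 180: 160}  # capacity (GB) -> street price (USD)

per_gb = {cap: price / cap for cap, price in prices.items()}
for cap, ppg in sorted(per_gb.items()):
    print(f"{cap}GB: ${ppg:.3f}/GB")

# The ~6% premium the article cites for the 180GB model over the 120GB model:
penalty = per_gb[180] / per_gb[120] - 1
print(f"180GB vs 120GB price/GB penalty: {penalty:.1%}")
```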
The Drives and Internal Architecture
The SF-2281 controller features eight NAND channels, although it can pipeline multiple requests on each channel. The 60GB and 120GB drives both feature eight NAND packages, with one 8GB die and two 8GB die per package, respectively. That works out to 64GiB of NAND on the 60GB drive and 128GiB on the 120GB drive. RAISE is disabled on both of these drives; all spare area is dedicated to the replacement of bad blocks as well as garbage collection/block recycling.
The 180GB drive on the other hand uses twelve NAND packages, with two 8GB 25nm MLC die per package. The math works out to be 192GiB of NAND. SandForce's redundant NAND technology (RAISE) is enabled on the 180GB drive, so the extra spare area is divided between NAND failure coverage as well as traditional garbage collection/bad block replacement. Note the non-multiple-of-eight NAND configuration poses a bit of a challenge for extracting peak performance out of the drive, however it's still able to deliver a tangible advantage over the 120GB version. It's possible that Intel still routes all 8 channels to NAND die and simply sacrifices pipelining of requests on some of the channels.
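The capacity math above can be sketched in a few lines (die counts from the text; advertised capacities are decimal GB while NAND comes in binary GiB, which is where the spare area comes from):

```python
# Raw NAND vs. user capacity for each SSD 330 model (configurations per the text).
GIB = 2**30   # NAND die capacity is binary (8GiB per die)
GB = 10**9    # advertised drive capacity is decimal

models = {
    # capacity_GB: (NAND packages, 8GB dies per package)
    60:  (8, 1),
    120: (8, 2),
    180: (12, 2),
}

for cap_gb, (pkgs, dies) in models.items():
    raw_bytes = pkgs * dies * 8 * GIB
    user_bytes = cap_gb * GB
    spare = 1 - user_bytes / raw_bytes
    print(f"{cap_gb}GB: {pkgs * dies * 8}GiB raw NAND, {spare:.1%} spare area")
```

Under these assumptions all three models reserve roughly the same ~13% of raw NAND as spare; the difference is how the 180GB model spends it, splitting that pool between RAISE and garbage collection/bad block replacement.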
In the case of all of the Intel SSD 330 models, the SF-2281 controller is cooled by a thermal pad that helps dissipate heat across the SSD's metal chassis. All 330s are 9.5mm thick without any external removable spacer, which sets these drives apart from most Intel SSDs. Strangely enough, the plastic spacer that's normally on the outside of the drive is actually located on the inside of the 330, although it's not actually responsible for the thicker form factor. I'm not sure why, but it's in there.
The standard 330 bundle comes with a molex to SATA power adapter, a SATA cable, a 2.5" to 3.5" drive sled and a link to download Intel's Data Migration Software (powered by Acronis).
As the 330 isn't targeted at OEMs, I'm not sure if we'll see standalone drives sold without the bundle. As of now this is the only way to get the 330.
Lower Endurance: A Non-Issue
We've mentioned time and time again that NAND endurance is a non-issue for desktop (and even some enterprise) workloads.
Intel doesn't quantify the difference in endurance between the NAND used on the SSD 330 and the 520, it simply states that the 330's NAND will deliver a useful lifespan of 3 years and thus carries a 3 year warranty. Unfortunately, without access to the E2/E3/E4 counters we can't quickly figure out how long the 330 would last given a typical client workload. Thankfully Intel left one SMART wear indicator intact: the E9 attribute, otherwise known as the Media Wearout Indicator.
The normalized value of the E9 attribute (accessible via any SMART monitoring software, or Intel's SSD Toolbox) starts at 100 and decreases, linearly by integer increments, down to 0. Its meaning? The number of cycles the NAND media has undergone. At 100, your NAND is running at roughly full health. At 90, you've exhausted 10% of your NAND's lifespan. By the time the E9/MWI attribute gets down to 1 (99% of NAND p/e cycles have been used up) the counter stops decrementing and you're recommended to replace the drive. Even Intel admits however it's quite likely that at a value of 1 your drive will last for a considerable amount of time.
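As a sketch of how to read the attribute (a hypothetical helper illustrating the mapping described above, not Intel SSD Toolbox code):

```python
def wear_from_mwi(mwi: int) -> float:
    """Fraction of rated NAND p/e cycles consumed, given the normalized
    E9 Media Wearout Indicator (100 = fresh, 1 = rated endurance spent)."""
    if not 1 <= mwi <= 100:
        raise ValueError("normalized MWI is reported in the range 1..100")
    return (100 - mwi) / 100

print(wear_from_mwi(100))  # 0.0  -> full health
print(wear_from_mwi(90))   # 0.1  -> 10% of lifespan exhausted
print(wear_from_mwi(1))    # 0.99 -> replacement recommended
```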
These are SandForce drives that implement real time data deduplication/compression so I wanted to create the worst case scenario to see how big of a deal this lower endurance NAND would be. I filled the 60GB Intel SSD 330 with incompressible data, leaving only its spare area untouched. I then ran a 4KB random write test, at a queue depth of 32, using incompressible data in four blocks of 6 hours, stopping only to look at the state of the E9/MWI attribute.
Even after 3959GB written to the 60GB drive, the media wear indicator remained at 100. If we do the math and assume Intel isn't lying about the attribute decreasing linearly from 100 down to 0, the NAND on this particular 60GB SSD 330 should be good for at least 6000 p/e cycles: 3959GB/64GB works out to ~61 p/e cycles per NAND cell, and if 61 cycles haven't consumed even 1% of the drive's rated life, that rated life must exceed roughly 6100 cycles.
Not satisfied with these results I went in for another 24 hour round. By the end of it I had written 7629GB to the 60GB drive, at around 119 p/e cycles per NAND cell (assuming perfect wear leveling). The MWI hadn't budged:
Extrapolating based on this data you'd end up with over 10,000 p/e cycles for this particular drive. Whether or not this is accurate remains to be seen, but endurance is clearly not going to be an issue with Intel's SSD 330.
If you assume a typical client drive sees 10GB of writes per day, within a year you'd write 3650GB to the drive. I wrote that much in 24 hours. In fact, I wrote more than two years worth of data to the 60GB Intel SSD 330 in two days. All the while the Intel SSD 330 didn't even blink.
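The extrapolation above reduces to a few lines of arithmetic (figures from this test; the endurance number is a lower bound precisely because the wear indicator never moved):

```python
# Extrapolating p/e cycles from the endurance torture test above.
nand_writes_gb = 7629      # total GB written over ~48 hours of testing
raw_nand_gb = 64           # the 60GB drive carries 64GiB of NAND

cycles_used = nand_writes_gb / raw_nand_gb   # assumes perfect wear leveling
print(f"~{cycles_used:.0f} p/e cycles per cell")

# The MWI never dropped below 100, so less than 1% of rated life was consumed:
min_rated_cycles = cycles_used * 100
print(f"rated endurance is at least ~{min_rated_cycles:,.0f} cycles")

# Context: a typical client drive sees ~10GB of writes per day.
days_equivalent = nand_writes_gb / 10
print(f"equivalent to ~{days_equivalent:.0f} days "
      f"({days_equivalent / 365:.1f} years) of typical use")
```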
Note that as with any form of binning, you will see examples of drives that do better or worse than the one I've tested. Unless there's something horribly wrong with Intel's 25nm NAND process however, the difference is unlikely to be large enough where it'd be an issue.
As a side benefit to all of this experimenting with death is we can actually quantify write amplification on a SandForce drive.
Intel's 300i firmware, like all SandForce firmware, keeps track of NAND vs. host writes. Using that data I'm able to quantify write amplification during this fairly worst-case scenario workload:
7629GB of NAND writes vs. 1550GB of host writes = ~4.9x write amplification
Even faced with this worst-case incompressible workload, write amplification stays under 5x. Typical write amplification for client users, whose data is at least partially compressible, will likely be far lower.
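With both counters in hand, the calculation is trivial (figures from this test):

```python
# Write amplification = NAND writes / host writes, both tracked by the
# SF-2281 firmware's SMART counters (values from the torture test above).
nand_writes_gb = 7629
host_writes_gb = 1550

write_amplification = nand_writes_gb / host_writes_gb
print(f"write amplification: ~{write_amplification:.1f}x")  # ~4.9x
```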
| Component | Configuration |
|---|---|
| CPU | Intel Core i7 2600K running at 3.4GHz (Turbo & EIST Disabled) - for AT SB 2011, AS SSD & ATTO |
| Motherboard | Intel DH67BL Motherboard |
| Chipset Drivers | Intel 126.96.36.1995 + Intel RST 10.2 |
| Memory | Corsair Vengeance DDR3-1333 2 x 2GB (7-7-7-20) |
| Video Card | eVGA GeForce GTX 285 |
| Video Drivers | NVIDIA ForceWare 190.38 64-bit |
| Desktop Resolution | 1920 x 1200 |
| OS | Windows 7 x64 |
Random Read/Write Speed
The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.
Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
At similar capacities, the 330 and 520 offer nearly identical random read performance. The old X25-M G2 actually offers better random read performance than many of the newer drives, although most users would be hard pressed to tell the difference in actual usage.
Random write performance is great with easily compressible data, but even when faced with data that can't be reduced the Intel SSD 330 does very well. Once again performance is very similar between the 330 and 520 drives.
Many of you have asked for random write performance at higher queue depths. What I have below is our 4KB random write test performed at a queue depth of 32 instead of 3. While the vast majority of desktop usage models experience queue depths of 0 - 5, higher depths are possible in heavy I/O (and multi-user) workloads:
Sequential Read/Write Speed
To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.
Low queue depth sequential read performance is good but not exactly class leading. Once again there's no real performance difference between the 330 and 520.
Sequential write performance with incompressible data is the biggest downside to any SandForce based drive. Try copying a compressed video or photos to the drive and you'll see speeds south of 230MB/s. The 60GB drive can only manage 80MB/s with incompressible data; that's actually no faster than the old Intel X25-M.
AS-SSD Incompressible Sequential Performance
The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.
Here we see what higher queue depth sequential reads look like. The 330 gets close to 500MB/s but never actually exceeds it.
Incompressible sequential write performance, again, doesn't look very good.
AnandTech Storage Bench 2011
Two years ago we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. I assembled the traces myself out of frustration with the majority of what we have today in terms of SSD benchmarks.
Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.
Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.
Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.
1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.
2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.
The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:
|AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown|
|IO Size||% of Total|
Only 42% of all operations are sequential; the rest range from pseudo-random to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.
Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
There's also a new light workload for 2011. This is a far more reasonable, typical every day use case benchmark. Lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB but it's still multiple times more write intensive than what we were running in 2010.
As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.
The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests.
AnandTech Storage Bench 2011 - Heavy Workload
We'll start out by looking at average data rate throughout our new heavy workload test:
Our heavy storage bench suite shows average performance for Intel's SSD 330. There's a bit of a gap between it and the SSD 520 as well.
The next three charts just represent the same data, but in a different manner. Instead of looking at average data rate, we're looking at how long the disk was busy during this entire test. Note that disk busy time excludes any and all idle time; this is just how long the SSD was busy doing something:
AnandTech Storage Bench 2011 - Light Workload
Our new light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio does better mimic a typical light workload (although even lighter workloads would be far more read centric).
The I/O breakdown is similar to the heavy workload at small IOs, however you'll notice that there are far fewer large IO transfers:
|AnandTech Storage Bench 2011 - Light Workload IO Breakdown|
|IO Size||% of Total|
The 330's performance in our lighter workload is similarly middle of the road.
PCMark 7's secondary storage benchmark does little to show us differences between modern, high-performance SSDs as everything here scores within 5% of one another - but that's the point. For most mainstream client uses you'd be hard pressed to tell the difference between two good 6Gbps SSDs. Worry more about cost and reliability than outright performance if you're considering an SSD for a normal machine. Anything you see here will be much faster than a mechanical drive.
Performance Over Time & TRIM
SandForce's controllers have always behaved poorly when pushed into their worst case. Fill an SF-2281 based drive completely with incompressible data, then continue writing incompressible data (overwriting some of what you've already written) until the spare area fills as well, and you'll back the controller into a corner it can't get out of, even with TRIM. It's not a terribly realistic situation since anyone using an SF-2281 SSD as a boot drive will at least have Windows (or some other easily compressible OS) installed, and it's fairly likely you'll have other things stored on your SSD in addition to large movies/photos. Regardless, it's a corner case we do have to pay attention to.
As we found out in our 520 review, Intel's firmware isn't immune to this corner case. I filled a 120GB SSD 330 with incompressible data, then ran a 60 minute 4KB random write torture test (QD32), once again with incompressible data. Normally I'd use HDTach to chart performance over time; however, HDTach measures performance with highly compressible data, so we wouldn't get a good representation of performance here. Instead I ran AS-SSD to get a good idea of incompressible sequential performance in this worst case state. Afterwards, I TRIMed the drive and ran AS-SSD again to see if TRIM could recover the drive's performance.
| Intel SSD 330 - Resiliency - AS SSD Sequential Write Speed - 6Gbps | Clean | After Torture | After TRIM |
|---|---|---|---|
| Intel SSD 330 120GB | 148.9 MB/s | 96.3 MB/s | 94.4 MB/s |
This is really the biggest problem with SandForce drives. If you're primarily storing large amounts of incompressible data, sequential write speeds suffer even further over the long haul.
Idle power consumption is pretty high for the Intel SSD 330, while active power consumption is middle of the road. Unfortunately if you opt for the Samsung SSD 830 instead you get better idle power consumption but worse active power.
Of the available SandForce drives, I've felt most comfortable recommending Intel's own. A pass through Intel's validation labs provides extra peace of mind that hopefully translates into a better overall experience. In the past Intel has been a reliable option but not necessarily the most affordable; the 330 attempts to correct that. While other drives are cheaper, the 330 gives you a unique combination: an Intel validated drive at a competitive price point.
The performance delta between the 330 and the 520 is narrow enough that I don't see a reason to recommend the 520 unless you need a higher capacity drive. The loss of endurance is likely something no typical end user would ever notice. Perhaps the lower p/e cycle rating is enough to keep the 330 out of write heavy enterprise deployments, but otherwise it's a non-issue.
The biggest problem with Intel's SSD 330 really stems from the limitations of its SandForce controller. Performance with incompressible (or software encrypted) data is hardly competitive. As an unencrypted OS/application drive the 330 is great, but if you're planning on using software encryption or will be primarily storing photos, videos and music you'll want to opt for a drive based on a different controller technology.
In the end it's good to see Intel playing aggressively on price. The 330 is likely one of the best SandForce drives on the market, and not having to pay a premium for it is pretty awesome.