Original Link: http://www.anandtech.com/show/3681/oczs-vertex-2-special-sauce-sf1200-reviewed
OCZ's Vertex 2, Special Sauce SF-1200 Reviewed
by Anand Lal Shimpi on April 28, 2010 3:17 PM EST
Last week I migrated both of my primary work computers, my desktop and my notebook, to SandForce based SSDs. My desktop now uses an OCZ Vertex 2 based on the SandForce SF-1200 with OCZ’s special sauce firmware. My notebook uses Corsair’s Force F100, also based on the SF-1200 but offering equal performance to the Vertex 2.
Clearly 100GB isn’t enough space for everything I have, so on my desktop I have a pair of 1TB drives in RAID-1. This is where I store all of my pictures, music and some of my movies. Automatic backups happen to a separate 2TB networked drive.
I’ve got a separate file server that feeds the rest of my home and office with a 3TB RAID-5 array. The last part is really there to feed my HTPC and hold all of my benchmarking applications, images and lab files; it’s not necessary otherwise.
My desktop and notebook drives basically house an OS, applications, emails, PDFs, spreadsheets and tons of text files. In other words - highly compressible data.
This is exactly the sort of usage model SandForce was planning on when it designed its DuraWrite technology. If the majority of the data you store can somehow be represented by fewer bits you can solve a lot of the inherent problems with building a high performance SSD.
The SF-1200 and 1500 controllers do just that. The controllers and their associated firmware do whatever it takes to simply write less. In systems like my desktop or notebook, this is very simple. Writing less means the NAND lasts longer, it means that performance remains high for longer and with TRIM you can actually maintain that very high level of performance almost indefinitely.
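SandForce's actual DuraWrite logic is proprietary, but the principle can be sketched with ordinary lossless compression. This is a minimal illustration, not SandForce's implementation; `bytes_written` is an invented helper, and zlib stands in for whatever transform the controller really uses:

```python
import os
import zlib

def bytes_written(payload: bytes) -> int:
    """Model a DuraWrite-style controller: store the compressed form
    if it is smaller, otherwise fall back to the raw payload."""
    compressed = zlib.compress(payload)
    return min(len(compressed), len(payload))

# Highly compressible data (text, logs, spreadsheets) shrinks dramatically...
text = b"The quick brown fox jumps over the lazy dog. " * 1000
# ...while already-random data (encrypted or compressed files) does not.
noise = os.urandom(len(text))

print(bytes_written(text) / len(text))    # a small fraction of the original
print(bytes_written(noise) / len(noise))  # 1.0, no savings possible
```

Fewer bytes hitting the NAND is what buys the endurance and sustained-performance benefits described above; data that is already random gets no such help, which is exactly what the random-data Iometer results later in this review show.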
SandForce’s technology is entirely transparent to the end user. You don’t get any extra capacity, all you get is better performance.
As I just mentioned, OCZ’s Vertex 2 ended up in my desktop. That’s the drive we’re looking at today. I moved to SandForce SSDs not because I wanted more performance, but because I wanted to begin long term testing of the mass production firmware on these drives. If I’m going to recommend them, I’m going to use them.
The Vertex 2
With the Agility 2 and Vertex 2 drives, OCZ has completely abandoned Indilinx in the high end MLC space. While Indilinx’s JetStream controller is still expected sometime this year, OCZ has clearly aligned with SandForce for the immediate future of its high end SSDs.
The Agility 2 we recently reviewed uses a standard SF-1200 controller and firmware. The Vertex 2 uses the same controller, but ships with a different (allegedly OCZ exclusive) firmware that enables higher small file random write speeds. This is a mass production firmware revision (based on 3.0.5) and is officially sanctioned by SandForce.
The drives carry a small price premium over OCZ’s Agility 2 line:
|OCZ SandForce Drive Pricing (MSRP)|50GB|100GB|200GB|
|---|---|---|---|
|OCZ Agility 2|$204.99|$379.99|$719.99|
|OCZ Vertex 2|$219.99|$399.99|$769.99|
In theory, the Vertex 2 should be the fastest SF-1200 on the market. However, Corsair’s Force F100 offers similar performance. The trick is in the firmware. Corsair ships its drives with SandForce’s release candidate firmware (3.0.1), which has the higher small file random write performance. In order to work around a known issue with that firmware, Corsair disables a power saving state that results in slightly higher power consumption from the Force F100.
I’ve been using both drives and so far, they both work fine. But if you want the performance and to stick with SandForce’s MP firmware, the Vertex 2 is apparently the only solution for now.
Starting with the Differences: Power Consumption
I’ll spoil the surprise up front: the Vertex 2 performs identically to Corsair’s Force drives, as expected. This is a good thing since those drives perform as well as the Vertex LE, which is generally faster than Intel’s X25-M and second only to Crucial’s C300 in some cases.
Unless we stumble upon any other issues with the RC firmware, the difference between these drives amounts to power consumption.
At idle the Vertex 2 actually uses more power than Corsair’s Force F100 - 0.65W vs. 0.57W. The Vertex LE exhibits the same lower idle power consumption as the F100, so I wonder if it’s related to the earlier firmware revision (I haven’t updated my LE to 3.0.5/1.05 yet).
Under load we see a larger difference between the Vertex 2 and the F100. The Corsair drive pulls 1.25W in our sequential write test compared to 0.97W from the Vertex 2. For desktop users the power consumption differences won’t matter, this is really more of an academic or notebook discussion.
Power consumption during heavy random writes is closer between the two drives, but the Force still draws a tad more.
For notebook users it appears to be a tradeoff - lower power consumption at idle or lower power consumption under load. I’d argue that the former is just as important, but it really varies based on usage model.
Still Resilient After Truly Random Writes
In our Agility 2 review I did what you all asked: used a newer build of Iometer to not only write data in a random pattern, but write data comprised of truly random bits in an effort to defeat SandForce’s data deduplication/compression algorithms. What we saw was a dramatic reduction in performance:
|Iometer Performance Comparison - 4K Aligned, 4KB Random Write Speed|Normal Data|Random Data|% of Max Perf|
|---|---|---|---|
|Corsair Force 100GB (SF-1200 MLC)|164.6 MB/s|122.5 MB/s|74.4%|
|OCZ Agility 2 100GB (SF-1200 MLC)|44.2 MB/s|46.3 MB/s|105%|
|Iometer Performance Comparison - Corsair Force 100GB (SF-1200 MLC)|Normal Data|Random Data|% of Max Perf|
|---|---|---|---|
|4KB Random Read|52.1 MB/s|42.8 MB/s|82.1%|
|2MB Sequential Read|265.2 MB/s|212.4 MB/s|80.1%|
|2MB Sequential Write|251.7 MB/s|144.4 MB/s|57.4%|
While I don’t believe that’s representative of what most desktop users would see, it does give us a range of performance we can expect from these drives. It also gave me another idea.
To test the effectiveness and operation of TRIM I usually write a large amount of data to random LBAs on the drive for a long period of time. I then perform a sequential write across the entire drive and measure performance. I then TRIM the entire drive, and measure performance again. In the case of SandForce drives, if the applications I’m using to write randomly and sequentially are using data that’s easily compressible then the test isn’t that valuable. Luckily with our new build of Iometer I had a way to really test how much of a performance reduction we can expect over time with a SandForce drive.
I used Iometer to randomly write randomly generated 4KB data over the entire LBA range of the Vertex 2 for 20 minutes. I then used Iometer to sequentially write randomly generated data over the entire LBA range of the drive. At this point every LBA should have been touched, both as far as the user is concerned and as far as the NAND is concerned; we had written at least a full drive's worth of data.
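The reason for the two phases can be shown with a toy simulation (the LBA count and variable names below are stand-ins, not the real drive or Iometer): timed random writes alone leave some LBAs untouched, while the follow-up sequential pass guarantees full coverage:

```python
import random

# Toy model: the "drive" is just a set of touched 4KB LBAs, not real hardware.
TOTAL_LBAS = 4096                # stand-in for the drive's actual LBA count
rng = random.Random(42)

# Phase 1: random 4KB writes land on arbitrary LBAs; repeats are common
# and some LBAs are never hit, just as in the timed Iometer run.
touched = {rng.randrange(TOTAL_LBAS) for _ in range(TOTAL_LBAS * 2)}
random_coverage = len(touched) / TOTAL_LBAS   # below 1.0 with near certainty

# Phase 2: a sequential pass over the whole range fills in the gaps.
touched.update(range(TOTAL_LBAS))
full_coverage = len(touched) / TOTAL_LBAS     # exactly 1.0

print(f"random only: {random_coverage:.1%}, after sequential pass: {full_coverage:.1%}")
```

Even writing twice the drive's capacity randomly leaves a meaningful fraction of LBAs untouched in this model, which is why the sequential pass is needed before the drive can be considered fully dirtied.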
Using HDTach, I measured performance across the entire drive:
The sequential read test reads back the highly random data we wrote all over the drive, and you’ll note it takes a definite performance hit.
Performance is still respectably high and if you look at write speed, there are no painful blips that would result in a pause or stutter during normal usage. In fact, despite the unrealistic workload, the drive proves to be quite resilient.
TRIMing all LBAs restores performance to like-new levels:
The takeaway? While SandForce’s controllers aren’t immune to performance degradation over time, we’re still talking about speeds over 100MB/s even in the worst case scenario and with TRIM the drive bounces back immediately.
I’m quickly gaining confidence in these drives. It’s just a matter of whether or not they hold up over time at this point.
With the differences out of the way, the rest of the story is pretty well known by now. The Vertex 2 gives you a definite edge in small file random write performance, and maintains the already high standards of SandForce drives everywhere else.
The real world impact of the high small file random write performance is negligible for a desktop user. I’d go so far as to argue that we’ve reached the point of diminishing returns to boosting small file random write speed for the majority of desktop users. It won’t be long before we’ll have to start thinking of new workloads to really start stressing these drives.
I've trimmed down some of our charts, but as always if you want a full rundown of how these SSDs compare against one another be sure to use our performance comparison tool: Bench.
|CPU|Intel Core i7 965 running at 3.2GHz (Turbo & EIST Disabled)|
|---|---|
|Motherboard|Intel DX58SO (Intel X58)|
|Chipset|Intel X58 + Marvell SATA 6Gbps PCIe|
|Chipset Drivers|Intel 126.96.36.1995 + Intel IMSM 8.9|
|Memory|Qimonda DDR3-1333 4 x 1GB (7-7-7-20)|
|Video Card|eVGA GeForce GTX 285|
|Video Drivers|NVIDIA ForceWare 190.38 64-bit|
|Desktop Resolution|1920 x 1200|
|OS|Windows 7 x64|
Sequential Read/Write Speed
Using the 6-22-2008 build of Iometer I ran a three-minute 2MB sequential test over the entire span of the drive. The results reported are average MB/s over the entire test length:
Random Read/Write Speed
This test reads/writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time.
I've had to run this test two different ways thanks to the way the newer controllers handle write alignment. Without a manually aligned partition, Windows XP executes writes on sector-aligned boundaries while most modern OSes write with 4K alignment. Some controllers take this into account when mapping LBAs to page addresses, which adds management overhead but delivers relatively similar performance regardless of OS/partition alignment. Other controllers skip that overhead and simply perform worse under Windows XP without partition alignment, since file system writes don't automatically line up with the SSD's internal pages.
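The alignment arithmetic is simple to sketch (the 4KB page size is an assumption for illustration; actual NAND page sizes vary):

```python
SECTOR = 512
PAGE = 4096  # assumed NAND page / modern file-system cluster size

def is_4k_aligned(start_sector: int) -> bool:
    """True if a partition starting at this sector keeps 4KB clusters
    aligned with 4KB boundaries on the SSD."""
    return (start_sector * SECTOR) % PAGE == 0

# Windows XP's default first partition starts at sector 63,
# while Vista/7 use a 1MB offset (sector 2048).
print(is_4k_aligned(63))    # False: every 4KB cluster straddles two pages
print(is_4k_aligned(2048))  # True
```

Sector 63 puts the partition 32,256 bytes into the drive, which is not a multiple of 4,096; every 4KB file-system write then touches two SSD pages instead of one, which is where the XP penalty on the second class of controllers comes from.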
Overall System Performance using PCMark Vantage
Next up is PCMark Vantage, another system-wide performance suite. For those of you who aren’t familiar with PCMark Vantage, it ends up being the most real-world-like hard drive test I can come up with. It runs things like application launches, file searches, web browsing, contacts searching, video playback, photo editing and other completely mundane but real-world tasks. I’ve described the benchmark in great detail before but if you’d like to read up on what it does in particular, take a look at Futuremark’s whitepaper on the benchmark; it’s not perfect, but it’s good enough to be a member of a comprehensive storage benchmark suite. Any performance impacts here would most likely be reflected in the real world.
The HDD specific PCMark Vantage test is where you'll see the biggest differences between the drives:
AnandTech Storage Bench
Note that our 6Gbps controller driver isn't supported by our custom storage bench here, so the C300 results are only offered in 3Gbps mode.
The first in our benchmark suite is a light usage case. The Windows 7 system is loaded with Firefox, Office 2007 and Adobe Reader among other applications. With Firefox we browse web pages like Facebook, AnandTech and Digg. Outlook is also running and we use it to check emails, create and send a message with a PDF attachment. Adobe Reader is used to view some PDFs. Excel 2007 is used to create a spreadsheet, graphs and save the document. The same goes for Word 2007. We open and step through a presentation in PowerPoint 2007 received as an email attachment before saving it to the desktop. Finally we watch a bit of a Firefly episode in Windows Media Player 11.
There’s some level of multitasking going on here but it’s not unreasonable by any means. Generally the application tasks proceed linearly, with the exception of things like web browsing which may happen in between one of the other tasks.
The recording is played back on all of our drives here today. Remember that we’re isolating disk performance; all we’re doing is playing back every single disk access that happened in that ~5 minute period of usage. The light workload is composed of 37,501 reads and 20,268 writes. Over 30% of the IOs are 4KB, 11% are 16KB, 22% are 32KB and approximately 13% are 64KB in size. Less than 30% of the operations are absolutely sequential in nature. Average queue depth is 6.09 IOs.
The performance results are reported in average I/O Operations per Second (IOPS):
If there’s a light usage case there’s bound to be a heavy one. In this test we have Microsoft Security Essentials running in the background with real time virus scanning enabled. We also perform a quick scan in the middle of the test. Firefox, Outlook, Excel, Word and PowerPoint are all used the same as they were in the light test. We add Photoshop CS4 to the mix, opening a bunch of 12MP images, editing them, then saving them as highly compressed JPGs for web publishing. Windows 7’s picture viewer is used to view a bunch of pictures on the hard drive. We use 7-zip to create and extract .7z archives. Downloading is also prominently featured in our heavy test; we download large files from the Internet during portions of the benchmark, as well as use uTorrent to grab a couple of torrents. Some of the applications in use are installed during the benchmark; Windows updates are also installed. Towards the end of the test we launch World of Warcraft, play for a few minutes, then delete the folder. This test also takes into account all of the disk accesses that happen while the OS is booting.
The benchmark is 22 minutes long and it consists of 128,895 read operations and 72,411 write operations. Roughly 44% of all IOs were sequential. Approximately 30% of all accesses were 4KB in size, 12% were 16KB in size, 14% were 32KB and 20% were 64KB. Average queue depth was 3.59.
The gaming workload is made up of 75,206 read operations and only 4,592 write operations. Only 20% of the accesses are 4KB in size, nearly 40% are 64KB and 20% are 32KB. A whopping 69% of the IOs are sequential, meaning this is predominantly a sequential read benchmark. The average queue depth is 7.76 IOs.
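For illustration, here is a minimal sketch of how a recorded trace could be reduced to statistics like the ones quoted above. The function name and mini-trace are invented, real traces also carry timestamps for the queue-depth math, and "sequential" here simply means an operation starting exactly where the previous one ended:

```python
from collections import Counter

def summarize(ops):
    """ops: list of (offset_bytes, size_bytes) in issue order.
    Returns the IO size mix and the fraction of sequential accesses."""
    sizes = Counter(size for _, size in ops)
    size_mix = {s: n / len(ops) for s, n in sizes.items()}
    sequential = sum(
        1 for (off, sz), (nxt, _) in zip(ops, ops[1:]) if nxt == off + sz
    )
    return size_mix, sequential / len(ops)

# Hypothetical mini-trace: two back-to-back 64KB reads, then a random 4KB IO.
trace = [(0, 65536), (65536, 65536), (999424, 4096)]
mix, seq_frac = summarize(trace)
print(mix, seq_frac)
```

Run over the real recordings, a summarizer like this is what yields figures such as "20% of the accesses are 4KB" and "69% of the IOs are sequential."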
SandForce has come a long way in a very short period of time. The current version of the SF-1200 firmware is deemed production worthy and we should see the first Agility 2 and Vertex 2 drives show up at etailers in the next week or so. The drives have done very well in our tests as well as in my personal systems, although I still recommend waiting to see if any strange bugs crop up over the coming months if you’re not fond of making potentially risky purchases.
Our coverage over the past few weeks should mean today’s Vertex 2 review is no real surprise. You get the performance of Corsair’s Force F100 drive but with SandForce’s mass production (3.0.5) firmware. For the majority of desktop users I’m not sure there’s much point to the extra small file random write performance, but if you have a workload that demands it - the Vertex 2 doesn’t disappoint.
The Agility 2 seems to make more sense for most desktop users. You don’t save a ton of money, but you don’t appear to lose any real-world performance either. The cost per GB of these drives is still higher than Intel’s X25-M, but you do get a corresponding increase in performance.
One thing that hurts the cost structure of these SandForce drives is the 28% of the NAND dedicated to spare area. SandForce originally told me that we’d see drives with 7% overprovisioning after the SF-1200s hit. I wonder how that will impact performance...