AnandTech Storage Bench 2010

To keep things consistent we've also included our older Storage Bench. Note that the old storage test system doesn't have a SATA 6Gbps controller, so we only have one result for the 6Gbps drives.

The first test in our benchmark suite is a light/typical usage case. The Windows 7 system is loaded with Firefox, Office 2007 and Adobe Reader among other applications. With Firefox we browse web pages like Facebook, AnandTech, Digg and other sites. Outlook is also running; we use it to check email and to create and send a message with a PDF attachment. Adobe Reader is used to view some PDFs. Excel 2007 is used to create a spreadsheet and graphs, then save the document; the same goes for Word 2007. We open and step through a presentation in PowerPoint 2007, received as an email attachment, before saving it to the desktop. Finally we watch a bit of a Firefly episode in Windows Media Player 11.

There’s some level of multitasking going on here but it’s not unreasonable by any means. Generally the application tasks proceed linearly, with the exception of things like web browsing which may happen in between one of the other tasks.

The recording is played back on all of our drives here today. Remember that we’re isolating disk performance; all we’re doing is playing back every single disk access that happened in that ~5 minute period of usage. The light workload is composed of 37,501 reads and 20,268 writes. Over 30% of the IOs are 4KB, 11% are 16KB, 22% are 32KB and approximately 13% are 64KB in size. Less than 30% of the operations are strictly sequential in nature. Average queue depth is 6.09 IOs.
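For those curious how we arrive at numbers like these, every IO in a trace carries an operation type, an offset and a transfer size, so tallying the mix is just bookkeeping. Below is a minimal sketch in Python; the CSV trace format, filename and field layout are assumptions for illustration, not our actual tooling:

    # Summarize read/write counts and the transfer-size mix from a disk trace.
    # Assumed CSV format (hypothetical): op(R/W), offset_bytes, size_bytes
    import csv
    from collections import Counter

    reads = writes = 0
    size_hist = Counter()

    with open("light_workload_trace.csv", newline="") as f:
        for op, offset, size in csv.reader(f):
            if op == "R":
                reads += 1
            else:
                writes += 1
            size_hist[int(size)] += 1  # bucket by exact transfer size

    total = reads + writes
    print(f"{reads:,} reads, {writes:,} writes")
    for size, count in size_hist.most_common():
        print(f"{size // 1024}KB: {100 * count / total:.0f}% of IOs")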

The performance results are reported in average I/O Operations per Second (IOPS):

AnandTech Storage Bench - Typical Workload

If we strip 6Gbps out of the equation completely, the SSD 320 does very well in our old light workload. You're looking at performance at the top of the pack from a mainstream offering.

If there’s a light usage case there’s bound to be a heavy one. In this test we have Microsoft Security Essentials running in the background with real-time virus scanning enabled. We also perform a quick scan in the middle of the test. Firefox, Outlook, Excel, Word and PowerPoint are all used the same way they were in the light test. We add Photoshop CS4 to the mix, opening a bunch of 12MP images, editing them, then saving them as highly compressed JPGs for web publishing. Windows 7’s picture viewer is used to view a bunch of pictures on the hard drive. We use 7-zip to create and extract .7z archives. Downloading is also prominently featured in our heavy test; we download large files from the Internet during portions of the benchmark, as well as use uTorrent to grab a couple of torrents. Some of the applications in use are installed during the benchmark; Windows updates are installed as well. Towards the end of the test we launch World of Warcraft, play for a few minutes, then delete the folder. This test also takes into account all of the disk accesses that happen while the OS is booting.

The benchmark is 22 minutes long and it consists of 128,895 read operations and 72,411 write operations. Roughly 44% of all IOs were sequential. Approximately 30% of all accesses were 4KB in size, 12% were 16KB in size, 14% were 32KB and 20% were 64KB. Average queue depth was 3.59.
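A note on the queue depth figure: it's a time-weighted average of how many IOs are outstanding at once. Given issue and completion timestamps for each IO, it can be computed with a simple event sweep; here's a sketch, with the trace format again an assumption rather than our actual tooling:

    # Time-weighted average queue depth from (issue_time, completion_time)
    # pairs, one per IO. Timestamps in seconds; format is hypothetical.
    def average_queue_depth(ios):
        events = []
        for issued, completed in ios:
            events.append((issued, +1))     # IO enters the queue
            events.append((completed, -1))  # IO leaves the queue
        events.sort()

        depth, weighted, prev_t = 0, 0.0, events[0][0]
        for t, delta in events:
            weighted += depth * (t - prev_t)  # time spent at current depth
            depth += delta
            prev_t = t
        return weighted / (events[-1][0] - events[0][0])

    # Two overlapping IOs over one second -> 1.4 IOs outstanding on average
    print(average_queue_depth([(0.0, 0.6), (0.2, 1.0)]))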

AnandTech Storage Bench - Heavy Multitasking Workload

Crank up the workload and the 320 falls a bit behind the rest of the competitors. Last year's heavy multitasking workload is nothing compared to the one we introduced earlier this year, so it's still fairly light by comparison, but it's clear that for normal usage the 320's 3Gbps performance is quite good.

The gaming workload is made up of 75,206 read operations and only 4,592 write operations. Only 20% of the accesses are 4KB in size, nearly 40% are 64KB and 20% are 32KB. A whopping 69% of the IOs are sequential, meaning this is predominantly a sequential read benchmark. The average queue depth is 7.76 IOs.
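The sequential percentage is worth a word: an access is typically counted as sequential when it starts exactly where the previous access ended. A minimal sketch of that classification (trace format assumed, as before):

    # Classify IOs as sequential vs. random by address continuity.
    # ios: (offset_bytes, size_bytes) tuples in issue order; format hypothetical.
    def sequential_fraction(ios):
        sequential = 0
        expected = None  # where a sequential successor would have to start
        for offset, size in ios:
            if offset == expected:
                sequential += 1
            expected = offset + size
        return sequential / len(ios)

    # Three back-to-back 64KB reads, then a random 4KB access -> 50%
    trace = [(0, 65536), (65536, 65536), (131072, 65536), (999424, 4096)]
    print(f"{sequential_fraction(trace):.0%} sequential")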

AnandTech Storage Bench - Gaming Workload

Comments

  • Chloiber - Tuesday, March 29, 2011 - link

    True.

    It is true that the REAL capacity of flash drives is base 2. The NAND chips are.
    So a 120GB drive has in reality 128GB of flash.

    So the spare area is 1 - 120/128 = 6.25%. The 300GB version likewise has 1 - 300/320 = 6.25% spare area (which is exactly the same).

    Anand is confusing things. The user gets 300GB, just as he gets 300GB when buying a HDD. Windows on the other hand is showing us "GiB", not "GB". But it's not a real difference in size. 74.5GiB EQUALS 80GB. It's the same thing. Compare the BYTE numbers if you want to be sure, not the KB/MB/GB/TB numbers.

    I'm actually shocked that this still gets confused.
  • overzealot - Tuesday, March 29, 2011 - link

    RAM was not the only thing that was calculated using binary pseudo-metric prefixes. Perhaps you aren't old enough to remember the days before kibibytes, when all computer disks and tapes were measured as such.
  • noblemo - Wednesday, April 6, 2011 - link

    Conversion from GB to GiB:
    320 / 1.024^3 = 298 GiB

    Subtract 6.25% spare area:
    298 x (1-0.0625) = 279 GiB
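Working the same numbers in code makes the GB/GiB distinction from the comments above explicit; a quick sketch using the figures the thread arrives at:

    # GB (decimal, 10^9 bytes) vs. GiB (binary, 2^30 bytes)
    raw_bytes = 320 * 10**9               # "320GB" in decimal units
    gib = raw_bytes / 2**30
    print(f"320GB = {gib:.0f}GiB")        # 298GiB, i.e. 320 / 1.024^3

    # Remove the 6.25% spare area (300GB usable out of 320GB of NAND):
    print(f"usable: {gib * (1 - 0.0625):.0f}GiB")   # 279GiB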
  • MeanBruce - Monday, March 28, 2011 - link

    You can blather all the technostats you want; 25nm, who cares, didn't change a thing! My next SSD (would never have said this a year ago) looks like the Corsair Force GT! Read/Write 500/500 is all you need to say! ;)
  • zanon - Monday, March 28, 2011 - link

    Granted, this isn't a stunning offering. But one thing I do look forward to is that I think we will finally start to see updated filesystems appear in the near future. For example, ZFS looks like it will at last arrive as a full Mac OS X file system via Z-410 this summer.

    One of the features of modern filesystems is full filesystem-level compression and encryption (which really is where such features belong). I will be looking forward to (hopefully) seeing you test how this affects the SSD scene. My principal concern with SandForce's strategy has always been this: sooner or later, OS makers or someone else will finally get with it and make full compression standard in the FS. At that point, the "worst case" scenario of fully random data will become the *only* scenario. That still leaves a (huge huge) legacy market, and likely time to adapt, but I do wonder if it will shake up the SSD scene once again.
  • overzealot - Tuesday, March 29, 2011 - link

    I don't agree. If the controllers are powerful enough to do encryption and compression in real time, then it should still be done at the disk level.
    You can still encrypt/compress in your OS as you please, but I like having the performance.
    PS: not dogging on ZFS, I use it all the time with OpenIndiana.
  • marc1000 - Monday, March 28, 2011 - link

    Isn't the Vertex 3 already on the market???

    http://www.amazon.com/OCZ-Technology-Vertex-2-5-In...
  • aork - Monday, March 28, 2011 - link

    That's for pre-order. Notice "Usually ships within 1 to 2 months."
  • piquadrat - Monday, March 28, 2011 - link

    Is an ATA password of at most 8 characters sufficient, security-wise, against thieves?
    One program I sometimes use, MHDD, offers an ATA password reset option.
    If someone can bypass the ATA password so easily, what is all this AES-128 for?
    Could someone explain this matter to me?
  • DesktopMan - Monday, March 28, 2011 - link

    The password is used to generate the encryption key, much like how software products such as TrueCrypt do it.

    The max length of ATA passwords is 32 characters, which can encode more bits than the actual key, depending on the character set. 8 characters is not much though, and how risky that is depends on how these drives deal with brute-force attempts.

    Old drives are simply locked with the ATA password, which can be circumvented with master passwords or firmware commands in some cases.
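As a generic illustration of the password-to-key scheme described above, here is a textbook password-based key derivation in Python; this is purely illustrative, not Intel's actual implementation:

    # Derive a 128-bit key from a short ATA password with PBKDF2.
    # Hypothetical parameters; real drives implement this internally.
    import hashlib, os

    password = b"hunter2"   # 8 characters: far less than 128 bits of entropy
    salt = os.urandom(16)   # a real drive would persist its salt internally
    key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=16)
    print(f"128-bit key: {key.hex()}")

However strong AES-128 itself is, the derived key is only as strong as the password behind it, which is why the 8-character question above comes down to how aggressively the drive throttles repeated unlock attempts.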
