AnandTech Storage Bench

To avoid any potential optimizations for industry standard benchmarks, and to provide another example of real-world performance, we've assembled our own storage benchmark, which we've creatively named the AnandTech Storage Bench.

The first test in our benchmark suite is a light/typical usage case. The Windows 7 system is loaded with Firefox, Office 2007 and Adobe Reader, among other applications. With Firefox we browse web pages like Facebook, AnandTech, Digg and other sites. Outlook is also running, and we use it to check email as well as create and send a message with a PDF attachment. Adobe Reader is used to view some PDFs. Excel 2007 is used to create a spreadsheet and graphs and to save the document. The same goes for Word 2007. We open and step through a presentation in PowerPoint 2007 received as an email attachment before saving it to the desktop. Finally we watch a bit of a Firefly episode in Windows Media Player 11.

There’s some level of multitasking going on here, but it’s not unreasonable by any means. Generally the application tasks proceed linearly, with the exception of things like web browsing, which may happen in between the other tasks.

The recording is played back on all of the drives here today. Remember that we’re isolating disk performance; all we’re doing is playing back every single disk access that happened in that ~5 minute period of usage. The light workload is composed of 37,501 reads and 20,268 writes. Over 30% of the IOs are 4KB, 11% are 16KB, 22% are 32KB and approximately 13% are 64KB in size. Less than 30% of the operations are strictly sequential in nature. Average queue depth is 6.09 IOs.
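For reference, the read/write mix of the trace follows directly from those counts. A quick sketch in Python, using only the figures quoted above:

```python
# Light workload trace composition, figures from the text above.
reads, writes = 37_501, 20_268
total = reads + writes

read_share = reads / total
write_share = writes / total

print(f"total IOs:   {total}")            # 57,769 operations
print(f"read share:  {read_share:.1%}")   # ~64.9% reads
print(f"write share: {write_share:.1%}")  # ~35.1% writes
```

In other words, roughly two thirds of the light workload's operations are reads.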

The performance results are reported in average I/O Operations per Second (IOPS):

AnandTech Storage Bench - Typical Workload

The Martini controller improves performance by around 10% compared to the Barefoot-based Corsair Nova. That's enough to bring Indilinx's latest offering within striking distance of the new V+100 and the smaller capacity SandForce drives; the larger capacity SF drives, however, remain untouchable.

If there’s a light usage case there’s bound to be a heavy one. In this test we have Microsoft Security Essentials running in the background with real time virus scanning enabled. We also perform a quick scan in the middle of the test. Firefox, Outlook, Excel, Word and PowerPoint are all used the same as they were in the light test. We add Photoshop CS4 to the mix, opening a bunch of 12MP images, editing them, then saving them as highly compressed JPGs for web publishing. Windows 7’s picture viewer is used to view a bunch of pictures on the hard drive. We use 7-zip to create and extract .7z archives. Downloading is also prominently featured in our heavy test; we download large files from the Internet during portions of the benchmark, as well as use uTorrent to grab a couple of torrents. Some of the applications in use are installed during the benchmark; Windows updates are also installed. Towards the end of the test we launch World of Warcraft, play for a few minutes, then delete the folder. This test also takes into account all of the disk accesses that happen while the OS is booting.

The benchmark is 22 minutes long and consists of 128,895 read operations and 72,411 write operations. Roughly 44% of all IOs are sequential. Approximately 30% of all accesses are 4KB in size, 12% are 16KB, 14% are 32KB and 20% are 64KB. Average queue depth is 3.59.

AnandTech Storage Bench - Heavy Multitasking Workload

The Toshiba controllers did very well in our heavy multitasking workload, as did the old Barefoot, but Indilinx's Martini outdoes them both. The Vertex Plus is now our highest performing drive here. The SandForce drives fall short because much of this workload deals with incompressible data (JPEGs, .7z archives, etc.).

The gaming workload is made up of 75,206 read operations and only 4,592 write operations. Only 20% of the accesses are 4KB in size, nearly 40% are 64KB and 20% are 32KB. A whopping 69% of the IOs are sequential, meaning this is predominantly a sequential read benchmark. The average queue depth is 7.76 IOs.

AnandTech Storage Bench - Gaming Workload

Gaming performance took a slight hit compared to the old Barefoot. It's not enough to be a huge deal, especially considering the other improvements.


61 Comments


  • scook9 - Tuesday, November 16, 2010 - link

    Looks like a nice update to Barefoot, but compared to the Intel G2 it is nothing groundbreaking. There were also reliability issues with the Barefoot that this would have to overcome (at least for me; that is why I have Intel G2s now). The price is also not that exciting given that the Intel G2 120GB just came out and is well priced.
  • Out of Box Experience - Wednesday, November 17, 2010 - link

    There are reliability issues with SandForce as well.

    I just checked, and there are almost as many complaints at Newegg as there are over at the OCZ Forums regarding their SSDs.

    I could be wrong, but it seems most people complaining about bricked drives or losing all their data are the ones who take OCZ's advice and do all the recommended tweaks and firmware updates.

    I personally torture tested both Vertex and Vertex 2 drives without any tweaks or firmware updates, and I have never had any trouble with either drive.

    I do full formats and partition under XP (both OCZ no-nos).
    I defragged both drives several times and never used TRIM, yet both drives are working fine.

    I think my next torture test will be to use the recommended OCZ tweaks and firmware updates

    NOT!
  • boe - Thursday, November 18, 2010 - link

    I'm anxious to build a new system with a Sandy Bridge processor, an ATI 6970 or nVidia video card, and an SSD. However, I need about 2TB of total storage; these puny SSD capacities would have been very practical about a decade ago, but most of us looking for a high end computer that might include the more expensive SSD need a LOT more space.
  • TF2pro - Friday, November 19, 2010 - link

    Well of course you don't buy an SSD for space. I spent $160 on my 60GB SandForce drive, and that could have bought me 3.5TB worth of mechanical drive space. Would I do it again? 100 times over. I have been building reasonably high end systems for myself for years, and an SSD is the missing element. If you are looking for 2TB SSDs you shouldn't even be reading this article... wait 2-3 years and then come back; until then, if you really wanna see a jump in the speed of your PC, get an SSD. Also, 2TB drives are 90 bucks... so just get both.
  • Qapa - Friday, November 19, 2010 - link

    Well, that's not entirely true... if you have $3k or $4k to spend on SSDs you can buy 900GB-1TB SSDs (just check www.NewEgg.com).

    The question is, does that make sense? Not really, for most people.
  • TF2pro - Friday, November 19, 2010 - link

    Well yes of course you COULD get 2tb of SSD... but if you have that much money you probably aren't reading this forum.. your butler is reading it for you...
  • rbarone69 - Sunday, November 21, 2010 - link

    Tech people come in all sizes, shapes and backgrounds. Some have millions, some don't. I am well off, but technology is a passion for me. My job and my 'fun' do revolve around tech.

    My point is Anand has some of the most informative articles on the internet regarding tech. It doesn't matter how much money you have; you go where the quality is.
  • marraco - Tuesday, November 16, 2010 - link

    I would like to read tests of ICH10 RAID0 made of different disks.

    Is the performance averaged, or bottlenecked to the slower disk?

    I don’t want opinions, as credible as they may be. I want actual real tests.

    Publication of those tests may encourage a new kind of RAID controller, in which the load is balanced between drives of different performance.

    Today’s controllers expect similar performance from each drive, so they balance the load equally across all SSDs.

    But let’s say that tomorrow I buy a second, much faster SSD, and I want to do RAID 0 with my Vertex 2.
    A good controller should split the data in proportion to each drive's speed.
  • DanNeely - Tuesday, November 16, 2010 - link

    That only makes sense if the drives' capacity differences roughly match their speed differences as well; a use case I suspect is too uncommon to be worth developing towards.
  • marraco - Tuesday, November 16, 2010 - link

    Smart point. Now it should be taken into account that speeds and sizes are both increasing greatly and simultaneously.

    And it only makes the RAID controller more interesting.

    It is up to the user to decide if he wants to be bottlenecked by speed, or be forced to reduce partition size to gain speed at the cost of capacity.

    Let's say that an old 100GB SSD is to be paired with a 150GB SSD that is 2X faster (and which should thus store 2X the data of the slower SSD).

    Then choices are:

    1-Reduce the partition on the slower SSD to 75GB, to pair it with the 150GB drive at 2X speed. It would result in a 3X speed improvement (using simplified numbers for clarity) and a 125% increase in storage. Also, 25GB on the slower SSD would remain free for a non-RAID partition, as ICH10 allows today.

    2-Use all the capacity on both drives, as if the new SSD were only 50% faster. It would waste speed, because the improvement would be 2.5X instead of 3X, but no space would need reallocation.

    3-Anything in between.
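The trade-off in the three options above can be checked with a bit of arithmetic. Here's a minimal Python sketch, assuming the hypothetical 100GB/1X and 150GB/2X drives from the comment and the simplification that the drive slowest to finish its share paces the whole array:

```python
def raid0_effective_speed(speeds, shares):
    """Effective speed of a RAID 0 array in which each drive holds the
    given share of the data. The drive that takes longest to transfer
    its share paces the whole array."""
    slowest = max(share / speed for speed, share in zip(speeds, shares))
    return 1.0 / slowest

speeds = (1.0, 2.0)  # hypothetical 100GB drive at 1X, 150GB drive at 2X

# Option 1: stripe in proportion to speed (1:2); both drives finish together.
print(raid0_effective_speed(speeds, (1/3, 2/3)))  # 3.0, i.e. 3X

# Option 2: stripe in proportion to capacity (100:150, i.e. 2:3);
# the slower drive becomes the bottleneck.
print(raid0_effective_speed(speeds, (2/5, 3/5)))  # 2.5, i.e. 2.5X
```

This matches the comment's numbers: speed-proportional striping delivers the full 3X but gives up 25GB of the slower drive, while capacity-proportional striping keeps all 250GB usable at 2.5X.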
