AnandTech Storage Bench 2010

To keep things consistent we've also included our older Storage Bench. Note that the old storage test system doesn't have a SATA 6Gbps controller, so we only have one result for the 6Gbps drives.

The first test in our benchmark suite is a light/typical usage case. The Windows 7 system is loaded with Firefox, Office 2007 and Adobe Reader among other applications. With Firefox we browse sites like Facebook, AnandTech and Digg. Outlook is also running and we use it to check email and to create and send a message with a PDF attachment. Adobe Reader is used to view some PDFs. Excel 2007 is used to create a spreadsheet and graphs and to save the document. The same goes for Word 2007. We open and step through a presentation in PowerPoint 2007 received as an email attachment before saving it to the desktop. Finally we watch a bit of a Firefly episode in Windows Media Player 11.

There’s some level of multitasking going on here but it’s not unreasonable by any means. Generally the application tasks proceed linearly, with the exception of things like web browsing which may happen in between one of the other tasks.

The recording is played back on all of our drives here today. Remember that we’re isolating disk performance; all we’re doing is playing back every single disk access that happened in that ~5 minute period of usage. The light workload is composed of 37,501 reads and 20,268 writes. Over 30% of the IOs are 4KB, 11% are 16KB, 22% are 32KB and approximately 13% are 64KB in size. Less than 30% of the operations are strictly sequential in nature. Average queue depth is 6.09 IOs.
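As a back-of-the-envelope illustration of what those numbers imply, the sketch below estimates the average request size and total data moved during the trace. Note the listed size buckets only cover about 76% of the IOs, so the remainder is lumped into an assumed 8KB "other" bucket; the result is a rough estimate for illustration, not a measured figure.

```python
# Rough estimate of data moved in the light workload trace.
# The ~24% of IOs not covered by the article's size buckets are
# lumped into a hypothetical 8KB bucket, so treat the totals as
# illustrative only.
KB = 1024

total_ios = 37_501 + 20_268           # reads + writes

size_mix = {                          # request size -> fraction of IOs
    4 * KB: 0.30,
    16 * KB: 0.11,
    32 * KB: 0.22,
    64 * KB: 0.13,
    8 * KB: 0.24,                     # assumed bucket for the remainder
}

# Weighted-average request size across the mix.
avg_request = sum(size * frac for size, frac in size_mix.items())

# Total bytes transferred if every IO followed that average.
total_bytes = total_ios * avg_request

print(f"total IOs: {total_ios:,}")
print(f"avg request: {avg_request / KB:.1f} KB")
print(f"approx data moved: {total_bytes / (1024 ** 3):.2f} GB")
```

Under these assumptions the average request works out to roughly 20KB, i.e. only about a gigabyte of data crosses the bus during the whole five-minute trace, which is why queue depth and small-block latency matter more here than raw sequential bandwidth.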

The performance results are reported in average I/O Operations per Second (IOPS):

AnandTech Storage Bench - Typical Workload

If there’s a light usage case there’s bound to be a heavy one. In this test we have Microsoft Security Essentials running in the background with real-time virus scanning enabled. We also perform a quick scan in the middle of the test. Firefox, Outlook, Excel, Word and PowerPoint are all used the same way they were in the light test. We add Photoshop CS4 to the mix, opening a bunch of 12MP images, editing them, then saving them as highly compressed JPGs for web publishing. Windows 7’s picture viewer is used to view a bunch of pictures on the hard drive. We use 7-zip to create and extract .7z archives. Downloading is also prominently featured in our heavy test; we download large files from the Internet during portions of the benchmark, as well as use uTorrent to grab a couple of torrents. Some of the applications in use are installed during the benchmark, and Windows updates are installed as well. Towards the end of the test we launch World of Warcraft, play for a few minutes, then delete the folder. This test also takes into account all of the disk accesses that happen while the OS is booting.

The benchmark is 22 minutes long and it consists of 128,895 read operations and 72,411 write operations. Roughly 44% of all IOs were sequential. Approximately 30% of all accesses were 4KB in size, 12% were 16KB in size, 14% were 32KB and 20% were 64KB. Average queue depth was 3.59.

AnandTech Storage Bench - Heavy Multitasking Workload

The gaming workload is made up of 75,206 read operations and only 4,592 write operations. Only 20% of the accesses are 4KB in size, nearly 40% are 64KB and 20% are 32KB. A whopping 69% of the IOs are sequential, meaning this is predominantly a sequential read benchmark. The average queue depth is 7.76 IOs.

AnandTech Storage Bench - Gaming Workload

Comments

  • erple2 - Friday, April 8, 2011 - link

    I believe that the issue is scale. It would not be possible financially for OCZ to issue a massive recall to change the packaging on all existing drives in the marketplace. Particularly given that while the drives have different performance characteristics (I'd like to see what the real world differences are, not just some contrived benchmark), it's not like one drive fails while another works.

    So it sounds to me like they're doing more or less what's right, particularly given the financial difficulty of a widespread recall.
  • Dorin Nicolaescu-Musteață - Thursday, April 7, 2011 - link

    IOmeter results for the three NAND types are the same for both compressible and incompressible data in "The NAND Matrix". Yet, the text suggests the opposite.
  • gentlearc - Thursday, April 7, 2011 - link

    The Vertex 3 is slower
    It doesn't last as long
    Performance can vary

    Why would you write an entire article justifying a manufacturer's decisions without speaking about how this benefits the consumer?

    The real issue is price and you make no mention of it. If I'm going to buy a car that doesn't go as fast, has a lower safety rating, and whose engine can be any of 4 different brands, the thing had better be cheaper than what's currently on the market. If the 25nm process allows SSDs to break a price barrier, then that should be the focal point of the article. What is your focal point?

    "Why not just keep using 34nm IMFT NAND? Ultimately that product won't be available. It's like asking for 90nm CPUs today, the whole point to Moore's Law is to transition to smaller manufacturing processes as quickly as possible."

    Pardon? This is not a transistor count issue, it's further down the road. I am surprised you would quote Moore's Law as a reason why we should expect worse from the new generation of SSDs. A company's inability to address the complications of a die shrink is not the fault of Moore's Law; it's the fault of the company. As you mentioned in your final words, the 250GB model will probably be able to take better advantage of the die shrink. Please don't justify manufacturers trying to continue using a one-size-fits-all approach without showing how we, the consumers (your readership), are benefited.
  • erple2 - Friday, April 8, 2011 - link

    I think that you've missed the point entirely. The reason why you can't get 34nm IMFT NAND going forward is that Intel is ramping that production down in favor of the smaller manufacturing process. They may already have stopped manufacturing those products in bulk. Therefore, the existing 34nm NAND is "dying off". It won't be available in the future.

    The point about Moore's Law: I think Anand may be stretching the meaning of Moore's Law, but ultimately the reason we get faster, smaller chips is cost. It's unclear to me what the justification behind Moore's Law is, but ultimately that's not important to the actual Law itself. It is simply a reflection of the reality of the industry.

    I believe transistor count IS the issue. The more transistors Intel (or whoever) can pack into a memory module for the same cost to them (thereby increasing capacity), the more likely they are to do that. It is a business, after all. Higher density can be sold to the consumer at a higher price (more GBs = more $s). Intel (the manufacturer of the memory) doesn't care whether the performance of the chips is lower for some end user. As you say, it's up to the controller manufacturer to figure out how to take into account the "issues" involved in higher-density, smaller-transistor memory. If you read the article again, Anand isn't justifying anything - he's simply explaining the reasons why RIGHT NOW, 25nm chips are slower on existing SF drives than 34nm chips are.

    It's more an issue of the manufacturers trying to reuse "old" technology for the current product line, until the SF controller optimizations catch up to the smaller NAND.
  • gentlearc - Saturday, April 9, 2011 - link

    Once again, why write an article explaining a new product that is inferior to the previous generation without any reason why we should be interested? AMD's Radeon HD 6790 review was titled "Coming Up Short At $150" because regardless of the new technology, it offers too little for too much. Where is the same conclusion here?

    Yes, this article was an explanation. Anand does a 14-page explanation, saving a recommendation for the future.

    "The performance impact the 120GB sees when working with incompressible data just puts it below what I would consider the next-generation performance threshold."

    The question remains: why should the 120GB Vertex 3 debut at $90 more than its better-performing older brother?
  • mpx999 - Sunday, April 10, 2011 - link

    If you have a problem with the speed of flash memory, then a good choice for you is a drive with SLC memory, which doesn't have as many speed limitations. Unfortunately manufacturers severely overprice them: SLC drives are much more than 2 times more expensive than MLC ones at the same capacity in GB, despite the fact that the flash itself is only 2 times more expensive. You can buy reasonably priced (2x the MLC version's price) SDHC cards with SLC flash, but you can't get a reasonably priced (2x the MLC version's price) SSD with SLC flash.
  • taltamir - Thursday, April 7, 2011 - link

    "After a dose of public retribution OCZ agreed to allow end users to swap 25nm Vertex 2s for 34nm drives"

    Actually OCZ lets customers swap their 25nm 64Gbit drives for 25nm 32Gbit drives. There are no swaps to the 34nm 32Gbit drives.
  • garloff - Thursday, April 7, 2011 - link

    Anand -- thanks for your excellent coverage on SSDs -- it's the best that I know of. And I certainly appreciate your work with the vendors, pushing them for higher standards -- something from which everybody benefits.

    One suggestion to write power consumption:
    I can see drives that write faster consume more power -- that's no surprise, as they write to more chips (or the SF controller has to compress more data ...) and it's fair. They are done sooner, going back to idle.
    Why don't you actually publish a Ws/GB number, i.e. write a few Gigs and then measure the energy consumed to do that? That would be very meaningful AFAICT.

    (As a second step, you could also do a mix, by having a bench run for 60s, writing a fixed amount of data and then comparing energy consumption -- faster drives will spend longer in idle than slower ones ... that would also be meaningful, but that's maybe a second step. Or you measure the energy consumed in your AS bench, assuming that it transfers a fixed amount of data as opposed to running for a fixed amount of time ...)
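The Ws/GB metric suggested above is simple to compute once you have an average active power draw and a sustained write speed. A minimal sketch follows; every number in it is hypothetical, chosen only to show the arithmetic, not taken from any drive in this review.

```python
# Energy-per-gigabyte sketch for the suggested Ws/GB metric.
# All power and throughput figures below are invented for illustration:
# a faster drive can draw more active power yet still win on Ws/GB
# because it finishes the write (and returns to idle) sooner.

def ws_per_gb(active_watts: float, write_mbps: float) -> float:
    """Energy in watt-seconds consumed per gigabyte written at steady state."""
    seconds_per_gb = 1024 / write_mbps   # 1 GB = 1024 MB
    return active_watts * seconds_per_gb

fast = ws_per_gb(active_watts=3.0, write_mbps=450)   # hypothetical fast drive
slow = ws_per_gb(active_watts=2.0, write_mbps=150)   # hypothetical slow drive

print(f"fast drive: {fast:.1f} Ws/GB")
print(f"slow drive: {slow:.1f} Ws/GB")
```

With these made-up figures the drive drawing 50% more power still uses roughly half the energy per gigabyte written, which is the point of normalizing by work done rather than by time.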
  • Nihil688 - Thursday, April 7, 2011 - link

    Hello all,
    I am kinda new to all this and since I am about to get a new SATA 6Gbps system I need to ask this:

    The main two SSDs that I am considering are Micron's C400 and the OCZ Vertex 3 120GB version.
    I can see that their sequential speeds in both read and write are completely different, with the V3 winning,
    but their random IOPS (always comparing the 120GB V3 and the 128GB C400) differ, with the C400 winning in reads and the V3 winning by a big margin in writes.
    I must say I am planning to install my Windows 7 OS on this new SSD, and what I would
    consider doing is the following:
    -Compiling
    -Installing 1 game at a time, playing, erasing, redo
    -Maybe Adobe work: Photoshop etc

    So I have other hard drives to store stuff but the SSD would make my work and gaming quite faster.
    The question is, the C400 gives 40K IOPS in random reads, which is more important for an OS, whilst the V3 gives better overall stats and is only lacking in random reads. What would be more important for me? Thanks!
  • PaulD55 - Thursday, April 7, 2011 - link

    Connected my 120GB Vertex 3 (purchased from Newegg), booted, and saw that it was not recognized by the BIOS. I then noticed the drive was flashing red and green. Contacted OCZ and was told the drive was faulty and should be returned. Newegg claims they have no idea when these will be back in stock.
