AnandTech Storage Bench 2011

Back in 2011 (which seems like so long ago now!), we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. The MOASB (Mother of All SSD Benchmarks), officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on peak IO performance and basic garbage collection routines. There is a lot of downloading and application installing during the course of this test. Our thinking was that it's during application installs, file copies, downloads, and multitasking that you can really notice performance differences between drives. The full description of the Heavy test can be found here, while the Light workload details are here.
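AnandTech's traces and playback tool aren't published here, but the idea behind trace playback is straightforward: record the I/O a real session issued, then re-issue the same operations in the same order against the drive under test and measure the resulting data rate. As a minimal sketch only, assuming a simple CSV trace of (operation, offset, size) records rather than the real trace format:

```python
# Minimal sketch of trace playback (not AnandTech's actual harness):
# replay a recorded I/O trace against a target file or block device so the
# same mix of reads and writes is issued in the same order on every run.
import csv
import os
import sys
import time

def replay_trace(trace_path, target_path):
    """Replay a CSV trace of (op, offset, size) records and return MB/s."""
    fd = os.open(target_path, os.O_RDWR)
    started = time.perf_counter()
    total_bytes = 0
    try:
        with open(trace_path, newline="") as f:
            for op, offset, size in csv.reader(f):
                offset, size = int(offset), int(size)
                if op == "read":
                    total_bytes += len(os.pread(fd, size, offset))
                else:  # write: use a fixed pattern so runs stay repeatable
                    total_bytes += os.pwrite(fd, b"\xAA" * size, offset)
    finally:
        os.close(fd)
    elapsed = time.perf_counter() - started
    return total_bytes / elapsed / 1e6  # average data rate in MB/s

if __name__ == "__main__":
    print(f"{replay_trace(sys.argv[1], sys.argv[2]):.1f} MB/s")
```

A real harness would also reproduce queue depths and the idle time between operations, which this sketch ignores; it is only meant to illustrate what "playing a trace back in a repeatable manner" means.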

Heavy Workload 2011 - Average Data Rate

The same goes for our 2011 Storage Bench: the XP941 is practically untouchable. Only in the Light Workload test is the 8-controller OCZ behemoth able to edge past it, and then only by a small margin; other than that, there is nothing that can challenge the XP941. The consumer-oriented OCZ RevoDrive comes close, but the XP941 once again shows how a good single-controller design can beat a RAID 0 configuration.

Light Workload 2011 - Average Data Rate

Comments

  • McTeags - Thursday, May 15, 2014 - link

    I think there is a spelling mistake in the first sentence. Did you mean SATA instead of PATA? I don't know all of the tech lingo so maybe I'm mistaken.
  • McTeags - Thursday, May 15, 2014 - link

    Please disregard my comment. I googled it...
  • BMNify - Thursday, May 15, 2014 - link

    SATA Express [serial], SATA [serial], PATA [parallel], SCSI [several variants, chainable to 15+ drives on one cable; we should have used that as the generic interface], Shugart... these are all drive interfaces, and there are more going back in the day.
  • metayoshi - Thursday, May 15, 2014 - link

    "It's simply much faster to move electrons around a silicon chip than it is to rotate a heavy metal disk."

    While SSD performance blows HDDs out of the water, the quoted statement is technically not correct. If you take a single channel NAND part and put it up against today's mechanical HDDs, the HDD will probably blow the NAND part out of the water in everything except for random reads.

    What really kills HDD performance isn't just the rotational speed as much as it is the track-to-track seek + rotational latency of a random workload. A sequential workload will reduce the seek and rotational latency so much that the areal density of today's 5 TB HDDs will give you pretty good numbers. In a random workload, however, the next block of data you want to read is most likely on a different track, different platter, and different head. Now it has to seek the heads to the correct track, perform a head switch because only 1 head can be on at a time, and then wait for the rotation of the disk for that data block to be under the head.

    A NAND part with a low number of channels will give you pretty crappy performance. Just look at the NAND in smartphones and tablets of today, and in the SD cards and USB thumb drives of yesteryear. What really makes SSDs shine is that they have multiple NAND parts on these things, and that they stripe the data across a huge number of channels. Just think RAID 0 with HDDs, except this time it's done by the SSD controller itself, so the motherboard only needs 1 SATA (or other, like PCIe) interface to the SSD. That really put SSDs on the map, and if a single NAND chip can do 40 MB/s writes, think about 16 of them doing it at the same time (a rough worked version of this arithmetic appears after the comment thread below).

    So yes, there's no question that the main advantage of SSDs vs HDDs is an electrical vs mechanical thing. It's simply not true, though, that reading the electrical signals off of a single NAND part is faster than reading the bits off of a sequential track in an HDD. It's a lot of different things working together.
  • iwod - Friday, May 16, 2014 - link

    I skim-read it. A few things I noticed: there's no power usage testing, but 0.05 W idle is pretty amazing. Since PCIe supplies the power as well, I guess the power management could be much more fine-grained? Active was 5.6 W, though. So at the same time we want more performance (i.e. a faster controller) while using much lower power; it seems there is more work to do.

    I wonder if the relatively slow random I/O is due to Samsung betting on NVMe instead of AHCI.
  • iwod - Friday, May 16, 2014 - link

    It also proves my point about random I/O. We can see the XP941's random I/O sitting at the bottom of the chart while it still gets much better overall benchmark results. Sequential I/O matters, and it matters a lot. The PCIe x4 interface will once again become the bottleneck until we move to PCIe 3.0, which I hope we do in 2015.
    Although I have this suspicious feeling Intel is delaying or slowing down our progression.
  • nevertell - Friday, May 16, 2014 - link

    Can't you place the bootloader on a hard drive, yet have it load the OS from the SSD?
  • rxzlmn - Friday, May 16, 2014 - link

    'Boot Support: Mac? Yes. PC? Mostly No.'

    Uh, a Mac is a PC. On a serious tech site I don't expect lines like that.
  • Penti - Friday, May 16, 2014 - link

    Firmware differences.
  • Haravikk - Friday, May 16, 2014 - link

    It still surprises me that PCs can have so many hurdles when it comes to booting from various devices; for years now Macs have been able to boot from just about anything you plug into them (that can store data of course). I have one machine already that uses an internal drive combined with an external USB drive as a Fusion Drive, and it not only boots just fine, but the Fusion setup really helps eliminate the USB performance issues.

    Anyway, it's good to see PCIe storage properly reaching general release; it's probably going to be a while before I adopt it on PCs, as I'm still finding regular SATA or M.2 flash storage runs just fine for my needs, but having tried Apple's new Mac Pros, the PCIe flash really is awesome. Hopefully the next generation of Mac Pros will have connectors for two, as combined in a RAID-0 or RAID-1 the read performance can be absolutely staggering.
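As a rough illustration of the arithmetic in metayoshi's comment above: the 40 MB/s per-chip write speed and 16-channel figures come from the comment itself, while the 7200 RPM spindle speed and 8.5 ms average seek time are assumed values for a typical desktop hard drive, not measurements from this review.

```python
# Rough, illustrative arithmetic only -- all figures are assumptions or taken
# from the comment above, not measurements from the review.

RPM = 7200                               # assumed desktop HDD spindle speed
avg_seek_ms = 8.5                        # assumed average seek time
avg_rotation_ms = (60_000 / RPM) / 2     # on average, wait half a revolution
random_access_ms = avg_seek_ms + avg_rotation_ms

iops = 1000 / random_access_ms
mb_per_s = iops * 4 * 1024 / 1e6         # 4 KiB transferred per random access

print(f"HDD random 4 KiB: ~{random_access_ms:.1f} ms/access, "
      f"~{iops:.0f} IOPS, ~{mb_per_s:.2f} MB/s")

per_chip_write_mbs = 40                  # single NAND chip write speed (from the comment)
channels = 16                            # channels striped by the controller (from the comment)
print(f"NAND striped across {channels} channels: "
      f"~{per_chip_write_mbs * channels} MB/s of sequential writes")
```

Even with generous assumptions for the hard drive, a random 4 KiB workload lands well under 1 MB/s, while striping modest per-chip write speeds across many channels lands in the hundreds of MB/s, which is exactly the comment's point.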
