The Test

For our tests we looked at the combined performance of all NAND partitions on the Intel SSD 910: two on the 400GB model and four on the 800GB model. We created a two- or four-drive RAID-0 array from all of the active controllers on the card to show aggregate performance. If you're going to dedicate one partition to each VM in a virtualized environment, you can expect per-partition performance to be roughly a quarter (800GB) or half (400GB) of what you see here.

CPU: Intel Core i7 2600K running at 3.4GHz (Turbo & EIST Disabled)
Motherboard: Intel H67 Motherboard
Chipset: Intel H67
Chipset Drivers: Intel 9.1.1.1015 + Intel RST 10.2
Memory: Qimonda DDR3-1333 4 x 1GB (7-7-7-20)
Video Card: eVGA GeForce GTX 285
Video Drivers: NVIDIA ForceWare 190.38 64-bit
Desktop Resolution: 1920 x 1200
OS: Windows 7 x64

Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger, hence the four Iometer tests we use in all of our reviews. For our enterprise suite we make a few changes to our usual tests.

Our first test writes 4KB in a completely random pattern over all LBAs on the drive (compared to an 8GB address space in our desktop reviews). We perform 32 concurrent IOs (compared to 3) and run the test until the drive being tested reaches its steady state. The results reported are the average MB/s over the entire run. We use both standard pseudo-randomly generated data for each write and fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely fall somewhere between the two values you see for each drive in the graphs. For an understanding of why this matters, read our original SandForce article.
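To make the workload concrete, here's a minimal Python sketch of the access pattern and payloads described above. The drive capacity, seed values, and function names are illustrative assumptions, not Iometer's actual implementation:

```python
import random

IO_SIZE = 4 * 1024           # 4KB transfers
QUEUE_DEPTH = 32             # 32 IOs kept in flight at once
DRIVE_BYTES = 800 * 10**9    # assumed capacity: full-span test on an 800GB drive

def random_4k_offsets(n, seed=0):
    """Return n 4KB-aligned byte offsets spread uniformly over all LBAs."""
    rng = random.Random(seed)
    blocks = DRIVE_BYTES // IO_SIZE
    return [rng.randrange(blocks) * IO_SIZE for _ in range(n)]

# Two payload types, mirroring the compressible/incompressible split:
# repeating data is a SandForce controller's best case, fully random
# data its worst case.
compressible = bytes(IO_SIZE)                        # all zeros, highly compressible
incompressible = random.Random(1).randbytes(IO_SIZE)  # incompressible payload

offsets = random_4k_offsets(QUEUE_DEPTH)
```

A real run would issue these as overlapping asynchronous writes and keep refilling the queue until throughput flattens out at steady state.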

Enterprise Iometer - 4KB Random Read

Intel was one of the first mainstream SSD vendors to prioritize random read performance, and it applies just as much to its enterprise offerings. The 800GB Intel SSD 910 delivers gobs of performance in our random read test. The 400GB version is also good but it's interesting to note Toshiba's 400GB 2.5" SAS drive does just as well here.

Enterprise Iometer - 4KB Random Write

Random write performance is good, although still far away from what the SandForce based solutions from OCZ are able to deliver with compressible data. Throw anything other than pure text into your database however and Intel's drives become the fastest offerings once again.

Sequential Read/Write Speed

As in our other Enterprise Iometer tests, queue depths are much higher in our sequential benchmarks than in their desktop counterparts. To measure sequential performance I ran a one-minute 128KB sequential test over the entire span of the drive at a queue depth of 32. The results reported are the average MB/s over the entire test length.
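Since the reported figure is just bytes moved divided by wall-clock time, the arithmetic can be sketched as follows (the function name and the decimal-megabyte convention are my assumptions; Iometer's exact reporting may differ):

```python
IO_SIZE = 128 * 1024     # 128KB transfers
TEST_SECONDS = 60        # one-minute run

def avg_mb_per_s(completed_ios, seconds=TEST_SECONDS):
    """Average throughput in MB/s (1 MB = 10^6 bytes) over the whole run."""
    return completed_ios * IO_SIZE / (10**6 * seconds)

# Roughly how many 128KB IOs per second a ~2GB/s result implies:
ios_per_second_at_2gbps = int(2 * 10**9 / IO_SIZE)
```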

Enterprise Iometer - 128KB Sequential Read

Sequential read performance of the 800GB 910 is nearly 2GB/s, competitive with OCZ's 600GB Z-Drive R4. This is really where PCIe based SSDs shine: there's simply no way to push this much bandwidth over a single SATA/SAS port. The 400GB model cuts performance in half, but we're still talking about nearly 1GB/s. Of the single drives, Toshiba's 400GB SAS drive does the best at 521MB/s. Micron's P400e is a close second among 2.5" drives.
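The SATA/SAS ceiling mentioned above is easy to estimate: a 6Gbps link with 8b/10b encoding tops out at 600MB/s of payload before protocol overhead, far below what these PCIe cards sustain. A back-of-the-envelope sketch (the helper name is my own):

```python
def link_mb_per_s(gbps, enc_num=8, enc_den=10):
    """Payload ceiling in decimal MB/s for an 8b/10b-encoded serial link."""
    bits_per_second = gbps * 10**9 * enc_num / enc_den
    return bits_per_second / 8 / 10**6

sata_6gbps = link_mb_per_s(6)   # 600 MB/s ceiling, before protocol overhead
sata_3gbps = link_mb_per_s(3)   # 300 MB/s
```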

Enterprise Iometer - 128KB Sequential Write

Here we finally see a difference between running the 800GB 910 in its standard and high performance modes. In its max performance state the 800GB 910 is good for 1.5GB/s. OCZ's Z-Drive R4 is still a bit quicker with compressible data, but throw any incompressible (or encrypted) data at the R4 and its performance is cut in half. With the TDP cap left in place, we can still write sequentially to the 910 at 1GB/s.

39 Comments

  • web2dot0 - Friday, August 10, 2012 - link

    That's why you need a comparison, buddy. Otherwise, why don't we just read off the spec sheet and declare a winner? Let's face it, the Z-Drive R4 is NO FusionIO, ok.

    FusionIO is a proven entity backed by a number of reputable companies (Dell, HP, etc...). Those companies didn't sign on because the cards are crap. Who's backing the Z-Drive?

    They are the standard by which enterprise SSDs are measured. At least, that's the general consensus.
  • happycamperjack - Friday, August 10, 2012 - link

    Spec sheet? Did you even read the benchmarks in that comparison? FusionIO's ioDrive clearly lost out there except in low queue depth situations.

    As for who's backing OCZ's enterprise SSDs, let's see: Microsoft, SAP, eBay, just to name a few. I don't know where you get the idea that OCZ's enterprise products don't meet the standard, but OCZ is currently the 4th largest enterprise SSD provider. So you are either very misinformed, or just a clueless FusionIO fanboy.
  • web2dot0 - Sunday, August 12, 2012 - link

    Come on dude.

    You are clearly looking at the specsheets. The feature sets offered by FusionIO cards are light years ahead of OCZ cards.

    The toolset is also light years ahead. It's not always just about performance. Otherwise, everyone will be using XEN and nobody will be using VMWARE. Get it?

    I would like to see a direct comparison of FusionIO cards (on workloads that matter to enterprises), not what you THINK they will do.

    You are either very much misinformed or you are a clueless kid.
  • happycamperjack - Thursday, August 16, 2012 - link

    What spec sheet? I'm comparing the benchmark charts on the later pages, which you obviously have not clicked through. There are enterprise comparisons too, ok kid?

    What's great about FIO is its software stack for big data and its low latency and strong low queue depth performance. But comparing just single-card performance per GB of price, FIO is overpriced IMO. And FIO's PCIe cards' lackluster performance at high queue depths highlights what could be the doom of FPGA PCIe cards as the cheap ASIC controllers mature and overthrow the FPGA cards through their sheer numbers on a board.

    My guess is that in 2 years, FPGA PCIe SSDs will be used only in some specialized Tier 0 storage for high performance computing that benefits from the FPGA's feature set. Similar to the fate of Rambus's RDRAM.

    And if OCZ is good enough for MS's Azure cloud, I don't see why it's not good enough for other enterprises.
  • hmmmmmm - Saturday, August 11, 2012 - link

    Unfortunately, they are comparing the 910 to a discontinued Fusion-io card from 2009. I would like to see a current card in the comparison, so we can compare against what's on the market today.
  • happycamperjack - Thursday, August 16, 2012 - link

    I'd love to see some ioDrive 2 comparisons too. Unfortunately I can't find any.
  • zachj - Thursday, August 9, 2012 - link

    Does the 910 have a capacitor to drain contents of DRAM to flash during a power outage?
  • FunBunny2 - Thursday, August 9, 2012 - link

    It looked like it, but I didn't read a mention. Could be bad eyesight.
  • erple2 - Thursday, August 9, 2012 - link

    For the market this targets, you should never have a power outage that affects your server. These are too expensive not to have some sort of redundant power source: at least a solid UPS, or better yet, a server room backup generator.

    That having been said, if you look at the main PCB, you can see 4 capacitors of some sort.
  • mike_ - Saturday, August 11, 2012 - link

    >>For the market that this targets, you should never have a power outage that affects your server.

    You'd wish it weren't so, but environments can and will fail. If it has capacitors and such, that's great; if it doesn't, this device is effectively useless. Surprised it didn't get mentioned :)
