Benchmarks

After running our tests on the ZFS system (both under Nexenta and OpenSolaris) and the Promise M610i, we came up with the following results.  All graphs have IOPS on the Y-axis and disk queue length on the X-axis.
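For readers who want to put together a similar sweep on their own hardware, here is a minimal, hypothetical sketch of scripting an IOPS-versus-queue-depth run. It uses fio, which is not the load generator behind the results in this article, and the device path, runtime, and queue-depth list are placeholders.

```python
#!/usr/bin/env python3
# Hypothetical sketch: sweep outstanding I/Os with fio and record IOPS.
# This is NOT the harness used for the results in this article; the device
# path, runtime, and queue depths below are placeholders.
import json
import subprocess

DEVICE = "/dev/sdX"            # placeholder: LUN exported by the storage system
QUEUE_DEPTHS = [1, 2, 4, 8, 16, 24, 32, 48, 64]

for qd in QUEUE_DEPTHS:
    result = subprocess.run(
        ["fio", "--name=4k-randread", f"--filename={DEVICE}",
         "--rw=randread", "--bs=4k", f"--iodepth={qd}",
         "--ioengine=libaio", "--direct=1",
         "--time_based", "--runtime=60",
         "--output-format=json"],
        capture_output=True, text=True, check=True)
    iops = json.loads(result.stdout)["jobs"][0]["read"]["iops"]
    print(f"queue depth {qd:3d}: {iops:10.0f} IOPS")
```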

4k Sequential Reads

 

In the 4k Sequential Read test, we see that the OpenSolaris and Nexenta systems both outperform the Promise M610i by a significant margin as the disk queue increases.  This is a direct effect of the L2ARC cache.  Interestingly enough, the OpenSolaris and Nexenta systems seem to trend identically, but the Nexenta system is measurably slower than the OpenSolaris system.  We are unsure why, as they are running on the same hardware and the build of Nexenta we ran was based on the same build of OpenSolaris that we tested.  We contacted Nexenta about this performance gap, but they did not have an explanation.  One hypothesis is that the Nexenta software uses more memory for components such as the web GUI, leaving less RAM available for the ARC than on a stock OpenSolaris installation.
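For anyone who wants to check that memory hypothesis on their own systems, below is a minimal sketch that reads the ZFS arcstats kstat on the storage head and prints a few ARC and L2ARC counters; the field selection is illustrative, and a noticeably smaller ARC size or c_max on the Nexenta install would support the idea.

```python
#!/usr/bin/env python3
# Hedged sketch: dump a few ZFS ARC/L2ARC counters from the arcstats kstat.
# Run on the OpenSolaris/Nexenta storage head; the field list is illustrative.
import subprocess

FIELDS = ["size", "c_max", "hits", "misses", "l2_size", "l2_hits", "l2_misses"]

out = subprocess.run(["kstat", "-p", "zfs:0:arcstats"],
                     capture_output=True, text=True, check=True).stdout

stats = {}
for line in out.splitlines():
    name, value = line.split(None, 1)
    key = name.rsplit(":", 1)[-1]
    if key in FIELDS:                 # only the numeric counters we care about
        stats[key] = int(value)

for field in FIELDS:
    if field in stats:
        print(f"{field:10s} {stats[field]:>20,}")
```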

 

4k Random Write

 

In the 4k Random Write test, again the OpenSolaris and Nexenta systems come out ahead of the Promise M610i.  The Promise box seems to be nearly flat, an indicator that it is reaching the limits of its hardware quite quickly.  The OpenSolaris and Nexenta systems write faster as the disk queue increases.  This suggests that they re-order the queued data to make the writes more sequential before they reach the disks.
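To make that intuition concrete, here is a toy sketch (an elevator-style sort, not ZFS's actual copy-on-write write path) showing how a deeper queue of random writes can be serviced with far less head travel once the outstanding requests are re-ordered.

```python
#!/usr/bin/env python3
# Toy illustration, not ZFS's actual write path: with more writes outstanding,
# servicing them in sorted (elevator) order covers far less head travel than
# servicing them in arrival order, so throughput rises with queue depth.
import random

def head_travel(offsets):
    """Total distance the head moves to service offsets in the given order."""
    position, travel = 0, 0
    for offset in offsets:
        travel += abs(offset - position)
        position = offset
    return travel

random.seed(0)
DISK_SIZE = 1_000_000_000            # toy 1 GB address space

for qdepth in (1, 4, 16, 64, 256):
    queued = [random.randrange(DISK_SIZE) for _ in range(qdepth)]
    fifo = head_travel(queued)
    reordered = head_travel(sorted(queued))
    print(f"queue depth {qdepth:4d}: re-ordered writes need "
          f"{reordered / fifo:.0%} of the arrival-order head travel")
```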

  

4k Random 67% Write 33% Read

 

The 4k Random 67% Write 33% Read test again gives the edge to the OpenSolaris and Nexenta systems, while the Promise M610i is nearly flat-lined.  This is most likely a result of both write re-ordering and very effective L2ARC caching.

  

4k Random Reads

 

4k Random Reads again come out in favor of the OpenSolaris and Nexenta systems.  While the Promise M610i does increase its performance as the disk queue increases, it's nowhere near the levels of performance that the OpenSolaris and Nexenta systems can deliver with their L2ARC caching.

  

8k Random Read

 

8k Random Reads indicate a similar trend to the 4k Random Reads, with the OpenSolaris and Nexenta systems outperforming the Promise M610i.  Again, we see the OpenSolaris and Nexenta systems trending very similarly, but with the OpenSolaris system significantly outperforming the Nexenta system.

  

8k Sequential Read

 

8k Sequential Reads have the OpenSolaris and Nexenta systems trailing at the first data point, then running away from the Promise M610i at higher disk queues.  It's interesting to note that the Nexenta system outperforms the OpenSolaris system at several of the data points in this test.

   

8k Random Write

 

8k Random Writes play out like most of the other tests we've seen, with the OpenSolaris and Nexenta systems taking top honors and the Promise M610i trailing.  Again, OpenSolaris beats out Nexenta on the same hardware.

  

8k Random 67% Write 33% Read

 

8k Random 67% Write 33% Read again favors the OpenSolaris and Nexenta systems, with the Promise M610i trailing.  While the OpenSolaris and Nexenta systems start off nearly identical for the first 5 data points, at a disk queue of 24 or higher the OpenSolaris system steals the show.

  

16k Random 67% Write 33% Read

 

16k Random 67% Write 33% Read gives us a familiar show.  OpenSolaris and Nexenta both soundly beat the Promise M610i at higher disk queues.  Again we see the pattern of the OpenSolaris and Nexenta systems trending nearly identically, but with the OpenSolaris system outperforming the Nexenta system at all data points.

  

16k Random Write

 

16k Random Write shows the Promise M610i starting off faster than the Nexenta system and nearly on par with the OpenSolaris system, but quickly flattening out.  The Nexenta box again trends higher, but cannot keep up with the OpenSolaris system.

  

16k Sequential Read

 

The 16k Sequential Read test is the first test where the Promise M610i outperforms OpenSolaris and Nexenta at all data points.  The OpenSolaris and Nexenta systems both trend upwards at the same rate, but cannot catch the M610i.

  

16k Random Read

 

The 16k Random Read test goes back to the same pattern that we've been seeing, with the OpenSolaris and Nexenta systems running away from the Promise M610i.  Again we see the OpenSolaris system take top honors with the Nexenta system trending similarly, but never reaching the performance metrics seen on the OpenSolaris system.

  

32k Random 67% Write 33% Read

 

32k Random 67% Write 33% Read has the OpenSolaris system on top, with the Promise M610i in second place and the Nexenta system trailing everything.  We're not really sure what to make of this, as we expected the Nexenta system to follow similar patterns to what we had seen before.

  

32k Random Read

 

32k Random Read has the OpenSolaris system running away from everything else.  On this test the Nexenta system and the Promise M610i are very similar, with the Nexenta system edging out the Promise M610i at the highest queue depths.

  

32k Sequential Read

 

32k Sequential Reads proved to be a strong point for the Promise M610i.  It outperformed the OpenSolaris and Nexenta systems at all data points.  Clearly there is something in the Promise M610i that helps it excel at 32k Sequential Reads.

 

32k Random Write

 

  

32k Random Writes have the OpenSolaris system on top again, with the Promise M610i in second place and the Nexenta system trailing far behind.  All of the graphs trend similarly, with little dips and rises, but never moving far from the initial reading.

After all the tests were done, we sat down, took a hard look at the results, and tried to formulate some ideas about how to interpret the data.  We will discuss this in our conclusion.

Comments

  • Mattbreitbach - Tuesday, October 5, 2010 - link

    Indeed you can, which is one of the most exciting parts about using software based storage appliances. Nexenta really excels in this area, offering iSCSI, NFS, SMB, and WebDAV with simple mouse clicks.
  • MGSsancho - Tuesday, October 5, 2010 - link

    or a single command!
  • FransUrbo - Wednesday, January 11, 2012 - link

    Would be really nice to see how ZoL compares. It's in no way optimized yet (current work is on getting the core functionality stable, which IMHO it is), so it would have no chance against OpenSolaris or Nexenta, but hopefully it's comparable to the Promise rack.

    http://zfsonlinux.org/
  • gfg - Tuesday, October 5, 2010 - link

    NAS is extremely cost effective in a data center if a large majority of NFS/CIFS users are more interested in capacity than performance. NDMP can be very efficient for backups, and the snapshots/multi-protocol aspects of NAS systems are fairly easy to manage. Some of the larger vendor NAS systems can support 100+ TB per NAS fairly effectively.
  • bhigh - Wednesday, October 6, 2010 - link

    Actually, OpenSolaris and Nexenta can act as a SAN device using COMSTAR. You can attach to them with iSCSI, FC, Infiniband, etc. and use any zvols as raw scsi targets.
  • JGabriel - Wednesday, October 6, 2010 - link

    Also, "Testing and Benchmarking"?

    Doesn't that mean the same thing and isn't it redundant? See what I did there?

  • Fritzr - Thursday, October 7, 2010 - link

    This is similar to the NAS<>SAN argument. They are used in a similar manner, but have very different purposes.

    Testing. You are checking to see if the item's performance meets your needs, and looking for bugs or other problems, including documentation and support.

    Benchmarking. You are running a series of test sets to measure the performance. Bugs & poor documentation/support may abort some of the measuring tools, but that simply goes into the report of what the benchmarks measured.

    Or in short:
    Test==does it work?
    Benchmark==What does it score on standard performance measures?
  • lwatcdr - Friday, October 8, 2010 - link

    I am no networking expert so please bear with me.
    What are the benefits of a SAN over local drives and/or a NAS?
    I would expect a NAS to have better performance since it would send less data over the wire than a SAN if they both had the same physical connection.
    A local drive/array I would expect to be faster than a SAN since it will not need to go through a network.
    Does it all come down to management? I can see the benefit of having your servers boot over the network and having all your drives in one system. If you set up the servers to boot over the network it would be really easy to replace a server.
    Am I missing something or are the gains all a matter of management?
  • JohanAnandtech - Sunday, October 10, 2010 - link

    A NAS most of the time has worse performance than a similar SAN, since there is a file system layer on the storage side. A SAN only manages blocks and thus has fewer layers, making it more efficient.

    A local drive array is faster, but is less scalable and depending on the setup, it is harder to give a large read/write cache: you are limited by the amount of RAM your cache controller supports. In a software SAN you can use block based caches in the RAM of your storage server.

    Management advantages over local drives are huge: for example, you can plug in a small ESXi/Linux flash drive that contains only the hypervisor/OS, and then boot everything else from a SAN. That means that chances are good you never have to touch your server during its lifetime, and you can handle all storage and VM needs centrally. Add to that high availability, the flexibility to move VMs from one server to another, and so on.
  • lwatcdr - Monday, October 11, 2010 - link

    But that layer must be executed somewhere; I thought the decrease in data sent over the physical wire would make up for the extra software cost on the server side.
    Besides, you would still want a NAS even with a SAN for shared data. I am guessing that you could have a NAS serve data from the SAN if you needed shared directories.
    I also assume that since most SANs are on a separate storage network, the SAN is mainly used to provide storage to servers, and then the servers provide data to clients on the LAN.
    The rest of it seems very logical to me in a large setup. I am guessing that if you have a really high-performance database server, one might use DAS instead of a SAN, or dedicate a SAN server just to the database server.
    Thanks, I am just trying to educate myself on SAN vs. NAS vs. DAS.
    Since I work at a small software development firm, our server setup is much simpler than the average data center's, so I don't get to deal with this level of hardware often.
    However, I am thinking that maybe we should build a SAN and storage network just for our rack.
