"Order Entry" Stress Test: Measuring Enterprise Class Performance

One complaint we've historically received about our Forums database test is that it isn't strenuous enough for Enterprise customers to base a purchasing decision on the results.

In our infinite desire to please everyone, we worked very closely with a company that could provide us with a truly Enterprise Class SQL stress application. We cannot reveal the identity of the corporation that provided the application because of non-disclosure agreements in place. As a result, we will not go into the specifics of the application, but rather provide an overview of its database interaction so that you can grasp the profile of this application and better understand the results of the tests (and how they relate to your own database environment).

We will use an Order Entry system as an analogy for how this test interacts with the database. All interaction with the database is via stored procedures. The main stored procedures used during the test are:

sp_AddOrder - inserts an Order
sp_AddLineItem - inserts a Line Item for an Order
sp_UpdateOrderShippingStatus - updates an Order's status to "Shipped"
sp_AssignOrderToLoadingDock - inserts a record to indicate which Loading Dock the Order should be shipped from
sp_AddLoadingDock - inserts a new record to define an available Loading Dock
sp_GetOrderAndLineItems - selects all information related to an Order and its Line Items

The above is intended only as an overview of the stored procedure functionality; the stored procedures also perform other validation and audit operations.

Each Order had a random number of Line Items, ranging from one to three. The Line Items chosen for an Order were also randomized, drawn from a pool of approximately 1,500 line items.
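Because the application itself is covered by the NDA, we can't reproduce its code. As a purely illustrative sketch, a single writer iteration against the profile above might look like the following C#/ADO.NET fragment; the stored procedure names come from the list above, while the parameter names (@OrderID, @ItemID), the output-parameter convention, and all other identifiers are assumptions, not the actual implementation.

// Hypothetical sketch of one writer iteration: insert an Order, then a
// random number (one to three) of Line Items drawn from the ~1,500-item pool.
// Procedure names are from the overview above; everything else is assumed.
using System;
using System.Data;
using System.Data.SqlClient;

static class OrderWriter
{
    static readonly Random rng = new Random();
    const int LineItemPoolSize = 1500; // approximate pool size used in the test

    public static void InsertOrder(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // Insert the Order header and capture the new Order ID via an
            // assumed output parameter.
            SqlCommand addOrder = new SqlCommand("sp_AddOrder", conn);
            addOrder.CommandType = CommandType.StoredProcedure;
            SqlParameter orderId = addOrder.Parameters.Add("@OrderID", SqlDbType.Int);
            orderId.Direction = ParameterDirection.Output;
            addOrder.ExecuteNonQuery();

            // Each Order gets one to three Line Items, chosen at random.
            int lineItems = rng.Next(1, 4); // upper bound is exclusive: 1..3
            for (int i = 0; i < lineItems; i++)
            {
                SqlCommand addItem = new SqlCommand("sp_AddLineItem", conn);
                addItem.CommandType = CommandType.StoredProcedure;
                addItem.Parameters.AddWithValue("@OrderID", (int)orderId.Value);
                addItem.Parameters.AddWithValue("@ItemID", rng.Next(1, LineItemPoolSize + 1));
                addItem.ExecuteNonQuery();
            }
        }
    }
}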

Each test ran for 10 minutes and was repeated three times; the average of the three runs was used. The ratio of reads to writes was maintained at 10 reads for every write. We debated for a long while about which read-to-write ratio would best serve the benchmark and decided there was no single correct answer... so we went with 10.

The application was developed in C#, with all database connectivity accomplished via ADO.NET using 20 threads: 10 for reading and 10 for inserting.
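The harness code is likewise under NDA, and we have not described how the 10:1 read-to-write ratio was enforced across those 20 threads. One plausible arrangement, sketched below purely as an assumption, is to let the ten reader threads run freely while the ten writer threads throttle themselves against shared counters; all identifiers here are hypothetical.

// Assumed 20-thread harness: 10 readers call sp_GetOrderAndLineItems in a
// loop, 10 writers insert Orders but hold back until roughly ten reads have
// completed per write. The pacing mechanism is a guess, not the real code.
using System;
using System.Data;
using System.Data.SqlClient;
using System.Threading;

static class Harness
{
    static long reads, writes;
    static readonly Random rng = new Random(); // a real harness would use per-thread instances

    public static void Run(string connectionString, TimeSpan duration)
    {
        DateTime deadline = DateTime.UtcNow + duration; // e.g. 10 minutes per test
        Thread[] threads = new Thread[20];

        for (int i = 0; i < 10; i++)          // 10 reader threads
            threads[i] = new Thread(() =>
            {
                while (DateTime.UtcNow < deadline)
                {
                    ReadRandomOrder(connectionString);
                    Interlocked.Increment(ref reads);
                }
            });

        for (int i = 10; i < 20; i++)         // 10 writer threads
            threads[i] = new Thread(() =>
            {
                while (DateTime.UtcNow < deadline)
                {
                    // Wait until at least ten reads have accrued per write.
                    if (Interlocked.Read(ref reads) < 10 * Interlocked.Read(ref writes))
                    {
                        Thread.Sleep(1);
                        continue;
                    }
                    OrderWriter.InsertOrder(connectionString); // sp_AddOrder + sp_AddLineItem
                    Interlocked.Increment(ref writes);
                }
            });

        foreach (Thread t in threads) t.Start();
        foreach (Thread t in threads) t.Join();
        Console.WriteLine("reads={0} writes={1}", reads, writes);
    }

    static void ReadRandomOrder(string cs)
    {
        using (SqlConnection conn = new SqlConnection(cs))
        {
            conn.Open();
            SqlCommand get = new SqlCommand("sp_GetOrderAndLineItems", conn);
            get.CommandType = CommandType.StoredProcedure;
            int id;
            lock (rng) { id = rng.Next(1, 100000); } // assumed Order ID range
            get.Parameters.AddWithValue("@OrderID", id);
            using (SqlDataReader r = get.ExecuteReader())
                while (r.Read()) { } // consume the Order and Line Item rows
        }
    }
}

Run would be invoked once per 10-minute test, and the final read and write totals would feed the stored-procedures-per-second figures.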

To ensure that IO was not the bottleneck, each test started with an empty database that had been pre-expanded so that no autogrow activity occurred during the test. Additionally, a gigabit switch was used between the client and the server. During the execution of the tests, no applications or monitoring software ran on the server. Task Manager, Profiler, and Performance Monitor were used when establishing the baseline for the test, but never during execution of the tests.
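Pre-expanding the files can be done with a plain ALTER DATABASE before the run; the snippet below is a sketch of that step, with the database name, logical file names, and target sizes all assumed.

// Sketch: pre-size the data and log files so autogrow never fires mid-test.
// Database name, logical file names, and sizes are assumptions.
using System.Data.SqlClient;

static class PreSize
{
    public static void Expand(string connectionString)
    {
        const string sql =
            "ALTER DATABASE OrderEntry MODIFY FILE (NAME = N'OrderEntry_Data', SIZE = 4GB); " +
            "ALTER DATABASE OrderEntry MODIFY FILE (NAME = N'OrderEntry_Log', SIZE = 1GB);";

        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (SqlCommand cmd = new SqlCommand(sql, conn))
                cmd.ExecuteNonQuery();
        }
    }
}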

At the beginning of each platform's run, both the server and the client workstation were rebooted to ensure a clean and consistent environment. The database was always copied to the 8-disk RAID 0 array with no other files present, ensuring that file placement and fragmentation were consistent between runs. Between each of the three tests, the database was deleted and the empty one was copied again to the clean array. SQL Server was not restarted.

58 Comments

  • perlgreen - Tuesday, June 1, 2004 - link

    Is there any chance that you guys could do more tests and benchmarking on Linux for IT Computing/Servers? I really like your site, but it'd be really nice if there were more stuff for fans of the Penguin!

    cheers,

    Campbell
  • ragusauce - Friday, March 5, 2004 - link

    #54
    We have been building from source and trying different options / debug versions...
  • DBBoy - Friday, March 5, 2004 - link

    #47 - In OLAP or poorly indexed environments, where the amount of data exceeds the 4 MB L3 cache of the Xeons, the Opteron is going to shine even more with its increased memory bandwidth.

    Assuming you do not bottleneck on disk IO, the SQL cache/RAM will be utilised much more, putting a greater burden on the FSB of the Xeons and allowing the Opteron's memory bandwidth to display its abilities.
  • Jason Clark - Friday, March 5, 2004 - link

    ragusauce, using binaries or building from source?

    Cheers
  • ragusauce - Friday, March 5, 2004 - link

    We have been doing extensive testing of MySQL64 on Opteron and have had problems with seg faults as well.
  • zarjad - Thursday, March 4, 2004 - link

    Great, thanks.
    My thoughts:
    In this type of application you are likely to use more than 4GB memory.
    Memory bandwidth should matter because you will be doing a lot of full table scans (as opposed to using indexes).
  • Jason Clark - Thursday, March 4, 2004 - link

    zarjad, I'll get back to you on that question. I have some thoughts and am discussing them with one of the guys who worked with us on the tests (Ross).
  • zarjad - Thursday, March 4, 2004 - link

    Jason, any comments on #47?
  • Jason Clark - Wednesday, March 3, 2004 - link

    The OS used was Windows 2003 Enterprise, which does indeed support NUMA, so NUMA was enabled. This was covered in an earlier response.
