"Order Entry" Stress Test: Measuring Enterprise Class Performance

One complaint that we've historically received regarding our Forums database test is that it isn't strenuous enough for some Enterprise customers to make a good purchasing decision based on its results.

In our infinite desire to please everyone, we worked closely with a company that could provide us with a truly Enterprise Class SQL stress application. We cannot reveal the identity of the corporation that provided the application because of non-disclosure agreements. Consequently, we will not go into specifics of the application; instead, we will provide an overview of its database interaction so that you can grasp the profile of the application, better understand the test results, and see how they relate to your own database environment.

We will use an Order Entry system as an analogy for how this test interacts with the database. All interaction with the database is via stored procedures. The main stored procedures used during the test are:

sp_AddOrder - inserts an Order
sp_AddLineItem - inserts a Line Item for an Order
sp_UpdateOrderShippingStatus - updates an Order's status to "Shipped"
sp_AssignOrderToLoadingDock - inserts a record to indicate from which Loading Dock the Order should be shipped
sp_AddLoadingDock - inserts a new record to define an available Loading Dock
sp_GetOrderAndLineItems - selects all information related to an Order and its Line Items

The above is only intended as an overview of the stored procedure functionality; obviously, the stored procedures also perform additional validation and audit operations.
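
To make the call pattern concrete, here is a minimal sketch of how a client might invoke two of these procedures through ADO.NET, the data access layer the application uses. The parameter names, data types, and connection handling are our own assumptions for illustration; the actual schema was not disclosed.

    using System;
    using System.Data;
    using System.Data.SqlClient;

    // Minimal sketch of the ADO.NET call pattern for two of the write procedures.
    // Parameter names and types are assumptions; the real schema was not published.
    class OrderEntryCalls
    {
        // Inserts an Order via sp_AddOrder and returns the new Order ID through an output parameter.
        public static int AddOrder(SqlConnection conn, int customerId)
        {
            using (SqlCommand cmd = new SqlCommand("sp_AddOrder", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.Add("@CustomerId", SqlDbType.Int).Value = customerId;
                cmd.Parameters.Add("@OrderDate", SqlDbType.DateTime).Value = DateTime.Now;
                SqlParameter idParam = cmd.Parameters.Add("@OrderId", SqlDbType.Int);
                idParam.Direction = ParameterDirection.Output;
                cmd.ExecuteNonQuery();
                return (int)idParam.Value;
            }
        }

        // Inserts one Line Item for an existing Order via sp_AddLineItem.
        public static void AddLineItem(SqlConnection conn, int orderId, int itemId, int quantity)
        {
            using (SqlCommand cmd = new SqlCommand("sp_AddLineItem", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.Add("@OrderId", SqlDbType.Int).Value = orderId;
                cmd.Parameters.Add("@ItemId", SqlDbType.Int).Value = itemId;
                cmd.Parameters.Add("@Quantity", SqlDbType.Int).Value = quantity;
                cmd.ExecuteNonQuery();
            }
        }
    }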

Each Order had a random number of Line Items, ranging from one to three. The Line Items chosen for an Order were also randomized, drawn from a pool of approximately 1,500 line items.

Each test ran for 10 minutes and was repeated three times; the average of the three runs was used. The ratio of reads to writes was maintained at 10 reads for every write. We debated for a long while over which read-to-write ratio would best serve the benchmark and decided that there was no single correct answer, so we settled on 10:1.
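
The article does not specify how the 10:1 ratio was enforced. One simple mechanism, sketched below purely as an assumption rather than as the vendor's actual approach, is a pair of shared counters: readers record each completed select, and writers pause whenever writes would run ahead of one per ten reads.

    using System.Threading;

    class ReadWriteRatio
    {
        // Shared counters incremented by the reader and writer threads.
        static long _reads = 0;
        static long _writes = 0;
        const int ReadsPerWrite = 10;   // the 10:1 ratio described above

        // Readers call this after every completed select.
        public static void RecordRead()
        {
            Interlocked.Increment(ref _reads);
        }

        // Writers call this before every insert: block until enough reads have
        // completed so that writes never run ahead of one per ten reads.
        public static void WaitForWriteSlot()
        {
            while (Interlocked.Read(ref _writes) * ReadsPerWrite > Interlocked.Read(ref _reads))
            {
                Thread.Sleep(1);
            }
            Interlocked.Increment(ref _writes);
        }
    }

In such a scheme, a writer thread would call WaitForWriteSlot() before each insert and each reader thread would call RecordRead() after each select.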

The application was developed in C#, and all database connectivity was accomplished using ADO.NET across 20 threads: 10 for reading and 10 for inserting.
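
As a rough illustration only, the sketch below shows what such a harness might look like: 10 reader threads and 10 writer threads, each holding its own SqlConnection for the duration of the 10-minute run. It reuses the AddOrder/AddLineItem helpers sketched earlier; the connection string, parameter values, and Order ID range are all assumptions, not details from the actual application.

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Threading;

    class HarnessSketch
    {
        // Hypothetical connection string; the real server and database names were not disclosed.
        const string ConnString = "Server=testserver;Database=OrderEntry;Integrated Security=SSPI;";
        static readonly TimeSpan RunLength = TimeSpan.FromMinutes(10);   // each test ran for 10 minutes

        static void Main()
        {
            Thread[] threads = new Thread[20];
            for (int i = 0; i < 10; i++) threads[i] = new Thread(ReaderLoop);    // 10 reading threads
            for (int i = 10; i < 20; i++) threads[i] = new Thread(WriterLoop);   // 10 inserting threads
            foreach (Thread t in threads) t.Start();
            foreach (Thread t in threads) t.Join();
        }

        // Each reader repeatedly pulls back an Order and its Line Items.
        static void ReaderLoop()
        {
            Random rnd = new Random(Thread.CurrentThread.ManagedThreadId);
            using (SqlConnection conn = new SqlConnection(ConnString))
            {
                conn.Open();
                DateTime end = DateTime.Now + RunLength;
                while (DateTime.Now < end)
                {
                    using (SqlCommand cmd = new SqlCommand("sp_GetOrderAndLineItems", conn))
                    {
                        cmd.CommandType = CommandType.StoredProcedure;
                        cmd.Parameters.Add("@OrderId", SqlDbType.Int).Value = rnd.Next(1, 100000); // range assumed
                        using (SqlDataReader rdr = cmd.ExecuteReader()) { while (rdr.Read()) { } }
                    }
                }
            }
        }

        // Each writer repeatedly inserts an Order with one to three random Line Items.
        static void WriterLoop()
        {
            Random rnd = new Random(Environment.TickCount ^ Thread.CurrentThread.ManagedThreadId);
            using (SqlConnection conn = new SqlConnection(ConnString))
            {
                conn.Open();
                DateTime end = DateTime.Now + RunLength;
                while (DateTime.Now < end)
                {
                    int orderId = OrderEntryCalls.AddOrder(conn, rnd.Next(1, 10000));   // customer ID assumed
                    int items = rnd.Next(1, 4);                                         // one to three Line Items
                    for (int i = 0; i < items; i++)
                        OrderEntryCalls.AddLineItem(conn, orderId, rnd.Next(1, 1501), rnd.Next(1, 10)); // ~1,500-item pool
                }
            }
        }
    }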

To ensure that IO was not the bottleneck, each test was started with an empty database that had been pre-expanded so that no auto-grow activity occurred during the test. Additionally, a gigabit switch was used between the client and the server. During the execution of the tests, no other applications or monitoring software were running on the server. Task Manager, Profiler, and Performance Monitor were used when establishing the baseline for the test, but never during execution of the tests.
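
The article does not show how the empty database was pre-expanded. A common way to do this on SQL Server is to grow the data and log files to their expected working size ahead of time with ALTER DATABASE ... MODIFY FILE, so that no auto-grow events fire mid-run. The snippet below is only a hedged illustration of that idea issued through ADO.NET; the database name, logical file names, and target sizes are placeholders, not the test's actual values.

    using System.Data.SqlClient;

    class PreSizeSketch
    {
        // Pre-grows the data and log files so that no auto-grow events fire during the run.
        // Database name, logical file names, and target sizes are placeholders for illustration.
        public static void PreSize(string connString)
        {
            using (SqlConnection conn = new SqlConnection(connString))
            {
                conn.Open();
                using (SqlCommand cmd = new SqlCommand(
                    "ALTER DATABASE OrderEntry MODIFY FILE (NAME = OrderEntry_Data, SIZE = 10GB);", conn))
                {
                    cmd.ExecuteNonQuery();
                }
                using (SqlCommand cmd = new SqlCommand(
                    "ALTER DATABASE OrderEntry MODIFY FILE (NAME = OrderEntry_Log, SIZE = 2GB);", conn))
                {
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }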

At the beginning of each platform's test run, both the server and the client workstation were rebooted to ensure a clean and consistent environment. The database was always copied to the 8-disk RAID 0 array with no other files present, to ensure that file placement and fragmentation were consistent between runs. Between each of the three tests, the database was deleted and the empty one was copied again to the clean array. SQL Server was not restarted.

144 Comments

  • saratoga - Thursday, April 21, 2005 - link

    " The three main languages used with .NET are: C# (similar to C++), VB.NET (somewhat similar to VB), and J# (fairly close to JAVA). Whatever language in which you write your code, it is compiled into an intermediate language - CIL (Common Intermediate Language). It is then managed and executed by the CLR (Common Language Runtime).
    "

    Waaah?

    C# is not similar to C++; it's not even like it. It's derived from MS's experience with Java, and it's intended to replace J#, Java, and J++. Finally, the language which is similar to C++ is Managed C++, which is generally listed as the other main .NET language.

    http://lab.msdn.microsoft.com/express/
  • Shintai - Thursday, April 21, 2005 - link

    #80, #81

    Tell me how much faster a single-core 4000+ is compared to a 3800+ and you'll see it's less than 5% on average, mostly about 2%. Your X2 2.2GHz 1MB cache will perform the same or at most 1-1½% faster than a single-core 2.2GHz 1MB cache in games. So 10% lower performance is kinda bad for a CPU with a higher rating. Memory won't help you that much, except in a fantasy world.

    And let's take a game as an example.
    Halo:
    127.7FPS with a 2.4GHz 512KB cache single core.
    119.4FPS with a 2.2GHz 1MB cache dual core.
    And we all know they basically have the same PR rating due to the cache difference.

    And the cheap single core here beats the more expensive, more power-hungry, and hotter dual-core CPU by being 7% faster.

    So instead of paying $300 for a CPU, you now pay $500+, get worse gaming speeds, more heat, more power usage... for what, waiting 2 years on game developers? Or some 64-bit rescue magic that makes SMP possible for anything? It's even worse for Intel with their crappy Prescotts, 3.2GHz vs 3.8GHz. At least AMD is close to the top CPU, but still a bit away.

    Real gamers will use single cores for the next 1-2 years.
  • Shintai - Thursday, April 21, 2005 - link

    Forgot to add about physics engines etc. For that alone you can add a dummy dedicated chip on, say, a GFX card for $5-10 more that will do it 10x faster than any dual-core CPU we will see the rest of the year. Kinda like how GFX cards are tens of times faster than our CPUs for that purpose. Not like the good ol' days when we used the CPU to render 3D in games.

    The CPUs are getting more and more irrelevant in games. Just look at how a CPU like the Pentium M, which performs worse at almost anything else, can own everything in gaming. Though it lacks all the goodies the P4 and AMD64 got.

    It makes one wonder what they actually develop CPUs for, since 95% is gaming, 5% is workstations/servers, and corporate PCs could work perfectly well with a K5-K6/P2-P3.

    Then we could also stay with some 100W or 200W PSU rather than a 400W, 500W, or 700W one.
  • cHodAXUK - Thursday, April 21, 2005 - link

    Make that 'performed to 91% of the fastest gaming CPU around'.
  • cHodAXUK - Thursday, April 21, 2005 - link

    #79 Are you smoking something? A dual-core 4400+ running with slow server memory and timings, plus no NCQ drive, performed within 91% of the fastest gaming chip around. Now the real X2 4400+ will get at least a 15% performance boost from faster memory timings and unregistered memory, and that is before we even think about overclocking at all.
  • Shintai - Thursday, April 21, 2005 - link

    So what these tests show is a lot like the 64-bit tests.

    Dual cores are completely useless for the next 2+ years unless you use your PC as a workstation for CAD, Photoshop, and heavy movie encoding.

    And WinXP 64-bit will be a toy/useless for the next 1-2 years as well, unless you use it for servers.

    Hype, hype, hype...

    In 2 years, when these current Intel and AMD cores are outdated and we have a Pentium V/VI or M2/M3 and K8/K9, then we can benefit from it. But look back in the mirror: those early AMD64 chips and those low-speed Pentium 4s with 64-bit won't really be used for 64-bit, because by the time we finally get a stable driver set and a Windows-on-Windows environment, we will all be using Longhorn and next-gen CPUs.

    Dual cores will be slower than single cores in games for a LONG, LONG time. And they will be MORE expensive. And utterly useless except for bragging rights. Ask all those people using dual Xeons or dual Opterons today how much gaming benefit they get. Oh yeah, but hey, let's all make some lunatic assumption that I'm downloading with P2P at 100Mbit so one CPU will be 100% loaded while I'm encoding a movie; yes, then you can play your game on CPU #2. But how often is that likely to happen anyway? And all that multitasking will just cripple your HD anyway and kill the sweet heaven there.

    It's a wakeup call before you become the fools.
    Games for 64-bit and dual cores haven't even been started yet. So they will have their 1-3 years of development time before we see them on the market. And if it's 1 year, it's usually a crap game ;)
  • calinb - Thursday, April 21, 2005 - link

    "Armed with the DivX 5.2.1 and the AutoGK front end for Gordian Knot..."

    AutoGK and Gordian Knot are front ends for several common apps, but AutoGK doesn't use Gordian Knot at all. AutoGK and Gordian Knot are completely independent programs. len0x, the developer of AutoGK, is also a contributor to Gordian Knot development. That's the connection.

    calinb, DivX Forums Moderator
  • cHodAXUK - Thursday, April 21, 2005 - link

    Hmm, it does seem that dual core with Hyper-Threading can be a real help and yet sometimes a real hindrance. Some benchmarks show it giving stellar performance and some show it slowing the CPU right down by swamping it. Some very hit-and-miss results for Intel's top dual-core part there; it makes me wonder if it is really worth the extra money for something that can be so unreliable in certain situations.
  • Zebo - Thursday, April 21, 2005 - link

    even if I can't spell them.
  • Zebo - Thursday, April 21, 2005 - link

    Fish bits

    Words mean things; sorry you don't like them.

    Cripple is true. The X2 is crippled in this test since it's not really an X2.

    It's a "misrepresentation" of it's true abilities.

    I wasn't bagging on Anand but highlighting what was said in the article, then speculating on the potential performance of the real X2, all things considered.

    Sorry I didn't attend the PC walk-on-my-tiptoes class. I say what I mean and use accurate words.
