APUS: a new way of benchmarking

Using industry benchmarks and fully disclosing testing methods is important, as it allows other people to verify your numbers. We have therefore included a benchmark that everyone can verify (3ds Max architecture rendering). However, testing with real heavy-duty applications is much more interesting: these are the applications actually running in datacenters, not SPECjbb or TPC. People really care about their performance, because when these applications start to crawl, it affects their work and their business. So let me introduce you to our completely new way of benchmarking, built around our unique software, called APUS.

As many of our loyal readers know, quite a few of our IT articles are the result of a close collaboration with the server lab of the University College of West Flanders. While the academic server research itself is beyond the scope of this article, the main advantage is that we get the opportunity to work with IT firms that develop rather interesting server applications.

Of course, testing these applications is rather complex: how can you recreate the most intensive parts of an application's real-world use and at the same time get a highly repeatable and reliable benchmark? The answer lies in software developed at the same university, called APUS (Application Unique Stress-testing), which allows us to take the logs of real-world applications and turn them into a repeatable benchmark. The application is able to read the logs of almost every popular relational database (DB2, Oracle, Sybase, MySQL, SQL Server, PostgreSQL...) and web application out there.
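The log-replay idea can be sketched in a few lines of Python. Everything below (the `offset;statement` log layout, the `parse_log` and `replay` names) is an illustrative assumption of ours, not APUS's actual format or code:

```python
import time
from dataclasses import dataclass

@dataclass
class LogEntry:
    offset_s: float   # seconds since the start of the captured session
    statement: str    # the captured SQL query or HTTP request

def parse_log(lines):
    """Parse hypothetical 'offset;statement' log lines into LogEntry objects."""
    entries = []
    for line in lines:
        offset, _, stmt = line.partition(";")
        entries.append(LogEntry(float(offset), stmt.strip()))
    return entries

def replay(entries, execute, speedup=1.0):
    """Re-issue each captured statement at its original relative time.

    `execute` is whatever sends the statement to the server under test;
    `speedup` compresses the original timeline to raise the load.
    """
    start = time.monotonic()
    for e in entries:
        delay = e.offset_s / speedup - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        execute(e.statement)
```

Raising `speedup` is one simple way to turn a captured session into a stress test: the same statements arrive in the same order, just faster.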

We are also working on "special protocols" so we can use this benchmarking method for other socket applications as well. A highly tuned threading system allows us to simulate a few thousand users (and more) on a single dual-core laptop, so complex setups with tens of client machines are not necessary at all.
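A minimal sketch of such a threaded load generator, assuming a `session` callable that performs one user interaction (the names and structure here are ours, not APUS's):

```python
import threading
import time

def run_virtual_users(n_users, session, iterations=1):
    """Run `session()` concurrently from n_users threads and collect latencies."""
    latencies = []
    lock = threading.Lock()

    def user():
        for _ in range(iterations):
            t0 = time.monotonic()
            session()                      # one simulated user interaction
            dt = time.monotonic() - t0
            with lock:                     # guard the shared list across threads
                latencies.append(dt)

    threads = [threading.Thread(target=user) for _ in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies
```

Because each virtual user spends most of its time waiting on the network, thousands of such threads fit comfortably on one modest client machine.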


There is more: thanks to the hard work of Leandro, Ben, Brecht, and Dieter, APUS is much more than the typical stress tests you can find all over the web. It integrates:
  • monitoring of many important client parameters
  • monitoring of server parameters
  • performance measurements
  • real-time capture of the corresponding power consumption, using an Extech 380801 power analyzer
Integrating client monitoring with performance measurements allows us to detect when our client CPU or network is becoming the bottleneck. Server monitoring, possible on both Linux and Windows platforms, shows us what has been happening on the server during a particular benchmark session.
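The client-side sanity check can be illustrated with a rough sketch; this assumes a Unix-like client (`os.getloadavg` is unavailable on Windows) and is our own illustration, not APUS's monitoring code:

```python
import os
import threading

def sample_cpu(stop, samples, interval=0.5):
    """Periodically record normalized CPU load until `stop` is set."""
    while not stop.is_set():
        load1, _, _ = os.getloadavg()           # 1-minute load average (Unix only)
        samples.append(load1 / os.cpu_count())  # ~1.0 means fully loaded
        stop.wait(interval)

def client_is_bottleneck(samples, threshold=0.90):
    """A run is suspect if the client CPU was ever near saturation."""
    return bool(samples) and max(samples) >= threshold
```

Run `sample_cpu` in a background thread during the benchmark; if `client_is_bottleneck` comes back True, the numbers from that run say more about the client than about the server.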

You may expect us to test with several real world applications in the future, but for now, let me introduce you to our first "as real world as it can get" application: MCS eFMS.

MCS eFMS software suite

One of the very interesting and more processing intensive applications that we encountered was developed by MCS. The MCS Enterprise Facility Management Solutions (MCS eFMS) is a state-of-the-art Facility Management Information System (FMIS). It includes applications such as space management (buildings), asset management (furniture, IT equipment, etc.), helpdesk, cable management, maintenance, meeting room reservations, financial management, reporting, and many more. MCS eFMS stores all information in a central Oracle database.

MCS eFMS integrates space management, meeting room reservations and much more

What makes the application interesting to us as IT researchers is the integration of three key technologies:
  • A web-based front end (IIS + PHP)
  • Integrated CAD drawings
  • A rather complex, ERP-like Oracle database back end
The application allows users to browse expandable location trees that display all buildings, floors, rooms, etc., including detailed floor maps. It can also provide an overview of all available meeting rooms and bookings. In practice, MCS eFMS is one of the most demanding web applications we have encountered so far.

It uses the following software:
  • Microsoft IIS 6.0 (Windows 2003 Server Standard Edition R2)
  • PHP 4.4.0
  • FastCGI
  • Oracle 9.2
The next version of MCS eFMS works with PHP 5 and Oracle 10. MCS eFMS is used daily by large international companies such as Siemens, Ernst & Young, ISS, and PricewaterhouseCoopers, which makes testing this application even more attractive. We used APUS (Application Unique Stress-testing), the software developed in our own lab, to analyze the logs we got from MCS and turn them into a demanding stress test. The result is a benchmark which closely models the way users access MCS web servers around the world.

The client, database server, and web server are connected via the same D-Link Gigabit switch. All cable connections ran at 1 Gbit/s full duplex.


The only difference is that the hundreds of simulated users access the web server over one Gigabit Ethernet connection, while in the real world people access the MCS web applications over internal LANs as well as over various WAN connections. As some of the pages easily take 400 to 800 ms (and more under heavy load!) between receiving the request and sending the response, the few milliseconds that a good internet connection adds will not be significant.
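A quick back-of-the-envelope check of that claim (the helper function is ours, purely for illustration):

```python
def wan_overhead_pct(server_ms, wan_ms):
    """Share of total response time contributed by extra network latency."""
    return 100.0 * wan_ms / (server_ms + wan_ms)
```

With a 400 ms server-side response time, 5 ms of extra WAN latency accounts for roughly 1.2% of the total; at 800 ms it drops to about 0.6%.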

MCS received a detailed and accurate model of how the web server and the clients react under different loads, large user groups, and heavy pages, which enables them to optimize the MCS eFMS suite even further. Let us look at the most interesting results of that report.


28 Comments


  • JohanAnandtech - Monday, May 28, 2007 - link

    Hey, you certainly gave the wrong impression of yourself with that first post, not our fault.

    Anyway, NLB, InfiniBand, and rendering farms are no more exotic than 802.3ad link aggregation. So I am definitely glad that you and a lot of people want to look beyond the typical gaming technology.

    quote:

    Most of the above, and many other technologies for me, are just a means to an end, not entertainment.


    That is somewhat in contradiction with being an "enthusiast", as "enthusiast" means that technology is a little more than just a tool.

    quote:

    *this* 'gamer' you speak of knows a good bit more about 'IT' than you give him credit for,


    Yep, but why hide it?

    quote:

    am a *real* hardware enthusiast, who would rather be reading about technology, instead of reading yet another 'product review'.


    Well, it is hardly about the product alone, as we look into NLB and network rendering, which is exactly using the technology as a means to an end.

    While I do get the point of your second post, your first post doesn't make any sense: 1) this kind of server should never be turned into an iSCSI device: there are servers that can have more memory and, more importantly, a much better storage subsystem; 2) you give the impression that an enthusiast site should not talk about datacenter-related stuff.

    Hey man, my purpose here is certainly not making fun of you. You seem like a person that can give a lot better feedback than you did in your first post. By all means do that :-)


    quote:

    Especially since any person worth their paygrade in IT should already know how this system (or anything like) is going to perform beforehand.


    A lot of data administrators are very capable, certified people in the world of networking and server OSes. But very few know their hardware or can size it decently. I read a book from O'Reilly about datacenters a while ago. The parts about the electrical and networking side of datacenters were top notch. The parts about storage, load balancing, and sizing were very average. And I believe a lot of people are in the same situation.
    Reply
  • yyrkoon - Monday, May 28, 2007 - link

    quote:

    A lot of data administrators are very capable, certified people in the world of networking and server OSes. But very few know their hardware or can size it decently. I read a book from O'Reilly about datacenters a while ago. The parts about the electrical and networking side of datacenters were top notch. The parts about storage, load balancing, and sizing were very average. And I believe a lot of people are in the same situation.


    Well I suppose you are right to an extent here, maybe I like hardware so much that I tend to spend more time 'researching' different hardware?

    The last thing I really want to convey is that I know EVERYTHING; if I actually thought this, I would most likely be delusional (this goes for everyone, not just myself, and no, I am not pointing any fingers; I am just saying that perhaps I come off as a know-it-all, but I really don't know it all).

    Anyhow, my original post was more of a joke, with the serious part being that if this equipment somehow landed itself in my home, I would actually do with it as I said. I do not work in a datacenter, but I do a lot of contract work for small businesses, mostly media broadcasters, and the occasional home PC when such business presents itself, so obviously there are things the datacenter monkeys know that I do not.

    All that being said, I cannot hide the fact that home PC hardware is where my enthusiasm for technology stems from. I see great things for technologies like InfiniBand and SAS, but they are all but useless in the home because they are driven by enterprise consumers, who usually don't care about 'reasonably' priced hardware that performs well in the home environment.

    As I stated before, I have been following PCIe 2.0 technology for a bit now, and I was under the impression that direct PCIe-to-PCIe communication was not going to be implemented outside of PCIe 2.0, and would have a good chance of being priced reasonably enough to be used in the home (on a smaller scale, say 4x channels instead of the potential 32x channels). Now I am disappointed to see that while it may improve server performance, this is going to be used as an excuse to bleed home users dry of cash. Just like SAS: hardware-wise it is comparable in price to, say, FireWire once you pass a certain HDD count threshold, but standalone expanders (without a 2.5" form factor removable drive bay, or an LSI-built 1U or greater rack) are nonexistent (or at least, I personally have not been able to find any). This means people like me, who want to build SoHo-like storage for personal use or small business, get left out in the cold, AGAIN. Wouldn't you like to have a small server at home capable of delivering decent disk throughput/access speeds (i.e. external to your desktop PC) for a reasonable price? I know I would.

    All in all, I find the hardware interesting, yet find myself disappointed from the home-use aspect. So this is why I can hardly be excited by such news.
    Reply
  • TA152H - Monday, May 28, 2007 - link

    Johan,

    I think labels are bad in general, and the enthusiast site is more often an excuse than a valid explanation for some of the choices. It gets annoying, because there is no reason why you can't do both. If someone doesn't want to read your articles on a topic, they can skip it, right?

    However, I have a few issues, naturally :P. Not that you weren't complimentary towards Supermicro, but I'm not sure you carried it quite far enough. Comparing Supermicro to Dell is kind of insulting to Supermicro. Also, you seemed to leave out that Supermicro sells motherboards, cases, power supplies, etc. as standalone pieces, and they are considered by most professionals to be the best motherboards made, as well as being supported extremely well. You can't kill these motherboards; I still run a P6DLE (440LX!) that I want to upgrade but it just won't die. They never do, and the components and fit and finish are absolutely top notch. Now before someone who likes Intel screams at me, they make excellent motherboards too and are extremely high quality as well. But Intel doesn't make motherboards for AMD; Supermicro does. And if you're building a server and want AMD, do you really want some junk from Taiwan? Sure, they're cheap, but you buy Asus or Tyan and you're whistling past the graveyard with that rubbish. On top of this, you can even buy Supermicro motherboards that are not server motherboards (my first was a desktop one, a P5MMA, and it still works as a print server). There are plenty of white boxes sporting Supermicro motherboards, and some companies build their own in-house with Supermicro components. So their market share is considerably higher than just those sold as complete servers.

    Also, your insistence on a redundant power supply, I think, misses the point completely. You can buy Supermicros with redundant power supplies, and if that's what you wanted, review one of them. This was made for a different purpose, and you absolutely do NOT want it. That would defeat the purpose, so saying how they should get it is kind of silly. Saving 100 watts is absolutely enormous, especially when getting something inexpensive, and from the most reputable company in server motherboards. By the way, have you ever killed a Supermicro power supply? I haven't, and I do try. So, yes, maybe power supplies fail more often for the cheap companies, but I think the failure rate for Supermicro is very low. But realistically, you have to consider if you can tolerate it at all. If you can't, get one of their other products. If you can, then the power savings are incredible. It's nice to have both choices, isn't it?

    If you want the best and can pay, Supermicro is the way to go. When they are inexpensive and have excellent power use characteristics, it's almost irresistible if it is the type of product you want. Dell??? Oh my. You'd have to be crazy.
    Reply
  • JohanAnandtech - Monday, May 28, 2007 - link

    So much feedback, thanks! It makes these horribly long undertakings called "server articles" much more rewarding, even if you don't agree.

    Just leave your first name next time, I like to address you in a proper way.

    Anyway, your feedback

    quote:

    Also, your insistence on a redundant power supply, I think, misses the point completely. You can buy Supermicros with redundant power supplies, and if that's what you wanted, review one of them.


    Read my article again, and you'll understand my POV better. I work with a lot of SMBs, like MCS, and they like to run their web server in NLB for failover reasons. For that, the Supermicro Twin could be wonderful: pay half the colocation costs you normally would. And notice that I did remark that the percentage of downtime due to a PSU failure is relatively small, and it is probably an acceptable risk for many SMBs.

    I just hope I can challenge Supermicro enough to get two PSUs in there. The Twin is an excellent idea, and it would be nice to have it as a high-availability solution too. So no, just reviewing another Supermicro server won't cut it: you'd double the rackspace needed.

    Reply
  • TA152H - Monday, May 28, 2007 - link

    Johan,

    I just thought of a few things after writing that, and have moved closer to your point of view. Oh, my name is Rich, sorry, I forgot to mention that in the first one.

    Maybe it isn't possible to create a server with the same feature set as this one, in 1U, with a redundant power supply. My first reaction was that in asking for this, you were willing to put up with something bigger, which is an option. Another option is to impose limitations on the server and still fit it in 1U. By removing some features and imposing limitations, for example on what processors can be used, you might be able to do it. Not only could you reduce the motherboard size, but you could also reduce the power supply if you can safely say that fewer watts will be used. And it's significant, because you multiply it by two, or four in the case of processors. If you lower the acceptable power use for the processor by, say, 30 watts, you reduce the power supply by 120 watts, so it's significant. If you make it ULV, you could realize some very serious savings, as well as reduce cooling issues. Between this and removing some features, they might be able to significantly reduce power use, and at the same time make the motherboard a little smaller.

    On the other hand, I think SAS would complicate things, and might be why they left it out. I don't think you can get everything in this type of box right now, but maybe another choice would be to create more choices by leaving certain things out, and allowing other things (redundant power supply) to be put in. Of course, I don't know for sure if it's possible, but it might be.
    Reply
  • MrSpadge - Monday, May 28, 2007 - link

    Rich,

    that's roughly what I thought as well when I read your first post: "It's too cramped; they can't get 2 x 900 W with high quality in there." Then I was about to post that it might be possible to get away with two smaller PSUs, and started to name the power-consuming devices: 4 x 120 W CPUs, 32 x 5-7 W FB-DIMMs, 4 x 10 W HDDs, plus something for the chipsets and some loss in the CPU core voltage converters. And I realized that even 700 W is probably not enough for this hardware, so I scrapped my post.

    To reduce power consumption they'd really have to constrain the choice of CPUs and maybe limit each machine to 8 FB-DIMMs, which is still a lot of memory. The 80 W 2.33 GHz quad-core Xeon may be the best candidate for this. One could also think about using either 2.5" 7200 RPM notebook drives (uhh...) or Seagate Savvios: less cost-effective than 3.5" SATA, but you save some more space and power.
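    That tally can be checked with a quick back-of-the-envelope calculation, using the rough per-component figures from above (the chipset wattage and converter efficiency are guesses, not measurements):

```python
# Rough peak-power tally for the 1U Twin, using the estimates quoted above.
def peak_power_w(cpus=4, cpu_w=120, dimms=32, dimm_w=7, hdds=4, hdd_w=10,
                 chipset_w=40, vrm_efficiency=0.85):
    dc_load = cpus * cpu_w + dimms * dimm_w + hdds * hdd_w + chipset_w
    return dc_load / vrm_efficiency   # account for voltage-converter losses
```

    With the default figures that already lands above 900 W; switching to 80 W CPUs and 8 FB-DIMMs per machine brings it down to roughly 540 W.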

    MrS
    Reply
  • TA152H - Monday, May 28, 2007 - link

    Johan,

    I think I know where we disagree. I do understand your point, but I don't think what you're asking for is possible. The problem is, how do you get two power supplies of that rating into that size case? Consider, if you will, that you still need a massive power supply to be able to handle both, and the case is already crammed pretty well. To get two dual motherboards in there in the first place and create such an attractive product is already a great accomplishment. I just don't think you can stuff two 900 watt power supplies in there. Addressing a challenge and beating the laws of physics are two different things; I think, right now, it is a bridge too far (a reference to a historic battle in your country, I hope you appreciate it :P). Even if they do get a twin in there, which again, I think is nearly impossible, you'd lose some power efficiency and add some cost. So, it wouldn't be painless. But, you know, it would make for a good choice to have, so I'm not against it. Both would be attractive. But I just don't see how they could do it. I don't even know of any company that has a 2 x dual available in 1U yet, so it's not trivial. Of course, I can't say for sure it's impossible; it's not like I really know enough to.
    Reply
  • yyrkoon - Monday, May 28, 2007 - link

    Take out all but 2-4 cores, slap in another 16 GB of RAM for a total of 32 GB, use Windows 2003 to iSCSI-export 31 GB of RAM disk, and you have a decently fast, very low-latency 'disk'. All for 2000x the cost of a similarly sized HDD ;) Weeeeeeeeee!

    Sorry guys, I thought this was an 'enthusiast' site and was briefly confused :P If I had one of the mentioned systems, this is probably fairly accurate as to how I would use it . . .
    Reply
