Final Conclusion

Database benchmarking is full of pitfalls. Databases are complicated to set up and tune, and results can vary widely depending on the amount of data and on how the database is accessed (a few rows most of the time, or a lot of rows occasionally). We are well aware that it will take a lot of time before our benchmarks are truly "mature".

However, we see a few trends emerging from this report. First of all, while file serving and firewalls tend to be "all about I/O", this generalization is simply not true for database servers that run "read heavy" database applications. Our DB2 results depended almost entirely - 95% or so - on CPU processing power. That was not completely the case in the MySQL tests, but even there the CPU was by far the most important component.

The Opteron deals a decisive blow to the Xeons in MySQL. We knew from past experience that the Opterons are a lot faster when you run extremely complicated SQL statements. After this project, we also know that the Opteron is still the winner when you mix a lot of simple queries with a few heavier queries.

Nevertheless, AMD cannot rest on its laurels. Intel made a very good comeback with Nocona, as this 3.6 GHz CPU is just a tiny bit faster in DB2. This concurs with some of Jason's MS SQL Server results. Of course, it remains a big question whether Intel can push this Xeon much higher. Meanwhile, it is clear that AMD has quite a bit of headroom with its new 90 nm process technology.

In a nutshell, we can conclude that the 3.2 GHz Xeon with 2 MB L3-cache is too expensive compared to its 3.6 GHz Nocona brother.

The Opteron systems still have a price advantage over similar Xeons, mostly thanks to cheaper-to-produce motherboards and DDR-I memory. A ProLiant DL145 2.4 GHz Opteron ATA rack model with 2 CPUs and 2 GB of memory costs about $4300, while a comparable ProLiant DL360 G4 3.60 GHz Xeon SATA rack model arrives at about $4900. Exact prices will differ from one comparison to another, but generally, it is safe to say that the Opteron systems are a bit cheaper. So for now, the Opteron still has an advantage, but it can no longer knock out the Xeon the way it could a few months ago, before the Nocona Xeon arrived.

Comments

  • blackbrrd - Friday, December 3, 2004 - link

    I think it is quad-channel, as the board is NUMA aware.
  • Olaf van der Spek - Friday, December 3, 2004 - link

    > The result is that the Lindenhurst board can offer 4 DIMMs per channel while the other Xeon servers with DDR-I were limited to 4 DIMMs in total, or one per memory channel.

    Is that chipset quad-channel?
  • Olaf van der Spek - Friday, December 3, 2004 - link

    > It is especially impressive if you consider the fact that the load on the address lines of DDR makes it very hard to use more than 4 DIMMs per memory channel. Most Xeon and Opteron systems with DDR-I are limited to 4 DIMMs per memory channel

    Isn't the Opteron limited to 3 or 4 DIMMs per channel too?
    After all, it's 6 to 8 DIMMs per CPU and each CPU is dual-channel.
  • prd00 - Thursday, December 2, 2004 - link

    I am waiting for 64 bit Nocona vs 64 bit Opteron. Also, I think SLES9 would be interesting.
  • mczak - Thursday, December 2, 2004 - link

    #16 OK, I didn't know 2.4.21 already supported NUMA. SuSE lists it as a new feature in SLES9.
    I agree it probably doesn't make much of a difference with a 2-CPU box, but I think there should be quite an advantage with a 4-CPU box. The HT links are speedy, but I would guess that without NUMA awareness you would end up using basically only one RAM channel for all RAM accesses way too often, bumping into bandwidth limitations.
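
    A rough back-of-the-envelope sketch (illustrative numbers only, not measurements) of why that could hurt on a 4-way box: it compares the aggregate memory bandwidth ceiling when every CPU allocates from its own node against the case where all memory ends up on a single node. The per-node figure is an assumed dual-channel DDR-333 peak.

    # Illustrative assumptions only; not measured values from this article.
    PER_NODE_BW_GBPS = 5.4   # assumed dual-channel DDR-333 peak per Opteron node
    NODES = 4                # 4-way Opteron box

    # NUMA-aware allocation: each CPU streams from its own memory controller.
    local_ceiling = PER_NODE_BW_GBPS * NODES
    # Non-NUMA-aware allocation: all pages on node 0, so one controller caps everyone.
    single_node_ceiling = PER_NODE_BW_GBPS

    print(f"local allocation ceiling: {local_ceiling:.1f} GB/s")
    print(f"all memory on one node  : {single_node_ceiling:.1f} GB/s")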
  • JohanAnandtech - Thursday, December 2, 2004 - link

    Lindy, you are probably right; I probably got carried away a little too much. However, you seem to swing a little too far the other way. For example, a PeopleSoft server is essentially a database server (or are you talking about the application server, working in 3 tiers?).

    A web server is in many cases a database server too. I would even argue that an Exchange server is not unrelated, but I have never worked with that hard-to-configure, stubborn application. Many of those turnkey and homegrown apps are probably apps on top of a database server too...

    And I think it is clear we are not talking about file servers. I agree fully that file servers are all about I/O, but I don't agree about database servers.

    To sum it up: yes, you are right, database servers are not the lion's share in terms of quantity. However, they are probably still the biggest part when we look at costs, because I can probably buy 5 file servers for the price of one database server. And why even use file servers when you have NAS?

  • dragonballgtz - Thursday, December 2, 2004 - link

    cliff notes :P
  • lindy - Thursday, December 2, 2004 - link

    This statement...

    Up to $46 billion is spent in the Servers (hardware) market, and while a small portion of those servers is used for other things than running relational databases (about 20% for HPC applications), the lion's share of those servers are bought to ensure that a DB2, Oracle, MS SQL server or MySQL database can perform its SQL duties well.

    ...is so far off base, it's almost funny.

    I would reverse that statement: only a small portion of servers are database servers in most companies. I manage an IT department that takes care of about 160 servers for a company - a mix of roughly 2/3 Windows servers and 1/3 UNIX/Linux. System administration/engineering is my trade.

    When I look at our servers, I see DNS, DHCP, WINS, domain controllers, Exchange, SMTP, BlackBerry, proxy, file, print, web, backup, turnkey application, and database servers. Maybe 20 of the approximately 160 servers are database servers. Of those, 2 (8-CPU Sun 1280s clustered running Sybase) are the busiest, containing our customer database of over 200,000 customers. Even at that, those servers are rarely over 50% CPU utilization.

    The other 18 database servers run a variety of databases (none DB2): Oracle, MySQL, and Microsoft SQL. The databases serve up data for all kinds of applications, like Microsoft SMS 2003, Crystal Reports, an ID badge security application, PeopleSoft, Remedy, all kinds of turnkey applications based around our industry, homegrown apps, and the list goes on. There are times when some of these servers are really busy CPU-wise - about 5% of the time, and usually at night doing data uploads or re-indexing.

    My point is that most servers waste CPU power. Sure, you can find applications and uses for servers that eat CPU all day long, but that is a tiny minority of the $46 billion spent on servers. For most servers, network I/O and especially disk I/O are far more critical. Database servers set up with the wrong disk configuration have their CPUs sitting around doing not much. Servers like file, print, DHCP, DNS, and SMTP - some in every company - can get away with single CPUs. Heck, our print servers are running on Dell 1650s with 1.4 GHz P3 CPUs that are coasting, but the disks are spinning all the time and the network cards are busy, busy.

    When you realize these things, Xeon vs. Opteron does not really matter 99% of the time; cost does. When a company like Dell, which has sold its soul to Intel for low prices, turns around and offers those prices to people like me... I don't even consider what CPU is in the box most of the time.
  • JohanAnandtech - Thursday, December 2, 2004 - link

    about MySQL:
    I don't think you can find a way to make the Xeon go faster than the Opteron.

    But I do agree that performance depends on the kind of application, the size of the database etc.

    "A database that fits entirely inside of RAM isn't very interesting"

    Well, I can understand that. But

    1) Do realize that for really performance-critical (read-intensive) applications, you are doomed if the information has to come from your hard disks, no matter how fast RAID 50 is. Caching is the key to a speedy database application.

    2) The information that is being requested 99% of the time (in most applications) is relatively small compared to the total amount of data, so a test with a 1 GB database can be representative of a database that is 30 GB in total (a rough sketch after this list illustrates the idea). Just look at AnandTech: how many of you are browsing the forum posts of 3 months ago? How interesting is it for AT to optimise for the few that do?

    3) I think we made it very clear that our focus was not on the huge OLTP databases, but on the ones behind other applications.
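
    To make point 2 a bit more concrete, here is a rough Python sketch (purely illustrative, not part of our benchmarks): it assumes a Zipf-like access pattern and simply counts how many simulated point queries land on the small set of popular rows that a 1 GB-sized cache could hold. The row counts, skew exponent and cache fraction are all made-up assumptions, not measured values.

    import random
    from itertools import accumulate

    TOTAL_ROWS = 300_000    # stands in for the full ~30 GB database
    CACHED_ROWS = 10_000    # stands in for a ~1 GB cache (1/30th of the rows)
    REQUESTS = 100_000      # simulated point queries
    SKEW = 1.1              # assumed Zipf exponent; real workloads differ

    # Zipf-like popularity: the row with rank r is requested proportionally to 1/r^SKEW.
    weights = [1.0 / (rank ** SKEW) for rank in range(1, TOTAL_ROWS + 1)]
    cum_weights = list(accumulate(weights))

    random.seed(42)
    accesses = random.choices(range(1, TOTAL_ROWS + 1), cum_weights=cum_weights, k=REQUESTS)

    # The cache simply holds the CACHED_ROWS most popular rows (ranks 1..CACHED_ROWS).
    hits = sum(1 for rank in accesses if rank <= CACHED_ROWS)
    print(f"cache covers {CACHED_ROWS / TOTAL_ROWS:.1%} of rows, "
          f"serves {hits / REQUESTS:.1%} of requests")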


  • Slack3r78 - Thursday, December 2, 2004 - link

    I'd agree that using SuSE 8 was a poor choice. I like the "not using the latest and greatest" theme for servers, as that's a reality in the field, but SuSE 8 was released essentially alongside the first Opterons. The move to a 2.6 kernel, and the time for developers to really play with the new architecture, could mean even bigger performance numbers.

    Given that Nocona - or public knowledge of an Intel x86-64 chip at all - didn't exist when SuSE 8 was released, I'm not surprised that it wouldn't run in 64-bit mode. EM64T has proven to be rather quirky and less than perfect, from the reports I've read anyway. See here:
    http://www.theinquirer.net/?article=16879

    Another test running a distribution that was more recently released would definitely be interesting, if possible.
