Dual CPU Database Server Comparison

by Johan De Gelas on 12/2/2004 12:11 AM EST



  • markwrob - Friday, February 04, 2005 - link

    Great article. One thing I'm looking for on
    this and similar comparisons is how the rigs
    or their components do on power consumption.

    Are there any places to find that out easily?
  • vaxinator - Monday, December 13, 2004 - link

    Very interesting review. I think you are wise to narrow your scope to basically in-memory queries, since this pretty much ensures you are testing just the CPU and memory. Disk I/O is another topic.

    I would be interested to know a few more details about the database (the data model) and the actual queries used.

    But the key thing to watch for is that your client box is not overloaded. I have seen lots of "performance reviews" where the client box was smaller than the device under test... wrong! It should be under-utilized, and its CPU consumption, memory use, etc. should be reported as well. If necessary, use lots of client boxes, but watch the network load.
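    One way to sanity-check this (a minimal sketch of my own, not from the article): have the load-generating client measure its own CPU-time-to-wall-time ratio while driving the benchmark. If the ratio approaches 1.0 per core, the client, not the server, is the bottleneck.

```python
import time

def client_utilization(workload):
    """Run a workload and return the client's CPU-time / wall-time ratio.

    A ratio near 1.0 (per core) means the load generator itself is
    saturated and the benchmark is measuring the client, not the server.
    """
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    workload()
    wall = time.perf_counter() - wall_start
    cpu = time.process_time() - cpu_start
    return cpu / wall

# Example: an I/O-bound "query" (a sleep) vs. a CPU-bound one (a busy loop).
idle_ratio = client_utilization(lambda: time.sleep(0.2))
busy_ratio = client_utilization(lambda: sum(i * i for i in range(10**6)))
```

    If the ratio is high during a real run, add more client boxes before trusting the numbers.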
  • rogerjlarsson - Monday, December 06, 2004 - link

    MySQL v. IBM DB2 Single v. Dual
    "MySQL doesn't seem to scale as well as DB2."

    Yea, right... It all depends on what level you start from!

    MySQL _Single_ Xeon Concurrency 50: 157
    DB2 _Dual_ Xeon Concurrency 50: 102

    MySQL scales a little to _Dual_ Conc. 50: 222

    DB2 is not even near!!! MySQL might be network
    limited rather than Memory or CPU...
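    Taking the numbers above at face value, the scaling factors are easy to check (a quick sketch; the throughput figures are the ones quoted in this comment, not independently measured):

```python
# Throughput at concurrency 50, as quoted above (queries/sec).
mysql_single, mysql_dual = 157, 222
db2_dual = 102

mysql_scaling = mysql_dual / mysql_single
print(f"MySQL dual/single scaling: {mysql_scaling:.2f}x")          # ~1.41x
print(f"MySQL dual vs. DB2 dual:   {mysql_dual / db2_dual:.2f}x")  # ~2.18x
```

    So on these figures, MySQL gains about 41% from the second CPU while still more than doubling DB2's dual-CPU throughput.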
  • davesbeer - Sunday, December 05, 2004 - link


    Oracle's ETL tool is $5k and also comes with:
    Forms Designer
    Reports Designer

    I've worked with some of the biggest financial institutions in the world, including ABN Amro... etc...
    Oracle's DB is also less costly than MS... not their EE version, but then again MS has nothing to compete with that one..

    Also, you mentioned MS tools are the best.. not likely.. they are the easiest to use but far from the best.. The world's biggest data warehouses, data marts, etc. are not on MS and they don't use MS's tools... except maybe MS themselves...
  • jensend - Sunday, December 05, 2004 - link

    Comparing these systems in 32-bit mode with a 2.4-based distro makes very little sense. Among a lot of other reasons, the 2.4 scheduler is AFAIK rather poorly suited for use with combined SMT & SMP.
  • marcuri - Saturday, December 04, 2004 - link

    as #5 said, the methods are far from being anywhere remotely 'mature' as the article states. Actually, after talking to people who run databases, it's close to useless. The benchmarks used test CPU speed, not memory or memory management. Opterons are a better solution because they are 64-bit and thus handle the management of 4+ GB of memory much better (the 64-bit benefit starts in excess of 1 GB, I believe). By using a data set of 1 GB, the benchmarks are completely missing what's important; it's naive at best.
  • Decoder - Saturday, December 04, 2004 - link


    I've worked with Oracle 8i, 9, IBM UDB v7, SQL Server 6.5,7, 2K and 2005. I also do J2EE, ETL (Informatica, Datastage, DTS), .NET etc. $ for $, MS SQL Server 2005 is a much better value.

    Have you ever built a data mart using MS SQL Server DTS? It's an ETL tool that comes with MS SQL server for free. Try building a data mart on Oracle and you will find yourself spending 100K US for an ETL tool. Also from a developers perspective, SQL Server tools are the best (Read up on SQL Profiler, Query Analyzer etc).

    .NET not scalable? Scalable compared to what? J2EE EJB's?? Please. Sure there's a limit to scalability on 32-bit boxes, but with x86-64 and 64bit OS's, that limit will go away.

    davesbeer: "SQL Server is for people who can't program real databases"

    Program databases? Please elaborate. You code to a database, design a database, write procs for a database but what does program databases mean?

    Also, tell that to the Financial Services companies in New York.

    davesbeer: "SQL Server will be delayed again... (you can remember I said that) and the technology is already outdated."

    SQL Server 2005 will be extremely popular. Also SQL Server 2005 DTS (a complete rewrite) will penetrate the ETL market.

  • davesbeer - Saturday, December 04, 2004 - link


    I work with companies that have over 6000 processors actively running multiple databases in their environments. MS just doesn't compete... .NET is interesting, but again you are stuck with M$, and the scalability is just not there for real apps. Most financial industries run Sybase... it's their last bastion of full-fledged support. I worked with 15 of the largest American financial institutions, I know what I am talking about. I currently work with some of the largest IT departments in the world. Thousands of developers in a single company alone... I know IT.
    Oracle's and DB2's 64-bit knowledge is far superior to M$'s and will remain that way for a long time... they've been writing 64-bit code for years. SQL Server will be delayed again... (you can remember I said that) and the technology is already outdated. SQL Server is for people who can't program real databases (I can feel the flames now). Sorry Anandtech, but your databases are lightweight... once they grow up you will switch to Oracle or DB2 etc..
  • at80eighty - Saturday, December 04, 2004 - link

    First off: Welcome Johan : )

    next...im a frickin n00b, so pardon my ignorance : )

    my questions may not be entirely on topic, but i hope someone...ANYONE can help...

    im lookin to *build* an entry level server, so i've got the following doubts (based on the following factors) :

    Company: startup...less than a year old

    Expected # of initial users: 8 (2 people will be responsible for writing data, all employees will read db) requirements should go up to 15-20 people (2-4 write: all read) by year end (should business expand as planned)

    Server: Mail + DB + File server in one

    Cost: would be nice to keep things economical , but im willing to stretch for a middle ground : )

    1) What CPU is best bang for buck?
    2) OS will be Windows SBS Premium...was going for this since i gathered MS SQL is the best way to go (more so coz Anandtech uses SQL too :p )...after reading some of the comments im a little confused viz. DB2 v/s Oracle v/s MS SQL...plus Exchange looks kinda sweet too : )
    3) Should i wait for SQL 2005? What benefits if any (with reference to costs) ?
    4) Which is the best front-end in your opinion? i was looking to use Infopath as i found the GUI VERY easy to use and frankly the look is quite easy on the eyes too : p. Also, its important to note that I am *supposedly* the most computer literate in my company...so im looking to create a DB with an easy interface for my employees : p
    5) Is Crystal Reports easy to integrate into MS SQL (or any of the above for that matter)? OR im lookin for a back-end that can export data to Excel, so that i can analyse & crunch numbers
    6) Will i *need* an in-house IT guy to manage the DB? ...or even the server as a whole? (looking to keep costs to a minimum)

    i've got TONS of further queries (and i dont want to attention-whore on the thread :), so, should anyone have any spare time (and any inclination in the first place : ) to help a fellow AT'er, my mail add is: ucanmailme AT gmail DOT com

    Thanks in advance

  • Decoder - Friday, December 03, 2004 - link

    davesbeer: "MS is a joke".

    You don't know jack about enterprise IT. Most of the financial services industry (FSI) companies run on both UNIX and Windows. Some FSI companies have standardized on .NET and SQL Server. I know this because I work in this industry. MS is no joke. MS .NET is no joke, and I can assure you MS SQL Server 2005 is no joke. $ for $, MS products deliver more value and ease of use/development/admin than anything else out there. x86-64 will help MS win over some of the 64-bit enterprise computing deals as well. MS is in the best position ever.
  • davesbeer - Friday, December 03, 2004 - link

    I had great faith in Anandtech... until this article... not the hardware aspect but the software aspect...
    I never see MySQL in competition... MS is a joke... known for cheap but not reliable or scalable.. DB2 is the only competition to Oracle, but it is not the same database on differing platforms and therefore has huge problems for customers.. Only Oracle allows you to move from one platform to another with minimal changes... Oracle is the leader in the DB market.. Gartner includes NON-relational databases in IBM's numbers, which inflates them. Oracle commands about 70% in the Unix space and, quite frankly, is retaking significant ground in the Windows space with the low-cost SE1 DB options.. Interesting to note that IBM benches its hardware with Oracle and not DB2..... The only software point that was correct was that Linux is extremely important to all the vendors and becoming more important to corporations...
  • Puppetman - Friday, December 03, 2004 - link

    #32 - Oracle's licensing agreement, which you have to sign to get a copy of the software, prohibits you from posting benchmarks, I believe.

    I guess this is partly because it's so complicated to set up (MySQL is easier, but tuning is still an issue).

    I would have liked to have seen Postgres 7 and 8 tested. PostgreSQL has the features of Oracle, and 8.0 has some pretty impressive performance numbers (the optimizer seems to be much better than the 7.4 optimizer, in my limited tests).

  • Puppetman - Friday, December 03, 2004 - link

    They used a 32-bit version of MySQL 3.23, when 64-bit versions of 4.0 and 4.1 are available.

    There's no statement as to the storage engine used in MySQL (ISAM, MyISAM, InnoDB, BDB, etc.), but all the big sites using MySQL (Google, Yahoo, etc.) use the InnoDB engine, as it provides ACID transactions, tablespaces, foreign keys, etc.
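    The ACID point matters for benchmarking because transactional guarantees cost CPU and I/O. A minimal illustration of what a transactional engine buys you, with Python's built-in sqlite3 standing in for InnoDB or any other ACID engine (my sketch, not from the article):

```python
import sqlite3

# sqlite3 here stands in for any ACID-compliant engine (InnoDB, etc.):
# a failed transfer rolls back completely instead of leaving half a write.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

try:
    with conn:  # transaction: commits on success, rolls back on exception
        conn.execute(
            "UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
        raise RuntimeError("crash mid-transfer")
except RuntimeError:
    pass

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # alice still has 100: the partial update was rolled back
```

    A non-transactional engine like MyISAM would have kept the half-finished update, which is why engine choice changes both the safety story and the benchmark numbers.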

    These tests are like testing a Pentium 4 3.4 GHz EE CPU with Windows 98.

  • mbhame - Friday, December 03, 2004 - link

    Would've been nice to see some Oracle and SQL Server benches!
  • lindy - Friday, December 03, 2004 - link

    Our PeopleSoft environment consists of an application/web server running on Windows 2000 up front, with a Sun UNIX server running Oracle on the back end. In Nov-Dec the database server is busy…review/pay-raise time of the year. The rest of the year it hums along.

    We have mostly two-tier applications: WEB or application up front on a server, database in the back on a server. However, lots of turnkey solutions like our Crystal Reports server and our Remedy server are an application on top of MS SQL…so essentially database servers.

    Exchange is a beast; every user hits our single Exchange 2003 server…1600+ users with a total of 300+ gigs of email. You are right, it's basically a database server with the Exchange application sitting on top of it…there is no way to separate it. Exchange 2003 would be a great test for you, as there are lots of load simulators for it out there that can simulate many users pounding it.

    Why use a NAS when you have a SAN? Our 2 big Windows Storage Server 2003 file servers use a SAN for their 2 terabytes of data. These servers are backed up over fiber to a tape silo attached to the SAN. It's about the fastest backup solution out there today. To your point, the data on those file servers is slowly moving to a SharePoint solution, which is a WEB server up front and a big MS SQL database in the back…a pretty big paradigm shift.

    Anyhow good article, and happy holidays!!!
  • tygrus - Friday, December 03, 2004 - link

    What was the MySQL scaling like with the Opterons?

    Other OS's, other DBMS, MS Server 2003, MS SQL ?
    Nocona ?
    4-way intel vs 4-way AMD ?

    While it's nice to isolate the CPU performance, I would like to see some more variety and real-life tests for the next edition. Part of a DB server is the I/O handling and disk subsystem. Try to set them up with the same (best) SCSI drives (SCSI RAID card? on-board, OEM best, or aftermarket?). A few more search, report, maintenance and data-mining tasks would be nice. Also capacity and expansion options (and cost) for more disks and backup.

    The other thing is that lower CPU % usage for a given workload will reduce latency for potentially greater productivity. You don't want a DB server running at >50% most of the time, for speed, reliability, transaction growth, DB growth, and emergency capacity. If it was <50%, then on failure of a CPU or its associated memory (for Opterons), the server could be run without it. I'm not saying that the system should be limited by disk I/O to keep the CPU <50%, but that the system as a whole would be running at half its peak.
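    The arithmetic behind that 50% rule of thumb is simple to sketch (a toy model of the reasoning above, assuming the surviving CPUs absorb the full workload):

```python
def utilization_after_failure(cpus, per_cpu_util, failed=1):
    """Utilization of the surviving CPUs if `failed` CPUs drop out,
    assuming the same total workload is redistributed."""
    total_work = cpus * per_cpu_util
    return total_work / (cpus - failed)

# A dual-CPU box at 50% per CPU is exactly saturated if one CPU fails...
print(utilization_after_failure(2, 0.50))  # -> 1.0
# ...while running at 40% leaves some emergency headroom.
print(utilization_after_failure(2, 0.40))  # -> 0.8
```

    So on a two-way box, steady-state utilization above 50% means a single CPU failure pushes the survivor past saturation.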

  • Scali - Friday, December 03, 2004 - link

    This is nice, but I still miss a few configurations that I would be interested in... For starters, Xeons running in 64-bit mode... And I also wonder how Windows would perform. Windows may scale quite differently from 1 to 4 or more CPUs, and HyperThreading may have a different impact as well (especially with Windows XP or 2003, which have special scheduling strategies for HyperThreading).
    I hope that these will be covered in future benchmarks. They will put these results in a new perspective.
  • Bonebardier - Friday, December 03, 2004 - link

    I know, why don't I give my posts more thought - sorry Anand, I got my Tyan model numbers mixed up! The board used does of course show Opteron off to its best.

    Here's my sign!
  • Bonebardier - Friday, December 03, 2004 - link

    Yet another AMD vs. Intel review that handicaps the AMD contender unduly - why was the Opteron platform equipped with a K8W, when a K8S Pro would have provided double the memory bandwidth, or have I answered my own question?

    I'm looking at building an Opteron-based server and would never dream of providing it with only a single bank of dual-channel RAM shared between the two, certainly not when a board is available that allows each processor to have its own bank of DC RAM, which can be shared with the other processor if needed. Database apps are precisely the type of app that would benefit from this.

    Come on Anand - give your articles the thought they deserve, unless this one was just an Intel Nocona advert......
  • blackbrrd - Friday, December 03, 2004 - link

    I think that it is quad-channel, as the board is NUMA-aware..
  • Olaf van der Spek - Friday, December 03, 2004 - link

    > The result is that the Lindenhurst board can offer 4 DIMMs per channel while the other Xeon servers with DDR-I were limited to 4 DIMMs in total, or one per memory channel.

    Is that chipset quad-channel?
  • Olaf van der Spek - Friday, December 03, 2004 - link

    > It is especially impressive if you consider the fact that the load on the address lines of DDR makes it very hard to use more than 4 DIMMs per memory channel. Most Xeon and Opteron systems with DDR-I are limited to 4 DIMMs per memory channel

    Isn't the Opteron limited to 3 or 4 DIMMs per channel too?
    After all, it's 6 to 8 DIMMs per CPU and each CPU is dual-channel.
  • prd00 - Thursday, December 02, 2004 - link

    I am waiting for 64-bit Nocona vs. 64-bit Opteron. Also, I think SLES9 would be interesting.
  • mczak - Thursday, December 02, 2004 - link

    #16 OK, I didn't know 2.4.21 already supported NUMA. SuSE lists it as a new feature in SLES9.
    I agree it probably doesn't make much of a difference with a 2-CPU box, but I think there should be quite an advantage with a 4-CPU box. The HT links are speedy, but I would guess you would end up using basically only one RAM channel for all RAM accesses way too often, bumping into bandwidth limitations.
  • JohanAnandtech - Thursday, December 02, 2004 - link

    Lindy, you are probably right; I probably got carried away a little too much. However, you seem to swing the other way a little too far. For example, a PeopleSoft server is essentially a database server (or are you talking about the application server, working in 3 tiers?)

    A webserver is in many cases a database server too. I would even doubt an Exchange server is not related, but I never worked with that hard-to-configure, stubborn application. Many of those turnkey and homegrown apps are probably apps on top of a database server too...

    And I think it is clear we are not talking about fileservers. I agree fully that fileservers are all about I/O, but I don't agree about database servers.

    To sum it up: yes, you are right, it is not the lion's share in quantities. However, it is probably still the biggest part when we look at costs, because I can probably buy 5 fileservers for one database server. And why even use fileservers when you have NAS?

  • dragonballgtz - Thursday, December 02, 2004 - link

    cliff notes :P
  • lindy - Thursday, December 02, 2004 - link

    This statement……

    Up to $46 billion is spent in the Servers (hardware) market, and while a small portion of those servers is used for other things than running relational databases (about 20% for HPC applications), the lion's share of those servers are bought to ensure that a DB2, Oracle, MS SQL server or MySQL database can perform its SQL duties well.

    ……..is so far off base, it's almost funny.

    I would reverse that statement: a small portion of servers are database servers in most companies. I manage an IT department that takes care of about 160 servers for a company. A good mix of about 2/3 Windows servers and 1/3 UNIX/Linux. System administration/engineering is my trade.

    When I look at our servers I see DNS, DHCP, WINS, Domain Controllers, Exchange, SMTP, Blackberry, Proxy, File, Print, WEB, Backup, turnkey application, and Database servers. Maybe 20 of the approximately 160 servers are database servers. Of that 2, (8 CPU Sun 1280’s clustered running Sybase) are the busiest, containing our customer database of over 200,000 customers. Even at that, those servers are rarely over 50% CPU utilization.

    The other 18 database servers run a variety of databases (no DB2): Oracle, MySQL, and Microsoft SQL. The database servers serve up data for all kinds of applications, like Microsoft SMS 2003, Crystal Reports, an ID badge security application, PeopleSoft, Remedy, all kinds of turnkey applications based around our industry, homegrown apps, and the list goes on. There are times when some of these servers are really busy CPU-wise, about 5% of the time, and usually at night doing data uploads or re-indexes.

    My point is most servers waste CPU power. Sure, you can find applications and uses for servers that eat CPU all day long…but that is the minority of the 46 billion spent on servers…a tiny minority. For most servers, network I/O and especially disk I/O are way more critical. Database servers set up with the wrong disk configurations have their CPUs sitting around doing not much. Servers like file, print, DHCP, DNS, SMTP, some in every company…can get away with single CPUs. Heck, our print servers are running on Dell 1650s with 1.4 GHz P3 CPUs that are coasting, but the disks are spinning all the time, and the network cards are busy, busy.

    When you realize these things, Xeon CPUs vs. Opteron does not really matter 99% of the time; cost does. When you have a company like Dell that has sold its soul to Intel for low prices, which they turn around and offer to people like me…I don't even consider what CPU is in the box most of the time.
  • JohanAnandtech - Thursday, December 02, 2004 - link

    about MySQL:
    I don't think you can find a way to make the Xeon go faster than the Opteron.

    But I do agree that performance depends on the kind of application, the size of the database etc.

    "A database that fits entirely inside of RAM isn't very interesting"

    Well, I can understand that. But

    1) do realize that for really performance-critical (read-heavy) applications you are doomed if the information has to come from your hard disks, no matter how fast RAID 50 is. Caching is the key to a speedy database application

    2) The information that is being requested 99% of the time (in most applications) is relatively small compared to the total amount of data. So a test with a 1 GB database can be representative of a database that is 30 GB or so in total. Just look at Anandtech: how many of you are browsing the forum of 3 months ago? How interesting is it for AT to optimise for those few that do?

    3) I think we made it very clear that our focus was not on the huge OLTP databases but the ones behind other applications
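    The working-set argument in point 2 can be illustrated with a toy simulation (my sketch, assuming a skewed, Zipf-like access pattern, which is typical for forum/web workloads; the 30 GB / 1 GB split mirrors the sizes discussed above):

```python
import random

random.seed(42)  # fixed seed for a reproducible run

total_rows = 30_000   # a "30 GB" database, in arbitrary row units
cache_rows = 1_000    # "1 GB" of hot data cached in RAM
requests = 100_000

# Skewed access pattern: small row IDs (recent/hot data) are requested
# far more often -- a crude, log-uniform stand-in for a Zipf hot set.
hits = 0
for _ in range(requests):
    row = int(total_rows ** random.random())  # heavily skewed toward 0
    if row < cache_rows:
        hits += 1

hit_rate = hits / requests
print(f"cache holds {cache_rows / total_rows:.1%} of the data, "
      f"serves {hit_rate:.1%} of the requests")
```

    Under this (assumed) skew, a cache holding about 3% of the data serves roughly two-thirds of all requests, which is the intuition behind benchmarking the hot set rather than the full 30 GB.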

  • Slack3r78 - Thursday, December 02, 2004 - link

    I'd agree that using SuSE 8 was a poor choice. I like the "not using the latest and greatest" theme for servers, as that's a reality in the field, but SuSE 8 was released essentially alongside the first Opterons. The move to a 2.6 kernel and the time for developers to really play with the new architecture could mean even bigger performance numbers.

    Given that Nocona, or public knowledge of an Intel x86-64 chip at all, didn't exist when SuSE 8 was released, I'm not surprised that it wouldn't run in 64-bit mode. EM64T has proven to be rather quirky and less than perfect, from the reports I've read, anyway. See here:

    Another test running a distribution that was more recently released would definitely be interesting, if possible.
  • JohanAnandtech - Thursday, December 02, 2004 - link

    About SLES9 and NUMA: NUMA is also supported by Linux kernel 2.4.21, and it boosts performance only a tiny bit. The reason is the very speedy HT links, which keep latency at a minimum.

    It is still possible that kernel 2.6's NUMA support is far better, of course, but I doubt it makes a difference for quad or dual systems, as there is only one hop in quad systems. With two hops (8 CPUs), from CPU 1 to 8 for example, this will become important.
  • AtaStrumf - Thursday, December 02, 2004 - link

    A TYPO:

    So for now, the Opteron has an advantage still, but it ***can*** /can't/ knock out the Xeon, as it could have a few months ago, before the Xeon Nocona arrived.

  • HardwareD00d - Thursday, December 02, 2004 - link

    There have been enough benchmarks on the web for a long time which show that Opteron generally wipes Xeon's a$$ hands down, and scales far better in multi-processor configurations. The latest Xeon is nothing special compared to prior versions and will no doubt perform better mostly due to its increased clock speed. Xeon will never be better than Opteron, no matter how much cache and tweaks Intel adds.

    Maybe Intel's next server architecture will be something to wow, but that's a ways off.
  • jshaped - Thursday, December 02, 2004 - link

    as a long-time reader of aceshardware, i'll be the first to welcome Johan here, great first article. keep them coming!!!!
  • HardwareD00d - Thursday, December 02, 2004 - link

    I don't think there are enough variations of the way requests are handled to make a realistic conclusion for either chip. I'm sure you could create a situation where Intel bests AMD in MySQL, and vice versa. This article really needs more benchmarks and more in-depth analysis. Still, it provides enough information to conclude that both Xeon and Opteron have their strengths and weaknesses.
  • mczak - Thursday, December 02, 2004 - link

    Nice read. I really think, though, you should have used SLES 9. Not only does it use kernel 2.6, but it's also NUMA-aware (and DB2 should specifically support it, though it might not have been released yet). SLES 9 also ought to be faster, especially on x86_64, due to the newer compiler (not that it would matter much with precompiled databases, but every bit counts...). Though for 2-CPU boxes, NUMA might not be that important - but it's safe to predict a landslide victory for a 4-CPU Opteron with NUMA support vs. a 4-CPU Xeon box. Xeons simply don't scale to 4 CPUs; Intel might sell them but they are useless (especially since the Xeon MPs are still limited to a 400 (or was that 533?) MHz FSB).
    A pity, though, that the quad Opterons don't support DDR400. I guess manufacturers decided it's more important to have a boatload of RAM slots than fewer slots (with shorter traces) at higher speeds...
    And btw, where are the 90nm Opterons? AMD's latest roadmap shows them as available in 2004, which doesn't leave too much time...
  • bthomas - Thursday, December 02, 2004 - link

    Bogus conclusions about the IBM tests, IMO. From the article:

    > If we had published a similar report back in
    > August, the Opteron would have enjoyed a landslide
    > victory. Luckily for Intel, Nocona is very
    > competitive and is about 5% faster than the Opteron 250.

    and later in the "conclusion":

    > Nevertheless, AMD cannot sit on its laurels.
    > Intel made a very good comeback with Nocona, as
    > this 3.6 GHz CPU is just a tiny bit faster in…

    It has not.

    You fail to specify that this is comparing the _32-bit_ mode for the Opteron. If you compare the Nocona performance to the Opteron's 64-bit capability... it sweeps the Nocona in all tests.

    The true conclusion is that based on the results in the article, for neither of the databases tested do *any* of the Intel processors compete with the Opteron.
  • fitten - Thursday, December 02, 2004 - link

    Randomized benchmarks are hard to verify as well. You could get a "good" distribution that really takes advantage of cache locality, while another randomization may be very cache-unfriendly. I agree with #5 to a degree. A database that fits entirely inside of RAM isn't very interesting, ultimately.
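    One standard mitigation for that verification problem (a minimal sketch of mine, not something the article does): seed the random generator so the "random" access pattern is identical on every run and every machine, and any cache-locality luck is at least the same luck for both CPUs.

```python
import random

def access_pattern(seed, n=1000, tables=50):
    """Generate a reproducible 'random' sequence of table accesses."""
    rng = random.Random(seed)  # private generator, fully determined by seed
    return [rng.randrange(tables) for _ in range(n)]

# Same seed -> identical pattern: runs are directly comparable.
assert access_pattern(1) == access_pattern(1)
# Different seeds -> different patterns, with potentially different
# cache locality -- exactly the variance the comment warns about.
assert access_pattern(1) != access_pattern(2)
```

    Publishing the seed alongside the results would let readers reproduce the exact workload.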

    Still, I am happy that AnandTech is going down these paths of benchmarking instead of just being about Doom3, HL2, and FarCry like most other sites. I eagerly await further database benchmark articles.
  • PrinceXizor - Thursday, December 02, 2004 - link

    #5 - Since when do top-tier e-commerce sites compare to the mid-level company database users that the beginning of the article mentioned?

    My company is an engineering firm that does custom electronics. Our database server handles all the transactions for our Inventory/MRP system, which is mostly reads. These benchmarks are very appropriate. I wish I could have convinced my boss to go Opteron. It's funny, they had Athlon MPs before and then switched to Xeons when the Opteron was out. Go figure.

    Anyway, great article. I'm not an IT guy by any stretch, but I enjoyed the article.

  • Jason Clark - Thursday, December 02, 2004 - link

    #6, done ages ago..


  • smn198 - Thursday, December 02, 2004 - link

    Would love to see how MS SQL performs in similar tests.
  • mrVW - Thursday, December 02, 2004 - link

    This test seems foolish to me. A 1 GB database? All of that fits in RAM.

    A database server is all about being the most reliable form of STORAGE, not some worthless repeat queries that you should cache anyway.

    Transactions, logging... I mean, how realistic is it to have 1 GB of database on a system with 4 GB of RAM and expensive DB2 software?

    A real e-commerce site like MWave, NewEgg, or Crucial could have 20 GB per year! Names, addresses, order detail, customer support history, etc.

    Once you get over a certain size, a database is all about disk (putting logging on one disk independent of the data, etc.). The indexes do the main searching work.

    This whole test seems geared to be CPU-focused, but only a hardware hacker would apply software in such a crazy way.

  • mrdudesir - Thursday, December 02, 2004 - link

    man i would love to have one of those systems. Great job on the review, you guys; it's good to know that there are places where you can still get great independent analysis.
  • Zac42 - Thursday, December 02, 2004 - link

    mmmmmmm Quad Opterons......
  • Snoop - Thursday, December 02, 2004 - link

    Great read
  • ksherman - Thursday, December 02, 2004 - link

    is that pic from the 'lab'? (the one on pg 1)
