Market Analysis

Let's take a quick look at the complete market to see how the most interesting CPUs from Intel and AMD compare. The first column shows the market segment; the second column, the percentage of server shipments that go to it. Some markets, such as ERP, OLTP, and OLAP, generate more revenue for server manufacturers; however, since we have no recent numbers on this, we'll just mention it. We compare the Opteron "Shanghai" 2.7GHz with the Xeon "Harpertown" 3GHz, as they have similar pricing and power dissipation. The green zones of the market are the ones for which we have a decent benchmark and which are won by AMD, the blue ones are the Intel zones, and the red parts are, for now, unknown.

AMD "Shanghai" Opteron 2.7 GHz versus Xeon "Harpertown" 3 GHz
Market                 | Importance | First bench | Second bench | Benchmarks/remarks
ERP, OLTP              | 10-14%     | 21%         | 5%           | SAP, Oracle
Reporting, OLAP        | 10-17%     | 27%         |              | MySQL
Collaborative          | 14-18%     | N/A         |              |
Software Dev.          | 7%         | N/A         |              |
E-mail, DC, file/print | 32-37%     | N/A         |              | Not really a "CPU loving" market
Web                    | 10-14%     | 2%          |              | MCS eFMS
HPC                    | 4-6%       | 28%         | -3% to 66%   | LS-DYNA, Fluent
Other                  | 2%?        | -18%        | -15%         | 3DSMax, Cinebench
Virtualization         | 33-50%     | 34%         |              | VMmark

Yes, our benchmarks do not cover the whole market. However, keep in mind that for a large percentage of the "infrastructure" servers, the CPU is not really an important factor in the buying decision. We are convinced that once we have set up a good "collaborative benchmark", we will cover most of the server market where CPU performance makes a difference.

What do we learn from this overview? The new quad-core Opteron 2384 or 8384 is a success. It's a late success, but it can keep its most important competitor at a tangible distance in ERP, OLAP, and HPC. For ERP, OLTP, and OLAP, we are pretty sure our benchmarks give a good view. SAP, Oracle, and MySQL are very popular applications, each in its own field, and the SQL Server results of our "AMD Opteron Shanghai" review show more or less the same picture. In these markets, it will be hard to find benchmarks that contradict our findings.

The HPC market is a lot more diverse, and since we have limited knowledge of this market, we are sure that there are examples that show the complete opposite of the benchmarks we have shown here. Still, the Ansys benchmarks are good representatives of a decent part of this market.

The benchmark that really convinces us that the Opteron currently has the advantage is VMmark. Being able to consolidate 27% (14 vs. 11 tiles) to 33% (8 vs. 6 tiles) more virtual machines translates immediately into considerable cost savings. Those 27 to 33% more VMs do not result in a performance hit, as the total consolidated performance rises by 34% and more. Considering that most IT investments in these uncertain times will target cutting costs, that is a huge plus for AMD.
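The consolidation arithmetic above can be reproduced directly; this is just a sketch that recomputes the quoted percentages from the tile counts in the text:

```python
def consolidation_gain(opteron_tiles, xeon_tiles):
    """Percentage of extra VMs (tiles) the Opteron setup hosts vs. the Xeon one."""
    return round(100 * (opteron_tiles - xeon_tiles) / xeon_tiles)

# Tile counts quoted above: 14 vs. 11 and 8 vs. 6.
print(consolidation_gain(14, 11))  # ~27% more tiles
print(consolidation_gain(8, 6))    # ~33% more tiles
```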

Comments

  • Theunis - Wednesday, December 31, 2008 - link

    I'm not entirely happy with the results; I would also like to know the -march that was used to compile the 64-bit Linux kernel, MySQL, and any other Linux benchmarks, because optimizing for a specific -march is crucial to comparing apples with apples. OK, yes, AMD wins assuming standard/generic compile flags were used. But what if both were to use architecture-optimized compile flags? Then if AMD lost, they could blame gcc. It would still be interesting to see the outcome of open source support, how it wraps around vendor support for open source, and also the distributions that decided to go so far as to support architecture-specific builds (the Gentoo Linux distribution comes to mind).

    Intel knew that they were lagging behind because their architecture was too specific and wasn't running as efficiently on generically compiled code. Core i7 was supposed to fix this, but what if that is still not the case, and architecture-specific optimization is still necessary to see any improvements on Intel's side?
  • James5mith - Tuesday, December 30, 2008 - link

    Just wanted to mention that I love that case. The Supermicro 846 series is what I'm looking at for some serious disk density in a new storage server. I'm just wondering how much the SAS backplane affects access latencies, etc. (if you are using the one with the LSI expander chip).
  • Krobaruk - Saturday, December 27, 2008 - link

    Just wanted to add a few bits of info. Back to an earlier comment: it is definitely incorrect to call SAS and InfiniBand the same; the cables are in fact of slightly different composition (differences in shielding), although they are terminated the same. Let's not forget that 10G Ethernet uses the same termination in some implementations too.

    Also, at least under Xen, AMD platforms still do not offer PCI pass-through. This is a fairly big inconvenience and should probably be mentioned, as support is not expected until the HT3 Shanghais release later this year. Particularly interesting are results showing that despite NUMA greatly reducing the load on the HT links, it makes very little difference to the VMmark result.
    I would imagine HT3 Opterons will only really benefit 8-way setups in the typical figure-of-8 config, as the bus is that much more heavily loaded.

    It's odd your Supermicro quad had problems with the Shanghais and certain apps; no problems from testing with a Tyan TN68 here. Was the BIOS a beta?

    Are any benchmarks with Xen likely to happen in the future, or is AnandTech solidly VMware?

    Answering another user's question about mail servers and SpamAssassin: I do not know of any decent benchmarks for this, but I have seen the load that different mail filter servers can handle under the MailWatch/Mailgate system. Fast SAS RAID 1 and dual quad cores seem to give the best value, and the RAM requirement is about 1GB per core. It would be interesting to see some sort of Linux mail filter benchmark if you can construct something to do this.
  • RagingDragon - Saturday, December 27, 2008 - link

    I'd imagine a lot of software development servers are used for testing applications which fall into one of the other categories. There'd also be bug tracking and version control servers, but the most interesting from a performance perspective might be build servers (i.e. servers used for compiling software). The best benchmark for that would probably be compile times for various compilers (e.g. the GNU, Intel, and MS C/C++ compilers; the Sun, Oracle, and IBM Java compilers; etc.).
  • bobbozzo - Friday, December 26, 2008 - link

    IMO, many (most?) mail servers need more than fast I/O, what with so much anti-spam and anti-virus filtering going on nowadays.

    SpamAssassin, although wonderful, can be slow under heavy loads and with many features enabled, and the same goes for A/V scanners such as clamscan.

    That said, I don't know of any good benchmarks.

  • vtechk - Thursday, December 25, 2008 - link

    Very nice article; it's just that the Fluent benchmarks are far too simple to give relevant information. A standard Fluent job these days has 25+ million elements, so sedan_4m is more of a synthetic test than a real-world one. It would also be very interesting to see Nastran, Abaqus, and PamCrash numbers.
  • Amiga500 - Sunday, December 28, 2008 - link

    Just to reinforce Alpha's comments... 25 million being standard isn't even close to being true!!!

    Sure, if you want to perform a full global simulation, you'll need that number (and more) for something like a car (you can add another 0 to the 25 million for F1 teams!), or an aircraft.

    But mostly, the problems are broken down into much smaller sizes, under 10 million cells. A rule of thumb with Fluent is 700K cells for every GB of RAM... so working on that principle, you'd need a 16GB workstation for 10 million cells...

    Anything more, and you'll need a full cluster to make the turnaround times practical. :-)
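The 700K-cells-per-GB figure above is the commenter's rule of thumb, not an official Fluent sizing guide, but it is easy to sanity-check the 16GB claim with a quick sketch:

```python
import math

CELLS_PER_GB = 700_000  # rule of thumb quoted in the comment above

def ram_needed_gb(cells):
    """RAM in GB for a given cell count, rounded up to a whole GB."""
    return math.ceil(cells / CELLS_PER_GB)

print(ram_needed_gb(10_000_000))  # 15 GB, so a 16GB workstation fits
print(ram_needed_gb(25_000_000))  # 36 GB: cluster territory
```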
  • alpha754293 - Saturday, December 27, 2008 - link

    "Standard Fluent job these days has 25+ million elements"

    That's NOT entirely true. The size of the simulation is dependent on what you are simulating. (And also hardware availability/limitations).

    Besides, I don't think that Johan actually ran the benchmarks himself. He just dug up the results database and mentioned it here (wherever the processor specs were relevant).

    He also notes that the benchmark itself can be quite costly (e.g. a license of Fluent can easily be $20k+), and there needs to be a degree of understanding to be able to run the benchmark itself (which, as he also stated, neither he nor the lab has).

    And on that note - Johan, if you need help in interpreting the HPC results, feel free to get in touch with me. I sent you an email on this topic (I'm actually running the LS-DYNA 3-car collision benchmark right now as we speak on my systems). I couldn't find the case data to be able to run the Fluent benchmarks as well, but IF I do; I'll run it and I'll let you know.

    Otherwise, EXCELLENT. Looking forward to lots more coming from you once the NDA is lifted on the Nehalem Xeons.
  • zpdixon42 - Wednesday, December 24, 2008 - link


    The article makes a point of explaining how fair the Opteron Killer? section is, by assuming that unbuffered DDR3-1066 will provide results close enough to registered DDR3-1333 for Nehalem. But what is nowhere mentioned is that all of the benchmarks unfairly penalize the 45nm Opteron, because registered DDR2-800 was used whereas faster DDR2-1067 is supported by Shanghai. If you go to such great lengths justifying the memory specs for Intel, IMHO you should mention that point for AMD as well.

    The Oracle Charbench graph shows "Xeon 5430 3.33GHz". This is wrong, it's the X5470 that runs at 3.33GHz, the E5430 runs at 2.66GHz.

    The 3DSMax 2008 32 bit graph should show the Quad Opteron 8356 bar in green color, not blue.

    In the 3DSMax 2008 32 bit benchmark, some results are clearly abnormal. For example, a Quad Xeon X7460 2.66GHz is beaten by an older microarchitecture running at a slower speed (Quad Xeon 7330 2.4GHz). Why is that?

    The article mentions in 2 places the Opteron "8484", this should be "8384".

    The Opteron Killer? section says "the boost from Hyper-Threading ranges from nothing to about 12%". It should rather say "ranges from -5% to 12%" (ie. HT degrades performance in some cases).

    There is a typo in the same section: "...a small advantage at* it can use..." s/at/as/.

    Also, I think CPU benchmarking articles should draw graphs to represent performance/dollar or performance/watt (instead of absolute performance), since that's what matters in the end.
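The performance-per-dollar idea in the comment above is straightforward to chart. A minimal sketch follows; the CPU names are real, but the scores and prices are made-up placeholder numbers for illustration, not figures from the article:

```python
def perf_per_dollar(score, price):
    """Higher is better; score and price must use consistent units across systems."""
    return score / price

# Placeholder numbers only, chosen to illustrate the metric.
systems = {
    "Opteron 2384": {"score": 100.0, "price": 989.0},
    "Xeon E5450":   {"score": 105.0, "price": 915.0},
}

for name, s in systems.items():
    print(f"{name}: {perf_per_dollar(s['score'], s['price']):.3f} points per dollar")
```

As Johan notes in his reply, the metric only makes sense relative to total system cost, which is one reason reviews often stick to absolute performance.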
  • JohanAnandtech - Wednesday, December 24, 2008 - link

    "But what is nowhere mentioned is that all of the benchmarks unfairly penalize the 45nm Opteron because registered DDR2-800 was used whereas faster DDR2-1067 is supported by Shanghai. "

    Considering that Shanghai has just made DDR-2 800 (buffered) possible, I think it is highly unlikely that we'll see buffered DDR-2 1066 very soon. Is it possible that you are thinking of Deneb which can use DDR-2 1066 unbuffered?

    "In the 3DSMax 2008 32 bit benchmark, some results are clearly abnormal. For example a Quad Xeon X7460 2.66GHz is beaten by an older microarchitecture running at a slower speed (Quad Xeon 7330 2.4GHz). Why is that ? "

    Because 3DS Max does not like 24 cores.

    "Also, I think CPU benchmarking articles should draw graphs to represent performance/dollar or performance/watt (instead of absolute performance), since that's what matters in the end. "

    In most cases performance/dollar is a confusing metric for server CPUs, as it greatly depends on what application you will be running. For example, if you are spending 2/3 of your money on a storage system for your OLTP app, the server CPU price is less important. It is better to compare to similar servers.

    Performance/Watt was impossible, as our quad-socket board had a beta BIOS which disabled PowerNow!. That would not have been fair.

    I'll check out the typos you have discovered and fix them. Thx.
