The past several months have seen both Intel and AMD introduce interesting updates to their CPU lines. Intel started with the E-stepping of the Xeon: even at 3GHz, the four cores of the Xeon 5450 need at most 80W, and if speed is all you care about, a 120W 5470 is available at 3.33GHz. The big news came of course from AMD. The "only native x86 quad-core" is finally shining bright thanks to a very successful transition to 45nm immersion lithography, as you can read here. The result is a faster and larger 6MB L3 cache, higher clock speeds, and lower memory latency. AMD's quad-core is finally ready to be a Xeon killer.

So it was time for a new server CPU shootout, as server buyers are confronted with quickly growing server CPU price lists. Speaking of price lists: is someone at marketing taking revenge on a strict math teacher who made them suffer a few years ago? How else can you explain that the Xeon 5470 is faster than the 5472, and that the Xeon 5472 and 5450 run at the same clock speed? The deranged Intel (and to a lesser degree AMD) numbering system now forces you to read through spec sheets the size of a phone book just to get an idea of what you are getting. Or you could use a full-blown search engine to understand what exactly you can or will buy. The marketing departments are happy though: besides the technical white papers you need to read to build a server, you now also need to read white papers simply to buy a CPU. Market segmentation and creative numbering…a slightly insane combination.

Anyway, if you are an investor trying to understand how the different offerings compare, or you are out to buy a new server and are asking yourself what CPU should be in there, this article will help guide you through the newest offerings of Intel and AMD. In addition, as the Xeon 55xx - based on the Nehalem architecture - is not far off, we will also take an early look at what this CPU will bring to the table. This article is different from the previous ones, as we have changed the collection of benchmarks we use to evaluate server CPUs. Read on, and find out why we feel this is a better and more realistic approach.

Breaking out of the benchmark prison

When I first started working on this article, I immediately started to run several of our "standard" benchmarks: CINEBENCH, Fritz Chess, etc. As I thought about our "normal" benchmark suite, I quickly realized that this article would become imprisoned by its own benchmarks. It is nice to have a mix of exotic and easy-to-run benchmarks, but is it wise to build an article and its analysis around such an approach? How well does this reflect the real world? If you are actually buying a server, or you are trying to understand how competitive AMD's products are with Intel's, such a benchmark mix probably only confuses the people trying to make a decision. For example, it is very tempting to run a lot of rendering and other rarely used benchmarks as they are either easy to run or easy to find, but this gives a completely distorted view of how the different products compare. Of course, running more benchmarks is always better, but if we want to give you good insight into how these server CPUs compare, there are two ways to do it: the "microarchitecture" approach and the "buyer's market" approach.

With the microarchitecture approach, you try to understand how well a CPU deals with branch/SSE/Integer/Floating Point/Memory intensive code. Once you have analyzed this, you can deduce how a particular piece of software will probably behave. It is the approach we took in AMD's 3rd generation Opteron versus Intel's 45nm Xeon: a closer look. It is a lot of fun to write these types of articles, but they only help those who have profiled their own code understand how well the CPU will handle it.

The second approach is the "buyer's market" approach. Before we dive into new Xeons and Opterons, we should ask ourselves: "Why are people buying these server CPUs?" Luckily, IDC reports[1] answer this question. Even though you have to take the results below with a grain of salt, they give us a rough idea of what these CPUs are used for.

[Chart: The reasons why people buy a 2-socket server (IDC)]

[Chart: The reasons why people buy a 4-socket server (IDC)]

IT infrastructure servers like firewalls, domain controllers, and e-mail/file/print servers are the most common reasons why servers are bought. However, file and print servers, domain controllers, and firewalls are rarely limited by CPU power. So we have the luxury of ignoring them: the CPU decision is a lot less important in these kinds of servers. The same is true for software development servers: most of them are for testing purposes and are underutilized. Mail servers (probably 10% out of the 32-37%) are more interesting, but we currently have no really good benchmark comparisons available, since Microsoft's Exchange benchmark was unfortunately retired. We are currently investigating which e-mail benchmark should be added to our benchmarking suite. However, it seems that most mail server benchmarking boils down to storage testing. This subject is to be continued, and suggestions are welcome.

Collaborative servers deserve more attention too, as they comprise 14 to 18% of the market. We hope to show you some benchmarks on them later; developing new server benchmarks unfortunately takes time.

ERP and heavy OLTP databases account for up to 17% of shipments, and this market is even more important if you look at revenue. That is why we discuss the SAP benchmarks published elsewhere, even though they are not run by us. We'll add Oracle Swingbench in this article to make sure this category of software is well represented. You can also check Jason and Ross's AMD Shanghai review for detailed MS SQL Server benchmarking. With Oracle, MS SQL Server, and SAP, which together dominate this part of the server market, we have this segment well covered.

Reporting and OLAP databases, also called decision support databases, will be represented by our MySQL benchmark. Last but not least, we'll add the MCS eFMS web server test -- an ultra real-world test -- to our benchmark suite to make sure "heavy web" applications are covered too. It is not perfect, but this way we cover the actual market a lot better than before.

Secondly, we have to look at virtualization. According to IDC, 35% of the servers bought in 2007 were bought to be virtualized, and IDC expects this number to climb to 52% in 2008 [2]. Unfortunately, as soon as we upgraded the BIOS of our quad-socket platform to support the latest Opteron, it would no longer let us install ESX or enable power management. That is why we had to postpone our server review for a few weeks, and why we split it into two parts. For now, we will look at the VMmark submissions to get an idea of how the CPUs compare when it comes to virtualization.

In a nutshell, we are moving towards a new way of comparing server CPUs: we combine the more reliable industry-standard benchmarks (SAP, VMmark) with our own benchmarks, aiming for a benchmark mix that comes closer to what servers are actually bought for. That should give you an overview that is as fair as possible. Performance/watt is still missing in this first part, but a first look is already available in the Shanghai review.

Benchmark Configuration

  • Bruce Herndon - Tuesday, December 23, 2008 - link

    I'm surprised by your comments. You claim that VMmark is a CPU/memory-centric benchmark. If I look at the raw data in the VMmark disclosure for Dell's R905 score of 20.35 @ 14 tiles, I see that the benchmark is driving 250-300 MB/s of disk IO across several HBAs and storage LUNs. This characteristic scales with the various systems mentioned in the article.

    As a designer of VMmark, I happen to know that both storage bandwidth (for the fileserver) and latency (for mail and database) are critical to achieving good VMmark scores. Furthermore, the webserver drives substantial network IO. The only purely CPU-centric component of VMmark is the javaserver. Overall, the benchmark does exercise the entire virtualization solution - hypervisor, CPU, memory, disk, and network.
    Reply
  • cdillon - Tuesday, December 23, 2008 - link

    While SAS and Infiniband share some connectors and reach similar data rates, they are incompatible technologies with two different purposes. Infiniband can be used for disk shelf connections, but it is less common and definitely not the case here. You should not call the connection between the Adaptec 5805 controller and the disk shelf an "Infiniband connection"; even if it is using Infiniband connectors and cables, it is simply a SAS connection.

    Reply
  • JohanAnandtech - Tuesday, December 23, 2008 - link

    Well, the physical layer is Infiniband; the protocol used is SCSI. I can understand that calling it an "Infiniband connection" may be confusing, but the cable is an Infiniband cable. Reply
  • shank15217 - Friday, December 26, 2008 - link

    Anand, I think the above poster is right. The Adaptec RAID 5805 uses SFF-8087 connectors, but the protocol is SSP (Serial SCSI Protocol). Infiniband is a physical layer protocol that shares the same connector as SAS, but they are not the same. Nothing in the Adaptec RAID 5805 spec mentions Infiniband as a supported protocol.

    http://www.adaptec.com/en-US/products/Controllers/...">http://www.adaptec.com/en-US/products/C...ers/Hard...
    Reply
  • niva - Tuesday, December 23, 2008 - link

    I'm not sure you can run your same ol' benchmark for rendering, and I'd really like more insight into what you guys are rendering and whether it's indeed using all 16, 24 (six-core 4P system), or 32 (with Hyper-Threading) cores on the system.

    What renderer, what scene, details details...

    These chips get gobbled up by render farms and this is indeed where they can really flex their muscles to the fullest.
    Reply
  • JohanAnandtech - Tuesday, December 23, 2008 - link

    Just click on the link under "we have performed so many times before" :-) Reply
  • akinneyww - Tuesday, December 23, 2008 - link

    I read DailyTech and anandtech.com to keep up with the latest in IT. I appreciate the thought that has gone into putting together this article. I would like to see more articles like this one. Reply
  • Jammrock - Tuesday, December 23, 2008 - link

    The VMware results shocked me the most. I know AMD has been working hard on the virtualization sector and it looks like their work has paid off. Reply
  • classy - Tuesday, December 23, 2008 - link

    With the rapid increase of virtualization, AMD is looking really strong. We have begun using VMware 3.5 and are expanding our use of it. Virtualization is truly becoming a big factor in server choice. Reply
