The new methodology

At AnandTech, giving you real-world measurements has always been the goal of this site. Contrary to the vast majority of IT sites out there, we don't believe in letting some consultant or analyst spell it out for you. We give you our measurements, as close to the real world as possible. We give you our opinion based on those measurements, but ultimately it is up to you to decide how to interpret the numbers. If you spot a mistake in our reasoning, you tell us in the comment box; we will investigate it and get back to you. It is a slow process, but we firmly believe in it. And that is what happened with our articles on “dynamic power management” and “testing low power CPUs”.

The former article was written to understand how current power management techniques work. We needed a very simple, well-understood benchmark to keep the complexity down, and it allowed us to learn a lot about the Dynamic Voltage and Frequency Scaling (DVFS) techniques that AMD and Intel use. But as we admitted, our Fritz Chess benchmark was and is not a good choice if you want to apply these new insights to your own datacenter.

“Testing low power CPUs” went into much less depth, but used a real-world benchmark: our vApus Mark I, which simulates a heavy consolidated virtualization load. The numbers were very interesting, but the article had one big shortcoming: it only measured at 90-100% load or at idle. The reason is that the vApus benchmark score was based on throughput, and to measure the throughput of a system, you have to stress it close to its maximum. So we could not measure performance accurately unless we went for top performance. That is fine for an HPC workload, but not for a commercial virtualization/database/web workload.

Therefore, based on our readers' feedback, we went for a different approach. We launched “one tile” of the vApus benchmark on each of the tested servers. Such a tile consists of an OLAP database (4 vCPUs), an OLTP database (4 vCPUs) and two web VMs (2 vCPUs each), for a total of 12 virtual CPUs. These 12 virtual CPUs are much less than what a typical high-end dual CPU server can offer: from the point of view of the Windows 2008, Linux or VMware ESX scheduler, the best Xeon 5600 (“Westmere”) and Opteron 6100 (“Magny-Cours”) offer 24 logical or physical cores. To the hypervisor, those logical or physical cores are Hardware Execution Contexts (HECs), and the hypervisor schedules VMs onto these HECs. Typically, each of the 12 virtual CPUs needs somewhere between 50 and 90% of one core. Since we have twice as many cores or HECs as required, we expect the typical load on the complete system to hover between 25 and 45%. Although this is not perfect, it is much closer to the real world. Most virtualized servers never sit idle for long: with so many VMs, there is always something to do. System administrators also want to avoid CPU loads over 60-70%, as this can make response times go up exponentially.
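The back-of-the-envelope arithmetic above can be sketched as follows (a minimal illustration; the function name and constants are ours, not part of the benchmark):

```python
# Sketch of the expected-load estimate: 12 vCPUs, each needing 50-90%
# of one core, scheduled onto 24 HECs (logical/physical cores).
VCPUS_PER_TILE = 4 + 4 + 2 + 2   # OLAP + OLTP + two web VMs = 12 vCPUs
HECS = 24                        # cores of a top Xeon 5600 / Opteron 6100

def expected_system_load(per_vcpu_demand):
    """Fraction of the whole system kept busy when every vCPU
    needs `per_vcpu_demand` (0.0-1.0) of a single core."""
    return VCPUS_PER_TILE * per_vcpu_demand / HECS

print(f"{expected_system_load(0.50):.0%}")  # low end:  25%
print(f"{expected_system_load(0.90):.0%}")  # high end: 45%
```

With twice as many HECs as vCPUs, the demand range of 50-90% per vCPU maps directly to the 25-45% system load quoted above.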

There is more. Instead of measuring throughput, we focus on response time. At the end of the day, the maximum number of pages your server can serve is nice to know, but not that important. The response time your system offers at a certain load matters much more, and users will appreciate low response times. Nobody is going to be happy about the fact that your server can serve up to 10,000 requests per second if each page takes 10 seconds to load.
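A minimal sketch of why response time blows up near peak throughput, using the classic M/M/1 queueing approximation (our own illustration with made-up numbers, not part of our test setup):

```python
# Mean response time of an M/M/1 queue: W = 1 / (mu - lam), where mu is
# the service rate (max requests/s) and lam is the arrival rate.
def mean_response_time(arrival_rate, max_throughput):
    """Mean response time in seconds; requires arrival_rate < max_throughput."""
    if arrival_rate >= max_throughput:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (max_throughput - arrival_rate)

MU = 10000.0  # hypothetical server maximum: 10,000 requests per second
for load in (0.30, 0.60, 0.90, 0.99):
    rt_ms = mean_response_time(load * MU, MU) * 1000
    print(f"{load:.0%} load -> {rt_ms:.2f} ms mean response time")
```

Even in this simplified model, response time grows by two orders of magnitude between a comfortable 30% load and a nearly saturated 99%, which is exactly why measuring only at 90-100% load tells you little about a production server.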


49 Comments


  • Zstream - Thursday, July 15, 2010 - link

    It kills the AMD low power motto :(
  • duploxxx - Thursday, July 15, 2010 - link

    lol, all you can say about this article is something about AMD. Looks like you need an update on server knowledge. Since the arrival of Nehalem, Intel has had the best offering when you need the highest-performance parts, and its low-power parts still give the best performance. Since MC arrived things got a bit different, mostly thanks to aggressive pricing on all the mid-range parts, but Intel parts are still favored in the high-end and low-power bins. Certainly in the area of virtualization AMD does very well.

    What is shown here should be known to many people who design virtual environments: virtualization and low-power parts don't match if you run applications that need CPU power and responsiveness all the time. The L series can only be very useful for a huge bunch of "sleeping" VMs.

    It would be interesting to compare with AMD, but 9 times out of 10 both the low-power and high-power Intel parts will be more interesting when you only run 1 tile: AMD's huge core count with lower IPC will lose against Intel's higher IPC per core in this battle.
  • Zstream - Thursday, July 15, 2010 - link

    Excuse me? I am quite aware of low-power chips. The point AMD has made in the past four to five years is that its low-power, high-performance parts can match Intel's performance and still save you money. I have been to a number of AMD web conferences and seminars where they state the above.
  • MrSpadge - Thursday, July 15, 2010 - link

    I have been to a number of AMD web conferences and seminars where they state the above.


    Not sure if you're being sarcastic here, as it's obvious AMD would tell you this.

    But regarding the actual question: you'd be about right if you compared K8 or Phenom I based Opterons with Core 2 based ones, and you'd be very right if you compared them to Phenom II. The performance of those Intel chips was held back by the FSB and FB-DIMMs, and their power efficiency was almost crippled by the FB-DIMMs. But Nehalem changed all of that.

    MrS
  • duploxxx - Friday, July 16, 2010 - link

    4-5 years... Nehalem launched in Q1 2009, and since then everything changed. Before that, Xeon parts suffered from FB-DIMM power consumption and the FSB bottleneck, which is why AMD was still king in performance per watt and could keep up in maximum performance. Then Nehalem was king; Istanbul closed the gap a bit but missed raw GHz and had higher power needs due to DDR2. The MC parts in turn leveled this Intel advantage, so now there is a choice again, but in low power Nehalem/Gulftown is still king.
  • Penti - Saturday, July 17, 2010 - link

    It invalidates the low-power versions of AMD's chips as well. That's his point, I believe.
  • stimudent - Thursday, July 15, 2010 - link

    Not really.
    If there can't be two sides to the story or a more diverse perspective, then it should not have been published. Next time, wait a little longer for the parts to arrive - try harder.
  • MrSpadge - Friday, July 16, 2010 - link

    A comparison to AMD would have been nice, but this article is not Intel vs. AMD!

    It already has two sides: high-power vs. low-power Intels. And Johan found something very important and worth reporting. There is no need to blur the point by including other chips.

    MrS
  • Zstream - Thursday, July 15, 2010 - link

    I know we have the VMware results, but could someone do an analysis of AMD vs. Intel chips?

    For instance, I can get a 12-core AMD chip or a 6-core/12-thread chip from Intel. Has anyone done any tests with terminal servers, or real-world usage of a VM (XP desktop), versus core count?

    I would think that physical 12C vs. 6C impacts real-world performance by a considerable amount.
  • tech6 - Thursday, July 15, 2010 - link

    Great work AnandTech - it's about time someone took the low-power TCO claims to task.
