Challenging. That is the least you can say about the economic climate for the launch of Intel's newest "Nehalem EP Xeon" platform. However, challenges must be met, and they certainly make things more interesting. Server vendors won't convince many people to buy a new Intel Nehalem (or AMD Shanghai) based server just because "performance is higher". That argument only works in the processing-hungry HPC and rendering worlds, where less time per task translates directly into time and cost savings. Hence, the challenge for AMD and Intel is to convince the rest of the market - the other 95% or so - that the new platforms provide a compelling ROI (Return On Investment).
 
The most productive or intensively used servers generally get replaced every 3 to 5 years. Based on its own inquiries, Intel estimates that the current installed base consists of 40% servers with dual-core CPUs and 40% servers with single-core CPUs.
 

That means that Intel's Nehalem platform (and AMD's Shanghai/Opteron 23xx platform) has to convince people to replace their dual-core Opteron, dual-core Xeon 50xx ("Dempsey"), and Xeon "Irwindale" servers. There are two great ways to turn a much more powerful server into a moneymaking and cost-saving machine. One is to use fewer servers in a cluster, which is not applicable to all companies. The other, more popular approach is to consolidate more servers onto the same physical machine using virtualization. The most important arguments for upgrading your servers are therefore performance/watt and support for virtualization.

Intel's newest platform holds the promise of better virtualization support by adding EPT and lowering world switch times. However, probably the largest bottleneck in the past was the amount of available bandwidth. Bandwidth is frequently an overrated performance factor, as few applications - the HPC world excluded - get a boost from, for example, using three instead of two memory channels. That changes dramatically when you are running tens of virtual machines on top of a physical machine: many applications with moderate bandwidth demands merge into one big bandwidth-hogging monster. The challenge is thus to provide faster access to memory, lower energy consumption, and better support for virtualization. On paper, the Nehalem architecture can definitely play all of those trump cards. Anand has provided a detailed description of the Nehalem architecture. The most important improvements for business applications are:

  • The integrated memory controller talks to its own local memory or to remote memory (NUMA). Memory access takes between 27 and 54 ns (80 to 161 cycles); compare this to the Xeon 5450 at the same clock speed, where memory access via the memory controller in the chipset can take up to 123 ns. The closest competitor (Opteron "Shanghai") needs between 32 and 71 ns. (A sketch of how such latencies are typically measured follows below this list.)
  • A native quad-core design with a fast 33-cycle L3 cache makes it easy for the L2 caches to exchange cache coherency information.
  • Fast CPU interconnects make sure that the remaining snoops happen very quickly and do not interfere with other traffic.
  • The memory controller has up to three channels. A dual-CPU configuration has access to 35GB/s of memory bandwidth (measured with STREAM) when using DDR3-1333; the latest dual Opteron achieves 19.4GB/s with DDR2-800. (A simplified STREAM triad sketch follows below this list.)
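
The article does not spell out how these latencies were measured, but numbers like these are typically obtained with a pointer-chasing microbenchmark: walk a randomly permuted chain through a buffer far larger than the L3 cache, so every load misses and depends on the previous one. The C sketch below is a minimal, hypothetical illustration of that technique, not the tool behind the figures above. To compare local versus remote NUMA latency you could, assuming the numactl utility is available, pin the run with for example "numactl --cpunodebind=0 --membind=1 ./latency".

```c
/*
 * Minimal pointer-chasing latency sketch (illustrative only).
 * Build: gcc -O2 latency.c -o latency   (older glibc may need -lrt)
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64UL * 1024 * 1024 / sizeof(void *))  /* 64 MB buffer, far larger than L3 */
#define ITERS 50000000L

int main(void)
{
    void **buf = malloc(N * sizeof(void *));
    size_t *idx = malloc(N * sizeof(size_t));
    if (!buf || !idx) return 1;

    /* Build a random cyclic permutation so the hardware prefetcher cannot help.
       (rand() is good enough for a sketch; a real tool would use a better RNG.) */
    for (size_t i = 0; i < N; i++) idx[i] = i;
    srand(42);
    for (size_t i = N - 1; i > 0; i--) {          /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (size_t i = 0; i < N; i++)
        buf[idx[i]] = &buf[idx[(i + 1) % N]];

    /* Chase the chain: each load depends on the previous one, so
       elapsed time / ITERS approximates the average load latency. */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    void **p = &buf[idx[0]];
    for (long i = 0; i < ITERS; i++)
        p = (void **)*p;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("avg load latency: %.1f ns (p=%p)\n", ns / ITERS, (void *)p);
    free(idx); free(buf);
    return 0;
}
```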
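
The bandwidth figures above come from STREAM. For readers who have not used it, here is a simplified, single-file sketch of the triad kernel that STREAM's bandwidth number is based on. It is a stand-in with assumed array sizes and flags, not the official benchmark (available at http://www.cs.virginia.edu/stream/); note that a single thread will not saturate three DDR3 channels, hence the optional OpenMP pragma.

```c
/*
 * Simplified STREAM-triad-style bandwidth sketch (illustrative only).
 * Build: gcc -O2 -fopenmp triad.c -o triad
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 20000000L    /* 20M doubles per array (~160 MB each): far beyond any cache */
#define NTIMES 10

int main(void)
{
    double *a = malloc(N * sizeof(double));
    double *b = malloc(N * sizeof(double));
    double *c = malloc(N * sizeof(double));
    if (!a || !b || !c) return 1;

    for (long i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    double best = 1e30;
    for (int k = 0; k < NTIMES; k++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        #pragma omp parallel for           /* spread the work over all cores (needs -fopenmp) */
        for (long i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];      /* triad: two reads and one write per element */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        if (s < best) best = s;            /* keep the best (fastest) run, as STREAM does */
    }

    /* STREAM counts 3 x 8 bytes of traffic per element for triad. */
    double gbps = 3.0 * 8.0 * (double)N / best / 1e9;
    printf("best triad bandwidth: %.2f GB/s (a[0]=%.1f)\n", gbps, a[0]);
    free(a); free(b); free(c);
    return 0;
}
```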

Basically, Nehalem is Intel's version of the improvements found in the AMD Barcelona platform, only better (or at least that's the goal). Let's see what it can do in reality.

Comments (44)

  • rkchary - Tuesday, June 16, 2009 - link

    We have a customer who is interested in upgrading to Nehalem. He's running Windows with an Oracle database for SAP Enterprise Portals.

    Could you kindly let us know your recommendations, please?

    The number of concurrent users would be approximately 3000 Portal users.

    Keenly looking forward to your response; if you could point to any instances of Nehalem installed in an SAP environment for production use, that would be a great help.

    Regards,
    Chary
  • Adun - Thursday, April 9, 2009 - link

    Hello,

    I understand the PHP not-enough-threads explanation as to why Dual X5570 doesn't scale up.

    But can anyone please explain why adding a second AMD Opteron 2384 increases the score from 42.9 to 63.9, while adding a second Xeon X5570 doesn't bring a similar increase?

    Thank you for the article,

    Adun.
  • stimudent - Thursday, April 2, 2009 - link

    Was it really too much effort to clean off the processor before posting a picture of it? Or were they trying to show that it was used, tested?
  • LizVD - Friday, April 3, 2009 - link

    Would you perhaps like us to draw a smiley face on it as well? ;-)
  • GazzaF - Wednesday, April 1, 2009 - link

    Well done on an excellent review using as many real-world tests as possible. The VMware test is a real eye opener and shows how the 55xx can match double the number of CPUs from the last generation of Xeons *AND* crucially save $$$$ on licensing for Windows, MS SQL, and other per-socket licensed software, plus the power saving, which is again a financial saving if you rent rack space in a datacentre.

    I eagerly await your own in-house VM tests. Please also consider testing with Windows 2008 Hyper-V, which I think doesn't have the 55xx optimisations that the latest release of VMware has (and might not have until R2?).

    Thanks for the time you put in to running the endless tests. The results make a brilliant business case for anyone wanting to upgrade their servers. You must have had the chips a good week before Intel officially launched them. :-) I do feel sorry for AMD though. I'm sure they have plenty of motivation to come back with a vengeance like they did a few years ago.
  • JohanAnandtech - Thursday, April 2, 2009 - link

    Thanks! Good to hear from another professional. I believe the current Hyper-V R2 beta already has some form of support for EPT.

    Our virtualization testing is well under way. I'll give an update soon on our blog page.

  • Lifted - Wednesday, April 1, 2009 - link

    You mention octal servers from Sun and HP for VMs, but does anybody really use these systems for VMs? I can't imagine why anybody would, since you are paying a serious premium for 8 sockets vs. 2 x 4-socket servers, or even 4 x 2-socket servers. The redundancy options are also much lower when running only a few 8-socket servers vs. many 2- or 4-socket servers utilizing VMotion, and the expansion options for NICs and HBAs are obviously far more limited. From what I've seen, most 8-socket systems are for databases.
  • Veteran - Wednesday, April 1, 2009 - link

    What I noticed after reading the review is that there are very few benchmarks that even slightly favor AMD.

    For example, there is only one 3ds Max test (so not very useful); at least two are needed.
    Only one virtualization benchmark, which is really a shame....
    Virtualization is becoming so important and you guys only throw in one test?

    Besides that, the review feels a bit biased towards Intel, but I will check some other reviews of the Xeon 5570.
  • duploxxx - Wednesday, April 1, 2009 - link

    The virtualization benchmarks come from the official VMmark scores.

    However, there is something really strange going on in the results...

    HP ProLiant DL370 G6 - VMware ESX build #148783 - VMmark v1.1 - 23.96 @ 16 tiles - 2 sockets, 8 cores, 16 threads - 03/30/09
    Dell PowerEdge R710 - VMware ESX build #150817 - VMmark v1.1 - 23.55 @ 16 tiles - 2 sockets, 8 cores, 16 threads - 03/30/09
    Inspur NF5280 - VMware ESX build #148592 - VMmark v1.1 - 23.45 @ 17 tiles - 2 sockets, 8 cores, 16 threads - 03/30/09
    Intel (Supermicro 6026-NTR+) - VMware ESX v3.5.0 Update 4 - VMmark v1.1 - 14.22 @ 10 tiles - 2 sockets, 8 cores, 16 threads - 03/30/09

    So let's see: all the pre-release ESX 3.5 Update 4 builds get a really high score of 16 tiles, almost as much as a 4-socket Shanghai, even though VMware's own performance team has stated that we should never count a HyperThreading core as a real CPU in VMware (even with the new HT-aware code), yet the benchmark shows a big performance increase. And no, it is not, as AnandTech states, because of the extra available memory and its bandwidth; those VMmark runs are not memory starved. Now look at the official Intel result with ESX Update 4: it gets 10 tiles and a healthy increase, which from a technical point of view seems much more realistic. All the other marketing stuff like world switch times etc. is nice, but again it is in the same league as the current Shanghai.
  • JohanAnandtech - Wednesday, April 1, 2009 - link

    What kind of tests are you looking for? The TechReport guys have a lot of HPC tests; we are focusing on the business apps.

    "very few benches on benchmarks a little bit favored by AMD."

    That is a really weird statement. First of all, what is a test favored by AMD?

    Secondly, this new kind of OLTP/OLAP testing was introduced in the Shanghai review, and IMHO it really showed that there was a completely wrong perception about Harpertown vs. Shanghai, because Shanghai won the tests that mattered most to the market. Many tests (including Intel's own) emphasized purely CPU-intensive stuff like Black-Scholes, rendering, and HPC workloads. But that is a very small percentage of the market, and it created the impression that Intel was on average faster, which was absolutely not the case.

    "Only 1 virtualization benchmark, which is really a shame..."

    Repeat that in a few weeks :-). We have just successfully concluded our testing on Nehalem.

    Personally, I am a bit shocked by the "not enough tests" remark :-). Any professional knows how hard these OLTP/OLAP tests are to set up and how much time they take, but they might not appeal to the enthusiast, I am not sure.


