The upcoming Intel Nehalem CPU has been in the spotlight for months now. In contrast, and despite its huge die size and 1.9 billion (!) transistors, the six-core Xeon 74xx is a wallflower for both the public and Intel's marketing. However, if you've invested in the current Intel platform, the newly launched Xeon 74xx series deserves a lot more attention.

The Xeon 74xx, formerly known as Dunnington, is indeed a very interesting upgrade path for the older quad socket platform. All Xeon 74xx CPUs use the same mPGA604 socket as previous Xeons and are electrically compatible with the Xeon 73xx series. The Xeon 73xx, also known as Tigerton, was basically the quad-core version of the Xeon 53xx (Clovertown) that launched at the end of 2006. The new hex-core Dunnington combines six of the latest 45nm Xeon Penryn cores on a single die. As you may remember from our dual socket 45nm Xeon 54xx review, the 45nm Penryn core is about 10% to 20% faster than its older 65nm brother (Merom). There is more: an enormous 12MB to 16MB L3 cache ensures that those six cores access high latency main memory a lot less. This huge L3 also reduces the amount of "cache syncing" traffic between the CPUs, an important bottleneck for the current Intel server platforms.


2.66GHz, 6 cores, 3x3MB L2, and 16MB L3 cache: a massive new Intel CPU

With at least 10% to 20% better performance per core, two extra cores per CPU package, and an upgrade that requires little more than a BIOS update, the new Xeon X7460 should be an attractive proposition if you are short on processing power.

Six Cores?

Dunnington was announced at the past IDFs as "extending the MP leadership". Readers of our last quad socket report will understand that this is a questionable claim. Since AMD introduced the Opteron 8xxx in April 2003, there has never been a moment when Intel led the dance in the quad socket server market. Sure, the Intel 73xx was able to outperform the AMD chip in some areas (rendering), but the AMD quad-core was still able to keep up with the Intel chip in Java, ERP, and database performance. When it comes to HPC, the AMD chip was clearly in the lead.

Dunnington might not be the darling of Intel marketing, but the chip itself is a very aggressive statement: let us "Bulldoze" AMD out of the quad socket market with a truly gigantic chip that only Intel can produce without losing money. Intel is probably - courtesy of its impressive ultra low leakage 45nm high-K process technology - the only manufacturer capable of producing large quantities of CPUs containing 1.9 billion transistors, resulting in an enormous die size of 503 mm2. That is almost twice the size of AMD's upcoming 45nm quad-core CPU, Shanghai. Even IBM's flagship POWER6 processor (up to 4.7GHz) measures only 341 mm2 and has 790 million transistors.

Processor Size and Technology Comparison
CPU              | Transistor count (million) | Process | Die size (mm2) | Cores
Intel Dunnington | 1900                       | 45 nm   | 503            | 6
Intel Nehalem    | 731                        | 45 nm   | 265            | 4
AMD Shanghai     | 705                        | 45 nm   | 263            | 4
AMD Barcelona    | 463                        | 65 nm   | 283            | 4
Intel Tigerton   | 2 x 291 = 582              | 65 nm   | 2 x 143 = 286  | 4
Intel Harpertown | 2 x 410 = 820              | 45 nm   | 2 x 107 = 214  | 4
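As a rough illustration of what the table implies, here is a quick transistor-density calculation using the figures above (a back-of-the-envelope sketch based on the quoted numbers, not official density figures):

```python
# Transistor density (million transistors per mm2), using the table's figures
chips = {
    "Intel Dunnington (45nm)": (1900, 503),
    "Intel Nehalem (45nm)":    (731, 265),
    "AMD Shanghai (45nm)":     (705, 263),
    "AMD Barcelona (65nm)":    (463, 283),
}

for name, (mtransistors, die_mm2) in chips.items():
    print(f"{name}: {mtransistors / die_mm2:.2f} M transistors/mm2")
```

Dunnington's noticeably higher density at the same 45nm node as Nehalem and Shanghai mostly reflects how much of its die is SRAM cache, which packs far more densely than core logic.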

The huge, somewhat irregular die - notice how the two cores in the top right corner are further away from the L3 cache than the other four - raises some questions. Such an irregular die could introduce extra wire delays, reducing the clock speed somewhat. Why did Intel not choose to go for an eight-core design? The basic explanation that Patrick Gelsinger, General Manager of Intel's Digital Enterprise Group, gave was that simulations showed that a six-core with a 16MB L3 outperformed an eight-core with a smaller L3 in the applications that matter the most in the 4S/8S socket space.


Layout of the new hex-core

TDP was probably the most important constraint that determined the choice of six cores, since core logic consumes a lot more power than cache. An eight-core design would have made it necessary to reduce the clock speed too much. Even at 65nm, Intel was already capable of producing caches that needed less than 1W/MB, so we can assume that the 16MB cache consumes around 16W or less. That leaves more than 100W for the six cores, which allows decent clock speeds at very acceptable TDPs, as you can see in the table below (a quick sanity check of this budget follows the table).

Processor Speed and Cache Comparison
Xeon model | Speed (GHz) | Cores | L2 Cache (MB) | L3 Cache (MB) | TDP (W)
X7460      | 2.66        | 6     | 3x3           | 16            | 130
E7450      | 2.4         | 6     | 3x3           | 12            | 90
X7350      | 2.93        | 4     | 2x4           | 0             | 130
E7440      | 2.4         | 4     | 2x3           | 12            | 90
E7340      | 2.4         | 4     | 2x4           | 0             | 80
E7330      | 2.4         | 4     | 2x4           | 0             | 80
E7430      | 2.13        | 4     | 2x3           | 12            | 90
E7420      | 2.13        | 4     | 2x3           | 8             | 90
L7455      | 2.13        | 6     | 3x3           | 12            | 65
L7445      | 2.13        | 4     | 2x3           | 12            | 50
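Here is that back-of-the-envelope check of the X7460's power budget, assuming the roughly 1W/MB cache figure mentioned above (the numbers are illustrative estimates, not measurements):

```python
# Rough power budget for the X7460 (130W TDP), assuming ~1W per MB of L3 cache
tdp_w = 130
l3_mb = 16
watts_per_mb = 1.0   # estimate for 65nm caches; 45nm should be at or below this
cores = 6

cache_w = l3_mb * watts_per_mb      # ~16W for the 16MB L3
core_budget_w = tdp_w - cache_w     # what is left for the cores, L2 and uncore
print(f"L3 cache budget : ~{cache_w:.0f} W")
print(f"Cores and rest  : ~{core_budget_w:.0f} W (~{core_budget_w / cores:.0f} W per core)")
```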

The other side of the coin is that Dunnington probably uses an L3 cache that runs at half the clock speed of the cores. We measured a 103-cycle latency for the L3 cache on a 2.66GHz CPU, which works out to about 39 ns.


Dunnington cache hierarchy

In comparison, the admittedly much smaller L3 cache of the quad-core Opteron needs 48 cycles (on a 2.5GHz chip, or about 19 ns). Dunnington's L3 is thus about half as fast as the one found in the Barcelona core, so the L3 is a compromise where the engineers traded speed for size and power consumption.
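The cycle counts and nanosecond figures above are the same measurements expressed two ways; a minimal sketch of the conversion, using the clock speeds and cycle counts quoted in the text (illustrative only, not how the latencies were measured):

```python
def cycles_to_ns(cycles, clock_ghz):
    """Convert a latency measured in core clock cycles to nanoseconds."""
    return cycles / clock_ghz

# Dunnington L3: 103 cycles on a 2.66GHz part; Barcelona L3: 48 cycles on a 2.5GHz part
print(f"Dunnington L3: {cycles_to_ns(103, 2.66):.1f} ns")  # ~38.7 ns
print(f"Barcelona L3 : {cycles_to_ns(48, 2.50):.1f} ns")   # ~19.2 ns
```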

34 Comments

  • JarredWalton - Tuesday, September 23, 2008 - link

    Heh... that's why I love the current IBM commercials.

    "How much will this save us?"
    "It will reduce our power bills by up to 40%."
    "How much did we spend on power?"
    "Millions."
    [Cue happy music....]

    What they neglect to tell you is that in order to achieve the millions of dollars in energy savings, you'll need to spend billions on hardware upgrades first. They also don't tell you whether the new servers are even faster (it's presumed, but that may not be true). Even if your AC costs double the power bills for a server, you're still only looking at something like $800 per year per server, and the server upgrades cost about 20 times as much every three to five years.

    Now, if reduced power requirements on new servers mean you can fit more into your current datacenter, thus avoiding costly expansion or remodeling, that can be a real benefit. There are certainly companies that look at density as the primary consideration. There's a lot more to it than just performance, power, and price. (Support and service comes to mind....)
  • Loknar - Wednesday, September 24, 2008 - link

    Not sure what you mean: "reduced power requirements means you can fit more into your DC". You can fill your slots regardless of power, unless I'm missing something.

    Anyway I agree that power requirement is the last thing we consider when populating our servers. It's good to save the environment, that's all. I don't know about other companies, but for critical servers, we buy the most performing systems, with complete disregard of the price and power consumption; because the cost of DC rental, operation (say, a technician earns more than 2000$ per year, right?) and benefits of performance will outweigh everything. So we're so happy AMD and Intel have such a fruitful competition. (And any respectable IT company is not fooled by IBM's commercial! We only buy OEM (Dell in my case) for their fast 24-hour replacement part service and worry free feeling).
  • JarredWalton - Wednesday, September 24, 2008 - link

    I mean that if your DC has a total power and cooling capacity of say 100,000W, you can "only" fit 200 500W servers in there, or you could fit 400 250W servers. If you're renting rack space, this isn't a concern - it's only a concern for the owners of the data center itself.

    I worked at a DC for a while for a huge corporation, and I often laughed (or cried) at some of their decisions. At one point the head IT people put in 20 new servers. Why? Because they wanted to! Two of those went into production after a couple months, and the remainder sat around waiting to be used - plugged in, using power, but doing no actual processing of any data. (They had to use up the budget, naturally. Never mind that the techs working at the DC only got a 3% raise and were earning less than $18 per hour; let's go spend $500K on new servers that we don't need!)
