The Test

Performance-wise, the new Brisbane chips shouldn't be any different from their 90nm counterparts, but to make sure, we benchmarked the new chip against our first 90nm X2 5000+. From a power consumption standpoint, we wanted to compare the new 65W 65nm chip to AMD's Energy Efficient and Energy Efficient Small Form Factor 90nm chips to see how the new process competes with the most efficient of AMD's CPUs built on the older but more mature process. Unfortunately, we only have a 5000+ 65nm chip, so we can't say for certain what advantages Brisbane will hold at clocks equivalent to the EE/SFF parts.

For each benchmark we measured performance as well as average power consumption over the course of the benchmark, and we report performance per watt as the performance score divided by the average power draw.
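To make that calculation concrete, here is a minimal sketch in Python (the score and wattage figures are hypothetical examples, not our measurements):

    # Performance per watt: benchmark score divided by average power draw (W).
    def perf_per_watt(score: float, avg_watts: float) -> float:
        return score / avg_watts

    # Hypothetical example: a benchmark score of 120 at an average draw of 160W
    print(perf_per_watt(120.0, 160.0))  # 0.75 points per watt

For timed tests where a lower result is better, the score would first be inverted so that a higher performance-per-watt number is always better.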

CPU: Intel Core 2 Duo E6600 (2.40GHz/4MB)
AMD Athlon 64 X2 5000+ (2.6GHz/512KBx2)
AMD Athlon 64 X2 5000+ "Brisbane"
AMD Athlon 64 X2 EE 4600+ (2.4GHz/512KBx2)
AMD Athlon 64 X2 EE SFF 3800+ (2.0GHz/512KBx2)
Motherboard: eVGA NVIDIA nForce 680i
ASUS M2N32-SLI Deluxe
Chipset: nForce 680i
nForce 590 SLI
Chipset Drivers: NVIDIA 9.53
NVIDIA 9.35
Hard Disk: Seagate 7200.9 300GB SATA
Memory: Corsair XMS2 DDR2-800 4-4-4-12 (1GB x 2)
Video Card: NVIDIA GeForce 8800 GTX
Video Drivers: NVIDIA ForceWare 97.44
Resolution: 1600 x 1200
OS: Windows XP Professional SP2

Before we get to the power consumption tests under load, let's have a quick look at the idle power consumption of these systems:

[Chart: Idle Power Consumption]

Note that Cool'n'Quiet and EIST were enabled for all tests; idling at 1GHz, the AMD CPUs are able to draw much less power than the Intel system (which runs at an idle clock speed of 1.6GHz). Part of the E6600's higher power consumption may also be due to the 680i chipset versus the 590 SLI used on the AMD systems, but we would need to compare both chipsets on a common CPU platform to be sure of that.
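As a rough illustration of why the lower idle clock matters (generic CMOS scaling, not measured data): dynamic power scales approximately as C x V^2 x f, so dropping both voltage and frequency at idle compounds the savings. The voltages below are hypothetical operating points, not the actual values for these chips:

    # Rough CMOS dynamic power model: P ~ C * V^2 * f (relative units).
    def dynamic_power(c_eff: float, volts: float, freq_ghz: float) -> float:
        return c_eff * volts ** 2 * freq_ghz

    full = dynamic_power(1.0, 1.35, 2.6)  # hypothetical full-speed point
    idle = dynamic_power(1.0, 1.10, 1.0)  # hypothetical 1GHz idle point
    print(f"idle draws ~{idle / full:.0%} of full-speed dynamic power")  # ~26%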

Regardless of the reasons, at idle our Intel test platform consumes much more power than any of the AMD platforms. At the same time, the new 65nm Brisbane CPU doesn't draw significantly less power than the 90nm cores at idle. Under load, though, it's a completely different story...

Comments

  • dev0lution - Friday, December 15, 2006 - link

    Now that both companies claim to offer "platforms", it'd be interesting to see an Intel 965 board vs. an ATI board being used in these benches. Not sure that the NVIDIA models were the best choice here, especially since they're not apples to apples on generation and power consumption.
  • poohbear - Thursday, December 14, 2006 - link

    I'm quite shocked to see the gaming performance advantage @ 1600x1200! Isn't the CPU removed as a performance bottleneck @ such a high resolution? It's all about GPU horsepower @ that resolution, no? I can't believe a CPU can show a 25fps difference @ 1600x1200!
  • JumpingJack - Friday, December 15, 2006 - link

    quote:

    I'm quite shocked to see the gaming performance advantage @ 1600x1200! Isn't the CPU removed as a performance bottleneck @ such a high resolution? It's all about GPU horsepower @ that resolution, no? I can't believe a CPU can show a 25fps difference @ 1600x1200!


    Some more research is in order here. The G80 GPU, commonly known on the market as the 8800 GTX or GTS, has taken graphics performance to a new level. All of the reviews that used an AMD FX-60 or FX-62 CPU clearly showed throttling back to the CPU in many if not most cases; only at the highest possible resolutions with AA + FSAA did the scaling turn back on with an FX-60. The X6800 unleashed the full potential of this card.

    The difference in framerate you see between the 5000+ and the E6600 is that the E6600 has pushed the bottleneck further ahead -- simply because the E6600 is a better gaming CPU.

    Tom's did a good article titled "The 8800 Needs the Fastest CPU".
    In essence, even for a single-card solution, an AMD CPU is not a good match for this card.
  • Makaveli - Friday, December 15, 2006 - link

    The 25fps difference must mean the GeForce is more CPU-bottlenecked on the AMD platform than on the Intel one.

    You gotta remember the 8800 GTX is an insanely fast card, and it's still bottlenecked even on Conroe systems.
  • JarredWalton - Friday, December 15, 2006 - link

    This is why back at the Core 2 Duo launch we talked about CPU performance using games running at 1280x1024 0xAA. When a faster GPU comes out and shifts the bottleneck to the CPU (as has happened with the 8800 GTX), saying "Athlon X2 and Core 2 Duo are equal when it comes to gaming" is a complete fabrication. It's not unreasonable to think that there will be other games where that ~25% performance difference means you can enable high quality mode. Company of Heroes is another good example of a game that can chug on X2 chips at times, even with high-end GPUs.
  • XMan - Thursday, December 14, 2006 - link

    Do you think you might have gotten a higher overclock if you weren't running HTT at 1125MHz?!?
  • Sunrise089 - Thursday, December 14, 2006 - link

    The chart on page one claims to have pricing info. There is none as far as I can see.
  • JarredWalton - Friday, December 15, 2006 - link

    Fixed. :)
  • RichUK - Thursday, December 14, 2006 - link

    Same old sh!t. It's nice to see AMD finally kicking off their 65nm retail chips, but let's see this new core, for God's sake.

    They're lagging big time; this really is sad. **Thumbs-Down**
  • Stereodude - Thursday, December 14, 2006 - link

    How does a chip that uses less power run hotter? On the last page, the 65nm X2 5000+ hit 51C under load, lower than any other chip, but it uses more power than any chip except the 90nm X2 5000+. How does that work?
