AMD's Six-Core Phenom II X6 1090T & 1055T Reviewed
by Anand Lal Shimpi on April 27, 2010 12:26 AM EST - Posted in
- CPUs
- AMD
- Phenom II X6
AMD’s Turbo: It Works
In the Pentium 4 days Intel quickly discovered that there's a ceiling on how much heat you can realistically dissipate in a standard desktop PC without resorting to more exotic cooling methods. Prior to the Pentium 4, desktop CPUs and GPUs saw generally rising power consumption with little regard for any maximum. It wasn't until we started hitting the physical limits of power delivery and heat dissipation that Intel (and AMD) imposed some limits.
High end desktop CPUs now spend their days bumping up against 125 - 140W limits, while mainstream CPUs sit down at 65W and mobile CPUs are generally below 35W. These TDP limits become a problem as you scale up clock speed or core count.
In a homogeneous multicore CPU you've got a number of identical processor cores that together have to share the processor's maximum TDP. If a single hypothetical 4GHz processor core hits 125W, then to fit two of them into the same TDP you have to run both cores at a lower clock speed, say 3.6GHz. Want a quad-core version? Drop the clock speed again. Six cores? Now you're probably down to 3.2GHz.
[Table: maximum clock speed within a fixed TDP for Single Core | Dual Core | Quad Core | Hex Core configurations - the per-configuration figures were in the original graphic]
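The budgeting arithmetic behind this is easy to sketch. The snippet below assumes dynamic power scales with f·V² and that voltage roughly tracks frequency, giving P ∝ f³; the 4GHz/125W figure comes from the paragraph above, but the exponent is a textbook approximation rather than an AMD number. Real parts scale more gently than this (voltage and leakage get tuned per bin), which is why the 4.0/3.6/3.2GHz progression in the text is shallower than the model predicts.

```python
# Crude TDP-budget model. Dynamic power is approximated as scaling with
# f * V^2, with V assumed to track f, giving P ~ f^3. The 4GHz/125W
# single-core figure comes from the text; everything else is illustrative.

TDP_W = 125.0           # total package power budget
BASE_CLOCK_GHZ = 4.0    # hypothetical core that consumes the whole budget
BASE_POWER_W = 125.0    # its power draw at 4GHz

def max_clock_ghz(num_cores: int) -> float:
    """Highest clock at which num_cores identical cores fit inside TDP_W."""
    per_core_budget = TDP_W / num_cores
    # Invert P = BASE_POWER_W * (f / BASE_CLOCK_GHZ)**3 for f.
    return BASE_CLOCK_GHZ * (per_core_budget / BASE_POWER_W) ** (1 / 3)

for cores in (1, 2, 4, 6):
    print(f"{cores} core(s): ~{max_clock_ghz(cores):.1f}GHz")
# 1 core: 4.0GHz, 2 cores: ~3.2GHz, 4 cores: ~2.5GHz, 6 cores: ~2.2GHz
```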
This is fine if all of your applications are multithreaded and can use all available cores, but life is rarely so perfect. Instead you’ve got a mix of applications and workloads that’ll use anywhere from one to six cores. Browsing the web may only task one or two cores, gaming might use two or four and encoding a video can use all six. If you opt for a six core processor you get great encoding performance, but worse gaming and web browsing performance. Go for a dual core chip and you’ll run the simple things quickly, but suffer in encoding and gaming performance. There’s no winning.
With Nehalem, Intel introduced power gate transistors. Stick one of these in front of the supply voltage line to a core, turn it off, and the entire core shuts off. In the past AMD and Intel only put gates in front of the clock signal going to a core (or to blocks of a core); that kept the core inactive, but it could still leak power - a problem that got worse with smaller transistor geometries. Power gate transistors address both active and leakage power: an idle core can be almost completely shut off.
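The distinction is easy to capture in a toy power equation. In the simplified model below (the constants are illustrative assumptions, not measured silicon), clock gating zeroes only the switching term while leakage persists; power gating removes both:

```python
# Simplified per-core power model: P = dynamic + leakage.
# Dynamic (switching) power: C * V^2 * f
# Leakage power: V * I_leak, present whenever the core has voltage.
# All constants below are illustrative assumptions, not measured values.

C = 1.0e-8        # effective switched capacitance (F), illustrative
V = 1.2           # core voltage (V), illustrative
F = 3.2e9         # clock frequency (Hz)
I_LEAK = 5.0      # leakage current (A), illustrative

def core_power(clock_gated: bool, power_gated: bool) -> float:
    if power_gated:
        return 0.0                      # supply cut: no dynamic, no leakage
    dynamic = 0.0 if clock_gated else C * V**2 * F
    leakage = V * I_LEAK                # leaks as long as voltage is applied
    return dynamic + leakage

print(f"active:      {core_power(False, False):.1f} W")  # dynamic + leakage
print(f"clock-gated: {core_power(True,  False):.1f} W")  # leakage only
print(f"power-gated: {core_power(True,  True):.1f} W")   # ~0 W
```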
If you can take a single core out of the TDP equation, then with some extra logic (around 1M transistors on Nehalem) you can increase the frequency of the remaining cores until you run into TDP or other physical limits. This is how Intel's Turbo Boost technology works. Depending on how many cores are active and how much power they're consuming, a CPU with Intel's Turbo Boost can run at up to some predefined frequency above its stock speed.
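On Nehalem-class parts the boost is granted in bins of 133MHz: the fewer the cores active, the more bins the remaining ones may climb, subject to power and thermal headroom. A hypothetical lookup along those lines (the bin counts here are invented for illustration; real tables are SKU-specific):

```python
# Hypothetical Turbo Boost bin table: active core count -> extra 133MHz bins.
# The bin counts are invented for illustration; real tables vary by SKU.
BIN_MHZ = 133
TURBO_BINS = {1: 2, 2: 2, 3: 1, 4: 1}   # e.g. a four-core part

def turbo_clock(base_mhz: int, active_cores: int,
                power_ok: bool, thermal_ok: bool) -> int:
    """Clock the active cores may run at, given headroom."""
    if not (power_ok and thermal_ok):
        return base_mhz                  # no headroom: stay at stock
    return base_mhz + TURBO_BINS.get(active_cores, 0) * BIN_MHZ

print(turbo_clock(2667, active_cores=1, power_ok=True, thermal_ok=True))
# -> 2933 (two bins above a 2.66GHz stock clock)
```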
With Thuban, AMD introduces its own alternative called Turbo Core. The original Phenom processor had the ability to adjust the clock speed of each individual core. AMD disabled this functionality with the Phenom II to avoid some performance problems we ran into, but it’s back with Thuban.
If half (or more) of the CPU cores on a Thuban die are idle, Turbo Core does the following (sketched in code after this list):
1) Decreases the clock speed of the idle cores down to as low as 800MHz.
2) Increases the voltage of all of the cores.
3) Increases the clock speed of the active cores up to 500MHz above their default clock speed.
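Translated into a short sketch, using only the behavior described above - this is not AMD's firmware; the idleness cutoff is an invented parameter, and the voltage bump is noted but not modeled:

```python
# Sketch of the Turbo Core decision described above. The half-idle
# threshold, 800MHz idle floor and up-to-500MHz boost come from the text;
# the 5% idleness cutoff is an invented stand-in, and the accompanying
# voltage increase is not modeled (AMD doesn't publish the step).

IDLE_FLOOR_MHZ = 800
MAX_BOOST_MHZ = 500     # per-SKU; the 1055T's full step (2.8GHz -> 3.3GHz)

def turbo_core(core_loads: list[float], stock_mhz: int) -> list[int]:
    """Return a target clock for each core given per-core load (0..1)."""
    idle = [load < 0.05 for load in core_loads]   # idleness cutoff: assumption
    if sum(idle) * 2 < len(core_loads):
        return [stock_mhz] * len(core_loads)      # fewer than half idle: no turbo
    # Half or more of the cores are idle: park them low, boost the rest
    # (a voltage increase on all cores would accompany this step).
    return [IDLE_FLOOR_MHZ if is_idle else stock_mhz + MAX_BOOST_MHZ
            for is_idle in idle]

# Phenom II X6 1055T (2.8GHz stock), three busy cores and three idle:
print(turbo_core([0.9, 0.8, 0.7, 0.0, 0.0, 0.0], stock_mhz=2800))
# -> [3300, 3300, 3300, 800, 800, 800]
```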
The end result is the same as Intel's Turbo Boost from a performance standpoint. Lightly threaded apps see a performance increase, and even heavily threaded workloads can have stretches bound by the performance of a single thread, so they benefit from AMD's Turbo Core as well. In practice, Turbo Core appears to work. While I rarely saw the Phenom II X6 1090T hit 3.6GHz, I would see the occasional jump to 3.4GHz. As you can tell from the screenshot above, there's very little consistency between the cores and their operating frequencies; they all seem to run as fast (or as slow) as they possibly can.
AMD's Turbo Core Benefit

| AMD Phenom II X6 1090T | Turbo Core Disabled | Turbo Core Enabled | Performance Increase |
|------------------------|---------------------|--------------------|----------------------|
| x264-HD 3.03 1st Pass | 71.4 fps | 74.5 fps | 4.3% |
| x264-HD 3.03 2nd Pass | 29.4 fps | 30.3 fps | 3.1% |
| Left 4 Dead | 117.3 fps | 127.2 fps | 8.4% |
| 7-zip Compression Test | 3069 KB/s | 3197 KB/s | 4.2% |
Turbo Core generally increased performance by between 2 and 10% in our standard suite of tests. Given that the maximum clock speed increase on a Phenom II X6 1090T is 12.5% (from 3.2GHz stock to a 3.6GHz peak turbo), that's not a bad range of performance improvement. Intel's CPUs stand to gain a bit more (and use less power) from turbo thanks to the fact that Lynnfield, Clarkdale, et al. physically shut off idle cores rather than just underclocking them.
I have noticed a few situations where performance in a benchmark was unexpectedly low with Turbo Core enabled. This could be an artifact of independent core clocking similar to what we saw in the Phenom days; however, I saw no consistent issues in my time with the chip thus far.
168 Comments
silverblue - Thursday, April 29, 2010 - link
I agree. If it's a struggle to utilise all six cores at 100%, just add another program to the mix. This may just prove once and for all if a physical Stars core can beat a logical i-core, and thus whether AMD were right to launch Thuban in the first place.
Scali - Friday, April 30, 2010 - link
I'll say a few things to that... A physical Stars core actually has to beat TWO logical i-cores. After all, we have 6 Stars cores vs 8 logical i-cores.
So if we were to say that the 4 physical cores on both are equal (which they're not, because the i-cores have an advantage), that leaves 2 physical cores against 4 logical cores.
Another thing is that if you have to work hard to set up a multitasking benchmark that shows Thuban in a favourable light, doesn't that already prove the opposite of what you are trying to achieve?
I mean, how realistic is it for a consumer processor to set up Virtual Box/VMWare benchmarks? Doesn't that belong in the server reviews (where as I recall, AMD's 6-cores couldn't beat Intel's 8 logical cores either in virtualization benchmarks)?
Virtualization is not something that a consumer processor needs to be particularly good at, I would say. Gaming, video processing, photo editing. Now those are things that consumers/end-users will be doing.
wyvernknight - Thursday, April 29, 2010 - link
@mapesdhs There's no such thing as an AM3 board with DDR2, only an AM2+ board with DDR2 that has AM3 support. The MA770-UD3 you gave as an example is an AM2+ board with AM3 compatibility: "Support for Socket AM3 / AM2+ / AM2 processors". AM3 boards do not have support for AM2+ and AM2 processors.
mapesdhs - Thursday, April 29, 2010 - link
Strange then that the specs pages specifically describe the sockets as being AM3.
Ian.
Skyflitter - Thursday, April 29, 2010 - link
Could someone please tell me the difference between the Phenom II X6 1090T & 1055T? I would like to put one of these new chips into my Gigabyte DDR2 MB, but the Gigabyte web site says my board only supports the 1035T and 1055T chips. My board is rated @ 140W (GA-MA770-UD3).
I am currently running an Athlon 64 X2 6400+ (3.4GHz) and I do not want to lose too much clock speed by going with the 1055T (2.8GHz).
Do all the new Phenom II X6 chips support DDR2?
cutterjohn - Friday, April 30, 2010 - link
I'm waiting for them to cough up a new arch that delivers MUCH better per-core performance. There is just no value proposition with their 6-core CPU that mostly matches a quad-core i7 920, which can be had for a roughly similar price point, i.e. i7 930 $199 @ MicroCenter.
Either way, unless I win the giveaway :D, I'm now planning to wait at least until next year to upgrade the desktop, to see how Sandy Bridge comes out and IF AMD manages to get out their new CPU. I figure I may as well wait for the next sockets: LGA2011 for Intel, and what I'm sure will be a new one for AMD with their new CPU. As an added bonus I'll be skipping the first generation of DX11 hardware, as new architectures supporting new APIs (DX11/OGL4) tend not to be the best optimized or most robust, especially, apparently, in nVidia's case this time. (Although AMD had an easier time of it as they made few changes from R7XX to R8XX, as is usual for them. AMD needs to really start spending some cash on R&D if they wish to remain relevant.)
silverblue - Friday, April 30, 2010 - link
The true point of the X6 is heavy multi-tasking. I'd love to see a real stress test thrown at these to show what they can do, and thus validate their existence.
pow123 - Wednesday, May 5, 2010 - link
You would have to be insane to pay $1000 for a chip that may be good for gaming. At $199 with slightly lower performance it's a no-brainer. When I build a system, I don't care if the frame rates are 10 to 15% better. Who cares; the chip is fast and I have no problems playing high end games. I have no special setup and it does everything that my friend's i7 can do. Good for me, I get more PC for the buck. Go ahead and go broke buying just a motherboard and CPU when I can get a modern motherboard, a CPU, 6 gigs of DDR3 1600, a 1TB HD and a DVD-RW. More for me.
spda242 - Sunday, May 2, 2010 - link
I would really like to have seen a World of Warcraft test with these CPUs like you did with the Intel 6-core. It would be interesting to see if WoW can use all the cores and with what performance.
hajialibaig - Wednesday, May 5, 2010 - link
Not sure why there is no Power vs. Performance vs. Price comparison of the different processors. As for the performance, it could be anything that you want, such as Gaming Performance or Video Encoding. Such a comparison should be interesting, since you may as well pay back the higher initial price via power savings.