The Phenom II X4 965 Black Edition impressed us when it launched earlier this week. Well, just about everything impressed us except the long-winded naming scheme and a slight concern about power consumption under load. Of course, the last-minute change from a 125W TDP rating to a 140W TDP rating had us scratching our heads, along with the motherboard suppliers who will now see a few fewer of their products certified for this processor.

That said, our first retail processor arrived in the labs yesterday. Of course, our first inclination was to test its overclocking capabilities, and it reached the same 4.025GHz core speed the review sample managed at similar voltages. We did not expect any real miracles, as the Phenom II tends to run out of steam around 4GHz under a 64-bit operating system. This particular chip did manage to hit 4.3GHz at 1.510V under Windows 7 x86, so it’s a keeper.

Our initial curiosity out of the way, we decided to take a quick look at whether undervolting this gem of a processor could reduce its carbon footprint. After all, a stock 1.40V core voltage setting is a bit high for our tastes, although an idle voltage of 1.00V with Cool'n'Quiet (CnQ) enabled produced excellent results in earlier testing. Of course, undervolting is nothing new, and all current processors can handle a certain reduction in voltage. We wanted to see just how low our first retail 965 BE could go before becoming unstable. Our best retail 955 BE hit 1.220V at its lower stock core speed of 3.2GHz, and we figured a similar voltage would be attainable at 3.4GHz considering the yield and process improvements AMD has made over the past few months.
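Before diving in, it is worth framing what undervolting can buy. CPU dynamic power scales roughly with the square of the core voltage, so dropping from 1.40V toward our 1.22V target should trim the CPU's own power draw by somewhere around a fifth to a quarter before any platform overhead is counted. Below is a minimal sketch of that back-of-the-envelope math; the baseline CPU power figure is an assumption for illustration, not a measurement.

```python
# Back-of-the-envelope estimate of the CPU-only savings from undervolting.
# CMOS dynamic power scales roughly as P ~ C * V^2 * f; at a fixed 3.4GHz clock
# the capacitance and frequency terms cancel out of the before/after ratio.
V_STOCK = 1.40          # stock core voltage (V)
V_TARGET = 1.22         # our undervolt target (V), based on the 955 BE result
CPU_LOAD_POWER = 115.0  # assumed CPU-only load power (W) - illustrative, not measured

scaling = (V_TARGET / V_STOCK) ** 2
print(f"Dynamic power scaling factor: {scaling:.2f}")               # ~0.76
print(f"Estimated CPU load power: {CPU_LOAD_POWER * scaling:.0f}W "
      f"({1 - scaling:.0%} below stock)")                           # ~24% below stock
```

Since the rest of the platform draws what it always did, the reduction measured at the wall will necessarily be smaller than this CPU-only estimate.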

Our test bed is slightly different from the one in the chipset launch article, as we are utilizing the MSI 790FX-GD70 motherboard and an ASUS HD 4890 video card. Today's test results concentrate on undervolting this particular processor rather than on a cross-platform comparison.

The power consumption tests utilize a Watts up? Pro meter, and we capture total power consumption for the system at the wall, less the monitor and speakers. Our faith in the ability of various software utilities to report true core temperatures on the Phenom II series is suspect at times, given the thermal diode design and the chipsets involved. Regardless of whether the reported temperatures are exactly accurate, the delta between settings is our primary focus today. We are using Everest 5.0.2.1810 to capture our individual core temperatures.

We let the system idle for 10 minutes before capturing the idle readings for both power and temperature. The CPU load test consists of rendering a rather large scene in Maxon's Cinema 4D R11 x64 while transcoding an MPEG-4 file into a size-friendly format for our iPod with MainConcept's Reference 1.61 utility. This test ensures a 100% load on the processor from start to finish. We run Far Cry 2 at 1680x1050 with 2xAA and High Quality settings (very high on processing effects) to simulate a typical gaming session.

At a stock core speed of 3.4GHz, we were able to reduce the load core voltage from 1.392V to 1.240V while retaining 100% system stability. The idle voltage with CnQ enabled dropped from 1.00V to 0.832V. At idle there is a slight difference in power consumption and temperatures, but nothing to get excited about yet.

However, in our CPU load test the system power consumption drops 16% and temperatures drop 19%. In the gaming test, system power consumption is reduced 11% and temperatures 15%. Of note is that we were able to run the Northbridge at 2.4GHz without increasing NB voltages at the 1.240V (1.224V actual) core setting.

We set the CPU multiplier to 19x for the overclock test, which disables CnQ. Changing the HTT reference clock instead allows CnQ to remain enabled, and we are still running stability tests to determine core speed stability at reduced voltages. At this point, the highest stable setting at 1.36V is 17x218 for a 3.708GHz core clock with the NB at 2.4GHz, but we are still trying for 3.8GHz.

In the meantime, we set our multiplier to 19x with an HTT reference clock of 200, resulting in a 3.8GHz core clock. The lowest voltage at which we could retain stability in a wide variety of benchmarks was 1.36V. The CPU load test indicates a 3% reduction in power consumption and a 4% drop in temperatures. The gaming result is better, with an earth-friendly 5% reduction in power and temperatures dropping 7%.
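For those keeping score on the clock math, the core speed is simply the CPU multiplier times the HTT reference clock, and the Northbridge runs off the same reference clock with its own multiplier (reference clocks typically run a hair above their nominal setting, which would explain the 3.708GHz reading at a nominal 17x218). A quick sketch of the two configurations discussed above:

```python
# Core clock = CPU multiplier x HTT reference clock; the Northbridge runs off
# the same reference clock with its own multiplier.
def core_clock_ghz(multiplier: int, ref_clock_mhz: float) -> float:
    return multiplier * ref_clock_mhz / 1000.0

print(f"17 x 218MHz -> {core_clock_ghz(17, 218):.3f}GHz")  # ~3.706GHz nominal, CnQ stays enabled
print(f"19 x 200MHz -> {core_clock_ghz(19, 200):.3f}GHz")  # 3.800GHz, multiplier OC disables CnQ
```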

Of course, your mileage will vary based on the quality of the processor, but our initial tests indicate there is an opportunity to reduce power consumption and temperatures by up to 16% and 19%, respectively, at stock settings by undervolting the 965 Black Edition. We have an additional retail processor arriving shortly and will provide a short update on the final test results at that time. For now, it appears this processor can be saved from the gas guzzler tax.

Comments

  • PrinceGaz - Monday, August 17, 2009

    I did consider mentioning the much more scientifically useful Kelvin scale, with absolute zero as the fixed point temperatures are measured against, as the only scale that would make any sense here, but decided against it as it would lengthen and dilute what I had written. I think the only people who regularly use the Kelvin scale when it comes to CPUs are those whose daily shopping list includes a large tub of LN2 :)

    The important thing we both agree on is that temperature differences have to be relative to something relevant. Perhaps more importantly, the much higher percentage reduction in CPU temperature from undervolting (relative to room temperature) compared with the reduction in system power consumption is exactly what would be expected. Undervolting the CPU only really reduces the power consumption of that one component (the rest of the system eats the watts just as before), so even a fairly small decrease in system power consumption should result in a more significant decrease in CPU temperature.
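PrinceGaz's point is easy to see with numbers. Here is a small sketch with purely hypothetical readings: only the CPU's slice of total power shrinks, so the figure at the wall moves modestly while the CPU's rise over room temperature falls by a much larger fraction.

```python
# Hypothetical illustration of the delta-over-ambient argument; every value is assumed.
AMBIENT_C = 22.0                           # assumed room temperature
OTHER_POWER_W = 150.0                      # assumed draw of everything except the CPU
CPU_BEFORE_W, CPU_AFTER_W = 115.0, 87.0    # assumed CPU power before/after undervolting
TEMP_BEFORE_C, TEMP_AFTER_C = 52.0, 42.0   # assumed reported core temperatures

system_drop = 1 - (OTHER_POWER_W + CPU_AFTER_W) / (OTHER_POWER_W + CPU_BEFORE_W)
delta_drop = 1 - (TEMP_AFTER_C - AMBIENT_C) / (TEMP_BEFORE_C - AMBIENT_C)
print(f"System power at the wall: -{system_drop:.0%}")          # roughly -11%
print(f"CPU temperature rise over ambient: -{delta_drop:.0%}")  # roughly -33%
```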
  • MrSpadge - Saturday, August 15, 2009

    I wonder why they chose such a high stock voltage. Apparently it's not really necessary, as they could easily get by with 1.35 V and probably with 1.30 V on many of these well-binned chips.

    Are these first ones just cherries.. or are they trying to get good press due to "oh, our chips are soo good, they can easily hit 4 GHz at stock voltage!". And thereby take the hit of specifying 140 W TDP instead of 125 W (painful in my eyes) and take the hit of having a higher load power consumption than any current Intel CPU (also painful)? And hope that *gamers* won't care?

    What was the last Intel CPU you've seen with 1.40 V? I'm sure it was some 90 nm chip, certainly not 45 nm!

    MrS
  • MrSpadge - Saturday, August 29, 2009

    Now that makes sense: AMD reduces TDP rating of Phenom II X4 965 (http://www.nordichardware.com/news,9821.html).
  • blackshard - Saturday, August 15, 2009

    AMD may want to raise 965 BE yields by declaring a higher stock voltage for the whole model line-up.

    BTW, you can't compare AMD's 45nm SOI production with Intel's 45nm production. AMD at 45nm and 1.40V has a 140W TDP, while Intel has a much lower operating voltage but still a 130W TDP with the latest Nehalem.

    Power requirements are comparable, operating voltages aren't.
  • MrSpadge - Sunday, August 16, 2009

    Well.. they could sell the better chips at 1.35 V (still enough margin) and the worse ones at 1.40 V. Later on, when enough chips can get by with <1.35 V, they could officially lower the TDP rating to 125 W. This has been done with the 9950 before, and e.g. the old X2 3800+ came in 1.35 V and 1.30 V varieties. They could do this and save some power, if they wanted to. I assume AMD's not stupid, so they know their chips need less voltage. Voltage is bad: it shortens chip lifetime significantly and makes you look worse in power consumption measurements. That's why I'm asking: why are they doing it nevertheless?

    Sure, there are differences in the process. But these CPUs all have to get along with the same physics ;) Intel reaches a comparable TDP because ~130 W is what has shaped up to be the maximum that desktop systems can handle without too much hassle (more cooling = increased system cost). They're achieving it at a lower voltage because in the i7 design more transistors are switching during each clock cycle (= more work gets done if those transistors are put to good use). This and the TDP dictate the maximum voltage they can apply to their chip.

    That Intel can reach the same clock speeds at lower voltages is a result of clever chip design and details of the manufacturing process. So, yes, you have to be careful in comparing voltages. But saying that such comparisons were meaningless would be utterly wrong.

    MrS
  • blackshard - Sunday, August 16, 2009

    Yes, but then they'd have to sell the 1.40V parts at a lower price than the 1.35V ones. Since they can't overprice their processors (the 965 BE is currently a bit overpriced compared to the 2.66 GHz Nehalem, and will surely be overpriced with i5 coming next month), they would have to sell two 3.4 GHz flavours at different prices. They might also need more testing, which has a cost. It makes more sense to sell all identical processors now, then introduce a 125W refinement when possible (if possible).

    Intel can reach lower voltages because they have to (and can) use lower voltages with high-k dielectrics and metal gates. Probably, imho, their leakage currents would be much larger at higher voltages. AMD instead has SOI, which helps reduce leakage but doesn't let them reduce voltages. That's the reason I say voltages can't really be compared. TDPs can't really be compared either, since AMD states that their TDP is the maximum power draw, while Intel states that it is the maximum typical power draw.
  • MrSpadge - Monday, August 17, 2009

    No, they should just offer both versions of the chip (and switch when the time is right) for the same price, without special markings (as has been done for the lower TDP version of the 9950, the X2 3800+ and probably many other ones). And they need to test and speed-bin the chips anyway, so I don't think that would be too much of a hassle (=cost).

    I know TDPs can't be directly compared. However, they're guidelines.. and when you take a look at tests you'll likely find that "130W" i7 CPUs need less juice than most "125W" AMDs. So I don't think this adds anything to the discussion: AMD going from 125W to 140W is painful for the user in terms of cost, cooling and noise.

    And regarding the 2nd part: sorry, but I think you got it totally wrong. SOI helps reduce substrate leakage and leakage between elements in the plane (usually taken care of by a "FOX" field oxide layer), but doesn't affect much else. A high-k gate dielectric reduces gate leakage currents by several orders of magnitude and in turn enables one to build faster transistors (which achieve a higher switching speed at similar voltages). The metal gate contact doesn't add much, just a little series resistance reduction. It's being introduced together with the high-k because the traditional highly doped poly-Si doesn't mix as well with the high-k (some hafnium oxide) as it does with SiO2.

    To a first approximation, power consumption and lifetime still scale the same way with voltage, regardless of SOI or high-k + metal. Voltage is bad in either case. What AMD is really doing here is overvolting their chips to give them more frequency headroom, normally only exploited by overclockers. That's why I suppose they do it for good press, never mind the drawbacks.

    MrS
  • blackshard - Monday, August 17, 2009

    Dunno if it makes sense for AMD to ship two kinds of the same processor, differing in just a minor detail, to save 7W of power :/
    Also consider that lousy motherboards may cause a small or large voltage drop during heavy CPU usage, so maybe AMD is just accounting for this.

    About the second part, I'm sure you're right. I was misinformed.
  • MrSpadge - Saturday, August 22, 2009

    Hi. Well, I think we can leave it at that. I calculate savings of 10 W when going from 1.40 V to 1.35 V.. but this does not really change the picture. I still don't like it, but it's not catastrophic. And I have to admit that further binning would cost something and would get them at best *questionable* advantages ;)
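MrSpadge's roughly 10 W figure follows from the same quadratic voltage scaling discussed earlier; a minimal sketch, assuming the CPU draws close to its 140W TDP rating at the stock 1.40 V (an assumption, not a measured value):

```python
# Estimated savings from dropping the core voltage from 1.40V to 1.35V.
CPU_LOAD_POWER_W = 140.0   # assumed CPU power near its 140W TDP at stock voltage
V_OLD, V_NEW = 1.40, 1.35

savings_w = CPU_LOAD_POWER_W * (1 - (V_NEW / V_OLD) ** 2)
print(f"Estimated savings: {savings_w:.1f}W")   # ~9.8W, i.e. roughly 10W
```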
  • strikeback03 - Monday, August 17, 2009

    Those definitions of TDP always seem a little odd, since the comparisons of processors' actual draw against their TDP that I have seen all show the Intel processors well below their TDP, while AMD processors are much closer. For example, the numbers in this test ( http://www.anandtech.com/casecoolingpsus/showdoc.a... ) are a few years old now, but the Intel processors generally come in well under their rated TDP, with the QX6850 the closest at about 80% of its TDP. Meanwhile, the AMD processors all seem to be at or above their TDPs (though the naming of the Athlon X2s makes it hard to determine which version they were testing).
