
  • mikepers - Friday, July 20, 2007 - link

    So, quick rough calculation. Assuming electricity costs 10 cents per kWh, I believe that with the AMD chip you're saving 80 watts x 24 hrs/day x 365 days / 1000 x $0.10 = about $70 per year (at idle).
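    That back-of-envelope math can be sketched in a few lines (the 80 W idle delta and the $0.10/kWh rate are the figures assumed above):

```python
# Annual electricity cost of a constant power draw at a flat rate.
def annual_cost_usd(watts, usd_per_kwh=0.10):
    kwh_per_year = watts * 24 * 365 / 1000  # watts -> kWh over one year
    return kwh_per_year * usd_per_kwh

# 80 W idle-power difference at $0.10/kWh:
print(round(annual_cost_usd(80), 2))  # -> 70.08, i.e. about $70/year
```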

    One thing not addressed in this article: assuming a large server farm, what are your infrastructure savings? How much do you save in cooling that infrastructure? How much do you save because you can buy less generating capacity for when you have power outages? Any thoughts?

    Last comment: for all the info in the article, the most significant statistic for a large server farm is performance per watt, i.e. how much work the farm can do given the resources it consumes. In that respect it looks like AMD wins, but not by as much as the initial difference in power consumption would have you believe.
  • bruce24 - Tuesday, July 17, 2007 - link

    You can buy 2GB and 4GB FB-DIMMs; yes, they are a bit more expensive, but if you're interested in power savings on an Intel DP server, they are the way to go.

    I'd love to see the numbers if you replaced the 8 1GB DIMMs with 4 2GB, or even better, 2 4GB DIMMs.
  • Justin Case - Tuesday, July 17, 2007 - link

    Then why not compare them to 2GB and 4GB registered (non-FB) DDR2 modules? Micron has been making them since 2004 (well, they announced them, at least). There must be other companies making / selling them these days (I'm pretty sure Infineon and Samsung are).

    A quick search turned up Kingston's D51272F51 module (not sure who makes the chips). Not cheap, though (almost $900).

    I think I even remember seeing some reference to an 8GB DDR2 DIMM, but it could have been just a kit (4+4).

    IIRC, FB-DIMM power consumption is roughly 80% higher than regular registered DIMMs.
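    To put that ~80% figure in perspective, here is a rough sketch; the 5 W per registered DIMM baseline is a made-up illustrative number, not a measured one:

```python
# Hypothetical per-module power: registered DIMM vs FB-DIMM (~80% higher).
reg_dimm_w = 5.0              # assumed registered DIMM draw (illustrative)
fb_dimm_w = reg_dimm_w * 1.8  # ~80% higher, per the estimate above
dimms = 8

extra_w = dimms * (fb_dimm_w - reg_dimm_w)
print(extra_w)  # extra platform draw for 8 modules, in watts
```

Even with a modest per-module baseline, eight modules add tens of watts at the platform level.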
  • jpeyton - Tuesday, July 17, 2007 - link


    "yes they are a bit more expensive"

    2GB FB-DIMMs are feasible, since they are roughly 2X the price of 1GB modules. 4GB FB-DIMMs, however, are roughly 11.5X the price of a 1GB module.
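    In per-gigabyte terms (using the rough multipliers above and a hypothetical $100 price for a 1GB module):

```python
base = 100.0             # hypothetical 1GB FB-DIMM price, USD
price_2gb = 2.0 * base   # ~2X the 1GB price
price_4gb = 11.5 * base  # ~11.5X the 1GB price

print(price_2gb / 2)  # $/GB for 2GB modules: same as 1GB, no premium
print(price_4gb / 4)  # $/GB for 4GB modules: roughly a 2.9X premium
```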

    That might lower power consumption by a noticeable amount, but nothing significant enough to erase the huge gap between platforms.

    Intel really needs to do more work on platform engineering; their gains in CPU efficiency are entirely erased (and then some) by some foolish choices (like going the FB-DIMM route).

    It may not affect the little guys, but companies with server farms or HPC take platform efficiency very seriously (which is why companies like Cray and Sun are adamant about sticking with AMD).
  • TA152H - Tuesday, July 17, 2007 - link

    "Intel really needs to do more work on platform engineering; their gains in CPU efficiency are entirely erased (and then some) by some foolish choices (like going the FB-DIMM route). "

    You're kidding, right? They made a conscious choice to use FB-DIMMs because of their benefits, and considered those more important than the negatives. It certainly isn't because they didn't have engineering resources for the platform, because that's exactly what they used to create the FB-DIMM platform. You might not agree with their choice of trade-offs, but using normal DDR2 or DDR3 would be comparatively easy considering they already have chipsets that support it.

    I'm mixed on FB-DIMMs. There's a lot of bad, but there's a lot of good too. I'll agree with you in one way: they do need to provide people a choice. So, in that sense, I don't think they've done a great job on their platform. FB-DIMMs are certainly the best choice in some situations, and certainly not in others. People should decide based on their situation.
  • Alyx - Tuesday, July 17, 2007 - link

    I believe Intel processors still hold the edge on performance per watt. In these tests the AMD system had higher performance per watt because of the Intel platform's RAM. I'd be interested in an Intel system with standard RAM to get a closer comparison of just the processors. Are Intel processors with standard RAM uncommon?

    It is illogical that a faster, higher performing processor that does more work per clock than AMD's, and also runs at lower wattage, would have less performance per watt than AMD.

    Interesting read, I learned a bit from it. Cheers.
  • TA152H - Tuesday, July 17, 2007 - link

    Intel servers currently support FB-DIMMs, so that is the platform that must be tested.

    Also keep in mind you can't measure processors alone for power, although most sites do exactly that, incorrectly. AMD processors simply do more; they have a memory controller. So, all things being equal, a processor with a memory controller will use more power, but at the same time the chipset would use less. Of course, not everything is ever equal, but my point is, even seemingly simple comparisons like CPU to CPU aren't quite so simple.

    I have read that Intel may be backing off of FB-DIMMs, but until they do, and since these folks are testing servers, it's absolutely valid data because that's what you're getting if you buy an Intel based solution. A lot of people have problems with Intel choosing this type of memory, so they might back off of it, or at least offer both. FB-DIMMs obviously have advantages too, so giving people a choice might make a lot of sense.
  • BPB - Tuesday, July 17, 2007 - link

    If energy savings is your aim, and your organization is a large one, then check out this article from last month.
  • brshoemak - Tuesday, July 17, 2007 - link

    Normally I'm not an editor but:


    Internal storage once again comes from one WD1600YD hard drive configured in RAID 0 with the OS installed.
    [under the Test Setup configurations]

    I'm assuming it's two hard drives? Or am I missing something because it's early?
  • Jason Clark - Tuesday, July 17, 2007 - link

    Fixed.
  • DeepThought86 - Tuesday, July 17, 2007 - link

    Based on these results, it looks like even though Barcelona will top out at 2.0 GHz, with the same TDP it should be a killer in performance/watt and a great server processor.
  • LTG - Tuesday, July 17, 2007 - link

    Not for long - how hard would it be for Intel to come up with a non FB-DIMM solution?

    Then they would crush AMD, because their CPUs actually have better power consumption.

  • Hans Maulwurf - Tuesday, July 17, 2007 - link

    The AMD numbers for power consumption of the CPUs alone seem far too high.

    I guess you didn't rearrange the memory modules when you took one CPU out of the system. Thus you disabled half the memory modules as well.
  • Ross Whitehead - Tuesday, July 17, 2007 - link

    We did not rearrange the memory modules, as we only wanted to alter one attribute of the system between measurements so that we could attribute all differences in power to that one change.

    If you consider that all of the AMD DIMMs only took 8 Watts total, and that the difference between AMD CPUs and Intel CPUs was 31 Watts total, I am fairly confident in the numbers.
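    A quick sanity check on that reasoning (the 8 W and 31 W figures are from the measurements above; the worst case of half the DIMMs drawing no power at all is my assumption):

```python
dimm_total_w = 8.0   # measured total draw of all AMD DIMMs
cpu_delta_w = 31.0   # measured AMD-vs-Intel CPU difference

# Even if half the DIMMs had been effectively disabled, the error is bounded:
worst_case_error_w = dimm_total_w / 2
print(worst_case_error_w)                          # at most 4.0 W
print(round(worst_case_error_w / cpu_delta_w, 2))  # ~0.13 of the CPU delta
```

So even the worst-case memory-placement error is a small fraction of the measured CPU difference.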
  • TA152H - Tuesday, July 17, 2007 - link

    Are these articles really meant to mislead people, or are there actual performance differences between the low voltage parts and the normal ones? I was under the impression they were the same parts, just picked for their ability to perform at lower voltages, so their IPC should be completely identical. But the charts do show some difference, which is kind of surprising. This makes no sense to me at all, considering what AMD has been saying, but it is possible. Do you guys know what's going on with this? Are they just cherry-picked CPUs that run at lower voltage, or do they have differences that would alter IPC (most likely the L2 cache)? It might be that the test variances are just statistical scatter, but if so, it would make no sense to report performance data on both types of processors, so I don't get it.

    Also, the reason you don't combine servers is fairly simple, and that last paragraph is so uninformed it's mind-boggling. If you run a server at 3% all day except for, say, 30 minutes when your servers get pegged, you might average 7% for the day, but for those moments when your two servers are getting hammered you can't possibly merge them or you'd potentially suffer degraded performance. It's not the average that matters as much as the maximum, unless you can tolerate the degradation. Most people can not, and the cost of a server doesn't justify a loss of performance during peak times.
  • Jason Clark - Tuesday, July 17, 2007 - link

    Typically low power processors are picked based on yields, you are correct.

    The assumption about combining servers is just not correct. If you look at an enterprise VM stack like VMware, it can move VMs around based on resource usage. If a VM is using most of the resources, it can shuffle the other VMs around as required. Furthermore, VMware allows for resource scheduling, where you can inform the stack that at 3:00AM this VM needs more resources... Just because you spike at 80-100% for 10 minutes by no means implies that you are now tied to one physical host...
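    The load-shuffling idea can be illustrated with a toy scheduler. This is nothing like VMware's actual API, just a greedy sketch of placing VMs on whichever host is currently least loaded:

```python
# Toy load-based VM placement: assign each VM (by CPU share) to the host
# with the lowest current load, largest VMs first.
def place_vms(vm_loads, n_hosts):
    hosts = [0.0] * n_hosts
    placement = []
    for load in sorted(vm_loads, reverse=True):
        target = hosts.index(min(hosts))  # least-loaded host
        hosts[target] += load
        placement.append((load, target))
    return hosts, placement

# Ten lightly loaded VMs consolidated onto two hosts:
hosts, plan = place_vms([0.30, 0.25, 0.20, 0.15, 0.10,
                         0.10, 0.25, 0.20, 0.15, 0.30], 2)
print(hosts)  # both hosts end up near 1.0 (fully utilized)
```

A real scheduler also migrates VMs live as loads change; the point is simply that placement follows measured usage, not a fixed VM-to-host binding.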

  • TA152H - Tuesday, July 17, 2007 - link

    On the first part, you should remove one of the pairs of processors, either the low power ones or the normal ones, and just say what you said here. The fact that you test both for performance strongly implies a difference where none exists, and in fact is just confusing. Why test both if they are the same? Wouldn't it be better to just say they have the same IPC, clean up your charts some (except for the cost per watt type), and remove this source of confusion?

    OK, with regards to VMware, where do you get these extra resources from if you have gotten rid of the machine? Software is great, but if you don't have the hardware, how do you allocate these machines? I guess if you have a situation where one piece peaks at one time of day and another at another time, you could do something like this, so I'll grant you that it would work in some situations. In my experience this is not typical though; most of the time you have "peak" hours where more people are just on and using all the servers more. And if you don't have the extra capacity sitting around on an underutilized server, there isn't much that will help.
  • Alyx - Tuesday, July 17, 2007 - link

    In regards to saving money with servers, I'm sure the substantial amounts of cash that would warrant an upgrade of this type would only materialize in a case where there was some type of server farm, or at least enough servers to consolidate one or two. Saving $10-$20 a month on power would only be useful if you were saving it on 20+ servers.

    Hell, if you are only swapping out one or two servers then the number of tech hours spent is going to eat up any monetary benefit for at least the first year's worth of power.
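    That break-even point can be sketched as follows; every number here (labor hours, hourly rate, per-server savings) is hypothetical, purely to show how the scale effect works:

```python
# Months until power savings recoup a one-time migration labor cost.
def breakeven_months(labor_hours, hourly_rate, monthly_savings_per_server, servers):
    upfront = labor_hours * hourly_rate
    return upfront / (monthly_savings_per_server * servers)

# 20 tech-hours at $75/hr, saving $15/month per server:
print(breakeven_months(20, 75, 15, 2))   # 2 servers  -> 50.0 months
print(breakeven_months(20, 75, 15, 20))  # 20 servers -> 5.0 months
```

With only a couple of servers the payback stretches past four years; at farm scale it drops to a few months.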
  • VooDooAddict - Tuesday, July 17, 2007 - link

    If you can consolidate with VMware, Xen, etc., you'll get far better raw system power usage out of a couple of dual socket quad core systems running as hosts than running everything on physical servers with low voltage chips.

    Cooling is another issue though, as once you get all those 8 cores working on VMs you'll have quite a lot of heat in a concentrated area.
  • duploxxx - Tuesday, July 17, 2007 - link

    Well, it will depend on your hardware. If you mean the currently available quad core offerings, they sure look interesting price-wise, but they are not interesting performance-wise in a virtual (hypervisor) environment. Even current Woodcrest systems have a major FSB limit with their dual 1333MHz FSB compared to current AMD K8 Opteron systems; with quad core CPUs you just add raw CPU power on top of the same limit.

    There are enough benchmarks providing this info, and if you have the chance to play with those systems, you will notice it yourself.

    For a real quad core advantage you'll have to wait for the K10. Even if it is only at 2.0 GHz, with its updated dual memory controller, internal communication, shared cache and, most importantly, the NPT feature, it might even outperform 2.6-3.0 GHz Clovertowns in virtualization.
  • sc3252 - Tuesday, July 17, 2007 - link

    Even though this is for low power, it is nice to see AMD win some benchmarks for a change.

    If I was building a small server, it looks like the logical choice would be AMD, for now. Of course, if I was building a small server it would probably be an X2 3600 or a Sempron; unless you are actually running a database you are never going to be at high utilization. Running MythTV, Samba and a mail client would never require the amount of power either the Core 2 Duo or the AMD64 CPUs have.
  • TA152H - Tuesday, July 17, 2007 - link

    I run one server on a K6-III+ running at 250 MHz, and it never gets pegged. A lot of servers are just file servers, and you can get by with an underclocked processor easily, since the main performance characteristics come from I/O devices. So, a lot of it depends on the workload.

    I was looking again at some mini-ITX stuff lately, and VIA has some really impressive processors now too. A year or so ago, I bought an 800 MHz Eden processor that took less than 8 watts, but it proved worthless since the performance was less than a K6-III+ using the same amount of power (although the platform was more stable). So, I wasn't too impressed with it. However, now they have a 1.5 GHz version with the same power use, and an amazing 1 GHz version using only 3.5 watts. With a notebook hard disk and some memory, you can get power draw of considerably under 30 watts for a whole system, and the performance is fine for a lot of stuff. It looks like they are finally beating the older technology and offer interesting products for those who do not need a lot of processing power and are very interested in saving power. The bad thing is, though, with 2.5 inch drives even your I/O performance is not going to be very good, but you can always use these in other cases if you need I/O performance. How reliable they are is unknown to me, and of course if you have critical data they are a bad choice, because unless you use the slot you don't have RAID either. But in limited situations they are worth considering.

    They have released a new chipset, the CN896, that looks interesting, but so far I have not seen any motherboards based on it available. It's irritating that they release these things so far in advance of any products, but it looks to be a pretty impressive chipset. The only thing I don't like is that it doesn't support DX10, and I don't know why anyone would settle for DX9 anymore, especially with Vista becoming the standard. So, it's really just for Vista Basic, which kind of sucks. But, I guess this type of compromise is necessary in this power envelope.
  • Spoelie - Tuesday, July 17, 2007 - link

    Your last paragraph doesn't make much sense.
    Vista Aero only needs a DX9 card; it does not require DX10. The only things that require DX10 are DX10 games, and performance in those has been utterly abysmal on everything but the highest end cards. Integrated DX10 graphics at this moment would be completely useless.

    So it is in fact the other way around: why would anyone pay extra for features that are unusable now and for the foreseeable future?
  • TA152H - Tuesday, July 17, 2007 - link

    Well, Microsoft only validates Vista Basic for that chipset, not Aero.

    Abysmal is a subjective term. Sure, you have the dweebs that want the highest settings on everything, and it's true an IGP wouldn't be a great choice for them. But you're obviously ignorant of the fact that DirectX is not just for games, so I'm not sure what else to say to you. Learn something, OK? I can help you: there is a nice interview with a guy from Microsoft who was quite involved with DirectX 10. Maybe you'll believe him when he says it's for more people than just gamers?

    DirectX 10 is useful in a lot of ways, not just idiot kids shooting aliens. There are now low cost cards supporting this, and only a fool would buy old technology for the same price, unless they like obsolescence. The problem with the CN896 is it doesn't support DirectX 10, or Aero, and that's just a bad thing for something being released right now. On top of this, it isn't even available in motherboards. Compare this with the 965, which is a lot older, and still has more features. I think most people buying a machine, even a low power one, would prefer to run Aero on their machine, Vista isn't quite Vista without it. But, maybe they couldn't do it with the power envelope, and decided it would have to wait until they could. So, they just support the obsolete and inferior DX9. It's understandable, but I don't really like it. Not for something totally new. I don't think anyone should be releasing DX9 only hardware anymore. The future is clear, and it's DX10 and what succeeds it.
  • Spoelie - Tuesday, July 17, 2007 - link

    The fact still remains it is only being used in games for the moment; try to find a CAD/CAM solution that utilizes DX10 features, or an announcement of one in the coming year.

    If MS doesn't certify Aero, then you have a point, but DX10 is a non-issue.

    And yes, performance is non-subjectively abysmal. We're not talking about ultra high settings; we're talking about 1024x768 with no AA or AF whatsoever. That is below the standard LCD resolution... You need a high end chip to run it; the mainstream lines of both ATI and NVIDIA do not cut it, let alone any hypothetical DX10 integrated graphics.

    Any person would buy older technology if it provides a better value, and that is what most current low-end to mid-range dx9 solutions provide, better performance for a better price. Maybe in a year or so the situation will have changed, but now and in the near future dx10 is next to useless on anything but the high end.
  • ButterFlyEffect78 - Tuesday, July 17, 2007 - link

    What does AMD stand for? Advanced micro systems. If you apply this logic it should be very self-explanatory to this subject or topic.
  • Gul Westfale - Tuesday, July 17, 2007 - link

    it's "than", not "then". also, AMD stands for Advanced Micro Devices, not Systems.

    as for the article, it is certainly interesting to learn something about servers, but the actual consumption of a new system is certainly going to be less important than things like future upgradeability, performance, and reliability. in that sense, the rather small differences between AMD and intel do not matter so much, they are not the most important deciding factor when it comes to making a buying decision. also, there are so many different system configurations out there that with a few different components the numbers here would be very different.

    so, like i said, interesting read but we should not draw too many conclusions from it.
