For the past several years, reducing power consumption in datacenters has been one of the most talked about issues, and some progress has been made. PSUs have become a lot more efficient at AC/DC conversion, with conversion efficiency going up from 70% to as high as 90%. CPUs are dissipating less heat even at full load and have become quite efficient when running idle thanks to technologies like Intel's EIST and DBS. Memory DIMMs (some of them at least) are able to save a bit of power too. But there is still a lot more power that can be saved.
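
To put those efficiency figures in perspective, here is a minimal back-of-the-envelope sketch in Python; the 400 W DC-side load is a hypothetical example, not a number from the article.

```python
def wall_power(dc_load_watts: float, efficiency: float) -> float:
    """Power drawn from the AC outlet to deliver a given DC load."""
    return dc_load_watts / efficiency

load = 400.0  # hypothetical DC-side load in watts, not a figure from the article
for eff in (0.70, 0.90):
    print(f"{eff:.0%} efficient PSU: {wall_power(load, eff):.0f} W at the wall")
# Prints roughly 571 W vs 444 W: the same server dumps about 127 W less heat
# in the PSU alone.
```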


Intel's first idea is to turn off a lot of motherboard components when they are not in use. In the current platform, many components are always on, no matter how busy they are. In the picture below you can see which components Intel is trying to turn off - or at least make sure they run in a lower power state.


To do this, Intel wants to shift away from OS power management toward hardware power management, somewhat similar to how AMD's Barcelona and Intel's newest Core CPUs regulate their own power states. The reason OS management is not desirable is that it is not accurate enough: each transition from a low power state to a higher power state or vice versa costs a bit of power, so you want to make sure you are not switching between them too quickly.
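
To see why transition overhead matters, here is a rough sketch of the usual break-even calculation, with entirely made-up energy and power numbers: entering a low power state only pays off if the component stays there long enough for the saved power to outweigh the energy spent switching.

```python
def break_even_residency_s(transition_energy_j: float,
                           active_power_w: float,
                           low_power_w: float) -> float:
    """Minimum time in the low power state before the switch pays for itself."""
    savings_per_second_w = active_power_w - low_power_w
    return transition_energy_j / savings_per_second_w

# Hypothetical component: 2 W active, 0.5 W in its low power state,
# 0.03 J spent on each enter/exit round trip (all numbers invented).
t = break_even_residency_s(0.03, active_power_w=2.0, low_power_w=0.5)
print(f"Switching only pays off for idle periods longer than {t * 1000:.0f} ms")
# For shorter idle periods the transition wastes more energy than it saves,
# which is why fast, fine-grained hardware control beats coarse OS heuristics.
```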

Intel is also taking a hard look at reducing the VRMs and capacitors that are needed for the DC/DC conversions between the power supply and the motherboard. These improvements are desirable for both servers and notebooks, and Intel expects up to 30% improvement in performance/watt from these modifications. However, it is clear that even these improvements alone are not going to save the datacenters which are short on power right now. The complete chain, from the power entering the datacenter to the CPUs/systems using that power, must become more efficient.


Some current datacenters are already powered by 48V DC, which saves quite a bit of power by reducing the heat losses from AC to DC conversions. Rackable Systems has been promoting DC powered servers for quite some time now. However, 48V DC power also requires much thicker (7 to 17 times bigger) cables that are harder to place.
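
The cable problem follows directly from P = V * I: at a lower distribution voltage, the same power means proportionally more current, and since resistive loss grows with the square of the current, the conductors have to get much fatter. A rough sketch with a hypothetical 10 kW rack:

```python
def current_amps(power_w: float, voltage_v: float) -> float:
    """Current drawn at a given distribution voltage (P = V * I)."""
    return power_w / voltage_v

rack_power_w = 10_000.0  # hypothetical 10 kW rack, not a figure from the article
for voltage in (400.0, 48.0):
    print(f"{voltage:>5.0f} V DC feed: {current_amps(rack_power_w, voltage):6.1f} A")
# ~25 A at 400 V versus ~208 A at 48 V. Since resistive loss is I^2 * R, the
# 48 V cabling needs far more copper cross-section to keep losses in check,
# which is where the "7 to 17 times bigger" cables come from.
```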

That is why Intel feels that the industry should take a good, long look at a 400V DC power infrastructure. At this point in time, a 400V DC power infrastructure would be incredibly expensive, and there are no industry standards. But as you can see in the picture above, about 75% of the power that enters the datacenter would actually be used for doing useful work on your servers, instead of only 50% now.
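
Those end-to-end numbers are simply the product of the efficiency of every conversion stage between the utility feed and the silicon. A hedged sketch, with stage efficiencies that are illustrative guesses chosen to land near the article's figures rather than numbers from Intel's slide:

```python
from math import prod

# Illustrative stage efficiencies only -- rough guesses, not Intel's numbers.
current_ac_chain = {
    "UPS (AC/DC/AC)": 0.90,
    "PDU/transformer": 0.93,
    "server PSU (AC/DC)": 0.75,
    "onboard VRMs": 0.80,
}
proposed_400v_dc_chain = {
    "rectifier to 400 V DC": 0.96,
    "DC distribution": 0.99,
    "server DC/DC": 0.92,
    "onboard VRMs": 0.86,
}

for name, chain in (("current AC chain", current_ac_chain),
                    ("400 V DC chain", proposed_400v_dc_chain)):
    end_to_end = prod(chain.values())
    print(f"{name}: {end_to_end:.0%} of incoming power reaches the silicon")
# Prints roughly 50% for the current chain and 75% for the 400 V DC chain.
```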

There's a lot more to this subject, but for now we'll just leave it at: To be continued...

Comments

  • afan - Friday, September 21, 2007 - link

    you remarked:
    quote:

    "That is quite amazing as even the 130W 2.93GHz Xeon MPs can find a home in this server."


    The supermicro site:
    http://www.supermicro.com/products/motherboard/Xeo...

    Indicates that, yes, the board will take 130W 2.93GHz Xeons, but their 1U server chassis won't take it, alas. (If you look at their 1U server chassis, listed as the CSE-818TQ-1000, it indicates the max for 1U is 80W CPUs. The chassis seems to support AMD chips, in fact -- confusing!)

    I'm still waiting for PCI-e 2.0, More PCI-e x8 slots (2 is a joke), and dual 10gbps:
    http://www.intel.com/network/connectivity/resource...

    I wish they'd hurry up with these!

  • afan - Saturday, September 22, 2007 - link

    I found the 1U chassis:
    http://supermicro.com/products/system/1U/8015/SYS-...

    I read the manual, and it confirms that it won't take 130 Watt CPUs.
    Also, the spec sheet says it only has 1 PCIe x8 slot - but the manual says it can take two. WTF!?

    SuperMicro website TOTALLY SUCKS. (I like their products, though).
    ----
    What I'm really looking forward to are the boards alluded to here - of course I can't find any info on them(!):
    http://supermicro.com/newsroom/pressreleases/2007/...

    Seaburg-based:
    X7DWN+, X7DWA-N, X7DWT/INF and X7DWU.

    San Clemente-based:
    X7DCL-3/i, X7DCA-3/i and X7DCU

    Bigby chipset-based:
    X7SB4, X7SBE, X7SBi, X7SBL-LN1/LN2, X7SBA and X7SBU



  • JohanAnandtech - Saturday, September 22, 2007 - link

    It would indeed make sense that the 1U is limited to 80W CPUs. I specifically asked the Supermicro people what kind of CPU the 1U could take and they told me 130W. I guess the Supermicro people were a bit too enthusiastic. I'll ask again.
  • brshoemak - Friday, September 21, 2007 - link

    Anyone else notice that the last picture on the first page shows a tabletop fan supposedly as one of the components in the server (next to hard drive)?? Must be a 5U chassis.

    If that was an Intel created image you would imagine they have a few resources available to them to find a picture of a case fan.
  • JohanAnandtech - Saturday, September 22, 2007 - link

    Lol. This was indeed an Intel slide. I guess a tabletop fan is easier on the eye than a case fan? (I have no idea :-)
  • TA152H - Friday, September 21, 2007 - link

    Johan,

    You should probably explain what an AMB is before just introducing it. I think people will get from the context it is part of a FB-DIMM, but it probably could use some description.

    Intel's marketing has permeated reports of the Nehalem. When AMD alone was working on the integrated memory controller, their caches were small because of it. Not because it wasn't needed. Now, for Intel, their caches are getting smaller because they don't need it. So, the IMC is purely wonderful, it has no side effects, right? Wrong! They can't put the same cache on the chip and sell it economically because of the IMC. It's a tradeoff.

    Also, since when is a smaller chip faster? The Pentium Pro was bigger than the Pentium, but it ran at higher clock speeds. Ditto for Athlon versus K6, and Pentium 4 versus Pentium III. I think you meant that if you have a smaller cache, all other things being equal, you can run it at faster speeds (i.e. lower latency). There is one example, the Prescott, where the size of the processor (because of the power/heat) did limit the clock speed, but that was an isolated case (and one that Intel obviously never predicted when they created that monster).
  • JohanAnandtech - Saturday, September 22, 2007 - link

    Sorry for the late answer, I was flying over the Ocean yesterday :-).

    I agree with the IMC assessment. It is a matter of keeping the die a bit smaller, to save costs and probably to get one extra speed bin out of it. Huge dies do result in slightly slower chips though: the bigger the CPU, the bigger the chance that one of the cores cannot be clocked as high as the others, especially at the outside of the wafer.


  • TA152H - Saturday, September 22, 2007 - link

    Johan,

    I don't think that's the relevant point here, but I agree with it. Yes, more dies will result in lower clock speeds, in general, because you can only clock as high as the slowest one. No disagreement there. And more dies do make chips bigger. But, that's not what was relevant within the context of the statement. It was about a bigger cache making a processor bigger, not an additional die.

    In fact, in general, bigger processors have led to higher clock speeds, not lower. The Pentium 4 and Athlon were perfect examples. A lot of transistors were put there so they could clock higher, not lower. One of the design goals of the Pentium Pro was higher clock speed too, on the same process technology, and transistors were dedicated for the stages so it could.

    But, to put it another way, do you really believe that the Nehalem having a smaller cache would lead to higher clock speeds? It might have lower L2 latency, but that's not the same. Will the Penryn clock lower because it's got a bigger L2 cache? There just isn't any correlation between cache size and the clock speed a processor can run at, unless it's limited by heat, like the Prescott.
  • uutorok - Thursday, September 20, 2007 - link

    Dare to feel your heart pound, your pulse race, and your breath catch in your throat? To open yourself up for a jolt of sheer adrenaline that just might eat you alive? To bring home a beast that’s leaping and snarling on the end of its leash?

    Like Mother Nature, AMD has a dark side — and on September 25, 2007
    it will be revealed to the world.

    So what it really comes down to is, do you dare?

    http://amd-member.com/campaigns/black/
  • iceburger - Thursday, September 20, 2007 - link

    :) almost viral marketing
