54 Comments

  • baka_toroi - Tuesday, March 15, 2011 - link

    "Starting in 2010 Intel will have an Atom based..."
    So, is it already on the market or has your inner calendar gone back a year or two, Anand? :3
    Reply
  • JarredWalton - Tuesday, March 15, 2011 - link

    Just fixed it. :) Reply
  • JMC2000 - Tuesday, March 15, 2011 - link

    Though I know AMD's stance on using Bobcat in servers, I wonder what kind of ULP chip could be made with 16 or 32 Bobcat cores? Reply
  • Taft12 - Tuesday, March 15, 2011 - link

    Not an effective server chip, that's for sure. Bobcat is absolutely not suited for server loads.

    AMD has no offering in this space (and won't for years if ever), but as this article points out, this only projects to be ~10% of the market. ARM is the competitor here, not AMD.
    Reply
  • OneArmedScissorB - Tuesday, March 15, 2011 - link

    It's "for sure?" Did you design the Bobcat core, or what?

    It's not like whatever implementation of Atom they go with will be the same as the PC version, even if that's drastically updated and addresses every conceivable issue. The platform itself will have to be radically different, and it would be just the same for Bobcat.
    Reply
  • HibyPrime1 - Wednesday, March 16, 2011 - link

    I believe he was referring to the fact that the Bobcat chips have a high-performance (for a server) GPU in them. That would end up being wasted energy in almost any server. The only place GPUs are used heavily in servers is in HPC applications, and Bobcat is far too slow for that. Reply
  • JMC2000 - Wednesday, March 16, 2011 - link

    Actually, Bobcat itself does not contain a GPU, just like the Bonnell core doesn't. The Brazos platform chips, Ontario and Zacate, are what contain the GPU. Bobcat is just the low-power x86-64 core used in them. But 16-32 of them on either a single C32 or G34 socket would make for a very dense, low-power server. Reply
  • mino - Tuesday, March 15, 2011 - link

    "Bobcat is absolutely not suited for server loads."
    Well, compared to Atom it SURELY is. The ONLY missing ingredient is ECC.

    On the other hand, Atom, with half the performance on top of missing ECC, has just been nominated by Intel for the microserver space.

    Just wondering who paid for that post of yours...
    Reply
  • MonkeyPaw - Tuesday, March 15, 2011 - link

    Yeah, it's not like Bobcat is based on Opteron, which put up a good showing until Nehalem. Before Intel's IMC and QPI, Intel's x86-turned-server products were engineering bandaids. FB-DIMMs? Really? Reply
  • piroroadkill - Wednesday, March 16, 2011 - link

    It's more suitable than Atom, by any measure. Reply
  • noeldillabough - Tuesday, March 15, 2011 - link

    *If* and when the new version of Windows Home Server is finished and *if* its cool I'd be looking at one of these low power processors for its heart. What board(s) go best with low power Xeons? Reply
  • casket - Tuesday, March 15, 2011 - link

    ARM is coming to Windows... not surprised to see this.
    "Microsoft announced on 5 January 2011 that the next major version of the Windows NT family will include support for ARM processors. Microsoft demonstrated a preliminary version of Windows (version 6.2.7867) running on an ARM-based computer at the 2011 Consumer Electronics Show.[88]"
    Reply
  • Taft12 - Tuesday, March 15, 2011 - link

    Why would it need to be one of these Xeons and not the 35W TDP Core i5? Reply
  • greenguy - Thursday, March 17, 2011 - link

    ECC RAM. Reply
  • JGabriel - Monday, March 21, 2011 - link


    Okay, but why do you need error-correcting RAM in a home server? Assuming the primary applications are serving files and multimedia, ECC seems unnecessarily robust for your purposes -- especially since buffered, registered RAM costs twice as much.
    Reply
  • anactoraaron - Tuesday, March 15, 2011 - link

    Why not download the beta before making your server box?

    As far as I can see it's just like V1 but without the Drive Extender. The interface isn't that different, and it functions similarly to V1 WHS.
    Reply
  • MrSpadge - Tuesday, March 15, 2011 - link

    2 Sandy Bridge cores with a 20 W TDP running:
    Load on 2 cores: 2.2 GHz
    Load on 1 core: 3.4 GHz

    ... that's how I want my dual core and what Bobcat should have been!

    At least proportionally. Two much slower and simpler cores at a maximum of 1.6 GHz at 18 W seem outright stupid in comparison. Sure, it's got a GPU included in this power envelope... but this thing shouldn't consume power when it's not doing anything (just as the SNB GPU doesn't), otherwise it's a really bad choice for the task at hand.
    Reply
  • asmoma - Tuesday, March 15, 2011 - link

    Turbo is coming to Brazos. Google: AMD E-450 Reply
  • mino - Tuesday, March 15, 2011 - link

    Those SB chips also cost about the same as a Brazos SYSTEM does. :) Reply
  • duploxxx - Wednesday, March 16, 2011 - link

    Duh, it also has a GPU, remember?

    Atom in its current in-order design is useless; many benchmarks have already shown that Brazos is a much better chip for that. Unless of course Intel totally redesigns the Atom.
    Reply
  • marc1000 - Tuesday, March 15, 2011 - link

    I don't believe extreme low-power servers will catch on... it is way better to have only 1 or 2 "medium" servers and virtualize small servers inside those boxes.

    Think about replacing 1 CPU or memory module in a really high-density server... the current blade servers are already crammed enough.
    Reply
  • Johnmcl7 - Tuesday, March 15, 2011 - link

    That was my initial thought, but there are other uses for servers where I think microservers would work well; one example I've seen is within manufacturing equipment. In my experience there seems to be increasing use of PC equipment (rather than dedicated PLC equipment), and servers are chosen for uptime/reliability. However, these servers are frequently overpowered, so a system using a low-power processor in a server platform could be appealing.

    John
    Reply
  • Sam125 - Tuesday, March 15, 2011 - link

    That makes sense, but going with ultra dense micro servers seems overkill for equipment that doesn't require much computing power to begin with. Of course I would be a fool to doubt Intel's ability to size up a market, but I'm having a hard time picturing where an ultra dense microserver makes a better choice than what's already out on the market. Reply
  • mino - Tuesday, March 15, 2011 - link

    For certain markets, isolation is key. And the cheapest way to achieve _undisputable-by-every-second-idiot_ isolation is by going the microserver way. Reply
  • L. - Wednesday, March 16, 2011 - link

    I doubt that will have any impact on "every-second-idiot", as managing many micro-servers will be done using a micro-server management tool, with which you'll be able to make as many big mistakes as with a virtualization tool (or almost).

    Dense microservers are useless.
    In the sense that, if you find it funny to put 2 cores on a micro-die, it is still way more interesting to put 36 cores on a normal sized die, with all the benefits and cost cutting that implies (shared mem controller etc.).

    Also virtualization has brought a lot of manageability and I don't see that becoming useless because there is another power efficiency point of interest in "micro-servers".

    Besides, any "spike" load on those atoms and you'll see how useless they can be.

    Really, I can see 36 low-power cores on a die, but 2 low-power cores on a die IN a datacenter ? what for ... if you're in a datacenter you need 36+ of them anyway and you'll definitely prefer the manageability increase and the lower power requirement of a single die 36 core toy over 18* the even lower power req. of those dual core mini-cpus.

    Also one must realize that splitting one 36 core box (assuming somebody sells a 36 low-power core die, which would NOT have more computing power than the current multi-core AMD cpu and would surely not use more power) in 18 2 core boxes means you have 18 power supplies to plug, 18 damn network cables adding to the spaghetti dish of the day, etc.

    Not even talking about RAM yet, because if you're going to put something that's equal to 1/18th of a real server CPU in its own box, it would be only right to give it like 1 gig of RAM...

    Management costs up, maintenance costs up, complexity up, price up ... yes you get hardware-level isolation and potentially extremely cheap fully redundant solutions (like 2 4-socket atom boards in a 1U chassis with integrated load-balancing woo ..) but that still does not make any sense to anyone who uses a datacenter quarter rack or more.

    Obviously some people will buy these, because Intel said so, but I think it's clear Atom-like CPUs are just good for low-power stuff, like a home file server, and the stuff mfeller2 mentions just down here. - And for the poor who need full h/w redundancy or those who don't like virtualization's manageability (and the every-second-idiot effect).
    Reply
  • casket - Wednesday, March 16, 2011 - link

    I completely agree. Microservers are crap. If you have smaller cores that use less power, like an Atom... you can stick lots of them together. There is no need for a deep pipeline... if you have 100 cores. 36 cores would be great... but we're going to have to wait a little bit. A 16-core Atom is coming, though.

    Microsoft Pressing Intel for 16-Core Atom
    http://www.tomshardware.com/news/16-core-Atom-SoC-...
    Reply
  • marc1000 - Wednesday, March 16, 2011 - link

    1 CPU with 16 small cores that has the same performance as 1 CPU with 4 big cores?

    I don't believe the power savings will be THAT great; once you start to put more cores on the same die, the power needed goes up.

    Anyway, this defeats the "isolated server" idea.
    Reply
  • johnstalberg@yahoo.se - Thursday, March 17, 2011 - link

    I think you're missing the point here with the 2 * 16 = 1 * 32 logic.

    Forget about Intel for a moment and think about where these CPUs come from. ARM chips typically run our smartphones' software, and as such battery savings have been the holy grail of their design. Sure, a conscious push toward a dedicated server market could erode some of this, but on the other hand it isn't absolutely a must. A low power footprint, as an isolated parameter, has an ideal value of 0 in any use case, but it has different importance across different tasks.

    Data centers become more and more burdened by power costs, so lowering power usage gets more and more important, and as such this kind of server chip and ultra-mobility chips become a match. One could believe you would be heavily constrained by raw physics and get the simple proportionality you describe, but as it stands today, ARM chips have an efficiency advantage while not being too slow to be usable for a lot of tasks.

    Had the bigger iron just been bigger, your analysis would be perfectly valid. As it is now, you seem to forget the proportional differences in power usage.

    Now get back to Intel and the Atom and think about why it is the best candidate. Well, it is their low-power chip, but it also shares some or a lot of the ARM battery-saving mantra. That can make it do better than any 2 * 16 = 1 * 32 comparison will show. Xeon has not been designed with an overall power-saving agenda covering the whole architecture project from day one until production!
    Reply
  • ajcarroll - Wednesday, March 16, 2011 - link

    L, while I think your points are valid, it's important to note that microservers are a different form factor from blades. They're not just dropping a low-power CPU into a blade, but rather putting it into a half-width / half-height box, so that you can fit 4x into the same rack space. Plus they plan to share power supplies... So while I think you've raised valid points, it really boils down to whether or not they address the issues you raise. If they don't address the issues, then the technology is indeed worthless; if however they do succeed in producing something that is competitive in some applications, in terms of computing power vs. power consumption/physical space, then they will succeed in those markets. My gut feel is that 5 years from now, microservers will indeed have established themselves as a legit technology that is the best choice for some applications, while utilizing virtualization on more powerful boxes will be the best choice in other situations. Reply
  • mino - Wednesday, March 16, 2011 - link

    There are MANY, MANY use cases where a dedicated physical machine is either required or good to have. This is where the real market is.
    Give me physical hosting on Atom over Virtuozzo "hosting" any day.
    Reply
  • mfeller2 - Tuesday, March 15, 2011 - link

    1. I run an Atom "server" at home. File serving, proxy server, light web server duty, at a whopping 22 W typical. Smaller, quieter, and more power-thrifty than the re-dedicated desktop.
    2. I have been interested in AMD's solution because (I believe) it supports VT in a slightly higher power envelope. The current Atom does not have hardware support for VMs. I sometimes use VMs as sandboxes to experiment in.

    My use is not mainstream, and not the focus of Intel or this article, but adding ECC to the mix has an appeal to my little niche.
    Reply
  • Taft12 - Wednesday, March 16, 2011 - link

    If you're only using VMs as experimental sandboxes, who cares if you don't have hardware support for virtualization? It still works. Reply
  • mino - Wednesday, March 16, 2011 - link

    And is painfully (borderline useless) slow on in-order Atom. :) Reply
  • madmilk - Saturday, March 03, 2012 - link

    I run an Athlon II X2 250 in my server. Power consumption is about 29W idling with an extra Intel NIC and 4GB DDR3, as well as one 3.5" 5400rpm hard drive spinning at all times and 2 more spun down but still connected. It supports VT, and if I had the right motherboard (which I don't) it can also use ECC memory.

    Pretty nice, I think!
    Reply
  • vol7ron - Tuesday, March 15, 2011 - link

    Is this article in any relation to: http://www.anandtech.com/show/4208/cebit-2011-some... Reply
  • devene - Tuesday, March 15, 2011 - link

    I think that they are being shortsighted. Instead of limiting the server, why not make a hybrid server that uses both low-power and medium-power CPUs? The small cores take over during light loads while the heavyweights wait, power-gated in some low C-state, and once load peaks, the server kicks in with full force. The idea sounds like a logistics nightmare, but if anyone has the resources to solve it, it's Intel... Reply
  • DanNeely - Wednesday, March 16, 2011 - link

    I believe high-end VM farm software allows you to shuffle VMs between machines on demand; is there really a need to put both high- and low-end CPUs in the same enclosure instead of swapping the VM between two separate servers in the same rack? Reply
  • pjkenned - Tuesday, March 15, 2011 - link

    ServeTheHome.com already has benchmarks of the Xeon E3-1220, E3-1230, and E3-1280 CPUs up, and has had them up for some time. Reply
  • JMC2000 - Wednesday, March 16, 2011 - link

    Why did Intel change the Xeon model number scheme? There was some kind of continuity with the 3xxx, 5xxx and 7xxx lines. Reply
  • PhilipHa - Wednesday, March 16, 2011 - link

    Clearly Intel is trying to counteract upcoming ARM based servers, including those from Marvell and Nvidia.

    Calxeda claims to be working on quad-ARM-A9 server nodes with less than 5W TDP including memory, and up to 480 cores in a 2U form factor. ( http://www.zdnet.co.uk/blogs/mapping-babel-1001796... )

    I guess the problem for Intel is that it needs to cover this potential market whether it is realised or not, because it has the potential to significantly reduce margins in Intel's server business. Some claim such ARM servers will deliver 5x to 10x better performance per watt, and 15x to 20x better price/performance. ( http://www.eetimes.com/electronics-news/4213963/Ca... ).

    The other advantage the ARM ecosystem has is that its cores can be tailored to specific requirements: for example, floating-point silicon area could be reduced in favour of more cores and better integer performance, for applications like web servers where floating-point performance is not needed.
    Reply
  • marc1000 - Wednesday, March 16, 2011 - link

    I've read your link. There is an interesting part in it:

    "each node will consist of an ARM Cortex A9 quad-core CPU, DRAM and fabric interconnect , and will consume around 5W"

    So, for 480 cores you would need 120 quad-core nodes, which will consume 120 * 5 W = 600 W. This is in line with any multi-CPU server.

    Take 4 quad-core Xeons (130 W each), plus mobo and memory, and you are in the 600 W arena in 2U of space again.

    There is simply a physical limit to doing calculations using electricity. It does not matter what kind of CPU you use; you will need similar amounts of power to do similar amounts of calculations.
    Reply
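The arithmetic in the comment above can be sketched in a few lines. All figures are the commenter's assumptions (the 5 W per Calxeda node comes from the linked article; the 130 W Xeon TDP is the commenter's own number), not verified specs:

```python
# Rough power-density comparison, using only the figures quoted in the
# comment above. None of these are measured values.

arm_node_power_w = 5          # quad-core A9 node incl. DRAM and fabric
arm_cores_per_node = 4
target_cores = 480

nodes_needed = target_cores // arm_cores_per_node      # 120 nodes in 2U
arm_total_power_w = nodes_needed * arm_node_power_w    # 600 W

xeon_tdp_w = 130              # per quad-core Xeon, CPU only
xeon_count = 4
xeon_cpu_power_w = xeon_count * xeon_tdp_w             # 520 W before mobo/RAM

print(nodes_needed, arm_total_power_w, xeon_cpu_power_w)  # 120 600 520
```

The totals land in the same ballpark, which is the commenter's point; the comparison ignores per-core performance, which is where the two camps actually differ.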
  • marc1000 - Wednesday, March 16, 2011 - link

    That's on the same process node, of course. Reply
  • MadMan007 - Thursday, March 17, 2011 - link

    Whoever came up with the dumb-sh*t abbreviation 'SNB' for Sandy Bridge needs to die in a fire. Reply
  • marc1000 - Thursday, March 17, 2011 - link

    agreed Reply
  • greenguy - Thursday, March 17, 2011 - link

    This is more likely to combat what AMD is planning, which will (hopefully) be the release of a relatively low cost Llano with power gating on CPU core and GPU, with low idle power and ECC RAM support, which is what is needed in a home NAS or server. With all the video stuff on the APU and power gated, the total system power draw has potential to be very low.

    A system with ECC RAM is the perfect complement to NAS with ZFS RAID, ZFS ensuring that your data is as it should be, and the ECC RAM ensuring that any data that should be written is the correct data. And also ensuring that the system is running as it was coded to, without mysterious crashes.
    Reply
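The ZFS-plus-ECC point above can be illustrated with a toy sketch (plain Python, not ZFS code): an on-disk checksum catches corruption that happens *after* the block was checksummed, but a bit that flips in non-ECC RAM *before* the checksum is computed gets silently blessed — which is exactly the gap ECC closes.

```python
# Toy model of per-block checksumming (the mechanism ZFS uses), showing
# why it is complementary to ECC RAM rather than a substitute for it.
import hashlib

def store(block: bytes):
    """Write a block alongside its checksum."""
    return block, hashlib.sha256(block).digest()

def verify(block: bytes, checksum: bytes) -> bool:
    """Read-time integrity check (what a scrub does)."""
    return hashlib.sha256(block).digest() == checksum

good = b"payroll records"

# Case 1: corruption AFTER checksumming (disk/cable bit rot) -> caught.
stored, cksum = store(good)
rotted = bytes([stored[0] ^ 0x01]) + stored[1:]
detected = not verify(rotted, cksum)            # True: scrub catches it

# Case 2: a RAM bit flip BEFORE checksumming (what ECC prevents) -> missed.
flipped = bytes([good[0] ^ 0x01]) + good[1:]    # corrupted in memory
stored2, cksum2 = store(flipped)
missed = verify(stored2, cksum2)                # True: bad data, valid checksum

print(detected, missed)
```

Case 2 is why "ZFS alone" home NAS builds are sometimes criticised: the filesystem faithfully protects whatever bytes it was handed.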
  • UrQuan3 - Thursday, March 17, 2011 - link

    I'm kind of late posting on this one, but oh well.

    An Atom-like server would have its uses. Twice in the last year I've had to spec a machine that needed tiny computation ability but 60GB of ECC RAM. The machine simply does not exist. Using a dual quad-core Xeon for a queue manager is painful, but there is currently no other way. The RAM cannot be spared for adding VMs, so the cores just sit there. A single Atom/Bobcat/Nano on a 64-bit ECC memory bus would be perfect.
    Reply
  • dirk adamsky - Friday, March 18, 2011 - link

    Hello all,

    HP already offers a low-power CPU server, the HP ProLiant N36L MicroServer based on the AMD Neo CPU.
    I combined the N36L MicroServer with an HP Smart Array P410 RAID controller with 512MB cache (with battery); the controller price in the Netherlands is about 270 euro (now already down to 250 euro).
    The setup is combined with 2 x 4GB Kingston memory and 2 x 300GB WD VelociRaptors.
    I made 4 blog entries on installation, VMware ESXi installation, MS Win SBS 2011 installation, etc.
    The first 2 can be found here:

    http://deludi.nl/blog/vbscript/various/various-how...

    http://deludi.nl/blog/vbscript/various/various-how...

    Later I will post about running multiple VMs on the N36L.

    Best regards,

    dirk adamsky
    Reply
  • krumme - Saturday, March 19, 2011 - link

    One BD module, dual-core, is approx. 32 mm^2 sans L3; that's less than half a Zacate...
    Is the nearest competitor for AMD's BD the Atom or SB then? - or ARM?
    Soon Intel will realize they have a good SB CPU but without a profitable market for it
    Reply
  • psiboy - Monday, April 18, 2011 - link

    Atom would be roadkill in the server market... any of the new AMD Brazos lineup would smash them... especially as Atom is not an out-of-order CPU, for crying out loud! Reply
  • ratana - Sunday, April 24, 2011 - link

    What moron would ever consider putting an underpowered POS like Atom in a server? Really, they aren't worth the juice they burn in computing power relative to any other computational device, even a single CUDA core (stream processor), as you like it. It would be just dumb to waste space in or on a blade with this wimpy POS; it would create more heat than the feeble arithmetic it can do is worth removing. I had these embedded in my gel electrophoresis visualizer for taking photos of proteins as they separated out due to varying voltages, and I hated it; somewhat like the guys in "Office Space" hated the photocopier. I hate Atom. Love the Xeons and Core series though. Reply
