
  • Shining Arcanine - Monday, June 14, 2010 - link

They should have used ARM processors for this server. Theoretically speaking, this server has only 1.6 teraflops of computing power, while an ARM server using the 4-core Cortex A9 would have 8 teraflops and use less electricity.

    I cannot see why they would want to use Intel processors for this, especially when Intel charges a premium, while ARM does not.
  • Shining Arcanine - Monday, June 14, 2010 - link

I forgot to mention, using ARM Cortex A9 processors would enable SeaMicro to use ECC memory on these systems.
  • loknar28 - Monday, June 14, 2010 - link

Will the ARM processor run server software designed for the x86 architecture? Wouldn't the server OS have to be ported over to the new architecture? I am guessing that, plus cost, might be why they used the Atom. Plus Intel probably took them out to lunch.
  • sprockkets - Monday, June 14, 2010 - link

Linux works well on ARM. Just recompile the kernel.
  • rs1 - Monday, June 14, 2010 - link

The kernel, and every single application that you want your server to run. Not worth it.
  • TeXWiller - Monday, June 14, 2010 - link

You would probably do it anyway if you could, so why not? This is definitely not a server for Oracle or SAP.
  • Souka - Wednesday, June 16, 2010 - link

    Cost of ARM + board over Atom + board?

Maybe Atoms were used to keep cost and thermal load down?
  • Taft12 - Tuesday, June 15, 2010 - link

    OK lets do a quick Linux/ARM compatibility check:

    Apache, check
    PHP, check
    MySQL or PostgreSQL, check
    JVM, check

This stack alone covers the baseline for many of the workloads that customers considering this hardware would need. I would not be at all surprised to see something similar from this or another vendor on ARM.
  • yyrkoon - Tuesday, June 15, 2010 - link

Compatibility is much greater than that. Debian can/will run on ARM, as will many flavors of Linux. Now, I have not checked, but I would think that includes every apt-gettable application.

Besides all that, *even if* you had to compile your kernel and all needed applications/drivers, etc., it would only have to be done once. Not once for every single core . . . and this is actually a preferred method for many system admins anyway. It's more secure and stable that way.

The only issue I can see is with those "admins" out there unwilling to break away from Microsoft in the server arena. Use the right tool for the right job. This has nothing to do with what makes a better OS. They are all tools, meant to be used for the correct job.

Now I have to say something aside from all this ARM / Atom conversation. *THIS* is all we need. This gives ISPs everywhere another reason to be slow . . .
  • Shining Arcanine - Tuesday, June 15, 2010 - link

Gentoo Linux also supports ARM.
  • Samus - Wednesday, June 16, 2010 - link

Windows servers account for over half the corporate server market. The rest of the market is filled with a clusterfuck of strange stuff like Mac servers, Linux servers running probably close to a dozen different ISAs, and then you've got Sun and some other niche products.

This is ALREADY a niche product. Why would you further convolute the market segment by making a niche product even more niche? They're doing the right thing making it compatible with the MAJORITY. An ARM (or other niche) product can come if and when this thing successfully makes money.
  • bmullan - Monday, July 19, 2010 - link

I've been using the ARM-based Marvell $99 "SheevaPlug" computers for over a year now. I get mine from GlobalScale.

They come with Linux (Ubuntu) preinstalled, but you can install other Linux flavors.

I've pulled down and installed dozens of applications on those little 4.5 W computers, and they all worked fine.

I had several, so some were running Apache web server apps and some were set up as Samba servers for sharing music, videos, etc. within my home. I'm sure there are problems I've just not encountered, but as I said, they worked great.

ARMs don't have math co-processing, so there are some things they can't do, or can't do well.

But those little $99 Plug computers work great.
I just ordered a couple of new models that, while still 4.5 W, now support:

    Linux Kernel 2.6.32
    Wi-Fi 802.11b/g
    Bluetooth: 2.1 / EDR
    U-SNAP I/O
    DDR2 800MHz, 16-bit bus
    512MB 16bit DDR2 @ 800MHz data rate
    NAND FLASH Controller, 8-bit bus
    512MB NAND FLASH: 4Gb x8, direct boot
    128-bit eFuse Memory
    7x GPIOs for user application- 5 with 3.3V I/O, 2 with 1.8V I/O
    1x eSATA 2.0 port -3Gbps SATAII
    2x USB 2.0
    1x Internal MicroSD Socket for Optional Kernel System
    1x External MicroSD Socket
    RTC w/Battery

    Optional with SPI Flash + SD card boot up
    UBIFS Flash file system support
    NAND Flash boot up
  • RagingDragon - Tuesday, August 24, 2010 - link

    Some ARM implementations have hardware floating point and/or SIMD units, others don't.

    Those plug computers look interesting, though I haven't dug into hardware specs to see if they have hardware floating point and/or SIMD.
  • Thog - Tuesday, June 15, 2010 - link

ECC would be nice for a server, but depending on what you're doing, flops are nowhere near as important as concurrency. I'd rather have a box that can run Solaris or Linux i386 out of the box, including applications already compiled for it.
  • nafhan - Wednesday, June 16, 2010 - link

Using Atom instead of ARM makes this almost a drop-in replacement for the Xeon/Opteron boxes they are talking about replacing. Once you switch away from x86, you're talking about a lot more work and a lot more risk for enterprise customers to adopt this. An enterprise customer will appreciate the decreased risk of being able to fall back on old hardware. Changing the software stack makes switching back and forth much more difficult.
  • surt - Monday, June 14, 2010 - link

ARM doesn't run x86 workloads very well, does it?
  • loknar28 - Monday, June 14, 2010 - link

They could always make a server version of Windows Mobile, lol.
  • piroroadkill - Tuesday, June 15, 2010 - link

But we're talking server workloads. A server of this scale is probably running Linux and anything you could compile for ARM anyway.
  • Shining Arcanine - Tuesday, June 15, 2010 - link

Exactly what is an x86 workload?
  • GeorgeH - Monday, June 14, 2010 - link

    It was inevitable that the first Borg Cube's hive mind would be x86 compliant.

    Resistance is futile, people.
  • CharonPDX - Monday, June 14, 2010 - link

    Except the ENTIRE POINT of this is maximum compatibility with minimum power draw. It is *NOT* meant for maximum power, at all.

Intel chips are standard. (They could have used an AMD Geode or VIA Nano, same effect.) They run all standard server OSes and all standard software. If you have to get custom-written server software, there goes your money savings!

    Yes, someone could re-compile their software for ARM, but this isn't meant for audiences that recompile their software. It's meant for audiences that need a lot of low-end servers.
  • code65536 - Tuesday, June 15, 2010 - link

Um, why are you citing flops? Unless this server is being used for scientific computing (or other applications along those lines, for which it's unsuitable anyway for a variety of other reasons), you don't care about how many floating-point operations this thing can do; the number of flops is totally irrelevant. You only care about the integer rate.
  • Shining Arcanine - Tuesday, June 15, 2010 - link

I was probably wrong to call it flops, as what I did was multiply the number of instructions per clock (2) by the clock speed and the number of cores.
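For what it's worth, that back-of-the-envelope arithmetic can be sketched as follows. The 2 GHz quad-core Cortex A9 and the 512-socket count are assumptions for illustration (matching the SM10000's 512 Atom sockets), not figures confirmed anywhere in the thread:

```python
# Back-of-the-envelope peak-throughput estimate -- raw instructions
# per second, not true floating-point flops.
# All input figures are assumptions for illustration.

def peak_ops(sockets, cores_per_socket, clock_hz, instr_per_clock):
    """Peak instruction throughput in operations per second."""
    return sockets * cores_per_socket * clock_hz * instr_per_clock

# Hypothetical ARM build: 512 sockets of a 2 GHz quad-core Cortex A9.
arm = peak_ops(512, 4, 2e9, 2)

# The SM10000 as shipped: 512 single-core 1.6 GHz Atoms at ~2 IPC.
atom = peak_ops(512, 1, 1.6e9, 2)

print(f"ARM:  {arm / 1e12:.1f} Tops/s")   # -> ARM:  8.2 Tops/s
print(f"Atom: {atom / 1e12:.1f} Tops/s")  # -> Atom: 1.6 Tops/s
```

Under those assumptions the numbers land on the ~8 "teraflops" for ARM and ~1.6 for the Atom box quoted above, which suggests this is how the original figure was derived.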
  • Calin - Tuesday, June 15, 2010 - link

    This is consumer-driven equipment and consumer-driven requirements.
Many customers would like quite a bit of RAM in their servers, but their processing needs would be small enough. Also, they would like to use "industry-standard" software (SQL, Apache, PHP, ...) for Linux, and it might not be fully supported on ARM.
    Yes, using ARM processors would give you better everything (I'm not sure about allowing access to 2GB of RAM), but it would be like trying to sell a Formula 1 car to someone living in a swamp.
  • Shining Arcanine - Tuesday, June 15, 2010 - link

    ARM is completely supported by Linux:

    I am running Linux on my Linksys NSLU2 and I can run just about whatever application I want on it.
  • MySchizoBuddy - Tuesday, July 06, 2010 - link

"An ARM server using the 4-core Cortex A9 would have 8 teraflops of computing power."
What's the source of this number, or how did you calculate it?
  • chromal - Monday, June 14, 2010 - link

Or you could instead embrace virtualization and oversubscribe the hardware a little. I'm not sure who is in the market for a $100,000+ machine that doesn't even offer basic enterprise features like ECC memory. Seems like a solution in search of a problem. I'm sure the problem is real and 'out there,' but I'm also sure that the specific instances that wouldn't be better accommodated by other technology are niche, indeed...

    Myself, I'd rather have one good Xeon X5550 CPU than 24 crappy Atoms.
  • vol7ron - Monday, June 14, 2010 - link

    I'm curious to see how this will pan out.

What would the ideal server type be? A web server with few computational processes?

I'm trying to think how small the computational processes would have to be. Would something like a blog/forum-based enterprise, or maybe an eStore, be ideal? I'm guessing something like a game server would not be suitable for this type of technology, nor would something like eBay, which is continuously calculating time differences.

    I'm also curious how this would scale down in price. How much would something like ~20 cores (w/ less memory) run? Something like this seems nice because it seems more economical to add on to.

As stated, it'd be nice to see this in other varieties (ARM-based) with ECC support. For some reason I get the feeling SeaMicro has been at this a while, and the Cortex A9 may not have been available when this project was started. Though I think the A9 also lacks certain instructions that the Atom provides.
  • spazoid - Monday, June 14, 2010 - link

    This is some very interesting hardware, but I see a problem with the money savings comparison.

If you have a Dell R610 running at 100%, you can't replace it with X number of Atom CPUs until you hit the same SPECint performance. You'd need a lot of these Atom CPUs to equal one quad-core Xeon, and seeing as they can't work together in any way other than accessing each other's virtual hard drives, the comparison is totally ridiculous.

    Yes, for something like web servers or similar where the CPU usage on a quad core CPU is very low, this could work, but I don't see any good reason for not just virtualizing such a server, which gives you many advantages that this setup simply cannot provide.
  • datasegment - Monday, June 14, 2010 - link

Looks to me like the savings go up, but the amount of power used steadily *increases* when CPU usage drops... I'm guessing the charts and titles are mismatched?
  • JarredWalton - Monday, June 14, 2010 - link

    No... it's a poorly made marketing chart IMO. Look at the number of systems listed:

1000 Dell vs. 40 SeaMicro = $1,084,110 ($52.94 per "server")
2000 Dell vs. 80 SeaMicro = $1,638,120 ($39.99 per "server")
4000 Dell vs. 160 SeaMicro = $2,718,720 ($33.19 per "server")

Based on that, the savings per "server" actually drops at lower workloads -- that's with 512 Atom chips in each SeaMicro server. It's also a bit of a laugh that 20,480 Atom chips are "equal" to 8,000 Nehalem cores (dual socket + quad-core, plus Hyper-Threading) under 100% load. Although they're probably using the slowest/cheapest E5502 CPU as a comparison point (dual-core, no HTT). Under 100% load, the Nehalem cores are far more useful than Atom.
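Working backward from the chart's totals, the per-"server" figures do check out if a "server" in SeaMicro's marketing means one Atom chip (512 per chassis, which is an inference from the chart, not something the chart states):

```python
# Reproducing the per-"server" savings from SeaMicro's chart,
# assuming a "server" is one Atom chip (512 per SM10000 chassis).
ATOMS_PER_CHASSIS = 512

scenarios = [
    # (Dell boxes, SeaMicro chassis, claimed total savings in $)
    (1000, 40, 1_084_110),
    (2000, 80, 1_638_120),
    (4000, 160, 2_718_720),
]

for dells, chassis, savings in scenarios:
    atoms = chassis * ATOMS_PER_CHASSIS
    print(f"{dells} Dell vs {chassis} SeaMicro: "
          f"${savings / atoms:.2f} per Atom 'server'")
    # -> $52.94, $39.99, $33.19 -- matching the chart's figures
```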
  • cdillon - Monday, June 14, 2010 - link

Virtualization has already solved the problem they've attempted to solve, and I think virtualization does a better job at it. For example, I'm going to be implementing desktop virtualization at my workplace by next year, and I have been looking at blade systems to house the virtualization hosts. You can pack at least twice the computing power into the same rack space and use less than twice the wall power (using low-power Xeon or Opteron CPUs and low-power DIMMs). You actually have an opportunity to SAVE EVEN MORE power, because systems like VMware vSphere can pack idle virtual machines onto fewer hosts and can dynamically power entire hosts (blades) on and off as demand requires. Because the DM10000 does non-virtualized (other than I/O) 1:1 computing, there's no opportunity to power down completely idle CPU nodes yet keep the software "running" for when that odd network request comes in and needs attention. Virtualization does that.
  • PopcornMachine - Monday, June 14, 2010 - link

Couldn't you also use this box as a VMware server, and then save even more energy?
  • cdillon - Monday, June 14, 2010 - link

No, for several reasons, but mainly the licensing costs for vSphere for 512 CPU sockets. You wouldn't want to waste a fixed per-socket licensing cost on such slow, single-core CPUs. You want the best performance/dollar ratio, and that generally means as many cores as you can fit per socket. With free virtualization solutions that wouldn't really matter, but there are no free virtualization solutions that I'd want to use to manage 512 individual VM hosts. Remember, the DM10000 is NOT a single 512-core server. It's 512 independent 1-core servers. Not only that, but each single Atom core is going to have a hard time running more than one or two VM guests at a time anyway, so trying to virtualize an Atom seems rather pointless.
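To make the per-socket licensing argument concrete, here's a rough sketch. The $3,000 per-socket figure is a made-up placeholder for illustration, not actual VMware pricing:

```python
# Hypothetical per-socket license price -- placeholder, NOT real pricing.
LICENSE_PER_SOCKET = 3000  # $

def license_cost_per_core(sockets, cores_per_socket):
    """License dollars per usable CPU core under per-socket pricing."""
    return (sockets * LICENSE_PER_SOCKET) / (sockets * cores_per_socket)

# DM10000: 512 sockets, 1 core each.
print(license_cost_per_core(512, 1))  # -> 3000.0 ($ per core)

# Commodity 2010-era blade: 2 sockets, 4 cores each.
print(license_cost_per_core(2, 4))    # -> 750.0 ($ per core)
```

Whatever the real per-socket price, a single-core socket pays the full price for one core while a quad-core socket spreads it over four, which is the crux of the argument above.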

    Sorry if this is a double post, first time I tried to post it just hung for about 10 minutes until I gave up on it and tried again...
  • PopcornMachine - Tuesday, June 15, 2010 - link

In the OS section, the article mentions running Windows under a VM. So it sounds like it can handle VMware to me.

On the other hand, if the box doesn't let the cores work together in some type of cluster, which I assumed was the point of it, then I don't see the point of it. Just a bunch of cores too weak to power a netbook properly?
  • fr500 - Tuesday, June 15, 2010 - link

You've got 512 cores; why would you go for virtualization?
  • beginner99 - Tuesday, June 15, 2010 - link

Yeah, not to mention the other advantages of virtualization, especially when performing upgrades on these web applications.
Or single-threaded performance. It's really only useful if your apps never ever do anything moderately CPU-intensive.
  • PopcornMachine - Monday, June 14, 2010 - link

When Intel introduced the Atom, this was what I thought would be one of its main uses.

Cluster a bunch together. Low power, scalable, cheap. No-brainer. Makes more sense than using them in underpowered netbooks.

    How come it took someone this long to do it?

    It would be nice to see some benchmarks to be sure, though.
  • moozoo - Monday, June 14, 2010 - link

I'd love to see something like this made using a beefed-up Tegra 2 type chipset supporting OpenCL.
  • nofumble62 - Tuesday, June 15, 2010 - link

So now this server needs only 4 racks. Cheaper, more energy efficient.

I think they demonstrated something like 80 cores on a chip a couple of years ago. You bet it is coming.
  • joshua4000 - Tuesday, June 15, 2010 - link

I thought current AMD/Intel CPUs would save some power due to reduced clock speeds when unused. Wasn't there that core-parking thingy introduced with Win7 as well?
So why opt for a slower solution that will probably take a good amount longer to finish the same task?
  • piroroadkill - Tuesday, June 15, 2010 - link

Why would I want a bunch of shite servers? Initially, I thought they were unified into one big-ass server, or maybe a couple of servers.

But this, as I understand it, is effectively a bunch of netbooks in an expensive box. No.
  • LoneWolf15 - Tuesday, June 15, 2010 - link

    Silly question, but...

Say I run VMware ESX on this box. Is it possible to dedicate more than one server board to support a single virtual machine, or at least more than one server board to running the VMware OS itself? Or am I looking at it all wrong?

It seems like even if I could, VMware's limit on how many CPU sockets you can have for a given license might be a limiting factor.
  • cdillon - Tuesday, June 15, 2010 - link

ESX and other virtualization systems can cluster, but not in that way. Each Atom on the DM10000 would have to run its own copy of the virtualization software. That's not what you want to do. The DM10000 is really the *opposite* of virtualization. Instead of taking fewer, faster CPUs and splitting them up into smaller virtual CPUs, which is a very flexible way of doing things, they're using many more, slower CPUs in a completely inflexible way. I'm having a really hard time thinking of any actual benefits of this thing compared to virtualization on commodity servers.
  • cl1020 - Tuesday, June 15, 2010 - link

Same here. For the money you could easily build out 500 VMs on commodity blade servers running ESX and have a much more flexible solution.

    I just don't see any scenario where this would be a better solution.
  • mpschan - Tuesday, June 15, 2010 - link

    Many people here are talking about how virtualization is a better solution and how they can't envision a market for this.

    Personally, I think that this isn't a bad first design for a server of this type. Sure, it could be better. But I doubt they whipped up this idea yesterday. They probably spent a great deal of time designing this sucker from the ground up.

    The market that will find this useful might be small, but it also might be large enough to fund the next version. And who knows what that might be capable of. Better cross-"server" computation, better access to other CPUs' memory, maybe other CPU options, ECC, etc.
  • MGSsancho - Tuesday, June 15, 2010 - link

This would be an awesome server if you want to index things. 64 NICs on it? Yes, I know one server could easily handle thousands of connections, but this beast has a market. Maybe for server testing: use this box as a client node and have each Atom hold 25 concurrent connections; that is 12,800 connections to your server array. Great for testing, IMO. Or how about network testing: with 64 NICs you hook up 10 or so to each switch and test your setup. You can mix in 10 Gb Ethernet NICs as well. I personally see this device being amazing for testing.
  • ReaM - Thursday, June 17, 2010 - link

Atom has horrible performance per watt!

Using a simple mobile Core 2 Duo will save you more power!
I mean, read any Atom test; the mobile chips are far better. (As I recall, up to 9 times more efficient for the same performance.)

This product is a failure.
  • Oxford Guy - Friday, June 18, 2010 - link

    "The things that suck about the atom:

    1. double precision. Use a double, and the Atom will grind to a halt.
    2. division. Use rcp + mul instead.
    3. sqrt. Same as division.
    All of those produce unacceptable stalls, and annihilate your performance immediately. So don't use them!

Now, you'd imagine those are insurmountable, but you'd be wrong. If you use the Intel compiler, restrict yourself to float- or int-based SSE instructions only, avoid the list of things that kill performance, and make extreme use of OpenMP, they really can start punching above their weight. Sure, they'll never come close to an i7, but they aren't *that* bad if you tune your code carefully. In fact, the biggest problem I've found with my Atom 330 system is not the CPU itself, but good old-fashioned memory bandwidth. The memory bandwidth appears to be about half that of Core 2 (which makes sense, since it doesn't support dual-channel memory), and for most people that will cripple the performance long before the CPU runs out of grunt.

    The biggest problem with them right now is that they are so different architecturally from any other x86/x64 CPU that all apps need to be re-compiled with relevant compiler switches for them. Code optimised for a Core2 or i7 performs terribly on the atom."

    How do these drawbacks compare to the ARM, Tegra, and Apple chips?
  • yuhong - Wednesday, June 23, 2010 - link

Intel recently released desktop Atoms using DDR3 memory.
  • ClagMaster - Friday, August 06, 2010 - link

Wow, that's what I call massively parallel.

This would make a great Monte Carlo box for some really big models.
