Final Words

The base configuration of the SeaMicro SM10000 comes with 512 Atom servers, 1TB of DDR2 memory and no storage. SeaMicro charges $139,000 per SM10000 in this configuration.

It is easy to see how this technology might scale. Simply drop the number of cards used in the server and you have a lower-end configuration. SeaMicro needs to make enough revenue to support its business model, so I wouldn't expect any ultra-cheap Atom servers out of the company anytime soon. Scaling down appears to be a top priority, although scaling up is also a possibility.

The limitations are of course numerous. There's no official Windows support, so this is for Linux-only deployments. Each server is limited in performance not only by the CPU but also by the memory capacity (2GB). There's no ECC and no real RAID support (other than striping). It's very clear that the SM10000 can potentially serve one specific niche very well: a setup with tons of servers that have low CPU usage requirements.

The benefits are clear. If you’re running tons of servers that are mostly idle, the SM10000 could save you boatloads on power. By stripping out most of the components on the motherboard and simplifying each server down to something that consumes roughly 4W of power, SeaMicro does have a very valid solution to the power problem in data centers.

The power savings scale up tremendously. On a large scale, the cost of running servers boils down to hardware, then software, then power, in order of increasing cost. If you're running tens of thousands of servers, power consumption becomes a real problem. And this is where SeaMicro is most effective.
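To put rough numbers on the argument above, here's a minimal sketch of annual electricity cost per server. The ~4W-per-server figure comes from the article; the $0.10/kWh rate and the 300W conventional-server figure are purely illustrative assumptions, not SeaMicro's numbers:

```python
# Rough annual electricity cost per server.
# Assumptions: $0.10/kWh (hypothetical rate); ~4W per Atom server
# (from the article); 300W for a conventional server (illustrative).
HOURS_PER_YEAR = 24 * 365
RATE = 0.10  # $/kWh, assumed

def annual_cost(watts, n_servers=1):
    """Annual electricity cost in dollars for n servers drawing `watts` each."""
    return watts / 1000 * HOURS_PER_YEAR * RATE * n_servers

print(f"4W Atom server:         ${annual_cost(4):.2f}/yr")    # ~$3.50/yr
print(f"300W two-socket server: ${annual_cost(300):.2f}/yr")
print(f"10,000 Atom servers:    ${annual_cost(4, 10_000):,.0f}/yr")
```

Real facilities would also pay cooling and power-delivery overhead on top of this (PUE), so the gap at scale would be even larger than the raw wall-power math suggests.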

I suspect that anyone faced with rising data center power costs will be at least interested in what SeaMicro has to offer.

The obvious argument against the SeaMicro approach is virtualization. Consolidate multiple servers onto one beefy Xeon or Opteron machine and you start to use those processors at their peak efficiency by keeping them loaded most of the time.

It's difficult to tell based on a press presentation how well the SeaMicro solution fares against a beefy Xeon server running hundreds of VMs. I don't doubt that it is a lower power solution; the question is whether or not the performance characteristics work with your specific needs.

From where I stand, the SM10000 looks like the type of product that, if you could benefit from it, you've been waiting for. In other words, you've probably been asking for something like the SM10000 for quite a while already. SeaMicro is simply granting your wish.

53 Comments

  • datasegment - Monday, June 14, 2010 - link

    Looks to me like the savings go up, but the amount of power used steadily *increases* when CPU usage drops... I'm guessing the charts and titles are mismatched?
  • JarredWalton - Monday, June 14, 2010 - link

    No... it's a poorly made marketing chart IMO. Look at the number of systems listed:

    1000 Dell vs. 40 SeaMicro = $1084110 ($52.94 per "server")
    2000 Dell vs. 80 SeaMicro = $1638120 ($39.99 per "server")
    4000 Dell vs. 160 SeaMicro = $2718720 ($33.19 per "server")

    Based on that, the savings per "server" actually drops at lower workloads -- that's with 512 Atom chips in each SeaMicro server. It's also a bit of a laugh that 20480 Atom chips are "equal" to 8000 Nehalem cores (dual socket + quad-core, plus Hyper-Threading) under 100% load. Although they're probably using the slowest/cheapest E5502 CPU as a comparison point (dual-core, no HTT). Under 100% load, the Nehalem cores are far more useful than Atom.
  • cdillon - Monday, June 14, 2010 - link

    Virtualization has already solved the problem they've attempted to solve, and I think virtualization does a better job at it. For example, I'm going to be implementing desktop virtualization at my workplace by next year, and have been looking at blade systems to house the virtualization hosts. You can pack at least twice the computing power into the same rack space, and use less than twice the wall-power (using low-power Xeon or Opteron CPUs and low-power DIMMs). You actually have an opportunity to SAVE EVEN MORE power because systems like VMware vSphere can pack idle virtual machines onto fewer hosts and can dynamically power on and off entire hosts (blades) as demand requires it. Because the SM10000 does non-virtualized (other than I/O) 1:1 computing, there's no opportunity to power-down completely idle CPU nodes, yet keep the software "running" for when that odd network request comes in and needs attention. Virtualization does that.
  • PopcornMachine - Monday, June 14, 2010 - link

    Couldn't you also use this box as a VMware server, and then save even more energy?
  • cdillon - Monday, June 14, 2010 - link

    No, for several reasons, but mainly the licensing costs for vSphere for 512 CPU sockets. You wouldn't want to waste a fixed per-socket licensing cost on such slow, single-core CPUs. You want the best performance/dollar ratio, and that generally means as many cores as you can fit per socket. With free virtualization solutions that wouldn't really matter, but there are no free virtualization solutions that I'd want to manage 512 individual VM hosts with. Remember, the SM10000 is NOT a single 512-core server. It's 512 independent 1-core servers. Not only that, but each single Atom core is going to have a hard time running more than one or two VM guests at a time anyway, so trying to virtualize an Atom seems rather pointless.

    Sorry if this is a double post, first time I tried to post it just hung for about 10 minutes until I gave up on it and tried again...
  • PopcornMachine - Tuesday, June 15, 2010 - link

    In the OS section, the article mentions running Windows under a VM. So it sounds like it can handle VMware to me.

    On the other hand, if the box does not let the cores work together in some type of cluster, which I assumed was the point of it, then I don't see the point of it. Just a bunch of cores too weak to power a netbook properly?
  • fr500 - Tuesday, June 15, 2010 - link

    You got 512 cores, why would you go for virtualization?
  • beginner99 - Tuesday, June 15, 2010 - link

    yeah, not to mention other advantages of virtualisation, especially when performing upgrades on these web applications.
    Or single-threaded performance. Really only useful if your apps never ever do anything moderately CPU intensive.
  • PopcornMachine - Monday, June 14, 2010 - link

    When Intel introduced the Atom, this was what I thought would be one of its main uses.

    Cluster a bunch together. Low power, scalable, cheap. No brainer. Makes more sense than using them in underpowered netbooks.

    How come it took someone this long to do it?

    It would be nice to see some benchmarks to be sure, though.
  • moozoo - Monday, June 14, 2010 - link

    I'd love to see something like this made using a beefed-up Tegra 2 type chipset supporting OpenCL.
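As a side note on JarredWalton's numbers earlier in the thread: the per-"server" savings figures can be reproduced with a quick sketch, assuming (as the article states) 512 Atom servers per SM10000 chassis:

```python
# Reproducing the per-"server" savings quoted in JarredWalton's comment.
# Assumption: each SM10000 chassis counts as 512 Atom "servers".
ATOMS_PER_BOX = 512

cases = [
    # (Dell servers, SeaMicro boxes, total savings in $)
    (1000, 40, 1_084_110),
    (2000, 80, 1_638_120),
    (4000, 160, 2_718_720),
]

for dell, boxes, savings in cases:
    atom_servers = boxes * ATOMS_PER_BOX
    # These round to the quoted $52.94, $39.99, and $33.19 figures.
    print(f"{dell} Dell vs. {boxes} SeaMicro: "
          f"${savings / atom_servers:.4f} per Atom server")
```

The division confirms his reading: the savings per Atom "server" shrink as the deployment gets smaller, which is the opposite of what the marketing chart's framing implies.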
