Final Words

The base configuration of the SeaMicro SM10000 comes with 512 Atom servers, 1TB of DDR2 memory and no storage. SeaMicro charges $139,000 per SM10000 in this configuration.
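
As a quick sanity check on those numbers, the sketch below divides the quoted price and memory across the 512 nodes; the inputs are the figures above, and the per-server results are simple arithmetic rather than anything SeaMicro publishes.

#include <stdio.h>

int main(void)
{
    /* Base SM10000 configuration as quoted above */
    const int    servers    = 512;       /* Atom servers per system */
    const double memory_gb  = 1024.0;    /* 1TB of DDR2, in GB      */
    const double list_price = 139000.0;  /* USD, base configuration */

    /* Simple per-server division -- illustrative only */
    printf("Memory per server: %.1f GB\n", memory_gb / servers);   /* 2.0 GB */
    printf("Price per server:  $%.2f\n",   list_price / servers);  /* ~$271  */
    return 0;
}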

It is easy to see how this technology might scale. Simply drop the number of cards used in the server and you have a lower end configuration. SeaMicro needs to make enough revenue to support its business model, so I wouldn’t expect any ultra cheap Atom servers out of the company anytime soon. Scaling up appears to be a top priority, although scaling down is also a possibility.

The limitations are of course numerous. There’s no official Windows support, so this is a Linux-only deployment for now. Each server is limited in performance not only by its CPU but also by its memory capacity (2GB). There’s no ECC and no real RAID support (other than striping). It’s very clear that the SM10000 can potentially serve one specific niche very well: a setup with tons of servers that have low CPU usage requirements.

The benefits are clear. If you’re running tons of servers that are mostly idle, the SM10000 could save you boatloads on power. By stripping out most of the components on the motherboard and simplifying each server down to something that consumes roughly 4W of power, SeaMicro does have a very valid solution to the power problem in data centers.

The power savings scale up tremendously. At a large scale, the cost of running servers boils down to hardware, then software, then power, in order of increasing cost. If you’re running tens of thousands of servers, power consumption becomes a real problem, and this is where SeaMicro is most effective.
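
To put rough numbers on that claim, here is a minimal sketch that estimates the annual electricity bill for the compute nodes alone, using the ~4W-per-server figure mentioned above; the 10,000-server count and the $0.10/kWh rate are hypothetical assumptions of mine, not SeaMicro figures.

#include <stdio.h>

int main(void)
{
    const double watts_per_server = 4.0;     /* per-node figure from the article  */
    const int    servers          = 10000;   /* hypothetical fleet size (assumed) */
    const double usd_per_kwh      = 0.10;    /* hypothetical electricity rate     */
    const double hours_per_year   = 24.0 * 365.0;

    double kwh_per_year = watts_per_server * servers / 1000.0 * hours_per_year;
    printf("Compute-node energy: %.0f kWh/year\n", kwh_per_year);
    printf("Electricity cost:    $%.0f/year\n", kwh_per_year * usd_per_kwh);

    /* Note: shared chassis, fabric, PSU losses and cooling are not included
       here, and at data center scale cooling can roughly double the bill. */
    return 0;
}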

I suspect that anyone faced with rising data center power costs will be at least interested in what SeaMicro has to offer.

The obvious argument against the SeaMicro approach is virtualization: consolidate multiple servers onto one beefy Xeon or Opteron machine and you start to use those processors at their peak efficiency by keeping them loaded most of the time.

It’s difficult to tell based on a press presentation how well the SeaMicro solution fares against a beefy Xeon server running hundreds of VMs. I don’t doubt that it is a lower power solution; the question is whether or not the performance characteristics work for your specific needs.

From where I stand, the SM10000 looks like the type of product that, if you could benefit from it, you’ve been waiting for. In other words, if the SM10000 fits your workload, you’ve probably been asking for something like it for quite a while already. SeaMicro is simply granting your wish.

Comments

  • nofumble62 - Tuesday, June 15, 2010 - link

    So now this server needs only 4 racks. Cheaper, more energy efficient.

    I think they demonstrated something like 80 cores on a chip a couple of years ago. You bet it is coming.
  • joshua4000 - Tuesday, June 15, 2010 - link

    I thought current AMD/Intel CPUs would save some power by reducing clock speeds when unused. Wasn't there that core-parking thingy introduced with Win7 as well?
    So why opt for a slower solution which will probably take a good deal longer to finish the same task?
  • piroroadkill - Tuesday, June 15, 2010 - link

    Why would I want a bunch of shite servers? Initially, I thought they were just unified into one big ass server, or maybe a couple of servers.

    But this, as I understand it, is effectively a bunch of netbooks in an expensive box. No.
  • LoneWolf15 - Tuesday, June 15, 2010 - link

    Silly question, but...

    Say I run VMware ESX on this box. Is it possible to dedicate more than one server board to support a single virtual machine, or at least more than one server board to running the VMware OS itself? Or am I looking at it all wrong?

    It seems like even if I could, VMware's limit on how many CPU sockets you can have for a given license might be a limiting factor.
  • cdillon - Tuesday, June 15, 2010 - link

    ESX and other virtualization systems can cluster, but not in that way. Each Atom on the SM10000 would have to run its own copy of the virtualization software. That's not what you want to do. The SM10000 is really the *opposite* of virtualization. Instead of taking fewer, faster CPUs and splitting them up into smaller virtual CPUs, which is a very flexible way of doing things, they're using many more, slower CPUs in a completely inflexible way. I'm having a really hard time thinking of any actual benefits of this thing compared to virtualization on commodity servers.
  • cl1020 - Tuesday, June 15, 2010 - link

    Same here. For the money you could easily build out 500 VMs on commodity blade servers running ESX and have a much more flexible solution.

    I just don't see any scenario where this would be a better solution.
  • mpschan - Tuesday, June 15, 2010 - link

    Many people here are talking about how virtualization is a better solution and how they can't envision a market for this.

    Personally, I think that this isn't a bad first design for a server of this type. Sure, it could be better. But I doubt they whipped up this idea yesterday. They probably spent a great deal of time designing this sucker from the ground up.

    The market that will find this useful might be small, but it also might be large enough to fund the next version. And who knows what that might be capable of: better cross-"server" computation, better access to other CPUs' memory, maybe other CPU options, ECC, etc.
  • MGSsancho - Tuesday, June 15, 2010 - link

    This would be an awesome server if you want to index things. 64 NICs on it? Yes, I know one server could easily handle thousands of connections, but this beast has a market. Maybe for server testing: use this box as a client node and have each Atom hold 25 concurrent connections; that's 12,800 connections to your server array. Great for testing IMO. Or how about network testing: with 64 NICs you hook up 10 or so NICs to each switch and test your setup. You can mix it up with 10Gb Ethernet NICs as well. I personally see this device being amazing for testing.
  • ReaM - Thursday, June 17, 2010 - link

    Hi,

    Atom has horrible performance per watt!

    Using a simple mobile Core 2 Duo will save you more power!
    I mean, read any Atom test; the mobile chips are far better (as far as I can remember, up to 9 times more efficient for the same performance).

    This product is a failure.
  • Oxford Guy - Friday, June 18, 2010 - link

    "The things that suck about the atom:

    1. double precision. Use a double, and the Atom will grind to a halt.
    2. division. Use rcp + mul instead.
    3. sqrt. Same as division.
    All of those produce unacceptable stalls, and annihilate your performance immediately. So don't use them!

    Now, you'd imagine those are insurmountable, but you'd be wrong. If you use the Intel compiler, restrict yourself to float- or int-based SSE instructions only, avoid the list of things that kill performance, and make extreme use of OpenMP, they really can start punching above their weight. Sure, they'll never come close to an i7, but they aren't *that* bad if you tune your code carefully. In fact, the biggest problem I've found with my Atom 330 system is not the CPU itself, but good old fashioned memory bandwidth. The memory bandwidth appears to be about half that of a Core 2 (which makes sense since it doesn't support dual channel memory), and for most people that will cripple performance long before the CPU runs out of grunt.

    The biggest problem with them right now is that they are so different architecturally from any other x86/x64 CPU that all apps need to be re-compiled with the relevant compiler switches for them. Code optimised for a Core 2 or i7 performs terribly on the Atom."

    How do these drawbacks compare to the ARM, Tegra, and Apple chips?
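
As a concrete illustration of the "rcp + mul instead of division" advice in the comment quoted above, here is a minimal C sketch using SSE intrinsics and an OpenMP loop. It is my own example rather than the commenter's code; the single Newton-Raphson refinement step is optional, depending on how much precision you can afford to give up.

#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics: _mm_rcp_ps, _mm_mul_ps, ... */

/* out[i] = a[i] / b[i] computed with the fast reciprocal approximation
   (rcp + mul) instead of the much slower divps. One Newton-Raphson step
   recovers most of the precision lost by rcpps. n must be a multiple of 4. */
static void fast_divide(const float *a, const float *b, float *out, int n)
{
    #pragma omp parallel for
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);
        __m128 vb = _mm_loadu_ps(&b[i]);
        __m128 r  = _mm_rcp_ps(vb);                      /* ~12-bit 1/b */
        /* Newton-Raphson refinement: r = r * (2 - b*r) */
        r = _mm_mul_ps(r, _mm_sub_ps(_mm_set1_ps(2.0f), _mm_mul_ps(vb, r)));
        _mm_storeu_ps(&out[i], _mm_mul_ps(va, r));       /* a * (1/b)   */
    }
}

int main(void)
{
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {2, 2, 2, 2, 4, 4, 4, 4};
    float out[8];

    fast_divide(a, b, out, 8);
    printf("%f ... %f\n", out[0], out[7]);   /* ~0.5 and ~2.0 */
    return 0;
}

Compile with something like gcc -O2 -fopenmp; without -fopenmp the pragma is simply ignored and the loop runs serially.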
