Pricing

So how much does a Boston Viridis server cost? The official price for one Boston Viridis with 24 nodes at 1.4GHz and 96GB of RAM is $20,000. That is simply very expensive. A Dell R720 with dual 10Gbit Ethernet, 96GB of RAM, and two Xeon E5-2650L processors is in the $8000 range; you could easily buy two Dell R720s and double your performance. The higher power bill of the Xeon E5 servers is in that case hardly an issue, unless you are severely power constrained. However, these systems are targeted at larger deployments.

Buy a whole rack of them and the price comes down to $352 per server node, or about $8500 per 24-node server. We have some experience with medium quantity sales, and our best guess is that you typically get a 10 to 20% discount when you buy 20 of them. That would put the Xeon E5 server at around $6500-$7200 and the Boston Viridis at around $8500. Considering that you get an integrated switch (5x 10Gbit) and a lower power bill with the Boston Viridis, the difference is not that large anymore.
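To make the comparison above concrete, here is a quick back-of-the-envelope sketch of the numbers. The 15% volume discount is an assumption picked from the middle of the 10-20% range we estimated; the list prices are the ones quoted above.

```python
# Rough per-server cost comparison using the figures from the article.
# The 15% discount is an assumed midpoint of the estimated 10-20% range.
VIRIDIS_NODE_PRICE = 352        # $ per server node at rack quantities
NODES_PER_SERVER = 24           # nodes in one Boston Viridis chassis
XEON_LIST_PRICE = 8000          # $ for a Dell R720 as configured above
VOLUME_DISCOUNT = 0.15          # assumed medium-quantity discount

viridis_per_server = VIRIDIS_NODE_PRICE * NODES_PER_SERVER
xeon_discounted = XEON_LIST_PRICE * (1 - VOLUME_DISCOUNT)

print(f"Boston Viridis (24 nodes): ${viridis_per_server}")      # $8448
print(f"Xeon E5 server, discounted: ${xeon_discounted:.0f}")    # $6800
```

At those assumptions the gap is roughly $1600 per server, before factoring in the integrated switch and the power savings.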

Calxeda's Roadmap and Our Opinion

Let's be clear: most applications still run better on the Xeon E5. Our CPU benchmarks clearly indicate that any application that accesses memory frequently or that needs high per-thread integer performance will run better on the Xeon E5. Compiling and installing software simply feels so much faster on the Xeon E5 that there is no need to benchmark it.

There's more: if your performance requirements are higher than what a quad-core Cortex-A9 can deliver, the Xeon E5 is a lot more flexible and a better choice in most cases. Scaling up is, after all, a lot easier than using load balancers and other complex software or hardware to scale out. Also, the management software of the Boston Viridis does the job, but Dell's DRAC, HP's iLO, and Supermicro's IPMI are more user friendly.

Calxeda is aware of all this, as they label their first "highbank" server architecture with the ECX-1000 SoC as targeted at the "early adopter". That is why we deliberately tested a scenario relevant to those potential early adopters: a cluster of web servers that is relatively network intensive, as it serves a lot of media files. This is one of the better scenarios for Calxeda, but not the best: we imagine that a streaming server or storage server would be an even better fit. The latter in particular is catching on; the storage version of the Boston Viridis is selling well.

So on the one hand, no, the current Calxeda servers are no Intel Xeon killers (yet). However, we feel that Calxeda's ECX-1000 server node is revolutionary technology. When we ran 16 VMs (instead of 24), the dual low-power Xeon was capable of the same performance per VM as the Calxeda server nodes. That this 24-node system could offer 50% more throughput at 10% lower power than one of the best Xeon machines available honestly surprised us. And 8W at the wall per server node, exactly what Calxeda claimed, is nothing short of remarkable, because it means that the 48-node machine, which is also available, is even more efficient.
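The headline numbers above can be folded into a single performance-per-watt ratio, a quick sanity check rather than a new measurement:

```python
# Combining the two measured deltas into one efficiency figure.
throughput_ratio = 1.50   # Viridis delivered 50% more throughput
power_ratio = 0.90        # at 10% lower power at the wall

perf_per_watt_gain = throughput_ratio / power_ratio
print(f"Viridis performance-per-watt advantage: {perf_per_watt_gain:.2f}x")
```

In other words, in this (admittedly favorable) scenario the Calxeda box delivers roughly two thirds more work per watt than the low-power Xeon machine.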

To put that 8W number in perspective: the current Intel Atoms that offer similar performance need that kind of power for the SoC alone, and they are baked with Intel's superior 32nm process technology. The next generation of ARM servers is already on the way and will probably hit the market in the third quarter of this year. The "Midway" SoC is based on a 28nm (TSMC) Cortex-A15 chip. A 28nm Cortex-A15 offers 50% higher single-threaded integer performance at slightly higher power levels and can address up to 16GB of RAM. With that, it's safe to conclude that the next Calxeda server will be a good match for a much larger range of applications: memcached, larger web servers, and midrange database servers, for example. By then, virtualization will be available with KVM and Xen, but we think virtualization on ARM will only take off when the Cortex-A57 with its 64-bit ARMv8 ISA hits the market in 2014.

Right now, the limited performance of the individual server nodes makes the Boston Viridis attractive mainly for web applications with lower CPU demands in a power constrained data center. But the extremely low energy consumption and the rapidly increasing performance of the ARM cores show great potential for Calxeda's technology. Short term this is a niche market, but in another year or two this approach could easily encroach on Intel's higher-end markets.

Comments


  • kfreund - Friday, March 15, 2013 - link

    Keep in mind that this is VERY early in the life cycle, and therefore costs are artificially high due to low volumes. Ramp up the volumes, and the prices will come WAY down.
  • wsw1982 - Wednesday, April 3, 2013 - link

    Yes, IF they have high volume. But even if there is high volume, it's shared between different ARM suppliers and, needless to say, the Atom. How much can it be for one company?

    But the question is where does ARM get the volume? Less performance, comparable power consumption, less performance/watt (outside extreme biased cases like this one), less flexibility, less software support (stability), vendor specific (you can build a normal server, but can you build a massive parallel cluster?), and, don't forget, more (much more) expensive. Which company will sacrifice itself to beef up the market volume of the ARM server?
  • Sputnik_b - Thursday, March 14, 2013 - link

    Hi Johan,
    Nice job benchmarking and analyzing the results. Our group at EPFL has recently done some work aimed at understanding the demands that scale-out workloads, such as web serving, place on processor architectures. Our findings very much agree with your benchmark conclusions for the Xeon/Calxeda pair. However, a key result of our work was that many-core processors (with dozens of simple cores per chip) are the sweet spot with regard to performance per TCO dollar. I encourage you to take a look at our work -- http://parsa.epfl.ch/~grot/pubs/SOP-TCO_IEEEMicro....
    Please consider benchmarking a Tilera system to round out your evaluation.
    Best regards!
  • Sputnik_b - Thursday, March 14, 2013 - link

    Sorry, bad URL in the post above. This should work: http://parsa.epfl.ch/~grot/pubs/SOP-TCO_IEEEMicro....
  • aryonoco - Friday, March 15, 2013 - link

    LWN.net has a very interesting write-up of a talk given by Facebook's Director of Capacity Engineering & Analysis on the future of ARM servers and how they see ARM servers fitting into their operation. I think it gives valuable insight on this topic.

    http://lwn.net/SubscriberLink/542518/bb5d5d3498359... (free link)
  • phoenix_rizzen - Friday, March 15, 2013 - link

    ARM already has hardware virtualisation extensions. Linux-KVM has already been ported over to support it.
  • Andys - Saturday, March 16, 2013 - link

    Great article, finally good to see some realistic benchmarks run on the new ARM platform.

    But I feel that you screwed up in one regard: you should have tested the top Xeon CPU as well, the E5-2690.

    As you know from your own previous articles, Intel's top CPUs are also the most power efficient under full load, and the price would still be cheaper than the fully loaded Calxeda box anyway.
  • an3000 - Monday, March 25, 2013 - link

    It is a test using the wrong software stack. Yes, I am not afraid to say that! Apache will never be used on such ARM servers. They are an exact match for Memcached or Nginx or another set-get type of service, like static data serving. Using Apache or the LAMP stack is too favorable for the Xeon.
    What I would like to see is: a Xeon server with max RAM, non-virtualized, running 4-8 (similar to core count) instances of Memcached/Nginx/lighttpd vs. a cluster of ARM cores doing the same light task. Measure performance and power usage.
  • wsw1982 - Wednesday, April 3, 2013 - link

    My suggestion will be let them run one hard-disk to one hard-disk copy and measure the power usage:)
