Intel's benchmarking team in Portland did its best to produce some really interesting benchmarks at the last server workshop in San Francisco, but many of them did not run well on the ECX-1000 because of its very limited 4 GB RAM capacity. The most interesting benchmark can be found below: a front-end web performance benchmark with heavy network traffic.

In this benchmark, Intel finally admits that the S1260 is nothing to be excited about. The Intel findings are very similar to ours: the ECX-1000 beats the Atom S1260 by a wide margin in typical server workloads. So where will the ECX-2000 end up? We cannot be sure, but we can roughly estimate that it will land somewhere between 3 and 4 times faster than the Atom S1260. That is not enough to beat the Atom C2750, but that is after all a 20W TDP chip and the top SKU. Digging deeper into the Intel docs, we find that the C2730 at 1.7 GHz (12 W TDP) consumes about 20W for the whole server node (16 GB RAM and a 250 GB hard disk) and the C2750 about 28W when running SPECint_rate_2006. The hard disk will have consumed very little, since the SPECint_rate_2006 benchmark runs almost entirely out of memory.

The ECX-2000 at 1.8 GHz will probably need roughly 12-16W per server node. So our first rough estimates tell us that the C2730 is out of the (performance) reach of the ECX-2000, and that Calxeda's claim that it matches the C2530 is right on the mark.

However, the story does not end there. The total power consumption of the ECX-1000 based Boston Viridis server we tested was remarkably low: the very efficient network fabric ensured there was little "power overhead" (PHYs, backplane, and so on). This Fleet Fabric has been improved even further, so there is a good chance that ECX-2000 based servers will offer very competitive performance per watt, although the Atom C2730 has an edge when the application benefits from more threads. But when that is not the case, i.e. scaling is mediocre beyond 4 threads, the tables might turn. In any case, there is a very good chance that the ECX-2000 will be very competitive with the 4-core Atoms, to say the least.

There is indeed a reason why HP will use the Calxeda SoC in its new Moonshot server cluster in 2014. The picture above shows such a Moonshot module. We felt that the Atom S1260 SoC was a bad match for the HP Moonshot, but "HP's Moonshot 2.0" will be an entirely different story. And for those of us with less cash to burn, we are looking forward to what Penguin Computing and Boston will make of their ECX-2000 based servers.

Next stop: the 64-bit SoC code-named “Sarita,” based upon the roughly 50% faster Cortex-A57 core. The SoC is pin-compatible with the ECX-1000 and the new ECX-2000, which reduces development time and expense for the ODMs. But right now, we can look forward to some interesting microserver comparisons in Q1 2014...


Intel Atom C2000 versus Calxeda ECX-2000

  • JohanAnandtech - Wednesday, October 30, 2013 - link

    Everything that Linaro (dev organization) makes available on Xen, KVM, Linux
  • Tanclearas - Wednesday, October 30, 2013 - link

    How is the ECX-2000 "limited to one" DIMM slot, but have a 128-bit memory controller?
  • texadactyl - Saturday, November 2, 2013 - link

The C2550 and C2750 can support up to 64GB RAM (not part of this article). The C2530 and C2730 (depicted in this article) are limited to 32GB RAM. Source: http://ark.intel.com/compare/77977,77982,77980,779... .
  • TommyVolt - Thursday, December 19, 2013 - link

    Yes, the new 16 Gigabyte DDR3 DIMMs to upgrade to 64GB RAM with just 4 sockets are available from the company I'M Intelligent Memory, website: www.intelligentmemory.com
    These memory modules come as UDIMMs (240 pin) or SO-DIMMs (204 pin), with or without ECC. As they are made of just 16 pieces of 8 Gigabit DDR3 chips, the modules are dual-rank and still total 16 GByte. No special form factor; everything looks standard.
    Intel recently released a BIOS update for their C2000 Avoton series to support those Intelligent Memory modules with 16GB capacity.
    But the modules might also work in Calxeda and other platforms, because:
    When I look at the JEDEC DDR3 documentation, an 8 Gigabit DDR3 chip uses the same number of address lines as a 4 Gigabit chip (A0 to A15). This means no hardware modification is required to run the modules. As a logical consequence, such 16GB modules should work everywhere, on all types of CPUs, as long as the BIOS is programmed correctly to read the module's ID and write the values for memory initialization into the memory controller inside the SoC.
  • brentpresley - Saturday, November 2, 2013 - link

    For those of us who actually run hosting companies and datacenters, these servers are worthless, for several reasons.

    1) x86 is still king in the hosting arena. I would never risk Enterprise customers with SLAs of 99.999% uptime on anything else.
    2) Underpowered. A nice Ivy Bridge system may pull more power, but it will handle proportionally more traffic.
    3) Licensing problems - most Enterprise-grade Cloud OSes are licensed by the socket, not the core. Therefore your best bang for the buck is something like an Ivy Bridge E or a high-end AMD Opteron. Everything else is just a waste of licensing $$$.

    Get me Haswell in an HP Moonshot form factor with mSATA SSDs, and then I might be interested. Until then, these are just toys.
