Calxeda feels that the ECX-2000 at 1.8 GHz is a competitor of the C2530 at 1.7 GHz (2 GHz Turbo, 4 cores, 9W TDP). Looking at Intel's SKUs, we noticed that the C2730 at 1.7 GHz (2 GHz Turbo, 8 cores, 12W TDP) might also be a close competitor. So we list the ECX-1000 (the previous Calxeda SoC), the ECX-2000, and the two closest Intel Atom competitors. The "integrated" rows are a bit short on details, but discussing the different levels of I/O integration is outside the scope of this article; we'll cover that in a later article.

| CPU | Atom S1260 | ECX-1000 | Atom C2530 | ECX-2000 | Atom C2730 |
|-----|------------|----------|------------|----------|------------|
| Launch Date | Q3 2012 | Q2 2012 | Q3 2013 | Q4 2013 | Q3 2013 |
| Process Technology | 32 nm | 40 nm | 22 nm trigate | 28 nm | 22 nm trigate |
| Cores / µ-Architecture | 2 + 2 logical (SMT), Saltwell | 4 physical, Cortex-A9 | 4 physical, Silvermont | 4 physical, Cortex-A15 | 8 physical, Silvermont |
| Clockspeed | 2 GHz | 1.4 GHz | 1.7/2 GHz | 1.8 GHz | 1.7/2 GHz |
| L1 Cache (per core) / L2 Cache | 24/32 KB D/I, 2x 0.5 MB | 32/32 KB D/I, 4 MB | 24/32 KB D/I, 2x 1 MB | 32/32 KB D/I, 4 MB | 24/32 KB D/I, 4x 1 MB |
| Memory Controller | Single channel, 64-bit | 64-bit | Dual channel, 64-bit | 128-bit | Dual channel, 64-bit |
| Fastest Supported RAM | DDR3 at 1.33 GT/s | DDR3 at 1.33 GT/s | DDR3 at 1.6 GT/s | DDR3 at 1.6 GT/s | DDR3 at 1.6 GT/s |
| Addressing | 64-bit | 32-bit | 64-bit | 32-bit with LPAE | 64-bit |
| Max RAM | 8 GB | 4 GB | 64 GB | 16 GB | 64 GB |
| Integrated PCIe | Yes | Yes | Yes | Yes | Yes |
| Integrated Network | No | Yes | Yes | Yes | Yes |
| Integrated SATA | No | Yes | Yes | Yes | Yes |
| Typical Server Node Power Usage | 20W (*) | +/- 8W | 15-18W? (**) | 12-16W? (**) | +/- 20W (*) |

(*) Based upon Intel's "22 nm Intel Atom server SoCs Performance Overview"
(**) Rough estimates

Although the Atom S1260 had a TDP of only 8.5W, its power numbers are simply not comparable to those of the other SoCs, as the S1260 needed additional support chips to perform the same tasks. In practice this means that a server node based on the S1260 needs just as much power as one based on the 12W TDP Atom C2730.

The performance per watt of the ECX-2000 SoC has probably not made a giant leap over its predecessor, but overall server efficiency should improve significantly, as Calxeda also implemented Energy Efficient Ethernet (EEE) and other tricks to reduce the energy consumption of the "Fleet Fabric". And the point is, of course, that the number of applications where the performance per node is "good enough" has increased significantly.

The Atom C2000 can support up to 64 GB, whereas the ECX-2000 is limited to 16 GB. The trade-off is that the C2000 uses up to four DIMM slots, whereas the ECX-2000 is limited to one. Obviously, more DIMM slots offer more flexibility, but they also make the server node larger and consume more energy.
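To make the capacity arithmetic concrete, here is a minimal Python sketch; the slot counts come from the paragraph above, while the 16 GB-per-DIMM ceiling is an assumption based on the largest DDR3 modules discussed in the comments below:

    # Max RAM is simply DIMM slots x largest supported DIMM capacity.
    # Assumption: 16 GB is the largest DDR3 UDIMM available at the time.
    DIMM_CAPACITY_GB = 16

    platforms = {
        "Atom C2000": 4,  # up to four DIMM slots
        "ECX-2000": 1,    # a single DIMM slot
    }

    for name, slots in platforms.items():
        print(f"{name}: {slots} slot(s) -> up to {slots * DIMM_CAPACITY_GB} GB")

This reproduces the 64 GB and 16 GB ceilings from the table above.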

 

Comments

  • JohanAnandtech - Wednesday, October 30, 2013

    Everything that Linaro (dev organization) makes available on Xen, KVM, Linux
  • Tanclearas - Wednesday, October 30, 2013

    How is the ECX-2000 "limited to one" DIMM slot, yet has a 128-bit memory controller?
  • texadactyl - Saturday, November 2, 2013

    The C2550 and C2750 can support up to 64GB RAM (not part of this article). The C2530 and C2730 (depicted in this article) are limited to 32GB RAM. Source: http://ark.intel.com/compare/77977,77982,77980,779... .
  • TommyVolt - Thursday, December 19, 2013

    Yes, the new 16 Gigabyte DDR3 DIMMs to upgrade to 64GB RAM with just 4 sockets are available from the company I'M Intelligent Memory, website: www.intelligentmemory.com
    These memory modules come as UDIMMs (240 pin) or SO-DIMMs (204 pin), with or without ECC. As they are made of just 16 pieces of 8 Gigabit DDR3 chips, the modules are dual-rank and still reach a total of 16 GByte. No special form factor, everything looks standard.
    Intel recently released a BIOS update for their C2000 Avoton series to support those Intelligent Memory modules with 16GB capacity.
    But the modules might also work in Calxeda and other platforms, because:
    When I look at the JEDEC DDR3 documentation, an 8 Gigabit DDR3 chip uses the same number of address lines as a 4 Gigabit chip (A0 to A15). This means there is no hardware modification required to run the modules. As a logical consequence, such 16GB modules should work everywhere, in all types of CPUs, as long as the BIOS is programmed correctly to read the module's ID and set the values for the memory initialization in the memory controller inside the SoC. (A quick check of the capacity arithmetic follows below.)
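    A quick sanity check of that capacity arithmetic, as a minimal Python sketch; the chip count and density are taken from the comment above, nothing else is assumed:

        # 16 pieces of 8 Gigabit DDR3 chips per module, in two ranks.
        chips_per_module = 16
        chip_density_gbit = 8

        module_gbit = chips_per_module * chip_density_gbit  # 128 Gbit
        module_gbyte = module_gbit // 8                     # 16 GB per DIMM
        print(f"Per module: {module_gbyte} GB")

        # Four such DIMMs reach the 64GB upgrade mentioned above.
        print(f"Four modules: {4 * module_gbyte} GB")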
  • brentpresley - Saturday, November 2, 2013

    For those of us who actually run hosting companies and datacenters, these servers are worthless, for several reasons.

    1) x86 is still king in the hosting arena. I would never risk Enterprise customers with SLAs of 99.999% uptime on anything else.
    2) Underpowered. A nice Ivy Bridge system may pull more power, but it will handle proportionally more traffic.
    3) Licensing problems - most Enterprise-grade cloud OSes are licensed by the socket, not the core. Therefore your best bang for the buck is something like an Ivy Bridge E or a high-end AMD Opteron. Everything else is just a waste of licensing $$$.

    Get me Haswell in an HP Moonshot form factor with mSATA SSDs, and then I might be interested. Until then, these are just toys.
