The demise of innovator Calxeda and the excellent performance per watt of the new Intel Avoton server were certainly not good omens for the ARM server market. However, there are still quite a few vendors that are aspiring to break into the micro server market. 

AMD seems to have the best position, with by far the most building blocks and experience in the server world. The 64-bit, 8-core, ARMv8-based Opteron A1100 should see the light of day in the second half of 2014. Broadcom is also well placed and has announced that it will produce a 3 GHz, 16 nm quad-core ARMv8 server CPU. ARM SoC market leader Qualcomm has shown some interest too, but without any real product yet. Capable outsiders are Cavium with "Project Thunder" and AppliedMicro with the X-Gene family.

But unless any of the players mentioned above can grab an Intel-like market share of the micro server market, the fact remains that supporting all ARM servers is currently a very time-consuming undertaking. Different interrupt controllers, different implementations of FP units... at this point in time, the ARM server platform simply does not exist. It is a mix of very different hardware platforms, each running its own heavily customized OS kernel.

So the first hurdle to clear is developing a platform standard. And that is exactly what ARM is announcing today: a platform standard for ARMv8-A based (64-bit) servers, known as the ARM ‘Server Base System Architecture’ (SBSA) specification.

The new specification is supported by a very broad range of companies: software companies such as Canonical, Citrix, Linaro, Microsoft, Red Hat and SUSE; OEMs such as Dell and HP; and the most important component vendors active in this field, such as AMD, Cavium, Applied Micro and Texas Instruments. In fact, the Opteron A1100 that was just announced adheres to this new spec.

All those partners of course provided supportive comments, but the best one came from Frank Frankovsky, president and chairman of the Open Compute Project Foundation.

"These standardization efforts will help speed adoption of ARM in the datacenter by providing consumers and software developers with the consistency and predictability they require, and by helping increase the pace of innovation in ARM technologies by eliminating gratuitous differentiation in areas like device enumeration and boot process." 

The primary goal is to ensure enough standard system architecture to enable a single OS image to run on all hardware compliant with this specification. That may sound like a fairly simple thing, but in reality it is an extremely important step toward solidifying the ARM ecosystem and making it a viable alternative in the server space.

A few examples from the standard (a short sketch of how software sees some of these requirements follows the list):

  • The base server system shall implement a GICv2 interrupt controller.
  • As a result, the maximum number of CPUs in the system is 8 (GICv2 supports at most eight CPU interfaces).
  • All CPUs must have the Advanced SIMD extensions and the cryptography extensions.
  • The system must use the generic timers specified by ARM.
  • CPUs must implement the described power state semantics.
  • USB 2.0 controllers must conform to EHCI 1.1, USB 3.0 controllers to XHCI 1.0, and SATA controllers to AHCI v1.3.
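
To make this concrete, here is a minimal sketch (our own illustration, not part of the specification) of what those guarantees look like from software on an AArch64 Linux system: the mandated Advanced SIMD and cryptography extensions are reported through the hwcap auxiliary vector, and the mandated generic timer exposes its frequency through the CNTFRQ_EL0 register, which is readable from user space.

#include <stdio.h>
#include <sys/auxv.h>   /* getauxval(), AT_HWCAP */
#include <asm/hwcap.h>  /* HWCAP_ASIMD, HWCAP_AES, HWCAP_SHA1, HWCAP_SHA2 */

/* Read the frequency of the ARM generic timer from CNTFRQ_EL0. */
static unsigned long generic_timer_freq(void)
{
    unsigned long freq;
    __asm__ volatile("mrs %0, cntfrq_el0" : "=r"(freq));
    return freq;
}

int main(void)
{
    unsigned long hwcap = getauxval(AT_HWCAP);

    /* On an SBSA-compliant system, all of these should report "yes". */
    printf("Advanced SIMD: %s\n", (hwcap & HWCAP_ASIMD) ? "yes" : "no");
    printf("AES:           %s\n", (hwcap & HWCAP_AES)   ? "yes" : "no");
    printf("SHA1 + SHA2:   %s\n",
           ((hwcap & HWCAP_SHA1) && (hwcap & HWCAP_SHA2)) ? "yes" : "no");
    printf("Generic timer: %lu Hz\n", generic_timer_freq());
    return 0;
}

The point of the specification is that this same binary, and far more importantly the same OS kernel underneath it, gives the same answers on any compliant ARMv8 server instead of requiring per-SoC patches.
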
We can only applaud these efforts: they will eliminate a lot of useless time investment, lower costs, and help make ARM partners a real option in servers. With the expected launch of many ARM Cortex A57 based server SoCs this year, it looks like 2014 could be a breakthrough year for ARM servers. We look forward to doing another micro server review.

15 Comments

  • BMNify - Friday, January 31, 2014 - link

    If you assume that they are using existing generic ARM IP, then the interconnect isn't really a big problem: there is what's now called a NoC (Network on Chip). The current ARM® CoreLink™ CCN-508 Cache Coherent Network scales up to 32 processor cores and can deliver up to 1.6 terabits per second of sustained usable system bandwidth, with a peak bandwidth of 2 terabits per second (256 GigaBytes/s) at processor speeds. But it's probable that AMD is using the older CCN-504, which scales to 16 processor cores and one terabit of usable system bandwidth per second.

    Of course, if you actually look a little deeper into current and near-term interconnects, rather than relying on the high-profile tech sites that don't seem to look far beyond the usual PR suspects...

    ...then you find that the "Si photonics" and packaging roadmap is to see Si photonics (at 40 Gbit/s per interconnect link) commoditized, as in easy to obtain on chip/package, perhaps by 2016/17; the foundries and volume packagers are apparently all ready now, with 2.5D and even 3D ready to go.

    Si Photonics: 3D ASIP’s Pre-game Show
    "At this week’s 3D Architectures for Semiconductor Integration and Packaging (3D ASIP), which took place Wednesday, December 11, 2013 at the Hyatt Regency San Francisco Airport, it became clear that High Bandwidth Memory (HBM) is more likely to be the application that brings 3D TSVs to volume manufacturing. However, according to Dim-Lee Kwong, executive director of IME in Singapore, Si Photonics holds the key for integration of memory and logic, and enables wafer level integration of multiple components. While Si photonics and through-silicon interposers (TSI) each individually provide advantages over scaling and monolithic SoCs, Kwong says together they can move “Moore’s Mountain” as we approach CMOS scaling limits."
    “TSI provides a nice platform for interconnecting photonic ICs (PIC), logic, memory and CMOS as close as possible on the interposer,” said Kwong.
    http://www.3dincites.com/wp-content/uploads/IMESIP...
    http://www.3dincites.com/wp-content/uploads/STmicr...

    so the mass-market fabless providers only need to actually design their own logic
  • MrSpadge - Thursday, January 30, 2014 - link

    It's OK for the target market of many small servers. It wouldn't work for HPC, but there you'll typically want either GPUs (or similar designs) for embarrassingly parallel tasks, or fewer fat cores and unified memory for other tasks due to limited scalability. I don't see much space for solutions in between these extremes that isn't already covered by x86.

    What I wonder, though, is whether the whole concept of micro servers really makes sense from a load balancing point of view. If I put many more energy-efficient cores into a big box and run many more VMs on them, wouldn't that be better than many small independent servers? It would still be quite energy-efficient, but the CPUs and memory could be shared among the entire system. Well, the high-performance interconnects needed in this case must not consume too much power... but I don't think that would be troublesome at 8 cores.
  • duploxxx - Friday, January 31, 2014 - link

    While this article mentions the standardization of server specs, that doesn't mean the ARM CPU can't be used for anything else...

    HP Moonshot is the example you refer to, where virtualization is not needed (yet) but specific target markets are already addressed. Other types and designs will probably follow soon. The IT market is changing quite a bit; the default server layout will change even more soon.
  • BMNify - Friday, January 31, 2014 - link

    That's true. Of course, the Calxeda and Moonshot 32-bit ARM prototypes were fine as a proof of concept, but the layout of their first products, those butt-ugly sleds, made use of the available space in a way that was nothing but shameful: tons of useless metal and wasted space in a given U configuration, restricting airflow, etc...

    Calxeda's NoC was apparently good, but they should have sacked the sled designers. Using small, self-contained COMs (Computer-On-Modules) and providing a simple PCB plastic slide mount at a minimal distance between PCB cards to direct airflow would have been far more rewarding and cost less to produce, as you could then re-purpose these separate COMs when it became time to upgrade... It's a shame really; let's hope someone in the server space learns that lesson and provides what people actually want to buy this time around.

    On a side note, it's funny that Microsoft has now also joined this ARM initiative :)
    http://www.computerworld.com/s/article/9245854/Mic...
  • lwatcdr - Monday, February 10, 2014 - link

    I do not think so. This should really work well in the SAN and NAS space, as well as for web servers.
