This week, Intel is hosting a datacenter event in San Francisco. The basic message is that the datacenter should become much more flexible and software defined: when a new software service is launched, storage, network, and compute should all adapt in a matter of minutes instead of weeks.

One example is networking. Configuring the network for a new service takes a lot of time and manual intervention: think of router access lists, gateway/firewall configurations, and so on. It requires highly specialized people: the Netfilter expert does not necessarily master the intricacies of Cisco's IOS. And even if you have all the skills it takes to administer a network, logging in to all those different devices still takes a lot of time.

Intel wants proprietary network devices to be replaced by software running on top of its Xeons. That should allow you to administer all your network devices from one centralized controller. The same approach should be applied to storage, replacing the proprietary SANs.
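The "one centralized controller" idea can be illustrated with a short sketch. Note that the device names and the `render_rules`/`provision` helpers below are entirely hypothetical; real SDN controllers (OpenFlow-based and otherwise) are far more involved:

```python
# Toy illustration of software defined networking: one controller holds
# the desired policy and pushes it to every (software) network device,
# instead of an admin logging in to each box by hand.

def render_rules(policy):
    """Translate an abstract service policy into device-level rules."""
    return [f"allow {svc} on port {port}" for svc, port in policy.items()]

def provision(devices, policy):
    """Push the same rendered rules to every device in one pass."""
    rules = render_rules(policy)
    return {dev: rules for dev in devices}

# Launching a new service means editing one policy, not N devices.
policy = {"web": 443, "ssh": 22}
devices = ["edge-router", "firewall", "tor-switch"]
config = provision(devices, policy)
print(config["firewall"])
```

The point of the sketch is the shape of the workflow: the policy lives in one place, and the per-device configuration is derived from it, which is what turns weeks of manual work into minutes.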

If this "software defined datacenter" sounds very familiar, you have been paying attention to the professional IT market: it is also what VMware, HP, and even Cisco have been preaching. We all know that, at this point in time, it is nothing more than a holy grail, a mysterious and hard-to-reach goal. Intel and others have shown a few pieces of the puzzle, but the puzzle is far from complete. We will go into more detail in later articles.

But there were some interesting news tidbits we would like to share with you.

First of all, there was the announcement of the new Broadwell SoC. Broadwell is the successor to Haswell, but Intel has also decided to introduce a highly integrated SoC version. So we get the "brawny" Broadwell cores inside an SoC that integrates networking, storage, and so on, just like the Avoton SoC. As this could be a very powerful SoC for microservers, it will be interesting to see how much room is left for the Denverton SoC - the successor of the Atom-based Avoton SoC - and the ARM server SoCs.

Jason Waxman, General Manager of the Cloud Infrastructure Group, also showed a real Avoton SoC package.

A quick recap: the Atom "Avoton" is the 22 nm successor of the dual-core Atom S1260 "Centerton".

The Avoton SoC has up to 8 cores and integrates SATA, Gigabit Ethernet, USB and PCIe.

Intel promises up to 4x better performance per watt, but no details were given at the conference. The interesting details that we hardware enthusiasts love can be found at the end of the PDF, though. Performance per watt was measured with SPEC CPU2006 integer rate. The dual-core Atom S1260 (2 GHz, Hyper-Threading enabled) scored 18.7 (base), while the Atom C2xxx (clock speed 1.5 GHz?, Turbo disabled) on an alpha motherboard (Intel Mohon) reached 69. Both platforms included a 250 GB hard disk and a small motherboard. The Atom "Avoton" had twice as much memory (16 vs. 8 GB), but the whole platform needed 19 W while the S1260 platform needed 20 W. Doubling the amount of memory is not unfair if you have four times as many cores (and thus SPEC CPU integer rate instances). From these numbers it is clear that Avoton is a great step forward: Intel is able to fit four times more cores in the same power envelope without (tangibly) lowering single-threaded performance, as the lower clock speed is compensated by the IPC improvements in Silvermont.
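As a quick sanity check, Intel's "up to 4x" claim follows directly from those platform-level numbers:

```python
# Performance per watt = SPEC CPU2006 int rate (base) / platform power draw.
s1260_perf_per_watt = 18.7 / 20   # Atom S1260 platform: score 18.7 at 20 W
avoton_perf_per_watt = 69 / 19    # Atom C2xxx platform: score 69 at 19 W

improvement = avoton_perf_per_watt / s1260_perf_per_watt
print(round(improvement, 2))  # 3.88, in line with the "up to 4x" claim
```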

Intel does not stop at integrating more features inside an SoC; it also wants to make the server and rack infrastructure more efficient. Today, several vendors offer racks with shared cooling and power. Intel is currently working on servers with a rack fabric based on optical interconnects. And in the future we might see processors with embedded RAM but without a memory controller, placed together inside a compute node with a very fast interconnect to a large memory node. The idea is to have very flexible, centralized pools of compute, memory, and storage.

The Avoton server at the conference showed off some of these server and rack innovations. Not only did it have 30 small compute nodes...

... it also had no PSUs of its own, drawing power instead from a centralized rack-level power supply.

In summary, it looks like the components in the rack will be very different in the near future: multi-node servers without PSUs, SANs replaced by storage pools, and proprietary network gear replaced by specialized x86 servers running networking software.

Comments

  • flyingpants1 - Tuesday, July 23, 2013 - link

    I read the article. Can someone please explain this in layman's terms? I don't have any experience with servers.

    It looks like they are aiming to do a few things here: greatly simplify the server hardware design (a lot more computing power per square inch), virtualize the network hardware, and speed up deployment time of something.

    Where are the hard drives? I imagine you don't need any for most of those little compute cards.

    Since Broadwell is an SoC now, when is there any benefit to running Atom-based servers? Broadwell is way faster than atom. Do Atom servers take up more space, but give better performance per watt?

    If each one of those cards is a CPU and RAM, what are the physical connectors for? Power and networking? How are they interfaced to each other?

    And why is there so much empty space? I'm guessing to allow for larger heatsinks.
  • JohanAnandtech - Tuesday, July 23, 2013 - link

    Well, I did not explain the server part in great detail. Basically, complex storage and network devices are being replaced by "normal" x86 boxes and software.
    The harddrives will be accessed somewhere via the network or the SATA interface going through the "gold fingers". The physical connectors are for power/networking fabric/USB/SATA etc.
    Empty space: it is an early prototype.
  • JohanAnandtech - Tuesday, July 23, 2013 - link

    "Since Broadwell is an SoC now, when is there any benefit to running Atom-based servers?"
    Broadwell will probably be in the 40 W area, while Atom CPUs can be as low as 6 W. So applications that are mostly about I/O might still be better off with the Atom CPUs. To be honest: I am not sure. That will be an interesting question to answer with some in-depth reviews :-)
  • alxlr8 - Tuesday, July 23, 2013 - link

    Intel are not leading in this space anymore. The announcements above are an admission that the directional path being cut by competitors is actually a viable one, so Intel are starting to copy them, for example with the server chassis being shown here. This is good for the industry as a whole, but not so promising for Intel.
  • A5 - Tuesday, July 23, 2013 - link

    I think Intel can afford to be a "fast follower" instead of a leader in some of these non-core markets like DC and mobile. They have huge brand recognition and enough resources to make a strong entry even if they aren't first.
  • jamyryals - Tuesday, July 23, 2013 - link

    Exactly, also Intel can support projects really well. It's a pretty big deal in the real world when you don't have an expert/specialist on each component at your disposal.
  • Hrel - Tuesday, July 23, 2013 - link

    It wasn't a legitimate path until Intel got in on it. The fact that Intel is doing it means it's going to be done right. Personally I'm looking forward to what they pull off.
  • knolf - Thursday, August 1, 2013 - link

    I can also make a bunch of slides and throw in some buzzwords (SDN, up and running in minutes). I'm a network engineer and I work in an enterprise datacenter environment on a daily basis. All this 'I push a button and everything gets magically provisioned and configured' is vaporware. Let me know which company can make this type of product. I will buy their shares.
