Intel Columbiaville: 800 Series Ethernet at 100G, with ADQ and DDP
by Ian Cutress on April 2, 2019 1:00 PM EST - Posted in
- Networking
- Intel
- Ethernet
- 100G
- 100G Ethernet
- Columbiaville
Among the many data center announcements today from Intel, one that might fly under the radar is that the company is launching a new family of controllers for 100 gigabit Ethernet connectivity. Aside from the speed, Intel is also implementing new features to improve connectivity, routing, uptime, storage protocols, and an element of programmability to address customer needs.
The new Intel 800-Series Ethernet controllers and PCIe cards, using the Columbiaville code-name, are focused mainly on one thing aside from providing a 100G connection – meeting customer requirements and targets for connectivity and latency. This involves reducing the variability in application response time, improving predictability, and improving throughput. Intel is doing this through two technologies: Application Device Queues (ADQ) and Dynamic Device Personalization (DDP).
Application Device Queues (ADQ)
Adding queues to networking traffic isn’t new – we’ve seen it in the consumer space for years, with hardware-based solutions from Rivet Networks and software solutions from a range of hardware and software companies. Queuing network traffic allows high-priority requests to be sent over the network ahead of others (in the consumer use case, streaming a video takes priority over a background download), and different implementations either leave the arrangement to the user, offer application whitelists, or analyze traffic to queue the appropriate networking patterns.
Intel’s implementation of ADQ instead relies on the application knowing the networking infrastructure and directing traffic accordingly. The example given by Intel is a distributed Redis database – the database should be in control of its own networking flow, so it can tell the Ethernet controller which packets to prioritize and how to route them. The application knows which packets are higher priority, so it can send them along the fastest path through the network and ahead of other packets, while non-priority packets can take different routes to ease congestion.
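On Linux, this style of application-directed queuing is typically exposed through standard kernel interfaces: an administrator partitions the NIC’s queues into traffic classes (for example with tc mqprio plus flower filters), and the application tags its sockets so its flows land on a dedicated queue group. The snippet below is a minimal sketch of the application side only; the priority value and busy-poll budget are illustrative assumptions, not Intel-documented settings.

```c
/* Minimal sketch: tagging a socket so the kernel and NIC driver can steer it
 * onto a dedicated hardware queue group. Assumes a Linux host where the
 * administrator has already partitioned the NIC queues (e.g. via
 * "tc qdisc add dev eth0 root mqprio ..." and flower filters); the priority
 * value 3 and the 50 us busy-poll budget are purely illustrative. */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    /* SO_PRIORITY maps this socket's egress traffic onto a traffic class,
     * which mqprio can bind to a specific set of NIC queues. */
    int prio = 3;
    if (setsockopt(fd, SOL_SOCKET, SO_PRIORITY, &prio, sizeof(prio)) < 0)
        perror("setsockopt(SO_PRIORITY)");

    /* Busy-poll the receive queue for a few microseconds before sleeping,
     * trading some CPU for lower and more predictable latency. */
    int busy_usec = 50;
    if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL, &busy_usec, sizeof(busy_usec)) < 0)
        perror("setsockopt(SO_BUSY_POLL)");

    /* ... connect(), send(), recv() as usual ... */
    close(fd);
    return 0;
}
```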
Unfortunately, it was hard to see how much of a difference ADQ made in the examples that Intel provided – the company compared a modern Cascade Lake system (equipped with the new E810-CQDA2 dual-port 100G Ethernet card and 1TB of Optane DC Persistent Memory) to an old Ivy Bridge system with a dual-port 10G Ethernet card and 128 GB of DRAM (no Optane). While this might be indicative of a generational upgrade, it is such a sizeable one that it hides the benefit of the new technology by not providing an apples-to-apples comparison.
Dynamic Device Personalization (DDP)
DDP was introduced with Intel’s 40 GbE controllers, but it gets an updated implementation for the new 800-series controllers. In simple terms, DDP allows for a programmable protocol within the networking infrastructure, which can be used for faster routing and additional security. With DDP, the controller can work with the software to craft a user-defined protocol and header within a network packet to provide additional functionality.
As mentioned, the two key areas here are security (cryptography) and speed (bandwidth/latency). With the pipeline parser embedded in the controller, it can both craft an outgoing data packet and analyse an incoming one. When a packet comes in, if the controller recognizes the defined protocol, it can act on it – either sending the payload to an accelerator for additional processing, or pushing it to the next hop on the network without needing to refer to its routing tables. With the packet format being custom defined (within an overall specification), the limits of the technology depend on how far the imagination goes. Intel already offers DDP profiles for its 700-series products for a variety of markets, and that library is built upon for the 800-series. For the 800 series, these custom DDP profiles can be loaded pre-boot, in the OS, or at run-time.
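Intel has not published the internal format of DDP profiles (they ship as binary packages loaded into the controller), so the snippet below is only a conceptual sketch of the idea: a user-defined header that a programmable parser could match on and use to pick an action. The field layout, names, and values are entirely hypothetical.

```c
/* Illustrative sketch only: real DDP profiles are binary packages loaded
 * into the 800-series controller, not C code. This models the concept of a
 * user-defined header that a programmable parser recognizes and acts on. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>

/* Hypothetical custom header carried after the standard L2/L3/L4 headers. */
struct custom_hdr {
    uint16_t magic;      /* identifies the user-defined protocol             */
    uint8_t  action;     /* 0 = deliver normally, 1 = send to accelerator,
                            2 = fast-forward to the next hop                 */
    uint8_t  reserved;
    uint32_t flow_id;    /* lets the parser steer the flow without a
                            routing-table lookup                             */
};

enum verdict { DELIVER, TO_ACCELERATOR, FAST_FORWARD };

/* What a parser rule conceptually does: match on the magic value,
 * then pick a verdict based on the action field. */
static enum verdict classify(const uint8_t *payload, size_t len)
{
    struct custom_hdr h;
    if (len < sizeof(h)) return DELIVER;
    memcpy(&h, payload, sizeof(h));
    if (ntohs(h.magic) != 0xC0DE) return DELIVER;   /* not our protocol */
    switch (h.action) {
    case 1:  return TO_ACCELERATOR;
    case 2:  return FAST_FORWARD;
    default: return DELIVER;
    }
}

int main(void)
{
    uint8_t pkt[8] = { 0xC0, 0xDE, 0x02, 0x00, 0x00, 0x00, 0x00, 0x2A };
    printf("verdict: %d\n", classify(pkt, sizeof(pkt)));
    return 0;
}
```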
But OmniPath is 100G, right?
For users involved in the networking space, I know what you are going to say: doesn’t Intel already offer OmniPath at 100G? Why would the company release an Ethernet-based product to cannibalize its own OmniPath portfolio? Those questions are valid, and depending on who you talk to, OmniPath has had a great reception, a mild reception, or no reception at all in networking deployments. Intel’s latest version of OmniPath is actually set to be capable of 200G, so at this point it is a generation ahead of Intel’s Ethernet offerings. Based on the technologies involved, Intel looks likely to continue this generational separation, giving customers the choice between the latest it has to offer on OmniPath or its slower Ethernet equivalent.
What We Still Need to Know, Launch Timeframe
When asked, Intel stated that it is not disclosing the process node the new controllers are built on, nor their power consumption. Intel stated that its 800-series controllers and corresponding PCIe cards should be launching in Q3.
Related Reading
- Intel Announces The FPGA PAC N3000 for 5G Networks
- Things We Missed: Realtek Has 2.5G Gaming Ethernet Controllers
- Wi-Fi Naming Simplified: 802.11ax Becomes Wi-Fi 6
- In The Lab: The Netgear XS724EM, a 24-port 2.5G/5G/10GBase-T Switch
- Omni-Path Switches at SuperComputing 15: Supermicro and Dell
- Exploring Intel’s Omni-Path Network Fabric
20 Comments
ksec - Tuesday, April 2, 2019 - link
From this announcement, so Intel wasn't really interested in Mellanox and their bid was only to push Nvidia's bidding price higher?
Kevin G - Tuesday, April 2, 2019 - link
I wouldn't say that. Without Mellanox, how many other high speed networking companies are out there that implement fabrics other than Ethernet? It'd effectively be Intel or Ethernet. At this point, Intel would be free to raise hardware prices as they see fit. Having a (near) monopoly can pay off.
TeXWiller - Tuesday, April 2, 2019 - link
>For users involved in the networking space, I know what you are going to say: doesn’t Intel already offer OmniPath at 100G?
They probably wouldn't say that any more than they would try to build an Ethernet cluster when an Infiniband network is needed for low enough latency.
binarp - Thursday, April 4, 2019 - link
+1 No one in networking would ask this. Infiniband/Omnipath networks are high bandwidth, ultra low latency, often non-blocking, memory-interconnect, closed-loop networks. Other than high bandwidth, Ethernet networks are rarely any of these things.
abufrejoval - Tuesday, April 2, 2019 - link
DDP sounds very much like the programmable data plane that Bigfoot Torfinos implement via the P4 programming language: so is the hardware capable enough, and will Intel support P4, or do they want to split (and control) the ecosystem like with Omnipath and Optane?
abufrejoval - Wednesday, April 3, 2019 - link
Barefoot Tofino, sorry, need edit
0ldman79 - Tuesday, April 2, 2019 - link
To be fair, can an Ivy Bridge even *move* 100Gbps of data over its NIC? I'm not sure how else they could do the demo; most hardware would be the limit. The comparison would have to be against older tech, though I suppose they could have used a more recent comparison.
Honestly though, for that level of hardware an Ivy Bridge might be the correct comparison. I doubt these devices get changed very frequently.
We have used a few Core i7 as edge routers for multiple gigabit networking. Not sure what the upper limit is, we haven't hit it yet, even with our Nehalem. I doubt it would handle 10Gbps with a few hundred thousand connections pushing through it.
hpglow - Tuesday, April 2, 2019 - link
No, but Ivy Bridge can't move 50GB/sec over its port either. It does so the same way a 100 GB/sec connection would be enabled: through a 16x PCIe slot.
trackersoft123 - Wednesday, April 3, 2019 - link
Hi guys, do you make a difference between 100GB/s and 100Gb/s?
There is roughly 8 times difference :)
Nutty667 - Monday, April 8, 2019 - link
We did 200Gbps on a Broadwell server with UDP packets. You have to use a kernel-bypass networking framework for this, however. We had to use two network cards, as you can only get 100Gbps over each PCIe x16 connection.
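That PCIe ceiling is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below assumes a PCIe 3.0 x16 link (what a Broadwell-era server would provide) and approximates protocol overhead at roughly ten percent; both figures are assumptions for illustration, not measurements.

```c
/* Back-of-the-envelope check of why one PCIe 3.0 x16 slot tops out around
 * a single 100GbE port: 16 lanes * 8 GT/s * 128/130 encoding ~= 126 Gb/s
 * raw, before TLP/DLLP protocol overhead. Figures are approximations. */
#include <stdio.h>

int main(void)
{
    const double lanes       = 16.0;
    const double gt_per_lane = 8.0;            /* PCIe 3.0: 8 GT/s per lane */
    const double encoding    = 128.0 / 130.0;  /* 128b/130b line coding     */

    double raw_gbps = lanes * gt_per_lane * encoding;  /* ~126 Gb/s         */
    double usable   = raw_gbps * 0.90;                 /* rough TLP overhead */

    printf("PCIe 3.0 x16 raw:     %.1f Gb/s\n", raw_gbps);
    printf("PCIe 3.0 x16 usable: ~%.1f Gb/s\n", usable);
    printf("=> one x16 slot can feed roughly one 100GbE port; "
           "200 Gb/s needs two cards (or a faster PCIe generation).\n");
    return 0;
}
```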