For a special occasion, and with what looked like a pricing error, I decided to splash out on a 10GBase-T switch for my testing lab. Coming in at almost £800, reduced from £1900, this beast was not cheap, but it was surprisingly below my personal cost-per-port threshold to get into the 10-gigabit game. Rather than review the switch (how do you review a switch, anyway?), I just want to go through what this thing is and what I can do with it, plus some rough point-to-point bandwidth speeds.

The Quest for 10G on Copper

One of my personal crusades in recent years has been to push 10-gigabit networking – specifically Ethernet over copper (10GBase-T) – into a price range that is more amenable to home users. For a long time, this technology has been priced for commercial and enterprise: upwards of $100 per port for the switch and $100-$200 per port for the add-in cards. This is partly because the technology has a lot of enterprise bells and whistles, such as QoS, but also there has never been a big drive for more than gigabit Ethernet in the home.

Recently this changed somewhat. After a decade of Intel’s 10G silicon sitting on the shelves, Aquantia came in and started offering add-in cards below $100 – not only for 10G but also for the new 2.5G and 5G standards. Their idea is to expand the market for this technology, given that they’ve been in the backhaul and networking backbone markets for a while. They had a two-year lead over others on the 2.5G/5G silicon, but the key issue (as I explained to them over a year ago) was that in order to make it happen in the home, it would require switches. These switches could be managed or unmanaged, but there really needs to be a $50/port or even $30/port series of switches for multi-gigabit to take off. I made an online poll just for this.

Out of 137 voters at the time, about 10% said they would jump on the technology at $80 per port. Around a third said $50 per port, and 60% or so said $30 per port. To be honest, these results were around what I expected. Personally, I think a $250 5-port switch would be a great point to enter the market.

All that being said, and as much as the good folks at Aquantia agree with me, they don’t make the switches – it’s up to the Netgears, the D-Links, the TP-Links, and such to actually build them. I don’t have contacts with any of them to say what their thoughts are, but they haven’t been as quick as I hoped. My guess is that they don’t want to build cheap 10G switches that might pull business away from their high-margin enterprise hardware.

The State of 10GBase-T

A while back, before Aquantia burst onto the scene, we did a piece about every consumer motherboard with 10GBase-T built in. This article saw insane traffic for a short piece, but it also showed that every such motherboard was using Intel’s X540-T2 controller chip. For these boards, the chip was expensive (adding ~$250 to the board retail price), power hungry, and it required a good number of PCIe lanes. The upside was that most of these boards were dual port.

Since then, we’ve seen boards with Aquantia AQC107 (and AQC108) chips on board, which raise the price of the board by $70-$100 for a single port, but this is still a far more accessible way of enabling anything better than gigabit Ethernet on a PC. Then there's the range of 10GbE PCIe cards available, running at around $100.

As for the switches, the only options were a number of managed 8-port models from the likes of Netgear, such as the Netgear XS708E, which was around the $750 mark. Shelling out $80-$100 per port (after taxes), as we saw in the poll above, is a little insane for a home network and doesn’t appeal to very many users.

In the last year or so, a number of switches have hit the market offering two 10GBase-T ports and eight 1G ports. This includes switches such as the ASUS XG-U2008, which has been on sale for $250-$300 or so, the gaming-focused Netgear GS810EMX at $250-$300, and the Netgear GS110EMX, a non-gaming version for slightly less. The problem with these switches is that they only have two 10G ports – there’s no way to make a ‘tree’ from them, so each essentially becomes an expensive point-to-point connection, given how cheap gigabit switches are.


The Netgear Gaming 2x10G + 8x1G managed switch

So, as of this week, that was the state of play for 10G offerings: still pretty abysmal for anyone looking for a ‘quick fix’ to enable 10GBase-T in the home.

It Was A Misprice or Something

So this week, when a family member asked me what I wanted for my birthday, I idly flicked through some switch listings. Thinking I might just splurge for a 2-port, I was hoping that an 8-port had come down in price. What I found, without too much trouble, was the Netgear XS724EM, a 24-port 10GBase-T switch. My search hadn’t been for that many ports – I assumed it would automatically be too expensive.

The XS724EM had an RRP of £1700. The price in front of me was £782. After a quick rant on Twitter, it was a no-brainer (ed: I still think you're insane). At £782 / $858, this was a 55% discount, and it comes in at just under $36 per port. I expected that the cost of these switches would come down at some point, although I didn’t anticipate that the first one to do so would be a super-large model. Not only that, but it supports 5G and 2.5G as well, so it is still beneficial with existing Cat5e runs.

If you go to the page today, you will see that this might have been a misprice.

The unit is currently up for £1280, almost £500 more than what I paid for it. Bargain. Prime delivery too.

Unboxing the XS724EM

After showing the box to the resident feline population, it was time to see what we had. On the side it gives a lot of pertinent information. This unit weighs 3.72kg / 8.21 lbs, which will be a key point for some users.

In the box, the unit is well packaged with foam blocks, although there is little space above and below it should the box be punctured.

Aside from the manual, the box came with two power cords (one UK, one EU), along with rubber feet for users putting the switch on a desk, and brackets to mount the unit in a standard 19-inch rack. Some comments online state that in a rack, using just the brackets and screws, the unit ends up very rear-heavy, putting a lot of torque on the screws if it isn’t resting directly on top of a server. In this case, it might be good to invest in rails.

The manual gives examples of how to connect the switch to multiple devices. Interestingly it thinks that gaming laptops with 2.5G connections are somewhat ubiquitous – I think someone should tell Netgear this is not the case.

There is also a smartphone app to help with additional management.

The cables for the switch are designed to be put in the front, where we get 24x 1G/2.5G/5G/10GBase-T ports for RJ45 cables. There are also two 10G SFP+ ports on the right, muxed with the final two 10GBase-T ports, so only one port of each shared pair can be used at a time.

The lights on a normal gigabit Ethernet port are both orange and flicker with data. In this case, to distinguish 2.5G, 5G, and 10G, the LEDs turn green and show different patterns based on the link speed.

There’s a Kensington Lock on the rear for physical security.

Airflow through the unit is provided by three fans near the exhaust, with the intake on the other side.

Opening the chassis takes two screws on either side and three on the rear. It slides off like a standard server chassis, keeping the front panel.

At the rear of the chassis, covered by a shroud, is the built-in power supply. The main PCB has several big heatsinks on it, which we’ll get to in a bit.

The fans in the chassis are Delta AFB0412SHB brushless fans, and these can kick up quite a noise at full blast. Luckily the only time I’ve heard them on full is when turning the unit on.

On the PCB are the controllers covered in aluminium heatsinks. These heatsinks are big and heavy, and there’s even a metal plate on top of the main switching fabric.

I actually tried to take this plate off to see the controllers underneath, but that was a no-go. The heatsinks use additional thermal pads to keep the plate attached and to conduct the heat through the unit. As I bought this unit with my own money for my own use, rather than with AnandTech’s money or as a review sample, I wasn’t willing to risk breaking anything. Sorry.

After fitting it all back together, and putting the rubber feet on, it was time to hook it up to my home network.

This switch is going to sit at the crossroads of my five main test beds, along with a Steam cache server (to enable quicker downloads), a local NAS, and a few other devices. For sure, I’ll be doing some office rearrangement soon to make the most out of the switch.

Using the Switch

This is a managed switch, which means there is the opportunity to go in and organise all of the settings. However, for users who just want to use it as a switch, it is almost as easy as plug and play. In fact, it was plug and play to begin with – in order to make the setup a bit cleaner, I went into the web interface for the switch and disabled DHCP, since DHCP on my network is handled by my router.

Logging in was straightforward (the IP address and password are on the bottom of the switch, and the default password is 'password'), and the management controls seem suitable for what the switch is designed for. Users can see which ports are connected at what speeds, limit connectivity per port, and set up VLANs. In my case I’m not going to be using much of this, but the VLAN and QoS options will be key for office users.

Performance

As it turns out, testing networking hardware is difficult. To get a really detailed overview of a switch requires the best part of 12-16 systems hitting it hard, aggregating the results for latency and bandwidth, while also keeping track of power, temperature, and noise. Unfortunately I have neither the time nor the facilities to do that, but a quick blast of iperf for point-to-point speeds is what we have at hand.

For our test systems, on one end I have an AMD Ryzen Pro 2400GE (35W) APU system with an Intel X540-T2 PCIe card equipped, and on the other end is an X170 motherboard with a Core i3-7100T (35W) and an Aquantia AQC107 PCIe card equipped. Both systems were running Windows 10 x64 Enterprise 1803. I installed the cards and drivers, made no other settings changes, and ran iperf while varying the number of parallel connections.

The default settings in iperf and on the two systems showed that we could, in theory, reach transfer rates of around 9.3 Gbps. The cards could also be the limiting factor here – one of the dangers of testing networking is that typically a 10G card is connected to a 10G switch, which is connected to another 10G card; any one of those three parts could be the bottleneck. I did note that iperf very easily used 85% of one thread on each system, so it could be that a faster CPU is needed for better performance as well.
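For context, that 9.3 Gbps figure is very close to the theoretical ceiling for a single TCP stream on a 10G link. A quick back-of-envelope sketch (assuming a standard 1500-byte MTU, standard Ethernet framing overhead, and plain 20-byte IP and TCP headers with no options – real stacks often shave a little more off):

```python
# Rough sanity check: theoretical TCP goodput on a 10GBase-T link.
# Per-frame overhead on the wire: 12B inter-frame gap + 8B preamble
# + 14B Ethernet header + 4B FCS, plus 20B IP and 20B TCP headers
# carried inside the 1500-byte MTU.

LINE_RATE_GBPS = 10.0
MTU = 1500
WIRE_OVERHEAD = 12 + 8 + 14 + 4   # bytes added around each frame
IP_TCP_HEADERS = 20 + 20          # header bytes inside the MTU

frame_on_wire = MTU + WIRE_OVERHEAD       # 1538 bytes per frame
tcp_payload = MTU - IP_TCP_HEADERS        # 1460 bytes of actual data

goodput = LINE_RATE_GBPS * tcp_payload / frame_on_wire
print(f"Theoretical TCP goodput: {goodput:.2f} Gbps")  # ~9.49 Gbps
```

So 9.3 Gbps is within a couple of percent of the best a single 1500-MTU TCP stream can do, which suggests the switch itself was not the bottleneck here.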

A more immediate concern when buying a switch like this is noise. This is a switch designed for the hubbub of a small office or a server rack – not necessarily a home office where I might be recording audio. However, in my use so far, the only time the fans have come on is when the machine is turned on (like some motherboards that turn all fans to full until the startup sequence finishes). After that initial 15-second startup, the fans go silent. When testing point-to-point peak speeds over several minutes, the unit remains silent. It naturally gets warm to the touch, but in my setup it is out of the way on a desk. I’m sure I can find a place for the cats to sit on it and enjoy.

The Final Word

As mentioned, I forked over my own cash for this hardware. At $36 a port, I’m still amazed that the first switch to cross my $50 line was a massive 24-port model, so now I have overkill for whatever I have planned (ed: it may involve CPUs and motherboards). The key thing here, for me, will be my testing – every new testbed requires 100GB of CPU tests and 800GB+ of gaming tests, so copying these over takes time. On a gigabit network, even using my new Steam cache at a speed of 70MB/s, a big game like GTA5 can still take 13 minutes. I’m hoping that with 10G, if I can push that transfer speed to SATA limits, the total time will come down to around two minutes. There's also the possibility of doing some network card testing in the future.
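As a rough sanity check on those numbers, a back-of-envelope sketch. The ~55 GB install size for GTA5 is an assumption (it is roughly what the 70 MB/s and 13-minute figures imply), as is the ~550 MB/s figure for a SATA SSD ceiling:

```python
# Back-of-envelope transfer times for a large game install.
# GAME_SIZE_GB is an assumed figure, roughly GTA5's install size
# at the time; 550 MB/s approximates the SATA SSD limit.

GAME_SIZE_GB = 55

def transfer_minutes(size_gb: float, speed_mb_s: float) -> float:
    """Minutes to copy size_gb of data at a sustained speed_mb_s."""
    return size_gb * 1000 / speed_mb_s / 60

gigabit = transfer_minutes(GAME_SIZE_GB, 70)    # gigabit network in practice
sata    = transfer_minutes(GAME_SIZE_GB, 550)   # 10G link, SATA-limited

print(f"Gigabit @ 70 MB/s:  {gigabit:.1f} min")  # ~13.1 min
print(f"10G @ 550 MB/s:     {sata:.1f} min")     # ~1.7 min
```

In other words, the jump from gigabit to a SATA-limited 10G link is roughly an 8x reduction in copy time, which matches the "13 minutes down to around two" estimate above.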


55 Comments


  • Psycho_McCrazy - Saturday, September 29, 2018 - link

    Serenity now, Insanity later!

    (And serenity implies running cat6 cables at my home to prevent a youtube stream from saturating the WiFi and blocking access to the file server.)
  • imaheadcase - Saturday, September 29, 2018 - link

    Wait a tick, you can get 5-8 port 10Gb switches on amazon for around $250-300. Or are they just cheaper in the US?
  • imaheadcase - Saturday, September 29, 2018 - link

    I keep thinking of upgrading my home network to it, but the sad reality is that outside a few situations it's not needed yet. Especially since most things you connect to network still have 1Gb ports. I mean you can stream 4k fine and as long as you got fast internet its easier to download steam stuff than worry about a cache system.
  • V900 - Saturday, September 29, 2018 - link

    Whats the advantage of a switch?

    Suppose someone got a total of ten devices at home and a 5 port router. Half of the devices use the router through LAN cable, and the other five are connected through Wi-Fi.

    What would be the advantage of hooking a ten port switch to the router, and connecting all ten devices with LAN cable to the switch?
  • ZeDestructor - Saturday, September 29, 2018 - link

    Some of us just have that many wired devices.

    In my tiny home network, I have:

    1x10G desktop
    1 laptop dock
    2 APs
    2x10G for the big, heavy, noisy rackmounted server
    1 for the aforementioned server's IPMI (cause that's not on the 10G links of said server)
    1 for the modem (modem lives on a VLAN that I then trunk to the aforementioned server)
    1 for the TV STB
    1 for the printer

    That's 10 ports (because some devices devour ports, like my server). And I consider my network positively tiny. And I've been religiously ignoring IoT so far, cause I have seen nothing I find useful enough to dive into the quagmire of security fails.
  • ZeDestructor - Saturday, September 29, 2018 - link

    Oh, and let's not forget keeping unused wall ports connected, so that future ZeDestructor doesn't have to go into the shed and move wires around when nerdy guests are over.
  • wolrah - Saturday, September 29, 2018 - link

    > What would be the advantage of hooking a ten port switch to the router, and connecting all ten devices with LAN cable to the switch?

    Since I can't be sure if you understand this, you're already using a switch. Your "router" is just a combination device that has it built in. The actual router part is a system-on-a-chip that has usually two ethernet interfaces on it. One is the WAN port, the other is wired internally to one of the ports on a separate switch chip and the rest of those are exposed as your "LAN" ports. WiFi is connected over some other internal interface, either PCI Express or something proprietary from the SoC vendor. There are some variants on this design but that's the basic idea behind pretty much every home/small business "router".

    The reason to add another switch would be if you have run out of ports, or need ports somewhere far from the original device where running an additional cable would be impractical.

    The reason to use more ports in your scenario would be because WiFi is unreliable and low performance compared to a wire. For some use cases this may not matter, especially if you're in a rural area with limited RF interference, but those in dense urban environments should definitely wire everything they can and even many of us like myself who have relatively clear RF spectrum prefer to wire whatever we can just to know that the network will not be a problem.

    If I try to use my wireless headphones with my Steam Link in my bedroom that has no wired ethernet connection, stream quality goes down and latency goes up as they compete over spectrum. The wired Steam Link in my living room works perfectly no matter what's going on.

    My personal rule is that if it doesn't move it should be wired if at all possible. Desktop computers, printers, set-top boxes, game consoles, VoIP phones, NAS boxes, servers, cameras, etc. WiFi is for portable devices and low bandwidth IoT type stuff that gets placed in locations that'd be inconvenient to wire.

    I hate the fact that so many consumer-tier devices are being made now that are designed to live permanently near a computer or TV screen (aka the most likely places to have wiring available) but don't have ethernet ports. There are printers that only have WiFi. Printers! And don't get me started on set top boxes. I'll let it slide in the "TV stick" formfactor as long as USB-OTG adapters are supported like Chromecast does, but things like the Nintendo Wii or most of the current non-stick Roku line not having a wired network port is insane.
  • wolrah - Saturday, September 29, 2018 - link

    Forgot to also note one other reason one might add a new switch to a network would be the primary reason for this article, to increase speed beyond what one's previous switch supported. Very few routers have 2.5/5/10G ports at all, and even less of them are combination switch/router devices offering >1G speeds on the switch ports. If you want 10G networking you're either buying a switch or directly connecting the machines, and direct connections don't really scale well beyond three machines.
  • abufrejoval - Sunday, September 30, 2018 - link

    Beat you to it a couple of weeks ago using a 12-port Buffalo Technology BS-MP2012 at €600 or €50/port including taxes, initial report is somewhere here on this site.

    The Aquantia NICs were down to €80/piece for a week or so, so I upgraded all my home-lab’s core servers.

    Been on that very same mission for 10 years and only stumbled across that 12-port NBase-T switch in summer. I had been using direct connect cables with Intel and Broadcom 10Gbase-T adapters before, but removed them from my home-lab, because those NICs required too much cooling at 10Watts/port: Those were dual port NICs targeting rack-mount servers with serious air-flow, and they kept dying in my desktops.

    With Aquantia this is down to 3Watts/port (1xx series on the NIC, three 4xx series chips on the switch for a total of around 40Watts TDP), which works just fine with my noise-optimized home-lab desktop-technology servers.

    And noise was the major challenge with the Buffalo switch, too, as the original fans are just not “desktop-compatible”, but need to remove 40 Watts of heat. I installed Noctua 40x40x20mm fans with constant air-flow, voiding all warranties and putting the life of my family at risk, but I can no longer hear it, while it just gets a little warm, not hot.

    Incidentally last week I also went the next step to 100Gbit/s in the corporate lab!

    Mellanox offers hybrid NICs, ConnectX-5 adapters that will support both Ethernet and Infiniband semantics, even NVMe over fabric so you get “memory”, “network” and “storage” semantics across a single fabric at close to PCIe 3.0 x16 limits.

    Since the NIC and the switch silicon is essentially the same, only a different size, the Mellanox engineers decided to include a “host-chaining” mode, which allows you to daisy-chain NICs using cheap direct-connect cables (€100/piece) without a switch, similar to ARC-Net, Token-Ring or Fibre-Channel/Arbitrated Loop (FC-AL). Of course it means a shared medium, so it doesn’t scale, but at 100Gbit it takes 10 ports to surpass 10Gbit in star formation. And then you can just create meshes etc. adding more NICs to your servers: Composable, hyper-converged hardware, a CIO’s wet dream!

    Obviously Mellanox management wasn’t too happy about that, so currently it only works with the Ethernet personality of the VPI NICs and I only managed to massage 30Gbit/s out of these links, even if the boxes are beefy Scalable Gold Xeons.

    I find this daisy chaining mode extremely intriguing because you can build all sorts of interconnect topologies, while you save on the jump costs of central switches.
  • oRAirwolf - Monday, October 01, 2018 - link

    Nice catch. I would have happily paid that. I would be very interested to see some comparisons done between an aqc107, x540, x550, and a mellanox connectx-3. I use the aqc107 and connectx-3 in my home network and would love to see some data about CPU usage and latency.
