CSA: ooh look, a new Bus

Gigabit Ethernet is slowly but surely becoming the standard that replaces 100Mbit Ethernet on high performance desktops and workstations everywhere. The problem, however, is that most of these high performance desktops and workstations only have 32-bit, 33MHz PCI slots, which, if you do the math, works out to roughly 1Gbps of peak bandwidth.
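
For reference, here's a minimal sketch of that back-of-the-envelope math, using nothing but the standard 32-bit/33MHz PCI figures quoted above (not anything measured here):

```python
# Quick sketch of the "do the math" claim: a 32-bit, 33MHz PCI bus moves at
# most 32 bits per clock, and that bandwidth is shared by every device on it.
bus_width_bits = 32
clock_hz = 33_000_000  # nominal 33MHz PCI clock

peak_bps = bus_width_bits * clock_hz
print(f"Peak PCI bandwidth: {peak_bps / 1e9:.2f} Gbps "
      f"({peak_bps / 8 / 1e6:.0f} MB/s)")
# -> roughly 1.06 Gbps (132 MB/s) in the best case
```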

Now, if you're transferring in only one direction and nothing else is eating into that 1Gbps of PCI bus bandwidth, then you aren't bandwidth limited by the PCI bus at all; chances are, however, that this perfect world isn't too realistic in your work environment.

Most of the time you're transferring in both directions, and assuming you're connected to a full-duplex switch, you're dealing with a peak of 2Gbps of network bandwidth. Add in the disk activity and other traffic that's usually going on in the background, and it becomes clear that 1Gbps of PCI bandwidth isn't going to cut it for some current and most future Gigabit Ethernet usage scenarios.

Intel saw the bandwidth limitation facing current desktop/workstation Gigabit Ethernet deployments and wanted to make its Gigabit solutions more attractive by introducing a new bus to the MCH - the Communications Streaming Architecture (CSA) bus. We're actually giving the bus a little too much credit by calling it new; it is a virtually identical copy of a bus that has been present in Intel MCHs for quite some time. The CSA bus is nothing more than a copy of Intel's Hub Link 2.0 bus that connects the MCH to the ICH (aka South Bridge), except that it connects the MCH to Intel's Gigabit Ethernet controller instead.

The CSA bus is perfectly matched to Gigabit Ethernet as it offers a total of just over 2Gbps of bandwidth (266MB/s, equal to that of the Hub Link 2.0 bus since the two are separate but identical links), which is enough for full-duplex Gigabit transfers.
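
To put that in perspective, here's a rough check, a sketch built only from the 266MB/s figure quoted above rather than any measurement, of whether CSA's headroom covers a saturated full-duplex link:

```python
# Compare the quoted 266MB/s CSA/Hub Link 2.0 figure against what a fully
# saturated full-duplex Gigabit Ethernet link demands (1Gbps each way).
csa_bps = 266 * 10**6 * 8        # 266 MB/s expressed in bits per second
full_duplex_gbe_bps = 2 * 10**9  # 1Gbps transmit + 1Gbps receive

print(f"CSA link: {csa_bps / 1e9:.2f} Gbps")
print(f"Full-duplex GbE worst case: {full_duplex_gbe_bps / 1e9:.2f} Gbps")
print("CSA has enough headroom:", csa_bps >= full_duplex_gbe_bps)
# -> about 2.13 Gbps vs 2.00 Gbps, so the interconnect is no longer the bottleneck
```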

In order to test the usefulness of Intel's CSA, we set up a client/server test bed composed of one server and two clients, all connected through a D-Link Gigabit switch using Category-6 cabling. All three systems made use of Intel Gigabit Ethernet controllers; we used Intel Pro/1000 Desktop MT controllers in the clients and alternated between another Pro/1000 Desktop controller and the on-board CSA controller for the server. Using NetIQ Chariot, we generated bidirectional traffic between each client and the server to attempt to saturate the full-duplex Gigabit Ethernet link to the 875P equipped server. Here are the results:

Gigabit Ethernet Performance
Communications Streaming Architecture vs. Conventional PCI Link (Throughput in Mbps)

CSA Bus: 1392 Mbps
PCI Bus: 824 Mbps

As you can see, Intel's CSA does deliver as promised, although you have to keep in mind that transfer rates this high are impossible if you're reading data off of a hard drive. If you're sending data that's already cached in main memory, then you'll be able to reach these sorts of transfer rates; otherwise, there will be minimal peak transfer rate differences between a CSA Gigabit interface and a conventional PCI Gigabit interface. One thing is for sure: with a CSA Gigabit interface, your network performance is entirely disk limited.

Even if you're not getting higher transfer rates, moving bandwidth-heavy traffic off of the PCI bus helps ensure that other transactions occurring on the bus aren't interrupted by bursts of network traffic. The end result is similar to what NVIDIA achieved using isochronous HyperTransport channels with nForce2: uninterrupted network data transfers and a minimized impact of sudden bursts of data on the rest of the system.

The addition of the CSA bus means that the MCH now has the following links stemming from it:

- 32-bit AGP 8x interface
- 2 x 64-bit ECC DDR memory interfaces
- 64-bit NetBurst FSB interface
- 16-bit Hub Link ICH interface
- 16-bit CSA interface

The addition of the CSA bus, along with the two 64-bit memory buses, is what makes the 875P the largest desktop chipset to date, with over 1,000 balls connecting the MCH to the motherboard.
