PCI Express 3.0: More Bandwidth For Compute

It may seem like it’s still fairly new, but PCI Express 2.0 is actually a relatively old addition to motherboards and video cards. AMD first added support for it with the Radeon HD 3870 back in late 2007, so it has been just over 4 years since video cards made the jump. PCI Express 3.0, meanwhile, has been in the works for some time, and while its development hasn’t quite taken 4 years, it certainly feels like it has been longer. PCIe 3.0 motherboards finally became available last month with the launch of the Sandy Bridge-E platform, and now the first PCIe 3.0 video cards are arriving with Tahiti.

But at first glance it may not seem like PCIe 3.0 is all that important. Additional PCIe bandwidth has proven to be largely unnecessary for gaming: single-GPU cards typically gain only a couple of percent (if anything) when moving from PCIe 2.1 x8 to x16. There will of course come a time when games need more PCIe bandwidth, but right now PCIe 2.1 x16 (8GB/sec) handles the task with room to spare.

So why is PCIe 3.0 important? It’s not the games, it’s the computing. GPUs have a great deal of internal memory bandwidth (264GB/sec on Tahiti; more with cache), but shuffling data between the GPU and the CPU is a high-latency, heavily bottlenecked process that tops out at 8GB/sec under PCIe 2.1. And because GPUs are still specialized devices that excel at parallel code execution, plenty of workloads need to constantly move data between the GPU and the CPU to get the best of both parallel and serial code execution. As it stands, GPUs are really only well suited for workloads that involve sending work to the GPU and keeping it there; heterogeneous computing is a luxury there isn’t bandwidth for.
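
To put a number on that round trip, it can be timed directly. Below is a minimal microbenchmark sketch, not code from AMD’s SDK: it uses the OpenCL 1.x host API to time a blocking host-to-device and device-to-host copy of a 256MB buffer, which is exactly the kind of traffic the bus gates. The buffer size and the use of ordinary pageable host memory are illustrative assumptions; pinned memory and a warm-up pass would post higher, steadier numbers.

```c
/* Minimal PCIe round-trip sketch (illustrative; error checks and a
 * warm-up pass omitted for brevity). Build with: cc bench.c -lOpenCL */
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    const size_t bytes = 256u << 20;  /* 256MB payload (assumed size) */
    char *host = malloc(bytes);

    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, bytes, NULL, NULL);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    /* Blocking copies: host -> device, then device -> host */
    clEnqueueWriteBuffer(q, buf, CL_TRUE, 0, bytes, host, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, bytes, host, 0, NULL, NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    /* Twice the payload crosses the bus: once up, once back down */
    printf("round trip: %.1f ms, %.2f GB/s effective\n",
           s * 1e3, 2.0 * bytes / s / 1e9);

    clReleaseMemObject(buf);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    free(host);
    return 0;
}
```

On a PCIe 2.1 x16 link a run like this lands well under the 8GB/sec theoretical figure, and that theoretical figure is precisely the ceiling PCIe 3.0 raises.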

The long-term solution, of course, is to bring the CPU and the GPU together, which is what Fusion does. CPU/GPU bandwidth in Llano alone is over 20GB/sec, and latency is greatly reduced thanks to the CPU and GPU sharing a die. But none of this changes the fact that AMD also wants to bring some of these same benefits to discrete GPUs, which is where PCIe 3.0 comes in.

With PCIe 3.0, transport bandwidth is again being doubled, from 500MB/sec per lane in each direction to 1GB/sec per lane in each direction, which for an x16 device means doubling the available bandwidth from 8GB/sec to 16GB/sec. This is accomplished by increasing the signaling rate of the underlying bus from 5 GT/sec to 8 GT/sec, while cutting encoding overhead from 20% (8b/10b encoding) to roughly 1.5% through the use of a far more efficient 128b/130b encoding scheme. Latency, meanwhile, doesn’t change (it’s largely a product of physics and physical distances), but merely doubling the bandwidth can greatly improve performance for bandwidth-hungry compute applications.
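
Those figures fall straight out of the signaling rate and the encoding efficiency. As a quick sanity check (our own arithmetic, not a table from the PCI-SIG spec):

```c
/* Per-direction PCIe bandwidth: signaling rate (GT/s) x encoding
 * efficiency / 8 bits per byte, then scaled to 16 lanes. */
#include <stdio.h>

int main(void) {
    double gen2_lane = 5.0 * (8.0 / 10.0) / 8.0;    /* 0.5 GB/s per lane */
    double gen3_lane = 8.0 * (128.0 / 130.0) / 8.0; /* ~0.985 GB/s per lane */
    printf("PCIe 2.1 x16: %5.2f GB/s per direction\n", 16 * gen2_lane);
    printf("PCIe 3.0 x16: %5.2f GB/s per direction\n", 16 * gen3_lane);
    return 0;
}
```

The x16 result works out to 15.75GB/sec, which rounds to the 16GB/sec figure quoted above.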

As with any specialized change like this, the benefit is going to depend heavily on the application being used. However, AMD is confident that there are applications that will completely saturate PCIe 3.0 (and then some), and it’s easy to imagine why.

Even among our limited selection of compute benchmarks we found something that directly benefited from PCIe 3.0. AESEncryptDecrypt, a sample application from AMD’s APP SDK, demonstrates AES encryption performance by running it on square image files. Throwing it a large 8K x 8K image not only creates a lot of work for the GPU, but a lot of PCIe traffic too. In our case, simply enabling PCIe 3.0 improved performance by 9%, from 324ms down to 297ms.
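
That improvement is consistent with the transfer time shrinking. As a rough back-of-the-envelope estimate (assuming a 4-byte-per-pixel image and a single copy up plus a single copy back, which is our assumption rather than a profile of the actual sample):

```c
/* Rough transfer-time estimate for an 8K x 8K image over PCIe.
 * Assumes 4 bytes per pixel and one host->device plus one
 * device->host copy; the sample's real traffic may differ. */
#include <stdio.h>

int main(void) {
    double mb_each_way = 8192.0 * 8192.0 * 4 / 1e6;  /* ~268 MB per copy */
    double gb_round_trip = 2 * mb_each_way / 1e3;    /* ~0.54 GB moved */
    printf("payload: ~%.0f MB each way\n", mb_each_way);
    printf("PCIe 2.1 x16 (~8 GB/s):  ~%.0f ms in transfers\n", 1e3 * gb_round_trip / 8);
    printf("PCIe 3.0 x16 (~16 GB/s): ~%.0f ms in transfers\n", 1e3 * gb_round_trip / 16);
    return 0;
}
```

Under those assumptions, doubling the link speed saves on the order of 30ms of transfer time, which is in the same ballpark as the 27ms improvement we measured.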

Ultimately, having more bandwidth is not only going to improve compute performance for AMD, but will also give the company a critical edge over NVIDIA for the time being. Kepler will no doubt ship with PCIe 3.0, but that’s months down the line. In the meantime, users and organizations with high-bandwidth compute workloads have Tahiti.

Comments

  • RussianSensation - Thursday, December 22, 2011 - link

    I think his comment still stands. In terms of a performance leap, at 925MHz speeds at least, this is the worst improvement from one major generation to the next since the X1950 XTX --> 2900 XT. Going from the 5870 to the 6970 is not a full generation, but a refresh. So for someone with an HD5870 who wants a 2x speed increase, this card isn't it yet.
  • jalexoid - Thursday, December 22, 2011 - link

    How's OpenCL on Linux/*BSD? Because I fail to see real high-performance use in Windows environments for any GPGPU.

    For GPGPU the biggest target should still be Linux/*BSD because they are the dominant platforms there....
  • R3MF - Thursday, December 22, 2011 - link

    "Among the features added to Graphics Core Next that were explicitly for gaming, the final feature was Partially Resident Textures, which many of you are probably more familiar with in concept as Carmack’s MegaTexture technology."

    Is this feature exclusive to gaming, or is it an extension of a virtualised GPU memory feature?

    i.e. if running Blender on the GPU via the Cycles renderer, will I be able to load scenes larger than local graphics memory?
  • Ryan Smith - Thursday, December 22, 2011 - link

    It's exclusive to graphics. Virtualized GPU memory is a completely different mechanism (even if some of the cache concepts are the same).

    With that said, I see no reason it couldn't benefit Blender, but the benefits would be situational. Blender would only benefit in situations where it can't hold the full scene, but can somehow hold the visible parts of the scene by using tiles.
  • R3MF - Friday, December 23, 2011 - link

    cheers Ryan
  • Finally - Thursday, December 22, 2011 - link

    ...the 2nd generation HD8870 feat. GCN, 3W idle consumption, and hopefully lower load consumption than my current HD6870. Just let a company like Sapphire add a silent cooler and I'm happy.
  • poohbear - Thursday, December 22, 2011 - link

    Btw why didn't AnandTech overclock this card? It overclocks like a beast according to all the other review sites!
  • Esbornia - Thursday, December 22, 2011 - link

    Cause they want you to think this card sucks. Come on guys, everybody on the internet knows this site sucks for reviews that are not of Intel products.
  • SlyNine - Thursday, December 22, 2011 - link

    lol troll. This site has preferred whoever had the advantage in whatever area. They will do a follow-up on its OCing, and when they first show a card they show it at stock only.

    I do not OC my video cards; what's the point of adding 5% more gain in games that are running maxed anyway?
  • RussianSensation - Thursday, December 22, 2011 - link

    Is this comment supposed to be taken seriously? Go troll somewhere else.
