PCI Express 3.0: More Bandwidth For Compute

It may seem like it’s still fairly new, but PCI Express 2.0 is actually a relatively old addition to motherboards and video cards. AMD first added support for it with the Radeon HD 3870 back in late 2007, so it’s been nearly 4 years since video cards made the jump. PCI Express 3.0, meanwhile, has been in the works for some time now, and although it hasn’t been 4 years, it feels like it has been much longer. PCIe 3.0 motherboards only finally became available last month with the launch of the Sandy Bridge-E platform, and now the first PCIe 3.0 video cards are becoming available with Tahiti.

But at first glance it may not seem like PCIe 3.0 is all that important. Additional PCIe bandwidth has proven to be generally unnecessary for gaming, as single-GPU cards typically benefit by only a couple of percent (if at all) when moving from PCIe 2.1 x8 to x16. There will of course come a time when games need more PCIe bandwidth, but right now PCIe 2.1 x16 (8GB/sec) handles the task with room to spare.

So why is PCIe 3.0 important? It’s not the games, it’s the computing. GPUs have a great deal of internal memory bandwidth (264GB/sec on Tahiti; more with cache), but shuffling data between the GPU and the CPU is a high-latency, heavily bottlenecked process that tops out at 8GB/sec under PCIe 2.1. And since GPUs are still specialized devices that excel at parallel code execution, there are many workloads that need to constantly move data between the GPU and the CPU to maximize parallel and serial code execution. As it stands today, GPUs are really only well suited for workloads that involve sending work to the GPU and keeping it there; heterogeneous computing is a luxury there isn’t bandwidth for.
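To put rough numbers on that bottleneck, here is a toy cost model, entirely our own illustration with a hypothetical 256MB working set (not an AMD figure), comparing one CPU-to-GPU-and-back trip over PCIe 2.1 x16 against a single pass over the same data in Tahiti’s local memory:

```c
#include <stdio.h>

/* Toy cost model (our illustration, not an AMD figure): a heterogeneous
 * workload that ping-pongs a working set between CPU and GPU pays the
 * bus cost twice per pass, versus one pass over local memory. */
int main(void)
{
    const double set_gb  = 0.25;   /* hypothetical 256MB working set */
    const double bus_gbs = 8.0;    /* PCIe 2.1 x16, one direction    */
    const double mem_gbs = 264.0;  /* Tahiti local memory bandwidth  */

    double round_trip_ms = 2.0 * set_gb / bus_gbs * 1000.0;  /* to GPU and back */
    double local_pass_ms = set_gb / mem_gbs * 1000.0;        /* one local read  */

    printf("PCIe round trip: %.1f ms vs. local pass: %.2f ms (~%.0fx slower)\n",
           round_trip_ms, local_pass_ms, round_trip_ms / local_pass_ms);
    return 0;
}
```

Under these assumptions the round trip costs about 62.5ms against under 1ms for a local pass, which is why keeping work resident on the GPU is currently the only practical option.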

The long-term solution of course is to bring the CPU and the GPU together, which is what Fusion does. CPU/GPU bandwidth in Llano alone is over 20GB/sec, and latency is greatly reduced because the CPU and GPU are on the same die. But that doesn’t change the fact that AMD also wants to bring some of these same benefits to discrete GPUs, which is where PCIe 3.0 comes in.

With PCIe 3.0, transport bandwidth is again being doubled, from 500MB/sec per lane in each direction to 1GB/sec per lane in each direction, which for an x16 device means doubling the available bandwidth from 8GB/sec to 16GB/sec. This is accomplished by increasing the frequency of the underlying bus from 5 GT/sec to 8 GT/sec, while decreasing encoding overhead from 20% (8b/10b encoding) to roughly 1.5% through the use of a far more efficient 128b/130b encoding scheme. Latency, meanwhile, doesn’t change – it’s largely a product of physics and physical distances – but merely doubling the bandwidth can greatly improve performance for bandwidth-hungry compute applications.
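The arithmetic is easy to verify. A minimal C sketch of the per-lane math, using only the transfer rates and encoding efficiencies above (note that gen 3 works out to roughly 985MB/sec per lane, which the spec rounds to 1GB/sec):

```c
#include <stdio.h>

/* Per-lane bandwidth = transfer rate x encoding efficiency / 8 bits-per-byte */
int main(void)
{
    double gen2 = 5.0e9 * (8.0 / 10.0)    / 8.0;  /* 8b/10b:    20% overhead   */
    double gen3 = 8.0e9 * (128.0 / 130.0) / 8.0;  /* 128b/130b: ~1.5% overhead */

    printf("PCIe 2.1: %.0f MB/s per lane, %.1f GB/s at x16\n",
           gen2 / 1e6, gen2 * 16.0 / 1e9);
    printf("PCIe 3.0: %.0f MB/s per lane, %.1f GB/s at x16\n",
           gen3 / 1e6, gen3 * 16.0 / 1e9);
    return 0;
}
```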

As with any specialized change like this, the benefit is going to depend heavily on the application being used. However, AMD is confident that there are applications that will completely saturate PCIe 3.0 (and then some), and it’s easy to imagine why.

Even among our limited selection of compute benchmarks we found something that directly benefits from PCIe 3.0. AESEncryptDecrypt, a sample application from AMD’s APP SDK, demonstrates AES encryption performance by running it on square image files. Throwing it a large 8K x 8K image not only creates a lot of work for the GPU, but a lot of PCIe traffic too. In our case simply enabling PCIe 3.0 improved performance by 9%, from 324ms down to 297ms.
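For readers who want to see how much of that time is the bus itself, a hypothetical OpenCL micro-benchmark along these lines would time a blocking host-to-device copy of an 8K x 8K buffer. This is our own sketch, not AMD’s sample code; the 4-bytes-per-pixel buffer size is an assumption, and error handling is omitted for brevity:

```c
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

/* Hypothetical micro-benchmark: time one blocking host->device copy of an
 * 8K x 8K image (assuming 4 bytes/pixel = 256MB) to see how much wall time
 * PCIe transfers alone contribute to a workload like AESEncryptDecrypt. */
int main(void)
{
    const size_t bytes = 8192UL * 8192UL * 4;  /* 8K x 8K, 4 bytes/pixel */
    void *host = malloc(bytes);

    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, device,
                                              CL_QUEUE_PROFILING_ENABLE, NULL);
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_ONLY, bytes, NULL, NULL);

    /* Blocking write so the event covers the whole transfer */
    cl_event ev;
    clEnqueueWriteBuffer(q, buf, CL_TRUE, 0, bytes, host, 0, NULL, &ev);

    cl_ulong start, end;  /* profiling timestamps are in nanoseconds */
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_START,
                            sizeof(start), &start, NULL);
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_END,
                            sizeof(end), &end, NULL);

    double secs = (end - start) / 1e9;
    printf("copied %zu MB in %.3f ms (%.2f GB/s)\n",
           bytes >> 20, secs * 1e3, bytes / secs / 1e9);

    clReleaseEvent(ev);
    clReleaseMemObject(buf);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    free(host);
    return 0;
}
```

On a PCIe 2.1 x16 link a copy like this should land near the 8GB/sec ceiling; on PCIe 3.0 it has 16GB/sec of headroom, which is where the AESEncryptDecrypt gain comes from.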

Ultimately having more bandwidth is not only going to improve compute performance for AMD, but will give the company a critical edge over NVIDIA for the time being. Kepler will no doubt ship with PCIe 3.0, but that’s months down the line. In the meantime users and organizations with high bandwidth compute workloads have Tahiti.

Comments

  • haukionkannel - Thursday, December 22, 2011 - link

    Well, the 7970 and other GCN-based new cards are not as driver dependent as those older Radeons, so the improvements are not going to be as great, but surely there will be some! The gap between the 580 or 6970 vs the 7970 is going to get wider, but do not expect steps as big as the ones the 6970 got via new sets of drivers.
  • Ryan Smith - Thursday, December 22, 2011 - link

    This is actually an excellent point. Drivers will still play a big part in performance, but with GCN the shader compiler in particular is no longer the be-all and end-all of shader performance, as the CUs can do their own scheduling.
  • CeriseCogburn - Thursday, March 8, 2012 - link

    I hate to say it but once you implement a 10% IQ cheat, it's tough to do it again and get away with it again in stock drivers.
    I see the 797x has finally got something to control the excessive shimmering... that's about 5 years of fail finally contained... that I've more or less been told to ignore... until the 100+ gig zip download here... to prove AMD has at least finally dealt with one IQ epic fail... (of course all the reviewers claim there are no differences all the time - after pointing out the 10% cheat, then forgetting about it, having the shimmer, then "not noticing it in game" - etc).
    I'm just GLAD AMD finally did something about that particular one of their problems.
    Hallelujah!
    Now some PhysX (fine, Bullet or OpenCL, but for Pete's sake NVIDIA is also ahead on both of those!) and AA working even when cranking it to 4X plus would be great... hopefully their new arch CAN DO.
    If I get a couple of 7970s, am I going to regret it, is my question - how much still doesn't work and/or is inferior to NVIDIA... I guess I'll learn to ignore it all.
  • IceDread - Thursday, December 22, 2011 - link

    It's a good card, but for me it's not worth it to upgrade from a 5970 to a 7970. Looks like that would be about the same performance.
  • Scali - Thursday, December 22, 2011 - link

    This is exactly the reason why I made Endless City available for Radeons:
    http://scalibq.wordpress.com/2010/11/25/running-nv...

    Could you run it and give some framerate numbers with FRAPS or such?
  • Boissez - Thursday, December 22, 2011 - link

    What many seem to be missing is that it is actually CHEAPER than the current street prices on the 3GB-equipped GTX 580. IOW it offers superior performance, features, thermals, etc. at a lower price than the current gen.

    What AMD should do is get a 1.5GB model out at $450 ASAP.
  • SlyNine - Thursday, December 22, 2011 - link

    Looks like I'll be sticking with my 5870. I upgraded from two 8800GTs (which in SLI never functioned quite right because they were hitting over 100C even with aftermarket HSFs) and enjoyed over 2x the performance.

    When I upgraded from a 1900XT to the 8800GTs, same thing: 800XT to 1900XT, 9700 Pro to 800XT, (NVIDIA) 4200 to 9700 Pro. The list goes on back to my first GeForce 256 card.

    What's the point? My 5870 is 2(!) generations behind the 7970, yet this would be the worst dollars-per-performance increase yet. Bummer - I really want something to drive a new 120Hz monitor, if I ever get one. But then that's kind of dependent on whether or not a single GPU can push it.
  • Finally - Thursday, December 22, 2011 - link

    Since when do top-of-the-line cards give you the best FPS/$?
    For the last few months the HD6870 and HD6850 were leading all those comparisons by quite some margin. The HD7970 will not change that.
  • SlyNine - Thursday, December 22, 2011 - link

    If you read my post, you will notice that I'm comparing it to the improvements I have paid for in the past.

    40-60% better than a 2-year-old 5870 is much worse than I have seen so far, considering that it's not just one generation but 2 generations beyond, and for $500+ to boot. This is the worst upgrade for the cost I have seen...
  • SlyNine - Thursday, December 22, 2011 - link

    The 6870 would not lead in cost per performance upgrade at all; it would be in the negatives for me.
