Update: Catalyst 12.11 has been released (Release Notes)

As October winds to a close, the consumer electronics industry is making its final preparations for the holiday shopping season. Among this week's events are the Windows 8 launch – which itself includes a bevy of new computers and tablets – an Apple product refresh, and some other things that should surprise us all. The video card industry is no different, and with AMD and NVIDIA having wrapped up their product launches earlier this month, the focus now shifts to the competitive positioning of existing products through the rest of the year.

AMD’s holiday plans will take a two-pronged approach. The first prong (and likely of greater interest to most of our readers) is the release of a new performance-optimized driver branch for AMD's Graphics Core Next hardware. The second prong is what can only be described as an incredibly aggressive series of game bundles, with AMD capitalizing on their much improved developer relations to offer vouchers for multiple different games. By improving their performance relative to NVIDIA, maintaining their existing aggressive pricing, and further tilting the value proposition their way through better bundles, AMD is going all out for the holidays in a way we haven’t seen in years.

Catalyst 12.11 Performance

We’ll kick off this quick look at AMD’s holiday plans with the new driver itself. AMD’s official designation for this driver is Catalyst 12.11 (9.10.8), which is based on a newer code branch than AMD’s previously released Catalyst 12.9 driver (9.1.0.0). AMD sampled an early beta to the press ahead of today’s announcement, and they will be posting a slightly newer build of this driver publicly at around 3pm today, with a final WHQL release slated to appear next month.

So what does Catalyst 12.11 bring to the table? In short, this is the third major performance driver for the GCN architecture, building upon the gains AMD already picked up with Catalyst 12.2 and Catalyst 12.7. Like those previous drivers, Catalyst 12.11 mixes game-specific optimizations with across-the-board optimizations to better utilize the GCN architecture. While there’s no such thing as a “regular” release schedule for these kinds of major performance drivers, it’s rare to see three of them in a single calendar year, so this ended up being a pleasant surprise.

To get an idea of how AMD’s new driver performs and how it compares to earlier drivers, we’ve charted the performance of the Radeon HD 7970 (Tahiti), Radeon HD 7870 (Pitcairn), and Radeon HD 7770 (Cape Verde) on the 12.11 drivers, the 12.7 drivers, and their respective launch drivers. For the 7970 this means going back to the very start of GCN (Dec 2011), while the 7870 and 7770 launched with drivers based on Catalyst 12.2.

At 1920x1200 we’re seeing a roughly 5% across-the-board performance improvement for both the 7970 and the 7870. Every game except StarCraft II sees at least a marginal improvement here, the lone exception being due to the previous issues we’ve run into with the 1.5 patch. The 7770 also sees some gains, but they aren’t quite as great as with AMD’s other cards; the average gain is just 4% at 1680x1050, with gains in individual games being shallower on the 7770 than they are on the other cards.
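The across-the-board figures here are simple averages of per-game gains. As a quick illustration of that arithmetic – using hypothetical frame rates, not our measured results – a minimal sketch:

```python
# Illustrative arithmetic only: the frame rates below are hypothetical
# placeholders, not our benchmark numbers.
def average_gain(old_fps, new_fps):
    """Percent gain per game, plus the arithmetic mean across the suite."""
    gains = {game: (new_fps[game] / old_fps[game] - 1.0) * 100.0
             for game in old_fps}
    return gains, sum(gains.values()) / len(gains)

old = {"DiRT 3": 60.0, "Shogun 2": 50.0, "Battlefield 3": 55.0}
new = {"DiRT 3": 64.0, "Shogun 2": 53.0, "Battlefield 3": 65.0}
per_game, avg = average_gain(old, new)
```

A lopsided gain in one title (as with Battlefield 3 in this driver) can noticeably pull up the suite-wide average, which is why we also break out per-game numbers.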

Interestingly, even on the 7970 the largest gains come at 1920x1200 rather than 2560x1600. The latter is the more GPU-limited resolution, and it’s where we’d typically expect to see the largest gains, but that’s not what’s happening here. Most likely AMD’s performance improvements are targeting shading/texturing performance rather than ROP/memory performance, in which case the highest resolutions – where we’re already more likely to be ROP/memory bound – would be the resolutions least likely to benefit. This isn’t necessarily the best outcome, since it’s at the highest resolutions that we need performance the most, but with most gamers still on 1920x1080 (even with cards like the 7970) it is admittedly the more useful outcome.
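The resolution argument here ultimately comes down to pixel throughput: 2560x1600 pushes roughly 78% more pixels per frame than 1920x1200, so ROP and memory bandwidth limits bite hardest there. The back-of-the-envelope math:

```python
# Pixel counts behind the ROP/memory-bound reasoning: a higher resolution
# pushes proportionally more pixels through the ROPs and memory bus each
# frame, so shader-side optimizations help it the least.
pixels_1920 = 1920 * 1200          # 2,304,000 pixels per frame
pixels_2560 = 2560 * 1600          # 4,096,000 pixels per frame
ratio = pixels_2560 / pixels_1920  # ~1.78x the per-frame ROP/memory load
```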

Meanwhile, as with most major performance drivers, even when performance is up across the board the biggest gains are concentrated in a handful of games, and Catalyst 12.11 is no exception. Among the games in our test suite, DiRT 3, Shogun 2, and Battlefield 3 see the greatest improvements, with the former two picking up 6-7% each.

But it’s Battlefield 3 that really takes the cake: the performance improvement from Catalyst 12.11 ranges from 13% for the 7770 at 1680 to a whopping 29% for the 7970 at 1920. This makes Catalyst 12.11 a very special driver for AMD – not only are performance improvements of over 20% particularly rare, but Battlefield 3 has long been a thorn in AMD’s side. NVIDIA’s hardware has until now always outperformed AMD’s equivalent hardware here, and as BF3 has remained an extremely popular multiplayer game it’s been one of the most important games for high-end video card buyers. In other words, it has always been the game AMD could least afford to lose at.

With the 12.11 drivers AMD has completely eradicated their performance deficit in BF3, with the 7970, 7870, and 7770 now performance-competitive with (if not a hair faster than) their respective NVIDIA GTX 600 counterparts in our BF3 benchmark. AMD has told us that the specific performance benefits are map-dependent, with our results falling at the high end of their guidance; not every map will see the same 20%+ performance gains, but some will, while others will be closer to the 10% range. Much like our overall performance averages, the largest gains come at 1920 with FXAA, while 2560 with FXAA and 1920 with MSAA see smaller gains, once again hinting that AMD’s optimizations are on the shader/texture side rather than the ROP/memory side.

For our part, we have long theorized that the Frostbite 2 engine’s heavy use of deferred rendering techniques – particularly its massive G-buffer – was the factor AMD was struggling with. While these results don’t really further validate or invalidate that theory, what is clear is that AMD has fixed their Frostbite 2 performance problem. Given that Frostbite 2 will be used in at least a couple more games, including the AMD Gaming Evolved title Medal of Honor Warfighter, this was an important engine for AMD to finally conquer.

For their part, AMD hasn’t told us much about what they’ve done to optimize their drivers. What we do know is that it’s not driver command lists (an optional DX11 feature that NVIDIA has supported for some time), so it has to be something else. AMD has briefly mentioned surface mapping and memory mapping optimizations, but it’s not clear exactly what they’ve done there, or whether those are the only optimizations.

Finally, these appear to be clean optimizations, as image quality has been held constant in most of our games (e.g. Battlefield 3). The sole exception is Skyrim, and this is something we’re certain is an unrelated bug. On our 7970 we appear to be missing a lighting pass, but only on our 7970; on our 7870 with the exact same settings everything is rendered correctly. AMD tells us that they haven’t seen this issue in-house (it would admittedly be hard to miss), so this may be some esoteric issue; due to the short preview window and our own testing time constraints AMD hasn’t had time to look into it further.

Holiday Game Bundles

As we mentioned before, the second prong of AMD’s holiday strategy is a rather significant holiday game bundle. Both AMD and NVIDIA have been placing a greater focus on game bundles this year, and for the forthcoming holiday AMD will be going all out. Typically we see AMD bundle a single AAA game – or in the case of the Three For Free promotion earlier this year, one AAA game along with a couple of cheaper games – but for the Never Settle Game Bundle promotion AMD will be bundling upwards of three AAA games with their most expensive cards.

The terms of the Never Settle Game Bundle can be a bit confusing, so let’s start with a chart.

AMD will be offering games as part of the Never Settle Game Bundle with the 7770, the 7800 series, and the 7900 series. Depending on the card, AMD will be offering Far Cry 3, Hitman: Absolution, and Sleeping Dogs as free games through retailer vouchers. Furthermore, they will also be offering a 20% off (~$10) voucher for Medal of Honor Warfighter. Ultimately, every card that is part of this bundle comes with the MoH voucher and Far Cry 3; purchasing a pair of cards for CrossFire adds Hitman, and purchasing any 7900 series card adds both Hitman and Sleeping Dogs.
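To make the tier structure concrete, the terms described above can be sketched as a small lookup. The function and its game-list structure are our own illustration of the chart, not anything official from AMD:

```python
# A sketch of the Never Settle tiers as described in this article; the
# function name and return format are hypothetical, for illustration only.
def never_settle_games(card_series, quantity=1):
    """Vouchers for a purchase: card_series is '7770', '7800', or '7900'."""
    if card_series not in ("7770", "7800", "7900"):
        return []
    # Every qualifying card gets the MoH discount voucher and Far Cry 3.
    games = ["Medal of Honor Warfighter 20% off voucher", "Far Cry 3"]
    if card_series == "7900":
        # Any 7900 series card adds both Hitman and Sleeping Dogs.
        games += ["Hitman: Absolution", "Sleeping Dogs"]
    elif quantity >= 2:
        # A CrossFire pair of lesser cards adds Hitman.
        games.append("Hitman: Absolution")
    return games
```

So a single 7900 series card already maxes out the bundle, while 7770 and 7800 series buyers need a CrossFire pair to pick up Hitman.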

AMD is hitting the bundles hard this year not only to steal some business from NVIDIA, but also – as with past bundles – to offer more value without actually having to cut video card prices. AMD of course pays only a fraction of the retail price for these games, so offering three games with a “value” of $170 costs them far less than the sticker price. Still, for the 7900 series in particular this is a larger, more valuable bundle than anything else we’ve seen from AMD in recent years. Combined with their prices and improved performance, it will put NVIDIA on the defensive, since NVIDIA does not currently have an active bundle on anything other than the GTX 650 Ti.

As always, however, the value of the bundle will be in the eye of the buyer. A bundle is not the same as a price cut, so these bundles will have the greatest value for buyers who are already interested in the games. To AMD’s credit, they’re focusing on games that have yet to be released, minimizing the number of buyers who might already own them; the tradeoff is that the last game in the bundle won’t be out until the end of November.

Finally, like AMD’s previous bundles this is a retailer/e-tailer driven offer, so buyers will need to check whether their store of choice is offering the bundle. AMD hasn’t set a hard date for when the bundle will go live, and since their marketing department is typically ahead of their promotions department by a week or two, we’d expect to see these bundles finally become active in November.

Pricing

As we mentioned before, AMD will not be making any further price adjustments for the holidays; the game bundles are intended to offset any need for them. As a result AMD’s pricing is roughly the same as it was earlier this month at the launch of the GeForce GTX 650 Ti. The 7970GE is priced against the GTX 680, the 7970 against the GTX 670, the 7950 against the GTX 660 Ti, the 7870 against the GTX 660, the 7850 ahead of the GTX 650 Ti, and the 7770 slightly ahead of the GTX 650. Given the combination of prices and game bundles, AMD is clearly intending to squeeze NVIDIA this year.

Fall 2012 GPU Pricing Comparison

AMD                    | Price     | NVIDIA
Radeon HD 7970GE       | $449      | GeForce GTX 680
Radeon HD 7970         | $389/$369 | GeForce GTX 670
Radeon HD 7950         | $309/$289 | GeForce GTX 660 Ti
Radeon HD 7870         | $229/$219 | GeForce GTX 660
Radeon HD 7850 2GB     | $189      |
Radeon HD 7850 1GB     | $169      | GeForce GTX 650 Ti 2GB
Radeon HD 7770         | $109      | GeForce GTX 650
Radeon HD 7750         | $99       | GeForce GT 640

Comments

  • CeriseCogburn - Friday, October 26, 2012 - link

    No, just correcting the AMD fanboy lies; it's a big job but someone should do it.
  • nleksan - Thursday, October 25, 2012 - link

    How can anyone say this is anything other than a GOOD thing? The people in charge, the ones with their fingers on the big red, and big green, "Release!" Buttons care about profits and market performance, which comes primarily from BUSINESS-SECTOR sales. I swear, the employee's of the companies must be laughing their behinds off at the baseless vitriol spewed all over the internet by the ignorant "fanboys" (and fangirls; no sexism here!), while wondering why in the world anyone would get so worked up about a company that they personally have ABSOLUTELY NOTHING TO DO WITH! I don't care if you support AMD or if you back nVidia or if you are a die-hard Matrox loyalist or what, the result is the same: you are spending your (likely limited) brain cells defending something that, in reality, is nothing more than a concept, a group of people who collaborate together to make themselves rich!

    The GPU market is no different than the CPU market, although I must say that I have seen the fanboyism die down much more in the latter, especially over the past year or two, while the former just gets worse! It is truly mind-boggling.
    We have three companies, really: Intel, AMD, nVidia. AMD competes against both, although to be fair, they do a pretty darn good job when you look at it that way: one company competing against two companies in two different sectors, each of whom has significantly (as in a LOT) more money/resources available, yet the one company is still remaining competitive to a degree, and certainly isn't bankrupt! I tip my hat to AMD for that, as they are the bravest of the three in many ways (the first and in my mind only REAL "FX" processors which introduced 64bit CPU's to the mainstream enthusiasts while at the same time outperforming Intel's 55% higher-clocked P4 Emergency Edition; their risky acquisition of ATi; their original implementation of the HyperTransport link, doing away with much potential bottlenecking; making dual-channel memory available and worthwhile; and of course, they have always been the ones to take the giant risk of designing a from-the-ground-up-new architecture and have, historically, always been about 3-4 years ahead of the curve). However, sometimes the risks don't pay off, and while we may look back in 5 years and see BD/PD as the beginning of a new era, or do the same for the A8-xxxx/A10-xxxxx APU's, the fact is that RIGHT NOW they are nowhere near the levels of performance offered by the premium Intel chips.
    They are doing quite well with their GPU's, although I fear that their entire GCN architecture was laid out too soon, and that they don't have the Ace in the Sleeve that they might need should nVidia decide to go all-out with the 7xx series of GPU's. And perhaps that is what nVidia has been counting on: they have GK100, which is a terrible design for gaming but could be "stuffed" onto a gaming platform card just as with Fermi should the need arise, but unless AMD has some giant surprise in store, they can continue to make money out of an architecture that has already been paid for.

    Anyway, I really, really, really don't think that the 7xx series, or rather the 780, will utilize the GK100 core as we know it. I have used the Tesla K20 briefly, and have handled it, and there is NO WAY that chip can be made to fit into a "reasonable" power envelope, nevermind the immense amount of cooling required!
    AMD made some very smart moves this round, while I do think that nVidia was either unable to produce the card they REALLY wanted or perhaps just lazy after the unrelenting beat-down they gave AMD with the 5xx series vs 6xxxx series "battle".
    They handled things much better than they did with the 6970, which could never compete with the 580 even when it had a frame-buffer size advantage; they learned from the mistakes they made, and (unless I state otherwise, I will be referring to only x970/x950 and x80/x70, the flagship cards from each make) they have become absolutely competitive with Team Green.
    Some examples of their forward progress:
    - AMD gave their cards a 384-bit GDDR5 Bus and 3GB of "the good stuff", while nVidia oddly took a step backward and settled for a 256-bit memory bus
    - AMD was first-to-market by a not-insignificant amount of time, which put pressure on their competition, who were (in all likelihood) having some yield issues to begin with
    - AMD has taken a very "holistic" approach to product development and integration, with their APU's being actually very advanced compared to anything else similar (oh wait... there is nothing else similar)
    - The idea of Hybrid Crossfire-X is ingenious, and while the implementation may need some work, it may become a savior to all those trying to build a "respectable" gaming machine on a budget especially if they are able to make a more well-rounded package with the high-end offering on-die GPU's equivalent to the ~7870/7950, as then a single $600 purchase (idk, but a hex-core 4.5Ghz CPU with 12MB L3 and a, say, 8870/8950 GPU with its own on-package frame-buffer would probably be not-cheap) would cover both CPU and GPU with only a motherboard and RAM left to purchase; of course, when the individual decides that he/she needs more horsepower, $250-350 for a PCIe x16 8950 3GB(4GB? 6GB?) would actually be a SECOND GPU despite there being just one card
    - The theoretical implementation of Hybrid CF-X given above would allow for either significant improvements in PCIe bandwidth use for GPU's for those wanting the absolute largest amount of power they can get...
    - Or it could free up a huge number of said-lanes for other devices such as 12-18x SATA/SAS 6Gbps Ports, 20+ USB3.0 native ports, Extreme-Quality Onboard Audio supporting 13.4ch surround, native x16 PCIe RAID Hardware Chip allowing for RAID0/1/5/6/10/50/60 from the board with zero CPU overhead and massive throughput, Integrated Solid State NAND Flash and/or micro-SSD's that are of low capacity (~32-64GB) and low cost ($20/ea) but can be Teamed by an on-board quad-core/hex-core SSD Controller so that the more you add the faster it gets (similar to RAID but based on the way an SSD works) while capacity rises as well, Significant reductions in overhead for LAN or other similar external connections, or any few of a thousand other things....

    I see the next true "revolution" in GPU tech coming when ALL cards are multi-GPU cards communicating over a practically bandwidth-free pathway with one another, and being able to utilize the entirety of the onboard memory in a shared manner. The only other way I can really see BIG improvements coming, and with 2560x1440/1600 becoming increasingly common not to mention 5760x1080/1200 (or larger) multi-display setups, is to overhaul the way in which Crossfire-X (and SLI) is performed; specifically, to allow the VRAM to "stack" so that having 3-way CFX/SLI with 3x 6GB cards doesn't equal 6GB of buffer but rather 18GB. As we head towards displays beyond 3-Megapixels, a "stacking" frame buffer COULD become vastly advantageous.
    Or, perhaps the way to do this is to prevent GPU's from accessing the HDD directly, and instead stick a large number of moderate-size SLC NAND modules and a controller on the GPU's PCB itself, allowing for textures to be loaded first into a 16-way SLC NAND "texture pool" via the system memory, and then allow the GPU to simply send a small Request Signal to the NAND controller regarding what it needs, at which point it is sent to the GDDR5/GDDR6 frame buffer after processing via an extremely short and thus extremely high-bandwidth parallel AND serial pathway. Remove the weakest link, allow the GPU and CPU to communicate much more freely yet with much reduced overhead, and wonderful things may happen.

    HOWEVER, I am no hardware/software engineer, so maybe I am just a crazy person. Well, okay, I AM a crazy person, but even a blind squirrel finds a nut ;)

    For the record, I am running a 3930K and GTX670 FTW in my newest, biggest build. My first self-built PC, which I still have and which is being completely "polished" and brought back to full strength, consists of an AMD FX-51 (even rarer due to it being one of the first 1,000 chips ever produced, period) and an ATI X800XT-PE. Both systems are/were absolute top-of-the-line for their time, and both overclock abnormally well (easily hit 5.1ghz on 3930K and 1400core/7800mem on 670, both using custom water; the FX-51 was 24/7/365 stable for 4yrs at 2.45Ghz with a circa-2003 air cooler and the X800XT-PE was at 604/1292 with the stock cooler and the simple addition of a PCI slot blower).
    I buy whatever gives me the best price:performance to price:longevity total ratio, and this time I felt Intel and Nvidia simply couldn't be beaten. I know that there is not a single TRADITIONALLY COOLED system made of AMD parts, with equivalent numbers of components (i.e. 1x/ea CPU/GPU) that can beat mine in anything and I am happy, yet I am also sad.
    I am sad because WE NEED for a "neck and neck" competition, or prices will be controlled not by market but by manufacturer, and innovation WILL stagnate.

    Let's all of us get a case of Red-Green colorblindness for a while and do what we can to help ensure that ALL OF US win in the end, what do you say?
  • CeriseCogburn - Friday, October 26, 2012 - link

    I read about the first paragraph, then I scrolled down, and saw the gigantic obviously totally hypocyryphal bs.
    You cared so much you nearly wrote a freaking book. Well, read it yourself, you're more obsessed than anyone else here, obviously.
    What a doof.
  • Gastec - Tuesday, November 13, 2012 - link

    I normally don't post but I had to come online just for you. You are a troll and you will be banned.
  • akamateau - Saturday, October 27, 2012 - link

    Does anyone else consider that NEXT is not only used by AMD as in AMD NEXT but evidently is also in the name of Microsoft's XBOX NEXT. Apparently XBOX 720 is out and XBOX NEXT is in. Is it because AMD NEXT is INSIDE?
