Battleforge: The First DX11 Game

As we mentioned in our 5870 review, Electronic Arts pushed out the DX11 update for Battleforge the day before the 5870 launched. As we had already left for Intel's Fall IDF, we were unable to take a look at it at the time, so now we finally have the chance.

As the first DX11 title, Battleforge makes very limited use of DX11's features, which is understandable given that both the hardware and the software are still brand-new. The only thing Battleforge uses DX11 for is Compute Shader 5.0, which replaces pixel shaders for calculating ambient occlusion. Notably, this is not a use that improves the image quality of the game; pixel shaders already produce this effect in Battleforge and other games. EA is simply using the compute shader as a faster way to calculate the ambient occlusion than a pixel shader.
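
To put that in concrete terms, here is a minimal sketch of what dispatching a compute shader SSAO pass looks like through the Direct3D 11 API. The shader and resource names (ssaoCS, depthSRV, aoUAV) and the 16x16 thread group size are our own illustrative assumptions, not anything taken from Battleforge itself:

#include <d3d11.h>

// Hypothetical SSAO dispatch. Assumes the HLSL side declares
// [numthreads(16, 16, 1)] and that the AO target is 1280x1024.
void RunSSAOCompute(ID3D11DeviceContext* ctx,
                    ID3D11ComputeShader* ssaoCS,
                    ID3D11ShaderResourceView* depthSRV,  // scene depth/normals
                    ID3D11UnorderedAccessView* aoUAV)    // AO output target
{
    ctx->CSSetShader(ssaoCS, nullptr, 0);
    ctx->CSSetShaderResources(0, 1, &depthSRV);
    ctx->CSSetUnorderedAccessViews(0, 1, &aoUAV, nullptr);

    // One thread group per 16x16 tile of the AO target. Unlike a pixel
    // shader pass, there is no fullscreen quad, no rasterizer, and no
    // render target; threads in a group can also share fetched depth
    // samples through group shared memory, which is where the speedup
    // over a pixel shader implementation comes from.
    ctx->Dispatch(1280 / 16, 1024 / 16, 1);

    // Unbind the UAV so the AO texture can be bound as an input later.
    ID3D11UnorderedAccessView* nullUAV = nullptr;
    ctx->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);
}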

The use of various DX11 features to improve performance is something we're going to see in more games than just Battleforge as additional titles pick up DX11, so this isn't in any way an unusual use of DX11. Effectively everything DX11 adds can already be done with existing pixel, vertex, and geometry shaders (we'll skip the discussion of Turing completeness), just not at an appropriate speed. The fixed-function tessellator is faster than the geometry shader for tessellating objects, and in certain situations, such as ambient occlusion, the compute shader is going to be faster than the pixel shader.
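
The dividing line between hardware that has these faster paths and hardware that doesn't is the Direct3D feature level. As a rough sketch, an engine can ask for feature level 11_0 at device creation and fall back to the pixel shader path on 10.x parts; the creation call below is the standard D3D11 one, while the fallback policy is our own assumption about how a game might handle it:

#include <d3d11.h>

// Minimal sketch: create a device at the highest feature level the GPU
// supports. Only feature level 11_0 guarantees the fixed-function
// tessellation stages (hull/domain shaders) and Compute Shader 5.0;
// on a 10.0/10.1 part an engine would fall back to pixel shader SSAO.
bool SupportsDX11Pipeline(ID3D11Device** outDevice)
{
    const D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_11_0,
        D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0,
    };
    D3D_FEATURE_LEVEL got = D3D_FEATURE_LEVEL_10_0;
    HRESULT hr = D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        levels, 3, D3D11_SDK_VERSION,
        outDevice, &got, nullptr);
    return SUCCEEDED(hr) && got == D3D_FEATURE_LEVEL_11_0;
}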

We ran Battleforge under both DX10/10.1 (pixel shader SSAO) and DX11 (compute shader SSAO), with and without SSAO, to look at the performance difference.

Update: We've finally identified the issue with our results. We've re-run the 5850, and now things make much more sense.

As Battleforge only uses the compute shader for SSAO, there is no difference in performance between DX11 and DX10.1 when we leave SSAO off. The real magic happens when we enable SSAO, in this case cranked up to Very High, a setting which clobbers all of the cards when run as a pixel shader.

The difference in using a compute shader is that the performance hit of SSAO is significantly reduced. As a DX10.1 pixel shader, SSAO lops off 35% of the performance of our 5850; calculated with a compute shader, that hit becomes 25%. Or to put it another way, switching from a DX10.1 pixel shader to a DX11 compute shader improved performance by 23% when using SSAO. This is what the DX11 compute shader will initially make possible: allowing developers to use effects that would otherwise be too slow on earlier hardware.

Our only big question at this point is whether a DX11 compute shader is really necessary here, or whether a DX10/10.1 compute shader could do the job. We know there are some significant additional features available in the DX11 compute shader, but it's not at all clear when they're necessary. Battleforge is an AMD-sponsored showcase title, so take an appropriate quantity of salt on this matter; other titles may not produce similar results.
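
For reference, the DX11 runtime does expose a downlevel compute shader profile (cs_4_0/cs_4_1) on DX10-class hardware, with tighter limits such as a single UAV and more restricted shared memory. The capability query below is the standard D3D11 one; the function name and the suggestion that Battleforge's kernel might fit within those limits are our own speculation:

#include <d3d11.h>

// Sketch: ask the DX11 runtime whether a DX10/10.1-class GPU exposes the
// downlevel cs_4_x compute profile. If Battleforge's SSAO kernel fit
// within cs_4_x's limits, it could in principle run on older hardware too.
bool SupportsComputeShader4x(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_D3D10_X_HARDWARE_OPTIONS opts = {};
    HRESULT hr = device->CheckFeatureSupport(
        D3D11_FEATURE_D3D10_X_HARDWARE_OPTIONS, &opts, sizeof(opts));
    return SUCCEEDED(hr) &&
           opts.ComputeShaders_Plus_RawAndStructuredBuffers_Via_Shader_4_x;
}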

At any rate, even with the lighter performance penalty from using the compute shader, 25% for SSAO is nothing to sneeze at. AMD's press shot is one of the best-case scenarios for the use of SSAO in Battleforge, and in the game itself the effect is very hard to notice. For a 25% drop in performance, the slightly improved visuals are hard to justify.

95 Comments

  • Mills - Wednesday, September 30, 2009 - link

    AMD's graphics forums and the number of bugfixes and known issues posted for each driver release say otherwise.

    NVIDIA is not doing so well lately with their drivers either though, especially where Vista/7 is concerned.

    Neither company can seem to get proper fixed-aspect-ratio GPU scaling working in Vista/7. It has been broken since Forceware 169.04, and my friend tells me it's broken in a recent Catalyst release as well. What the hell is going on?
  • michal1980 - Wednesday, September 30, 2009 - link

    I remember a few years back when Green was on top and Red was dying. Looks like the tables have turned.

    For the gamer/consumer, IMHO it's a win-win.

    My new PC next year just might have an AMD card. I couldn't care less about brand loyalty. I buy whatever gives me the most bang for the buck at the time I build.
  • Lavacon - Wednesday, September 30, 2009 - link

    I would love to see this card added to the X58 vs P55 GPU article from yesterday. Although I suspect the results would be much the same.
  • vailr - Wednesday, September 30, 2009 - link

    Please consider adding a Radeon 4770 card to your comparison chart. I believe it's also "TSMC 40nm".
  • chizow - Wednesday, September 30, 2009 - link

    Instead of just marginally reducing performance by dropping clockspeeds and available bandwidth, they artificially neutered their parts by cutting out a few SIMD clusters similar to Nvidia's MO of cutting TPC units.

    Your conclusion doesn't seem to draw this parallel: the cut SIMDs probably don't factor much into the overall performance, because the 1440 shaders that are left are enough and the full 1600 aren't being fully utilized in most games today. So instead the 5850 scales more closely to the 15% decrease in clockspeed than to the combined 23% for clockspeed and SIMD units.

    The 5870 soft launch followed by today's 5850 paper launch also says quite a bit about 40nm yields in light of their artificial die-neutering approach. Reports of AMD shipping *FOUR* 5870s for every *ONE* 5850 indicate 40nm yields are quite good. Given the high demand and apparently inadequate supply, it makes absolutely no sense for AMD to sell these perfectly capable dies at a $100 discount when they can get that much more for them on the 5870.
  • chrone - Wednesday, September 30, 2009 - link

    With beta drivers it can already beat NVIDIA's fastest single-GPU card; this card is going to be awesome at some point in the future, without breaking the bank in cost or power consumption.

    i'm falling in love with this new babe already. :D

  • Razer2911 - Wednesday, September 30, 2009 - link

    I think the current beta drivers are holding back the performance of these cards, and it could improve by another 10-15%. Maybe ATI is holding it back just in case nVidia brings in some surprise (really doubt it).
    Strangely, the temps mentioned for both cards are inconsistent with other reviews on the web, with AnandTech's being the lowest. Maybe it would be better to post ambient-to-idle/load temps in all your reviews.
  • LeadSled - Wednesday, September 30, 2009 - link

    Nvidia's tech is soon to be so outdated that their cards will not be a deal at any price. They cannot even do the full DX10 spec, let alone any DX11, while I do believe ATI has been able to do some DX11 functions since the X1900. I hope Nvidia gets their act together and survives, but unlike when 3D was new and Nvidia pushed the new-tech envelope, they have been holding progress to a standstill. Nvidia should put up and play the game, or get out of the game and make PhysX cards.

    I do hope they create a 5850X2. These new RV870 GPUs look like they will work well in a 2GB version. I've heard the 5870X2 will be a 4GB card; let's just hope. I know I would pay $600-$700 for that baby without a thought.
  • TA152H - Wednesday, September 30, 2009 - link

    I am with you; I think NVIDIA needs to go out of business. I think they will.

    They are at a huge disadvantage without a CPU. Intel is moving to combined CPU/GPU parts soon, and AMD has had this planned for a long time. With Intel already precluding NVIDIA from making chipsets for Nehalem-based computers, and ATI making far better GPUs, NVIDIA is running on momentum now, and that runs out over time.

    NVIDIA might defy Intel and make a chipset for Nehalem anyway. While most of us wouldn't even consider a crappy NVIDIA chipset, the general market has no idea how problematic they are. They buy from HP and Dell, and those companies use NVIDIA. I am surprised at how many of these I see, so it's a good business for NVIDIA.

    Right now, Lynnfield is essentially irrelevant and Bloomfield is a niche product. Neither is a particularly important product as far as the market is concerned, so NVIDIA isn't really paying a price. Core 2, or an AMD platform, is still the most attractive option for mainstream America. Clarkdale, with all its flaws, should sell especially well, and even if NVIDIA does decide to make a chipset, it won't sell. No one who knows much about computers will buy an NVIDIA chipset, so they sell mainly through HP and Dell or similar companies. HP and Dell are not going to want to pay extra for an NVIDIA GPU, since the processor comes with one, and really it's only the southbridge that's up for grabs now. This would make a much smaller contribution to their bottom line. It's all bad for them.

    Yes, they can sell into the Bloomfield space if they come up with a good discrete card. But how big is this market? Lynnfield should be even smaller, being brain-damaged and second-best, but not particularly cheap like Clarkdale. Also, it's unlikely someone will want a high-priced video card, or two, and pair it with anything but the best platform.

    So, where does NVIDIA sell into? Core 2 will go away, Bloomfield and Lynnfield will have relatively small market shares, and Clarkdale should sell especially well in the markets where NVIDIA chipsets sell well now.

    Anand said Clarkdale was the replacement for Conroe, which caught a lot of flak because he worded it poorly. But in a way he's right about Clarkdale replacing the Core 2 as the platform for the mainstream market, and Clarkdale is better in almost all respects. It's dual core, but runs four threads. Sure, they put the MMU in the wrong place, but it's still better than having it outside the processor, and the GPU is better than the G45. On top of this, it should be cheaper. Core 2 duals, along with Pentiums, etc., still sell the best. Clarkdale is better and should be cheaper, so it's going to dominate the market in the same way. Bloomfield is king of performance and will have a place, but not a big one. Lynnfield is a good combination of power and decent performance; it's also not a big space, although the i5 750 might do well and shouldn't be discounted. The big space will be Clarkdale. NVIDIA is going to be hurt by it. Hopefully, fatally.
  • Pastuch - Monday, October 5, 2009 - link

    I disagree with you on ALL points. I buy two video cards per year, and I own an almost equal number of ATI and NVIDIA cards. I just bought a 4890, and next will be a 5850.

    Nvidia should definitely NOT go out of business. Competition drives creativity and reduces prices for consumers. I would hardly say Nvidia is doing badly at the moment. The bulk of aftermarket video cards still come from Nvidia, and they are still ahead of ATI in market share. They are also a marketing juggernaut; "The Way It's Meant to Be Played" is a very powerful marketing tool. That being said, I expect a firm advantage for ATI over the next six months.

    I have owned several Nvidia chipset motherboards, and they have all been exceptionally reliable and great overclockers. I've never had driver issues with them. I find Clarkdale underwhelming. G45 didn't live up to all of its promises (bitstreaming), and I seriously doubt its successor will either. G45 has the fewest features and the least customizability of any onboard solution I have used yet. Intel has a long way to go on integrated video if they ever want to capture the enthusiast market.

    ATI has been doing a stellar job lately. The 5850 is every home theater guy's dream: inexpensive, bitstreams HD content, and fits in most HTPC cases. The latest GTX cards and the 5870 are too long. Video cards should be less than 10 inches long!
