Battleforge: The First DX11 Game

As we mentioned in our 5870 review, Electronic Arts pushed out the DX11 update for Battleforge the day before the 5870 launched. As we had already left for Intel’s Fall IDF, we were unable to take a look at it at the time, so now we finally have the chance.

As the first DX11 title, Battleforge makes very limited use of DX11’s features, which is to be expected given that both the hardware and the software are still brand-new. The only thing Battleforge uses DX11 for is Compute Shader 5.0, which replaces the pixel shader previously used to calculate ambient occlusion. Notably, this is not a use that improves the game’s image quality; pixel shaders already produce this effect in Battleforge and other games. EA is simply using the compute shader as a faster way to calculate ambient occlusion than a pixel shader.

The use of various DX11 features to improve performance is something we’re going to see in more games than just Battleforge as additional titles pick up DX11, so this isn’t in any way an unusual use of DX11. Effectively anything DX11 adds can already be done with existing pixel, vertex, and geometry shaders (we’ll skip the discussion of Turing completeness), just not at an appropriate speed. The fixed-function tessellator is faster than the geometry shader for tessellating objects, and in certain situations, such as ambient occlusion, the compute shader is going to be faster than the pixel shader.
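To make the distinction concrete, here is a minimal sketch of what swapping a full-screen pixel shader SSAO pass for a Compute Shader 5.0 dispatch looks like at the Direct3D 11 API level. This is not Battleforge’s actual code (EA has not published it); the function names, shader objects, resource views, and thread-group size are hypothetical stand-ins, and the shaders are assumed to have been compiled and created elsewhere.

```cpp
#include <d3d11.h>

// Hypothetical thread-group size; it must match the [numthreads(8,8,1)]
// declaration in the HLSL SSAO kernel.
static const UINT kGroupSize = 8;

// DX10/10.1-style path: SSAO computed by a pixel shader in a full-screen pass.
// Assumes fullscreenVS generates its own triangle from SV_VertexID.
void RunSsaoPixelPass(ID3D11DeviceContext* ctx,
                      ID3D11VertexShader* fullscreenVS,
                      ID3D11PixelShader* ssaoPS,
                      ID3D11RenderTargetView* aoTarget,
                      ID3D11ShaderResourceView* depthNormals)
{
    ctx->OMSetRenderTargets(1, &aoTarget, NULL);
    ctx->VSSetShader(fullscreenVS, NULL, 0);
    ctx->PSSetShader(ssaoPS, NULL, 0);
    ctx->PSSetShaderResources(0, 1, &depthNormals);
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    ctx->Draw(3, 0);  // one full-screen triangle, one AO evaluation per pixel
}

// DX11 path: the same AO term produced by a cs_5_0 kernel writing to a UAV.
void RunSsaoComputePass(ID3D11DeviceContext* ctx,
                        ID3D11ComputeShader* ssaoCS,
                        ID3D11UnorderedAccessView* aoOutput,
                        ID3D11ShaderResourceView* depthNormals,
                        UINT width, UINT height)
{
    ctx->CSSetShader(ssaoCS, NULL, 0);
    ctx->CSSetShaderResources(0, 1, &depthNormals);
    ctx->CSSetUnorderedAccessViews(0, 1, &aoOutput, NULL);

    // One thread per pixel, rounded up to whole thread groups.
    ctx->Dispatch((width + kGroupSize - 1) / kGroupSize,
                  (height + kGroupSize - 1) / kGroupSize, 1);

    // Unbind the UAV so the AO texture can be sampled in the lighting pass.
    ID3D11UnorderedAccessView* nullUAV = NULL;
    ctx->CSSetUnorderedAccessViews(0, 1, &nullUAV, NULL);
}
```

The win comes less from the API calls than from what the kernel can do: a compute shader can cache depth and normal samples in thread-group shared memory and reuse them across neighboring pixels, whereas a pixel shader has to re-fetch them for every pixel it shades.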

We ran Battleforge both with DX10/10.1 (pixel shader SSAO) and DX11 (compute shader SSAO) and with and without SSAO to look at the performance difference.

Update: We've finally identified the issue with our results. We've re-run the 5850, and now things make much more sense.

As Battleforge only uses the compute shader for SSAO, there is no difference in performance between DX11 and DX10.1 when we leave SSAO off. The real magic happens when we enable SSAO, in this case cranked up to Very High, which clobbers all of the cards when it runs as a pixel shader.

The difference when using a compute shader is that the performance hit of SSAO is significantly reduced. As a DX10.1 pixel shader, SSAO lobs off 35% of the performance of our 5850; calculated with a compute shader, that hit becomes 25%. Or to put it another way, switching from a DX10.1 pixel shader to a DX11 compute shader improved performance by 23% with SSAO enabled. This is what the DX11 compute shader will initially make possible: allowing developers to go ahead and use effects that would otherwise be too slow.

Our only big question at this point is whether a DX11 compute shader is really necessary here, or whether a DX10/10.1 compute shader could do the job. We know there are some significant additional features available in the DX11 compute shader, but it's not at all clear when they're necessary. Battleforge is an AMD-sponsored showcase title, so take an appropriate quantity of salt on this matter; other titles may not produce similar results.
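For what it's worth, Direct3D 11 does expose a downlevel compute shader profile (cs_4_0/cs_4_1) on DX10-class hardware as an optional capability, which is what a developer would have to target to bring this kind of optimization to the existing install base. The sketch below, with a function name of our own invention rather than anything from Battleforge, shows how an application can check for it; CS 5.0 still adds things such as larger thread-group shared memory, atomics, and more flexible UAV access that the downlevel profile lacks.

```cpp
#include <d3d11.h>

// Minimal sketch: ask whether this device can run compute shaders at all,
// either as full CS 5.0 (feature level 11_0) or via the optional downlevel
// cs_4_x profile on DX10-class hardware.
bool SupportsComputeShaders(ID3D11Device* device)
{
    if (device->GetFeatureLevel() >= D3D_FEATURE_LEVEL_11_0)
        return true;  // full Compute Shader 5.0 is available

    D3D11_FEATURE_DATA_D3D10_X_HARDWARE_OPTIONS opts = {};
    HRESULT hr = device->CheckFeatureSupport(
        D3D11_FEATURE_D3D10_X_HARDWARE_OPTIONS, &opts, sizeof(opts));

    // TRUE means cs_4_0/cs_4_1 plus raw and structured buffers work on this
    // DX10-class GPU; it is optional, not guaranteed by the API.
    return SUCCEEDED(hr) &&
           opts.ComputeShaders_Plus_RawAndStructuredBuffers_Via_Shader_4_x != FALSE;
}
```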

At any rate, even with the lighter performance penalty from using the compute shader, 25% for SSAO is nothing to sneeze at. AMD’s press shot shows one of the best-case scenarios for SSAO in Battleforge, and in the game itself the effect is very hard to notice. For a 25% drop in performance, the slightly improved visuals are hard to justify.

95 Comments

  • chizow - Wednesday, September 30, 2009 - link

    Ya it already sounds like the 5870X2 and 5850X2 are being positioned in the media to compete with just a single GT300 with rumors of $500 price points. I think the combination of poor scaling compared to RV770/RV790 in addition to some of the 5850/5870 CF scaling problems seen in today's review are major contributing factors. Really makes you wonder how much of these scaling issues are driver problems, CPU/platform limitations, or RV870 design limitations.

    My best guess for GT300 pricing will be:

    $500-$550 for a GTX 380 (full GT300 die) including OC variants
    $380-$420 for a GTX 360 (cut down GT300) including OC variants
    $250 and lower for the GTX 285, followed by a GT210 40nm GT200 refresh with DX10.1

    So you'd have the 5870X2 competing with GTX 380 in the $500-600 range. Maybe the 5850X2 in the $400-$500 range competing with the GTX 360. 5870 already looks poised for a price cut given X2 price leaks, maybe they introduce a 2GB part and keep it at the $380 range and drop the 1GB part. Then at some point I'd expect Nvidia to roll out their GT300 GX2 part as needed somewhere in the $650-700+ range.....
  • yacoub - Wednesday, September 30, 2009 - link

    Nah. They won't get enough sales at those prices. They need to slot in under $399 and $299 unless they put out 50% more performance than the 5870 and 5850 respectively.

    Or the heck with them, I'll just wait six months for the refresh on a smaller die, better board layout with better cooling, lower power, and a better price tag.

    It's not like i NEED DX11 now, and i certainly don't need more GPU performance than I already have.
  • chizow - Thursday, October 1, 2009 - link

    How would it need to be 50% faster? It'd only need to be ~33% faster when comparing the GTX 380 to the 5870 or GTX 360 to the 5850. That would put the 5870 and 360 in direct competition in both price and performance, which is right on and similar to past market segments. The 380 would then be competing with the 5870X2 at the high-end, which would be just about right if the 5870X2 scales to ~30% over the 5870 similar to 5870CF performance in reviews.
  • Gary Key - Wednesday, September 30, 2009 - link

    "It's not like i NEED DX11 now, and i certainly don't need more GPU performance than I already have. "

    As of today I am limping along on a GTX275 (LOL) and I really cannot tell any differences between the cards at 1920x1080. Considering the majority of PC games coming for the next year are console ports with a few DX10/11 highlights thrown in for marketing purposes, I am really wondering what is going to happen to the high-end GPU market. That said, I bought a 5850 anyway. ;)
  • chizow - Thursday, October 1, 2009 - link

    I'm running GTX 280 SLI right now and have found most modern games run extremely well with at least 4xTrMSAA enabled. But that's starting to change somewhat, especially once you throw in peripheral features like Ambient Occlusion, PhysX, Eyefinity, 3D Vision, 120Hz monitors or whatever else is next on the checkbox horizon.

    While some people may think these features are useless, it only really takes 1 killer app to make what you thought was plenty good enough completely insufficient. For me right now, it's Batman: Arkham Asylum with PhysX. Parts of the game still crawl with AA + PhysX enabled.

    Same for anyone looking at Eyefinity as a viable gaming option. Increasing GPU load three-fold is going to quickly eat into the 5850/5870's increase over last-gen parts to the point a single card isn't suitable.

    And with Win7's launch and the rollout of DX11 and DirectCompute, we may finally start to see developers embrace GPU accelerated physics, which will again, raise the bar in terms of performance requirements.

    There's no doubt the IHVs are looking at peripheral features to justify additional hardware costs, but I think the high-end GPU market will be safe at least through this round even without them. Maybe next round, as some of these features take hold, they'll help justify the next round of high-end GPUs.
  • chrnochime - Wednesday, September 30, 2009 - link

    With PC gaming seemingly going towards MMOs like WoW/Aion/Warhammer (and later on Diablo 3) and far less emphasis on other genres (besides FPS, which is more or less the same every year), and as you said most new games being console ports, I really doubt we'll need anything more powerful than the 4890, let alone a 5850 or 5870, for the coming couple of years. Maybe we've entered the era where PC games will forever be just console ports + MMOs, or just MMOs, and there'd be little incentive to buy any card that costs $100+.

    Just my take of course.
  • C'DaleRider - Wednesday, September 30, 2009 - link

    I was told by a Microcenter employee the current pre-order retail price for the top end GT300 card was $579, an EVGA card, btw. And reportedly the next model down is the GT350. Dunno if this is fact or not, but he didn't have any reason to lie.
  • Zool - Wednesday, September 30, 2009 - link

    The GT300 will need 512-bit GDDR5 to make memory faster than GT200, and it will have even more massive GPGPU bloat than last gen. So in Folding it will surely be much faster, but in graphics it will cost much more for the same (at least for Nvidia, depending on how close they want to bring it to the Radeon 5000 series). And of course they can sell the same GT300 in Tesla cards for several thousand (like they did with GT200).
    The 5850's price with disabled units is still a win for ATI, or else they wouldn't sell the defective GPUs at all.
  • Genx87 - Friday, October 2, 2009 - link

    GDDR5 provides double the bandwidth of GDDR3, so there's no need for a 512-bit memory bus. This was covered in another story on the front page of this site.
  • dagamer34 - Wednesday, September 30, 2009 - link

    As great as these cards are, my system only supports low-profile cards since it's a HTPC. Bring on the Radeon HD 5650 & 5670!!!!
