With the announcement of DirectX 12 features like low-level programming, it appears we're having a revival of the DirectX vs. OpenGL debates, and we can toss AMD's Mantle into the mix in place of Glide (RIP, 3dfx). I was around back in the days of the flame wars between OGL and DX1/2/3 devotees, with id Software's John Carmack and others weighing in on behalf of OGL at the time. As Microsoft continued to add features to DX, and with a healthy dose of marketing muscle, the subject mostly faded away after a few years. Today, the vast majority of Windows games run on DirectX, but with mobile platforms predominantly using variants of OpenGL (smartphones and tablets use a subset called OpenGL ES, the ES standing for "Embedded Systems"), we're seeing a bit of a resurgence in OGL use. There's also the increasing support for Linux and OS X, making a cross-platform graphics API even more desirable.

At the Game Developers Conference 2014, in a panel including NVIDIA's Cass Everitt and John McDonald, AMD's Graham Sellers, and Intel's Tim Foley, explanations and demonstrations were given suggesting OpenGL could unlock as much as a 7X to 15X improvement in performance. Even without fine tuning, they note that in general OpenGL code is around 1.3X faster than DirectX. It almost makes you wonder why we ever settled for DirectX in the first place—particularly considering many developers felt DirectX code was always a bit more complex than OpenGL code. (Short summary: DX was able to push new features into the API and get them working faster than OpenGL in the DX8/9/10/11 days.) Anyway, if you have an interest in graphics programming (or happen to be a game developer), you can find a full set of 130 slides from the presentation on NVIDIA's blog. Not surprisingly, Valve is also promoting OpenGL in various ways; the same link also has a video from a couple weeks back at Steam Dev Days covering the same topic.

The key to unlocking improved performance appears to be pretty straightforward: reducing driver overhead and increasing the number of draw calls. These are both items targeted by AMD's Mantle API, and presumably the low level DX12 API as well. I suspect the "7-15X improved performance" is going to be far more than we'll see in most real-world situations (i.e. games), but even a 50-100% performance improvement would be huge. Many of the mainstream laptops I test can hit 30-40 FPS at high quality 1080p settings, but there are periodic dips into the low 20s or maybe even the teens. Double the frame rates and everything becomes substantially smoother.
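To put rough numbers on why driver overhead matters, here is a minimal back-of-the-envelope model. The per-call costs are assumptions chosen for illustration, not figures from the presentation:

```python
# Back-of-the-envelope model of how per-draw-call driver overhead caps
# the number of draw calls that fit in a frame. The microsecond costs
# below are illustrative assumptions, not measured numbers.

def max_draw_calls(frame_budget_ms, per_call_overhead_us):
    """Draw calls that fit in one frame if each call costs
    per_call_overhead_us microseconds of CPU/driver time."""
    return int(frame_budget_ms * 1000 / per_call_overhead_us)

budget = 1000 / 60  # ~16.7 ms per frame at 60 FPS

# Assumed "thick" driver path vs. a low-overhead path 10x cheaper per call:
legacy = max_draw_calls(budget, 25)
lean = max_draw_calls(budget, 2.5)

print(legacy, lean)  # the lean path fits roughly 10x as many calls per frame
```

This is the sense in which the big multipliers are quoted: cutting per-call CPU cost lets you issue many more draw calls per frame, which is not the same thing as the whole frame rendering that much faster.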

I won't pretend to have a definitive answer on which API is "best", but just like being locked into a single hardware platform or OS can lead to stagnation, I think it's always good to have alternatives. Obviously there's a lot going on with developing game engines, and sometimes slower code that's easier to use and understand is preferable to fast but difficult code. There's also far more to making a "good" game than graphics, which is a topic unto itself. Regardless, code for some of the testing scenarios provided by John McDonald is available on GitHub if you're interested in checking it out. It should work on Windows and Linux but may require some additional work to get it running on OS X for now.

Source: NVIDIA Blog - GDC 2014


  • ET - Tuesday, March 25, 2014 - link

    GDC always has panels about optimisations. Some of the stuff in these slides even applies to Direct3D.

    It's possible that SteamOS has caused the IHVs to dedicate more time to OpenGL optimisations, and that their OpenGL departments are enthusiastic about that, and therefore this session was more enthusiastic than in recent years (I haven't tried to compare). However, NVIDIA also had a session about its DX11 driver optimisations, so I'm sure that any performance comparison is relevant only for a narrow snapshot of hardware and drivers.
    Reply
  • Scali - Wednesday, March 26, 2014 - link

    I find it funny that Intel and AMD are even present at all. Last time I looked, neither offered an OpenGL 4.4 driver for their hardware. Besides, most extensions that have now made it into ARB standards and newer OpenGL core versions originated with NVIDIA. Reply
  • ddriver - Tuesday, March 25, 2014 - link

    OpenGL FTW - boycott platform and vendor limited APIs! Don't limit the reach of your code. Reply
  • mr_tawan - Tuesday, March 25, 2014 - link

    My two cents: I think in this context everyone, including Jarred, means DirectGraphics/Direct3D when they say 'DirectX'. It feels wrong to compare DirectX (which is a complete multimedia library) with OpenGL (which is a graphics API). Or perhaps I missed something :-).

    Somehow I'd love to see debates over OpenAL vs. OpenSL ES, or DirectSound/XAudio, too!
    Reply
  • ET - Tuesday, March 25, 2014 - link

    DirectX typically refers to Direct3D. I remember a time when Microsoft tried to make people say Direct3D (sometime in the DX10 days, maybe around the DX11 release), but since even internal Microsoft presentations continued to refer to it as DirectX, Microsoft gave up on that. They can be considered synonymous. Reply
  • Klimax - Tuesday, March 25, 2014 - link

    " Even without fine tuning, they note that in general OpenGL code is around 1.3X faster than DirectX."

    And still no evidence. It doesn't matter who they are; they haven't published evidence or data, so it looks more like: "We want an API that allows us to push proprietary stuff like in the good old '90s", to re-parcel games once again.

    Why do I bring this up? Because that was the main reason for DirectX, and back then the most problematic part of DX was "caps bits". Since DX10, Microsoft has eliminated most of these stupidities, and thus the GPU makers are not that happy with the status quo. (Not innovation, but proprietary crap.)

    Note: before some lost soul points to Valve and their crappy PR article comparing DX and OpenGL, sorry, but it is a very bad comparison. It's not possible to reproduce, and it compares old DX9 to new OpenGL with a new codebase using it, so it's not even an apples-to-oranges comparison.

    ===

    TL;DR: back to the '90s and proprietary stuff (a.k.a. the extension hell of OGL); no evidence for their claims.
    Reply
  • JarredWalton - Tuesday, March 25, 2014 - link

    The slides have comparisons using APItest, which has source available on GitHub and is linked at the end of the article. I can't say that I tried to download or compile anything (because I've long since given up on doing that sort of thing), but presumably there's code for people to look at and play with. So before crying "foul", look at the source and get it running. I do believe, however, that they are specifically referring to improvements in the number of draw calls with the "performance increase" claims, and there's more to graphics than draw calls, I'm pretty sure. :-)

    Mantle, incidentally, boasts something like a 900% increase in the number of draw calls, but in practice I think it only ends up being 25-35% faster in BF4 -- I'd have to go back and check the figures again, but it's definitely not anywhere near 900% faster with Mantle, or even 100% faster. There are many bottlenecks besides the number of draw calls you can execute per second.
    Reply
  • inighthawki - Tuesday, March 25, 2014 - link

    That's because they're talking about draw calls, i.e. the CPU overhead of them. If your game is completely GPU bound to begin with, you could improve the CPU performance of draw calls by 10,000% (heck, they could be completely free), but you won't see a difference because your frame is purely GPU bottlenecked. Reply
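The gap between a 900% draw-call increase and a 25-35% frame-rate gain is essentially Amdahl's law applied to the frame. A minimal sketch, using an assumed (hypothetical) 5 ms CPU-submission / 15 ms GPU split and worst-case serialization:

```python
# Amdahl's-law-style sketch: speeding up draw-call submission only shrinks
# the CPU-side share of the frame. The 5 ms / 15 ms split is an assumption
# for illustration, not a measurement from any real game.

def frame_ms(cpu_submit_ms, gpu_ms, submit_speedup):
    """Frame time when CPU submission and GPU work are serialized
    (worst case) and submission is sped up by submit_speedup."""
    return cpu_submit_ms / submit_speedup + gpu_ms

before = frame_ms(5.0, 15.0, 1)    # 20.0 ms
after = frame_ms(5.0, 15.0, 10)    # 15.5 ms: the GPU time is the floor

print(before / after)  # ~1.29, i.e. roughly a 29% frame-rate gain
```

Under these assumed numbers, even an infinitely fast submission path only gets the frame down to 15 ms, which is consistent with a 10x draw-call improvement translating into the same 25-35% range quoted above for BF4 when the GPU is the main bottleneck.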
  • jwcalla - Wednesday, March 26, 2014 - link

    I think you're a bit confused about the Valve story regarding L4D2 performance. It's true that L4D2 is a DX9 game; however, it's not true that they made a new codebase for their OpenGL port. First, the version of OpenGL they're using is 2.x or, at most, 3.1. It's basically DX9 features; Valve didn't update the engine and graphics capabilities, and that's evident in the system requirements. The performance benefits being discussed here are for OpenGL 4.2 and up.

    And it's not a new codebase. They have an existing OpenGL renderer (for OS X) and a cruddy D3D-to-OGL on-the-fly translation layer, which adds a performance hit.

    The DX path has had years of optimizations and the OGL path has not. They gave up optimizing OGL once they hit the engine's 300 fps limit.

    So we're dealing with a 7-year-old version of OGL, a not completely optimized rendering engine, with a translation layer in between, and they were still doing better than DX9.
    Reply
  • inighthawki - Wednesday, March 26, 2014 - link

    Feel free to sift through the code and prove me wrong, but I highly doubt they would waste their time porting their engine and making a translation layer to OpenGL 2. You and a lot of people seem to forget that OpenGL, just like D3D, is an API. You can write a game targeting D3D11 and OpenGL 4.4 but still only use a subset of the hardware feature levels. Reply
