OpenGL 4.3 Specification Also Released

Alongside OpenGL ES 3.0, desktop OpenGL is also being iterated upon today with the launch of OpenGL 4.3. Unlike OpenGL ES 3.0, this is a much smaller point update. That’s not to say it’s unimportant, but it brings a shorter list of new features to the table.

At the same time though, this also marks what may be an interesting inflection point for OpenGL gaming on the desktop. On Windows, OpenGL usage has never been lower; the only AAA game engine still based on OpenGL is id Tech 5, which with the termination of licensing by id is only used internally. Khronos of course would like to change that, so when Valve Software says that they’re going to be porting Source over to Linux (thereby increasing the audience for OpenGL games), it’s of quite some interest to Khronos. At the same time, desktop OpenGL still has a long way to go to recapture its glory days of the late ’90s and early 2000s, so Khronos is doing what it can to spur that on.

OpenGL ES 3.0 Superset & ETC Texture Compression

Moving on to OpenGL 4.3’s features, one of the big additions is, unsurprisingly, the functionality necessary to make it a proper superset of OpenGL ES 3.0. One of Khronos’s missteps with OpenGL ES 2.0 was that it wasn’t until OpenGL 4.1 in 2010 that desktop OpenGL became a proper superset of OpenGL ES 2.0; Khronos is rectifying that this time around by launching OpenGL ES 3.0 and its desktop equivalent at the same time.

Because OpenGL ES 3.0 is largely taken from desktop OpenGL in the first place, there aren’t a ton of changes due to this. But there is one: texture compression. The aforementioned standardization around ETC applies to both OpenGL ES and OpenGL, which means that starting with OpenGL 4.3, desktop OpenGL will have a standard texture compression format.

Practically speaking, this won’t make a huge difference to desktop developers right now. Because S3TC is a required part of the Direct3D specification and all desktop GPUs support Direct3D, S3TC has been a de facto OpenGL standard for nearly 10 years now, and because few developers will target OpenGL 4.3 right away, that won’t change overnight. But developers targeting 4.3 do finally have a choice in texture compression, and developers doing cross-platform development with OpenGL ES can use the same texture compression format in both cases, as the sketch below illustrates.
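
To make the cross-platform point concrete, below is a minimal sketch in C of uploading a pre-compressed ETC2 texture. It assumes a GL 4.3 context with a loader such as GLEW already initialized (the identical call exists in OpenGL ES 3.0); the helper name is our own invention, and error checking is omitted.

    #include <GL/glew.h>

    /* Hypothetical helper: upload pre-compressed ETC2 RGB8 data.
     * GL_COMPRESSED_RGB8_ETC2 is a core format in both OpenGL 4.3 and
     * OpenGL ES 3.0, so the same upload path works on desktop and mobile.
     * ETC2 RGB8 packs each 4x4 texel block into 8 bytes. */
    GLuint upload_etc2_texture(const void *blocks, int width, int height)
    {
        GLuint tex;
        GLsizei size = ((width + 3) / 4) * ((height + 3) / 4) * 8;

        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB8_ETC2,
                               width, height, 0, size, blocks);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        return tex;
    }

As the next paragraph notes, a desktop driver may quietly decompress this data on its way to the GPU, but the API-level call is identical on both platforms.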

It’s worth noting though that just because a GPU “supports” ETC doesn’t mean it has hardware support. NVIDIA has told us that they’ll be 4.3 compliant, but they’re handling ETC by decompressing textures in their drivers and sending them to the GPU in an uncompressed format; while AMD wasn’t able to get back to us in time, it’s almost certainly the same story over there. For ports of OpenGL ES games this isn’t going to be a problem (dGPUs have plenty of high-bandwidth memory), but it means S3TC will remain the de facto standard desktop OpenGL texture compression format for now.

OpenGL Compute Shaders

Moving on, while OpenGL ES 3.0 compatibility is a big deal for OpenGL, it’s actually not the biggest feature addition for OpenGL 4.3. For that we turn to compute shaders.

As a bit of background, when meaningful compute functionality was first introduced for GPUs, Microsoft and Khronos went in two separate directions. Khronos of course created OpenCL, a full-featured ANSI C based API for compute. Microsoft on the other hand introduced compute shaders, a special class of HLSL programs designed for compute. OpenCL is far more flexible, but that flexibility has its price: implementing compute functionality in OpenGL games through OpenCL was often harder than implementing the equivalent functionality with a Direct3D compute shader, and the overhead of OpenCL limited performance.

The end result is that Khronos has decided to implement compute shaders in OpenGL in order to bridge this gap. OpenCL of course remains Khronos’s premier compute API, both for stand-alone compute applications and for OpenGL games/applications that need its full flexibility, but for games that don’t need that level of flexibility and only want to run compute work on a GPU, there is now another option.

Like Direct3D’s compute shader functionality, OpenGL’s compute shader functionality is geared towards relatively simple pixel operations, where approaching an algorithm in a compute manner (instead of a graphics manner) allows for faster execution. The compute shaders themselves will be written in GLSL rather than C, underscoring the fact that this is an extension of OpenGL’s shading framework rather than their compute framework. The target for this functionality will be games and other applications which perform compute “close to the pixel”, taking advantage of the faster shared memory and greater thread count that compute shaders offer.
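
To give a concrete sense of the model, here is a minimal sketch, assuming a GL 4.3 context and an RGBA8 texture whose dimensions are multiples of 16; the shader converts the image to grayscale in place, a simple “close to the pixel” job. The helper name is our own and error checking is omitted.

    #include <GL/glew.h>

    /* The compute shader is plain GLSL: one 16x16 work group per tile of
     * pixels, reading and writing the image directly. */
    static const char *cs_src =
        "#version 430\n"
        "layout(local_size_x = 16, local_size_y = 16) in;\n"
        "layout(rgba8, binding = 0) uniform image2D img;\n"
        "void main() {\n"
        "    ivec2 p = ivec2(gl_GlobalInvocationID.xy);\n"
        "    vec4 c = imageLoad(img, p);\n"
        "    float y = dot(c.rgb, vec3(0.299, 0.587, 0.114));\n"
        "    imageStore(img, p, vec4(y, y, y, c.a));\n"
        "}\n";

    void run_grayscale(GLuint tex, int width, int height)
    {
        GLuint cs = glCreateShader(GL_COMPUTE_SHADER);   /* new in GL 4.3 */
        glShaderSource(cs, 1, &cs_src, NULL);
        glCompileShader(cs);

        GLuint prog = glCreateProgram();
        glAttachShader(prog, cs);
        glLinkProgram(prog);
        glUseProgram(prog);

        /* Bind the texture as a read/write image and launch one work
         * group per 16x16 tile of pixels. */
        glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA8);
        glDispatchCompute(width / 16, height / 16, 1);

        /* Make the image writes visible to later texture fetches and
         * image loads. */
        glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT |
                        GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
    }

Note how close this is to a D3D11 compute shader in structure: a declared thread group size, a dispatch call, and explicit synchronization.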

Since OpenGL’s compute shader functionality is being spurred on by Direct3D, it should come as no surprise that it’s a very close match for Direct3D’s compute shaders; specifically, Khronos is advertising it as matching D3D11’s compute shader functionality. The differences between HLSL and GLSL mean that there’s no straightforward portability, but this is very much the OpenGL analog of D3D’s compute shaders.

New Texture & Buffer Features

Moving on, OpenGL 4.3 will also be introducing some new texture and buffer features. On the texture side of things, one new feature is texture views, which allow a texture’s existing data to be “viewed” in a different way, reinterpreted in another compatible format, without needing to duplicate the texture. As for buffers, 4.3 introduces support for reading and writing very large buffers from all shader types and all stages. The idea behind this is that it’s an efficient way for those new compute shaders to communicate with the graphics pipeline, given the large amount of data that can be in flight with a compute shader. Both are sketched below.
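
Here is a minimal sketch of both features, again assuming a GL 4.3 context; the helper names are our own, and the view’s source texture is assumed to have been allocated with immutable storage (glTexStorage2D), which texture views require.

    #include <GL/glew.h>

    /* Texture views: reinterpret an existing texture's storage without
     * copying it. Here linear RGBA8 data is re-read as sRGB; both formats
     * share the same 32-bit view class, so no duplication occurs. */
    GLuint make_srgb_view(GLuint rgba8_tex)
    {
        GLuint view;
        glGenTextures(1, &view);
        glTextureView(view, GL_TEXTURE_2D, rgba8_tex, GL_SRGB8_ALPHA8,
                      0, 1,   /* first mip level, level count */
                      0, 1);  /* first layer, layer count */
        return view;
    }

    /* Shader storage buffers: a large buffer readable and writable from
     * any shader stage, e.g. for a compute shader to hand results to the
     * graphics pipeline. Matches a "layout(std430, binding = 0) buffer"
     * declaration in GLSL. */
    GLuint make_ssbo(GLsizeiptr bytes)
    {
        GLuint buf;
        glGenBuffers(1, &buf);
        glBindBuffer(GL_SHADER_STORAGE_BUFFER, buf);
        glBufferData(GL_SHADER_STORAGE_BUFFER, bytes, NULL, GL_DYNAMIC_COPY);
        glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, buf);
        return buf;
    }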

Wrapping things up, for some time now Khronos has been working on bringing OpenGL into alignment with Direct3D in order to close the feature and developer gap that has grown between the two. As Khronos correctly notes, with the addition of compute shader functionality OpenGL is now a true superset of Direct3D. If desktop OpenGL is going to see a resurgence in the next few years, it’s now in a far better position to pull that off than it was before.

Comments

  • bobvodka - Monday, August 6, 2012

    Firstly, using the Steam Hardware Survey (the correct metric, as we are a AAA games studio), I'll grant you at most 5% of the market, the majority of which have Intel GPUs, for which the OpenGL implementation has generally been... sub-par, to put it mildly.

    Secondly, all console development tools are on the PC and based around Visual Studio, so we work in Windows anyway.

    Thirdly, the Windows version generally comes about because we need artist/developer tools. Right now it is also useful for learning about and testing 'next gen' ideas with an API which will be close to the Xbox API.

    Fourthly, we have a Windows version working which uses D3D11, and OpenGL offers no compelling reason to scrap all that work. Remember, D3D has had working compute shaders with sane integration for some years now - OpenGL has only just got these, and before now doing the work with OpenCL was like opening a horrible can of worms due to the lack of standardised and required interop extensions (I looked into this at the back end of last year for my own work at home and quickly despaired at the state of OpenGL and its interop).

    Finally, OS X lags OpenGL development. Currently OS X 10.7.3 (as per https://developer.apple.com/graphicsimaging/opengl... ) supports GL 3.2, and I see no mention of the version being supported in 10.8. Given that OpenGL 3.2 was released in 2009 and OS X 10.7 was released last year, I wouldn't pin my hopes on seeing 4.2 any time 'soon'.

    Now, supporting 'down market' hardware is of course a good thing to do; however, in D3D11 this is easy (feature levels), while in OpenGL different hardware + drivers = different features, which again increases the engineering workload and the need for fallbacks.
    You could mandate 'required features', but at that point you start cutting out market share and that 5% looks smaller and smaller.

    Now, we ARE putting engineering effort into OpenGL|ES, as mobile devices are an important cornerstone from a business standpoint, thus the cost can be justified.

    In short: there is no compelling business or technical reason at this juncture to drop D3D11 in favor of OpenGL to capture a fragment of the 5% AAA 'home computer' market when there are no side benefits and only cost.
  • powerarmour - Monday, August 6, 2012

    Yes, because Carmack is always 100% right about everything, and the id Tech 5 engine is the greatest and most advanced around.
  • SleepyFE - Monday, August 6, 2012

    id Tech 5 is awesome!! I don't like shooters (except for Prey) but I played Rage just to see how much "worse" OpenGL is. The game looks GREAT. I can't tell it apart from any other AAA game on graphics alone. And that means OpenGL is good enough and should be used more. Screw what someone says; try it yourself, then tell me OpenGL can't compete.
  • bobvodka - Monday, August 6, 2012

    False logic - games look as good as their artwork.

    OpenGL has shaders, so yes, with good artwork it can do the same as D3D - however the API itself, the thing programmers have to work with, isn't as good, AND up until now it was lacking feature parity with D3D11.

    Feature-wise, OpenGL is there.
    API/usability-wise, it isn't.

    FYI: I used OpenGL for around 8 years, from around 1.3 until 3.0 came out, and, like a few others, was so fed up with the ARB by that point that I gave up on GL and moved to a modern API - so I'm speaking from an interface design point of view.
  • Penti - Friday, August 10, 2012

    Game engines are perfectly fine supporting different graphics APIs. Obviously non-Windows platforms won't run D3D; Microsoft does not license it. So while they do license stuff like ActiveSync/Exchange, exFAT (which should have been included in the SDXC spec under FRAND terms but isn't), NTFS, remote desktop protocols, OpenXML, binary document formats, SharePoint protocols, some of the .NET environment, etc., most of the vital tech is behind paid licensing. They don't even specify the Direct3D APIs for implementation by non-hardware vendors; it's simply not referenced at all. OpenGL is thoroughly referenced in comparison.

    Even though PS3 isn't OGL (PSGL is OGLES-based), you could still do Cg shaders, or convert HLSL or GLSL shaders or vice versa, so it's not like skills are lost. Tools should be written against the game engines and middleware anyway.

    Plus desktop OGL is compatible with OGLES when it comes to the newer releases such as 4.1 and 4.3, albeit with some tricks/configuration/compatibility modes. Then again, implementations suck, but that will also be true of some graphics chips' support for DX.
  • inighthawki - Monday, August 6, 2012

    The tessellation feature you're referring to is a vendor-specific hardware extension, and not in the same class as DirectX's tessellation. The tessellation hardware introduced for DX11 is a completely programmable pipeline stage that offers far more flexibility. DirectX does not add support for hardware-specific features, for good reason.
  • djgandy - Tuesday, August 7, 2012

    Tessellation was only added to the GL pipeline in 4.0. It was another one of those 'innovations' where GL copied DX, just like pretty much every other feature GL adds.

    What GL needs to do is copy DX when it comes to removing stuff from the API. Scratch this stupid core/compatibility model, which just adds even more run-time configurations; remove all the old rubbish and do not allow mixing of new features with the old fixed-function pipeline.
  • bobvodka - Tuesday, August 7, 2012

    There was, 4 years ago, a plan to do just what you describe in your second paragraph - Longs Peak was the code name, and it was a complete redesign and clean-up of the API along modern lines, a change universally praised by those of us following the ARB's newsletters and design plans.

    In July 2007 they were 'close' to a release; in October they had 'some issues' to work out. They then went into radio silence, and 6 months later, without bothering to tell anyone what was going on, they rolled out 'OpenGL 3.0', aka 2.2, where all the grand API changes, worked on for 2 years, were thrown out the window, extensions were bolted on again, and no functionality was removed.

    At this point myself, and quite a few others, made a loud noise and departed OpenGL development in favour of D3D10 and then D3D11.

    Four years on, the ARB are continuing down the same path, and I wouldn't bet my future on them seeing sense any time soon.
  • djgandy - Tuesday, August 7, 2012

    The ARB think they are implementing features that developers want, and maybe they are, but AFAIK they have very few big-selling developers anyway.

    It seems the ARB is unable to see the reason behind this, maybe because they are so concerned about the politics of backwards compatibility, or at least certain members of it are. For me this is the hardest part to understand, since it is not even a real breaking of compatibility; it is simply ring-fencing new features from old features, thus saving a ton of driver-writing hell (i.e. what DX did). Instead you can still use glBegin/glEnd with your GLSL and geometry shaders, with a bit of fixed-function fog over the top. How useful.

    I find it hard to even consider the GL API an abstraction layer, with the existing extension hell and the multiple profiles a driver can opt to support. The end result of this "compatibility" is that anyone actually wanting to sell software using OpenGL has to pick the lowest common denominator... whatever that actually is, because with the newer API you don't even know what you are getting until run time. So then you just pick the ancient version of the API, because at least you have a 99% chance that a GL 3.0 driver will be installed, with all the old fixed-function crud that you don't actually need - but glVertex3f is nice, right?

    IMO GL's only hope is for a company like Apple to put it into a high-volume product and actually deliver a good contract to developers (core profile only, limited extensions, and say GL 4.0).
  • bobvodka - Tuesday, August 7, 2012

    Unfortunately Apple isn't very on the ball when it comes to OpenGL support.

    OS X 10.7, released last year, only supports OpenGL 3.2, a spec released in 2009 that had Windows support within 2 months.

    Apple are focusing on mobile, it would seem, where OpenGL|ES is saner and rules the roost.
