
  • MikhailT - Monday, July 22, 2013 - link

    > This being the 5th revision of OpenGL 4.x,

    Don't you mean 4th revision or did I miss something? The 4.0 release isn't a revision of 4.x, so that makes 4.4 the 4th revision...
  • nutgirdle - Monday, July 22, 2013 - link

    Ahh, the dreaded "Count from zero" confusion.
  • Ryan Smith - Monday, July 22, 2013 - link

    I was indeed counting from zero. I've reworded it to remove the ambiguity.
  • mr_tawan - Monday, July 22, 2013 - link

    I think 4.0 is also a revision of OpenGL.
  • Ortanon - Monday, July 22, 2013 - link

    4th revision, or 5th version haha.
  • lmcd - Monday, July 22, 2013 - link

    Yes, you are correct.
  • B3an - Monday, July 22, 2013 - link

    I've always wanted to see a detailed comparison of the latest OpenGL features versus the latest DirectX features, including the pros and cons of each. I'm pretty sure DX 11.2 is more advanced/overall better, but it's hard to tell.
  • Ryan Smith - Monday, July 22, 2013 - link

    At a high level the two are roughly equivalent. With compute shaders in OpenGL 4.3 there is no longer a significant feature gap for reimplementing D3D11 in OpenGL. I didn't cover this in the article since it's getting into the real minutiae, but 4.4 further improves on that, in part by adding a new vertex packing format (GL_ARB_vertex_type_10f_11f_11f_rev) that is equivalent to one in D3D. Buffer storage also replicates something D3D does.
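    For anyone curious what that packed vertex format actually does: R11F_G11F_B10F stores three positive floats in a single 32-bit word, red and green as 11-bit floats (5-bit exponent, 6-bit mantissa, no sign) and blue as a 10-bit float (5-bit mantissa). The Python below is a minimal sketch of that bit layout; the helper names are mine, and zero, denormals, infinities, and exact rounding behavior are all glossed over.

```python
import math

def encode_unsigned_float(value, mantissa_bits, exp_bias=15):
    # Encode a positive normal float as an unsigned small float with a
    # 5-bit exponent and no sign bit (zero/denorm/inf handling omitted).
    e = math.floor(math.log2(value))        # unbiased exponent
    frac = value / (2.0 ** e) - 1.0         # mantissa fraction in [0, 1)
    man = int(round(frac * (1 << mantissa_bits)))
    return ((e + exp_bias) << mantissa_bits) | man

def pack_11f_11f_10f(r, g, b):
    # Bits 0-10: red (6-bit mantissa), 11-21: green (6-bit mantissa),
    # bits 22-31: blue (5-bit mantissa).
    return (encode_unsigned_float(r, 6)
            | (encode_unsigned_float(g, 6) << 11)
            | (encode_unsigned_float(b, 5) << 22))

# 1.0 encodes as biased exponent 15 with a zero mantissa in every channel.
print(hex(pack_11f_11f_10f(1.0, 1.0, 1.0)))
```

    If I read the extension right, on the C side this corresponds to passing GL_UNSIGNED_INT_10F_11F_11F_REV as the vertex attribute type, and the point is bandwidth: a normal or HDR color drops from 12 bytes of floats to 4.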
  • przemo_li - Thursday, July 25, 2013 - link

    DX is actually a bit behind (though DX11.2 will fix some of this), and OpenGL 4.4 pushed for some non-essential stuff that was missing compared to DX, for ease of porting, since the same things could already be done.
    (Only for OGL 4.3 vs DX11.1.)
  • WhitneyLand - Monday, July 22, 2013 - link

    A vote for an article on the latest in GPU raytracing performance. Even though it's domain specific, it really pushes the limits of what's possible on a GPU and therefore touches on a lot of generally interesting hardware and performance issues. It's just cool stuff. One example is the Octane renderer.
  • ltcommanderdata - Monday, July 22, 2013 - link

    In DirectX 11.2 there's some thought that Tiled Resources tier 1 refers to Bindless Textures while tier 2 refers to Sparse Textures. So is there any hierarchy to these features where GPUs supporting sparse textures (GCN-based) should also support bindless textures (only Kepler officially announced)? Or are they independent?

    And is GK110 the only GPU currently available that supports Dynamic Parallelism and therefore the only currently shipping GPU that will be OpenCL 2.0 compatible? Hopefully Volcanic Islands and Maxwell will bring dynamic parallelism to the full product stack rather than just the top-end GPUs.
  • sontin - Monday, July 22, 2013 - link

    GK208 (CUDA 3.5) supports Dynamic Parallelism. And by next year I guess every GPU and Tegra 5 will support at least CUDA 3.5.
  • TeXWiller - Monday, July 22, 2013 - link

    That may be a Tesla/Quadro-exclusive offering anyway. It would be nice if Nvidia would bring back the "democratization of parallelism" from times long gone.
  • sontin - Monday, July 22, 2013 - link

    GK208 is a consumer chip; it's not limited to the Quadro or Tesla series. They even advertised cards based on the chip on their blog:
  • xdrol - Monday, July 22, 2013 - link

    The thing is, nVidia does not care about OpenCL; they could not even (ehm, rather, didn't want to) ship an OpenCL 1.2 driver at all. So it's nice that the hardware supports the feature, but if we cannot use it from OpenCL, that's no better than AMD's version.
  • name99 - Tuesday, July 23, 2013 - link

    You mean you won't be able to use OpenCL on Windows?

    I assume on OSX you'll be able to use OpenCL and it will continue to move forward --- unless nVidia thinks it would be a wise business move to ensure Apple never buys from them again.
  • MrSpadge - Saturday, July 27, 2013 - link

    No, by "not shipping OpenCL 1.2 drivers" he means they're still at 1.1. I don't expect this to be any better on OSX.
  • Ryan Smith - Monday, July 22, 2013 - link

    To the best of my knowledge this is independent. At one point last year AMD said they didn't believe they could implement bindless textures in GCN hardware in a manner equivalent to NVIDIA's. And NVIDIA of course may do bindless, but they can't do sparse in hardware. I fully expect the two to come together in the next generation of hardware, with NV and AMD each gaining their competitor's respective functionality.
  • przemo_li - Thursday, July 25, 2013 - link

    Nvidia supports both ARB_bindless_texture and ARB_sparse_texture in their drivers:

    And "tiers" and "feature levels" are MS-speak for extensions and OpenGL versions. (Since clean DX9, DX10, DX11 no longer work.)

    Neither of those has landed in core. They are extensions that vendors may implement in hardware if they like. (Though the ARB prefix is there for a reason, and those extensions should land in core unchanged if hardware support turns out to be good.)
  • chris81 - Monday, July 22, 2013 - link

    Now with the competition of the new conformance tests -> completion
  • chris81 - Monday, July 22, 2013 - link

    Standing for Standard Portable Interface Representation -> Intermediate Representation
  • Ryan Smith - Monday, July 22, 2013 - link

    Aww geeze. Thanks for catching that. Fixed.
  • iwod - Tuesday, July 23, 2013 - link

    Hopefully within a year or two we'll finally see some adoption of OpenCL acceleration; I have yet to see much software using it. (Maybe the CPU today is already fast enough for 95% of what we need.)

    Does OpenGL 4.4 matter? Does Microsoft still support it on Windows? (I remember there was talk of dropping support; I don't remember how that turned out.)
    And Apple, even in their unreleased OS X Mavericks, has ONLY just updated to OpenGL 4.0.
    (Yeah, this is depressing.)

    So the most important OpenGL would be OpenGL ES, which is dominating/monopolizing the mobile world. Why isn't there any update on it?
  • Klimax - Tuesday, July 23, 2013 - link

    OpenGL is implemented by the GPU vendors and connects into the graphics subsystem using the provided interfaces.
  • Krysto - Tuesday, July 23, 2013 - link

    What do you mean? They just updated it to OpenGL ES 3.0 last year. They won't update ES every year.

    That being said, with Nvidia starting to use the full OpenGL in mobile next year, I think Khronos needs to release ES 4.0 next year so the other GPU makers can catch up a bit, since adopting the full OpenGL 4.3/4.4 might be too hard for them. Nvidia had to use its PC architecture to do that, and even Intel's latest Haswell for notebooks and PCs doesn't support more than 4.0, which they barely adopted at the last moment, too.

    So either Khronos gives the others ES 4.0 to work with, or the other mobile GPU makers will be literally *years* behind future Tegra devices, when it comes to graphics capability.
  • przemo_li - Thursday, July 25, 2013 - link

    Geometry shaders and tessellation shaders consume a lot of power, and that is the reason they did not land in OpenGL ES 3.0. It will be interesting to see if Nvidia can come up with a good solution here.

    (Also, Intel is using Mesa drivers for their Android efforts; those are already at full OGL 3.1, and 3.2/3.3 should come this year.)
  • Codeledger - Tuesday, July 23, 2013 - link

    As someone who is following Google's GPGPU efforts with Renderscript, it looks like OpenCL 2.0 and OpenCL SPIR are trying to address some of the issues brought up by the Android team. I would be curious to know if the various vendors are using Renderscript as a test-bed DSL, or whether Android will formally adopt and expose OpenCL in the future.
  • Wwhat - Sunday, July 28, 2013 - link

    New in OpenCL: DRM-type crap

    No need to worry though, we keep the word 'open' in the name...for 'your' convenience.
