99 Comments


  • sprockkets - Monday, March 24, 2014 - link

    Well, in the old days Microsoft's dominance on the desktop made DirectX win. Today that's no longer the case. Apple's iOS and Android both dominate, and unless MS allows DirectX to run on non-Windows devices, Microsoft won't have the clout to force DirectX 12 on us.

    I'm glad this is the case, since it will help prevent a repeat of the late 90s through the middle of last decade, where if you didn't have Windows you were screwed.
    Reply
  • lmcd - Monday, March 24, 2014 - link

    Microsoft's dominance on the desktop still makes DirectX win there. On mobile, small games go straight for OpenGL ES but larger games use third-party engines that handle the divide.

    Also, the way tablets are going, we're going to be back to the 90s soon enough. Intel-based Windows 8 tablets likely take over much of Android's strength in productivity tablets, and Apple will always have the iPad for content consumption and overall luxury.

    Android has had a good run, but counting on Dalvik as the center of your OS (which is bound to the weaknesses of Java) puts a damper on Android power efficiency, performance, and scalability which can't be recovered. Android probably dead-ends somewhere between 5.x and 7.x, where Chrome OS takes over on Google's side and subsequently loses a huge install base.

    Basically, unless Google has an upgrade plan from Dalvik code to Dart code, I would put Microsoft as 1st or 2nd in tablet before 2018. If WP comes together with Windows (and finishes integration) then Windows Phone could start making a similar push.
    Reply
  • valvalis - Monday, March 24, 2014 - link

    Google already has a replacement for Dalvik in 4.4 - ART. Reply
  • lmcd - Monday, March 24, 2014 - link

    As if slowing down the already-torturous Android boot time is a good solution. Even if ART is "ready" by the next point release, I don't see how ART improves the situation dramatically. Reply
  • nico_mach - Saturday, March 29, 2014 - link

    I can't see how it's a problem. Windows' lagginess never hurt its platform once it had dominance. Windows tablets have the same problem as Apple laptops - they don't have the compatible software and probably never will; they're the domain of artists (with the stylus) and Wintel grognards. For all your bluster, PCs with touch aren't selling and tablets are still primarily iOS and Android.

    I'll grant that nothing lasts forever, but your 2018 forecast isn't based on evidence yet, just speculation - speculation that also had Windows Phones dominating mobile and Surface selling like iPads. There's nothing seriously wrong with Windows' platform, just like there was nothing seriously wrong with Apple in the 90s, but it's still a declining platform, caught on the wrong side of change with an outdated business model.
    Reply
  • kpb321 - Monday, March 24, 2014 - link

    I have no idea how Google would plan to upgrade from Dalvik to Dart, considering Dart is a replacement for JavaScript and not Java, which are two different things.

    http://en.wikipedia.org/wiki/Dart_%28programming_l...

    Not to mention that Dalvik isn't even Java in the first place. They borrowed some ideas and basic structure for the APIs but invented their own new environment to run it. They also support native applications if performance is really that critical. Those don't seem to be that popular, so I can only assume that their Dalvik performance is doing okay.
    Reply
  • coder543 - Monday, March 24, 2014 - link

    The idea would be to make Dart a viable option for app development on Android. It doesn't have to rewrite developers' code for them. I would love for Dart to be an option, at least... but all this whining about Dalvik performance isn't worthwhile. People don't write things in Java on Android when they need performance -- they use C++ and native binaries, with maybe some Java for a GUI. Dalvik isn't much of a concern for the viability of Android, and ART will be along shortly enough to replace it. Reply
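    coder543's point about performance-critical code living in native binaries, with Java only for the GUI, can be sketched. Below is a minimal, hypothetical C++ illustration of that split (the class, method, and function names are all invented, not taken from any real app): the hot path is plain native code, and a JNI wrapper exposes it to the Java/Dalvik layer.

    ```cpp
    #include <cstddef>

    // Hot path kept in plain C++, so it runs at native speed regardless of the VM.
    double dot_product(const double* a, const double* b, std::size_t n) {
        double sum = 0.0;
        for (std::size_t i = 0; i < n; ++i) sum += a[i] * b[i];
        return sum;
    }

    #ifdef __ANDROID__
    #include <jni.h>
    // JNI wrapper the Java GUI would call; the com.example.Native class is invented.
    extern "C" JNIEXPORT jdouble JNICALL
    Java_com_example_Native_dotProduct(JNIEnv* env, jclass,
                                       jdoubleArray ja, jdoubleArray jb) {
        jsize n = env->GetArrayLength(ja);
        jdouble* a = env->GetDoubleArrayElements(ja, nullptr);
        jdouble* b = env->GetDoubleArrayElements(jb, nullptr);
        jdouble r = dot_product(a, b, static_cast<std::size_t>(n));
        // JNI_ABORT: release without copying back, since we only read the arrays.
        env->ReleaseDoubleArrayElements(ja, a, JNI_ABORT);
        env->ReleaseDoubleArrayElements(jb, b, JNI_ABORT);
        return r;
    }
    #endif
    ```

    The `#ifdef __ANDROID__` guard keeps the numeric core compilable and testable on any platform; only the thin wrapper depends on the JNI headers.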
  • Flunk - Tuesday, March 25, 2014 - link

    I think you're a bit confused. Dart is a programming language; Dalvik is a virtual machine (and framework). There is no reason that Dalvik applications couldn't be written in Dart. Why they would care enough to write a compiler just to support an extra language that isn't all that different from Java, I don't know. Reply
  • syxbit - Monday, March 24, 2014 - link

    ART not Dart :) Reply
  • coder543 - Monday, March 24, 2014 - link

    lmcd definitely said Dart, referring to the programming language. kpb321's comment was replying to lmcd. Reply
  • lmcd - Monday, March 24, 2014 - link

    I am aware of ART and think that it simply moves the same problems to elsewhere in the operating system (boot time). Dart is the "new VM," and in my opinion the best Google-produced environment for development (hopefully PNaCl takes off with greater language support). Reply
  • Scali - Wednesday, March 26, 2014 - link

    ART moves the problem to install-time. Of course, the first time you reboot your phone in ART-mode, it needs to 'install' all apps. As in: it needs to compile them to native code compatible with the ART runtime. Reply
  • lmcd - Monday, March 24, 2014 - link

    As if I didn't know they were different things -- that's obvious from the get-go. But it's also obvious that Google is a web-based company focusing on two major code techniques for web: PNaCl and Dart. Granted, PNaCl is the more likely target for conversion (maybe try and get Dalvik as another PNaCl target language?), but regardless I don't think Google can possibly keep Android as it is right now, and the transition is unlikely to be smooth.

    And Dalvik uses Java classes and is heavily based on Java. Pretending that they only "borrowed" some ideas is turning a blind eye to Google's Android development. There was a lawsuit with Oracle for a reason (though Java was open-source, I agree with the verdict). But point being, Dalvik uses Java classes and for backwards-compatibility's sake will not leave those behind (though new ones may be recommended).
    Reply
  • coder543 - Monday, March 24, 2014 - link

    How does Microsoft desktop dominance make DirectX win when OpenGL performs better on Windows than DirectX does? Reply
  • inighthawki - Monday, March 24, 2014 - link

    Valve has only proven that modern OpenGL 4.x is faster than a legacy 10 year old DX9 implementation. Reply
  • Friendly0Fire - Monday, March 24, 2014 - link

    Which is amusingly a detail often forgotten. Modern OpenGL *and* DirectX are both faster than their older counterparts. Reply
  • Klimax - Tuesday, March 25, 2014 - link

    And furthermore, nobody could/can replicate their results, because they never released the OpenGL versions of the Source games. (Thus they could just have made it up...) Reply
  • R. Hunt - Tuesday, March 25, 2014 - link

    But they got the headline, which was what they were after. Reply
  • Scali - Wednesday, March 26, 2014 - link

    No, Valve has *claimed* that, benchmarks prove otherwise.... OpenGL is not faster than DX9. Look for the Counter Strike review on Rootgamer for example (can't place a direct link, it seems, won't go through the spam filter). Reply
  • nico_mach - Saturday, March 29, 2014 - link

    Let me say, whatever Valve's motivations, it's unlikely that one is ultimately 'faster' than the other; it's all about the effort put in by developers. That's been proven time and again: outside of JIT compilers, determined programmers can generally get top performance. It's a question of how long it takes, and MS has always catered to that need. Reply
  • Scali - Monday, March 31, 2014 - link

    Graphics APIs can differ quite a bit in terms of supported features and overall design. Perhaps you missed the recent buzz about Mantle and DX12? They significantly reduce CPU overhead and improve multi-threaded performance.
    So yes, it is quite possible to run into the limitations of a given API and/or driver.
    Reply
  • althaz - Monday, March 24, 2014 - link

    Does it perform better? I've seen this claimed many times, but have not seen a demonstration (other than newest OpenGL vs DirectX 9, which is hardly relevant). Reply
  • Alexvrb - Tuesday, March 25, 2014 - link

    I was just thinking this. I can't believe they would brag about beating DX9. What's their next challenge? Glide? Reply
  • tuxRoller - Tuesday, March 25, 2014 - link

    Well, you can see the above slides where the AMD, Nvidia and Intel devs claimed gl is about thirty percent faster, without any optimizations. Reply
  • inighthawki - Tuesday, March 25, 2014 - link

    The entire point of the slides above was how it is faster WITH the new extensions/optimizations... Reply
  • tuxRoller - Tuesday, March 25, 2014 - link

    I wouldn't call it the ENTIRE point since, according to the article, they've also said that unoptimized GL is about 30% faster than D3D.
    Most, if not all, of the features are addressing GL 4.3 (with some extensions).
    Reply
  • Klimax - Tuesday, March 25, 2014 - link

    And where is evidence and data plus description of their test for replication? Reply
  • JarredWalton - Tuesday, March 25, 2014 - link

    In the Github source link at the end of the article, presumably -- I haven't looked to see if any DX code was released, though, which would of course be needed to compare the two APIs. Reply
  • risa2000 - Tuesday, March 25, 2014 - link

    I no longer follow OGL (maybe someone can shed some light on this from today's perspective), but back then (late '80s, early '90s) every Windows NT shipped with an OGL wrapper provided by MS. There could also be a native OGL provider implemented by the hardware vendor (Nvidia, ATI, 3dfx, Matrox, Intel). The app could choose which provider to use, but at least one (MS's) was guaranteed. Performance was not comparable, though.

    Later on (I'm not sure if it was Win2000 or WinXP) this OGL wrapper was removed from the system, as DirectX was supposed to be the "one and only gfx API to rule them all". And it did a pretty good job of forcing a certain level of quality on both ends - apps and underlying hardware. OGL was left to the different HW vendors as an option, but for some reason the quality was usually subpar to the DX implementation. Only some titles/engines were able to achieve comparable performance on both. So since then OGL has been kind of an option (also with different flavors depending on hw), while DX was guaranteed. I imagine this can still apply today as long as there is no guaranteed OGL provider on every Windows incarnation.
    Reply
  • ET - Tuesday, March 25, 2014 - link

    There was never an OpenGL wrapper. In NT there were two driver models, MCD and ICD, where MCD provided some common code in software, so it was easier to implement but slower. That got dropped in later OSes. However, there was nothing to "wrap" in NT's early days, since D3D didn't exist yet.

    IIRC there was some talk of a D3D -> OpenGL 1.4 wrapper in Vista, but it got dropped, and we were left with the old OpenGL 1.1 software implementation or ICD implementation.
    Reply
  • risa2000 - Tuesday, March 25, 2014 - link

    Right. I used the wrong word. It was simply software implementation. I do not remember though if something similar existed in Win9x/ME. Reply
  • Penti - Wednesday, March 26, 2014 - link

    Are you sure you're not confusing it with the built-in OpenGL libraries? MS supplied their own for a while; they never got updated beyond OGL 1.1. However, that's not the driver (M-/ICD in the old days), and you use the WGL, GLX, GLU libraries/header files from Khronos, or a GLUT wrapper, SDL, or some other library to talk to the drivers. Microsoft doesn't provide their own way here. Apple does; that's why it can take a while and an OS update to get the full API moved along. Apple doesn't really stop things like extensions, CUDA or other such things, though. Reply
  • syxbit - Monday, March 24, 2014 - link

    What does Java have to do with anything?
    Android games are coded with the NDK (native code against OpenGL).
    Reply
  • lmcd - Monday, March 24, 2014 - link

    Java has to do with platform dominance -- if Android's Java implementation fails, Android fails, and Microsoft takes 2nd place in tablets and smartphones. Reply
  • R. Hunt - Tuesday, March 25, 2014 - link

    I don't think Android problems have anything to do with Java. Android suffers a lot from sub-par hardware, terrible OEM customizations, and lack of timely updates (if at all) and decent support. Those are not related to Java at all. Reply
  • theromz - Tuesday, March 25, 2014 - link

    Doesn't look like it's going to fail for real, and Windows is so far behind in everything that it would need to explode into bits before they even got close. Reply
  • sprockkets - Monday, March 24, 2014 - link

    "Also, the way tablets are going, we're going to be back to the 90s soon enough. Intel-based Windows 8 tablets likely take over much of Android's strength in productivity tablets, and Apple will always have the iPad for content consumption and overall luxury."

    Why? Because of office which is available on Android as well? Because of the crappy legacy desktop?

    Microsoft will still be around for the desktop. But they won't be able to dictate the market like they used to for the end users.

    "Android has had a good run, but counting on Dalvik as the center of your OS (which is bound to the weaknesses of Java) puts a damper on Android power efficiency, performance, and scalability which can't be recovered. Android probably dead-ends somewhere between 5.x and 7.x, where Chrome OS takes over on Google's side and subsequently loses a huge install base."

    Uh, no. And seeing how Windows is starting over with WinRT, that really isn't any different.

    "Basically, unless Google has an upgrade plan from Dalvik code to Dart code, I would put Microsoft as 1st or 2nd in tablet before 2018. If WP comes together with Windows (and finishes integration) then Windows Phone could start making a similar push."

    Keep dreaming. And Dart has nothing to do with Dalvik. Have a nice day!
    Reply
  • lmcd - Monday, March 24, 2014 - link

    WinRT isn't starting over. It uses a new UI framework on top of .NET, which has been the de facto Microsoft development platform for how long?

    Dart has a ton to do with Dalvik -- Dart is Google's newest VM and likely candidate for app development within Google. It likely becomes the default way to develop Android apps. There's a programming paradigm shift that needs to occur within Google that is only partially underway.

    Office was not relevant to my comment. The desktop is not relevant to my comment, aside from the legacy of Windows RT (and the similarities in code patterns between the desktop and Modern).

    Given how close Samsung is to diving to Tizen, Google's fall-off could happen independently of Microsoft (I'll grant you that), but it is coming.

    Lastly, how is "Uh, no." any sort of rebuttal? Would you have questioned ARM's dominance in tablets a year or two ago? I'm not dreaming, I'm predicting. Google could pull off a better transition than I expect but I don't imagine it soon, or working well.
    Reply
  • Honest Accounting - Tuesday, March 25, 2014 - link

    What is MS going to have to offer (by 2018 or before) which will increase market share? The OS platform is largely irrelevant - especially in the productivity space you reference. The baseline reference there is HTML5 and its associated development tools/environment - i.e. browsers and browser runtime environments. The only performance area in the consumer space is gaming, where the best applications are native. You can talk about the inefficiency of Dalvik, but why can Nokia launch an Android phone (Nokia X) on lower-spec hardware than a device running Windows Phone 8.x (Nokia 520)? The same specs (Cortex A5) should be used for the 52x to highlight the performance advantages of Windows Phone. No? Reply
  • jimjamjamie - Tuesday, March 25, 2014 - link

    The problem with this argument is that Android user experience on bottom-end hardware is mostly unbearable garbage even compared to a feature phone. The Lumia 520 is Windows Phone's best seller because it offers a better experience than its equivalent Androids, unless you pony up more cash for a Moto G. Reply
  • Honest Accounting - Tuesday, March 25, 2014 - link

    Not accurate. The 520 is a $180 device. That's precisely the same price as a Moto G (or Xperia E1 for that matter). There's no point comparing it to a device less than half the price. But again, if it's the good performer that you claim, the thing to do would be to reduce the price (to gain more market share) rather than up the specs - which was a necessity, since the performance was in fact inadequate. Reply
  • Alexvrb - Wednesday, March 26, 2014 - link

    Who's not being accurate? You're thinking of the 620. The 520 is clearly available unlocked for 120-130, and I've seen it on sale for less. Performance is quite good for such a low-end smartphone. Reply
  • Mondozai - Tuesday, March 25, 2014 - link

    "Why? Because of office which is available on Android as well? Because of the crappy legacy desktop?"

    I agree that Windows tablets aren't going to conquer Android tablets. But you're delusional if you think that Windows legacy desktop is somehow bad. Nobody even comes close to the amount of programs available. Everything works. MacOS is better on some things, but it's a common myth that the only thing that holds Mac back is basically gaming and a few other apps. It's a lot more than that.
    Reply
  • Anders CT - Tuesday, March 25, 2014 - link

    That is just flat-out nonsense:

    1) OpenGL is also available on Windows. In fact, if DirectX 12 won't be available on Windows 7, modern OpenGL will have a much wider install base than DirectX on the desktop alone.

    2) Dalvik is not the center of Android. It is a highly efficient execution model that complements native binaries. Most OpenGL applications, and indeed all of the libraries, will always be natively compiled.

    3) Dalvik is in no way bound to Java, is not based on Java, and does not use any Java classes. It is a virtual register machine. Java is a tool that developers can use to target Dalvik.

    4) ART only slows down Android's (quite speedy) boot time a single time, when you first turn on ART and reboot your device. Then it will compile all your dex files and be done with it. ART improves execution efficiency slightly in exchange for a slightly larger memory footprint. Also, ART helps out with x86 and MIPS compatibility without the performance overhead of JIT binary translation.

    5) Dart is not a virtual machine, and cannot replace Dalvik. Dart is a programming language and application framework for client-side web programming. In theory you should be able to make a Dart compiler that targets Dalvik. Go right ahead.

    6) If Microsoft intends to integrate Windows Phone with Windows, they are moving pretty slowly. In fact, running Android apps on Windows is a lot easier than running Windows Phone apps on Windows.
    Reply
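    Point 3 above - Dalvik as a register machine, versus the stack machine of the JVM - can be made concrete with a toy interpreter. The sketch below is purely illustrative and far simpler than real Dalvik bytecode; the instruction set and encoding are invented. The key idea is that a register VM encodes "r0 = r1 + r2" as a single instruction, where a stack VM would need push/push/add/store.

    ```cpp
    #include <array>
    #include <vector>

    // Toy register-machine interpreter. Each instruction names its operand
    // registers directly instead of operating on an implicit stack.
    enum class Op { LoadConst, Add };
    struct Insn { Op op; int dst, a, b; };  // for LoadConst, 'a' holds the constant

    int run(const std::vector<Insn>& code) {
        std::array<int, 8> reg{};  // 8 virtual registers, zero-initialized
        for (const Insn& i : code) {
            switch (i.op) {
                case Op::LoadConst: reg[i.dst] = i.a; break;
                case Op::Add:       reg[i.dst] = reg[i.a] + reg[i.b]; break;
            }
        }
        return reg[0];  // convention for this sketch: result lives in r0
    }
    ```

    For example, the three instructions {LoadConst r1, 20}, {LoadConst r2, 22}, {Add r0, r1, r2} compute 42 - one arithmetic instruction where a stack design would need four.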
  • jimjamjamie - Tuesday, March 25, 2014 - link

    "Microsoft.. are moving pretty slowly."

    Considering how late they were to the smartphone game, I think that's a given. I wouldn't discount the prospect of OS family homogenisation simply because Microsoft is being slow about it; they are just slow in general.
    Reply
  • Mondozai - Tuesday, March 25, 2014 - link

    I think it's less a matter of moving slowly than just trying to manage their main business. Google had no legacy business so they could focus it all on mobile. Microsoft's business in the software space is far more fragmented/divided.

    Another issue is that while casual consumption is done as well if not better on tablets etc., people are starting to find that working on them is horrible and gaming is atrocious except for very simple games. You won't have deep immersive experiences on mobile gaming by and large. It's horrible to type long documents. Etc. So they are held back by this, because the cassandras were right about the death of the desktop. And Apple can afford to discount MacOS because they know it'll always be a marginal player.
    Reply
  • Mondozai - Tuesday, March 25, 2014 - link

    "because the cassandras were right about the death of the desktop." should say "cassandras were wrong". Declining, but now stabilizing. And once places like India gets built out properly, a lot of people will tire of just watching 5-10 inches all the time. Especially for tasks that require sustained concentration over hours of time aside from movies. Reply
  • toyotabedzrock - Tuesday, March 25, 2014 - link

    Android allows native-code apps. The 12.x versions of Opera were native C or C++ and OpenGL ES. And ART only increases the boot time once, to precompile your apps into a new format. Reply
  • TheJian - Wednesday, March 26, 2014 - link

    Chromebooks took 21% of ALL notebook sales already. Let me know when Win8 stops that. I expect to see Denver (and S805/810 etc) come for the low-end desktops next and move up the chain until someone stops them. DX12 isn't until Xmas 2015 if it isn't late (big if), which gives people two years to use OpenGL to drop draw calls, kill Mantle and get a larger audience overall with mobile pushing it. Since most people don't have a DX11 card (China runs 70% XP still, so not even DX10 there), there aren't a TON of DX11-only games even today (you can almost always fall back to DX9.0c instead), and if you want your game to run on mobile+PC it's best to choose an engine and probably avoid DX.

    Microsoft will be 3rd in all things mobile until Google/Apple mess something up badly. DX means NOTHING there; OpenGL is pretty much everything, which is why they are pushing ES 3.1 for it now. Also, MS continues to price themselves to death (see Surface/2/Pro etc), so Windows is going nowhere on mobile fast. A Tegra tab adds a modem and the price goes up $50; MS adds the same one and the price rises $130. See the point? You can't tell me NV is charging them almost triple for a software modem...LOL (an SoC goes for $10-45).

    IMHO MS has screwed themselves into losing DX dominance (two years away, while OpenGL has low-overhead Mantle-like stuff NOW), and in turn gamers, and in the end a lot of the PC market will go to Linux/Android/SteamOS etc., or any combo of them, on a PC running custom ARM chips from Apple, Qcom, or NV - with NV likely winning a SoC battle as the GPU takes over the modem's importance on mobile (who has the 20 years of gaming/GPU+dev experience here? I see only NV raising their hand). Eventually we'll have a 500W PC-like box running these and probably a discrete NV card for the high end, just like now on WINTEL (maybe AMD too one day, if they live that long or get bought).

    NV wants payback for Intel killing their chipset business (the hate is strong between these two, or Intel would have already figured out how to buy them and pay Jen-Hsun a few billion to leave, which would easily triple his current wealth), and they're planning to get it with Denver/Maxwell chips and beyond, moving into your desktops/servers (and higher-end notebooks too). With Google/Valve/Apple helping OpenGL, now is the perfect time (with Win8 hurting PC sales) to strike with in-house 64-bit ARM CPUs and move games to ARM/OpenGL via the mass of mobile gamers, with units already dwarfing PC gamers. They are selling 1.3B units of mobile now and will be over 2B in the next few years. Only ~34mil gamers on Steam (IIRC) says the PC market is pretty small compared to mobile (also only ~50mil discrete cards sold yearly). GDC 2013+2014 show devs have moved massively to mobile, matching PCs at 51/52% making games for both. Consoles, on the other hand, are under 15%, showing devs are leaving them massively (next gen hasn't moved the needle from last year).

    Why can't I run a quad-boot (tri? whatever) of Chrome OS, Android, Linux & SteamOS? All are free and all run on ARM or will; SteamOS is surely coming soon, and NV/Valve working hand in hand on Steam boxes for a few years says it will be on ARM at some point. At some point we'll all just run VMs of whatever we used to do (if needed for X software). We do it already, but I mean even home users will be using them like they do in enterprise now. I guess I pretty much think the exact opposite of what you think ;)

    Free OSes will take over as games move there, and then apps will follow too. The only question I have is how much Wintel will be hurt (and consoles are dead; it's just a matter of when in this cycle). I think Intel does best out of this (they can fab for others in the worst case), while MS has nothing to stop Windows/DX/Office from going down (slow or quick, but going down). An NV buyout could help stop it, but I think NV is a better fit for an Intel purchase, and I'd rather see 14/10nm GPUs shortly, as opposed to MS getting them, who can't do much for my future hardware without fabs.
    Reply
  • gobaers - Monday, March 24, 2014 - link

    An interesting historical note: one of NVIDIA's first successful products, the Riva 128, had a key differentiator in its good Direct3D performance. No, it wasn't as fast as the 3dfx parts, but it was the first serviceable challenger. It combined decent 3D with great 2D in a single card, and the rest is history. The unit I had was this one, reviewed by Anand:

    http://www.anandtech.com/show/21
    Reply
  • dakishimesan - Monday, March 24, 2014 - link

    My first card was the RIVA 128ZX, the higher-clocked version that had more memory (8MB!). Loved that card and playing Moto GT racer for the first time. Reply
  • JarredWalton - Monday, March 24, 2014 - link

    I hail back to the glory days of ISA cards, sadly. My first PC I think had something like the earliest Cirrus Logic controllers - capable of an astounding 640x480 at 256 colors! SVGA FTW. I did manage to scrape together the funds in high school to buy an S3 911, I think. Then around '95 things changed radically and I was able to stop running Windows on top of DOS and start booting straight to Windows 95. Hard to believe all current high school students were born after Win95 launched. LOL Reply
  • gobaers - Monday, March 24, 2014 - link

    I wonder what type of video board my very first computer had. It was a Hyundai AT clone running at 9MHz in 'Turbo' mode, paired with an EGA monitor instead of the CGA that everyone else had.

    I didn't start tinkering with the insides of my machine until the next one, a 386SX-25. The day I bought and installed a SoundBlaster Pro is still memorable for my brother and me. We must have stayed up until 2AM playing with Dr. SBAITSO.
    Reply
  • althaz - Monday, March 24, 2014 - link

    We had a Riva 128, it was relatively poor, but when the TNT came out, that was THE SHIT.

    We added a Voodoo (1) to our PC when I was a kid and just couldn't believe how awesome it was, but it had nothing on the Riva TNT. The TNT was faster than the Voodoo (though not, IIRC, the Voodoo 2, though performance was close), plus it had incredible 32-bit colour, so image quality was FAR superior. The other bonus for nVidia (and their customers) was that theirs was a single-card solution and absolutely mauled 3dfx's single-card solution, the Voodoo Banshee (which was the fastest 2D card on the market then, but had less than Voodoo 1 levels of 3D performance, from memory).
    Reply
  • coder543 - Monday, March 24, 2014 - link

    I've never understood this. Maybe DirectX once held a strong lead in terms of features or performance, but it's been quite a few years since that was the case. If I were starting a new project in the last 5 years, it would seem completely unreasonable to lock myself into DirectX unless I was targeting the Xbox. I'm no Richard Stallman, but locking *your* code into a platform where you have no freedom to move it to something else without rewriting all of the graphics calls just seems like a poor decision. By supporting OpenGL, you not only get arguably better performance, but you can run it on almost every platform in existence. That would seem worthwhile to me, regardless of what Microsoft says. Reply
  • inighthawki - Monday, March 24, 2014 - link

    Any modern game engine is flexible enough to support almost all rendering APIs. Everyone talks about how "Oh, everyone should use OpenGL because it's up to date and supported everywhere!"

    Except that it's not. Sure you have it on PC, and OpenGL ES is on mobile devices. But then you have consoles - PS3, PS4, 360, XBO, Wii - none of which use OpenGL. If you want to effectively target these, you need a flexible design that supports multiple APIs. At that point, most developers choose to support DirectX because Windows is the largest market for their games, DirectX tends to provide more consistent results across different hardware vendors, and DirectX tends to be more pleasurable to work with than OpenGL. DX has a long history of significantly better graphics tools, so a lot of developers will write a DX version first just to get it working, then write an OpenGL layer afterwards.
    Reply
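    The "flexible design that supports multiple APIs" described above usually means an abstraction layer. Here is a minimal, hypothetical C++ sketch (all names invented): engine code targets one interface, and each platform supplies its own backend (D3D, OpenGL, or a console API). A no-op backend stands in for a real one.

    ```cpp
    #include <string>

    // Sketch of a rendering abstraction layer: the engine codes against this
    // interface and never mentions DirectX or OpenGL directly.
    class Renderer {
    public:
        virtual ~Renderer() = default;
        virtual std::string name() const = 0;
        virtual void drawTriangles(int count) = 0;
    };

    // Stand-in for a real D3D/GL/console backend; it just counts draw requests.
    class NullRenderer : public Renderer {
    public:
        std::string name() const override { return "null"; }
        void drawTriangles(int count) override { drawn_ += count; }
        int drawn() const { return drawn_; }
    private:
        int drawn_ = 0;
    };

    // Engine code is backend-agnostic: swapping APIs means swapping the object.
    void renderFrame(Renderer& r) { r.drawTriangles(2); }
    ```

    With this shape, shipping a DX version first and adding an OpenGL layer later is a matter of writing one more subclass, which is why engine developers can afford to target several APIs.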
  • inighthawki - Monday, March 24, 2014 - link

    My optional U in brackets after Wii seemed to denote underlining. Sorry about that. Reply
  • ruthan - Monday, March 24, 2014 - link

    All those other devices run OpenGL because you need an API that is accelerated by the HW, and without Windows you don't have DirectX - sometimes you have no choice and you have to use OpenGL. Reply
  • lmcd - Monday, March 24, 2014 - link

    DirectX tends to get a lot of features beforehand, and there's no outrageous extension system to work with/deal with. Reply
  • errorr - Monday, March 24, 2014 - link

    While desktop games will continue to be a popular driver of some technology the vast majority of people will have a tablet or phone as their only computer. Quality 3D in games requires experience in and knowledge of OpenGL ES. I don't see where MS will have the clout necessary to drive mobile hardware. DirectX will only be as relevant as the console is and mobile hardware will drive future development. No way can they replace OpenGL as long as Android and Apple make the decisions on the api level.

    Also, the paradigm of good coding practices is wildly different for TLDR mobile GPUs, and there is no evidence that MS knows how to create an API that deals with the quirks of those architectures.
    Reply
  • inighthawki - Monday, March 24, 2014 - link

    They've announced DX12 support for mobile, and have backing from Qualcomm, who themselves have (what I assume you meant :)) a TBDR architecture. I sincerely doubt Microsoft is oblivious to how they work. Reply
  • TheJian - Monday, March 24, 2014 - link

    What I don't understand is why a post about NV's blog page is entirely surrounded by AMD-branded material. No bias here, I guess ;) Why is every single piece of info on this page about AMD when the article is covering NV's slides and vids? Why is this in the AMD center and not NV's? Red text for links too? Why not GREEN in an NV article? You should get the point here. Two NV guys, one AMD guy, one Intel guy, but this is AMD news?

    NV not paying enough for an NV CENTER portal to be prominent on this site or is this just Ryan's/anandtech's AMD love shining through? Some things never change? I digress...
    Reply
  • DanNeely - Monday, March 24, 2014 - link

    Anything with an AMD tag gets the AMD branding automatically no matter how peripheral AMD is to it. In at least one past case objections resulted in the AMD tag being dropped. Reply
  • lmcd - Monday, March 24, 2014 - link

    This is as relevant to AMD as anyone. Reply
  • Will Robinson - Friday, March 28, 2014 - link

    Would you like some cheese with that whine? Reply
  • Homeles - Monday, March 24, 2014 - link

    DirectX still has some life in it, but I'd argue its days are numbered. OGL is simply a more universal API. DX will still live on for the Windows GUI, I'm sure.

    With Android and iOS's growing sales, there's no doubt that OGL will overtake DX on the desktop as time goes on. In the meantime, DX12 sounds really great.
    Reply
  • jimjamjamie - Tuesday, March 25, 2014 - link

    If we look at the fact that Microsoft won't even euthanise Internet Explorer, I doubt DirectX is going anywhere for a good number of years yet. Reply
  • sorten - Tuesday, March 25, 2014 - link

    Why would Microsoft euthanize IE? IE's market share is growing and is greater than the share of all of its competitors combined. Reply
  • Gigaplex - Monday, March 24, 2014 - link

    "Even without fine tuning, they note that in general OpenGL code is around 1.3X faster than DirectX"

    A bold claim. What versions are being compared here, on what system? I recall OpenGL having vastly crippled performance on Windows during the DirectX 10/Vista days. Not a lot of Windows applications since then use OpenGL (or at least ones that I follow) so I haven't heard if this has been resolved.
    Reply
  • tuxRoller - Tuesday, March 25, 2014 - link

    Watch the video and read the slides. Reply
  • drzzz - Tuesday, March 25, 2014 - link

    I find this discussion interesting. Something people have failed to realize is that you can write an OGL ES app and it will run on OGL on the desktop. This is what is so great about OGL. Microsoft has DX for PC and XBOne, but currently the two differ in small ways. Sure, by DX12 it should be fully unified, but that is a bit away. With Steam OS and Valve's push onto other platforms, a door is open for more OGL games to go cross-platform.

    Studios are going to go where the money is, and the more platforms they can run on with minimal changes to their code base/engine, the better, so that is going to play a bigger and bigger role in the coming years. Thanks to Android and iOS both being great platforms with large user bases supporting OGL ES, the concept of building cross-platform games is becoming widely accepted. GDC 2014 just demonstrated a large number of games that will release on iOS, Android, and PC, based on OGL/OGL ES. With a 2015 date on DX12 and no DX library for OSes other than Windows, MS has a huge uphill battle.

    Even if the speed were merely equal, game developers increasingly see valid, sustainable support for OGL and cross-platform development.
    Reply
  • mr_tawan - Tuesday, March 25, 2014 - link

    On Windows, many OpenGL ES applications actually run over DirectX (through ANGLE)... one of them is Google Chrome.

    OpenGL and OpenGL ES are different APIs. Though OpenGL ES is fully supported in OpenGL (starting with 4.3 or 4.4, I'm not sure), I think the majority of people are still not able to run OpenGL ES natively on their Windows desktop.
    Reply
  • althaz - Tuesday, March 25, 2014 - link

    It seems pretty clear that Jarred is a massive OpenGL fan. Only OpenGL zealots prefer OpenGL (from a coding point of view) to Direct3D, which is a FAR superior API lately (DirectX 1-6 were utter shite). That's the reason people use it. As a side bonus, if you don't want to spend big on optimizations, performance is generally better.

    The issue you run into is that if you want to do a little more than is standard in terms of optimizations, Direct3D tends to suck a bit compared to OpenGL.

    OpenGL used to be the clear leader in performance and API quality, but it fell catastrophically behind in both. It has pretty much caught up in performance (and is arguably better if you spend the time optimizing), and its API, whilst still not as nice to use as DirectX, is no longer complete pants.

    Overall, I think OpenGL is the better API again now, but using it basically guarantees slightly longer development time, because the optimizations are more essential than in D3D and the lower level of abstraction means there's more to do yourself. The API is also not as nice to use, but for me none of that is as important as the end result, and OpenGL makes it easier to port to other platforms and gives you more scope to improve performance (though a lot of devs do not want to delve into that at all).
    Reply
  • tuxRoller - Tuesday, March 25, 2014 - link

    They fell behind after SGI had its troubles. Khronos has done a pretty good job (obviously we didn't get the clean break we were hoping for with 3.0). Reply
  • JarredWalton - Tuesday, March 25, 2014 - link

    Please, OpenGL fan? I haven't programmed graphics in YEARS -- and I sucked at it. I just find it interesting that NVIDIA, AMD, and Intel all had a panel discussing how OpenGL could dramatically improve performance of graphics in certain areas. Is it true? Well, I don't have the know-how to really figure that out, but developers do, and I'm sure some of them are playing around with this stuff. Ultimately, it's the games that matter, and I've played a lot of wonderful games that are built using pretty simple graphics, so DX vs. OGL vs. Mantle vs. Glide vs. whatever is only a secondary technological consideration. id Software made some great technology with games that were mediocre at best (IMO, naturally). Others (Valve) have stuck with DX9 for a long time and yet still made some incredible games. This is why games are art as much as anything else out there. Reply
  • ET - Tuesday, March 25, 2014 - link

    GDC always has panels about optimisations. Some of the stuff on these slides even applies to Direct3D.

    It's possible that SteamOS has pushed the IHVs to dedicate more time to OpenGL optimisations and that their OpenGL departments are enthusiastic about that, making this session more enthusiastic than in recent years (I haven't tried to compare). However, NVIDIA also had a session about its DX11 driver optimisations, so I'm sure any performance comparison is only valid for a narrow snapshot of hardware and drivers.
    Reply
  • Scali - Wednesday, March 26, 2014 - link

    I find it funny that Intel and AMD are even present at all. Last time I looked, neither offered an OpenGL 4.4 driver for their hardware. Besides, most of the extensions that have now made it into ARB standards and newer OpenGL core versions originated with NVIDIA. Reply
  • ddriver - Tuesday, March 25, 2014 - link

    OpenGL FTW - boycott platform and vendor limited APIs! Don't limit the reach of your code. Reply
  • mr_tawan - Tuesday, March 25, 2014 - link

    My two cents: I think in this context everyone, including Jarred, means DirectGraphics/Direct3D by the word 'DirectX'. It feels wrong to compare DirectX (which is a complete multimedia library) with OpenGL (which is a graphics API). Or perhaps I missed something :-).

    Somehow I'd love to see debates over OpenAL vs. OpenSL ES, or DirectSound/XAudio, too!
    Reply
  • ET - Tuesday, March 25, 2014 - link

    DirectX typically refers to Direct3D. I remember a time when Microsoft tried to make people say Direct3D (sometime in the DX10 days, maybe around the DX11 release), but since even internal Microsoft presentations continued to refer to it as DirectX, Microsoft gave up on that. They can be considered synonymous. Reply
  • Klimax - Tuesday, March 25, 2014 - link

    " Even without fine tuning, they note that in general OpenGL code is around 1.3X faster than DirectX."

    And still no evidence. It doesn't matter who they are; they haven't published evidence or data, so it looks more like: "We want an API which allows us to push proprietary stuff like in the good old 90s." To re-parcel games once again.

    Why do I bring this up? Because that was the main reason for DirectX, and back then the most problematic part of DX was "caps bits". Since DX 10, Microsoft has eliminated most of these stupidities, and thus GPU makers are not happy with this status quo. (Not innovation, but proprietary crap.)

    Note: before some lost soul points to Valve and their crappy PR article comparing DX and OpenGL -- sorry, but it is a very bad comparison, impossible to reproduce, and it only compares old DX 9 to new OpenGL with a new codebase built for it, so it's not even an apples-to-oranges comparison.

    ===

    TL/DR: Back to the 90s and proprietary stuff (aka the extension hell of OGL); no evidence for their claims.
    Reply
  • JarredWalton - Tuesday, March 25, 2014 - link

    The slides have comparisons using apitest, which has its source available on GitHub and is linked at the end of the article. I can't say I tried to download or compile anything (because I've long since given up on doing that sort of thing), but presumably there's code for people to look at and play with. So before crying "foul", look at the source and get it running. I do believe, however, that the "performance increase" claims refer specifically to improvements in the number of draw calls, and there's more to graphics than draw calls, I'm pretty sure. :-)

    Mantle, incidentally, boasts something like a 900% increase in the number of draw calls, but in practice I think it only ends up being 25-35% faster in BF4 -- I'd have to go back and check the figures, but it's definitely not anywhere near 900% faster with Mantle, or even 100% faster. There are many bottlenecks besides the number of draw calls you can execute per second.
    Reply
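The gap between a 900% draw-call improvement and a 25-35% frame-rate gain is essentially Amdahl's law applied to the frame: only the draw-submission slice of CPU time gets faster. A quick sketch (the 30% draw-call share below is an assumed illustrative figure, not a measured BF4 number):

```python
def overall_speedup(draw_fraction, draw_speedup):
    """Amdahl's law: only the draw-call slice of frame time gets faster."""
    new_time = (1.0 - draw_fraction) + draw_fraction / draw_speedup
    return 1.0 / new_time

# Assume draw-call submission is 30% of CPU frame time and an API makes
# it 9x faster: the whole frame speeds up by only ~36%.
print(round(overall_speedup(0.30, 9.0), 2))  # → 1.36

# Even infinitely fast draw calls cap the gain at 1 / 0.70, about 1.43x.
```

With those assumptions, no amount of draw-call speedup can push the frame past ~1.43x, which is why real-world Mantle numbers sit so far below the headline figure.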
  • inighthawki - Tuesday, March 25, 2014 - link

    That's because they're talking about draw calls, i.e. the CPU overhead of issuing them. If your game is completely GPU-bound to begin with, you could improve the CPU performance of draw calls by 10,000% -- heck, they could be completely free -- but you won't see a difference, because your frame is purely GPU-bottlenecked. Reply
  • jwcalla - Wednesday, March 26, 2014 - link

    I think you're a bit confused about the Valve story regarding L4D2 performance. It's true that L4D2 is a DX9 game; however, it's not true that they made a new codebase for their OpenGL port. First, the version of OpenGL they're using is 2.x or, at most, 3.1 -- basically DX9 features. Valve didn't update the engine and graphics capabilities, and that's evident in the system requirements. The performance benefits being discussed here are for OpenGL 4.2 and up.

    And it's not a new codebase. They have an existing OpenGL renderer (for OS X) and a cruddy D3D-to-OGL on-the-fly translation layer, which adds a performance hit.

    The DX path has had years of optimizations and the OGL path has not. They gave up optimizing OGL once they hit the engine's 300 fps limit.

    So we're dealing with a 7-year-old version of OGL, a not completely optimized rendering engine, with a translation layer in between, and they were still doing better than DX9.
    Reply
  • inighthawki - Wednesday, March 26, 2014 - link

    Feel free to sift through the code and prove me wrong, but I highly doubt they would waste their time porting their engine and making a translation layer to OpenGL 2. You, like a lot of people, seem to forget that OpenGL, just like D3D, is an API. You can write a game targeting D3D11 and OpenGL 4.4 but still only use a subset of the hardware feature levels. Reply
  • jwcalla - Wednesday, March 26, 2014 - link

    OpenGL 4.x features are required for the performance we're talking about here. The video cards listed in the L4D2 system requirements are 7-year-old cards -- Shader Model 3.0 equivalent.

    They didn't port Source to OGL. They already had an OGL rendering backend for OS X and slapped a translation layer on top to convert D3D to those OGL calls. That's how they had something up and running so quickly on Linux.

    Valve doesn't even have a pure OGL engine yet.
    Reply
  • tuxRoller - Wednesday, March 26, 2014 - link

    He's right about the version of GL being used. Intel is only now on 3.3, and Steam has been available for many months now. I think when it was first released, Intel was on 3.0/3.1. This is only speaking about Linux, of course. Reply
  • ET - Wednesday, March 26, 2014 - link

    Regardless of the other less than valid points, you're still comparing to DX9, and that's highly irrelevant to the "OpenGL vs. DX" debate. Yes, DX9 had a very high overhead, but DX has moved forward quite a bit since then. Reply
  • tuxRoller - Wednesday, March 26, 2014 - link

    Except that, according to the presentation linked in the article, it is still decently slower than GL. Reply
  • bobvodka - Tuesday, March 25, 2014 - link

    This is all well and good, BUT... unless you are playing on NV hardware, you aren't going to be able to do this.

    At the time of writing, AMD's most recent beta drivers are missing at least 3 of the extensions mentioned there (bindless, buffer_storage and shader_parameters), and have been since the 4.4 spec was released some 8 months ago.

    Intel doesn't support 4.4 either, but that's kind of expected in the graphics world.

    So, right now you are stuck writing 3 PC paths: 4.4; 4.3 plus an AMD extension which is 'like' buffer_storage but without bindless/shader parameters (higher CPU cost); and Intel-compatible.

    And none of it addresses the problem of spending all your time on one thread. Games do not consist of one scene with 1,000,000 instanced objects; they consist of lots of scenes, with different render targets and shaders and data. The fact that GL does not allow command lists/buffers to be built on separate threads for later dispatch hamstrings things going forward, because that magical 'single thread' doing all the work isn't getting any faster.
    Reply
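The multiple-path situation described above typically turns into runtime extension checks in the engine. A minimal sketch of that selection logic (the path names are made up, and GL_AMD_pinned_memory is my guess at the AMD substitute extension meant here, so treat the exact strings as assumptions):

```python
def has_ext(extensions: str, name: str) -> bool:
    """Full-token match against a space-separated GL extension string."""
    return name in extensions.split()

def pick_path(extensions: str) -> str:
    """Choose a render path from the advertised GL extensions."""
    if has_ext(extensions, "GL_ARB_buffer_storage") and \
       has_ext(extensions, "GL_ARB_bindless_texture"):
        return "gl44-bindless"   # full 4.4 fast path (NV today)
    if has_ext(extensions, "GL_AMD_pinned_memory"):
        return "gl43-amd"        # the 'like buffer_storage' AMD path
    return "baseline"            # Intel-compatible fallback
```

The token split matters: a naive substring test would wrongly match `GL_ARB_buffer_storage` inside a longer extension name.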
  • jwcalla - Wednesday, March 26, 2014 - link

    True, but video game rendering is ultimately a synchronized/serialized process. I'm not saying more threads don't matter, but ultimately all that stuff has to be synchronized, and synchronized very frequently.

    Video games are simply not truly parallel operations.
    Reply
  • inighthawki - Wednesday, March 26, 2014 - link

    Game rendering is only synchronized that way because no modern API provides a mechanism to do otherwise. OpenGL's multithreading model is basically the same as the one D3D11 introduced: multiple contexts. That model requires a significant amount of additional work for little or no (and occasionally negative) improvement, due to the overhead.

    DX12 looks to be solving this issue with command lists and bundles. They show nearly linear scaling across processors for submitting workloads, and they do so with real-world demos: actual games and benchmarks like 3DMark that have been ported to DX12.
    Reply
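The pattern behind command lists (record on many threads, then submit in order on one) can be sketched outside any real graphics API; every name below is made up for illustration and is not a D3D12 or OpenGL call:

```python
from concurrent.futures import ThreadPoolExecutor

def record_command_list(chunk):
    # Pretend to record draw commands for one slice of the scene.
    return [f"draw({obj})" for obj in chunk]

def render_frame(scene, workers=4):
    # Split the scene into roughly equal chunks, one per core.
    chunks = [scene[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Recording happens in parallel on the worker threads...
        lists = list(pool.map(record_command_list, chunks))
    # ...but submission to the GPU queue stays serialized and ordered,
    # so the expensive part scales with cores while the final queue
    # remains deterministic.
    queue = []
    for cl in lists:
        queue.extend(cl)
    return queue
```

The CPU-heavy recording step is what scales nearly linearly; the cheap, ordered submission at the end is the only serial portion left.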
  • jwcalla - Wednesday, March 26, 2014 - link

    No, game rendering is synchronized because you ultimately have to synchronize the video, audio, input, AI, networking and everything else or you're going to have one messed up experience. It's not like you can just go off and do AI stuff without considering the player's input, or render video frames without considering AI. Just like A/V sync -- it's synchronized. All of that stuff has to eventually be funneled down one pipe for the final presentation. Reply
  • inighthawki - Wednesday, March 26, 2014 - link

    I think there was a misunderstanding: I thought you were referring to the rendering synchronization itself. Once you have all the dependencies needed for rendering, it is possible to split the rendering work nearly equally across cores, but modern game engines do not, because none of the existing APIs handle it very well. Reply
  • jwcalla - Wednesday, March 26, 2014 - link

    Yeah, you can get a ton more draw calls by splitting the work up across all the cores like that. I think that helps a lot, but even then, the actual frame rendering has to be serialized (you can only render one frame at a time) and in order. It can help in CPU-limited scenarios where the GPU becomes starved (as we see with Mantle).

    The OGL approach presented here is somewhat different and intriguing. Instead of trying for more draw calls, they're using the multidraw concept to bundle more visual updates into a single draw call. So they're trying for fewer draw calls, where each call packs a bigger punch. In theory this should alleviate pressure on the CPU. I think this approach has better advantages for mobile platforms.
    Reply
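The multidraw idea in the comment above boils down to packing one parameter record per object into a buffer and issuing a single glMultiDrawElementsIndirect over the whole array. A sketch of just the record packing (no GL context involved; the 5 x uint32 record layout follows the ARB_multi_draw_indirect spec, but the helper itself is hypothetical):

```python
import struct

# One glMultiDrawElementsIndirect record: count, instanceCount,
# firstIndex, baseVertex, baseInstance (5 x uint32, 20 bytes).
CMD = struct.Struct("<5I")

def pack_commands(draws):
    """draws: list of (index_count, first_index) tuples, one per mesh.

    Returns the byte blob an engine would upload to a
    GL_DRAW_INDIRECT_BUFFER and then render with ONE
    glMultiDrawElementsIndirect call instead of len(draws)
    separate glDrawElements calls."""
    blob = b""
    for i, (count, first_index) in enumerate(draws):
        # instanceCount=1, baseVertex=0; baseInstance=i lets the shader
        # fetch per-draw data via an instanced vertex attribute.
        blob += CMD.pack(count, 1, first_index, 0, i)
    return blob
```

The CPU cost is now one API call plus a buffer upload, however many meshes are in the list, which is exactly the "fewer calls, bigger punch" trade-off.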
  • LennyZ - Tuesday, March 25, 2014 - link

    Long live the Banshee! Reply
  • ericore - Sunday, March 30, 2014 - link

    "Many of the mainstream laptops I test can hit 30-40 FPS at high quality 1080p settings, but there are periodic dips into the low 20s or maybe even the teens. Double the frame rates and everything becomes substantially smooter."

    You are thinking of DirectX, OpenGL has much more consistent frame rate.
    Also, there is more than the programmers involved in game engines.

    You shouldn't have to restrain artists. Imagine an artist producing the sickess looking character. He brings it to the developer team, and the developer team says, "We got two options, either we incorporate the character as is and it uses 50% of the graphic card or we take away its charm so as to be able to create a playable game." "This was a hero right, not an enemy?" Enemy. "Well I guess we could use it as a boss, but there's no way we are using this as a mainstream high quantity enemy model." Keep drawing, thank you. "That bast****."

    That's an important reason behind a faster less approachable engine and low level access to GPU for faster draw calls which can accommodate for greater polygon count.
    Reply
  • bluevaping - Monday, March 31, 2014 - link

    People who play games don't care which API games are created on. They care whether a game is available to play on what they've got. There's nothing that interesting on current Windows that makes me want to build a system for gaming. I don't like the modern UI, 16:9 screens, touch screens, gamepad support being left out of some games, and the lack of smaller, higher-quality monitors. Then there are the downsides of upgrading just for minor features, or to bring features back. Reply
  • Jammrock - Tuesday, April 01, 2014 - link

    DX12 is designed to work across the entire Windows platform, from desktops to Xbox One to Windows Phone -- one API to cover the entire spectrum. That should make designing games and game engines for the whole Windows platform easier.

    Most Android and iPhone games use pre-built engines like Unity. Heck, most indie desktop titles use Unity these days. As long as the key cross-platform engines are ported to DX12's low-level APIs, the debate will only be relevant to tier-1 game devs who still build their own engines. And all of them have the resources to build for any platform they deem profitable.
    Reply

Log in

Don't have an account? Sign up now