Some Thoughts on Apple’s Metal API

by Ryan Smith on 6/3/2014 10:30 AM EST



  • Tangey - Tuesday, June 03, 2014 - link

    Any thoughts that Metal might be a formalisation and opening up of how Apple has been driving graphics within iOS? One assumes that iOS doesn't use GLES 2.0 for its own internal UI, composition and transitions, complex blurring, translucency, etc.

    So perhaps this is taking their own internal graphics drivers, wrapping them in a close-to-the-bone API, and tidying it all up for public consumption.
  • Ryan Smith - Wednesday, June 04, 2014 - link

    That's a very good question, and one I hadn't considered yet. It seems unlikely, but it certainly can't be ruled out at this time.
  • przemo_li - Wednesday, June 04, 2014 - link

    That one is easy to answer. This API is tied to the A7 SoC; NO OTHER chips will support it. So iOS 8, being supported on many SoCs, cannot rely on it. Also, developing such an API takes time, even if you only need to target one ISA (the instructions the GPU can understand), with a good compiler team (LLVM is run by Apple ;) ), and with direct access to specs and support from the GPU vendor (nothing less would satisfy Apple, and PowerVR is still their supplier...). A few months won't cut it.

    And that means OpenGL ES is still the de facto standard for the UI engine.

    Now, if Apple phases out support for older SoCs, then switching to Metal becomes an option.
  • Guspaz - Wednesday, June 04, 2014 - link

    Apple has not, to my knowledge, said that Metal is going to be restricted to A7 or above, merely that it is designed for it. It's entirely possible that older SoCs, which use GPUs from the same vendor, may also be supported in some manner.
  • przemo_li - Thursday, June 05, 2014 - link

    Hmm. Wanted to write A7 and above.

    On stage we saw PR people. They won't tell you that their products won't support something... not at a show designed to make their product shine.

    On the other hand, if Apple wants to support older chips for quite a while, they may consider such a move.

    However, Metal's shaders and API are designed around some baseline capabilities that must be present.

    Compute may be lacking on older chips.
  • jameskatt - Monday, June 16, 2014 - link

    Apple is restricting METAL to the A7, A8, and other future Apple-designed ARM chips. This allows Apple to unleash the potential of these faster chips.

    It is speculation whether Apple will support the A6 and earlier chips with METAL or continue to restrict them to OpenGL. I bet Apple keeps METAL for A7 and future chips. This obviously is an incentive for consumers to purchase newer Apple hardware and for developers to keep up.
  • jameskatt - Monday, June 16, 2014 - link

    Note that Apple is PART OWNER of Imagination Technologies. Further, Apple is fully licensed to create its own variants of Imagination Technologies' GPUs, just as it can create its own ARM CPU variants.

    Since Apple controls the CPU, GPU, METAL, and the OS, obviously going forward, anything with an A7 and all future Apple CPU/GPUs are going to support METAL.

    More interesting would be for Apple to create METAL for Intel, AMD, and possibly nVidia GPUs. This would make GPU computing far more efficient on Macs and would place Macs on a more level footing for games with DirectX and Windows.
  • jfraser7 - Tuesday, October 27, 2015 - link

    Metal actually does work with Nvidia and AMD GPUs; I just don’t know which ones.
  • ScottSoapbox - Tuesday, June 03, 2014 - link

    So what would be a potential compute task (use case) on, say, the next iPad?
  • Morawka - Tuesday, June 03, 2014 - link

    Video transcoding?
  • ravyne - Tuesday, June 03, 2014 - link

    Real-time facial recognition or augmented reality, image filtering -- these are examples of apps that could be shoe-horned in through GLES before without too much trouble, but proper compute support is always better. Some things, like facial recognition, can be sped up by doing broad-phase detection on the CPU and narrow-phase detection on the GPU, but you need tighter integration of the type Metal appears to provide to make this work out.

    GPUs are great for lots of useful things; not every application fits, but it's less a question of "well, what can you do with it?" and more like "Great news! No more pounding nails with wrenches! Hammers for everyone!"

    But really the killer app for GPU compute on mobile is power consumption. In general, for the kinds of jobs that GPUs are good or even middling at, they can spend so much less time doing them that you get better performance *and* a net power win, while also freeing the CPU to concentrate on the jobs it's best at.
  • NEDM64 - Saturday, June 07, 2014 - link

    Voice analysis for user recognition?
    Speech analysis?
    Video analysis?
    Radio signal analysis?
    Less laggy drawing?

    For most people, I think there are even more applications here than in desktop GPGPU.
  • Flunk - Tuesday, June 03, 2014 - link

    You guys make a good point: when is Apple buying Imagination Technologies? It's chump change to them at this point.
  • tipoo - Tuesday, June 03, 2014 - link

    They have a big enough stake to push them around; why buy the cow when you can get the milk for free?
  • testbug00 - Tuesday, June 03, 2014 - link

    No need to buy them. They can just do what they did for other parts of the SoC.
  • andrewaggb - Tuesday, June 03, 2014 - link

    Meh, it feels like the whole world is going backwards. I liked the idea of Mantle as a cross-platform low-level API, but we haven't seen any Linux support yet, or Mac support, or nVidia or Intel support. So that's not likely to improve things. DirectX 12 doesn't help Linux or Mac... Now Metal, which is all good and fine, but definitely won't be supported by anybody else.

    I suppose competition is a good thing, and the best ideas from all of these will likely make it into DirectX or OpenGL or replace them, but it's going to be a mess short-term, I think.
  • darwinosx - Tuesday, June 03, 2014 - link

    There is every reason to believe Metal will be adopted by others. DirectX has been horrible since forever and OpenGL is bloated and slow.
  • przemo_li - Wednesday, June 04, 2014 - link

    1) It's a rough equivalent of OpenGL ES 3.1 (which is not yet released as a final specification).
    2) Apple may have some patents on Metal... (and we all know how likely they are to share).
    3) MS needs DX as vendor lock-in for Windows, Windows Phone, and Xbox. They won't adopt Metal, and will make sure it isn't present by default on Windows. (Drivers from Windows Update still lack OpenGL, even though you can't get such OpenGL-less drivers from Nv/AMD/Intel directly...)
    4) Valve needs something equivalent to DX11 right now, with geometry and tessellation shaders.
    5) Metal is nonexistent, and tool-less, on anything other than OS X.
  • tipoo - Wednesday, June 04, 2014 - link

    How and why do you think Metal, an Apple iOS API, would be adopted on other platforms? I bet it will always be iOS (or iOS and OSX) exclusive.
  • Kevin G - Tuesday, June 03, 2014 - link

    The weird thing is that Apple has been hiring lots of GPU hardware engineers from AMD. This is doubly weird considering that Apple is on good terms with Imagination Technologies and extended their contracts for several more years early in 2014. The arrival of Metal only adds another piece to an already complex puzzle of what Apple is doing with graphics.

    The other variable with Metal is whether it will arrive on the OS X side of things down the road. Apple is a large enough OEM that they can just write the check to Intel, AMD, and nVidia for Metal drivers, and they'll do it. The thing is that Apple still seems committed to OpenGL on the OS X side.
  • eanazag - Tuesday, June 03, 2014 - link

    With the A7 SoC Apple could produce an iOS/MacOSX laptop, they just need a compiler (Xcode) that will recompile iOS apps to x86 and OSX apps to ARM [this may be feasible today to an extent]. I think that is the direction they are going. We will likely see their product stack start to blend.

    GPU compute makes more sense from an image & video editing standpoint, which still kinda sucks on iOS. This would be a necessity for an iOS laptop from an Apple customer-expectation perspective.
  • odaiwai - Wednesday, June 04, 2014 - link

    iOS apps to x86 is easy: the iOS Simulator in Xcode does this now.
  • przemo_li - Wednesday, June 04, 2014 - link

    iOS is completely unsuitable for large screens.

    The main roadblock is the complete lack of a good scaling mechanism for the UI.

    Apple's solution up to this point was to double the screen resolution and let devs quadruple the size of their textures/images.

    That won't cut it for BIG screens, as in 11, 13, or 15 inches.
  • testbug00 - Tuesday, June 03, 2014 - link

    Making a GPU. First they licensed everything, then they made the uncore, then they made the CPU; next comes the GPU... Well, actually, fabbing it themselves might come first.
  • tipoo - Wednesday, June 04, 2014 - link

    Fabbing it... themselves? They'd have to buy or build a large fab first, and we haven't heard of either. I think that would come far in the future, if ever; a custom GPU would come first.
  • Krysto - Wednesday, June 04, 2014 - link

    If I were Apple, and making my own CPU core, I'd definitely try to control the graphics side too, which means either buying out Imagination (for a price much smaller than they paid in that ridiculous Beats deal) or making my own GPU.

    That way they can fully control their future. No Intel, no Imagination to mess things up for them. Their chips will be exactly how they want them, when they want them.
  • WaltFrench - Wednesday, June 04, 2014 - link

    The trouble with buying Imagination is that (according to the industry info I am looking at) Apple is barely in Imagination's top-10 customer list, at less than 1% of Imagination's sales. These data aren't totally trustworthy, but Apple as a customer is buying expertise that Imagination can afford because of the *other* 99% of their sales.

    And of course, if Apple bought them, the other customers would need to get the product elsewhere. Worst case, a new competitor, with 100X the sales of the Imagination Division at Apple Inc, would arise; this new, well-funded competitor could well become much more successful.

    So presumably Apple is watching out that another big customer (LG, Marvell, and Intel show as its top 3) doesn't snap Imagination up and cut off Apple. Maybe they have even paid for the right to be offered a seat at an auction. But buying Imagination could be more problematic than the current, very successful arrangement.
  • richardw115 - Wednesday, June 04, 2014 - link

    According to Imagination Technologies' 2013 annual report, revenue from their largest customer was >33% of total sales. Although not named, that customer is almost certainly Apple.
  • HisDivineOrder - Friday, June 06, 2014 - link

    I think Apple would hire GPU hardware engineers because Apple is always of the belief that in the long term it's better to have the option to make your own than rely on someone else to make something for you.

    That is, to get better prices from Intel, they love having their own huge CPU design team around that they can always point to and say, "So Intel, give us a great deal... or we'll take MacBook Airs to ARM." Boom, better deal.

    "So Imagination, give us a better deal or we'll just make our own GPU." Boom, better deal.

    Haven't you ever noticed how, 2-6 months before a major announcement of a contract renewal with Intel or nVidia/AMD for PCs, you'll start hearing rumors about a major swap? Then suddenly, like magic, the status quo remains and you'll hear about record profits for Apple's hardware because of "fantastic pricing" given to Apple because of "volume"?

    This is the genius of Tim Cook, and this is why Jobs made him CEO. He was, and is, a master at procuring great pricing on hardware, and one of the ways he does that is by ensuring Apple always has a long-term option away from each major source of parts.

    That said, this is also where Apple slipped up with Samsung because they tried that strategy with Samsung before suing them and Samsung shrugged and said, "You need US more than WE need YOU." And by and large, they were right.
  • gss4w - Tuesday, June 03, 2014 - link

    "OpenGL ES shader language (GLSL ES), while it’s initially promising since both languages are based on C++." A somewhat minor point, but I think GLSL is based on C, not C++. Metal looks like it adopts a number of C++ features (such as templates and overloading) that are not available in GLSL ES.
  • Ryan Smith - Wednesday, June 04, 2014 - link

    Officially GLSL ES is considered to be based on GLSL and C++.

    "The OpenGL ES Shading Language (also known as GLSL ES or ESSL) is based on the OpenGL Shading Language (GLSL) version 1.20. This document restates the relevant parts of the GLSL specification and so is self-contained in this respect. However GLSL ES is also based on C++ (see section 12: Normative References) and this reference must be used in conjunction with this document."
  • Mondozai - Tuesday, June 03, 2014 - link

    Ryan, superb write-up. Far better than anything I've read on any so-called dedicated Apple sites.
  • tipoo - Tuesday, June 03, 2014 - link

    Interesting, so desktops are getting around 4000 draw calls per frame you say? Because that's exactly the number they said was possible with Metal on the A7.
  • Ryan Smith - Tuesday, June 03, 2014 - link

    Depending on what you're doing, you can regularly get up to several thousand draw calls on a PC today. But that's because you have such a powerful CPU (up to 4GHz Haswell) punching through any overhead.
  • tipoo - Wednesday, June 04, 2014 - link

    Ah, I see. I misread you as saying we take 4000 draw calls per frame for granted.
  • uhuznaa - Tuesday, June 03, 2014 - link

    Bare metal + just very few iterations of metal = win

    Apple is in a very favored position here.
  • grahaman27 - Tuesday, June 03, 2014 - link

    I wonder how the performance compares to OpenGL 4.4, considering OpenGL is adopting low-level "zero overhead" support as well.
  • ltcommanderdata - Tuesday, June 03, 2014 - link

    I suppose with controller support and now a low level GPU API an Apple TV console is looking more and more viable.

    I do wonder, though: if Apple is supposed to be so committed to ImgTech and Series6, how come they haven't adopted the higher-quality PVRTC2 texture format? It is supported on Series5XT and up, which is the baseline for iOS 8. ASTC is probably the better format for the future, but with no support in Series6, broad usability in iOS is still a while away. Their not adopting another ImgTech texture format in the meantime makes it seem like they don't want to tie themselves to ImgTech more than they have to.
  • Krysto - Wednesday, June 04, 2014 - link

    "New to Series6XT is support for Adaptive Scalable Texture Compression (ASTC), a variable block size texture compression algorithm being blessed and promoted by Khronos as the next generation of texture compression for both mobile and desktop devices"
  • Tangey - Wednesday, June 04, 2014 - link

    That is Series6XT, which has not been implemented in any SoC yet. Series6, such as the 6430 used in the A7, doesn't support ASTC. And even if they launched a product tomorrow with Series6XT, only that product would be able to support ASTC-compressed textures, whereas PVRTC2 capability is in the last 2-3 A-series chips (but not supported by iOS).
  • Thermogenic - Wednesday, June 04, 2014 - link

    Apple TV could soon become a somewhat competent console gaming platform.
  • Ferazel - Wednesday, June 04, 2014 - link

    While removing the CPU overhead of draw calls is awesome, it is not the only limiting factor in graphics; fill rate is far more important. Yes, in CPU-limited situations this is very helpful to relieve some strain on the CPU from sorting/validating geometry, and SoC graphics is definitely more CPU-limited than desktops. However, let's keep some perspective here until we see some ACTUAL GPU benchmarks. It doesn't change everything. Mantle only pushed an additional 10% framerate on most well-specced machines. I'm going to guess that we're probably going to see at most 20% for Metal benchmarks compared to OpenGL, so don't think it's going to be the 2x or 100x that Apple advertised. There are other factors in graphics rendering that aren't going to benefit from additional draw calls.
  • iwod - Wednesday, June 04, 2014 - link

    This also comes amid the rumors that Apple is building its own GPU as well.
  • Krysto - Wednesday, June 04, 2014 - link

    What's Khronos' response to this? Will they support C++11 in OpenGL ES 4.0?
  • djgandy - Wednesday, June 04, 2014 - link

    Their response? Keep bickering on the private message boards, do nothing for 5 years. ES will become obsolete and they will keep churning out dull feature after dull feature for a poorly supported API with lacklustre documentation and tools. They'll finally merge GL and GLES when nobody cares any more.

    Khronos has had years and years to address problems with GL and modernise it, and they had a fresh start with ES where they could have done the same, or with ES2. They are scared to change anything for some reason (apparently developers are unable to refactor/reimplement code for new APIs should they wish to use them).
  • przemo_li - Wednesday, June 04, 2014 - link


    OpenGL ES was and still is a good specification.

    OpenGL 4.x is a series of very good specifications.

    Khronos is doing a good job. Yes, AMD and Apple may be doing an even better job right now (MS will just tell fairy tales for the next 14 months...), but that pressure should help Khronos focus more.
    Also, it may be important to note that Khronos is trying to solve the driver overhead problem from a different perspective. OpenGL 4.4 solves many of the problems that Apple/AMD are trying to solve.
  • djgandy - Wednesday, June 04, 2014 - link

    Except hardly any of the installed base supports OpenGL 4.4, and developers don't want to target some mishmash of poorly supported extensions and drivers, so they just go for the lowest common denominator. There have been tonnes of ramblings about this lately. Now, this is not directly OpenGL 4.4's fault; the years of mistakes and inaction before it are why it has barely any interest now. Khronos gives too much choice because they think 100 ways of doing something is good and keeps everyone happy.

    ES is a good spec for a 3D rendering abstraction, but it is a poorly implemented API. Bind to modify, barely any function returns anything but void, thousands of untyped enums, integers for objects. Is this an API that has ignored everything that has happened in software for the past 25 years?!

    The API should probably be split into a low level with less driver babysitting and a higher-level library. The latter is probably unnecessary, though, as plenty of frameworks already exist that do that job for those who don't want such fine-grained control.
  • przemo_li - Wednesday, June 04, 2014 - link

    Windows and Linux AMD/Nvidia drivers both support OpenGL 4.4.


    (Source: Wikipedia)
    OpenGL 4.4

    Release Date: July 22, 2013[29]

    Supported video cards: Nvidia GeForce 400 series, Nvidia GeForce 500 series, Nvidia GeForce 600 series, Nvidia GeForce 700 series, and Nvidia GeForce 800 series, AMD Radeon HD 5000 Series, AMD Radeon HD 6000 Series, AMD Radeon HD 7000 Series, AMD Radeon HD 8000 Series and AMD Radeon Rx 200 Series.

    That is quite a lot.

    More than Mantle currently!

    Now, to be fair, Metal targets OpenGL ES 3.1-capable hardware (no geometry, no tessellation shaders). And we have to wait and see how good AZDO is for OpenGL ES 3.1 (the answer may come from Google, or from IHVs; either way that would be a good development).

    So no! OpenGL 4.4 IS available right now, on hardware that is capable of this new style of computing.
  • djgandy - Wednesday, June 04, 2014 - link

    Read again: installed user base. You want to sell your game/app to the most people possible. If you want reach, GL 3.2 is probably the latest you can go. Most of the market is Intel GMA or other cheap low-end cards.

    And 4.4 still suffers from the things I mentioned above. The actual API is a complete turd. They are going to make it even worse soon with a gazillion more type-less entry points and ways to do the same thing.
  • jwcalla - Wednesday, June 04, 2014 - link

    This criticism is really curious. Developers with all APIs have to target the "lowest common denominator". How long were games still focusing on D3D9 or, worse, the equivalent console hardware? Can I get Mantle on pre-GCN cards? Will Metal be available for pre-A7 SoCs?

    What is the deployed base of OGL 4.4 vs Mantle vs Metal right now?
  • jwcalla - Wednesday, June 04, 2014 - link

    There are no technical reasons for this API.

    The reasons are:

    - vendor lock-in
    - product differentiation

    That's what all these proprietary APIs are about.
  • iAPX - Thursday, June 05, 2014 - link

    "Apple has never implemented OpenCL or any other GPU compute API on iOS thus far"
    OpenCL has been implemented on iOS for years, and is available through a PRIVATE API, so it's unavailable to App Store apps, but you can check on GitHub and run OpenCL code on a dev iOS device, or even on a jailbroken device.
  • ryszu - Saturday, June 07, 2014 - link

    There's no OpenCL private API available on iOS.
  • stingerman - Tuesday, March 31, 2015 - link

    Example of using Private API:
  • darkdriving - Tuesday, August 11, 2015 - link

    Would the implementation of Metal have a noticeable effect on things like video encoding, real-time playback of 4K (and higher) footage, and effects in an application like Final Cut Pro X? If so, what other benefits might one working in the post-production field hope to gain?
