Video Decode

One of the stones we've thrown at NVIDIA is the lack of High Profile H.264 decode support. Tegra 2 can decode Main Profile H.264 at up to 20Mbps, but throw any High Profile 1080p content at the chip and it can't handle it. This is a problem because a lot of video content out there today is High Profile, high-bitrate 1080p H.264. Today, even on Tegra 2, you'll have to transcode a lot of your 1080p video content to get it to play on the phone.
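
As a concrete illustration, here's a minimal sketch of the kind of transcode step Tegra 2 owners are stuck with today. The ffmpeg/libx264 options shown are standard ones; the file names and exact rate-control settings are our own hypothetical choices, tuned to fit Tegra 2's Main Profile, 20Mbps ceiling.

```python
import subprocess

# Hypothetical input/output paths -- substitute your own files.
SRC = "movie_1080p_high.mkv"
DST = "movie_1080p_main.mp4"

# Tegra 2 only decodes Main Profile H.264 at up to 20Mbps, so a High
# Profile source has to be re-encoded down into that envelope. These
# are standard ffmpeg/libx264 flags; tune preset/quality to taste.
cmd = [
    "ffmpeg", "-i", SRC,
    "-c:v", "libx264",
    "-profile:v", "main",   # drop from High Profile to Main Profile
    "-level", "4.0",
    "-maxrate", "20M",      # stay under Tegra 2's 20Mbps ceiling
    "-bufsize", "40M",      # VBV buffer for the rate cap
    "-c:a", "copy",         # audio is usually fine as-is
    DST,
]
subprocess.run(cmd, check=True)
```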

With Kal-El, that could change.

NVIDIA's video decoder gets an upgrade in Kal-El to support H.264 at 40Mbps sustained (60Mbps peak) at resolutions up to 2560 x 1440. This meets the bandwidth requirements for full Blu-ray disc playback. NVIDIA didn't just make the claim, however; it showed us a 50Mbps 1440p H.264 stream decoded and output to two screens simultaneously: a 2560 x 1600 30" desktop PC monitor and a 1366 x 768 tablet display.
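
To put those numbers in context, here's a quick back-of-the-envelope check. The limits come straight from NVIDIA's claims; the helper function itself is purely our own illustrative invention.

```python
# Kal-El's claimed H.264 decode envelope (per NVIDIA).
SUSTAINED_BPS = 40e6          # 40Mbps sustained -- Blu-ray's max video rate
PEAK_BPS = 60e6               # 60Mbps peak
MAX_W, MAX_H = 2560, 1440     # 1440p

def within_envelope(width, height, avg_bps, peak_bps):
    """Hypothetical helper: check a stream against the claimed limits."""
    return (width <= MAX_W and height <= MAX_H
            and avg_bps <= SUSTAINED_BPS
            and peak_bps <= PEAK_BPS)

# NVIDIA's 50Mbps 1440p demo clip sits above the 40Mbps sustained figure
# but inside the 60Mbps peak -- fine as a short stress-test burst, while
# a full-length movie would need to average 40Mbps or less.
print(within_envelope(2560, 1440, 40e6, 50e6))  # True
print(within_envelope(2560, 1440, 50e6, 50e6))  # False: sustained rate too high
```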

Did I mention that this is 12-day-old A0 silicon?

Kal-El also supports stereoscopic 3D video playback, although it's unclear to me what the SoC's capabilities are for 3D capture.

I asked NVIDIA if other parts of the SoC have changed, particularly the ISP, as we've seen in both our Optimus 2X and Atrix 4G reviews that camera quality is pretty poor on the initial Tegra 2 phones. NVIDIA stated that both ISP performance and quality will go up in Kal-El, although we don't know any more than that. NVIDIA did insist that its own Tegra 2 development platforms have good still capture quality, so what we've seen from LG and Motorola may just be limited to those implementations.

76 Comments

  • Lucian Armasu - Wednesday, February 16, 2011 - link

    They're working on a customized ARM chip for servers, called Project Denver, which will be released in 2013. It's mostly focused on performance, so they'll make it as powerful as an ARM chip can get around that time. It will also be targeted at PCs.
  • Enzo Matrix - Wednesday, February 16, 2011 - link

    Superman < Batman < Wolverine < Iron Man?

    Kinda doing things in reverse order, aren't you, Nvidia?
  • ltcommanderdata - Wednesday, February 16, 2011 - link

    It's interesting that nVidia's Coremark slide uses a recent GCC 4.4.1 build on their Tegras but uses a much older GCC 3.4.4 build on the Core 2 Duo. I can't help but think nVidia is trying to find a bad Core 2 Duo score in order to make their own CPU look more impressive.
  • Lucian Armasu - Wednesday, February 16, 2011 - link

    I think you missed their point. They're trying to say that ARM chips are quickly catching up in performance with desktop/laptop chips.
  • jordanp - Wednesday, February 16, 2011 - link

    On the Tegra roadmap chart... it looks like that curve is leveling off at 100 with STARK... reaching the limits of the 11nm node?!
  • Lucian Armasu - Wednesday, February 16, 2011 - link

    I think they'll reach 14nm in 2014. IBM has formed a partnership with ARM to make 14nm chips.
  • beginner99 - Wednesday, February 16, 2011 - link

    ... in a mobile phone? Most people only have two cores in their PC. Agreed, those two are much more powerful, but still, it will end up the same as on the PC side: tons of cores and only niche software using them.

    I still have a very old "phone only" mobile. Yesterday I had some time to kill and looked at a few smartphones, and I saw exactly what someone described here: they all seemed laggy and choppy, except the iPhone. I'm anything but an Apple fanboy (more like the opposite), but if I were a consumer with zero knowledge, just by playing with the phones I would choose to buy an iPhone.
  • jasperjones - Wednesday, February 16, 2011 - link

    Did anyone look at the fine print in the chart with the Coremark benchmark?

    Not only do they use more aggressive compiler flags for their products than for the T7200, they also use a much more recent version of GCC. At the very least, they're comparing apples and oranges. Actually, I'm more inclined to call it cheating...
  • Visual - Wednesday, February 16, 2011 - link

    This looks like Moore's Law on steroids.
    I guess (hope?) it is technically possible, simply because for a while now we've had the reverse: much slower progress than Moore's Law predicts. So for a brief period we may be able to do some catch-up sprints like this.
    I don't believe it will last long, though.

    Another question is whether it is economically feasible. What impact will this have on the prices of the previous generation? If the competition cannot catch up, wouldn't nVidia decide to simply hold back the new one instead of releasing it, trying to milk the old one as much as they can, just like all the other greedy corporations in similar industries?

    And finally, will consumers find an application for all that performance? Since it's not x86 compatible, apps will have to be made specifically for it, and that will take time.
    I for one cannot imagine using a non-x86 machine yet. I need it to be able to run all my favorite games, e.g. WoW, EVE Online, Civ 5 or whatever. I'd love a lightweight 10-12 inch tablet that can run those on max graphics, with a wireless keyboard and touchpad for the ones that aren't well suited to tablet input. But having the same raw power without x86 compatibility will be relatively useless, for now. I guess developers may start launching cool new games for the platform too, or even port the old ones where it makes sense (Civ 5 would make a very nice match for tablets, for example), but I doubt it will happen quickly enough.
  • Harry Lloyd - Wednesday, February 16, 2011 - link

    I'm sick of smartphones. All I see here is news about smartphones, just like last year all I saw was news about SSDs.

    Doesn't anything else happen in this industry?
