Round Two, Still Quad-Core

I have to give NVIDIA credit: back when it introduced Tegra 3 I assumed its 4+1 architecture was surely a gimmick that would be very short lived. I remember asking NVIDIA’s Phil Carmack point blank at MWC 2012 whether or not NVIDIA would standardize on four cores for future SoCs. While I expected a typical PR response, Phil surprised me with an emphatic yes. NVIDIA was committed to quad-core designs going forward. I still didn’t believe it, but here we are in 2013 with NVIDIA’s high-end and mainstream roadmaps both exclusively featuring quad-core SoCs. NVIDIA remained true to its word, and the more I think about it, the more the approach makes sense.

In the PC industry we learned that there’s no real downside to quad-core as long as you can power gate individual cores and turbo up to higher frequencies when fewer than four cores are active; the only real tradeoff is cost. You get good multithreaded performance when you need it, and single threaded performance doesn’t suffer. Tegra 3 complicated things because it was built on an older, more power hungry process at a time when Qualcomm introduced its first Krait parts. Tegra 4, on the other hand, comes to market on TSMC’s latest 28nm HPL process. And like Tegra 3, each Cortex A15 core in Tegra 4 can be independently power gated.
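
To make the power gating point concrete, here’s a minimal, purely illustrative sketch of the idea: idle cores are gated off entirely, and the frequency ceiling for whatever cores remain active rises as the active count drops. The core count mirrors Tegra 4, but the specific frequency values are invented for the example, not NVIDIA’s actual clock tables.

```c
/* Illustrative sketch only -- not NVIDIA firmware. Idle cores are fully
 * power gated; the turbo ceiling depends on how many cores remain active
 * (fewer active cores -> higher allowed frequency). */
#include <stdbool.h>
#include <stdio.h>

#define NUM_CORES 4

/* Hypothetical ceilings in MHz, indexed by number of active cores. */
static const int turbo_ceiling_mhz[NUM_CORES + 1] = {
    0, 1900, 1800, 1700, 1600
};

int main(void)
{
    bool core_active[NUM_CORES] = { true, true, false, false };
    int active = 0;

    for (int i = 0; i < NUM_CORES; i++) {
        if (core_active[i])
            active++;
        else
            printf("core %d: power gated (no clock, ~no leakage)\n", i);
    }

    printf("%d cores active -> turbo ceiling %d MHz\n",
           active, turbo_ceiling_mhz[active]);
    return 0;
}
```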

As with much of the evolution in the mobile space, NVIDIA skipped the transitional period between dual-core and many-core designs and jumped straight to where it believes the story ends. Heavily threaded apps are still rare on mobile OSes, but with each core independently power gated the user shouldn’t pay a penalty for the extra cores being there, as long as NVIDIA and the device vendor don’t configure the DVFS tables improperly.
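
Those DVFS tables are essentially lists of frequency/voltage operating points that a governor walks to find the cheapest point satisfying the current load. The sketch below uses invented values rather than Tegra 4’s real table, but it shows the failure mode: if the low-frequency points carry more voltage than they need, or are missing entirely, light workloads pay for headroom they never use.

```c
/* Illustrative DVFS table sketch -- operating points are invented,
 * not Tegra 4's. A governor picks the lowest point that meets demand;
 * a badly tuned table wastes power on exactly the light workloads
 * phones run most of the time. */
#include <stdio.h>

struct opp { int freq_mhz; int voltage_mv; };

static const struct opp dvfs_table[] = {
    {  204,  750 },
    {  696,  800 },
    { 1122,  900 },
    { 1600, 1000 },
    { 1900, 1100 },   /* top turbo point, reachable with fewer cores active */
};
#define NUM_OPPS (sizeof(dvfs_table) / sizeof(dvfs_table[0]))

/* Return the lowest operating point that satisfies the requested frequency. */
static struct opp pick_opp(int required_mhz)
{
    for (unsigned i = 0; i < NUM_OPPS; i++)
        if (dvfs_table[i].freq_mhz >= required_mhz)
            return dvfs_table[i];
    return dvfs_table[NUM_OPPS - 1];
}

int main(void)
{
    struct opp p = pick_opp(900);   /* a light, mostly single-threaded load */
    printf("run at %d MHz / %d mV\n", p.freq_mhz, p.voltage_mv);
    return 0;
}
```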

The downside is cost, not to the end user, but to NVIDIA. Economically, NVIDIA was able to make Tegra 3 work for itself with a die size somewhere around 80mm^2. The move to 28nm allowed NVIDIA to increase transistor count without straying far from that die size. Tegra 4 is a bit larger than Tegra 3, but it’s still somewhere in that 80mm^2 range.

Wafer costs for 28nm HPL are undoubtedly higher than for 40nm LPG at TSMC, not to mention any differences in yield between T3 and T4, so Tegra 4 will without a doubt cost NVIDIA more than Tegra 3. All of that being said, NVIDIA still seems to take a conservative approach to die sizes in mobile, which gives it the flexibility to significantly undercut Qualcomm on price to OEMs. I do believe this was a key part of NVIDIA’s success last year, with Tegra 3 ending up in both the Nexus 7 and Microsoft’s Surface RT. Long term, simply selling your SoCs for less than the competition isn’t a path to market dominance, but being able to do so buys NVIDIA time while it gathers the remaining missing pieces of the mobile platform (integrated baseband, RF front end, WiFi, etc...). Tegra 4 isn’t the sort of drive-the-industry-forward silicon we’re used to seeing from NVIDIA, but it’s sized appropriately given NVIDIA’s position in the market. From a business standpoint, NVIDIA is making the right decisions to ensure the Tegra business at least has a chance of succeeding.
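
A back-of-the-envelope calculation shows why staying near 80mm^2 matters. Using the common gross-dies-per-wafer approximation (and ignoring yield, scribe lines and edge exclusion entirely), an ~80mm^2 die yields roughly 50% more candidate dies per 300mm wafer than a hypothetical ~120mm^2 die, so the same wafer cost is spread over many more chips. The figures below are illustrative only.

```c
/* Rough dies-per-wafer estimate; ignores yield and edge effects entirely.
 * 80mm^2 is the article's rough Tegra 4 estimate; 120mm^2 is an arbitrary
 * larger die used only for contrast. */
#include <math.h>
#include <stdio.h>

static const double PI = 3.14159265358979323846;

static double gross_dies(double wafer_diameter_mm, double die_area_mm2)
{
    double r = wafer_diameter_mm / 2.0;
    /* Wafer area over die area, minus an edge-loss correction term. */
    return PI * r * r / die_area_mm2
         - PI * wafer_diameter_mm / sqrt(2.0 * die_area_mm2);
}

int main(void)
{
    double d = 300.0;   /* standard 300mm wafer */
    printf("~80mm^2 die:  %.0f gross dies per wafer\n", gross_dies(d, 80.0));
    printf("~120mm^2 die: %.0f gross dies per wafer\n", gross_dies(d, 120.0));
    return 0;
}
```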


75 Comments


  • Krysto - Monday, February 25, 2013 - link

    S600 is just a slightly overclocked S4 Pro with the same GPU.

    The real competitor for Tegra 4 will be the S800. We'll see if it wins in CPU performance (it might not), and I think there's a high chance it will lose in GPU performance: Adreno 330 is only about 50% faster than Adreno 320, I think, while Tegra 4 is about twice as fast.

    Qualcomm has actually always had slower graphics performance than Nvidia. The only "gap" they found in the market was last fall with the Adreno 320, when Nvidia didn't have anything good to show. But Tegra 3 beat the S4 and its Adreno 225.
  • watersb - Monday, February 25, 2013 - link

    I'm amazed at the depth of this NVIDIA data-dump. Brilliant work.

    Anand's observation re: die size, cost strategy, position in the market and how this buys them time to consolidate... Wow.

    Clearly, Nvidia is in this game for the long haul.
  • djgandy - Monday, February 25, 2013 - link

    So OpenGL ES 3.0 doesn't matter, but quad core A15 does? Why do people suck up to Nvidia and their marketing BS so much?

    T4i still single channel memory? What a joke configuration.
  • djgandy - Monday, February 25, 2013 - link

    Also a 9 page article about a mobile SoC without a single reference to the word "battery".
  • varad - Monday, February 25, 2013 - link

    Read the article before you write such comments. The very first page is "Introduction & Power" where they do mention some numbers and their thoughts.
  • djgandy - Tuesday, February 26, 2013 - link

    Yeah, it's all smoke and mirrors under lab test conditions. Where is the real battery life? Is this not for battery powered devices?
  • Krysto - Monday, February 25, 2013 - link

    Personally, I think all 2013 GPUs should have support for OpenGL ES 3.0 and OpenCL. I was stunned to find out Tegra 4 was not going to support them, as they haven't even switched to a unified shader architecture.

    That being said, Anand is probably right that it was the right move for Nvidia, and that they are just going to wait for the Maxwell architecture to streamline the same custom ARMv8 CPU across product line-ups, from Tegra 5 to Project Denver, along with the same Maxwell GPU cores.

    If that's indeed their plan, then switching Tegra 4 to Kepler this year, only to switch again to Maxwell next year wouldn't have made any sense. GPU architectures barely change even every 2-3 years, let alone 1 year. It wouldn't have been cost effective for them.

    I do hope they aren't going to delay the transition again with Tegra 5 though, and I also hope they follow Qualcomm's strategy with S4 last year and switch IMMEDIATELY to the 20nm process, instead of continuing on 28nm with Tegra 5, like they did with Tegra 3 on 40nm. But I fear Nvidia will repeat the same mistake.

    If they put Tegra 5 on 20nm, and make it 120mm^2 in size with a Maxwell GPU core, I don't think even Apple's A8X will stand against it next year in terms of GPU performance (and of course it will get beaten easily in CPU performance, just like this year).
  • djgandy - Tuesday, February 26, 2013 - link

    Tegra is smaller because it lacks features and also memory bandwidth. The comparison is not really fair; you can't assume you can just throw more shaders at the problem. You'd need a wider memory bus for a start. You'd need more TMUs, and in the future it's probably smart to have a dedicated ROP unit. Then are you seriously going to just stick with FP20 and not support ES 3.0 and OpenCL? OEMs see OpenCL as a de facto feature these days, not because it is widely used but because it opens up future possibilities. Nvidia has simply designed an SoC for gaming here.

    Your post focuses on performance, but these are battery powered devices. The primary design goal is efficiency, and it would appear that is why Apple went with Swift and not the A15. The A15 is just too damn power hungry, even for a tablet.
  • metafor - Tuesday, February 26, 2013 - link

    If the silicon division of Apple were its own business, they'd be in the red. Very few silicon providers can afford to make 120mm^2 chips and still make a profit, let alone one with as little bargaining clout in the mobile space as nVidia.

    Numbers are great but at the end of the day, making money is what matters.
  • milli - Monday, February 25, 2013 - link

    nVidia is trying hard but Tegra still isn't making them any money ...
