Round Two, Still Quad-Core

I have to give NVIDIA credit: back when it introduced Tegra 3, I assumed its 4+1 architecture was surely a gimmick, and a very short-lived one at that. I remember asking NVIDIA’s Phil Carmack point blank at MWC 2012 whether NVIDIA would standardize on four cores for future SoCs. While I expected a typical PR response, Phil surprised me with a resounding yes: NVIDIA was committed to quad-core designs going forward. I still didn’t believe it, but here we are in 2013 with NVIDIA’s high-end and mainstream roadmaps both exclusively featuring quad-core SoCs. NVIDIA remained true to its word, and the more I think about it, the more the approach makes sense.

In the PC industry we learned that there’s no real downside to quad-core: as long as you can power gate individual cores and turbo up to higher frequencies when fewer than four cores are active, there’s no real tradeoff other than cost. You get good multithreaded performance when you need it, and single threaded performance doesn’t suffer. Tegra 3 complicated things because it was on an older, more power-hungry process when Qualcomm introduced its first Krait parts. Tegra 4, on the other hand, comes to market on the absolute latest and greatest 28nm HPL process from TSMC. And like Tegra 3, each Cortex A15 core in Tegra 4 can be independently power gated.
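To make that tradeoff concrete, here’s a minimal sketch of the policy in C. The frequency ladder is invented for illustration (NVIDIA doesn’t publish its governor tables); only the 1.9GHz single-core peak matches Tegra 4’s reported specs.

    /* gate_and_turbo.c - illustrative sketch of "power gate idle cores,
     * turbo the active ones". The frequency steps are assumptions, not
     * NVIDIA's actual tables; 1900MHz is Tegra 4's reported peak. */
    #include <stdio.h>

    #define NUM_CORES 4

    /* Fewer active cores leaves more power/thermal headroom,
     * so the remaining cores can clock higher. */
    static int target_freq_mhz(int active_cores)
    {
        switch (active_cores) {
        case 1:  return 1900;  /* single-core turbo */
        case 2:  return 1800;  /* hypothetical step */
        case 3:  return 1700;  /* hypothetical step */
        default: return 1600;  /* all four cores loaded */
        }
    }

    int main(void)
    {
        for (int active = 1; active <= NUM_CORES; active++)
            printf("%d active core(s): gate %d, run at %d MHz\n",
                   active, NUM_CORES - active, target_freq_mhz(active));
        return 0;
    }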

As with much of the evolution in the mobile space, NVIDIA skipped the awkward transitional period between dual-core and many-core designs and jumped straight to where it knows the story ends. Heavily threaded apps are still rare on mobile OSes, but with each core independently power gated, the user shouldn’t pay a penalty for the extra cores being there, as long as NVIDIA and the device vendor don’t misconfigure the DVFS tables.
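A DVFS table is simply a list of frequency/voltage operating points per cluster, plus a crossover rule for handing work between the companion core and the main cores. The sketch below shows the shape of such a table; all voltages and intermediate steps are invented, with only the 825MHz companion cap and 1.9GHz peak taken from reported Tegra 4 figures.

    /* Hypothetical operating-point tables; all values are illustrative. */
    struct opp { int freq_mhz; int voltage_mv; };

    /* Companion core: low leakage, capped at 825MHz. */
    static const struct opp companion_opps[] = {
        { 102, 750 }, { 340, 800 }, { 475, 850 }, { 825, 950 },
    };

    /* Main cores: take over above the crossover point. */
    static const struct opp main_opps[] = {
        { 825, 900 }, { 1150, 975 }, { 1500, 1050 }, { 1900, 1150 },
    };

    /* Pick the lowest operating point that satisfies the demand. */
    static const struct opp *pick(const struct opp *t, int n, int need_mhz)
    {
        for (int i = 0; i < n; i++)
            if (t[i].freq_mhz >= need_mhz)
                return &t[i];
        return &t[n - 1];  /* clamp to the table's top entry */
    }

A misconfigured table - say, a voltage step higher than its frequency requires, or a handoff point left where the companion core is no longer the cheaper option - burns power for zero extra performance, which is exactly the penalty described above.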

The downside is cost, not to the end user, but to NVIDIA. Economically, NVIDIA was able to make Tegra 3 work for itself with a die size somewhere around 80mm². The move to 28nm allowed NVIDIA to increase transistor count without straying from that die size. Tegra 4 is a bit larger than Tegra 3, but it’s still somewhere in that 80mm² range.

Wafer costs for 28nm HPL are undoubtedly higher than for 40nm LPG at TSMC, not to mention any differences in yield between T3 and T4, so Tegra 4 will without a doubt cost NVIDIA more than Tegra 3. All of that being said, however, NVIDIA still seems to take a conservative approach to die sizes in mobile, which gives it the flexibility to significantly undercut Qualcomm in costs to OEMs. I do believe this was a key part of NVIDIA’s success last year, with Tegra 3 ending up in both the Nexus 7 and Microsoft’s Surface RT. Long term, simply selling your SoCs for less than the competition isn’t a path to market dominance, but being able to do so buys NVIDIA time while it gathers the remaining missing pieces of the mobile platform (integrated baseband, RF front end, WiFi, etc...). Tegra 4 isn’t the sort of drive-the-industry-forward silicon we’re used to seeing from NVIDIA, but it’s sized appropriately given NVIDIA’s position in the market. From a business standpoint, NVIDIA is making the right decisions to ensure the Tegra business at least has a chance of succeeding.

Comments

  • PingviN - Monday, February 25, 2013

    Tegra made an operating loss of $150 million in fiscal year 2012, despite getting into both the Nexus 7 (the refresh coming this year has been lost to Qualcomm) and the Surface RT. The sales prognosis was cut almost in half for fiscal year 2013. To date, Nvidia hasn't made any profit on Tegra, and now it's in limbo mode until Tegra 4 is released because Tegra 3 gets smashed by its competition.

    It's been a pretty crappy year for Tegra.
  • guilmon14 - Tuesday, February 26, 2013

    I don't know anything about this company "tegra", but have you heard about Nvidia? I heard they're doing great!

    http://nvidianews.nvidia.com/Releases/NVIDIA-Repor...

    According to this, Nvidia is up in income, revenue, and equity.

    If you want to check the easy way, just look at Nvidia's Wikipedia page; it gives you all the nice money numbers.
    http://en.wikipedia.org/wiki/Nvidia
  • trajan2448 - Monday, February 25, 2013

    5 years down the road phones will be cooking our dinner. It's amazing how fast the tech is advancing now.
  • Scannall - Monday, February 25, 2013

    If they don't hustle right along, SoCs with the PowerVR Series 6 (Rogue) GPU will beat them to market. And considering their GPU just barely squeaks past the iPad's as it is, it will be behind early on.
  • Khato - Monday, February 25, 2013

    Was it specifically stated that the Tegra 4 SPECint/W figure was running on the high speed cores? As is mentioned later on the page, a SPECint2000 of 520 is within reach of the power optimized companion core, so the only reason I'd expect NVIDIA to not use the companion core for this data is if they explicitly stated that it wasn't.

    Part of the cause for my suspicion is that the Power vs DMIPS chart that Samsung recently provided for the Exynos 5 Octa shows 8k DMIPS at 1 watt... and from the press coverage back in 2009 for the A9 hard macros there's both 10k DMIPS at 1.9 watts and 2GHz for the speed-optimized version and 4k DMIPS at 250 mW per core and 800 MHz for the power-optimized one. Those equate to 5.26 DMIPS/mW and 8 DMIPS/mW, respectively. Now the 2GHz data point should be even worse off than Tegra 3, and yet it only shows the Samsung Exynos 5 Octa as being 52% more efficient.

    Going into estimates rather than published numbers, if we bump up the efficiency of Tegra 3 a bit compared to that 2GHz figure, then it's likely closer to the A15 being 30% more efficient... to which you then add the known ~40% efficiency bump going from a performance to a power implementation, and you get the kind of drastic increase NVIDIA is touting.
  • Wilco1 - Monday, February 25, 2013

    It doesn't matter whether they used the 5th core or one of the fast ones. By definition the crossover point is where the 5th core uses as much power as a fast core. Since that is ~800MHz, the power efficiency is the same. The 5th core can likely clock well over 1GHz, but then it uses more power than a fast core.

    You are basically right that some of the 73% MIPS/W improvement comes from the 40-to-28nm process change. However, the combined improvement of process and microarchitecture means that you can use the low-power core far more often. The 5th core in Tegra 4 is effectively more than 3 times as fast as the one in Tegra 3 (an 825MHz A15 versus a 500MHz A9, plus the A15's substantially higher per-clock performance). So lots of tasks which needed 1-2 fast Tegra 3 cores can now run on the 5th Tegra 4 core. That means the power efficiency will actually improve by as much as NVIDIA suggests.
  • Khato - Monday, February 25, 2013

    Mind sharing the source for that? The wording in this article implies differently - "That 825MHz mark ends up being an important number, because that’s where the fifth companion Cortex A15 tops out at." Given 1.9GHz for the performance-optimized cores, something around 800 MHz sounds about right for the max frequency of a power-optimized version.

    Anyway, there's no question that Tegra 4 will be quite a bit more power efficient simply by virtue of being able to run more workloads exclusively on the companion core. As said before, in exchange for a much lower cap on maximum frequency, a power-optimized synthesis gives at least a 40% bump in efficiency... and now that power-optimized core will still deliver respectable performance.
  • Wilco1 - Monday, February 25, 2013

    Read http://www.nvidia.com/content/PDF/tegra_white_pape... - it explains the difference between leakage and active power on low-power and high-performance transistors (the tradeoff is sketched to first order after this thread). It explicitly says the 5th core in Tegra 3 is capped at 500MHz, as that is where it is as power efficient as a fast core. The graphs and the word "capped" suggest the 5th core could go faster, but there is no point.

    Note that Tegra 3 uses a different process with low power transistors for the 5th core rather than a low power synthesis (not that they couldn't have done that too, but it is never mentioned and the 5th core looks pretty much the same in the die plots). I presume Tegra 4 does the same on the 28nm process.
  • Khato - Tuesday, February 26, 2013

    Okay, so your commentary is based on Tegra 3, which uses an entirely different approach to power savings for the companion core. Note that all of the data I was referencing for the difference in efficiency between ARM's two A9 hard macros was on the same process, and hence is more applicable to the case of Tegra 4. As you correctly state, Tegra 3 gains its companion core power efficiency by using the LP process rather than a low-power synthesis, likely because that was a simpler and faster route to the desired end result and equally effective for their design goals.

    Tegra 4 isn't playing process games for the companion core. How do you gain efficiency on the same process? You loosen timings to allow the use of smaller transistors, fewer flop stages, and so on. The end result is that you sacrifice maximum switching speed to reduce both leakage and dynamic power. From all the information NVIDIA has made available, it's a completely different implementation from Tegra 3.
  • Wilco1 - Tuesday, February 26, 2013

    Tegra 4 does exactly the same as Tegra 3. According to NVIDIA's white paper on Tegra 4 (http://www.nvidia.com/docs/IO/116757/NVIDIA_Quad_a... it also uses low-power transistors for the 5th core. Again, if you look at the die photos of Tegra 4, all 5 cores are identical, just like Tegra 3. That seems to rule out a different synthesis.

    The way NVIDIA gets a low-power core is by using low-power transistors. TSMC's 28nm process supports several different transistor libraries, from high-performance/high-leakage to low-performance/low-leakage. Based on the information we have, all they have done is swap the transistor libraries.
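For reference, the leakage-versus-active-power tradeoff this thread keeps returning to can be written to first order as:

    P_core ≈ α·C·V²·f  (dynamic, switching)  +  V·I_leak  (static, leakage)

or in LaTeX, \( P_{\mathrm{core}} \approx \alpha C V^2 f + V I_{\mathrm{leak}} \). Low-power transistor libraries shrink I_leak at the price of slower switching (a lower maximum f); high-performance libraries do the reverse, which is why the crossover point matters: below it the low-power implementation wins, above it the fast cores are cheaper. Efficiency figures like DMIPS/mW are just delivered DMIPS divided by this power - e.g. 10,000 DMIPS / 1,900 mW ≈ 5.26 DMIPS/mW for the speed-optimized A9 macro cited earlier in the thread.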
