94 Comments

  • dagamer34 - Wednesday, November 09, 2011 - link

    Technically Sony's been planning on having 4 Cortex A9 CPUs inside the Playstation Vita since it was announced in January (plus the very powerful PowerVR SGX 543MP4) Reply
  • Klinky1984 - Wednesday, November 09, 2011 - link

    ...and where might I obtain a Playstation Vita? What other four core ARM chip is available in mainstream products as of today? Reply
  • Klinky1984 - Wednesday, November 09, 2011 - link

    Maybe I should have phrased that as "available for use in mainstream products as of today". I don't think Sony is going to let anyone use their SoC in a phone or tablet. Reply
  • MrMilli - Wednesday, November 09, 2011 - link

    Both NEC (now Renesas Electronics) and Marvell beat nVidia to it. NEC focuses more on the industrial side of things and Marvell (Armada XP) more on storage/server applications. But NEC chips often find their way into automotive electronics (I believe some GPS systems use their ARM11 quad core from years ago).
    I don't know what you consider a mainstream product, but GPS systems and low-end servers can be seen as mainstream.
    Reply
  • Klinky1984 - Wednesday, November 09, 2011 - link

    Enterprise, industrial & embedded products are not mainstream. The Tegra 3 is going to be a selling point for the products that use it, & those products will be advertised prominently on TV, in print & on the Internet. I highly doubt you'll see that for the chips you mentioned; I don't think I've ever seen a car commercial tout that its GPS is powered by NEC or Marvell or whatever platform it's using. I've seen plenty of phone commercials touting Tegra 2 or Snapdragon. Reply
  • Stuka87 - Wednesday, November 09, 2011 - link

    I disagree, and I am not sure you are aware of the meaning of mainstream, as it's all dependent on point of view.

    Mainstream is something which is purchased, used or accepted broadly rather than by a tiny fraction of population or market; common, usual or conventional.

    -Wikipedia


    Nowhere is it stated that something has to be on TV to be mainstream. It simply has to be popular in the market that it is aimed at. And NEC is most definitely mainstream in the markets that they target.
    Reply
  • eddman - Wednesday, November 09, 2011 - link

    But the question was: can you buy a phone or tablet with an NEC or Marvell quad-core SoC?

    Maybe Tegra 3 wasn't the first quad ARM chip, but it is the first quad for mobile devices.

    TI doesn't have any in its roadmap for now.
    Same goes for ST-Ericsson as for TI.
    Qualcomm's quads won't appear until Q4 2012.
    Samsung hasn't announced any yet.
    Reply
  • Stuka87 - Wednesday, November 09, 2011 - link

    This is true. The NEC would not be suitable for a mobile device, and is not available for them. So nVidia is first in that market space. Reply
  • Penti - Sunday, November 20, 2011 - link

    Renesas does offer quad-core mobile CPUs, as does everyone else; it's just a matter of when they are actually available. They do have a more impressive overall offering in the mobile space though. Reply
  • Klinky1984 - Thursday, November 10, 2011 - link

    Almost 1/3rd of the US population has a smartphone. How many of those consumers have an enterprise NAS, SAN or ARM-based cloud server, or even know what one is? And even if they do know what one is, do they actually have a desire to buy one? As for embedded GPS devices, which devices used the prototype quad-core ARM11 from Renesas Electronics?

    Additionally, I am not finding actual products containing a Marvell Armada XP or Renesas Electronics R-Car H1. The R-Car H1 doesn't even start mass production until almost 2013. Renesas Electronics' quad ARM11 NaviEngine looks like it was used by Alpine Car Information Systems in 2010, but finding out which Alpine product uses it & where I could buy it is a challenge.
    Reply
  • Lucian Armasu - Wednesday, November 09, 2011 - link

    Those probably use more power, too, so they wouldn't be suited for tablets and smartphones anyway. Tegra 3 is the first quad core to be optimized for them. Reply
  • jeremyshaw - Wednesday, December 21, 2011 - link

    about as relevant as claiming the iPad 2 GPU doesn't matter since it doesn't use Android. Reply
  • jeremyshaw - Friday, December 16, 2011 - link

    PS Vita is going on sale this Sat in Japan. For the rest of the world.... sucks to be you :p Reply
  • Draiko - Wednesday, November 09, 2011 - link

    With a 3-5 hour battery life per charge. Disgusting. Reply
  • Shadowmaster625 - Wednesday, November 09, 2011 - link

    Probably only 1 hour playing a game using the unreal engine or similar. Reply
  • tipoo - Wednesday, November 16, 2011 - link

    They said 3-5 hours of gameplay on the Vita, other stuff is more. Reply
  • tipoo - Saturday, March 10, 2012 - link

    Which is also what you get while gaming on any smartphone, if not less by the way. Reply
  • GeekBrains - Wednesday, November 09, 2011 - link

    Let's wait and see how it affects the real world bandwidth requirements.. Reply
  • dagamer34 - Wednesday, November 09, 2011 - link

    Seems there's a "lot of concern" too over the unchanged aspects of the Tegra 3 chip. They're shoving this entire design onto a 40nm process where every other competitor is focusing on 28nm, which means greater power consumption at some point. Also, when you compare the numbers on the GPU, it seems that nVidia's got a rather weak offering compared to the competition, which is kind of sad for a GPU company. What gives? Reply
  • B3an - Wednesday, November 09, 2011 - link

    It's because of 40nm; with any more GPU power the chip would be even bigger. It's looking like there will be a 28nm version of Tegra 3 next year, but in order to be first out with a quad-core SoC they have decided to use 40nm for now. Reply
  • dagamer34 - Wednesday, November 09, 2011 - link

    Using 40nm isn't an excuse when both Apple and Samsung use 45nm and have GPUs that trounce the Tegra 2 in real life and Tegra 3 on paper. Reply
  • eddman - Wednesday, November 09, 2011 - link

    Yeah, and the A5 is about 42%-43% bigger than Tegra 3, and it seems to consume more power and run hotter. I'd rather have less GPU power than that.

    Don't know anything about exynos' size and other characteristics.

    Anand, do you have any such information on exynos?
    Reply
  • MySchizoBuddy - Wednesday, November 09, 2011 - link

    what's your source of A5 die size? Reply
  • eddman - Wednesday, November 09, 2011 - link

    At first this: http://www.anandtech.com/show/4840/kalel-has-five-...

    Anand says tegra 3 is 30% smaller than A5, which means A5 is 42-43% bigger.

    After your above comment, I searched a little bit, and noticed in the IT pro portal article linked in my other comment, it says 120 mm^2.

    I also found these:

    http://www.eetimes.com/electronics-news/4215094/A5...

    http://www.notebookcheck.net/Analyst-explains-grap...

    Here, it's 122 mm^2.

    Now with the exact size known, it puts the A5 in an even worse situation, 50-52% bigger.
    Reply
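The percentages above are easy to check with a back-of-envelope script; note the ~80 mm^2 Tegra 3 figure is this article's estimate rather than an official number, and the A5 figures come from the articles linked above:

```python
# Back-of-envelope die-size comparison, in mm^2.
tegra3 = 80.0                     # this article's estimate for Tegra 3
a5_low, a5_high = 120.0, 122.0    # A5 figures from the linked articles

def pct_bigger(a, b):
    """How much bigger a is than b, as a percentage."""
    return (a / b - 1.0) * 100.0

print(f"A5 vs Tegra 3: {pct_bigger(a5_low, tegra3):.0f}% to {pct_bigger(a5_high, tegra3):.1f}% bigger")
# -> A5 vs Tegra 3: 50% to 52.5% bigger, matching the 50-52% figure above
```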
  • eddman - Wednesday, November 09, 2011 - link

    Ok, it seems exynos' size is about 118 mm^2.

    http://www.itproportal.com/2011/06/07/exynos-soc-s...

    http://www.businesswire.com/news/home/201107070061...

    Considering that Tegra 3 has 5 cores and yet is still much smaller, I might say nVidia has actually done some nice engineering here.

    Wonder how much of that difference is because of the 40 nm process vs. 45 nm. Probably not much, but what do I know. Can anyone do some calculations?
    Reply
  • metafor - Wednesday, November 09, 2011 - link

    It's really difficult to judge because they're from two different foundries. The minimum etch (e.g. 45nm, 40nm) isn't the only thing that affects die area. Some processes require stricter design rules that end up bloating the size of logic.

    Samsung uses Samsung semi's foundries while nVidia uses TSMC. It's difficult to say how they compare without two identical designs that have gone to fab on both.
    Reply
  • Klinky1984 - Wednesday, November 09, 2011 - link

    I think the 500MHz companion core & proper power gating alleviate most of the concerns about power consumption. Reply
  • metafor - Wednesday, November 09, 2011 - link

    Not really. It alleviates the concern of power consumption on light loads. While that is a big part of common usage and it's definitely a benefit to have great idle/light power, I still would like to have better battery life while I'm heavily using the device. For instance, while playing a resource-heavy game or going to pretty complex websites.

    One thing I do like is that they've improved the efficiency of the video decoder. This makes one of the most common use-cases (watching movies) less power-intensive.
    Reply
  • SniperWulf - Wednesday, November 09, 2011 - link

    I agree. I would rather they had made a strong dual-core and dedicated the rest of the die space to a second memory channel and a stronger GPU. Reply
  • a5cent - Wednesday, November 09, 2011 - link

    Qualcomm is the only SoC manufacturer making the transition to 28nm anytime soon. Everyone else is shifting at the very end of 2012 (at the earliest). Reply
  • psychobriggsy - Friday, November 11, 2011 - link

    By using 40nm, NVIDIA has achieved a first-to-market advantage in high-end quad-core SoCs for tablets. Obviously this comes at the cost of a larger die, higher power consumption and/or slower clock speeds.

    The larger die will add some cost to the product, but it's hardly a problem given that it is still quite small in the grand scheme of things. I believe it is smaller than the A5 for example. In addition mature yields on the 40nm process may allow NVIDIA to ship millions without worry rather than risk early 28nm yields.

    Tegra 3 was meant to clock over 1.5GHz, and this hasn't been achieved; 1.3GHz was probably the better option for power consumption. 28nm will fix this for Tegra 3+ next year, hopefully.

    In addition the low power core gives NVIDIA an early entry into the low-power companion core market a year or two before the ARM Cortex A15 + ARM Cortex A7 combos arrive. This is another reason it is 40nm - TSMC don't have the ability to fab 28nm dies with a combination of processes (LP and HP) on the same die yet.

    So the die might cost a couple of dollars more to make vs Tegra 2, but I'm sure they can charge a premium for the product until the competitors arrive.
    Reply
  • Paulman - Wednesday, November 09, 2011 - link

    Wow, I'm amazed by the response times. It looks pretty seamless (i.e. the switching to and from the low-power transistor companion core). From a GUI perspective, there doesn't appear to be any stutter at all.

    Looks like a good job, NVIDIA :O

    P.S. Speaking of low-power transistors, that's ingenious to build an entire core out of low-power transistors on the same die as the four regular cores. I wonder if that's an idea that's been floating around in the field for awhile...
    Reply
  • dagamer34 - Wednesday, November 09, 2011 - link

    You think using LP transistors is something, see big.LITTLE coming from ARM in 2012-2013. ARM designed an entire core to be specifically low power (the Cortex A7) to fit perfectly with the more powerful Cortex A15, so that you get even greater performance with even greater power savings. Reply
  • Mugur - Wednesday, November 09, 2011 - link

    Yes, but Tegra 3 is already here... Reply
  • Draiko - Wednesday, November 09, 2011 - link

    It seems like nVidia and ARM co-developed this kind of architecture. nVidia is implementing it in the Tegra 3 and ARM is making it available for license with big.LITTLE.

    I'm just blown away by how smooth the dynamic threading is on the Tegra 3. This is going to be an absolute game-changer.
    Reply
  • JonnyDough - Wednesday, November 09, 2011 - link

    That's because it isn't loaded down with crapware like the Blockbuster app...yet. Reply
  • metafor - Wednesday, November 09, 2011 - link

    IIRC, Marvell's Sheeva processors use this method (came out ~2010 I believe). Reply
  • jcompagner - Thursday, November 10, 2011 - link

    Intel has also done this for quite some time.
    Wasn't the SATA bug they had a result of something like this?

    There, too, a wrong type of transistor was used.
    Reply
  • Omega215D - Wednesday, November 09, 2011 - link

    Seeing that the architecture has a sound processor in it, is there any chance that nVidia could revive SoundStorm for the mobile platform? That would be great for things like the Transformer and other tablets as well as smart phones for multimedia purposes. Just a thought. Reply
  • ggathagan - Wednesday, November 09, 2011 - link

    Given that the Tegra 3 already includes HD audio and 7.1 support, I'm not clear on what feature you think SoundStorm would add. Reply
  • MamiyaOtaru - Friday, November 11, 2011 - link

    what i liked about sound*storm* and what has me using cmedia now is DDL Reply
  • B3an - Wednesday, November 09, 2011 - link

    Just a little thing, but the Transformer Prime has an IPS+ display, not the typical IPS display which you have listed. Asus claims the + version is 1.5x brighter than a normal IPS display.

    I'm impressed by the specs of the Prime; in literally EVERY single way (possibly apart from the GPU) the Prime is better than the iPad 2.... thinner, lighter, better display (apparently), higher res too, twice as much RAM, SD slot, and more than twice as many cores that are each also clocked higher...
    If it's this good in the real world then I'll be impressed that Asus could afford to make such a product and keep it at the same price as the iPad.
    Reply
  • name99 - Wednesday, November 09, 2011 - link

    "in literally EVERY single way ... better than the iPad 2"

    You sure about that? You know, for a FACT, that the flash is faster? That the WiFi supports 5GHz and is faster? That there is the same range of sensors (including, e.g., magnetometer, accelerometer, gyro, proximity sensor, light sensor, and a dozen I've forgotten) --- and that every one of them is better than on the iPad 2?

    There is a HUGE amount to iPad2 that people seem to forget because it's just hiding there under the covers, it doesn't advertise itself.
    Reply
  • ncb1010 - Saturday, November 12, 2011 - link

    Yes, the Prime has a magnetometer, a gyro, a compass and a light sensor according to theverge.com (ex-Engadget staff). The base iPad model is missing a key sensor (GPS), but this includes it at the same price point. The iPad has no flash in any sense of the word (Adobe Flash or camera flash), while the Transformer has both Adobe Flash and a camera flash, so I really don't see how it has faster flash (do you mean shutter speed?). Besides, from all the specs we know on the camera, it looks to be a lot better than the ones Apple put in there to upsell people on the iPad 3. As for a proximity sensor, what would be the purpose of it? Its purpose in the iPad is to detect when the custom cover is put on and removed. The pros of the Optimus Prime hardware-wise are numerous, while the iPad has some theoretical benefits just because we don't know every single detail on the Prime. You are grasping at straws here. Reply
  • AuDioFreaK39 - Wednesday, November 09, 2011 - link

    Quick question for Tegra 3 architecture engineers: Is the "companion core" identified as Core 0 or Core 4? Thanks in advance. Reply
  • Draiko - Wednesday, November 09, 2011 - link

    Good question. I'd love a solid answer myself but from the core demo video, it looks like it's core 0. Reply
  • Anonymous Blowhard - Wednesday, November 09, 2011 - link

    IANAD (I Am Not A Developer) but I'm betting it's actually still tagged as 0, with lower-level firmware switching whether "core 0" is the companion core or a full core.

    Remember that the companion core cannot be run at the same time as the full cores, so it's likely that when the demand-based switching kicks in, "companion core 0" is spun down, "full core 0" is spun up, and the rest of "full core 1/2/3" come online as well.

    Since this is happening at the firmware/lower level vis-a-vis x86 "Turbo Core" it will be transparent to the OS.

    /but that's, like, just my opinion man
    Reply
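The demand-based switching Anonymous Blowhard describes can be sketched roughly as follows. This is purely illustrative pseudologic under that comment's assumptions (a single OS-visible "core 0" backed by either the companion core or a full core); the threshold and names are invented, and the real behavior lives in NVIDIA's firmware:

```python
# Illustrative sketch of demand-based core switching as described above.
# The threshold is invented for illustration; real switching is in firmware.
COMPANION_MAX_LOAD = 0.25  # hypothetical load ceiling for the LP companion core

class CoreManager:
    def __init__(self):
        self.on_companion = True  # boot on the low-power "core 0"

    def update(self, load):
        """Pick which physical core backs the OS-visible 'core 0'.

        The companion core and the full cores never run at the same time:
        one is spun down as the other is spun up, transparently to the OS.
        """
        if self.on_companion and load > COMPANION_MAX_LOAD:
            self.on_companion = False  # LP core off; full core 0 (and 1/2/3) available
        elif not self.on_companion and load <= COMPANION_MAX_LOAD:
            self.on_companion = True   # drop back to the LP core
        return "companion" if self.on_companion else "full"

mgr = CoreManager()
print(mgr.update(0.10))  # light load -> companion
print(mgr.update(0.80))  # heavy load -> full
```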
  • eddman - Wednesday, November 09, 2011 - link

    I agree with Anonymous Blowhard. The OS can't see all 5 cores at the same time, so companion core would be 0 when it's enabled. Reply
  • mythun.chandra - Wednesday, November 09, 2011 - link

    Core 0 :) Reply
  • allingm - Thursday, November 10, 2011 - link

    While you guys are probably right, and it is probably just core 0, there is the possibility that it's cores 0-4. All 4 threads could simply be run on the one core, and this would make it seamless to the OS, which seems to be what Nvidia suggests. Reply
  • jcompagner - Thursday, November 10, 2011 - link

    Doesn't the OS do the scheduling?

    I think there are loads of things built into the OS that schedule processor threads. For example, the OS must be NUMA-aware for NUMA systems so that it keeps processes/threads on the right cores that are in the same CPU/memory banks.

    If I look at Windows, then Windows schedules everything all over the place, but it does know about hyper-threading, because those cores are skipped when I don't use more than 4 cores at the same time.
    Reply
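On Linux (which Android sits on), the set of cores the OS scheduler can actually see is visible from user space; per the comments above, the companion core would never show up as a separate entry because the kernel only sees whichever cores are currently online. A small Linux-only illustration:

```python
import os

# Ask the kernel which logical CPUs the current process may be scheduled on.
# os.sched_getaffinity is Linux-only; on a Tegra 3 the companion core would
# not appear as a fifth entry, since firmware hides the swap from the OS.
cpus = os.sched_getaffinity(0)
print(f"schedulable CPUs: {sorted(cpus)}")
```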
  • DesktopMan - Wednesday, November 09, 2011 - link

    Seems risky to launch with a GPU that's weaker than existing SoCs. Compared to the Apple A5's performance it looks more like a 2009 product... Exynos also has it beat. The main competitor it beats is Qualcomm, who isn't far from launching new SoCs themselves. Reply
  • 3DoubleD - Wednesday, November 09, 2011 - link

    At least it looks more powerful than the SGX540 which is in the Galaxy Nexus. I'll wait and see what the real world performance is before writing it off. I suspect it will have "good enough" performance. I doubt we will see much improvement in Android devices until 28 nm as die sizes seem to be the limiting factor. Fortunately Nvidia has their name on the line here and they seem to be viciously optimizing their drivers to get every ounce of performance out of this thing. Reply
  • DesktopMan - Wednesday, November 09, 2011 - link

    Totally agree on the Galaxy Nexus. That GPU is dinosaur old though. Very weird to use it in a phone with that display resolution. Any native 3d rendering will be very painful. Reply
  • eddman - Wednesday, November 09, 2011 - link

    "Exynos also has it beat"

    We don't know that. On paper Kal-El's GeForce should be at least as fast as Exynos'. Better wait for benchmarks.
    Reply
  • mythun.chandra - Wednesday, November 09, 2011 - link

    It's all about the content. While it would be great to win GLBench and push out competition-winning benchmarks scores, what we've focused on is high quality content that fully exploits everything Tegra 3 has to offer. Reply
  • psychobriggsy - Friday, November 11, 2011 - link

    I guess it depends on the clock speed the GPU is running at, and the efficiency it achieves when running. Whilst not as powerful per-clock (looking at the table in the article), a faster clock could make up a lot of the difference. Hopefully NVIDIA's experience with GPUs also means it is very efficient. Certainly the demos look impressive.

    But they're going to have to up their game soon considering the PowerVR Series 6, the ARM Mali 6xx series, and so on, as these are far more capable.
    Reply
  • AmdInside - Wednesday, November 09, 2011 - link

    Anyone else getting an error when opening the Asus Transformer Prime gallery? Reply
  • skydrome1 - Wednesday, November 09, 2011 - link

    I am still quite underwhelmed by its GPU. I mean, come on NVIDIA. A company with roots in GPU development having the lowest GPU performance?

    They need to up their game, or everyone's just going to license others' IP and develop their own SoCs. LG got an ARM license. Sony got an Imagination license. Samsung's even got their own SoCs shipping. Apple is sticking to in-house design. HTC acquired S3.

    After telling the whole world that by the end of next year, there will be phones that will beat consoles in raw graphical performance, I feel like an idiot.

    Please prove me right, NVIDIA.
    Reply
  • EmaNymton - Wednesday, November 09, 2011 - link

    REALLY getting tired of all the Anandtech articles being overly focused on performance while ignoring battery life, or making statements about technologies that will theoretically increase it. Total ACTUAL battery life matters, and increases in perf shouldn't come at the detriment of total ACTUAL battery life.

    This over-emphasis on perf and refusal to hold MFGRs to account for battery life borders on irresponsible and is driving this behavior in the hardware MFGRs.

    QUIT REWARDING THE HARDWARE MFGRs FOR RELEASING PRODUCTS WITH JUNK BATTERY LIFE OR BATTERY LIFE THAT IS WORSE THAN THE PREVIOUS GENERATION, ANANDTECH!
    Reply
  • mwarner1 - Wednesday, November 09, 2011 - link

    Did you actually read the article? Reply
  • whsmnky - Wednesday, November 09, 2011 - link

    As stated in the article, they'll have more once they have something in hand like they have in every other item review. Without physically having the product to test, I'm curious how you would expect them to provide ACTUAL battery life numbers to be able to hold anyone accountable to anything. Reply
  • MrSewerPickle - Wednesday, November 09, 2011 - link

    Yeah I agree with both of the replies to this original comment. And please do NOT change the content, format or delivery of any of your Reviews or articles Anand. They are top-notch and rare on tech websites. Perfectly covered BTW and the Tegra 3 GPU is indeed a concern, in my humble opinion. Reply
  • cjs150 - Wednesday, November 09, 2011 - link

    Perfect for a low power, low heat, no noise HTPC?

    Zotac has released the incredibly cute Nano AD10 (and one with a Via chip in it).

    NVidia, come on, you can beat them with something as cute but better (no fan for starters!). You did tease with something like the nano box a year or 2 ago.
    Reply
  • krumme - Wednesday, November 09, 2011 - link

    HTPC: you can already get Bobcat-based, no-fan solutions, all running x86, at around the same size. Would be nice to see some benchmarks comparing this to Bobcat, especially the FPU part. LOL. Reply
  • iwod - Wednesday, November 09, 2011 - link

    Would love to know how it performs against the A5. Reply
  • roundhouse_c - Wednesday, November 09, 2011 - link


    Here is another article:
    http://www.slashgear.com/tsmc-starts-28nm-producti...

    Tegra 3 on a 28nm die will be out sooner than what's being posted here.
    Reply
  • eddman - Wednesday, November 09, 2011 - link

    Kal-El+ must be 28 nm. It wouldn't make sense if it isn't. Reply
  • ezekiel68 - Wednesday, November 09, 2011 - link

    From the article you linked:

    "AMD and NVIDIA saying they will be using the 28nm process silicon in
    their next-gen graphics products."

    The near-term timeframe is in reference to NVIDIA's next generation Kepler GPUs, not their mobile device SOCs. See also:

    http://vr-zone.com/articles/nvidia-28nm-kepler-pro...
    Reply
  • eddman - Wednesday, November 09, 2011 - link

    True, but it doesn't mean that they are NOT working on a 28 nm tegra. 28 nm GPUs will start rolling out in Q1 2012, so a 28 nm tegra 3+ in mid 2012 isn't unrealistic. Reply
  • Itaintrite - Wednesday, November 09, 2011 - link

    That HD decode processor will make a lot of people happy. Reply
  • JoeTF - Thursday, November 10, 2011 - link

    No it bloody won't. Tegra 2 already has a hardware video decode unit, and its main trademark is that it cannot even decode properly (no prediction, but more importantly - not enough power to decode anything higher than L3.0 at 30fps).

    The hardware video decoder in Tegra 3 is pretty much unchanged from T2. Hell, you can see light framedrops even in their marketing video.

    The good thing is that they added NEON instructions. Sadly, that means we will have to use all four cores at 100% utilization to play back our videos correctly, and under those conditions runtime will be severely constrained (the 8h they talk about is for hardware decode, not NEON-based CPU decode).
    Reply
  • 3DoubleD - Thursday, November 10, 2011 - link

    This is what I've been holding out for, so I really hope you're wrong. Reply
  • psychobriggsy - Friday, November 11, 2011 - link

    The article states that the video decoder has been significantly enhanced in Tegra 3. Where do you get your information from? Reply
  • Jambe - Wednesday, November 09, 2011 - link

    "Die size has almost doubled from 49mm^2 to somewhere in the 80mm^2 range."

    ~80mm^2 is considerably more than double the area of 49mm^2, isn't it?
    Reply
  • eddman - Wednesday, November 09, 2011 - link

    Umm, no!! 80 is 63% bigger than 49. Simple as that. Reply
  • MamiyaOtaru - Friday, November 11, 2011 - link

    C'mon man, it's not 49 millimeters squared, it's 49 square millimeters. 49/80 is well more than 1/2. Reply
  • MamiyaOtaru - Friday, November 11, 2011 - link

    see this for some review: http://img379.imageshack.us/img379/6015/squaremmhe... Reply
  • vision33r - Wednesday, November 09, 2011 - link

    The Tegra 3, by being evolutionary, left a huge opening for other SoCs to surpass it in a matter of months.

    I don't think the performance jump will be huge like Apple's A4 to A5, which is on the order of 9x faster.

    It will be worthless by April when the Apple A6 comes out and spanks it silly, and rumor has it that Apple may be using a 1600x1200 10" display to up the ante.

    If this is true, it means Nvidia has to bring out a Tegra 4 by summer or fall 2012.

    It will be a big iPad 2 X-mas for sure, and the iPad 3 will easily trump Tegra 3.
    Reply
  • metafor - Wednesday, November 09, 2011 - link

    I honestly don't think the biggest decision-maker for people considering an iOS tablet or Android tablet has to do with a ~40% difference in GPU performance.

    When comparing Android tablets to each other -- since the OS is the same -- many people will fall back on "well, x is faster than y". But a 2x performance difference isn't going to change my mind if I like Android better than iOS, or vice versa.

    Things like a high-res screen, battery life and usability of the OS have a much bigger impact; so I'd say nVidia or really any Android SoC vendor really aren't competing with Apple's silicon group.
    Reply
  • psychobriggsy - Friday, November 11, 2011 - link

    In the Android market, it really doesn't matter what features Apple includes in their in-house SoC for their iOS devices.

    Considering that manufacturers are having problems fabbing larger high-DPI displays, I also wouldn't bet on the iPad 3 having a higher resolution display. And Apple would go for 2048x1536 for simplicity's sake.

    Five months is also a long time in the ARM SoC market, one that NVIDIA will try to make use of. Let's just hope the product meets the hype when reviews roll in.
    Reply
  • name99 - Wednesday, November 09, 2011 - link

    I've asked this before, and I will ask it again:
    What software on Android, shipping TODAY, is capable of using 4 cores usefully?
    The browser? The PDF viewer? Google Earth? If so, they're all ahead of their desktop cousins.

    Yes, yes, people are buying the future. And, sure, one day, software will be revved to use 4 cores. (But, this being Android, chances are, the particular device you buy this year using Tegra3 will NOT be revved.)

    I'm not trying to be snarky here, just realistic. It seems to me the competing ARM manufacturers are targeting the real world, where dual cores can (to some extent) usefully be used. But nVidia is requiring people who adopt this chip to pay for power that, realistically, they're not going to use. This seems a foolish design choice. It seems to me far more sensible for mobile to basically track desktop (lagging by about a year). Desktop is seeing quad-core adoption in a few places, but it's hardly mainstream --- and I'd say that until, let's say, the low-end MacBook Air is using quad core, that's an indication that "software" (as a general class) probably hasn't been threaded enough to make quad-core worthwhile in mobile.

    Yes, it's harder, but until then, I'd say far more useful to look at what's ACTUALLY causing people slowness and hassle on phones and tablets, and add THAT to your chips. So, faster single-threaded core --- great. But think more generally.
    Flash on these devices is still slow. Could you speed it up somehow --- maybe a compression engine to transparently compress data sent to/from flash? Likewise app launch is slow. Are there instructions that could be added to speed up dynamic linking? Memory is a problem, and again transparent compression might be helpful there.
    Basically --- solve the problems people actually have, even if they are hard, NOT the problems you wish people had because you know how to solve those.
    Reply
  • psychobriggsy - Friday, November 11, 2011 - link

    The video shows web browsing and games to be using three cores quite often, and the fourth quite a bit. Android is quite multi-threaded, and if it also supports the Java Concurrency APIs it is very easy for software to also be multi-threaded.

    I also presume that the GeForce drivers and other Tegra SoC drivers utilise multi-threading as much as possible.

    And Flash is being dropped on mobile devices in favour of HTML5. That's Adobe making that move. And not before time, it is a horrible technology.
    Reply
  • Romulous - Thursday, November 10, 2011 - link

    Meh. There may come a time when cores don't matter much.
    http://www.euclideon.com/ :)
    Reply
  • alphadon - Thursday, November 10, 2011 - link

    "Die size has almost doubled from 49mm^2 to somewhere in the 80mm^2 range"

    49^2 = 2401
    80^2 = 6400

    This should probably read:
    "Die size has bloated to over 2.5 times the area of the prior generation leaving everyone wondering why NVIDIA is releasing this 40nm dinosaur. We would have expected a die shrink to keep the power and space requirements in line with the industry's competitors, but seeing all that real estate squandered on such an evolutionary product is downright shameful."
    Reply
  • Lugaidster - Friday, November 11, 2011 - link

    Did you even read the other posts? The other competitors have bigger dies and fewer cores! And also, the geometry didn't change between Tegra 2 and this.

    I find it great that they were able to double the shader core count, increase the core count from 2 to 5 (the fifth core is slower but not less complex; see the die picture) and increase frequency while still having a smaller die than the competition.

    I think that given the constraints, this might turn out to be a good product. Obviously only time will tell if it actually performs, but who knows...
    Reply
  • Lugaidster - Friday, November 11, 2011 - link

    By the way, it's 49 mm², not 49² mm. So it's actually less than twice as big. Reply
  • psychobriggsy - Friday, November 11, 2011 - link

    Do you seriously think the Tegra 3 die size is 8cm by 8cm?

    49mm^2 is the area, not the edge dimension. In effect the die size has gone from around 7mm x 7mm to 9mm x 9mm. I.e., your little finger nail to your index finger nail (your hands may vary).
    Reply
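The area-versus-edge arithmetic in these replies is easy to verify, assuming roughly square dies for illustration:

```python
import math

# 49 mm^2 and 80 mm^2 are areas, not edge lengths; for a roughly
# square die the edge length is the square root of the area.
for area in (49.0, 80.0):
    print(f"{area:.0f} mm^2 -> ~{math.sqrt(area):.1f} mm per side")
print(f"area ratio: {80.0 / 49.0:.2f}x")  # ~1.63x, well under double
```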
  • psychobriggsy - Friday, November 11, 2011 - link

    In addition, the 28nm shrink of Tegra 3 (Tegra 3+) next year, if no extra features are added, will shrink the die from 80mm^2 to 40mm^2 (in an ideal world; let's say 50mm^2 worst case, since shrinks aren't simple). And Tegra 4 will probably be around 80-100 mm^2 again. Reply
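The ideal-scaling arithmetic behind that 80 mm^2 to 40 mm^2 estimate is just the square of the feature-size ratio (a simplification; real shrinks never scale perfectly, hence the worst-case figure above):

```python
# Ideal die-area scaling goes with the square of the feature-size ratio.
old_node, new_node = 40.0, 28.0   # process nodes, in nm
old_area = 80.0                   # mm^2, the Tegra 3 estimate from the article
ideal_area = old_area * (new_node / old_node) ** 2
print(f"ideal 28nm die: ~{ideal_area:.0f} mm^2")  # ~39 mm^2, near the 40 mm^2 figure
```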
  • lightshapers - Friday, November 11, 2011 - link

    This quad-core architecture is still disappointing. Actually they implemented a 5th core, with good reinforcement from marketing, presented as a solution for low power consumption at low CPU load. My guess is that all competitors can actually do this (cut clocks and power on all but one CPU and reduce cluster frequency) on the ARM dual cluster without the need to add an extra CPU (I speak for Samsung and TI, as Qualcomm is designing their own). In addition to that, this 5th core is another non-negligible set of gates that leak.
    Then, synchronizing the L2 cache over the ARM coherency port is fast, but 1MB is 1MB, which probably means a few hundred µs of lost reactivity when switching between the cluster and this 5th core.
    And in the end, it doesn't really solve all the problems of having 4 cores, as asymmetry in core load balance will always happen. This solution may solve the low-load case, but above the low-load watermark the cluster is powered up, and we have 4 cores each consuming at least their leakage. This was reported as an issue on Tegra 2; I don't think it has changed (the 5th core is in some way the proof), but here we have 2 additional cores...
    For example, a medium load requires 2 cores: the 5th is off, but consumption is 4 times that of one core.

    It would have been smarter to design a full-speed additional core, so as to raise the low-load limit and stay on the 1st core as long as possible. At 500MHz, it's difficult to say whether you can manage the whole graphical interface + OS background on a 720p device...
    Reply
  • shiznit - Friday, November 11, 2011 - link

    In a couple of generations Nvidia may have a faster CPU than AMD, lol. Who would have thought that when AMD bought ATI and some were saying Nvidia was doomed. Reply
  • ol1bit - Friday, November 11, 2011 - link

    I think Nvidia's 5-core solution is right on for the battery limitations of today.

    Want to really crank up the amps? Then plug your phone in.

    You always have to remember the battery limitations of today's devices.
    Reply
  • lancedal - Monday, November 14, 2011 - link

    As you can see in the demo video, there is quite a bit of overlap between the companion core and the main cores. Note that, per Nvidia, tasks cannot be executed across the companion core and the main cores simultaneously. So, in the case of web browsing and video playback, in order to transition smoothly between the 5th core and the main cores, software would try to copy content over to the 5th core while the main cores are running, and vice versa. As a result, there is a large overlap, and this would cause higher power consumption in general usage (but lower power consumption in standby). Compared with Tegra 2, you would have shorter battery life for web browsing or video playback. Reply
  • staryoshi - Saturday, November 19, 2011 - link

    I am wanting Tegra 3 products something fierce. I can't wait for Prime to hit and smartphones to follow. Reply
