43 Comments

  • numbertheo - Wednesday, November 17, 2010 - link

    For what definition of similar? The 360/PS3 GPUs have 300+ million transistors. Reply
  • lehtv - Thursday, November 18, 2010 - link

    What about transistor counts of iPhone 3GS vs Wii or PS2? Reply
  • numbertheo - Thursday, November 18, 2010 - link

    I can't find any specific number. However, the A4 is 200M transistors and Tegra 2 is 260M transistors. For both of these, the vast majority of the chip is not taken up by graphics. Lincroft, which has 140M transistors and uses the SGX 535 (sort of), has a pretty clear die shot available. The graphics portion is clearly less than 1/4 of the die.

    I can't find any info on the Wii, but the PS2 GS has 42.7M transistors (7.5M logic). The large difference is because its memory was on-die.

    I'm sure that they could build a chip with 360/PS3 performance at 28nm at reasonable cost. I just don't think something like that will meet the power requirements of a phone.
    Reply
  • skydrome1 - Thursday, November 18, 2010 - link

    It's possible, since the shrink to 28nm will allow more than double the number of transistors in the same die space. Take 45 and divide it by 28 (the linear scaling factor) and square the answer to get the scaling in transistors per unit of area. The answer comes out to roughly 2.6, meaning more than twice the current number of transistors on the 45nm process.

    So 300 million transistors for the GPU sounds pretty possible. Then factor in architectural enhancements and whatnot, and the performance of an Xbox 360 or PS3 is actually quite plausible.
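    As a quick back-of-the-envelope sketch of that scaling argument (ideal geometric scaling only; real designs rarely shrink perfectly):

        # ideal planar density gain going from a 45nm to a 28nm process
        linear_ratio = 45 / 28            # scaling per dimension, ~1.61
        density_gain = linear_ratio ** 2  # transistors per unit area
        print(round(density_gain, 2))     # ~2.58, i.e. more than 2.5x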
    Reply
  • B3an - Saturday, November 20, 2010 - link

    Well, it's worth noting that the GPU in the Samsung Galaxy S is already capable of pushing 90 million triangles; that's more than a third of what the PS3 can do (250M triangles). The Xbox 360 can do around 500M.

    The GPU in the Nexus One can do 22M.
    The iPhone 3GS can do 7M.
    Not sure about the iPhone 4, but I know it's nowhere near the 90M of the Galaxy S.

    I think it's certainly possible to have a 28nm chip that can do around 250M triangles and use no more power than what a Galaxy S SoC uses.
    Reply
  • Visual - Thursday, November 18, 2010 - link

    I guess the catch is in the different resolutions that they work at. The phones can probably match a PS3 in FPS when the phone has a 480-line screen and the PS3 is working at 1080p, but if you used the TV-out of the phone, you'd likely still be quite disappointed. Reply
  • therealnickdanger - Thursday, November 18, 2010 - link

    PS3 and Xbox 360 are both realistically 720p machines that can always output (upsample) to 1080p. Very few games actually render in 1080p and most of those are Arcade titles with simple graphics. There are exceptions, of course, but very few.

    When you consider that the PS3 basically uses a GeForce 7900GT and the 360 basically uses a Radeon X1900, that's five generations of graphics ago. By the time we see the Adreno 3xx in use, IGPs on PCs will easily best PS3/360 graphics (@720p), so it's reasonable to expect Adreno to be "similar" to PS3/360 graphics by that point. I doubt we'll see intense shader calculations, but most high-end phones in the coming years will have 720p screens as a minimum, so we'll at least get the resolution and frames per second. Will it look as good on my 60" plasma? Probably not.
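    To put rough numbers on the resolution point (a quick sketch; actual GPU load obviously depends on far more than pixel count):

        # pixels per frame at the resolutions being compared
        res = {"WVGA phone (800x480)": 800 * 480,
               "720p (1280x720)": 1280 * 720,
               "1080p (1920x1080)": 1920 * 1080}
        base = res["WVGA phone (800x480)"]
        for name, px in res.items():
            print(f"{name}: {px / 1e6:.2f} MP, {px / base:.1f}x WVGA")
        # 720p is ~2.4x the pixels of WVGA; 1080p is ~5.4x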
    Reply
  • ltcommanderdata - Thursday, November 18, 2010 - link

    Seeing that Qualcomm's definition of "similar" for the Adreno 130 includes both the Nintendo DS and the PSP, which is a very wide performance range, their definition of "similar" could mean anything. Reply
  • Guspaz - Thursday, November 18, 2010 - link

    Consider that the Adreno 2xx series (Adreno used to be the AMD Imageon) has a reputation for being notoriously slow, and that they've made these 4x performance-improvement claims in the past: the Adreno 205 was supposed to be 4x the speed of the 200, and yet *both* are outperformed by the PowerVR SGX 540, despite the fact that the SGX 540 competed with the 200, not the 205...

    Snapdragon has earned a reputation for pathetic GPU performance. It'll take a lot more than some PowerPoint slides to convince me that they've fixed the problem.
    Reply
  • kenyee - Thursday, November 18, 2010 - link

    Similar performance if playing Tetris... :-) Reply
  • DanNeely - Thursday, November 18, 2010 - link

    I suspect this is like an article posted on Ars Technica(?) a month or two back, with some vendor claiming their new mobile GPU would have the same polygon (vertex?) capabilities as the PS3. What they weren't saying is that the polygon count on GPUs barely increased over the 4 generations from the GeForce 5000 to the 200 series, while total performance roughly doubled each time. The PS3's GPU is a 78xx variant, which means that at best the true GPU capability is likely to be 4x lower.

    The gap could be larger if they're playing games with the numbers. I don't recall the details, but ATI's and nVidia's polygon numbers differed by about 3x for a comparable level of performance.
    Reply
  • DanNeely - Thursday, November 18, 2010 - link

    Also, do you really think the iPhone 3GS and the Wii have equivalent GPU capabilities? I smell a rat here. Reply
  • blandead - Wednesday, November 17, 2010 - link

    OoO... OpenCL 1.1 GPGPU support... so they claim 5x performance because the processor is going dual-core and it's out-of-order. I wonder how fast AMD Fusion will be when it comes out in late 2010. Maybe when everyone moves to 28nm, even x86 will be within the same power envelope. I wonder who will end up with the faster product, since everyone has some kind of CPU/GPU combination. Yay for 28nm! Reply
  • ricmoore1 - Thursday, November 18, 2010 - link

    Snapdragon's CPU is already OOO. Reply
  • metafor - Thursday, November 18, 2010 - link

    Not fully. It can do "some" instructions OoO, which could mean anything. Reply
  • jeffbui - Thursday, November 18, 2010 - link

    I'm curious why Intel isn't competing in this market. They have the tools, manufacturing, and engineering prowess to make better processors than all their competitors. Reply
  • Minion4Hire - Thursday, November 18, 2010 - link

    They ARE competing in this market. Moorestown ring a bell?

    http://www.anandtech.com/show/3696/intel-unveils-m...
    Reply
  • Thermogenic - Thursday, November 18, 2010 - link

    It might be because Qualcomm has such a strong patent portfolio that Intel would have to pay a pretty hefty premium to include the 3G and LTE technologies, to the point that they wouldn't really be competitive any longer.

    Intel could have a better CPU/GPU combination, but if you still need to have a separate chip for the cell phone functions, many vendors will likely stick with Qualcomm's simplified solution - system on a chip!
    Reply
  • metafor - Thursday, November 18, 2010 - link

    Many vendors only supply the app processor without a baseband and do very well. TI's OMAP, for example, is "just" a CPU/GPU/DSP combo, but it gives the current Snapdragon a run for its money.

    The problem with Intel is that for the past 30+ years, they've been chasing performance in their process technology as well as their design team philosophy.

    Now all of a sudden, a market where you absolutely cannot go above a 500mW limit has presented itself, and -- even for Intel -- it'll take a while to transition to that.

    Their 32nm process is impressive for performance, but even their "LP" line is well behind TSMC's LP line in terms of power -- both idle and switching. They may be faster, but in a mobile device, 1GHz at 30mW idle and 500mW peak is far better than 2GHz at 200mW idle and 4W peak.
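    A rough sketch of why those idle figures dominate in a phone; the 5% active duty cycle below is just an assumption for illustration, not a measured number:

        # average power over time for the two hypothetical chips above
        def avg_power_mw(idle_mw, peak_mw, active_fraction=0.05):
            return idle_mw * (1 - active_fraction) + peak_mw * active_fraction

        print(avg_power_mw(30, 500))    # ~53.5 mW for the 1GHz part
        print(avg_power_mw(200, 4000))  # ~390 mW for the 2GHz part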

    The thing holding Intel back is both their design (Atom) and process. But Intel being Intel, I wouldn't discount them. We're talking about the guys who could throw out the Pentium 4, then transition back to a more conservative design, without ever losing profit.
    Reply
  • Khato - Thursday, November 18, 2010 - link

    Which is why Intel is set to acquire Infineon's wireless division, giving them access to 3G/LTE and more for proper integration - http://www.intel.com/pressroom/archive/releases/20...

    Intel's offerings next year will start to be competitive in the market, while by 2012 I'd expect the level of integration and process tech advantage to be such that it could be better than other available options. Very interesting competitive landscape coming up, that's for sure.
    Reply
  • T2k - Thursday, November 18, 2010 - link

    Intel has NOTHING when it comes to this low-power technology - x86 simply won't work, EVER, period. The reason is that to match Qualcomm in performance/watt, Intel/x86 needs a one-generation advantage in manufacturing process, which will never happen; i.e. on the same process, Qualcomm will always beat x86/Intel, period. Reply
  • sleepeeg3 - Thursday, November 18, 2010 - link

    Minor details... I'll bet the dual core, 1.2 GHz chip consumes at least twice as much power. I'd like to see a website that graphs the battery life of various cell phone chips under different mobile phone OSes. All the rage seems to be about FASTER, but I would rather see the PHONE go a month without a charge than see it run Flash on a dinky 4", 64k screen with a $480/year data plan. That's what computers are for. Reply
  • Exodite - Thursday, November 18, 2010 - link

    "I'll bet the dual core, 1.2 GHz chip consumes at least twice as much power."

    And you'd likely be wrong, definitely at 28nm. Besides, even if it did, it'd complete a workload in less than half the time, so there's a net gain in battery time.
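    A minimal sketch of that race-to-idle argument, assuming a fixed workload, a 10-second window, and made-up but plausible power numbers:

        # energy over the window: run fast then idle vs. run slow the whole time
        def energy_mj(active_mw, active_s, idle_mw, idle_s):
            return active_mw * active_s + idle_mw * idle_s

        # single core: 600 mW busy for the full 10 s
        print(energy_mj(600, 10, 30, 0))       # 6000 mJ
        # dual core: higher peak power, but done in under half the time
        print(energy_mj(1000, 4.5, 30, 5.5))   # ~4665 mJ, less total energy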

    If you worry about battery time in your mobile device, you should be moving your focus from the SoC to the display, as display manufacturers are really the ones you should be hounding about energy conservation.
    Reply
  • Klinky1984 - Thursday, November 18, 2010 - link

    Umm, hate to interrupt your whining, but there are a ton of phones out there that can go a week or so between charges, just not any of these whizbang smartphones you seem to despise. So go get yourself an old Motorola or Nokia and quit your whining. Reply
  • Visual - Thursday, November 18, 2010 - link

    Galaxy S "whizbang" goes a week and more easily too, if I use it just as "an old Motorola or Nokia", i.e. no 3G or wifi or bluetooth or movies or games.
    Goes a day with non-stop music playing over stereo bluetooth headset, and that's fine for me too - the headset itself is also drained by then and I charge them both together.
    Reply
  • zebrax2 - Thursday, November 18, 2010 - link

    You should really look at Philips' Xenium series of phones Reply
  • glad2meetu - Thursday, November 18, 2010 - link

    This Qualcomm marketing presentation is very misleading. Claims of 4x performance and 75% leakage reduction are well beyond a 40nm-to-28nm spin. The architecture information is not correct either: having two cores does not give a 2x performance increase. You get about 1.25x if the application code is able to run on multiple cores (4 cores is about 1.4x, 8 cores is about 1.5x).

    Given their stock went up today, I will rate Qualcomm as a sell based on this misleading information and the fact that the stock went from $32 a share in July 2010 to $48 a share in Nov 2010. Fair market value for QCOM is probably about $40 a share. Even at $40 a share, they have outperformed the market. Overall, I expect QCOM to do well compared to the S&P 500 since the profit margins are good. But I think they are overpriced at the moment.
    Reply
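    For what it's worth, the per-core figures quoted above line up with Amdahl's law if you assume only about 40% of the workload parallelizes; that 0.4 fraction is an assumption used here for illustration, not something from Qualcomm's slides:

        # Amdahl's law speedup for parallel fraction p on n cores
        def amdahl(p, n):
            return 1 / ((1 - p) + p / n)

        for n in (2, 4, 8):
            print(n, round(amdahl(0.4, n), 2))  # 2: 1.25, 4: 1.43, 8: 1.54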
  • DigitalFreak - Thursday, November 18, 2010 - link

    Yeah, I'm going to take stock market advice from the Anandtech comments section... Reply
  • Shadowmaster625 - Thursday, November 18, 2010 - link

    lol, just before I read your reply I was thinking QCOM has a pretty nice setup on the weekly. $55 before February wouldn't surprise me. Reply
  • Iketh - Sunday, November 21, 2010 - link

    Wow dude, you're way off on your performance-per-core figures Reply
  • coconutboy - Thursday, November 18, 2010 - link

    What's up with the pic comparing the Adreno 130 to the Nokia N-Gage? Did anyone actually buy that thing? I still remember the hilarious ad that showed a skater guy hanging out in an alley playing it with a crazy look on his face and the words "EXTREME HARDCORE!"

    That the nearly nonexistent N-Gage gets real estate in any sort of press material is silly. It serves as a reference point only to the half-dozen people on the planet who were sucker enough to buy that POS.
    Reply
  • macs - Thursday, November 18, 2010 - link

    The rumor is that the upcoming Nexus S from Samsung will use a dual-core CPU.
    Is that possible? Is Samsung's Orion ready for mass production? I think that the first dual-core on the market will be the Qualcomm MSM8260, because they are still based on Cortex-A8.
    Reply
  • alxx - Sunday, November 21, 2010 - link

    Nope, the OMAP4430 is already out and about.
    (Not that TI has publicly released the docs yet.)

    1GHz dual-core A9 with 1GB RAM (PoP)
    http://www.pandaboard.org/content/platform
    http://pandaboard.org/content/resources/references
    http://pandaboard.org/content/resources/omap-devic...
    http://omappedia.org/wiki/PandaBoard_FAQ

    US$174 if Digi-Key has any stock
    http://dkc1.digikey.com/us/en/ph/ti/pandaboard.htm...

    Looks to be a nice one for a low-power media center, with HDMI and DVI-D and full HD output

    It'd be a good one for you guys to get your hands on and give a run

    It can run Android, Angstrom, Maemo or Ubuntu, with less overhead running Angstrom.
    An easy way to build an image is with the online image builder they have:
    http://www.angstrom-distribution.org/narcissus/
    Reply
  • T2k - Thursday, November 18, 2010 - link

    ...and their rumored Android-PlayStation-PSP-hybrid phone. It's coming, and it's going to be huge in the mobile gaming scene.
    I'm not a mobile gamer - having plenty of high-end machines at hand helps me stick with decent PC graphics ;) - but I could appreciate some decent graphics on my phone (decent != shitty iPod/iPhone-level junk graphics) too.
    Reply
  • iwodo - Thursday, November 18, 2010 - link

    How did mobile GPUs all of a sudden become so easy to make that everyone seems to have their own version?

    Where did this Adreno come from? I know Qualcomm bought ATI's embedded GPU team, but did Qualcomm improve it themselves?

    And all this sounds like Apple is going to use it for their next CPU, since Qualcomm probably has the best CDMA/WCDMA baseband chip on the market. I don't see why not, since they invented and hold most of the patents in current mobile networks.

    Would Apple simply be using Snapdragon for their next iPhone? If so, why buy Intrinsity?
    Or would Apple simply be licensing the Qualcomm baseband part for their own SoC?
    Reply
  • nafhan - Thursday, November 18, 2010 - link

    I wouldn't say it's easy... I think there are really only three out there:

    -Adreno (formerly Imageon, developed by ATI before being sold)
    -PowerVR SGX
    -Tegra

    PowerVR GPUs seem to be the most common in the mobile space.
    Reply
  • metafor - Thursday, November 18, 2010 - link

    There's also ARM's Mali, which has recently entered.

    These have been around for a while in one form or another. They've just never been mentioned as much before because the competition for performance wasn't as high as it is today.
    Reply
  • jimmiwalker - Thursday, November 18, 2010 - link

    I like that. I really like that. :) Reply
  • numberoneoppa - Thursday, November 18, 2010 - link

    Shit just got real. Wow. Reply
  • drc2008 - Thursday, November 18, 2010 - link

    I have been trying to compare the energy efficiency of various chip architectures, particularly ARM and x86. When I look at the current generation of smartphone ARM chips, they usually have a TDP of 2-3 watts, but their Linpack score is generally between 8-25 MFLOPS, whereas Atom-type devices have a Linpack score of 1000 MFLOPS (8W TDP). It seems MFLOPS/watt for x86 (125 MFLOPS/watt) is a lot better than for the ARM architecture (4-10 MFLOPS/watt). For high-end x86 CPUs, MFLOPS/watt is close to 1000.

    I'm not sure why ARM continues to claim their chips are more energy efficient. Any insights will be greatly appreciated.

    DRC2008
    Reply
  • metafor - Thursday, November 18, 2010 - link

    One problem with Linpack on ARM is that it is almost always (as far as I've seen) run on Android, which runs the Linpack program on top of a JIT, whereas x86 Linpack generally uses Intel's SSE-optimized libraries.

    Also, typical CPU peak power for ARM is more like 500-700mW for a single core and 1-1.4W for dual-core. It may be a different story for Marvell chips, but at least TI/Samsung/Qualcomm SoCs have those numbers.

    The N1 @ 1GHz already scores ~45 MFLOPS despite being run through a JIT without any use of NEON, and at 45nm Scorpion is sub-500mW.
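    Plugging those figures back into the MFLOPS/watt comparison above (a sketch; the 0.5W and 8W numbers are the ones quoted in this thread, not measurements):

        # perf-per-watt using the figures quoted in this thread
        arm_mflops, arm_watts = 45, 0.5     # N1/Scorpion via JIT, no NEON
        atom_mflops, atom_watts = 1000, 8   # Atom with SSE-optimized Linpack
        print(arm_mflops / arm_watts)       # 90 MFLOPS/W
        print(atom_mflops / atom_watts)     # 125 MFLOPS/W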
    Reply
  • ProDigit - Friday, November 19, 2010 - link

    There's a big performance difference between the Wii and the PS2, and between the DS and the PSP!
    So the chart gives only a very rough estimate of where the Adreno 130, 2xx, and 3xx chips fit, seeing that some of those gaming consoles are almost twice as fast as others!
    Reply
