
  • arnavvdesai - Wednesday, November 09, 2011 - link

The reason I ask is because of the current rumor regarding an ARM-based architecture for the next Xbox.
    Given that a large segment of the population almost certainly now has 1080p TVs, can these chipsets deliver higher performance (or rather, scale to such levels) than current-generation high-to-mid-range PC graphics cards?

    Frankly, I have my doubts because relatively speaking they seem so weak.
  • PWRuser - Wednesday, November 09, 2011 - link

    I doubt it. While these mobile parts also enjoy the benefit of progress (such as DX11 compatibility) there is not enough raw horsepower to "endanger" the stationary counterparts.

If I had to guess, I would say these high-end SoCs are comparable to run-of-the-mill desktop parts from 6-7 years ago, edging them out here and there thanks to innovations adopted since then.
  • MGSsancho - Wednesday, November 09, 2011 - link

The Xbox 360 is still more powerful. At similar resolutions (480p) the Mali should be comparable.
  • B3an - Thursday, November 10, 2011 - link

The Mali 400 isn't comparable to a 360 at ANY res. I know this, apart from the obvious benchmarks, because I have a Mali 400 GPU in my SGSII and use HDMI out to play games on my HD TV with it.

But the PlayStation Vita uses a quad-core ARM SoC. It has an SGX543MP4 GPU which is roughly equal to a PS3 in raw graphics power.

There's no way the next-gen consoles will use ARM though. The latest GPUs from Nvidia and AMD are so many times more powerful, there's no comparison. It's a totally different market too; MS and Sony have no reason at all to use ARM, why would they? Consoles are devices that run on mains power, they're not mobile, so they can afford high-end power-hungry GPUs that draw 100+ watts.
  • Guspaz - Thursday, November 10, 2011 - link

The Mali 400 might not be, but the PowerVR SGX543MP4 isn't that far off, and the next generation of GPUs (like the Mali-T658 or a multi-core Series 6 SGX) are definitely going to be decently faster than a 360. That was inevitable: the 360 came out 6 years ago, and if we compare it to mobile GPUs from a year or two in the future, it should be obvious. Die shrinks alone get us 10.3x higher density than the 360 (90nm -> 28nm), and that's before architectural improvements.
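    The die-shrink arithmetic above is easy to sanity-check; a minimal sketch (treating linear feature-size scaling as an approximation, since real processes don't shrink every dimension equally):

    ```python
    # Ideal transistor-density gain from a process shrink scales with
    # the square of the feature-size ratio (an idealized approximation).
    def density_gain(old_nm: float, new_nm: float) -> float:
        return (old_nm / new_nm) ** 2

    # Xbox 360's GPU launched on 90nm; a 2012-era mobile GPU on 28nm.
    gain = density_gain(90, 28)
    print(f"{gain:.1f}x")  # ~10.3x, matching the figure in the comment
    ```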
  • B3an - Friday, November 11, 2011 - link

The point I'm making is that there's no way the next-gen consoles from MS or Sony will use any of these future ARM GPUs anyway, even if they are faster. They are designed for mobile low-power devices; that's a different market. Consoles run on mains power, and because of this can use GPUs from AMD and Nvidia that draw 100+ watts and will be vastly faster than any ARM SoC for this reason alone.

And a new console has to be many times more powerful than the previous generation to really make a difference to most people, and it also has to last many years. If ARM were used, the GPU might be faster, but it won't be massively faster. AMD and Nvidia will certainly have vastly more powerful alternatives that would be used instead.
  • tipoo - Wednesday, December 07, 2011 - link

The Wii U uses an IBM POWER7-based processor and an AMD chip from the HD 4000 series. If Microsoft and Sony used ARM SoCs they would have a hard time beating even that. No matter how powerful ARM chips and graphics will be by then, traditional architectures will be that much faster.
  • Calin - Thursday, November 10, 2011 - link

You're comparing current-generation 40nm graphics chips (and graphics memory) running at 50-150W against mobile parts (and mobile memory) on 40nm or 28nm, constrained to maybe 1-2W in mobile phones and perhaps 5+W in tablets?
    I'd say they will be comparable to current-generation integrated graphics (Llano, Sandy Bridge), not to any mid-range PC graphics cards.
  • Guspaz - Thursday, November 10, 2011 - link

    They're definitely optimized for power efficiency, but they do scale up more than anybody has used them for yet.

The SGX 5XT line scales up to 16 cores, but the biggest we've seen in a smartphone is two cores (iPhone 4S with the SGX543MP2), and the highest anybody seems to have planned is four (PlayStation Vita with the SGX543MP4). And keep in mind that the SGX543MP taped out in January 2009, about three years ago.

    If you consider that console hardware is pretty aggressive in terms of getting stuff ahead of the curve, and on new processes, a 16 or 32-core series 6 PowerVR chip on a 28nm process would start looking a lot like a desktop part (or at least a high-end notebook part), both in terms of performance and power consumption.

    The thing is nobody has shown any interest in scaling PowerVR chips that high. I suspect that it's because, while they might be able to produce something in the same ballpark as whatever AMD/nVidia is going to put out, the focus on power efficiency over performance would be the wrong optimization strategy for home consoles (except Nintendo, it seems), and so it wouldn't be competitive.
  • MGSsancho - Thursday, November 10, 2011 - link

Unless a new console uses a 16-core version. A console does not need to worry about total power use as much as a cell phone does: 1W vs 10W or even 100W are all things consoles have the ability to handle (though there have been problems keeping them cool). There would also be an advantage to just using more cores in a console: it would make it easier to port games from handhelds to consoles, if we want to look at Microsoft's triple-screen idea.
  • Meaker10 - Thursday, November 10, 2011 - link

There are three major factors why they don't scale them up.

    1. Power consumption
    2. Chip size and yield
    3. Bandwidth limitations

Number 3 is the kicker: you can scale it up, but with current memory bandwidth it's not going to help.
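    To put rough numbers on point 3, peak bandwidth is just transfer rate times bus width; a sketch with period-typical parts (the specific memory configurations are illustrative assumptions, not from the comment):

    ```python
    # Peak theoretical bandwidth in GB/s: transfer rate (MT/s) times
    # bus width in bytes, divided by 1000.
    def peak_bandwidth_gbps(mtps: float, bus_bits: int) -> float:
        return mtps * (bus_bits / 8) / 1000

    # 32-bit LPDDR2-800, typical of a 2011 phone
    mobile = peak_bandwidth_gbps(800, 32)     # 3.2 GB/s
    # 256-bit GDDR5 at 4 GT/s, typical of a desktop graphics card
    desktop = peak_bandwidth_gbps(4000, 256)  # 128 GB/s
    print(f"{mobile} GB/s vs {desktop} GB/s")
    ```

    A 40x gap like this is why scaling up GPU cores alone wouldn't help a phone-class memory subsystem.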
  • tipoo - Wednesday, December 07, 2011 - link

They also wouldn't be limited to low-power DDR RAM in a console, though; they could use XDR2 RAM or GDDR5 with a wide bus, etc.
  • iwod - Wednesday, November 09, 2011 - link

I am more interested in PowerVR RX and the 6 Series.
  • dagamer34 - Wednesday, November 09, 2011 - link

I can't wait until ARM SoCs literally power every electronic device imaginable. No more crappy, low-frame-rate devices, but fast, fluid interfaces limited only by what our minds can think of.

Basically, someone PLEASE make a high-end car stereo worth owning! There's no reason I should have to pay $1000 for a touchscreen and a UI from 1999. Ridiculous!
  • juhatus - Thursday, November 10, 2011 - link

What caught my eye was

    "Perfect embedded high-end GPU for 4k mass markets"

Are we finally going to start seeing devices with 4K x 2K resolution?
    Any comments, Anand? :)
  • Guspaz - Thursday, November 10, 2011 - link

Well, not smartphones or tablets, no. The iPad 3 is expected to use a 2048x1536 display, and pixel densities much higher than that on a 10" screen don't really make sense. In fact, there isn't much point increasing resolution much beyond what we're already seeing on smartphones, because there's no benefit to scaling higher once it's finer than our eyes can resolve.
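    The pixel-density claim checks out: a 2048x1536 panel on the iPad's advertised 9.7-inch diagonal works out to about 264 PPI (a sketch; the 9.7-inch figure is the assumed screen size):

    ```python
    import math

    # Pixels per inch: diagonal resolution in pixels over the
    # physical diagonal in inches.
    def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
        return math.hypot(width_px, height_px) / diagonal_in

    print(round(ppi(2048, 1536, 9.7)))  # 264
    ```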

    A similar argument can be made for televisions; yes, they're much larger, but you're sitting a lot farther away. Larger sizes might see a benefit, but even 1080p (or 2K, if you will) is probably overkill at 32" or lower. On the other hand, a large display (46", 55", etc) would probably see a benefit from 4K.

    A more interesting place for improvements would be desktop displays, IMO. The problem is that no modern desktop OS is resolution agnostic. Smartphones and tablets have already gone that way, because you can't make UI elements any smaller and still expect fingers to be able to accurately touch them, but Windows/OS X/Linux/etc all have poor or non-existent resolution-independent support.
  • ajp_anton - Thursday, November 10, 2011 - link

If they're comparing a single-core Mali-400 with an 8-core T658, a 10x increase in performance doesn't sound very good...
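    The skepticism is easy to quantify: if the quoted 10x figure pits eight T658 cores against one Mali-400 core, the implied per-core gain is modest (a back-of-the-envelope sketch, assuming roughly linear multi-core scaling):

    ```python
    # When a vendor quotes total speedup for an N-core part against a
    # single-core baseline, divide out the core count to see the
    # implied per-core architectural improvement.
    def per_core_gain(total_speedup: float, cores: int) -> float:
        return total_speedup / cores

    print(per_core_gain(10, 8))  # 1.25x per core
    ```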
  • CityBlue - Thursday, November 10, 2011 - link

The big advantage of Mali GPUs is that they are completely open-source friendly, none of the NDA bullsh1t you get with PowerVR implementations.

    Imagination Technologies, through their PowerVR designs, are thoroughly hostile towards the open source community, whereas ARM are the complete opposite.

The sooner PowerVR GPUs disappear from all future ARM SoCs the better, and my guess is that this will start to happen sooner rather than later, as it becomes a no-brainer to use ARM's Mali IP rather than deal with the hostile and difficult Imagination Technologies and their separately licensed (and perhaps more expensive, while also less flexible) PowerVR IP.

    Goodbye Imagination Technologies, and good riddance.
  • geniekid - Thursday, November 10, 2011 - link

    I don't know about that. Posted on AT the same day as your comment:

    You may very well be right about Imagination Technologies and their hostile attitude, but they will probably be sticking around at least a little bit longer.
  • CityBlue - Thursday, November 10, 2011 - link

    Hopefully that's just to keep Apple happy.

For everyone else, and their own hardware and software, fingers crossed they'll switch to Mali.

    I'll be dismayed if Samsung start shipping Tizen handsets based on PowerVR SoCs.
  • silverblue - Thursday, November 10, 2011 - link

    ...and linked back to yourselves. :)
