43 Comments

  • jeffkibuule - Sunday, January 13, 2013 - link

    What are the chances we'll see Exynos SoCs in non-Samsung devices? Still 0%?
  • liem107 - Sunday, January 13, 2013 - link

    Exynos SoCs can be found in some Lenovo phones and some Chinese phones like Meizu.
  • larsivi - Monday, January 14, 2013 - link

    The Cotton Candy by FXI also uses Exynos.
  • dishayu - Wednesday, January 16, 2013 - link

    Indian "iBerry" tablets as well.
  • edlee - Sunday, January 13, 2013 - link

    That's great: now the GS4 will get all the game optimizations that Apple games get.

    There's also the fact that the PowerVR 544MP3 is faster than the Mali-T604, though maybe not as quick as the T658, which Samsung could not get ready in time.
  • alex3run - Monday, January 14, 2013 - link

    PowerVR is a third slower than the Mali-T604, so it's strange that Samsung has chosen a weaker GPU for its flagship SoC. That's why I think the rumor is fake.
  • djgandy - Monday, January 14, 2013 - link

    Show your workings....
  • edlee - Monday, January 14, 2013 - link

    Maybe an 8-core Mali-T604 might be faster than a 4-core 543/544 series.

    But it's more efficient to have a four-core 543/544, at least until the Mali-T658 goes mainstream.
  • alex3run - Monday, January 14, 2013 - link

    A 4-core Mali-T604 @ 533 MHz = 68.2 GFLOPS, which is higher than the PowerVR 544MP3 can deliver (51.1 GFLOPS).
  • Death666Angel - Monday, January 14, 2013 - link

    But it usually gets outperformed in the offscreen tests vs. the iPhone 5 with a 543MP3 *link to AnandTech Nexus 10 performance figures*. Am I missing something?
  • alex3run - Thursday, January 24, 2013 - link

    You are missing that Android is more power-hungry than iOS. You won't get the same performance on Android and iOS with the same GPU.
  • djgandy - Tuesday, January 29, 2013 - link

    What has that got to do with delivering performance in test configurations? Are you saying the Mali cannot actually deliver any of its performance promises?
  • alex3run - Tuesday, January 29, 2013 - link

    I'm trying to explain that PowerVR wouldn't be as powerful on Android as it seems on iOS. The Exynos 5 Octa would power Android devices, right?
  • kamejoko - Saturday, January 19, 2013 - link

    http://www.cnx-software.com/2013/01/19/gpus-compar...

    544MP2 @ 532 MHz = 136 GFLOPS. Why does AnandTech say 544MP3 @ 533 MHz = 51.2 GFLOPS?

    Who's wrong?
  • alex3run - Thursday, January 24, 2013 - link

    3 cores × 4 SIMDs/core × 4 ALUs/SIMD × 2 FLOPs/ALU × 533 MHz ≈ 51.2 GFLOPS. Mali has a similar core structure, but it's quad-core.
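    The arithmetic in this comment can be sketched in a few lines. The per-core structure (4 SIMDs/core, 4 ALUs/SIMD, 2 FLOPs per ALU per clock) is the figure assumed throughout this thread, not an official Imagination spec:

    ```python
    # Peak-GFLOPS sketch for a PowerVR SGX543/544-style GPU,
    # using the per-core structure assumed in this thread.
    def peak_gflops(cores, simds_per_core, alus_per_simd, flops_per_alu, clock_mhz):
        flops_per_clock = cores * simds_per_core * alus_per_simd * flops_per_alu
        return flops_per_clock * clock_mhz / 1000.0  # MHz -> GFLOPS

    # SGX544MP3 at 533 MHz, as in the Exynos 5 Octa:
    print(round(peak_gflops(3, 4, 4, 2, 533), 1))  # 51.2
    ```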
  • blanarahul - Thursday, January 17, 2013 - link

    The Mali-T658 is dead. And the Nexus 10 will be the second and (most probably) last device to use the Mali-T604.
  • alex3run - Saturday, January 26, 2013 - link

    http://arm.com/products/mali-t658.php
    It isn't dead.
  • djgandy - Tuesday, January 29, 2013 - link

    Except it is, as ARM has pulled all indexing to its product page.
  • blanarahul - Thursday, January 17, 2013 - link

    It doesn't matter. ARM has discontinued its 1st Gen Midgard GPUs (Mali-T604 and T658) for all intents and purposes. If you go to the Mali-T604 page, you see this message:

    "Did you know about the ARM® Mali™-T624?

    Designed for visual computing and using innovative tri-pipe architecture, the ARM® Mali™-T624 GPU Compute is the upgraded version of the Mali-T604. This solution builds upon a track record for high-quality scalable multicore solutions for 2D and 3D graphics. Key APIs supported include OpenGL® ES 1.1, 2.0 and 3.0, DirectX® 11 and OpenCL™ 1.1."

    This means that ARM is pushing its partners to use 2nd Gen Midgard GPUs (Mali-T624, T628 and T678) instead of 1st Gen Midgard GPUs.

    ARM has even removed the Mali-T658 page from their servers. In fact, the only reason the Mali-T604's page is still present on ARM's website is that the Nexus 10 has recently launched. I guess this infuriated Samsung, and they decided to leave Mali for the Exynos 5.

    ----------------------------------------------------------------------------------------------------------------------------

    But this isn't the first time ARM's GPUs have been ignored. ARM has launched a total of 9 GPUs so far, of which only 3 have made it to market: the Mali-200, Mali-400MP, and Mali-T604.

    Mali-200: Chinese Android Devices
    Mali-400MP: ST-Ericsson Novathor, Samsung Exynos 4, Allwinner A Series SoC.
    Mali-T604: Exynos 5 Dual.
  • mayankleoboy1 - Monday, January 14, 2013 - link

    Then join them. No other mobile GPU could beat PowerVR. The Mali is second-best at best. And with high-res displays, you need more horsepower.

    I wonder why they did not opt for the PowerVR Rogue architecture. That could have been very handy in beating Apple.
  • warisz00r - Monday, January 14, 2013 - link

    Likely because they want to push this out as quickly as they can, and adding newer tech may impede them.
  • Death666Angel - Monday, January 14, 2013 - link

    Hummingbird already used PowerVR hardware, so it's not surprising to see them go with that again.
    As for PowerVR > everything: Adreno, especially the new 3xx series, looks very competitive in terms of performance/power consumption. Getting power consumption numbers for Mali GPUs is a bit more difficult, but for what they are, I find them quite adequate as well.
    But I guess fanboys have conquered the smartphone hardware space now as well.
  • mayankleoboy1 - Monday, January 14, 2013 - link

    FTR, I am an Apple hater.
    That said, for Samsung to make a big marketing impact on semi-tech people, it should ace all the GPU benchmarks.
    And PowerVR on Apple chips wins most of them.
  • Death666Angel - Monday, January 14, 2013 - link

    Yes, but not necessarily because PowerVR is the best (there aren't many good apples-to-apples comparisons for mobile GPUs out there). Apple spends the most die area of any SoC manufacturer on its GPU; that's why they are the fastest.
  • djgandy - Tuesday, January 29, 2013 - link

    And most power efficient. There is a good argument to burn die space for power efficiency.
  • McDave - Monday, January 14, 2013 - link

    "I wonder, why did they not opt for the PowerVR Rogue arch ? That could have been much handy in beating Apple."

    Because Apple usually gets the good stuff first? They're typically a version ahead of the competition, hence the 554MP4 in the iPad 4 vs. the 54xMPx in the others. It's been this way since the first iPad. Probably Apple's 10% stake in Imagination working for them, although the NovaThors have been touting Rogue for a couple of years with still no production. Issues?
  • Krysto - Monday, January 14, 2013 - link

    Well, that's very disappointing. I was hoping for the Mali-T628. But even this Mali GPU is disappointing in itself. If they decided to move to PowerVR, they should've at least gone with Series 6. Instead of moving forward with OpenGL ES 3.0, they're moving backwards.

    What the hell, Samsung? I fear this may be just the first of Samsung's dumb moves in 2013, and 2012 may have been their peak.
  • Krysto - Monday, January 14, 2013 - link

    even this PowerVR GPU*
  • DesktopMan - Monday, January 14, 2013 - link

    If it's possible that this could end up in phones, adding the Apple A6 to the comparison table would make a lot of sense.
  • JomaKern - Monday, January 14, 2013 - link

    I suspect, as Anand has mentioned before, that the power efficiency of the PowerVR architecture was a key factor in not using a Mali-T6xx. In the power test article the T604 was pretty power-hungry.

    I don't think Apple will get Rogue into their next iPad, especially if the rumored March launch is true. The A6X is so much faster than its rivals that they can use the die shrink to 28nm to boost clocks and still easily have the fastest GPU.
  • edlee - Monday, January 14, 2013 - link

    [IMG]http://img16.imageshack.us/img16/4239/powervr.jpg[/IMG]
  • ltcommanderdata - Monday, January 14, 2013 - link

    It would be nice if they added a row that explicitly states the maximum shipping frequency for each chip used to calculate the GFLOPS rating. Working backwards, the frequencies are: 250 MHz (A5), 250 MHz (A5X), 533 MHz (Exynos 5 Octa), 280 MHz (A6X). The 280 MHz A6X frequency in particular would be good to get explicitly on paper, since it's been hinted at before but I don't think anyone has explicitly stated it.

    And to be most correct, I believe the GFLOPS actually round to 51.2 GFLOPS for the Exynos rather than 51.1 GFLOPS at 533 MHz, and 71.7 GFLOPS for the A6X rather than 71.6 GFLOPS at 280 MHz.
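    The "working backwards" in this comment can be sketched as dividing peak GFLOPS by FLOPs per clock. The per-core FLOPs/clock figures (32 for SGX543/544, 64 for SGX554) and the GFLOPS inputs are assumptions taken from this thread, not vendor specs:

    ```python
    # Back-compute a GPU clock from a peak-GFLOPS figure.
    # flops_per_clock_per_core: 32 assumed for SGX543/544, 64 for SGX554.
    def clock_mhz(gflops, cores, flops_per_clock_per_core):
        return gflops * 1000.0 / (cores * flops_per_clock_per_core)

    print(clock_mhz(16.0, 2, 32))          # A5 (543MP2) -> 250 MHz
    print(clock_mhz(32.0, 4, 32))          # A5X (543MP4) -> 250 MHz
    print(round(clock_mhz(51.2, 3, 32)))   # Exynos 5 Octa (544MP3) -> ~533 MHz
    print(round(clock_mhz(71.7, 4, 64)))   # A6X (554MP4) -> ~280 MHz
    ```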
  • alexvoda - Monday, January 14, 2013 - link

    How does the GPU in the Tegra 4 compare to this, the A6X, and the future iPad 5?

    Since Nvidia is a GPU company, I expect their chip, while it's fresh and new, to stomp everything else on the GPU side. How does it actually compare?
  • augiem - Monday, January 14, 2013 - link

    Hasn't happened yet. PVR and Apple have ruled the roost since 2007. Don't hold your breath. Every single halo mobile GPU launched in that timeframe has been one disappointment after another... well, except of course for PVR launches. I'm really tired of waiting for 1) some other device maker to use as many PVR cores as Apple and 2) some other GPU developer (Nvidia!) to get off their butts and make something competitive. I think I'm going to close my eyes and check back in 3 years... AGAIN.
  • alexvoda - Monday, January 14, 2013 - link

    In that case, here's an extra question:
    How do they count cores?
    From what I understand, current SoCs with PowerVR have at most 4 PowerVR cores.
    Mali designs are similar, having at most 8 cores according to Wikipedia.
    The Tegra 4 has 72 GPU cores of some kind.
    Are they counting different things? Is the architecture so significantly different?
  • ltcommanderdata - Monday, January 14, 2013 - link

    http://www.anandtech.com/show/5072/nvidias-tegra-3...

    Yes, they are counting different things. Each PowerVR "core" is really a complete GPU with shader ALUs, texture units, control logic, etc. Each nVidia "core" is really just a shader ALU. That's why Anand counts SIMDs, which is basically the ALU count.

    Each SGX543/SGX544 "core" has 4 SIMDs/ALUs, whereas each SGX554 "core" doubles the SIMD/ALU count to 8 per core. As such, the SGX554MP4 in the A6X has 32 ALUs. PowerVR ALUs are also beefier: each is able to process 4 MAD instructions, whereas each nVidia ALU can only do 1 MAD. That works out to the SGX554MP4 being able to process 128 MAD instructions per clock, whereas Tegra 3 is only able to do 12.

    Tegra 4 is reported to use basically the same GPU shader architecture as Tegra 3, just with everything increased 6x. Tegra 4's 72 cores can therefore likely process 72 MAD instructions per clock vs. 128 for the SGX554MP4. Clock speeds, memory bandwidth, and other factors will affect real-world performance, but clock-for-clock, the SGX554MP4 in the A6X should be faster than the Tegra 4.
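    The core-counting comparison in this comment reduces to a few multiplications. These figures (ALUs per core, MADs per ALU) are the thread's assumptions, not vendor datasheet numbers:

    ```python
    # "Core" counting: PowerVR counts whole GPUs as cores;
    # NVIDIA counts individual shader ALUs as cores.
    sgx554mp4_mads = 4 * 8 * 4  # 4 cores x 8 SIMDs/ALUs per core x 4 MADs per ALU
    tegra3_mads = 12 * 1        # 12 "cores", each a single ALU doing 1 MAD
    tegra4_mads = 72 * 1        # 72 "cores", same architecture scaled up 6x

    print(sgx554mp4_mads, tegra3_mads, tegra4_mads)  # 128 12 72
    ```

    On these assumptions the A6X's GPU retains a clock-for-clock edge over Tegra 4, which is the comment's conclusion.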
  • frenchy_2001 - Monday, January 14, 2013 - link

    The problem is always the same and likely to stay: cost.
    This is reflected in the die area of each mobile chip. The A5X (iPad 3) was ~163mm2 while Tegra 3 was ~80mm2. You are talking about a 2x factor. For twice the cost, Apple can afford to be better.
    (See this table for the different sizes: http://www.anandtech.com/show/5685/apple-a5x-die-s... )

    In discrete GPUs, this would be like comparing a GTX 680 (top of the line, $400) to a Radeon 7870 (upper midrange, $200). The Nvidia card will trample the Radeon, at the cost of... money.

    Of course, in the case of tablets and phones, we do NOT see those costs, as they get hidden in the total cost and are often pocketed by the product manufacturer (another interesting link explaining why Apple may have introduced the iPad 4 as a cost reduction: http://blogs.timesofindia.indiatimes.com/WebWise/e... )

    So, as GPU capabilities will be capped by costs, it is unlikely that we will see anyone else beat Apple (which, honestly, overspends on graphics power at the moment as a differentiator and an advantage for gaming).
  • mayankleoboy1 - Monday, January 14, 2013 - link

    money and.... power usage at load/idle.
  • djgandy - Tuesday, January 29, 2013 - link

    The cost difference between an 80mm2 die and a 160mm2 die is not that huge if you are shipping in the tens of millions. You'd spend that $10 when it significantly improves your $500 device, imposes less demand on the battery, and reduces the amount of heat from the device.

    Having cool-running silicon is very important in these slim devices. That's why Apple burns area. It means they can design the device they want, rather than the device the chip forces on them.

    Having a smaller, higher-clocked, hotter-running chip would probably cost Apple more in working around the constraints or fitting a larger battery.
  • mikegonzalez2k - Monday, March 04, 2013 - link

    I thought calculating the GFLOPS was:

    # of cores x clock speed x FLOPS/clock

    I'm not getting the correct values, so I was wondering if someone could show the calculation.
  • mikegonzalez2k - Monday, March 04, 2013 - link

    Here is my calculation; what is wrong?
    533x10^6 * 4 cores * 4 SIMD/core * 4 ALU/SIMD * 2 FLOPs/ALU = 68.2 GFLOPS
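    A sketch of the calculation above. Assuming, as earlier comments state, that the SGX544MP3 has 3 cores rather than 4, the only error is the core count:

    ```python
    # GFLOPS = clock (Hz) x cores x SIMDs/core x ALUs/SIMD x FLOPs/ALU,
    # using the per-core structure assumed in this thread.
    def gflops(clock_hz, cores, simds_per_core, alus_per_simd, flops_per_alu):
        return clock_hz * cores * simds_per_core * alus_per_simd * flops_per_alu / 1e9

    # With 4 cores (the calculation in the comment above):
    print(gflops(533e6, 4, 4, 4, 2))  # 68.224 -- actually the 4-core Mali figure
    # With 3 cores (the SGX544MP3 as described earlier in the thread):
    print(gflops(533e6, 3, 4, 4, 2))  # 51.168, i.e. the article's ~51.2
    ```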
  • johamin - Saturday, April 13, 2013 - link

    The Exynos 5 Octa does not support OpenGL ES 3.0. Does that matter or not?
  • lacp - Friday, June 14, 2013 - link

    The GPU of this phone has twice the graphics performance of the iPhone 5, whose GPU runs at 267 MHz, while the GPU of the Galaxy S4 I9500 runs at 533 MHz. Given that one core of the SGX544 GPU at 200 MHz delivers 7.2 GFLOPS of compute performance, 3 cores of the SGX544 at 533 MHz deliver 57.6 GFLOPS, not the 51.1 GFLOPS that the specialists at tech site AnandTech wrote.
