For the past several days I've been playing around with Futuremark's new 3DMark for Android, as well as Kishonti's GL and DXBenchmark 2.7. All of these tests are scheduled to be available on Android, iOS, Windows RT and Windows 8 - giving us the beginning of a very wonderful thing: a set of benchmarks that allow us to roughly compare mobile hardware across (virtually) all OSes. The computing world is headed for convergence in a major way, and with benchmarks like these we'll be able to better track everyone's progress as the high performance folks go low power, and the low power folks aim for higher performance.

The previous two articles I did on the topic were really focused on comparing smartphones to smartphones, and tablets to tablets. What we've been lacking, however, is perspective. On the CPU side we've known how fast Atom was for quite a while. Back in 2008 I concluded that a 1.6GHz single-core Atom processor delivered performance similar to that of a 1.2GHz Pentium M, or a mainstream Centrino notebook from 2003. Higher clock speeds and a second core would likely push that performance forward by another year or two at most. Given that most of the ARM based CPU competitors tend to be a bit slower than Atom, you could estimate that any of the current crop of smartphones delivers CPU performance somewhere in the range of a notebook from 2003 - 2005. Not bad. But what about graphics performance?

To find out, I went through my parts closet in search of GPUs from a similar time period. I needed hardware that supported PCIe (to make testbed construction easier), and I needed GPUs that supported DirectX 9, which had me starting at 2004. I don't always keep everything I've ever tested, but I try to keep parts of potential value to future comparisons. Rest assured that back in 2004 - 2007, I didn't think I'd be using these GPUs to put smartphone performance in perspective.

Here's what I dug up:

The Lineup (Configurations as Tested)
| GPU | Release Year | Pixel Shaders | Vertex Shaders | Core Clock | Memory Data Rate | Memory Bus Width | Memory Size |
|---|---|---|---|---|---|---|---|
| NVIDIA GeForce 8500 GT | 2007 | 16 (unified) | - | 520MHz (1040MHz shader clock) | 1.4GHz | 128-bit | 256MB DDR3 |
| NVIDIA GeForce 7900 GTX | 2006 | 24 | 8 | 650MHz | 1.6GHz | 256-bit | 512MB DDR3 |
| NVIDIA GeForce 7900 GS | 2006 | 20 | 7 | 480MHz | 1.4GHz | 256-bit | 256MB DDR3 |
| NVIDIA GeForce 7800 GT | 2005 | 20 | 7 | 400MHz | 1GHz | 256-bit | 256MB DDR3 |
| NVIDIA GeForce 6600 | 2004 | 8 | 3 | 300MHz | 500MHz | 128-bit | 256MB DDR |

I wanted to toss in a GeForce 6600 GT, given just how awesome that card was back in 2004, but alas I had cleared out my old stock of PCIe 6600 GTs long ago. I had an AGP 6600 GT, but that would ruin my ability to keep CPU performance in line with Surface Pro, so I had to resort to a vanilla GeForce 6600. Both core clock and memory bandwidth suffered as a result: the base 6600 ran at only 300MHz compared to 500MHz for the GT, and the move to slower DDR cut memory bandwidth in half. What does make the vanilla GeForce 6600 very interesting, however, is that it delivered similar performance to a very famous card: the Radeon 9700 Pro (chip codename: R300). The Radeon 9700 Pro also had 8 pixel pipes, but 4 vertex shader units, and ran at 325MHz. The 9700 Pro did have substantially higher memory bandwidth, but since our only cross-platform benchmarks target bandwidth-limited mobile hardware, we won't always see tons of memory bandwidth put to good use here.
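If you want to sanity check that bandwidth gap, the math is simple: peak theoretical bandwidth is just bus width times effective data rate. Below is a quick back-of-envelope sketch in Python using the as-tested figures from the table above; the GeForce 6600 GT and Radeon 9700 Pro entries aren't in the lineup, so those rows use stock specs for comparison.

```python
# Quick sanity check of the memory bandwidth numbers discussed above.
# Peak theoretical bandwidth = (bus width in bytes) x (effective data rate).

def bandwidth_gb_s(bus_width_bits, data_rate_mhz):
    """Peak theoretical memory bandwidth in GB/s."""
    return (bus_width_bits / 8) * (data_rate_mhz * 1e6) / 1e9

# (bus width in bits, effective data rate in MHz) - as-tested values from the
# table, plus stock specs for two cards that aren't in the lineup.
cards = {
    "GeForce 6600 (DDR)":      (128, 500),
    "GeForce 6600 GT (stock)": (128, 1000),  # not in lineup, stock spec
    "Radeon 9700 Pro (stock)": (256, 620),   # not in lineup, stock spec
    "GeForce 7800 GT":         (256, 1000),
    "GeForce 7900 GS":         (256, 1400),
    "GeForce 7900 GTX":        (256, 1600),
    "GeForce 8500 GT":         (128, 1400),
}

for name, (bus_bits, rate_mhz) in cards.items():
    print(f"{name:<26} {bandwidth_gb_s(bus_bits, rate_mhz):5.1f} GB/s")
```

The vanilla 6600 lands at 8GB/s versus 16GB/s for a stock 6600 GT, which is the halving mentioned above, while the 9700 Pro's roughly 19.8GB/s shows the bandwidth advantage it held over both.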

The 7800 GT and 7900 GS/GTX were included to showcase the impacts of scaling up compute units and memory bandwidth, as the architectures aren't fundamentally all that different from the GeForce 6600 - they're just bigger and better. The 7800 GT in particular was exciting as it delivered performance competitive with the previous generation GeForce 6800 Ultra, but at a more attractive price point. Given that the 6800 Ultra was cream of the crop in 2004, the performance of the competitive 7800 GT will be important to look at.

Finally we have a mainstream part from NVIDIA's G8x family: the GeForce 8500 GT. Prior to G80 and its derivatives, NVIDIA used dedicated pixel and vertex shader hardware - similar to what it does today with its ultra mobile GPUs (Tegra 2 - 4). Starting with G80 (and eventually trickling down to G86, the basis of the 8500 GT), NVIDIA embraced a unified shader architecture with a single set of execution resources that could be used to run pixel or vertex shader programs. NVIDIA will make a similar transition in its Tegra lineup with Logan in 2014. The 8500 GT won't outperform the 7900 GTX in most gaming workloads, but it does give us a look at how NVIDIA's unified architecture deals with our two cross-platform benchmarks. Remember that both 3DMark and GL/DXBenchmark 2.7 were designed (mostly) to run on modern hardware. Although hardly modern, the 8500 GT does look a lot more like today's architectures than the G70 based cards.
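To make the unified vs. dedicated distinction concrete, here's a deliberately simplified toy model, not NVIDIA's actual scheduler, just an illustration: with fixed-function partitioning, whichever unit type the workload doesn't stress sits idle, while a unified pool can throw every execution unit at whatever work is queued. The unit counts below mirror the 7900 GTX's 24+8 split against a hypothetical 32-unit unified pool.

```python
# Toy model only: illustrates why unified shaders keep hardware busier
# on lopsided workloads. Work items are arbitrary units, not real shader ops.

def dedicated_cycles(pixel_work, vertex_work, pixel_units=24, vertex_units=8):
    # Each unit type can only service its own queue, so the busier queue
    # determines how long the frame takes; the other units idle.
    return max(pixel_work / pixel_units, vertex_work / vertex_units)

def unified_cycles(pixel_work, vertex_work, total_units=32):
    # Any unit can service any queue, so only the total amount of work matters.
    return (pixel_work + vertex_work) / total_units

# A pixel-heavy frame, roughly the shape of these benchmark workloads:
pixel_work, vertex_work = 9600, 400
print("dedicated:", dedicated_cycles(pixel_work, vertex_work), "cycles")  # 400.0
print("unified:  ", unified_cycles(pixel_work, vertex_work), "cycles")    # 312.5
```

In a pixel-heavy frame the dedicated design leaves its vertex hardware mostly idle, which is a big part of the appeal of a unified pool of execution resources.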

You'll notice a distinct lack of ATI video cards here - that's not from a lack of trying. I dusted off an old X800 GT and an X1650 Pro, neither of which would complete the first graphics test in 3DMark or DXBenchmark's T-Rex HD test. Drivers seem to be at fault here: ATI dropped support for DX9-only GPUs long ago, and the latest Catalyst available for these cards (10.2) was released well before either benchmark was conceived. Unfortunately I don't have any AMD based ultraportables, but I did grab the old Brazos E-350. As a reminder, the E-350 was a 40nm APU with two Bobcat cores and 80 GPU cores (Radeon HD 6310). While we won't see the E-350 itself in a tablet, a faster member of its lineage will find its way into tablets beginning this year.

Comments

  • ChronoReverse - Thursday, April 4, 2013 - link

    Very interesting article. I've been wondering where the current phone GPUs stood compared to desktop GPUs.
  • krumme - Thursday, April 4, 2013 - link

    +1
    Anand sticking to the subject and diving into details and at the same time giving perspective is great work!
    I don't know if I buy the convergence thinking on the technical side, because from here it looks like people are just buying more smartphones and far fewer desktops. The convergence is there a little bit, but I will see the battle on the field before it gets really interesting. Atom is not yet ready for phones and Bobcat is not ready for tablets. When they get there, where will ARM be?

    I put my money on ARM :)
  • kyuu - Thursday, April 4, 2013 - link

    If Atom is ready for tablets, then Bobcat is more than ready. The Z-60 may only have one design win (in the Vizio Tablet PC), but it should deliver comparable (if not somewhat superior) CPU performance with much, much better GPU performance.
  • zeo - Tuesday, April 16, 2013 - link

    Uh, no... The Hondo Z-60 is basically just an update to the Desna, which itself was derived from the AMD Fusion/Brazos Ontario C-50.

    While it is true that Bobcat cores are superior to Atom processors at equivalent clock speeds, the problem is that AMD has to deal with higher power consumption, which generates more heat and in turn forces them to lower the max clock speed... especially if they want to offer anywhere near competitive run times.

    So the Bobcat cores in the Z-60 only run at 1GHz, while the Clover Trail Atom runs at 1.8GHz (Clover Trail+ even goes up to 2GHz for the Z2580, though that version is only for Android devices). The difference in processor efficiency is overcome by just a few hundred MHz of clock speed.

    Meaning you actually get more CPU performance from Clover Trail than you would from a Hondo... However, where AMD holds dominance over Intel is in graphics performance: while Clover Trail provides about 3x better performance than the previous GMA 3150 (back in the netbook days of the Pine Trail Atom), it is still about 3x less powerful than Hondo's graphics.

    The only other problem is that Hondo only slightly improves power consumption compared to the previous Desna, down to about a 4.79W max TDP, though that is at least nearly half of the original C-50's 9W...

    However, keep in mind that Clover Trail is a 1.7W part and all models are fan-less, while Hondo models will continue to require fans.

    AMD also doesn't offer anything like Intel's advanced S0ix power management, which allows for ARM-like extremely low milliwatt idle states and enables features like always-connected standby.

    So the main reason to get a Hondo tablet is that it'll likely offer better Linux support, which is presently virtually non-existent for Intel's 32nm Atom SoCs, and the better graphics performance if you want to play some low-end but still pretty good games.

    It's the upcoming 28nm Temash that you should keep an eye out for, being AMD's first SoC that can go fan-less in its dual-core version. While the quad-core version will need a fan, it will offer a Turbo docking feature that lets it go into a much higher 14W max TDP power mode providing near Ultrabook level performance... though the dock will require an additional battery and additional fans to support the feature.

    Intel won't be able to counter Temash until its 22nm Bay Trail update comes out, though that'll be just months later, as Bay Trail is due to start shipping around September of this year and may be in products in time for the holiday shopping season.
  • Speedfriend - Thursday, April 4, 2013 - link

    Atom is not yet ready for phones?

    It is in several phones already, where it delivers strong performance from a CPU and power consumption perspective. Its weak point is the GPU from Imagination. In two years' time, ARM will be a distant memory in high-end tablets and possibly high-end smartphones too, even more so if we get advances in battery technology.
  • krumme - Thursday, April 4, 2013 - link

    Well, Atom is in several phones that don't sell in any meaningful numbers. Sure, there will be x86 in high-end tablets, and Jaguar will make sure that happens this year, but will those tablets matter? There are ARM servers too. Do they matter?
    Right now tons of cheap 40nm A9 products are being sold, and consumers are just about to get 28nm quad-core A7s at 2mm² for the CPU part. And they are ready for cheap, slim phones, with Google Play and acceptable graphics performance for Temple Run 2.
  • Wilco1 - Thursday, April 4, 2013 - link

    Also note that despite Anand making the odd "Given that most of the ARM based CPU competitors tend to be a bit slower than Atom" claim, the Atom Z2760 in the Vivo Tab Smart consistently scores the worst on both the CPU and GPU tests. Even Surface RT with its low-clocked A9s beats it. That means Atom is not even tablet-ready...
  • kyuu - Thursday, April 4, 2013 - link

    The Atom scores worse in 3DMark's physics test, yes. But any other CPU benchmark I've seen has always favored Clover Trail over any A9-based ARM SoC. A15 can give the Atom a run for its money, though.
  • Wilco1 - Thursday, April 4, 2013 - link

    Well I haven't seen Atom beat an A9 at similar frequencies except perhaps SunSpider (a browser test, not a CPU test). On native CPU benchmarks like Geekbench Atom is well behind A9 even when you compare 2 cores/4 threads with 2 cores/2 threads.
  • kyuu - Friday, April 5, 2013 - link

    At similar frequencies? What does that matter? If Atom can run at 1.8GHz while still being more power efficient than Tegra 3 at 1.3GHz, then that's called -- advantage: Atom.

    Did you read the reviews of Clover Trail when it came out?

    http://www.anandtech.com/show/6522/the-clover-trai...

    http://www.anandtech.com/show/6529/busting-the-x86...
