The Great Equalizer 3: How Fast is Your Smartphone/Tablet in PC GPU Terms
by Anand Lal Shimpi on April 4, 2013 1:00 AM EST
Posted in: Tablets, SoC, Smartphones, GPUs
For the past several days I've been playing around with Futuremark's new 3DMark for Android, as well as Kishonti's GL and DXBenchmark 2.7. All of these tests are scheduled to be available on Android, iOS, Windows RT and Windows 8 - giving us the beginning of a very wonderful thing: a set of benchmarks that allow us to roughly compare mobile hardware across (virtually) all OSes. The computing world is headed for convergence in a major way, and with benchmarks like these we'll be able to better track everyone's progress as the high performance folks go low power, and the low power folks aim for higher performance.
The previous two articles I did on the topic were really focused on comparing smartphones to smartphones, and tablets to tablets. What we've been lacking, however, is perspective. On the CPU side we've known how fast Atom was for quite a while. Back in 2008 I concluded that a 1.6GHz single-core Atom processor delivered performance similar to that of a 1.2GHz Pentium M, or a mainstream Centrino notebook from 2003. Higher clock speeds and a second core would likely push that performance forward by another year or two at most. Given that most of the ARM-based CPU competitors tend to be a bit slower than Atom, you could estimate that any of the current crop of smartphones delivers CPU performance somewhere in the range of a notebook from 2003 - 2005. Not bad. But what about graphics performance?
To find out, I went through my parts closet in search of GPUs from a similar time period. I needed hardware that supported PCIe (to make testbed construction easier), and I needed GPUs that supported DirectX 9, which had me starting at 2004. I don't always keep everything I've ever tested, but I try to keep parts of potential value to future comparisons. Rest assured that back in 2004 - 2007, I didn't think I'd be using these GPUs to put smartphone performance in perspective.
Here's what I dug up:
The Lineup (Configurations as Tested)

| GPU | Release Year | Pixel Shaders | Vertex Shaders | Core Clock | Memory Data Rate | Memory Bus Width | Memory Size |
|---|---|---|---|---|---|---|---|
| NVIDIA GeForce 8500 GT | 2007 | 16 (unified) | (shared with pixel) | 520MHz (1040MHz shader clock) | 1.4GHz | 128-bit | 256MB DDR3 |
| NVIDIA GeForce 7900 GTX | 2006 | 24 | 8 | 650MHz | 1.6GHz | 256-bit | 512MB DDR3 |
| NVIDIA GeForce 7900 GS | 2006 | 20 | 7 | 480MHz | 1.4GHz | 256-bit | 256MB DDR3 |
| NVIDIA GeForce 7800 GT | 2005 | 20 | 7 | 400MHz | 1GHz | 256-bit | 256MB DDR3 |
| NVIDIA GeForce 6600 | 2004 | 8 | 3 | 300MHz | 500MHz | 128-bit | 256MB DDR |
I wanted to toss in a GeForce 6600 GT, given just how awesome that card was back in 2004, but alas I had cleared out my old stock of PCIe 6600 GTs long ago. I had an AGP 6600 GT, but that would ruin my ability to keep CPU performance in line with Surface Pro, so I had to resort to a vanilla GeForce 6600. Both core clock and memory bandwidth suffered as a result, with the latter being cut in half from using slower DDR. The core clock on the base 6600 was only 300MHz compared to 500MHz for the GT. What does make the vanilla GeForce 6600 very interesting, however, is that it delivered similar performance to a very famous card: the Radeon 9700 Pro (chip codename: R300). The Radeon 9700 Pro also had 8 pixel pipes, but 4 vertex shader units, and ran at 325MHz. The 9700 Pro did have substantially higher memory bandwidth, but given that our only cross-platform benchmarks target bandwidth-limited mobile hardware, we won't always see tons of memory bandwidth put to good use here.
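To put the table's memory figures in more familiar terms, here's a quick back-of-the-envelope sketch of peak theoretical bandwidth (effective data rate × bus width). The 6600 GT entry assumes its usual ~1GHz effective memory clock and is included purely to show the halving mentioned above:

```python
# Peak memory bandwidth (GB/s) = effective data rate (MT/s) * bus width (bytes) / 1000
def peak_bandwidth_gbps(data_rate_mts, bus_width_bits):
    return data_rate_mts * (bus_width_bits / 8) / 1000

cards = {
    "GeForce 8500 GT":  (1400, 128),  # ~22.4 GB/s
    "GeForce 7900 GTX": (1600, 256),  # ~51.2 GB/s
    "GeForce 7900 GS":  (1400, 256),  # ~44.8 GB/s
    "GeForce 7800 GT":  (1000, 256),  # ~32.0 GB/s
    "GeForce 6600":     (500, 128),   # ~8.0 GB/s
    "GeForce 6600 GT":  (1000, 128),  # ~16.0 GB/s (for reference only)
}

for name, (rate, width) in cards.items():
    print(f"{name}: {peak_bandwidth_gbps(rate, width):.1f} GB/s")
```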
The 7800 GT and 7900 GS/GTX were included to showcase the impacts of scaling up compute units and memory bandwidth, as the architectures aren't fundamentally all that different from the GeForce 6600 - they're just bigger and better. The 7800 GT in particular was exciting as it delivered performance competitive with the previous-generation GeForce 6800 Ultra, but at a more attractive price point. Given that the 6800 Ultra was the cream of the crop in 2004, the performance of the competitive 7800 GT will be important to look at.
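As a very rough feel for that compute scaling, here's a crude proxy (pixel shaders × core clock) built from the table's numbers; it deliberately ignores per-unit and architectural efficiency differences, so treat it as nothing more than a ballpark:

```python
# Crude pixel-shading throughput proxy: pixel shader count * core clock (MHz),
# normalized to the vanilla GeForce 6600. Ignores per-unit differences entirely.
lineup = {
    "GeForce 6600":     (8, 300),
    "GeForce 7800 GT":  (20, 400),
    "GeForce 7900 GS":  (20, 480),
    "GeForce 7900 GTX": (24, 650),
}
baseline = 8 * 300
for name, (shaders, clock_mhz) in lineup.items():
    print(f"{name}: {shaders * clock_mhz / baseline:.1f}x")  # 1.0x, 3.3x, 4.0x, 6.5x
```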
Finally we have a mainstream part from NVIDIA's G8x family: the GeForce 8500 GT. Prior to G80 and its derivatives, NVIDIA used dedicated pixel and vertex shader hardware - similar to what it does today with its ultra mobile GPUs (Tegra 2 - 4). Starting with G80 (and eventually trickling down to G86, the basis of the 8500 GT), NVIDIA embraced a unified shader architecture with a single set of execution resources that could be used to run pixel or vertex shader programs. NVIDIA will make a similar transition in its Tegra lineup with Logan in 2014. The 8500 GT won't outperform the 7900 GTX in most gaming workloads, but it does give us a look at how NVIDIA's unified architecture deals with our two cross-platform benchmarks. Remember that both 3DMark and GL/DXBenchmark 2.7 were designed (mostly) to run on modern hardware. Although hardly modern, the 8500 GT does look a lot more like today's architectures than the G70 based cards.
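To see why the unified approach matters, here's a deliberately simplified toy model. The fixed split loosely echoes the 7900 GTX's 24 pixel + 8 vertex units, while the unified pool is a hypothetical 32-unit design; the workload numbers are made up purely for illustration:

```python
# Toy model: frame time with a fixed pixel/vertex split vs. a unified shader pool.
# Work is measured in arbitrary "shader operations"; each unit retires one per tick.

def frame_time_fixed(pixel_work, vertex_work, pixel_units=24, vertex_units=8):
    # Each pool can only run its own shader type, so the busier pool sets the pace.
    return max(pixel_work / pixel_units, vertex_work / vertex_units)

def frame_time_unified(pixel_work, vertex_work, total_units=32):
    # Any unit can run either shader type, so all work shares one pool.
    return (pixel_work + vertex_work) / total_units

# Pixel-heavy frame: the two designs aren't far apart.
print(frame_time_fixed(960, 80), frame_time_unified(960, 80))    # 40.0 vs 32.5
# Vertex-heavy frame: the fixed design stalls on its 8 vertex units.
print(frame_time_fixed(200, 400), frame_time_unified(200, 400))  # 50.0 vs 18.75
```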
You'll notice a distinct lack of ATI video cards here - that's not from a lack of trying. I dusted off an old X800 GT and an X1650 Pro, neither of which would complete the first graphics test in 3DMark or DXBenchmark's T-Rex HD test. Drivers seem to be at fault here: ATI dropped support for DX9-only GPUs long ago, and the latest Catalyst available for these cards (10.2) was released well before either benchmark was conceived. Unfortunately I don't have any AMD-based ultraportables, but I did grab the old Brazos E-350. As a reminder, the E-350 was a 40nm APU that used two Bobcat cores and featured 80 GPU cores (Radeon HD 6310). While we won't see the E-350 in a tablet, a faster member of its lineage will find its way into tablets beginning this year.
111 Comments
tech4real - Friday, April 05, 2013
But why do we have to compare them at similar frequencies? One of Atom's strengths is working at high frequency within a thermal budget. If Tegra 3 can't hit 2GHz within its power budget, that's NVIDIA/ARM's problem. Why should Atom bother to downclock itself?
Wilco1 - Friday, April 05, 2013
There is no need to clock the Atom down - typical A9-based tablets are at 1.6 or 1.7GHz. Yes, a Z-2760 beats a 1.3GHz Tegra 3 on SunSpider, but that's not true for the Cortex-A9s used today (Tegra 3 goes up to 1.7GHz, Exynos 4 does 1.6GHz), let alone future ones. So it's incorrect to claim that Atom is generally faster than A9 - that implies Atom has an IPC advantage (which it does not have - it only wins if it has a big frequency advantage). I believe MS made a mistake by choosing the slowest Tegra 3 for Surface RT, as it gives RT as well as Tegra a bad name - hopefully they fix this in the next version. Beating an old, low-clocked Tegra 3 on performance/power is not all that difficult; beating more modern SoCs is a different matter. Pretty much all ARM SoCs are already at 28 or 32nm, while Tegra 3 is still 40nm. That will finally change with Tegra 4.
tech4real - Sunday, April 07, 2013
Based on this Anand article (http://www.anandtech.com/show/6340/intel-details-a...), the linearly projected 1.7GHz Tegra 3 SPECint2000 score is about 1.12, while the 1.8GHz Atom stands at 1.20, so the gap is still there. If you consider the 2GHz Atom turbo case, we can argue the gap is even wider. Of course, since this SPECint data is provided by Intel, we have to take it with a grain of salt, but I think the general idea has its merit.
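For what it's worth, the projection described above is just linear clock scaling. Here's a minimal sketch; the 0.86 baseline is hypothetical, back-calculated so the example lands on the ~1.12 figure quoted in the comment, not a measured result:

```python
def project_linear(score, measured_ghz, target_ghz):
    # Naive linear clock scaling; optimistic, since memory and uncore
    # performance don't scale with core clock.
    return score * (target_ghz / measured_ghz)

# Hypothetical 1.3GHz Tegra 3 baseline (not a measured number), scaled to 1.7GHz:
print(round(project_linear(0.86, 1.3, 1.7), 2))  # ~1.12
```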
theduckofdeath - Thursday, April 04, 2013
That is not true. A few months ago AnandTech themselves made a direct comparison between the Tegra 3 in the Surface tablet and an Atom processor, and the Atom beat the Tegra 3 both on performance and power efficiency.
Wilco1 - Friday, April 05, 2013
I was talking about similar frequencies - did you read what I said? Yes, the first Surface RT is a bit of a disappointment due to the low-clocked Tegra 3, but hopefully MS will use a better SoC in the next version. Tegra 4(+) or Exynos Octa would make it shine. We can then see how Atom does against that.
SlyNine - Saturday, April 06, 2013
Nobody cares if the frequencies are different; if one performs better and uses less power, that's a win, REGARDLESS OF FREQUENCY. Give one good reason that matters to the consumer and manufacturer for frequencies being an important factor.
pSupaNova - Sunday, April 07, 2013
You're not listening to what Wilco1 is saying. Microsoft used a poor Tegra 3 part; the HTC One X+ ships with a Tegra 3 clocked at 1.7GHz.
So by comparing the Atom-based tablets to the Surface RT, Anand puts the Intel chip in a much better light.
nofumble62 - Friday, April 05, 2013
LTE is not available on the Intel platform yet; that is why they don't offer it in the US. But I heard the new Intel LTE chip is pretty good (it won an award), so next year will be interesting. The ARM big cores suck up a lot of power when they are running. That is the reason the Qualcomm Snapdragon is winning in the latest Samsung Galaxy S4 (over Samsung's own Exynos chip) and the Nexus 7 (over NVIDIA's Tegra).
Spunjji - Friday, April 05, 2013
NVIDIA's Tegra isn't really ready for the new Nexus 7, so it's not entirely fair to say it's out because of power issues. When you consider that the S4 situation you described isn't strictly true either (if I buy an S4 here in the UK it's going to have the Exynos chip in it), it tends to harm your conclusion a bit.
WaltC - Friday, April 05, 2013
Unfortunately, that's not what this article delivers. It doesn't tell you a thing about current desktop GPU performance versus current ARM performance. What it does is tell you how obsolete CPUs & GPUs from roughly TEN YEARS AGO look against state-of-the-art cell-phone and iPad ARM hardware running a few isolated 3DMark graphics tests. What a disappointment. Nobody's even using these desktop CPUs & GPUs anymore. All this article does is show you how poorly ARM-powered mobile devices do when stacked up against common PC technology from a decade ago! (That's assuming the 3DMark tests used here, such as they are, are actually representative of anything.) Ah, if only he had simply used state-of-the-art desktop CPUs & GPUs to compare with state-of-the-art ARM devices--well, the ARM stuff would have been crushed by such a wide margin it would astound most people. Why *would you* compare current ARM tech with decade-old desktop CPUs & GPUs? Beats me. Trying to make ARM look better than it has any right to look? Maybe in the future Anand will use a current desktop for his comparison, such as it is. Right now, the article provides no useful information--unless you like learning about really old x86 desktop technology that's been hobbled...;)
To be fair, in the end Anand does admit that current ARM horsepower is roughly on a par with ~10-year-old desktop technology IF you don't talk about bandwidth or add it into the equation--in which case the ARM parts don't even do well enough to stand up to 10-year-old commonplace CPU & GPU technology. So what was the point of this article? Again, beats me, as the comparisons aren't relevant because nobody is using that old desktop stuff anymore--they're running newer technology, from ~5 years old to brand new, and it runs rings around the old desktop NVIDIA GPUs Anand used for this article.
BTW, and I'm sure Anand is aware of this, you can take DX11.1 GPUs and run DX9-level software on them just fine (or OpenGL 3.x-level software, too). Comments like this are baffling: "While compute power has definitely kept up (as has memory capacity), memory bandwidth is no where near as good as it was on even low end to mainstream cards from that time period." What's "kept up" with what? It sure isn't ARM technology as deployed in mobile devices--unless you want to count reaching ~decade-old x86 "compute power" levels (sans real GPU bandwidth) as "keeping up." I sure wouldn't say that.
Neither Intel nor AMD will be sitting still on the x86 desktop, so I'd imagine the current (huge) performance advantage of x86 over ARM will continue to hold, if not grow even wider, as time moves on. I think the biggest flaw in this entire article is that it pretends you can make some kind of meaningful comparison between current x86 desktop performance and current ARM performance as deployed in the devices mentioned. You just can't do that--the disparity would be far too large--it would be embarrassing for ARM. There's no need for that, because in mobile ARM CPU/GPU technology, performance is *not* king by a long shot--power conservation for long battery life is king in ARM. x86 performance desktops, especially those set up for 3D gaming, are engineered for raw horsepower first and every other consideration, including power conservation, second. That's why Apple doesn't use ARM CPUs in Macs and why you cannot buy a desktop today powered by an ARM CPU--the compute power just isn't there, and no one wants to retreat 10-15 years in performance just to run an ARM CPU on the desktop. The forte for ARM is mobile-device use, and the forte for x86 performance CPUs is on the desktop (and no, I don't count Atom as a powerful CPU...;))