Choosing a Testbed CPU

Although I was glad I could put some of these old GPUs to use (somewhat justifying the years they spent occupying space in my parts closet), there was the question of what CPU to pair them with. Go too insane on the CPU and I might unfairly tilt performance in favor of these cards. What I decided to do was simulate the performance of the Core i5-3317U in Microsoft's Surface Pro. That part is a dual-core Ivy Bridge with Hyper-Threading enabled (4 threads). Its max turbo is 2.6GHz for a single core, 2.4GHz for two cores. I grabbed a desktop Core i3-2100, disabled turbo, and forced its clock speed to 2.4GHz. In many cases these mobile CPUs spend a lot of time at or near their max turbo until things get a little too toasty in the chassis. To verify that I had picked correctly, I ran the 3DMark Physics test to see how close I came to the performance of the Surface Pro. As the Physics test is multithreaded and should be completely CPU bound, it shouldn't matter which GPU I paired with my testbed - they should all perform the same as the Surface Pro:
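The sanity check here boils down to computing each testbed configuration's deviation from the Surface Pro's Physics score. A minimal sketch of that comparison (the scores below are made-up placeholders, not the actual 3DMark results):

```python
# Hypothetical Physics scores - placeholders for illustration only,
# not the measured results from the charts.
surface_pro = 100.0  # baseline score

testbed_scores = {
    "GeForce 8500 GT": 107.0,   # the ~7% overachiever mentioned below
    "GeForce 7900 GTX": 98.0,
    "GeForce 6600": 102.0,
}

for gpu, score in testbed_scores.items():
    # Percent deviation from the Surface Pro baseline
    deviation = (score - surface_pro) / surface_pro * 100
    flag = "OK" if abs(deviation) <= 3 else "check"
    print(f"{gpu}: {deviation:+.1f}% vs Surface Pro ({flag})")
```

Anything within a few percent of the baseline suggests the downclocked i3 is a reasonable stand-in for the mobile part.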

3DMark - Physics Test

3DMark - Physics

Great success! With the exception of the 8500 GT, which for some reason is a bit of an overachiever here (7% faster than Surface Pro), the rest of the NVIDIA cards all score within 3% of the performance of the Surface Pro - despite being run on an open-air desktop testbed.

With these results we also get a quick look at how AMD's Bobcat cores compare against the ARM competitors they may eventually do battle with. With only two Bobcat cores running at 1.6GHz in the E-350, AMD actually does really well here. The E-350's performance is 18% better than the dual-core Cortex A15-based Nexus 10, but it's still not quite good enough to top some of the quad-core competitors here. We could be seeing differences in drivers and/or thermal management with some of these devices, since they are far more thermally constrained than the E-350. Bobcat won't surface as a competitor to anything you see here, but its faster derivative (Jaguar) will. If AMD can get Temash's power under control, it could have a very compelling tablet platform on its hands. The sad part in all of this is that AMD seems to have the right CPU (and possibly GPU) architectures to be quite competitive in the ultra mobile space today. If AMD had the capital and relationships with smartphone/tablet vendors, it could be a force to be reckoned with. As we've seen from watching Intel struggle, however, it takes more than just good architecture to break into the new mobile world. You need a good baseband strategy and you need the ability to get key design wins.

Enough about what could be, let's look at how these mobile devices stack up to some of the best GPUs from 2004 - 2007.

We'll start with 3DMark. Here we're looking at performance at 720p, which immediately stops some of the cards with 256-bit memory interfaces from flexing their muscles. Never fear, we will have GL/DXBenchmark's 1080p offscreen mode for that in a moment.

Graphics Test 1

Ice Storm Graphics test 1 stresses the hardware’s ability to process lots of vertices while keeping the pixel load relatively light. Hardware on this level may have dedicated capacity for separate vertex and pixel processing. Stressing both capacities individually reveals the hardware’s limitations in both aspects.

In an average frame, 530,000 vertices are processed leading to 180,000 triangles rasterized either to the shadow map or to the screen. At the same time, 4.7 million pixels are processed per frame.

Pixel load is kept low by excluding expensive post processing steps, and by not rendering particle effects.
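The quoted per-frame figures let us put a rough number on the pixel workload at the 720p render resolution used here. A quick back-of-the-envelope calculation (using only the numbers from the test description above):

```python
# Rough overdraw estimate for Ice Storm Graphics test 1 at 720p,
# using Futuremark's quoted per-frame figures.
frame_pixels = 1280 * 720      # 921,600 pixels on a 720p render target
shaded_pixels = 4.7e6          # pixels processed per frame (from the text)

overdraw = shaded_pixels / frame_pixels
print(f"~{overdraw:.1f}x pixels shaded per screen pixel")  # ~5.1x
```

So even this "light" pixel test shades each screen pixel roughly five times over; the vertex load (530,000 per frame) is what's meant to dominate.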

3DMark - Graphics Test 1

Right off the bat you should notice something wonky: all of NVIDIA's G70 and earlier architectures do very poorly here. This test is heavy on the vertex shaders, and the 7900 GTX and friends should do a lot better than they do. These workloads, however, were designed for a very different set of architectures. Looking at the unified 8500 GT, we get some perspective. The fastest mobile platforms here (Adreno 320) deliver a little over half the vertex processing performance of the GeForce 8500 GT. The Radeon HD 6310 featured in AMD's E-350 is remarkably competitive as well.

The praise goes both ways, of course. The fact that these mobile GPUs can do as well as they do right now is very impressive.

Graphics Test 2

Graphics test 2 stresses the hardware’s ability to process lots of pixels. It tests the ability to read textures, do per pixel computations and write to render targets.

On average, 12.6 million pixels are processed per frame. The additional pixel processing compared to Graphics test 1 comes from including particles and post processing effects such as bloom, streaks and motion blur.

In each frame, an average 75,000 vertices are processed. This number is considerably lower than in Graphics test 1 because shadows are not drawn and the processed geometry has a lower number of polygons.
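Putting the two test descriptions side by side makes the shift in balance concrete. A small sketch comparing the quoted per-frame workloads:

```python
# Per-frame workload figures quoted in the two test descriptions above.
gt1 = {"vertices": 530_000, "pixels": 4.7e6}   # Graphics test 1
gt2 = {"vertices": 75_000,  "pixels": 12.6e6}  # Graphics test 2

pixel_ratio = gt2["pixels"] / gt1["pixels"]      # how much heavier GT2's pixel load is
vertex_ratio = gt1["vertices"] / gt2["vertices"] # how much heavier GT1's vertex load is

print(f"GT2 shades ~{pixel_ratio:.1f}x the pixels of GT1")
print(f"GT1 pushes ~{vertex_ratio:.1f}x the vertices of GT2")
```

Roughly 2.7x the pixel work and a seventh of the vertex work: test 2 is firmly a pixel shader benchmark, which is why the old non-unified cards look far more sensible here.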

3DMark - Graphics Test 2

The data starts making a lot more sense when we look at the pixel shader bound graphics test 2. In this benchmark, Adreno 320 appears to deliver better performance than the GeForce 6600 and once again roughly half the performance of the GeForce 8500 GT. Compared to the 7800 GT (or perhaps 6800 Ultra), we're looking at a bit under 33% of the performance of those cards. The Radeon HD 6310 in AMD's E-350 appears to deliver performance competitive with the Adreno 320.

3DMark - Graphics

The overall graphics score is a bit misleading given how poorly the G7x and NV4x architectures did on the first graphics test. We can conclude that the E-350 has roughly the same graphics performance as Qualcomm's Snapdragon 600, while the 8500 GT appears to have roughly 2x that. The overall Ice Storm scores pretty much repeat what we've already seen:

3DMark - Ice Storm
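Part of why one bad sub-test drags the totals down so hard is how 3DMark combines sub-scores: a weighted harmonic mean, which punishes a low outlier far more than an arithmetic average would. The sketch below illustrates the effect; the 0.75/0.25 weighting is an assumed value for illustration, not a figure taken from Futuremark's documentation.

```python
# Weighted harmonic mean of two sub-scores, as 3DMark-style overall scores
# are computed. The weights here are assumptions for illustration.
def overall_score(graphics: float, physics: float,
                  w_g: float = 0.75, w_p: float = 0.25) -> float:
    return (w_g + w_p) / (w_g / graphics + w_p / physics)

# A weak GPU score costs far more than an equally weak CPU score,
# and neither can be fully "averaged away" by the other.
print(overall_score(10_000, 10_000))  # balanced platform
print(overall_score(5_000, 10_000))   # weak GPU
print(overall_score(10_000, 5_000))   # weak CPU
```

This is why the G7x cards' collapse in graphics test 1 bleeds so visibly into their overall Ice Storm numbers.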

Again, the new 3DMark appears to unfairly penalize the older non-unified NVIDIA GPU architectures. Keep in mind that the last NVIDIA driver drop for DX9 hardware (G7x and NV4x) is about a month older than the latest driver available for the 8500 GT.

It's also worth pointing out that Ice Storm makes Intel's HD 4000 look very good, when in reality we've seen varying degrees of competitiveness with discrete GPUs depending on the workload. If 3DMark's Ice Storm test mapped directly to real world gaming performance, it would mean that devices like the Nexus 4 or HTC One could run BioShock 2-like titles at 10x7 in the 20 fps range. As impressive as that would be, this is ultimately the downside of relying on these types of benchmarks to make comparisons - they fundamentally tell us how well these platforms run the benchmark itself, not, unfortunately, other games.

At a high level, it looks like we're narrowing down the level of performance that today's high-end ultra mobile GPUs deliver when put in discrete GPU terms. Let's see what GL/DXBenchmark 2.7 tells us.

Comments

  • Spunjji - Friday, April 5, 2013 - link

    Agreed, this would save me a shedload of hassle when recommending upgrades for friends! I often get questions like "is my 8800GTX better than an HD7770" and making that comparison tends to involve rather shady comparisons (e.g. 8800GTX x% better than 6570, 7770 x% better than 6570, therefore..?!) because the older cards just don't get included in the newer comparisons.
  • powerarmour - Thursday, April 4, 2013 - link

    Interestingly, the Atom 330/ION would be fairly high up most of those lists too, here's my Ice Storm compare link :- http://www.3dmark.com/is/312945

    Just shows how badly Intel need competitive graphics hardware on Atom again.
  • Spunjji - Friday, April 5, 2013 - link

    Killing ION (and 3rd party chipset support in general) starts to look like a huge mistake on Intel's part when you look closely at the performance numbers of their Atom chips.
  • powerarmour - Saturday, April 13, 2013 - link

    Here's an Atom D2700/GMA 3650 run for comparison also :-

    http://www.3dmark.com/is/394030

    That CedarView SGX545 (also in CloverView) is pretty damn slow!
  • Hrel - Thursday, April 4, 2013 - link

    Good article. I'd like to see how more modern desktop GPU's compare though. Especially at the lower resolutions these "phones" run at. 8800GT, since that was THE part to get. GTX460, same thing. HD6850. GTX660M/560M. Probably just wishful thinking, but if you have these dinosaurs laying around perhaps you have those too. But also at 720p and 1080p. Since you can output the phones to a tv just like a desktop. I know you have the Razer, which is more modern. So that could replace the 5 and 6 series GPU's I mentioned, I guess.
  • geniekid - Thursday, April 4, 2013 - link

    I recognize the XFX 7900 GS as it was the graphics card I chose for the very first computer I built myself! Based on the picture and the specs, it appears you used the factory OCed version of the 7900 GS?
  • marc1000 - Thursday, April 4, 2013 - link

    oh boy, I'm getting old....... the GPU for the very first computer I built myself was an ancient Voodoo Banshee 16mb from Diamond Multimedia. It was an intermediate card between Voodoo 2 and 3, and I even installed linux on it and enabled the first 3D-like desktop experience I ever saw (it was on gnome). MS could only deliver a similar experience with windows 7...

    yes, i'm old!
  • jamyryals - Thursday, April 4, 2013 - link

    Awesome article Anand, this juxtaposition is just wild. The real mind blower would be the perf/watt comparison that has been mentioned. Obviously time moves and almost everything gets better, but to turn around and look in the rearview like that was a fun read Anand. I'm just wondering how big an impact the stacked DRAM will have on memory bandwidth in the future. Also, what's been the challenge to incorporate this up to now?
  • WagonWheelsRX8 - Thursday, April 4, 2013 - link

    Great article!!!
  • Peroxyde - Thursday, April 4, 2013 - link

    Very interesting article. Hope you will do a similar one comparing mobile CPUs vs desktop CPUs
