Choosing a Testbed CPU

Although I was glad I could put some of these old GPUs to use (somewhat justifying the years they spent occupying space in my parts closet), there was the question of what CPU to pair them with. Go too insane on the CPU and I might unfairly tilt performance in favor of these cards. What I decided to do was simulate the performance of the Core i5-3317U in Microsoft's Surface Pro. That part is a dual-core Ivy Bridge with Hyper-Threading enabled (4 threads). Its max turbo is 2.6GHz with one core active and 2.4GHz with two. I grabbed a desktop Core i3-2100, disabled turbo, and forced its clock speed to 2.4GHz; in many cases these mobile CPUs spend a lot of time at or near their max turbo until things get a little too toasty in the chassis. To verify that I had picked correctly, I ran the 3DMark Physics test to see how close I came to the performance of the Surface Pro. As the Physics test is multithreaded and should be almost completely CPU bound, it shouldn't matter which GPU I paired with my testbed - they should all perform the same as the Surface Pro:
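The sanity check described above amounts to a simple tolerance comparison against the reference score. Here's a minimal sketch of that check in Python; the scores and the 3% threshold mirror the logic above, but the numbers themselves are hypothetical placeholders, not the article's actual data:

```python
# Sketch of the testbed sanity check: each testbed Physics score should
# land within a few percent of the Surface Pro's score. All scores here
# are hypothetical placeholders, not measured results.

def within_tolerance(score: float, reference: float, tol: float = 0.03) -> bool:
    """True if score is within tol (as a fraction) of the reference score."""
    return abs(score - reference) / reference <= tol

surface_pro = 10000.0  # hypothetical reference Physics score
testbed_scores = {"7900 GTX": 10150.0, "8500 GT": 10700.0}  # hypothetical

# Flag any card whose score deviates by more than 3% from the reference.
outliers = {name for name, s in testbed_scores.items()
            if not within_tolerance(s, surface_pro)}
```

With these placeholder numbers, the 8500 GT's 7% delta would be flagged as an outlier while the rest pass, matching the pattern in the results below.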

3DMark - Physics Test

3DMark - Physics

Great success! With the exception of the 8500 GT, which for some reason is a bit of an overachiever here (7% faster than Surface Pro), the rest of the NVIDIA cards all score within 3% of the performance of the Surface Pro - despite being run on an open-air desktop testbed.

With these results we also get a quick look at how AMD's Bobcat cores compare against the ARM competitors it may eventually do battle with. With only two Bobcat cores running at 1.6GHz in the E-350, AMD actually does really well here. The E-350's performance is 18% better than the dual-core Cortex A15 based Nexus 10, but it's still not quite good enough to top some of the quad-core competitors here. We could be seeing differences in drivers and/or thermal management with some of these devices since they are far more thermally constrained than the E-350. Bobcat won't surface as a competitor to anything you see here, but its faster derivative (Jaguar) will. If AMD can get Temash's power under control, it could have a very compelling tablet platform on its hands. The sad part in all of this is the fact that AMD seems to have the right CPU (and possibly GPU) architectures to be quite competitive in the ultra mobile space today. If AMD had the capital and relationships with smartphone/tablet vendors, it could be a force to be reckoned with in the ultra mobile space. As we've seen from watching Intel struggle however, it takes more than just good architecture to break into the new mobile world. You need a good baseband strategy and you need the ability to get key design wins.

Enough about what could be, let's look at how these mobile devices stack up to some of the best GPUs from 2004 - 2007.

We'll start with 3DMark. Here we're looking at performance at 720p, which immediately stops some of the cards with 256-bit memory interfaces from flexing their muscles. Never fear, we will have GL/DXBenchmark's 1080p offscreen mode for that in a moment.

Graphics Test 1

Ice Storm Graphics test 1 stresses the hardware’s ability to process lots of vertices while keeping the pixel load relatively light. Hardware on this level may have dedicated capacity for separate vertex and pixel processing. Stressing both capacities individually reveals the hardware’s limitations in both aspects.

In an average frame, 530,000 vertices are processed leading to 180,000 triangles rasterized either to the shadow map or to the screen. At the same time, 4.7 million pixels are processed per frame.

Pixel load is kept low by excluding expensive post processing steps, and by not rendering particle effects.
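As a quick back-of-envelope check on the workload figures quoted above, the numbers work out to roughly three vertices per rasterized triangle and about five shaded pixels for every screen pixel at the 720p render resolution used in this test:

```python
# Back-of-envelope arithmetic on the Ice Storm Graphics test 1 workload
# figures quoted above, at the 720p resolution used in this test.
VERTICES = 530_000        # vertices processed per average frame
TRIANGLES = 180_000       # triangles rasterized (shadow map + screen)
PIXELS = 4_700_000        # pixels processed per frame
SCREEN_720P = 1280 * 720  # 921,600 screen pixels

vertices_per_triangle = VERTICES / TRIANGLES     # ~2.9
pixels_per_screen_pixel = PIXELS / SCREEN_720P   # ~5.1, i.e. modest overdraw
```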

3DMark - Graphics Test 1

Right off the bat you should notice something wonky. All of NVIDIA's G70 and earlier architectures do very poorly here. This test is very heavy on the vertex shaders, and the 7900 GTX and friends should do a lot better than they do; these workloads, however, were designed for a very different set of architectures. Looking at the unified 8500 GT, we get some perspective. The fastest mobile platform here (Adreno 320) delivers a little over half the vertex processing performance of the GeForce 8500 GT. The Radeon HD 6310 featured in AMD's E-350 is remarkably competitive as well.

The praise goes both ways, of course. That these mobile GPUs can already come as close as they do is very impressive.

Graphics Test 2

Graphics test 2 stresses the hardware’s ability to process lots of pixels. It tests the ability to read textures, do per pixel computations and write to render targets.

On average, 12.6 million pixels are processed per frame. The additional pixel processing compared to Graphics test 1 comes from including particles and post processing effects such as bloom, streaks and motion blur.

In each frame, an average 75,000 vertices are processed. This number is considerably lower than in Graphics test 1 because shadows are not drawn and the processed geometry has a lower number of polygons.
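Running the same back-of-envelope arithmetic as for Graphics test 1 makes it clear why this test is pixel bound: pixel work roughly triples while vertex work drops by about a factor of seven:

```python
# Comparing the quoted workload figures for Ice Storm Graphics tests 1 and 2.
PIXELS_GT1 = 4_700_000
VERTICES_GT1 = 530_000
PIXELS_GT2 = 12_600_000
VERTICES_GT2 = 75_000
SCREEN_720P = 1280 * 720  # 921,600 screen pixels

pixel_ratio = PIXELS_GT2 / PIXELS_GT1       # ~2.7x more pixel work than GT1
vertex_ratio = VERTICES_GT1 / VERTICES_GT2  # ~7.1x less vertex work than GT1
overdraw = PIXELS_GT2 / SCREEN_720P         # ~13.7 shaded pixels per screen pixel
```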

3DMark - Graphics Test 2

The data starts making a lot more sense when we look at the pixel shader bound graphics test 2. In this benchmark, Adreno 320 appears to deliver better performance than the GeForce 6600 and once again roughly half the performance of the GeForce 8500 GT. Compared to the 7800 GT (or perhaps 6800 Ultra), we're looking at a bit under 33% of the performance of those cards. The Radeon HD 6310 in AMD's E-350 appears to deliver performance competitive with the Adreno 320.

3DMark - Graphics

The overall graphics score is a bit misleading given how poorly the G7x and NV4x architectures did on the first graphics test. We can conclude that the E-350 has roughly the same graphics performance as Qualcomm's Snapdragon 600, while the 8500 GT appears to have roughly 2x that. The overall Ice Storm scores pretty much repeat what we've already seen:

3DMark - Ice Storm

Again, the new 3DMark appears to unfairly penalize the older non-unified NVIDIA GPU architectures. Keep in mind that the last NVIDIA driver drop for DX9 hardware (G7x and NV4x) is about a month older than the latest driver available for the 8500 GT.

It's also worth pointing out that Ice Storm makes Intel's HD 4000 look very good, when in reality we've seen varying degrees of competitiveness with discrete GPUs depending on the workload. If 3DMark's Ice Storm test mapped directly to real world gaming performance, it would mean that devices like the Nexus 4 or HTC One could run BioShock 2-like titles at 10x7 in the 20 fps range. As impressive as that would be, this is ultimately the downside of relying on these types of benchmarks for comparisons: they fundamentally tell us how well these platforms run the benchmark itself, not other games, unfortunately.

At a high level, it looks like we're somewhat narrowing down the level of performance that today's high end ultra mobile GPUs deliver when put in discrete GPU terms. Let's see what GL/DXBenchmark 2.7 tells us.

Comments

  • Beenthere - Thursday, April 4, 2013 - link

    Anand -

    Any chance of spending some time fixing the website issues instead of tweeting? I would think that with hundreds of negative comments regarding the redesigned site, and the fact that many of us can't even read it with parts cut off, odd fonts, conflicting color schemes and layout design, etc., this would be far more important than posting tweets. I'm sure your ad revenue is going to take a big hit from the significant drop in page hits, since the redesign has left the website basically unusable for many people.
  • mrzed - Friday, April 5, 2013 - link

    Anand,

    Such a great and timely article for me. The PC I am posting this from runs a 7900GS, which shows you how much gaming I've done in the last 6 years (I now have a 5 year old and a toddler). Came down to the rumpus room with my phone in my pocket, saw the article title and was immediately curious just how a new phone might compare in gaming prowess with the PC I still mostly surf on.

    You call yourself out for not ever dreaming that your closet full of cards may be used to benchmark phones, but I recall many years ago wondering why you were spending so much editorial direction on phones. You saw the writing on the wall before many in the industry, and as a result, your hardware site is remaining relevant as the PC industry enters its long sunset. Huzzah.
  • Infy2 - Friday, April 5, 2013 - link

    My old office PC with dual core Core2 Duo 8400 3GHz with GF8600GT scored 3DMark Score 28333 3DMarks, Gfx Score 30322, Phy Score 23044, Gfx 1 132.3 FPS, Gfx 2 131.4 FPS, Phy Test 73.2 FPS.
  • yannigr - Friday, April 5, 2013 - link

    I loved this article. It is really, really REALLY interesting.
  • somata - Friday, April 5, 2013 - link

    If anyone was curious how a contemporaneous ATI card performs in this context, here are some DXBenchmark results from my Radeon X1900 XT (early 2006):

    http://dxbenchmark.com/phonedetails.jsp?benchmark=...

    With 38 fps in T-Rex HD (offscreen), it looks like the R580 architecture did wind up being more future-proof than G70 as games became more shader-heavy, not that it matters anymore ;-).

    Unfortunately, like Anand, I was unable to run 3DMark on this card most likely due to outdated drivers.
  • tipoo - Sunday, April 7, 2013 - link

    Yeah, I remember when those came out everyone was saying the x1k series would be more future proof than the 7k series due to the shader/vertex balance, and I guess that turned out to be true. Too bad it happened so late that their performance isn't relevant in modern gaming though, heh. But that's also part of why the 360 does better on cross platform ports.
  • GregGritton - Saturday, April 6, 2013 - link

    The comparisons to 6-8 year old desktop CPUs and GPUs are actually quite relevant. They give an idea of the graphical intensity of games that you might be able to play on modern cell phones and tablets. For example, World of Warcraft was released in 2004 and ran on graphics cards of the day. Thus, modern cell phone and tablet processors should be able to run a game of similar graphical quality.
  • tipoo - Sunday, April 7, 2013 - link

    But possibly not size. Storage and standardized controllers are the bottleneck now imo.
  • jasonelmore - Sunday, April 7, 2013 - link

    Instead of doing embedded/stacked DRAM to increase memory performance, couldn't they just embed the memory controller into the CPU/GPU like Intel did with the i7? That increased memory bandwidth significantly.
  • marc1000 - Monday, April 8, 2013 - link

    I believe it already is integrated for all SOCs listed...
