Mobile Sandy Bridge Application Performance

We’ll start off with a bang and show application performance, along with media encoding performance comparing Intel’s QuickSync technology against CUDA video encoding. We’ve got a variety of notebooks in our charts for comparison, including dual-core and quad-core AMD (not their fastest mobile chips, unfortunately), plenty of Arrandale options, and a few Clarksfield notebooks as well. Just for good measure, we’ve included results from a desktop Core i7-920 as a point of comparison. Normally that would be raining on Intel’s parade, showing how previous-generation desktop hardware is still quite a bit faster; this time, however… well, I’ll let the charts tell the story.

[Charts: Futuremark PCMark Vantage; Futuremark PCMark05; CINEBENCH R10 3D rendering (single- and multi-threaded); x264 video encoding (passes 1 and 2)]

So, is anyone as impressed as I am? Sure, hex-core Gulftown is still the fastest game in town, and the desktop Sandy Bridge chips are obviously going to beat the mobile chips, but check out the scores relative to the i7-920 in my own “for play” system. (Incidentally, my “for work” system is actually still running a QX6700, and it does even worse! But all I do there is type documents and surf the web.) Let’s talk percentages here just to put it all into perspective.

Since PCMark Vantage and 05 are susceptible to heavy SSD influence, we’ll just skip those scores; suffice it to say that Sandy Bridge is no slouch there. Turn to CPU-intensive benchmarks, however, and we can really see the changes. Starting with single-threaded Cinebench, the new i7-2820QM checks in 35% faster than the outgoing i7-920XM, 32% faster than a desktop i7-920, 43% faster than the i7-740QM, and 19% faster than the previous generation’s fastest dual-core part.

Use applications that are thread-friendly and the gap widens even more. In Cinebench SMP, the closest competitor in our charts is the desktop i7-920, and the 2820QM maintains a healthy 23% lead—in fact, looking at our desktop reviews, the stock (but with Turbo) 2820QM is roughly equal to an i7-930 overclocked to 3.5GHz. It also leads the i7-920XM by 84%, and an i7-640M by 104% (!). x264 encoding tells a similar story: the second pass is 9% faster than the desktop i7-920, 65% faster than the i7-920XM, and twice as fast as the i7-640M—and that's without using the new QuickSync technology! [Whoa, nice segue Batman!]
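
If you want to sanity check relative performance figures like these, it's just a ratio of two benchmark scores. Here's a minimal Python sketch; the two Cinebench R10 SMP scores in the example are the mobile i7-2820QM and desktop i7-2600K results that come up in the comments below, and the helper function itself is purely illustrative.

    # "A is X% faster than B" is just a score ratio, expressed as a percentage.
    def percent_faster(score_a: float, score_b: float) -> float:
        return (score_a / score_b - 1.0) * 100.0

    # Cinebench R10 SMP scores (see the comments below): desktop i7-2600K
    # vs. mobile i7-2820QM.
    print(f"{percent_faster(22875, 20393):.1f}% faster")  # prints "12.2% faster"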


66 Comments


  • skywalker9952 - Monday, January 03, 2011

    For your CPU-specific benchmarks you annotate the CPU and GPU. I believe the HDD or SSD plays a much larger role in those benchmarks than the GPU. Would it not be more appropriate to annotate the storage device used? Were all of the CPUs in the comparison paired with SSDs? If they weren't, how much would that affect the benchmarks?
  • JarredWalton - Monday, January 03, 2011

    The SSD is a huge benefit to PCMark, and since this is laptop testing I can't just use the same image on each system. Anand covers the desktop side of things, but I include PCMark mostly for the curious. I could try and put which SSD/HDD each notebook used, but then the text gets to be too long and the graph looks silly. Heh.

    For the record, the SNB notebook has a 160GB Intel G2 SSD. The desktop uses a 120GB Vertex 2 (SF-1200). W870CU is an 80GB Intel G1 SSD. The remaining laptops all use HDDs, mostly Seagate Momentus 7200.4 I think.
  • Macpod - Tuesday, January 04, 2011

    The synthetic benchmarks are all run at turbo frequencies. The scores from the 2.3GHz 2820QM are almost the same as those of the 3.4GHz i7-2600K; this is because the 2820QM is running at 3.1GHz under Cinebench.

    No one knows how long this turbo frequency lasts. Maybe just enough to finish Cinebench!

    This review should be redone.
  • Althernai - Tuesday, January 04, 2011

    It probably lasts forever given decent cooling, so the review is accurate, but there is something funny going on here: the score for the 2820QM is 20393, while the score in the 2600K review is 22875 (a ratio of about 1.12, roughly in line with the 3.4/3.1 ≈ 1.10 clock ratio). This would be consistent with CPUs running at 3.4GHz and 3.1GHz, but why doesn't the 2600K Turbo up to 3.8GHz? The claim is that it can be effortlessly overclocked to 4.4GHz, so we know the thermal headroom is there.
  • JarredWalton - Tuesday, January 04, 2011

    If you do continual heavy-duty CPU stuff on the 2820QM, the overall score drops about 10% on later runs in Cinebench and x264 encoding. I mentioned this in the text: the CPU starts at 3.1GHz for about 10 seconds, then drops to 3.0GHz for another 20s or so, then 2.9 for a bit and eventually settles in at 2.7GHz after 55 seconds (give or take). If you're in a hotter testing environment, things would get worse; conversely, if you have a notebook with better cooling, it should run closer to the maximum Turbo speeds more often.

    Macpod, disabling Turbo is the last thing I would do for this sort of chip. What would be the point, other than to show that if you limit clock speeds, performance will go down (along with power use)? But you're right, the whole review should be redone because I didn't mention enough that heavy loads will eventually drop performance about 10%. (Or did you miss page 10: "Performance and Power Investigated"?)
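
    If anyone wants to watch the clock decay on their own notebook, something like this quick Python sketch will log the reported frequency while you run Cinebench or an x264 encode in another window. It assumes the third-party psutil package is installed and is purely illustrative; it's not how we instrumented the review.

        import time
        import psutil  # third-party: pip install psutil

        # Print the reported CPU clock once per second for a minute while a
        # heavy benchmark runs; watch for the step-down from max Turbo.
        for elapsed in range(60):
            freq = psutil.cpu_freq()  # returns None on some platforms
            if freq is not None:
                print(f"{elapsed:3d}s  {freq.current:6.0f} MHz")
            time.sleep(1)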
  • lucinski - Tuesday, January 04, 2011

    Just like any other low-end GPU (integrated or otherwise), I believe most users would rely on the HD3000 just for undemanding games, a category that includes Civilization IV and V or FIFA / PES 11. Which is to say, I would very much like to see how the new Intel graphics fares in these games, should they be available in the test lab, of course.

    I am not necessarily worried about raw performance; clearly the HD3000 has the capacity to deliver. Instead, driver maturity may prove to be an obstacle. Firstly, Intel traditionally has problems with GPU driver design (relative to their competitors). Secondly, even if Intel at some point repairs (some of) the rendering issues mentioned in this article or elsewhere, notebook producers still take their sweet time before supplying users with new driver versions.

    In this context I am genuinely wary of all the HD3000 goodness. The old GMA HD + Radeon 5470 combination still seems tempting. Strictly on the gaming side, I honestly prefer reliability with a few FPS missing over the aforementioned risks.
  • NestoJR - Tuesday, January 04, 2011

    So, when Apple starts putting these in MacBooks, I'd assume the battery life will easily eclipse 10 hours under light usage, maybe 6 hours under medium usage? I'm no fanboy, but I'll be in line for that! My Dell XPS M1530's 9-cell battery just died; I can wait a few months =]
  • JarredWalton - Tuesday, January 04, 2011

    I'm definitely interested in seeing what Apple can do with Sandy Bridge! Of course, they might not use the quad-core chips in anything smaller than the MBP 17, if history holds true. And maybe the MBP 13 will finally make the jump to Arrandale? ;-)
  • heffeque - Wednesday, January 05, 2011

    Yeah... Saying that the nVidia 320M is consistently slower than the HD3000 when comparing a CPU from 2008 and a CPU from 2011...

    Great job comparing GPUs! (sic)

    A more intelligent thing to say would have been: a 2008 CPU (P8600) with an nVidia 320M is consistently slightly slower than a 2011 CPU (i7-2820QM) with HD3000, don't you think?

    That would make more sense.
  • Wolfpup - Wednesday, January 05, 2011

    That's the only thing I care about with these, and as far as I'm aware, the jump isn't anything special. It's FAR from the "tock" it supposedly is, going by earlier AnandTech data. (In fact the "tick/tock" thing seems to have broken down after just one set of products...)

    This sounds like it is a big advantage for me... but only because Intel refused to produce quad-core CPUs at 32nm, so these by default run quite a bit faster than the last-generation chips.

    Otherwise it sounds like they're wasting 114 million transistors that I want spent on the CPU, whether that's more cache, more functional units, another core (if that's possible in 114 million transistors), etc.

    I absolutely do NOT want Intel's garbage, incompatible graphics. I do NOT want the additional complexity, performance hit, and software complexity of Optimus or the like. I want a real GPU, functioning as a real GPU, with Intel's garbage completely shut off at all times.

    I hope we'll see that in mid-range and high-end notebooks, or I'm going to be very disappointed.
