Mobile Sandy Bridge QuickSync and 3DMarks

Anand has provided plenty of coverage of transcoding quality in the desktop SNB review, using ArcSoft's Media Encoder 7. For the mobile side of things, we'll turn to CyberLink's MediaEspresso 6—a similar package that's useful for quick encodes of movies for YouTube or mobile device consumption. NVIDIA has been touting the benefits of GPU acceleration for such tasks for over a year now, with CUDA making a fairly decent showing. MediaEspresso also supports CUDA acceleration, making for a nice head-to-head, though I'm limited to hardware that I still have on hand.

For the encoding test, I've grabbed two other recently reviewed notebooks to show how they compare to Sandy Bridge. The first is ASUS' mainstream N53JF notebook, sporting an i5-460M and GT 425M GPU. For the higher-performance notebook offering, we've got ASUS' G73Jw with i7-740QM and GTX 460M. [Ed: Sorry for the delay in shipping it back, ASUS—it will go out this week now that we're done with Sandy Bridge testing!] I used a 720p video shot with an iPod Touch and transcoded it to a 2Mbps 720p YouTube-compatible stream. MediaEspresso also has some video quality enhancement features available, dubbed TrueTheater AutoLight, Denoise, and HD. I ran the transcode tests with and without the enhancements enabled, and with and without QuickSync/GPU acceleration. Since MediaEspresso also supports ATI GPUs, I tossed in results from my i7-920 with CrossFire HD 5850 as well.
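
The four result charts that follow boil down to a simple 2x2 test matrix: hardware acceleration on or off, crossed with the TrueTheater enhancements on or off. Purely as an illustration, here's a minimal Python sketch that enumerates those combinations:

```python
# The MediaEspresso results form a 2x2 test matrix: hardware acceleration
# (QuickSync, CUDA, or the ATI equivalent) on or off, crossed with the
# TrueTheater enhancements (AutoLight, Denoise, HD) on or off.
from itertools import product

acceleration = ("Accelerated", "CPU-Based")
enhancements = ("", "Enhanced ")

for accel, enhanced in product(acceleration, enhancements):
    print(f"{accel} MediaEspresso {enhanced}Encoding")
```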

Accelerated MediaEspresso Encoding

CPU-Based MediaEspresso Encoding

Accelerated MediaEspresso Enhanced Encoding

CPU-Based MediaEspresso Enhanced Encoding

First things first, I’d say it’s fair to state that the GPU acceleration for AMD GPUs (at least in this particular instance) isn’t as good as NVIDIA’s CUDA or Intel’s QuickSync. Perhaps future driver, hardware, and/or software updates will change the picture, but the HD 5850 cards in my desktop fail to impress. The CUDA results for GTX 460M are quite good, while the GT 425M was roughly on par with CPU encoding on a quad-core (plus Hyper-Threading) processor. Finally, Intel’s Sandy Bridge manages to easily eclipse any of the other systems—with or without QuickSync.

Using pure CPU encoding, the 2820QM finishes the transcode in 15% less time than a desktop i7-920, and 44% less time than the i7-740QM. Enabling all of the extra TrueTheater enhancements definitely has an impact on performance (and depending on the video source it may or may not be worthwhile). Sandy Bridge still requires 8% less time than the i7-920 and 36% less time than the i7-740QM, never mind the i5-460M, which needs 134% more time to accomplish the same task.

Switch on all of the GPU acceleration support (including QuickSync, which isn't technically a GPU feature) and all of the times drop, some substantially. The basic transcode on SNB finishes in a blisteringly fast 10 seconds—this is a 1:33 clip of 30FPS content, so the transcode happens at roughly 280FPS (wow!). The GTX 460M comes in next at 17 seconds (around 164FPS), then the CrossFire 5850 ends up needing three times longer than SNB and almost twice as long as the mobile GTX 460M, and the GT 425M brings up the rear at twice the time of the HD 5850. With the TrueTheater features enabled, the CPU appears to do a lot more work, and the GTX 460M and Sandy Bridge are both over an order of magnitude slower.
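
For anyone who wants to double-check the throughput math, here's a minimal sketch using the clip length and encode times quoted above; treat it as a sanity check rather than a benchmark, since the chart values are rounded.

```python
# Sanity check of the transcode throughput figures quoted above, assuming the
# stated source clip: 1:33 (93 seconds) of 30FPS content, i.e. roughly 2790 frames.
CLIP_SECONDS = 93
CLIP_FPS = 30
TOTAL_FRAMES = CLIP_SECONDS * CLIP_FPS  # 2790 frames

def effective_fps(encode_seconds):
    """Effective transcode throughput for a given encode time."""
    return TOTAL_FRAMES / encode_seconds

def percent_less_time(t_new, t_ref):
    """How much less time t_new needs relative to t_ref, as a percentage."""
    return (t_ref - t_new) / t_ref * 100

snb_quicksync = 10  # seconds, Sandy Bridge with QuickSync
gtx460m_cuda = 17   # seconds, GTX 460M with CUDA

print(f"SNB QuickSync: {effective_fps(snb_quicksync):.0f} FPS")   # ~279 FPS
print(f"GTX 460M CUDA: {effective_fps(gtx460m_cuda):.0f} FPS")    # ~164 FPS
print(f"QuickSync needs {percent_less_time(snb_quicksync, gtx460m_cuda):.0f}% less time")
```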

This is obviously a huge win for Intel, but of course it all depends on how often you happen to transcode videos—and how patient you happen to be. I do it seldom enough that even running encodes on my old quad-core Kentsfield CPU doesn't particularly bother me; I just set up the transcodes in TMPGEnc Express and walk away, and they're usually done when I return. If, on the other hand, you're the type that lives on social networks and Twitter feeds, being able to get your video up on YouTube five to ten times faster (without a significant loss in quality, at least based on my iPod Touch experience) is definitely useful.

Futuremark 3DMark Vantage

Futuremark 3DMark06

Futuremark 3DMark05

One final item to cover quickly is synthetic graphics performance, courtesy of 3DMark. Sandy Bridge places in the middle of the mobile pack, and desktop solutions obviously remain far out of reach for the time being, but according to 3DMark we could actually see performance surpass some of the entry-level discrete GPUs. Maybe 3DMark just has heavy optimizations from Intel…then again, maybe Intel really does have a GPU that can compete.

Comments

  • skywalker9952 - Monday, January 3, 2011 - link

    For your CPU-specific benchmarks you annotate the CPU and GPU. I believe the HDD or SSD plays a much larger role in those benchmarks than the GPU. Would it not be more appropriate to annotate the storage device used? Were all of the CPUs in the comparison paired with SSDs? If they weren't, how much would that affect the benchmarks?
  • JarredWalton - Monday, January 3, 2011 - link

    The SSD is a huge benefit to PCMark, and since this is laptop testing I can't just use the same image on each system. Anand covers the desktop side of things, but I include PCMark mostly for the curious. I could try and put which SSD/HDD each notebook used, but then the text gets to be too long and the graph looks silly. Heh.

    For the record, the SNB notebook has a 160GB Intel G2 SSD. The desktop uses a 120GB Vertex 2 (SF-1200). W870CU is an 80GB Intel G1 SSD. The remaining laptops all use HDDs, mostly Seagate Momentus 7200.4 I think.
  • Macpod - Tuesday, January 4, 2011 - link

    The synthetic benchmarks are all run at Turbo frequencies. The score from the 2.3GHz 2820QM is almost the same as the 3.4GHz i7-2600K. This is because the 2820QM is running at 3.1GHz under Cinebench.

    No one knows how long this Turbo frequency lasts. Maybe just enough to finish Cinebench!

    This review should be redone.
  • Althernai - Tuesday, January 4, 2011 - link

    It probably lasts forever given decent cooling, so the review is accurate, but there is something funny going on here: the score for the 2820QM is 20393 while the score in the 2600K review is 22875. This would be consistent with a difference between CPUs running at 3.4GHz and 3.1GHz, but why doesn't the 2600K Turbo up to 3.8GHz? The claim is that it can be effortlessly overclocked to 4.4GHz, so we know the thermal headroom is there.
  • JarredWalton - Tuesday, January 4, 2011 - link

    If you do continual heavy-duty CPU stuff on the 2820QM, the overall score drops about 10% on later runs in Cinebench and x264 encoding. I mentioned this in the text: the CPU starts at 3.1GHz for about 10 seconds, then drops to 3.0GHz for another 20s or so, then 2.9 for a bit and eventually settles in at 2.7GHz after 55 seconds (give or take). If you're in a hotter testing environment, things would get worse; conversely, if you have a notebook with better cooling, it should run closer to the maximum Turbo speeds more often.

    Macpod, disabling Turbo is the last thing I would do for this sort of chip. What would be the point, other than to show that if you limit clock speeds, performance will go down (along with power use)? But you're right, the whole review should be redone because I didn't mention enough that heavy loads will eventually drop performance about 10%. (Or did you miss page 10: "Performance and Power Investigated"?)
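
To put that Turbo step-down into rough numbers, here's a minimal sketch of the average effective clock over a sustained run, based on the schedule JarredWalton describes above. The 25-second duration of the 2.9GHz step is an assumption, inferred from the roughly 55-second total before the chip settles at 2.7GHz.

```python
# Rough model of the 2820QM Turbo step-down under sustained load, per the
# schedule described above: 3.1GHz for ~10s, 3.0GHz for ~20s, 2.9GHz for a
# while (assumed ~25s to fill out the ~55s ramp), then 2.7GHz thereafter.
TURBO_STEPS = [  # (clock in GHz, seconds held at that clock)
    (3.1, 10),
    (3.0, 20),
    (2.9, 25),  # assumed duration; not stated explicitly in the comment
]
SETTLED_CLOCK = 2.7  # GHz once the ramp-down is finished

def average_clock(run_seconds):
    """Average effective clock (GHz) over a run of the given length."""
    elapsed = 0.0
    ghz_seconds = 0.0
    for clock, duration in TURBO_STEPS:
        step = min(duration, max(run_seconds - elapsed, 0.0))
        ghz_seconds += clock * step
        elapsed += step
    ghz_seconds += SETTLED_CLOCK * max(run_seconds - elapsed, 0.0)
    return ghz_seconds / run_seconds

# A short Cinebench-style run stays near peak Turbo; a long encode does not,
# which is consistent with the ~10% drop on later runs mentioned above.
print(f"60 second run: {average_clock(60):.2f} GHz average")    # 2.95 GHz
print(f"10 minute run: {average_clock(600):.2f} GHz average")   # just over 2.7 GHz
```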
  • lucinski - Tuesday, January 4, 2011 - link

    Just like any other low-end GPU (integrated or otherwise), I believe most users would rely on the HD3000 just for undemanding games, in which category I would mention Civilization IV and V or FIFA / PES 11. That is to say, I would very much like to see how the new Intel graphics fares in these games, should they be available in the test lab of course.

    I am not necessarily worried about the raw performance; clearly the HD3000 has the capacity to deliver. Instead, driver maturity may prove to be an obstacle. Firstly, one has to consider that Intel traditionally has problems with GPU driver design (relative to its competitors). Secondly, even if Intel manages at some point to repair (some of) the rendering issues mentioned in this article or elsewhere, notebook producers still take their sweet time before supplying users with new driver versions.

    In this context I am genuinely concerned about the HD3000 goodness. The old GMA HD + Radeon 5470 combination still seems tempting. Strictly on the gaming front, I honestly prefer reliability and a few missing FPS over the aforementioned risks.
  • NestoJR - Tuesday, January 4, 2011 - link

    So, when Apple starts putting these in Macbooks, I'd assume the battery life will easily eclipse 10 hours under light usage, maybe 6 hours under medium usage ??? I'm no fanboy but I'll be in line for that ! My Dell XPS M1530's 9-cell battery just died, I can wait a few months =]
  • JarredWalton - Tuesday, January 4, 2011 - link

    I'm definitely interested in seeing what Apple can do with Sandy Bridge! Of course, they might not use the quad-core chips in anything smaller than the MBP 17, if history holds true. And maybe the MBP 13 will finally make the jump to Arrandale? ;-)
  • heffeque - Wednesday, January 5, 2011 - link

    Yeah... Saying that the nVidia 320M is consistently slower than the HD3000 when comparing a CPU from 2008 and a CPU from 2011...

    Great job comparing GPUs! (sic)

    A more intelligent thing to say would have been: a 2008 CPU (P8600) with an nVidia 320M is consistently slightly slower than a 2011 CPU (i7-2820QM) with HD3000, don't you think?

    That would make more sense.
  • Wolfpup - Wednesday, January 5, 2011 - link

    That's the only thing I care about with these, and as far as I'm aware, the jump isn't anything special. It's FAR from the "tock" it supposedly is, going by earlier AnandTech data. (In fact the "tick/tock" thing seems to have broken down after just one set of products...)

    This sounds like it is a big advantage for me...but only because Intel refused to produce quad-core CPUs at 32nm, so these by default run quite a bit faster than the last-gen chips.

    Otherwise it sounds like they're wasting 114 million transistors that I want spent on the CPU, whether it's more cache, more functional units, another core (if that's possible in 114 million transistors), etc.

    I absolutely do NOT want Intel's garbage, incompatible graphics. I do NOT want the added complexity, performance hit, and software headaches of Optimus or the like. I want a real GPU, functioning as a real GPU, with Intel's garbage completely shut off at all times.

    I hope we'll see that in mid-range and high-end notebooks, or I'm going to be very disappointed.
