Sandy Bridge Graphics: Extended Compatibility and Performance Results

It’s been quite a while since we last looked at gaming compatibility and performance across a large group of titles, so the Sandy Bridge launch seemed like the right time. We selected fourteen additional games from the past several years; the intention is to see whether SNB can run them properly, as well as what sort of performance it can provide.

For comparison, we selected four other notebooks that we had on hand, which we’ve already highlighted on the previous page. Dell’s Latitude E6410 represents the old-guard Intel HD Graphics, and the Toshiba A660D (forced onto the integrated HD 4250 GPU) is AMD’s soon-to-be-replaced IGP. Both are far slower than SNB, as we’ve already established. On the higher-performance side, we’ve again got the Acer 5551G with a Turion II P520 (2.3GHz dual-core) processor and HD 5650 GPU, and for NVIDIA we have the ASUS N53JF with an i5-460M and GT 425M. We tested Low and Medium detail performance, again skipping the Dell and Toshiba systems for Medium.

[Benchmark charts: Assassin's Creed; Batman: Arkham Asylum; Borderlands; Chronicles of Riddick: Dark Athena; Crysis: Warhead; Elder Scrolls IV: Oblivion; Empire: Total War; Fallout 3; Fallout: New Vegas; Far Cry 2; FEAR 2: Project Origin; H.A.W.X. 2; Mafia II; Metro 2033; Low Gaming Average - 20 Titles]

Adding the 14 new titles to the mix exposes a few more areas where Intel’s HD Graphics 3000 needs some fine-tuning, but again all titles managed to at least run (with a bit of elbow grease). The problem areas run the gamut from blacklisted titles to minor rendering flaws (sometimes major flaws on older Intel graphics), with one title running so poorly that it may as well have failed the test.

Going into the details, first up is the now-infamous Fallout 3, which required a hacked D3D9.dll file to even run (just drop the file into the game’s directory; thanks go to the OldBlivion creators). The hacked DLL identifies Intel graphics as a GeForce 7900 GS; without it, the game crashes to the desktop with an error message as soon as you try to enter the actual game world. (Note that the newer Fallout: New Vegas has no such problems, so it appears Bethesda was kind enough to stop blacklisting Intel’s IGPs.) There are almost certainly other titles where the Intel IGP is blacklisted, and more than a few games warned of an unknown GPU and potential rendering problems (HAWX 2, Mass Effect 2, and Metro 2033, for instance), but only Fallout 3 required a hack to actually run.
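For the curious, the mechanism behind that workaround is straightforward: the game asks Direct3D 9 for the adapter identifier and refuses to start if it recognizes an Intel IGP, so a stand-in d3d9.dll simply lies about the answer. The sketch below is purely illustrative (we haven’t pulled apart the actual hacked DLL, and the GeForce 7900 GS device ID is an assumed value); it shows what the game’s check reads, and the kind of rewrite a proxy DLL would perform before handing the structure back to the game.

    // Hypothetical illustration of a GPU-blacklist workaround (not the actual hacked DLL).
    // A game's check boils down to reading the Direct3D 9 adapter identifier and comparing
    // the vendor/device/description against a list; a proxy d3d9.dll forwards every call to
    // the real runtime but rewrites that one structure so the Intel IGP looks like an NVIDIA part.
    #include <windows.h>
    #include <d3d9.h>
    #include <cstdio>
    #include <cstring>

    // What a spoofing wrapper would do inside its GetAdapterIdentifier override.
    static void SpoofAdapterIdentifier(D3DADAPTER_IDENTIFIER9* id) {
        strcpy_s(id->Description, sizeof(id->Description), "NVIDIA GeForce 7900 GS");
        id->VendorId = 0x10DE; // NVIDIA's PCI vendor ID
        id->DeviceId = 0x0292; // illustrative device ID for a 7900 GS
    }

    int main() {
        IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
        if (!d3d) return 1;

        D3DADAPTER_IDENTIFIER9 id = {};
        d3d->GetAdapterIdentifier(D3DADAPTER_DEFAULT, 0, &id);
        std::printf("Real adapter:    %s (vendor 0x%04X)\n", id.Description, (unsigned)id.VendorId);

        // A proxy DLL would apply this rewrite before returning the struct to the game,
        // so the blacklist check sees an NVIDIA GPU instead of the Intel IGP.
        SpoofAdapterIdentifier(&id);
        std::printf("Spoofed adapter: %s (vendor 0x%04X)\n", id.Description, (unsigned)id.VendorId);

        d3d->Release();
        return 0;
    }

Build against d3d9.lib on Windows; in the real fix, logic along these lines lives inside a drop-in d3d9.dll rather than a standalone program.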

Besides the above, there were some other issues. Assassin’s Creed and HAWX 2 showed occasional flickering polygons, and Mafia II had some rendering issues with shadows; both are minor glitches that don’t render the games unplayable, though in the case of Mafia II performance is too low for it to matter. Finally, the one title from our list that has clear problems with Intel’s current drivers is Chronicles of Riddick: Dark Athena. It’s interesting to note that this is the sole OpenGL title in our suite, and it checks in at a dismal sub-3FPS. The older Intel HD Graphics on Arrandale has the same issues as HD 3000, with the additional problem of seriously broken rendering in HAWX 2.

Outside of the above problems, performance is typically high enough to handle minimum to medium detail levels. The average frame rate for Sandy Bridge across the 20 test titles ends up at 41FPS. That works out to a 128% improvement over the previous Intel HD Graphics and a 136% lead over AMD’s HD 4250. The HD 5650, despite being paired with a slower CPU, still leads by over 55%, and the GT 425M likewise maintains a comfortable 62% lead; that said, you can certainly make the case that mainstream gaming is easily achievable with Sandy Bridge. Finally, it’s worth noting that while AMD’s HD 4250 actually ends up slightly slower than the old Intel HD Graphics on average, we didn’t encounter a single noticeable rendering error with that GPU in our test suite.
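If you prefer to see those percentages as frame rates, here’s a quick back-of-the-envelope conversion (hypothetical helper code; it simply treats each stated percentage as an exact ratio against the 41FPS Sandy Bridge average):

    // Convert the stated percentage gaps into implied 20-title average frame rates,
    // anchored on the 41FPS Sandy Bridge (HD Graphics 3000) result.
    #include <cstdio>

    int main() {
        const double snb = 41.0; // HD Graphics 3000 average across 20 titles (FPS)

        // A "128% improvement" means SNB is 2.28x the older GPU, and so on.
        std::printf("Arrandale HD Graphics: ~%.0f FPS\n", snb / 2.28);
        std::printf("AMD HD 4250:           ~%.0f FPS\n", snb / 2.36);
        std::printf("AMD HD 5650:           ~%.0f FPS\n", snb * 1.55); // "leads by over 55%"
        std::printf("NVIDIA GT 425M:        ~%.0f FPS\n", snb * 1.62); // "62% lead"
        return 0;
    }

That works out to roughly 18 and 17FPS for the two older IGPs, and roughly 64 and 66FPS for the HD 5650 and GT 425M.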

There are three exceptions to “playability” in our list, counting Dark Athena: both Mafia II and Metro 2033 fail to get above 30FPS regardless of settings, though Mafia II comes close at 29FPS when set to 800x600. These two titles are a familiar refrain, and it’s worth noting that many discrete mobile GPUs also fail to reach playable performance in them; in fact, Dark Athena likewise tends to be a bit too much for anything below an HD 5650/GT 420M. They’re the modern equivalent of Crysis, except you can’t even turn the settings down far enough (without hacking configuration files) to make them run acceptably.

Comments

  • skywalker9952 - Monday, January 3, 2011 - link

    For your CPU-specific benchmarks you annotate the CPU and GPU. I believe the HDD or SSD plays a much larger role in those benchmarks than the GPU. Would it not be more appropriate to annotate the storage device used? Were all of the CPUs in the comparison paired with SSDs? If they weren't, how much would that affect the benchmarks?
  • JarredWalton - Monday, January 3, 2011 - link

    The SSD is a huge benefit to PCMark, and since this is laptop testing I can't just use the same image on each system. Anand covers the desktop side of things, but I include PCMark mostly for the curious. I could try and put which SSD/HDD each notebook used, but then the text gets to be too long and the graph looks silly. Heh.

    For the record, the SNB notebook has a 160GB Intel G2 SSD. The desktop uses a 120GB Vertex 2 (SF-1200). W870CU is an 80GB Intel G1 SSD. The remaining laptops all use HDDs, mostly Seagate Momentus 7200.4 I think.
  • Macpod - Tuesday, January 4, 2011 - link

    the synthetic benchmarks are all run at turbo frequencies. the scores from the 2.3ghz 2820qm are almost the same as the 3.4ghz i7 2600k. this is because the 2820qm is running at 3.1ghz under cinebench.

    no one knows how long this turbo frequency lasts. maybe just enough to finish cinebench!

    this review should be re done
  • Althernai - Tuesday, January 4, 2011 - link

    It probably lasts forever given decent cooling, so the review is accurate, but there is something funny going on here: the score for the 2820QM is 20393 while the score in the 2600K review is 22875. That would be consistent with a difference between CPUs running at 3.4GHz and 3.1GHz, but why doesn't the 2600K Turbo up to 3.8GHz? The claim is that it can be effortlessly overclocked to 4.4GHz, so we know the thermal headroom is there.
  • JarredWalton - Tuesday, January 4, 2011 - link

    If you do continual heavy-duty CPU stuff on the 2820QM, the overall score drops about 10% on later runs in Cinebench and x264 encoding. I mentioned this in the text: the CPU starts at 3.1GHz for about 10 seconds, then drops to 3.0GHz for another 20s or so, then 2.9 for a bit and eventually settles in at 2.7GHz after 55 seconds (give or take). If you're in a hotter testing environment, things would get worse; conversely, if you have a notebook with better cooling, it should run closer to the maximum Turbo speeds more often.

    Macpod, disabling Turbo is the last thing I would do for this sort of chip. What would be the point, other than to show that if you limit clock speeds, performance will go down (along with power use)? But you're right, the whole review should be redone because I didn't mention enough that heavy loads will eventually drop performance about 10%. (Or did you miss page 10: "Performance and Power Investigated"?)
  • lucinski - Tuesday, January 4, 2011 - link

    Just like any other low-end GPU (integrated or otherwise), I believe most users would rely on the HD3000 just for undemanding games, in which category I would mention Civilization IV and V or FIFA / PES 11. That is to say, I would very much like to see how the new Intel graphics fares in these games, should they be available in the test lab of course.

    I am not necessarily worried about the raw performance; clearly the HD3000 has the capacity to deliver. Instead, driver maturity may turn out to be an obstacle. Firstly, one has to consider that Intel traditionally has problems with GPU driver design (relative to their competitors). Secondly, even if Intel at some point manages to fix (some of) the rendering issues mentioned in this article or elsewhere, notebook producers still take their sweet time before supplying users with new driver versions.

    In this context I am genuinely concerned about the HD3000's goodness. The old GMA HD + Radeon 5470 combination still seems tempting. Strictly referring to the gaming aspect, I honestly prefer reliability with a few FPS missing over the aforementioned risks.
  • NestoJR - Tuesday, January 4, 2011 - link

    So, when Apple starts putting these in Macbooks, I'd assume the battery life will easily eclipse 10 hours under light usage, maybe 6 hours under medium usage ??? I'm no fanboy but I'll be in line for that ! My Dell XPS M1530's 9-cell battery just died, I can wait a few months =]
  • JarredWalton - Tuesday, January 4, 2011 - link

    I'm definitely interested in seeing what Apple can do with Sandy Bridge! Of course, they might not use the quad-core chips in anything smaller than the MBP 17, if history holds true. And maybe the MBP 13 will finally make the jump to Arrandale? ;-)
  • heffeque - Wednesday, January 5, 2011 - link

    Yeah... Saying that the nVidia 320M is consistently slower than the HD3000 when comparing a CPU from 2008 and a CPU from 2011...

    Great job comparing GPUs! (sic)

    A more intelligent thing to say would have been: a 2008 CPU (P8600) with an nVidia 320M is consistently slightly slower than a 2011 CPU (i7-2820QM) with HD3000, don't you think?

    That would make more sense.
  • Wolfpup - Wednesday, January 5, 2011 - link

    That's the only thing I care about with these, and as far as I'm aware, the jump isn't anything special. It's FAR from the "tock" it supposedly is, going by earlier AnandTech data. (In fact the "tick/tock" thing seems to have broken down after just one set of products...)

    This sounds like it is a big advantage for me...but only because Intel refused to produce quad core CPUs at 32nm, so these by default run quite a bit faster than the last gen chips.

    Otherwise it sounds like they're wasting 114 million transistors that I want spent on the CPU, whether it's more cache, more functional units, another core (if that's possible in 114 million transistors), etc.

    I absolutely do NOT want Intel's garbage, incompatible graphics. I do NOT want the additional complexity, performance hit, and software headaches of Optimus or the like. I want a real GPU, functioning as a real GPU, with Intel's garbage completely shut off at all times.

    I hope we'll see that in mid range and high end notebooks, or I'm going to be very disappointed.
