Extended Compatibility and Performance Results – Medium Detail

[Benchmark charts: Medium detail results for Batman: Arkham Asylum, Borderlands, Chronicles of Riddick: Dark Athena, Crysis: Warhead, Elder Scrolls IV: Oblivion, Empire: Total War, Fallout 3, Fallout: New Vegas, Far Cry 2, FEAR 2: Project Origin, H.A.W.X. 2, Mafia II, Metro 2033, and the 20-title Medium Gaming Average.]

Bumping quality settings up to Medium puts the screws to the HD 3000, dropping nearly every test game below 30FPS. Besides Mass Effect 2 and STALKER (which we mentioned on the previous page), only Empire: Total War breaks the 30FPS mark, and it’s not even a clear victory there. Yes, the HD 3000 runs Medium detail at 42FPS, but the game prevents us from selecting the “High” defaults, which is where we would have preferred to test. (This is possibly another case of blacklisting, though not as severe as the Fallout 3 situation.)

At our Medium settings, the discrete GPUs easily pull away from Sandy Bridge, with both the Acer 5551G and ASUS N53JF nearly doubling the HD 3000’s performance (95-96% faster on average). Rendering quality also gets worse in H.A.W.X. 2: the entire skybox goes missing once detail levels are increased, leaving a black sky. (It’s still better than the horribly corrupted rendering that Arrandale’s IGP managed at lower settings.)
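
For reference, here’s how that “percent faster on average” figure is computed; a minimal Python sketch with made-up frame rates (the real averages come from the full 20-title results):

    # Hypothetical per-title average FPS (illustrative values only, not our
    # actual benchmark numbers) for the HD 3000 and a discrete GPU.
    hd3000 = {"Borderlands": 22.0, "Far Cry 2": 27.5, "Mafia II": 18.4}
    discrete = {"Borderlands": 44.1, "Far Cry 2": 52.3, "Mafia II": 36.9}

    # "Percent faster" per title, then the simple mean across titles.
    gains = [(discrete[t] / hd3000[t] - 1.0) * 100 for t in hd3000]
    print(f"Average advantage: {sum(gains) / len(gains):.1f}% faster")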

Ultimately, Sandy Bridge’s IGP is far more capable than many would have expected. Sure, it doesn’t even try to support DX11 or OpenCL, but for gaming purposes DX11 is typically too demanding for even midrange GPUs. Intel spends just 114 million transistors on Sandy Bridge’s graphics, which is quite small compared to other GPUs. The HD 5470, for example (a chip that the HD 3000 frequently surpasses), has an estimated 242 million transistors.

This is where Intel’s manufacturing prowess comes into play, as SNB uses a refined 32nm process that allows Intel to push clock speeds far higher than competing offerings. What’s more, late 2011 should bring the follow-up Ivy Bridge processor, which shrinks the process further to 22nm. At that node, Intel could potentially double the number of EUs (Execution Units) and increase clocks further still. If Intel puts the requisite effort into improving driver compatibility and adds DX11 support, and if rumors of high-bandwidth stacked memory prove true, next year we could see integrated graphics reach the point where it matches the HD 5650/GT 425M, effectively killing off everything below the upper-midrange and lower-high-end discrete GPUs.
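
As a rough sanity check on that speculation, here’s a back-of-envelope estimate in Python; the EU counts, clocks, and efficiency factor are our assumptions, not Intel specifications:

    # Back-of-envelope Ivy Bridge IGP estimate (all inputs are assumptions).
    snb_eus, snb_clock = 12, 1.30   # HD 3000: 12 EUs at up to ~1.3GHz
    ivb_eus, ivb_clock = 24, 1.50   # guess: doubled EUs, slightly higher clock

    # EUs rarely scale linearly (memory bandwidth is the likely bottleneck),
    # so apply an efficiency factor to the raw throughput ratio.
    efficiency = 0.75
    speedup = (ivb_eus / snb_eus) * (ivb_clock / snb_clock) * efficiency
    print(f"Estimated IGP speedup: {speedup:.2f}x")  # ~1.73x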

66 Comments

  • mtoma - Monday, January 3, 2011 - link

    Something like a Core i7 1357M could make Win 7 tablets temporarily viable. Remember that in the ultraportable space the big words are multitasking and dual-core processors (like the Cortex A9). So, realistically, we need a ULV dual-core Sandy Bridge.
  • JarredWalton - Monday, January 3, 2011 - link

    The i7-640M runs at 1.2GHz minimum and 2.26GHz maximum. The i7-2657M runs at 1.6GHz minimum and 2.7GHz maximum. (Actually, the minimum on all 2nd Gen Core parts is 800MHz when you aren't doing anything that needs more speed.) That would be 33% faster base speed and up to 19% higher max speed on clock speeds alone. However, you forgot to factor in around a 20-25% performance increase just from the Sandy Bridge architecture, so you're really looking at anywhere from 19% (bare minimum) to as much as 66% faster for normal usage, and things like Quick Sync would make certain tasks faster still.
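
    The arithmetic behind those numbers, as a quick Python check (clocks as quoted above; the 25% figure is the top of the architectural estimate):

        # Clock speeds quoted above (GHz).
        base_old, turbo_old = 1.20, 2.26  # i7-640M
        base_new, turbo_new = 1.60, 2.70  # i7-2657M

        base_gain = base_new / base_old - 1.0     # ~0.33 -> 33% faster base
        turbo_gain = turbo_new / turbo_old - 1.0  # ~0.19 -> 19% faster turbo

        # Fold in an assumed ~25% per-clock (IPC) gain for the best case.
        best_case = (1 + base_gain) * 1.25 - 1.0  # ~0.67 -> the "66% faster"
        print(f"{base_gain:.0%} base, {turbo_gain:.0%} turbo, up to {best_case:.0%}")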
  • DanNeely - Monday, January 3, 2011 - link

    You've got a limited range of TDP that any given architecture will be good in. According to Intel (at the time of the Atom launch), things start getting rather ragged when the range reaches 10x. Until Core 2 this wasn't really an issue for Intel, because the P3 and earlier top-end parts had sufficiently low TDPs that fitting the entire product line into a single architecture wasn't a problem. It didn't matter much in the P4 era either, because the Pentium-M and Core 1 were separate architectures and could be tuned so their sweet spot sat significantly lower than the desktop P4's. Beginning with Core 2, however, Intel only had a single architecture. The bottom tier of ULV chips suffered as a result, and on the high end, overclocking (especially voltage OCing) scaled very poorly in terms of performance gained per watt of added power.

    Atom is weak as you approach 10W because it was designed not as a low-end laptop part (although Intel is more than willing to take your money for a netbook) but to invade ARM's stronghold in smartphones, tablets, and other low-power embedded systems. Doing that requires good performance at <1W TDP. By using a low-power process (instead of the performance process of every prior Intel-fabbed CPU), Moorestown should finally be able to do so. The catch is that it leaves Intel without anything well optimized for the 10-15W range. In theory AMD's Bobcat should be well placed for this market, but the much larger chunk of TDP given to graphics, combined with AMD's historic liability in idle power, makes it something of a dark horse. I wouldn't be surprised if the 17W Sandy Bridge ends up getting better battery life than the 10W Bobcat because of this.
  • Kenny_ - Monday, January 3, 2011 - link

    I have seen in the past that when Mac OS X and Win 7 are run on the same machine, Mac OS X can have significantly better battery life. Is there any chance we could see what Sandy Bridge does for battery life under Mac OS X?
  • QChronoD - Monday, January 3, 2011 - link

    This was a test machine that Intel cobbled together. Give it a few weeks or months after retail machines come out, and I'm sure someone in the community will have shoehorned OS X onto one of them. (Although I don't know how well it would perform, since they'd probably have to write new drivers for the chipset and the graphics.)
  • cgeorgescu - Monday, January 3, 2011 - link

    I think that in the past we've seen MacOS and Win7 battery life comparisons while running on the same Mac, not on the same Acer/ASUS/whatever machine (because MacOS doesn't run on those without hacks). And I suspect Apple manages better power management only because they have just a few hardware configurations to support (so they can optimize specifically for that hardware); it's a major advantage of their business model.
    It's like the performance of games on the Xbox and the like... The hardware isn't that impressive, but you write and compile for that one configuration and nothing else: you're sure every machine is the same, with no dependence on AMD code paths, smaller or larger caches, slower or faster RAM, this or that video card, and so on...

    Power management in Macs aside, seeing what Sandy Bridge can do under MacOS would be frustrating... You know how long it takes until Jobs fits new stuff into those MBPs. Hell, he still sells the Core 2 Duo.
  • Penti - Monday, January 3, 2011 - link

    Having fewer configurations doesn't mean better-optimized graphics drivers; theirs are worse. Having only Intel hardware doesn't mean GCC only outputs Intel-optimized code. It's a compiler AMD contributes to, among others, and there's no such thing as AMD code paths; there are minor differences in how SSE is handled, but that's it. Most of the output is exactly the same, and the compiler optimizes for x86, not for a brand. If the CPUs support the same features, the code is equally optimized; the machine code is the same. It's not like having a Cell processor in there.

    Power management is handled by the kernel/drivers. You can expect SB MacBooks around this summer; not too long off. And you might even see people accepting Flash on their Macs again, as Adobe moves away from their archaic non-video-player workflow with 10.2 and onward. Battery/power management won't really work without Apple's firmware, though. But you're simply not going to optimize code on an OS X machine like on a console; you're going to leave it in a worse state than the Windows counterpart. Apple will also keep using C2D as long as Intel doesn't provide proper optimized drivers; it's a better fit for the smaller models as is.
  • mcdill the pig - Monday, January 3, 2011 - link

    Perhaps the issue is more the Compal's cooling system, but those max CPU temps (91 degrees Celsius) seem high. It may also be that the non-Extreme CPUs will have lower temps when stressed.

    My Envy 17 already has high temps - I was looking forward to SB notebooks having better thermal characteristics than the i7 QM chips (i.e. no more hot palmrests or ball-burning undersides)....
  • JarredWalton - Monday, January 3, 2011 - link

    This is a "works as designed" thing. Intel runs the CPU at the maximum speed allowed (3.1GHz on heavily threaded code in this case) until the CPU gets too warm. Actually, the funny thing is that when the fan stopped working at one point (a cold reboot fixed it), CPU temps maxed out at 99C. Even with no fan running, the system remained fully stable; it just ran at 800MHz most of the time (particularly if you put a load on the CPU for more than five seconds), possibly with other throttling going on. Cinebench 11.5, for instance, ran about 1/4 as fast as normal.
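
    That "about 1/4" lines up with the clocks; a one-line Python check (simplifying to the two clock states mentioned above):

        throttled, normal = 0.8, 3.1  # GHz: throttled speed vs. max threaded turbo
        print(f"Expected slowdown: {throttled / normal:.2f}x")  # ~0.26, about 1/4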
  • DanNeely - Monday, January 3, 2011 - link

    Throttling down to keep TDP at safe levels has been an Intel feature since the P4 era. Back in 2001(?), Tom's Hardware demoed this dramatically by running Quake on a P4 and removing the cooler entirely. Quake dropped into slideshow mode but remained stable, and it recovered as soon as the heatsink was set back on top.

    The P3 they tested hard-crashed. The Athlon XP/MP chips reached several hundred degrees and self-destructed (taking the motherboards with them). Later AMD CPUs added thermal protection circuitry to avoid this failure mode as well.
