Intel’s Gen 6 Graphics

All 2nd generation Core series processors that fit into an LGA-1155 motherboard will have one of two GPUs integrated on-die: Intel’s HD Graphics 3000 or HD Graphics 2000. Intel’s upcoming Sandy Bridge E for LGA-2011 will not have an on-die GPU. All mobile 2nd generation Core series processors feature HD Graphics 3000.

The 3000 vs. 2000 comparison is pretty simple. The former has 12 cores, or EUs (execution units) as Intel likes to call them, while the latter only has 6. Base clock speeds are the same, although the higher-end parts can turbo up to higher frequencies. Each EU is 128 bits wide, which makes a single EU sound a lot like a single Cayman SP.

Unlike Clarkdale, all versions of HD Graphics on Sandy Bridge support Turbo. Any TDP that is freed up by the CPU running at a lower frequency or having some of its cores shut off can be used by the GPU to turbo up. The default clock speed for both HD 2000 and 3000 on the desktop is 850MHz; however, the GPU can turbo up to 1100MHz in everything but the Core i7-2600/2600K. The top-end Sandy Bridge can run its GPU at up to 1350MHz.

Processor              Intel HD Graphics   EUs   Quick Sync   Graphics Clock   Graphics Max Turbo
Intel Core i7-2600K    3000                12    Y            850MHz           1350MHz
Intel Core i7-2600     2000                6     Y            850MHz           1350MHz
Intel Core i5-2500K    3000                12    Y            850MHz           1100MHz
Intel Core i5-2500     2000                6     Y            850MHz           1100MHz
Intel Core i5-2400     2000                6     Y            850MHz           1100MHz
Intel Core i5-2300     2000                6     Y            850MHz           1100MHz
Intel Core i3-2120     2000                6     Y            850MHz           1100MHz
Intel Core i3-2100     2000                6     Y            850MHz           1100MHz
Intel Pentium G850     HD Graphics         6     N            850MHz           1100MHz
Intel Pentium G840     HD Graphics         6     N            850MHz           1100MHz
Intel Pentium G620     HD Graphics         6     N            850MHz           1100MHz
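To put the SKU table in perspective, here is a back-of-the-envelope sketch of peak shader throughput for a few of the desktop parts. It assumes (simplistically) that throughput scales linearly with EU count and max turbo clock, ignoring architecture, drivers, and memory bandwidth; the EU/clock figures come from the table above.

```python
# Rough relative throughput estimate for the desktop SKUs above,
# assuming throughput scales linearly with EUs x max turbo clock.
# This is an illustration, not a measured result.

skus = {
    "Core i7-2600K (HD 3000)": (12, 1350),  # (EUs, max turbo in MHz)
    "Core i7-2600 (HD 2000)":  (6, 1350),
    "Core i5-2500K (HD 3000)": (12, 1100),
    "Core i5-2500 (HD 2000)":  (6, 1100),
}

# Baseline: HD Graphics 2000 turboing to 1100MHz (most non-K SKUs)
baseline = 6 * 1100

for name, (eus, turbo_mhz) in skus.items():
    relative = eus * turbo_mhz / baseline
    print(f"{name}: {relative:.2f}x")
```

By this crude measure the 2600K's GPU has nearly 2.5x the peak throughput of the HD 2000 parts, which is why the 3000 vs. 2000 split matters more than the identical 850MHz base clock suggests.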

Mobile is a bit different. The base GPU clock in all mobile SNB chips is 650MHz but the max turbo is higher at 1300MHz. The LV/ULV parts also have different max clocks, which we cover in the mobile article.

As I mentioned before, all mobile 2nd gen Core processors get the 12 EU version—Intel HD Graphics 3000. The desktop side is more confusing: the unlocked K-series SKUs get the 3000 GPU while everything else gets the 2000. That’s right: the SKUs most likely to be paired with discrete graphics are given the most powerful integrated graphics. Those users don’t pay any penalty for the beefier on-die GPU, however; when not in use the GPU is fully power gated.

Despite the odd perk for the K-series SKUs, Intel’s reasoning behind the GPU split does make sense. The HD Graphics 2000 GPU is faster than any desktop integrated GPU on the market today, and it’s easy to add discrete graphics to a desktop system if the integrated GPU is insufficient. The 3000 is simply another feature to justify the small price adder for K-series buyers.

On the mobile side, going entirely with the 3000 comes down to the state of integrated and low-end graphics in notebooks. You can’t easily add a discrete card later, so Intel has to put its best foot forward to appease OEMs like Apple. I suspect the top-to-bottom use of HD Graphics 3000 in mobile is directly responsible for Apple using Sandy Bridge without a discrete GPU in its entry-level notebooks in early 2011.

I’ve been careful to tie the HD Graphics 2000/3000 names to 2nd generation Core series CPUs, as Intel will eventually bring Sandy Bridge down to the Pentium brand with the G800 and G600 series processors. These chips will feature a version of the HD Graphics 2000 GPU that Intel will simply call HD Graphics. Performance will be similar to HD Graphics 2000; however, it won’t feature Quick Sync.

Image Quality and Experience

Perhaps the best way to start this section is with a list. Between Jarred and me, these are the games we’ve tested with Intel’s on-die HD 3000 GPU:

Assassin’s Creed
Batman: Arkham Asylum
Borderlands
Battlefield: Bad Company 2
BioShock 2
Call of Duty: Black Ops
Call of Duty: Modern Warfare 2
Chronicles of Riddick: Dark Athena
Civilization V
Crysis: Warhead
Dawn of War II
DiRT 2
Dragon Age Origins
Elder Scrolls IV: Oblivion
Empire: Total War
Far Cry 2
Fallout 3
Fallout: New Vegas
FEAR 2: Project Origin
HAWX
HAWX 2
Left 4 Dead 2
Mafia II
Mass Effect 2
Metro 2033
STALKER: Call of Pripyat
Starcraft II
World of Warcraft

That’s over two dozen titles, both old and new, that for the most part worked on Intel’s integrated graphics. For a GPU maker this would be nothing to be proud of, but given Intel’s track record with game compatibility it’s a huge step forward.

We did of course run into some issues. Fallout 3 (but not New Vegas) requires a DLL hack to even run on Intel integrated graphics, and we saw some shadow rendering issues in Mafia II, but the rest of the list ran without major problems.


Modern Warfare 2 in High Quality

Now the bad news. Despite huge performance gains and much improved compatibility, even the Intel HD Graphics 3000 requires that you run at fairly low detail settings to get playable frame rates in most of these games. There are a couple of exceptions but for the most part the rule of integrated graphics hasn’t changed: turn everything down before you start playing.


Modern Warfare 2 the way you have to run it on Intel HD Graphics 3000

This reality isn’t unique to Intel’s integrated graphics, however. IGPs from AMD and NVIDIA have had the same limitations, as have the lowest-end discrete cards on the market. The only advantage those solutions held over Intel in the past was performance.

Realistically we need at least another doubling of graphics performance before we can even begin to talk about playing games smoothly at higher quality settings. Interestingly enough, I’ve heard the performance of Intel’s HD Graphics 3000 is roughly equal to the GPU in the Xbox 360 at this point. It only took six years for Intel to get there. If Intel wants to contribute positively to PC gaming, we need to see continued doubling of processor graphics performance for at least the next couple generations. Unfortunately I’m worried that Ivy Bridge won’t bring another doubling as it only adds 4 EUs to the array.
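The Ivy Bridge concern can be put in numbers. Going from 12 to 16 EUs (the 4 additional EUs mentioned above) is at best a 1.33x gain at equal clocks, well short of a doubling. A quick sketch, again assuming the simplistic linear EUs-times-clock model:

```python
# Rough scaling estimate for Ivy Bridge's 16-EU configuration
# (12 EUs + the 4 additional EUs mentioned in the text) versus
# HD Graphics 3000, assuming performance scales linearly with
# EU count and clock -- a best-case simplification.

snb_eus, snb_clock = 12, 1350   # HD 3000 max turbo on the i7-2600K
ivb_eus = snb_eus + 4           # 16 EUs

# At the same clock, the EU increase alone buys:
eu_scaling = ivb_eus / snb_eus
print(f"EU count alone: {eu_scaling:.2f}x")

# Clock a 16-EU part would need to actually double HD 3000 performance:
required_clock = 2 * snb_eus * snb_clock / ivb_eus
print(f"Clock needed for 2x: {required_clock:.0f}MHz")
```

Even under this generous linear model, Intel would need clocks in the 2GHz range on 16 EUs to deliver a true doubling—hence the worry.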


283 Comments


  • Exodite - Monday, January 3, 2011 - link

    I'm of two minds about that really.

    I had really set my mind on the 2500K as it offers unparalleled bang-for-buck, and real-world testing has shown that Hyper-Threading makes little difference in games.

    With the compile tests it's clear there's a distinct benefit to going with the 2600K for me though, which means this'll end up more expensive than I had planned! :)
  • Lazlo Panaflex - Monday, January 3, 2011 - link

    FYI, the 1100T is missing from several of the gaming benchmarks.....
  • Melted Rabbit - Monday, January 3, 2011 - link

    It wouldn't surprise me if that was intentional. I would hope that Anandtech reviewers were not letting companies dictate how their products were to be reviewed lest AT be denied future prerelease hardware. I can't tell from where I sit, and there appears to be no statement denying such interference.

    In addition, real-world benchmarks aside from games look to be absent. Seriously, I don't use my computer for offline 3D rendering, and I suspect very few other readers do to any significant degree.

    Also, isn't SYSmark 2007 a broken, misleading benchmark? It was compiled on Intel's compiler, you know, the one that unnecessarily degrades performance on AMD and VIA processors. There is also this bit that Intel has to include with comparisons that use BAPco (Intel) benchmarks pitting Intel's processors against AMD or VIA processors:

    Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchase, including the performance of that product when combined with other products.

    It isn't perfect, but that is what the FTC and Intel agreed to, and until BAPco releases new benchmarks that do not inflict poor performance on non-Intel processors, the results are not reliable. I'd see no problem if the graph did not contain AMD processors, but that isn't what we have here. If you are curious, for better or for worse, BAPco is a non-profit organization controlled by Intel.
  • Anand Lal Shimpi - Monday, January 3, 2011 - link

    Hardware vendors have no input into how we test, nor do they stipulate that we must test a certain way in order to receive future pre-release hardware. I should also add that should a vendor "cut us off" (it has happened in the past), we have many other ways of getting their hardware. In many cases we'd actually be able to bring you content sooner, as we wouldn't be held by NDAs, but it just makes things messier overall.

    Either way, see my response above for why the 1100T is absent from some tests. It's the same reason that the Core i7 950 is absent from some tests, maintaining Bench and adding a bunch of new benchmarks meant that not every test is fully populated with every configuration.

    As far as your request for more real world benchmarks, we include a lot of video encoding, file compression/decompression, 3D rendering and even now a compiler test. But I'm always looking for more, if there's a test out there you'd like included let me know! Users kept on asking for compiler benchmarks which is how the VS2008 test got in there, the same applies to other types of tests.

    Take care,
    Anand
  • Melted Rabbit - Tuesday, January 4, 2011 - link

    Thanks for replying to my comment. I now understand why the review was missing some benchmarks for processors like the 1100T. I was also a bit hasty in my accusations of interference from manufacturers, which I apologize for.

    I still have trouble with including benchmarks compiled on the Intel compiler without a warning or explanation of what they mean. It really isn't a benchmark with meaningful results if the 1100T used x87 code and the Core i7-2600K used SSE2/SSE3 code. I would have no problem with reporting results for benchmarks compiled with Intel's defective compiler, like SYSmark 2007 and Cinebench R10, provided they did not include results for AMD or VIA processors, along with an explanation of why they were not applicable to AMD and VIA processors. However, not giving context to such results I find problematic.
  • DanNeely - Monday, January 3, 2011 - link

    Sysmark2k7 is like the various 3DMark benches: mostly useless, but with a large enough fanbase that running it is less hassle than dealing with all the whining fanbois.
  • Anand Lal Shimpi - Monday, January 3, 2011 - link

    There are a few holes in the data we produce for Bench, I hope to fill them after I get back from CES next week :) You'll notice there are some cases where there's some Intel hardware missing from benchmarks as well (e.g. Civ V).

    Take care,
    Anand
  • Lazlo Panaflex - Monday, January 3, 2011 - link

    Thanks Anand :-)
  • MeSh1 - Monday, January 3, 2011 - link

    Seems Intel did everything right for these to fit snugly into next-gen Macs. Everything is nicely integrated into one chip, and the encode/transcode speed boost is icing on the cake (if supported, of course), given that Apple is content focused. Nice addition if you're a Mac user.
  • Doormat - Monday, January 3, 2011 - link

    Except for the whole thing about not knowing if the GPU is going to support OpenCL. I've heard Intel is writing OpenCL drivers for possibly a GPU/CPU hybrid, or utilizing the new AVX instructions for CPU-only OpenCL.

    Other than that, the AT mobile SNB review included a last-gen Apple MBP 13" and the HD3000 graphics could keep up with the Nvidia 320M - it was equal to or ahead in low-detail settings and equal or slightly behind in medium detail settings. Considering Nvidia isn't going to rev the 320M again, Apple may as well switch over to the HD3000 now and then when Ivy Bridge hits next year, hopefully Intel can deliver a 50% perf gain in hardware alone from going to 18 EUs (and maybe their driver team can kick in some performance there too).
