Discrete GPU Gaming Performance

Gaming performance with a discrete GPU does improve in line with the rest of what we've seen thus far from Ivy Bridge. It's definitely a step ahead of Sandy Bridge, but not enough to warrant an upgrade in most cases. If you haven't already made the jump to Sandy Bridge, however, the upgrade will serve you well.

Dragon Age Origins

DAO has been a staple of our CPU gaming benchmarks for some time now. The third person RPG is well threaded and is influenced by both CPU and GPU performance. Our benchmark is a FRAPS runthrough of our character through a castle.

Dragon Age Origins—1680 x 1050—Max Settings (no AA/Vsync)

Dawn of War II

Dawn of War II is an RTS title that ships with a built-in performance test. We ran it at Ultra quality settings at 1680 x 1050:

Dawn of War II—1680 x 1050—Ultra Settings

World of Warcraft

Our WoW test is run at High quality settings on a lightly populated server in an area where no other players are present to produce repeatable results. We ran at 1680 x 1050.

World of Warcraft

Starcraft II

We have two Starcraft II benchmarks: a GPU test and a CPU test. The GPU test is mostly a navigate-around-the-map test, as scrolling and panning around tends to be the most GPU bound part of the game. Our CPU test involves a massive battle of six armies in the center of the map, stressing the CPU far more than the GPU. At these low quality settings, however, both benchmarks are influenced by CPU and GPU performance. We'll get to the GPU test shortly, but our CPU test results are below. The benchmark runs at 1024 x 768 at Medium quality settings with all CPU influenced features set to Ultra.

Starcraft 2

Metro 2033

We're using the Metro 2033 benchmark that ships with the game. We run the benchmark at 1024 x 768 for a more CPU bound test as well as 1920 x 1200 to show what happens in a more GPU bound scenario.

Metro 2033 Frontline Benchmark—1024 x 768—DX11 High Quality

Metro 2033 Frontline Benchmark—1920 x 1200—DX11 High Quality

DiRT 3

We ran two DiRT 3 benchmarks to get an idea of CPU bound and GPU bound performance. First, the CPU bound settings:

DiRT 3—Aspen Benchmark—1024 x 768 Low Quality

DiRT 3—Aspen Benchmark—1920 x 1200 High Quality

Crysis: Warhead

Crysis Warhead Assault Benchmark—1680 x 1050 Mainstream DX10 64-bit

Civilization V

Civ V's lateGameView benchmark presents us with two separate scores: average frame rate for the entire test as well as a no-render score that only looks at CPU performance. We're looking at the no-render score here to isolate CPU performance alone:

Civilization V—1680 x 1050—DX11 High Quality

Comments

  • DanNeely - Monday, April 23, 2012 - link

    Isn't the net OC performance roughly a wash? You're losing ~10% off the top in clock speed, but getting it back by the CPU doing ~10% more per clock.

    I'm curious what the power gap for the OCed IB is vs SB. For a system kept running at full load, the stock power gap would give a decent amount of yearly savings on the utility bills. If the gap opens even more under OC it'd be a decent upgrade for anyone running CPU farms.
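The back-of-the-envelope math in DanNeely's comment can be sketched quickly; the ~10% figures below are the commenter's rough estimates, not measured numbers:

```python
# Rough check of the "net wash" argument: a ~10% clock-speed deficit
# offset by a ~10% per-clock gain lands almost exactly at parity.
# Both percentages are illustrative estimates, not benchmark data.
clock_factor = 0.90   # overclocked Ivy Bridge clocks ~10% lower
ipc_factor = 1.10     # but does ~10% more work per clock

net_throughput = clock_factor * ipc_factor
print(f"Net throughput vs. Sandy Bridge OC: {net_throughput:.2f}x")  # 0.99x
```

Which works out to roughly 0.99x, i.e. a wash within the noise of any real benchmark.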
  • Shadow_k - Monday, April 23, 2012 - link

    Very nice IGP improvements.

    Also, when will AnandTech do a review of the i5 3570K? Its IGP is underclocked.
  • Ram21 - Monday, April 23, 2012 - link

    Ultrabooks
  • Ram21 - Monday, April 23, 2012 - link

    Page 11 has the incorrect title or chart of Starcraft II - GPU Bench on the Dirt 3 page
  • Anand Lal Shimpi - Monday, April 23, 2012 - link

    Fixed both of these, thank you!
  • ltcommanderdata - Monday, April 23, 2012 - link

    I don't have a comment on this Ivy Bridge review itself, since it's thorough as always from AnandTech and Ivy Bridge seems pretty much what was expected. I do want to suggest a new benchmark for the eventual OpenCL followup when Intel releases new GPU drivers. As AMD mentioned as part of their HD 7000 series launch, WinZip 16.5 has finally been released with OpenCL acceleration developed in collaboration with AMD. Since fluid simulations won't be a common use case for most consumers and video encoding seems better suited to fixed function hardware like QuickSync, OpenCL accelerated file compression/decompression will probably be the first and most popular consumer use of GPGPU. It'll be interesting to see how much of a benefit GPU acceleration brings, and whether AMD's collaboration results in better performance from AMD GPUs, relative to Intel and nVidia GPUs, than raw hardware specs would suggest. Other interesting tests: whether the acceleration is more pronounced with a single 1GB compressed file versus many compressed files adding up to 1GB; how well acceleration scales between different GPU performance classes, and whether it's bottlenecked by PCIe bandwidth, CPU setup time, system memory transfers or, more likely, HDD I/O; whether tightly coupled CPU/GPUs like Llano and Ivy Bridge give a performance advantage over otherwise similarly specced discrete GPUs; whether GPU acceleration is worthwhile on older GPU generations like the AMD HD 5000/6000 and nVidia 8000/9000/100/200 series, which aren't as compute optimized as the latest AMD HD 7000 and nVidia 400/500 series; whether WinZip 16.5 supports the AMD HD 4000 series, which is OpenCL 1.0 only, or requires OpenCL 1.1; and whether WinZip 16.5 uses OpenCL to improve performance scaling on high core count CPUs (8 or more cores).

    If GPU accelerated file compression/decompression is effective hopefully Microsoft and Apple will consider adding it to their native OS .zip handler.
  • Ryan Smith - Monday, April 23, 2012 - link

    Rest assured it's on our list of things to look at, though I haven't seen it yet.
  • mgoldshteyn - Monday, April 23, 2012 - link

    The graphics engine still cannot support 10-bit per color IPS displays, as found on quality modern laptops from Dell and HP. That means one is forced to get an overpriced mobile video card from ATI or NVidia to compensate, lowering the laptop's battery life by requiring a discrete card to drive these displays. On non-IPS displays, one can choose to use the Intel built-in graphics engine to save battery life. There's no such choice on high quality IPS displays, since they are incompatible with the graphics engine of even Ivy Bridge.
  • zaccun - Monday, April 23, 2012 - link

    The workstation class laptops you are referring to are only offered with discrete graphics cards. No other machine has a 10-bit IPS panel. There is zero sense in Dell or HP offering a machine aimed at professionals doing 3D modeling/CAD/video editing/etc. without also putting the graphics horsepower in the laptop to support it.

    While I personally would love the option of getting a machine with the awesome panels those notebooks use, without also paying for the $$$$ Quadro cards that pros need, neither Dell nor HP offers anything like that.
  • Arnulf - Tuesday, April 24, 2012 - link

    Neither can your eyes distinguish 1,073,741,824 different colors, so why would you care?
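The figure Arnulf quotes is simply 2^30: 10 bits per channel across three RGB channels. A minimal sketch of the arithmetic:

```python
# Number of colors representable at a given bit depth per RGB channel.
def total_colors(bits_per_channel: int, channels: int = 3) -> int:
    return 2 ** (bits_per_channel * channels)

print(total_colors(8))   # 16777216   (standard 24-bit color)
print(total_colors(10))  # 1073741824 (30-bit "deep color")
```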
