Intel HD Graphics 4000 Performance

With respectable but still very tick-like performance gains on the CPU, our focus now turns to Ivy Bridge's GPU. Drivers play a significant role in performance here, and we're still several weeks away from launch, so these numbers may improve. We used the latest available drivers as of today for all other GPUs.


A huge thanks goes out to EVGA for providing us with a GeForce GT 440 and GeForce GT 520 for use in this preview.

Crysis: Warhead

We'll start with Crysis, a title that no one would have considered running on integrated graphics a few years ago. Sandy Bridge brought playable performance at low quality settings (Performance defaults) last year, but how much better does Ivy do this year?

[Chart: Crysis: Warhead - Frost Bench, Mainstream quality]

At our highest quality (Mainstream) settings, Intel's HD Graphics 4000 is 55% faster than Sandy Bridge's HD Graphics 3000. While still tangibly slower than AMD's Llano (Radeon HD 6550D), Ivy Bridge is a significant step forward. Drop the quality down a bit and playability improves considerably:

[Chart: Crysis: Warhead - Frost Bench, lower quality]

[Chart: Crysis: Warhead - Frost Bench, lower quality]

Over 50 fps at 1680 x 1050 from Intel integrated graphics is pretty impressive. Here we're showing a 41% increase in performance over Sandy Bridge, with Llano maintaining a 33% advantage over Ivy Bridge. I would've liked to see an outright doubling of performance, but this is a big enough step forward to be noticeable on systems without a discrete GPU.
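As a side note on how these relative figures work: the percentages above are simple ratios of average frame rates. The short sketch below illustrates the arithmetic using hypothetical frame rates, not the measured results from our charts.

```python
# Illustration of how relative-performance percentages are derived.
# The frame rates below are hypothetical placeholders, NOT measured results.

def pct_faster(a_fps, b_fps):
    """Return how much faster A is than B, as a percentage."""
    return (a_fps / b_fps - 1.0) * 100.0

hd3000, hd4000, llano = 36.0, 50.8, 67.6  # hypothetical average fps

print(f"HD 4000 vs HD 3000: {pct_faster(hd4000, hd3000):.0f}% faster")  # ~41%
print(f"Llano vs HD 4000:   {pct_faster(llano, hd4000):.0f}% faster")   # ~33%
```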

Comments

  • tipoo - Wednesday, March 7, 2012 - link

    Thankfully the comments of a certain troll were removed so mine no longer makes sense, for any future readers.
  • Articuno - Tuesday, March 6, 2012 - link

    Just like how overclocking a Pentium 4 resulted in it beating an Athlon 64 and had lower power consumption to boot-- oh wait.
  • SteelCity1981 - Tuesday, March 6, 2012 - link

    That's a stupid comment only a stupid fanboy would make. AMD is way ahead of Intel in the graphics department and is very competitive with Intel in the mobile segment now.
  • tipoo - Tuesday, March 6, 2012 - link

    Your comments do nothing to inform regular readers of sites like this; we already know more. So please, can it.
  • tipoo - Tuesday, March 6, 2012 - link

    Not what I asked, little troll. Give a source that says Apple will get a special HD4000 like no other.
  • Operandi - Tuesday, March 6, 2012 - link

    What are you talking about? As long as AMD has a better iGPU, there is plenty of reason for them to be a viable choice today. And if their iGPU gaming performance holds up against Intel, there is more than just hope of them getting back in the game in terms of high-performance compute tomorrow.
  • tipoo - Tuesday, March 6, 2012 - link

    I'm pretty sure 16x AF has a sub-2% performance hit on even the lowest end of today's GPUs. Is it different with the HD Graphics? If not, why not just enable it like most people would? Even on something like a 4670 I max out AF without thinking twice about it; AA still hurts performance though.
  • IntelUser2000 - Tuesday, March 6, 2012 - link

    AF has a greater performance impact on low-end GPUs, typically about 10-15%. It's less on the HD Graphics 3000 only because its 16x AF really works at much lower levels. It's akin to having the option for a 1280x1024 resolution but performing like 1024x768 because it looks like the latter.

    If Ivy Bridge improved AF quality to be on par with AMD/Nvidia, performance loss should be similar as well.
  • tipoo - Wednesday, March 7, 2012 - link

    Hmm, I did not know that. What component of the GPU is involved in that performance hit (shaders, ROPs, etc.)? My card is fairly low end and 16x AF performs nearly no differently than 0x.
  • Exophase - Wednesday, March 7, 2012 - link

    AF requires more samples in cases of high anisotropy, so I guess the TMU load increases, which may also increase bandwidth requirements since it could force higher LOD in those cases. You'll only see a performance difference if the AF causes the scene to be TMU/bandwidth limited instead of, say, ALU limited. I'd expect this to happen more as you move up in performance, not down, since the ALU:TEX ratio tends to go up toward the higher end, but APUs can be more bandwidth sensitive and I think Intel's IGPs never had a lot of TMUs.

    Of course it's also very scene dependent. And maybe an inferior AF implementation could end up sampling more than a better one.
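
The sample-count point in the comment above can be made concrete with a simplified worst-case model (an illustrative sketch, not a description of any specific GPU): NxAF can take up to N filtered probes along the axis of anisotropy, and each probe is itself a bilinear or trilinear fetch, so the texel-traffic ceiling grows roughly linearly with the AF level.

```python
# Simplified worst-case texture-sampling cost under anisotropic filtering (AF).
# Real GPUs adapt the probe count per pixel, so actual load is usually far
# below this ceiling; the numbers are illustrative only.

def worst_case_texels(af_level, trilinear=True):
    """Texels fetched for a single texture lookup at a given AF level."""
    taps_per_probe = 8 if trilinear else 4  # trilinear = 2 mip levels x 4 texels
    probes = max(1, af_level)               # up to N probes along the anisotropy axis
    return probes * taps_per_probe

for af in (1, 2, 4, 8, 16):
    print(f"{af:>2}x AF: up to {worst_case_texels(af)} texels per lookup")
```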
