Ivy Bridge HD 4000: Medium Quality Gaming Now Possible

We’ve run a larger than normal set of games this time around. We’ll start with our current 2012 gaming suite, which we’ve discussed previously. The goal of our 2012 game tests is to get reasonable quality rather than bare minimum quality, so we’ve set the bar at around medium detail for our Value settings and high detail for our Mainstream settings. Along with the 2012 suite, we also ran all of the gaming tests from our 2011 suite at our medium detail settings.

We won’t provide a complete list of results here, but you can find those in Mobile Bench (including Mainstream and Enthusiast performance results, though not surprisingly HD 4000 falls well short of playability at those settings). What we will do is show how HD 4000 compares to HD 3000, HD 6620G (Llano A8), HD 6630M, and GT 640M.

[Charts: 2012 Value gaming results for Batman: Arkham City, Battlefield 3, Civilization V, DiRT 3, Elder Scrolls: Skyrim, Portal 2, and Total War: Shogun 2]

We don’t have a huge selection of laptop hardware (yet) for our 2012 gaming suite, and Ivy Bridge tends to place near the bottom of the Value gaming charts, but that’s only part of the story. Of the seven games we’ve selected for our current tests, two fail to deliver acceptable frame rates: Battlefield 3 and Civilization V. Civ5 is actually still playable at low frame rates, since it’s not a real-time game, but all things considered we’d still like to see >30 FPS. Battlefield 3 on the other hand is simply a beast—notice that Llano A8 along with Sony’s VAIO SE and Z2 with discrete GPUs all fail to break 30 FPS. If you’re into the multiplayer element of BF3, you’d really want a faster GPU; we’d suggest NVIDIA’s GT 555M or AMD’s HD 6730M as a more reasonable target to handle BF3 at our Value (medium) settings.

The remaining games all run acceptably, with the only possible exception being Portal 2. Many times during the game, when you’re looking through a portal, the frame rate takes a substantial hit. You’ll see this to a lesser extent on other GPUs, but those GPUs already average well over 60 FPS, so a 15-20 FPS drop isn’t that noticeable. HD 4000 unfortunately can drop below 30 FPS on some Portal 2 levels, which means that despite the moderately high average frame rate it’s sometimes borderline unplayable. If you’d like to read more discussion of HD 4000 gaming potential, we’d also point you at Ryan’s dissection of Ivy Bridge graphics performance on the desktop.
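To illustrate why the average can be misleading here, consider a minimal Python sketch using made-up frame rate samples (not our logged data): a run can average close to 50 FPS while still spending a quarter of its time below the 30 FPS floor.

# Illustrative only: hypothetical per-second FPS samples for a Portal 2 run,
# not our logged data. A healthy average can hide dips below 30 FPS.
samples = [62, 58, 55, 60, 57, 24, 22, 27, 59, 61, 56, 25, 58, 60, 57]

average = sum(samples) / len(samples)
minimum = min(samples)
time_below_30 = sum(1 for fps in samples if fps < 30) / len(samples)

print(f"Average FPS: {average:.1f}")               # ~49 FPS, looks comfortably playable
print(f"Minimum FPS: {minimum}")                   # 22 FPS, borderline unplayable moments
print(f"Time below 30 FPS: {time_below_30:.0%}")   # ~27% of the run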

If we skip comparisons with all the faster discrete GPUs, things become quite a bit more interesting. We’re looking at fifteen games from the past two or three years, all run at our medium detail settings and 1366x768. (Note that the benchmark we use in the above chart for Left 4 Dead 2 differs from our previous tests, as Valve broke backwards compatibility for their timedemo about five months back.) Comparing Sandy Bridge and Ivy Bridge, HD 4000 ends up being nearly 50% faster than HD 3000 on average. The only title in our suite that doesn’t see a substantial boost in performance is Civilization V.

We’ve speculated in the past that the problem is with Intel’s shader and/or geometry throughput, which might also explain the slowdowns we see in other titles (e.g. Portal 2 and Left 4 Dead 2 both suffer from frame rate dips on Intel’s IGPs). In a similar story, enabling tessellation in Deus Ex: Human Revolution absolutely kills frame rates on HD 4000—the game is perfectly playable with most of the quality settings enabled, but turn on tessellation and frame rates drop to less than half. Hopefully Haswell’s IGP will get the geometry and shader processing capabilities it needs to feed the rumored 40 EUs on the fastest chips. Still, Intel has really stepped up their commitment to graphics performance over the past several generations. That’s best illustrated by a comparison with the only other major IGP, AMD’s Llano:

We used two different sets of results for the above chart: one from the original Llano A8-3500M laptop AMD shipped us (for the 2011 games), and the second from the Toshiba Satellite P755D with its A8-3520M. Red bars indicate games where AMD wins, green bars are for Intel wins, and blue bars are for ties (the two score within 10% of each other—100% being identical performance). Gathering all the gaming results together, what we end up with is Intel’s HD 4000 offering a very similar experience to Llano A8 in overall gaming capability. On the desktop, Llano continues to enjoy a fairly sizeable lead over Ivy Bridge, but on laptops with much lower TDPs it's a different story.

There are some titles where AMD pulls off substantial wins (Civ5, DiRT 2, Portal 2, and Total War: Shogun 2), and several titles where Intel takes a similarly large lead (Batman, DiRT 3, Left 4 Dead 2, and Skyrim). Neither platform will handle medium detail settings at 1366x768 for every game out there, but they’re both generally fast enough to handle most games. Here are the detailed numbers:

Core i7-3720QM vs. AMD A8-3500M/3520M
Gaming Performance Comparison (average FPS)
Game                           HD 4000   HD 6620G
Total War: Shogun 2            27.4      36.9
StarCraft II                   27.7      26.9
Stalker: Call of Pripyat       48.3      46.7
Portal 2                       53.9      66.1
Metro 2033                     26.9      26.9
Mass Effect 2                  43.6      42.1
Mafia II                       28.5      30.4
Left 4 Dead 2                  40.2      35.5
Elder Scrolls: Skyrim          42.7      30.4
DiRT 3                         41.9      37.7
DiRT 2                         36.5      43.2
Civilization V                 15.6      26.2
Battlefield: Bad Company 2     39.0      37.3
Battlefield 3                  21.9      20.7
Batman: Arkham City            49.0      39.0
Average of 15 Titles           36.2      36.4
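For reference, here's a short Python sketch of the win/tie rule described above applied to the averages in the table. It's just the 10% classification logic, not our actual charting code, but it reproduces the win lists called out above.

# Classify each game using the 10% tie rule from the chart description.
# Scores are the average FPS from the table: (HD 4000, HD 6620G).
results = {
    "Total War: Shogun 2": (27.4, 36.9),
    "StarCraft II": (27.7, 26.9),
    "Stalker: Call of Pripyat": (48.3, 46.7),
    "Portal 2": (53.9, 66.1),
    "Metro 2033": (26.9, 26.9),
    "Mass Effect 2": (43.6, 42.1),
    "Mafia II": (28.5, 30.4),
    "Left 4 Dead 2": (40.2, 35.5),
    "Elder Scrolls: Skyrim": (42.7, 30.4),
    "DiRT 3": (41.9, 37.7),
    "DiRT 2": (36.5, 43.2),
    "Civilization V": (15.6, 26.2),
    "Battlefield: Bad Company 2": (39.0, 37.3),
    "Battlefield 3": (21.9, 20.7),
    "Batman: Arkham City": (49.0, 39.0),
}

for game, (hd4000, hd6620g) in results.items():
    ratio = hd4000 / hd6620g              # >1.0 means Intel is ahead
    if abs(ratio - 1.0) <= 0.10:
        verdict = "tie (within 10%)"
    elif ratio > 1.0:
        verdict = f"Intel win by {ratio - 1.0:.0%}"
    else:
        verdict = f"AMD win by {1.0 / ratio - 1.0:.0%}"
    print(f"{game:30s}{verdict}")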

Granted, the A8-3500M/3520M aren’t the fastest Llano parts, and the Llano systems we tested were both using DDR3-1333 memory. Give Llano an MX part and faster memory, and performance should improve by around 20% (5-10% for the RAM, and 10-15% for the CPU). Outside of a few titles, however, both solutions are playable on most of the same subset of games. And yes, we know AMD has Trinity coming out, which should improve on Llano’s GPU performance; AMD has suggested it will be around 50% faster, but we can’t comment on Trinity performance right now—check back in a few weeks.
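As a rough sanity check on that ~20% figure (purely illustrative: it just compounds the midpoints of the two quoted ranges, assuming the memory and CPU gains are roughly multiplicative):

ram_gain = 1.075   # midpoint of the 5-10% estimate for faster memory
cpu_gain = 1.125   # midpoint of the 10-15% estimate for a faster (MX) CPU
print(f"Combined uplift: {ram_gain * cpu_gain - 1.0:.0%}")   # about 21%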

What About Drivers?

Of course, there’s another question that always comes up when we look at Intel’s IGPs: driver quality. When Sandy Bridge launched we did a similar investigation and found minor to moderate problems in four of the twenty games we tested. We trimmed our list down to 15 titles this time out, and if we only look at games where there was clearly some sort of driver problem, we have two failures. The first is the less serious of the two: StarCraft II appears to have a memory leak with the current HD 4000 drivers, as it ends up using nearly 4GB of RAM after 10 to 15 minutes of play before crashing to the desktop. (I had this happen in a Versus AI match three out of three times in testing.) Prior to running out of memory, however, performance is generally playable, and HD 3000 doesn’t appear to have the same problem.

The second problem title is Battlefield 3. Almost everything renders properly, but the overlay of text (e.g. subtitles and some HUD elements) didn’t work for me, and navigating the menu also didn’t work properly in fullscreen mode—the mouse wouldn’t register on any of the buttons, so I had to switch to windowed mode, click on the menu settings I wanted to change (e.g. click on “Video” and then change the resolution and quality settings), apply, and then switch back to fullscreen mode. What’s more, BF3 would also lose the menu system entirely after loading a level, typically requiring a restart to bring it back. Performance is acceptable at minimum detail settings, but until the overlay/menu issues get sorted out, BF3 isn’t what I would deem playable. (Note that I also experienced the same overlay/menu issues on HD 3000, so this appears to be a driver bug with Intel’s latest 8.15.10.2696 drivers.) I've uploaded a YouTube video showing the bug as well as how things should work.

[Update, 4/28/2012: It's not clear what the precise cause of the Battlefield 3 glitch was, but a 1.5GB patch was pushed live via Origin some time in the past few days. With the patch in place and running the same Intel drivers, the bugs described above are no longer present. Performance is still sub-30 FPS, but for the single-player campaign you could probably manage in a pinch.]

Looking at the big picture, Ivy Bridge is still a very large step forward for Intel’s graphics division. You’ll note that I didn’t provide any examples of DX11 not working properly in recent games; that’s because, as far as I can tell, all the DX11 titles I tried rendered correctly. I’m sure there are other exceptions out there, but besides the above fifteen titles I also loaded and played an additional eight games, and all of them rendered properly in brief testing.

If you’re interested, the list of additional games I tried includes Rage, Super Street Fighter IV: Arcade Edition, Deus Ex: Human Revolution, Duke Nukem Forever, Dungeon Siege III, Far Cry 2, Just Cause 2, and The Witcher 2. Of these, only The Witcher 2 struggled to reach playable frame rates; even at minimum detail settings, in-game frame rates were typically in the high teens, though The Witcher 2 tends to perform poorly on NVIDIA and AMD mainstream mobile GPUs as well—it’s a bit of a graphics pig. So that brings the total number of games I tried on Ivy Bridge to 23, with only one game showing clear rendering issues, and a second experiencing periodic instability due to a memory leak. While not perfect, it’s another healthy step in the right direction.

Comments

  • krumme - Monday, April 23, 2012

    There is a reason Intel is bringing 14nm to the Atoms in 2014.

    The product here doesn't make sense. It's expensive and not better than the one before it, except for better gaming - that is, if the drivers work.

    I don't know if the SB notebooks I have in the house are the same as the ones Jarred has. Mine didn't bring a revolution, but solid battery life, like the Penryn notebook and Core Duo I also have. In my world it's more or less the same if you add an SSD for normal office work.

    Loads of utterly uninteresting benchmarks don't mask the facts. This product excels where it's not needed, and fails where it should excel most: battery life.

    Tri-gate is mostly a failure right now. There is no need to call it otherwise, and the "preview" reads 10% like a press release in my world. At least tri-gate is not living up to expectations. Sometimes that happens with technology development; it's a wonder it's normally so smooth for Intel, and a testament to their huge expertise. When the technology matures and Intel makes better use of it in the architecture, we will see huge improvements. Spare the praise until then; this is just wrong and bad.
  • JarredWalton - Monday, April 23, 2012

    Seriously!? You're going to mention Atom as the first comment on Ivy Bridge? Atom is such a dog as far as performance is concerned that I have to wonder what planet you're living on. 14nm Atom is going to still be a slow product, only it might double the performance of Cedar Trail. Heck, it could triple the performance of Cedar Trail, which would make it about as fast as Core 2 CULV from three years ago. Hmmm.....

    If Sandy Bridge wasn't a revolution, offering twice the performance of Clarksfield at the high end and triple the battery life potential (though much of that is because Clarksfield was paired with power hungry GPUs), I'm not sure what would be a revolution. Dual-core SNB wasn't as big of a jump, but it was still a solid 15-25% faster than Arrandale and offered 5% to 50% better battery life--the 50% figure coming in H.264 playback; 10-15% better battery life was typical of office workloads.

    Your statement with regards to battery life basically shows you either don't understand laptops, or you're being extremely narrow minded with Ivy Bridge. I was hoping for more, but we're looking at one set of hardware (i7-3720QM, 8GB RAM, 750GB 7200RPM HDD, switchable GT 630M GPU, and a 15.6" LCD that can hit 430 nits), and we're looking at it several weeks before it will go on sale. That battery life isn't a huge leap forward isn't a real surprise.

    SNB laptops draw around 10W at idle, and 6-7W of that is going to everything besides the CPU. That means SNB CPUs draw around 2-3W at idle. This particular IVB laptop draws around 10W at idle, and all of the other components (especially the LCD) will easily draw at least 6-7W, which means once again the CPU is using 2-3W at idle. Even if IVB drew 0W at idle, the best we could hope for would be a 50% improvement in battery life.

    As for the final comment, 22nm and tri-gate transistors are hardly a failure. They're not the revolution many hoped for, at least not yet. Need I point out that Intel's first 32nm parts (Arrandale) also failed to eclipse their outgoing and mature 45nm parts? I'm not sure what the launch time frame is for ULV IVB, but I suspect by the time we see those chips 22nm will be performing a lot better than it is in the first quad-core chips.

    From my perspective, to shrink a process node, improve performance of your CPU by 5-25%, and keep power use static is still a definite success and worthy of praise. When we get at least three or four other retail IVB laptops in for review, then we can actually start to say with conviction how IVB compares to SNB. I think it's better and a solid step forward for Intel, especially for lower cost laptops and ultrabooks.

    If all you're doing is office work, which is what it sounds like, you're right: Core 2, Arrandale, Sandy Bridge, etc. aren't a major improvement. That's because if all you're doing is office work, 95% of the time the computer is waiting for user input. It's the times where you really tax your PC that you notice the difference between architectures, and the change from Penryn to Arrandale to Sandy Bridge to Ivy Bridge represents about a doubling in performance just for mundane tasks like office work...and a lot of people would still be perfectly content to run Word, Excel, etc. on a Core 2 Duo.
  • usama_ah - Monday, April 23, 2012

    Tri-gate is not a failure; this move to tri-gate wasn't expected to bring any crazy performance benefits. Tri-gate was necessary because of the limitations (leakage) of ever smaller transistors. Tri-gate has nothing to do with the architecture of the processor per se; it's more about how each individual transistor is created at such a small scale. Architectural improvements are key to significant improvements.

    Sandy Bridge was great because it was a brand new architecture. If you have been even half-reading what they post on AnandTech, Intel's tick-tock strategy dictates that this move to Ivy Bridge would bring small improvements BY DESIGN.

    You will see improvements in battery life with the NEW architecture AFTER Ivy Bridge (when Intel stays at 22nm), the so-called "tock," called Haswell. And yes, tri-gate will still be in use at that time.
  • krumme - Monday, April 23, 2012

    As I understand tri-gate, it provides the opportunity for even finer-grained power control in the individual transistor, by using different numbers of gates. If you design your architecture to the process (using that opportunity - as IB does not, but the first 22nm Atom apparently does), there should be "huge" savings.

    I assume that by BY DESIGN you mean "by process", btw.

    In my world, process improvement is key to most industrial production, with tools often being the weak link. The process decides what is possible in your design. That's why Intel has spent billions "just" installing the right equipment.
  • JarredWalton - Monday, April 23, 2012

    No, he means Ivy Bridge is not the huge leap forward by design -- Intel intentionally didn't make IVB a more complex, faster CPU. That will be Haswell, the 22nm tock to the Ivy Bridge tick. Making large architectural changes requires a lot of time and effort, and making the switch between process nodes also requires time and effort. If you try to do both at the same time, you often end up with large delays, and so Intel has settled on a "tick tock" cadence where they only do one at a time.

    But this is all old news and you should be fully aware of what Intel is doing, as you've been around the comments for years. And why is it you keep bringing up Atom? It's a completely different design philosophy from Ivy Bridge, Sandy Bridge, Merom/Conroe, etc. Atom is more a competitor to ARM SoCs, which have roughly an order of magnitude less compute performance than Ivy Bridge.
  • krumme - Monday, April 23, 2012

    - Intel is speeding up Atom development - not relying on depreciated equipment for it in the future.
    - Intel invests heavily to get into new business areas, and has done so for years.
    - Haswell will probably be slimmer on the CPU part.

    The reason they do so is that the need for CPU power outside the server market is stagnating. New third-world markets are emerging. And everything is turning mobile - it's all over your front page now, I can see.

    The new Atom will probably be adequate for most (like, say, Core 2 CULV). Then they will have the perfect product. It's about mobility and price, and price. Haswell will probably be the product for the rest of the mainstream market, leaving even less room for the dedicated GPU.

    IB is an old-style desktop CPU, maturing a not-quite-ready 22nm tri-gate process. It was designed to fight a Bulldozer that did not arrive. That's why it does not impress. And you can tell Intel knows it, because the mobile lineup is so slim.

    The market has changed. AMD's share price has rocketed even though their high-end CPU failed, because the Atom-sized Bobcat and old-technology Llano could enter the new market. I could not have imagined the success of Llano. I didn't understand the purpose of it, because Trinity was coming so soon after. But the numbers speak for themselves. People buy a user experience where it matters at the lowest cost, not PCMark scores, encoding times, zip, unzip.

    You have to use new benchmarks. And they have to be reinvented again. They have to make sense. Obviously the CPU has to play a smaller role and the rest a bigger one. You have a very strong team, if not the strongest out there. Benchmark methodology should be at the top of your list and should get a lot of your development time.
  • JarredWalton - Monday, April 23, 2012

    The only benchmarks that would make sense under your new paradigm are graphics and video benchmarks, well, and battery life as well, because those are the only areas where a better GPU matters. Unless you have some other suggestions? Saying "CPU speed is reaching the point where it really doesn't matter much for a large number of people" is certainly true, and I've said as much on many occasions. Still, there's a huge gulf between Atom and Core 2 still, and there are many tasks where CULV would prove insufficient.

    By the time the next Atom comes out, maybe it will be fixed in the important areas so that stuff like YouTube/Netflix/Hulu all work without issue. Hopefully it also supports at least 4GB RAM, because right now the 2GB limit along with bloated Windows 7 makes Atom a horrible choice IMO. Plus, margins are so low on Atom that Intel doesn't really want to go there; they'd rather figure out ways to get people to continue paying at least $150 per CPU, and I can't fault their logic. If CULV became "fast enough" for everyone Intel's whole business model goes down the drain.

    Funny thing is that even though we're discussing Atom and by extension ARM SoCs, those chips are going through the exact same rapid increases in performance. And they need it. Tablets are fine for a lot of tasks, but opening up many web sites on a tablet is still a ton slower than opening the same sites on a Windows laptop. Krait and Tegra 3 still deliver about 1/3 of the performance I want from a CPU.

    As for your talk about AMD share prices, I'd argue that AMD share prices have increased because they've rid themselves of the albatross that was their manufacturing division. And of course, GF isn't publicly traded and Abu Dhabi has plenty of money to invest in taking over CPU manufacturing. It's a win-win scenario for those directly involved (AMD, UAE), though I'm not sure it's necessarily a win for everyone.
  • bhima - Monday, April 23, 2012

    I figure Intel wants everyone to want their CULV processors, since they seem to charge OEMs the most for them. Or are the profit margins not that great because they're a more difficult/expensive processor to make?
  • krumme - Tuesday, April 24, 2012

    Yes - video and gaming are what matter for the consumer now; everything else is okay, as it will - hopefully - be in 2014. What matters is the SSD, screen quality, and everything else - just not CPU power. The CPU just needs to take up far less space. The CPU having so much space is just old habit for us old geeks.

    AMD getting rid of the GF burden has been in the plan for years. It's known and cannot move the share price. Basically the (late) shift to a mobile focus, and the excellent execution of those consumer-shaped (not reviewer-shaped) APUs, is part of the reason.

    The reviewers need to shift their mindset :) - btw, it's my impression Dustin is more in line with what the general consumer wants. Ask him if he thinks the consumer wants a new SSD benchmark with 100 hours of 4K reads and writes.
  • MrSpadge - Monday, April 23, 2012

    No, the finer granularity is just a nice side effect (which could probably be used more aggressively in the future). However, the main benefit of tri-gate is more control over the channel, which enables IB to reach high clock speeds at comparatively low voltages, and with very low leakage.
