This week Intel will begin sharing the first details of its Ivy Bridge processor (the 2012 Core i-series CPU) at the annual Intel Developer Forum in San Francisco. The show doesn't officially start until Tuesday, but we already have some early information about the chip.

Sandy Bridge was Intel's first high-end architecture to integrate a GPU on-die. The SNB GPU comes in two configurations: GT1 with 6 EUs (execution units, Intel's name for its shader processors/cores) and GT2 with 12 EUs. All mobile parts ship with GT2, while most desktop parts ship with GT1. Intel brands GT2 as HD Graphics 3000 and GT1 as HD Graphics 2000. There's also a cut-down version of GT1, simply called Intel HD Graphics, found in Sandy Bridge Pentium and Celeron CPUs.

Ivy Bridge's GT2 configuration has 16 EUs; there's no word yet on how many the GT1 configuration will have. From those extra units Intel expects a 60% increase in 3DMark Vantage scores (Performance preset) and a 30% increase in 3DMark06 scores. IVB GT1, on the other hand, should only see performance improve by 10-20%. Going by the 3DMark Vantage data from our Llano notebook review, a 60% increase over SNB would put Ivy Bridge's GPU performance around that of AMD's A8. It remains to be seen how well this translates into actual gaming performance, though.
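
To put the claimed uplift in concrete terms, here's a minimal back-of-the-envelope sketch; the baseline score is a hypothetical placeholder, not a measured result:

```c
/* Rough projection of the claimed Ivy Bridge GT2 uplift. The baseline is a
 * hypothetical placeholder, not a measured Sandy Bridge HD 3000 score. */
#include <stdio.h>

int main(void) {
    double snb_gt2_vantage = 1600.0;  /* hypothetical SNB GT2 Vantage GPU score */
    double claimed_uplift  = 1.60;    /* Intel's claimed ~60% increase */
    printf("Projected IVB GT2 score: %.0f\n",
           snb_gt2_vantage * claimed_uplift);
    return 0;
}
```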

The rest of the information about Ivy Bridge's GPU has been known for a while: DX11, OpenCL 1.1 and OpenGL 3.1 will all be supported. The last tidbit we have is that Quick Sync performance is apparently much improved. Intel is privately claiming up to 2x the accelerated video transcoding performance of Sandy Bridge, or smaller gains paired with improved image quality. The performance improvements only apply to GT2 IVB configurations.
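
For developers, the OpenCL 1.1 claim will be easy to verify once hardware and drivers ship. Below is a minimal sketch using the standard OpenCL host API (nothing Ivy Bridge specific; it assumes an installed OpenCL runtime and links with -lOpenCL):

```c
/* Minimal sketch: enumerate OpenCL platforms and GPU devices and print
 * the supported version string (e.g. "OpenCL 1.1 ..."). Standard OpenCL
 * host API only. Build: cc query.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, platforms, &nplat);
    if (nplat > 8) nplat = 8;

    for (cl_uint i = 0; i < nplat; i++) {
        char version[256];
        clGetPlatformInfo(platforms[i], CL_PLATFORM_VERSION,
                          sizeof(version), version, NULL);
        printf("Platform %u: %s\n", i, version);

        cl_device_id devices[8];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_GPU,
                           8, devices, &ndev) != CL_SUCCESS)
            continue;  /* no GPU device on this platform */
        if (ndev > 8) ndev = 8;

        for (cl_uint j = 0; j < ndev; j++) {
            char name[256];
            clGetDeviceInfo(devices[j], CL_DEVICE_NAME,
                            sizeof(name), name, NULL);
            printf("  GPU device: %s\n", name);
        }
    }
    return 0;
}
```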

 

Comments

  • silverblue - Monday, September 12, 2011 - link

    Exactly. On the performance front, the touted 60% increase may be commendable, but let's consider for a moment that a) Trinity is on its way, and b) Llano is already that far ahead. I'm very happy to see Intel compete in this area, but until their drivers improve, you have to ask what else they can do - add more hardware or pump up the clocks?
  • tipoo - Monday, September 12, 2011 - link

    What about the ULV version of the Ivy Bridge GPU? In Sandy Bridge it carried the same name but delivered different performance: it was clocked at 350MHz instead of 650MHz, and turboed up to 1GHz instead of 1.2GHz.
  • KPOM - Monday, September 12, 2011 - link

    The new i5-2467M (1.6GHz) turbos its GPU up to 1.15GHz and the i7-2677M up to 1.2GHz.
  • iwodo - Monday, September 12, 2011 - link

    Intel should have used GT2 in all of their processors.

    I was expecting performance to double; up to 60% is still not good enough.

    Driver updates are still very slow - I don't expect monthly updates like Nvidia or ATI, but Intel should at least ship quarterly GPU driver updates.

    Why no OpenGL 4?

    2x Quick Sync speed would finally make it faster than x264's fastest preset.
  • KPOM - Monday, September 12, 2011 - link

    I'm actually slightly impressed. I was expecting a 30% improvement based on what Intel had previously said. Sandy Bridge graphics are OK for many mainstream games at low detail settings; this would make them acceptable at medium detail.
  • MarkLuvsCS - Monday, September 12, 2011 - link

    The performance increase will be a welcome benefit, but I think the best part is the expected quality improvement. I know this is still a major concern for people trying to make the best-quality backups of their videos. Of the three options - GPU transcoding, Quick Sync and x264 - GPU transcoding seems to produce the worst quality.
  • shivoa - Monday, September 12, 2011 - link

    Agreed with the above on the lack of OGL4.x support. This seems to be a trend of Intel neglecting OGL support on GPUs whose DX version implies more capable hardware that is probably only driver-locked to earlier OGL versions.

    SB's support for only OGL3.1 (rather than the full-fat 3.2, or 3.3 with its backported features) created extra work for graphics programmers by introducing a class of GPU that can run a new DX10 engine but can't take an OGL3.3 Mac/Linux engine with equivalent features. Because OGL3.3 is a backport of newer features to OGL3-level hardware, an OGL4/DX11 engine can be converted to OGL3.3 fairly quickly, whereas targeting OGL3.0-3.2 requires a far more significant rewrite to older conventions and shading language versions. (A sketch of the shader-level difference follows the thread below.)
  • Doormat - Monday, September 12, 2011 - link

    I'm also disappointed. Between the shrink from 32nm to 22nm, 3D gates, and a microarchitecture revision, I was expecting at least a 2x performance increase. 10% on the low end is useless. Looking forward, Intel is setting the "low end must work" bar, and the 2012 GT1 is incredibly weak.
  • iwodo - Tuesday, September 13, 2011 - link

    You pointed out the math. With performance at 2, doubling it would only make it 4. Compare that to ATI or Nvidia, whose performance already starts at 6 or 8. Even 50% of their level would be sufficient.
  • DanNeely - Monday, September 12, 2011 - link

    It's disappointing on the mobile side, but with Intel being pushed hard to drop power consumption in its mobile parts - 17W is reportedly becoming the new standard level with Haswell (vs. 35W now) - a lot of the potential of 22nm has to be spent on cutting power use rather than boosting performance.

    OTOH, if they do offer the GT2 GPU in more of their mainstream desktop parts, it will be a 3.2x boost there.
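
Picking up on shivoa's OpenGL comment above, here's a minimal sketch of the shader-level gap between the two GL generations; the trivial shaders are hypothetical examples, not taken from any real engine:

```c
/* Sketch of the porting gap shivoa describes: the same trivial vertex
 * shader written for GLSL 3.30 (OpenGL 3.3 / DX11-era conventions) and
 * for GLSL 1.40 (OpenGL 3.1, the level Sandy Bridge exposes). */
#include <stdio.h>

/* OpenGL 3.3 style: layout qualifiers pin attribute locations in GLSL,
 * so an OGL4/DX11-style engine can keep its binding model. */
static const char *vs_glsl330 =
    "#version 330 core\n"
    "layout(location = 0) in vec3 position;\n"
    "void main() { gl_Position = vec4(position, 1.0); }\n";

/* OpenGL 3.1 style: no layout qualifiers allowed; the host code must
 * instead call glBindAttribLocation(program, 0, "position") before
 * linking, which ripples through an engine's shader pipeline. */
static const char *vs_glsl140 =
    "#version 140\n"
    "in vec3 position;\n"
    "void main() { gl_Position = vec4(position, 1.0); }\n";

int main(void) {
    printf("GLSL 3.30 shader:\n%s\nGLSL 1.40 shader:\n%s",
           vs_glsl330, vs_glsl140);
    return 0;
}
```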
