While compute functionality could technically be shoehorned into DirectX 10 GPUs such as Sandy Bridge through DirectCompute 4.x, neither Intel's nor AMD's DX10 GPUs were really meant for the task, and even NVIDIA's DX10 GPUs paled in comparison to what the company has achieved with its DX11 generation GPUs. As a result, Ivy Bridge is the first truly compute-capable GPU from Intel. This marks an interesting step in the evolution of Intel's GPUs, as originally projects such as Larrabee Prime were supposed to help Intel bring together CPU and GPU computing by creating an x86-based GPU. With Larrabee Prime canceled, however, that task falls to the latest rendition of Intel's GPU architecture.

With Ivy Bridge Intel will be supporting not only DirectCompute 5—which is dictated by DX11—but also the more general, compute-focused OpenCL 1.1. Intel has backed OpenCL development for some time and currently offers an OpenCL 1.1 runtime for their CPUs; however, an OpenCL runtime for Ivy Bridge's GPU will not be available at launch. As a result Ivy Bridge is limited to DirectCompute for the time being, which limits just what kind of compute performance testing we can do with it.
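
Checking for that runtime from an application's point of view is straightforward, since OpenCL exposes installed platforms and devices through its standard C API. The following is a minimal, hedged sketch of that enumeration—generic OpenCL 1.1 API usage, not Intel-specific code and not part of this review's test suite:

```cpp
#include <CL/cl.h>
#include <cstdio>

// Minimal probe: does any installed OpenCL platform expose a GPU device?
int main() {
    cl_uint num_platforms = 0;
    if (clGetPlatformIDs(0, nullptr, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        printf("No OpenCL platforms installed.\n");
        return 1;
    }

    cl_platform_id platforms[16];
    clGetPlatformIDs(num_platforms > 16 ? 16 : num_platforms, platforms, nullptr);

    for (cl_uint i = 0; i < num_platforms && i < 16; ++i) {
        cl_device_id device;
        cl_uint num_devices = 0;
        // Ask this platform specifically for GPU devices (a CPU-only runtime,
        // like Intel's at Ivy Bridge's launch, reports none here).
        if (clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_GPU, 1, &device, &num_devices) == CL_SUCCESS
            && num_devices > 0) {
            char name[256] = {0};
            clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, nullptr);
            printf("GPU OpenCL device found: %s\n", name);
            return 0;
        }
    }
    printf("OpenCL is installed, but no GPU runtime was found.\n");
    return 1;
}
```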

Our first compute benchmark comes from Civilization V, which uses DirectCompute 5 to decompress textures on the fly. Civ V includes a sub-benchmark that exclusively tests the speed of its texture decompression algorithm by repeatedly decompressing the textures required for one of the game’s leader scenes. And while games that use GPU compute functionality for texture decompression are still rare, the technique is becoming increasingly common, as it's a practical way to pack textures in the most suitable manner for shipping rather than being limited to DX texture compression.

As we alluded to in our look at Civilization V's performance in game mode, Ivy Bridge ends up being compute limited here. It's well ahead of the even more DirectCompute-anemic Radeon HD 5450—in spite of the fact that it can't take a lead in game mode—but it's slightly trailing the GT 520, which has a similar amount of compute performance on paper. This largely confirms what we know from the specs for HD 4000: it can pack a punch in pushing pixels, but given a shader-heavy scenario it's going to have a great deal of trouble keeping up with Llano and its much greater shader performance.

But with that said, Ivy Bridge is still reaching 55% of Llano's performance here, thanks to AMD's overall lackluster DirectCompute performance on their pre-7000 series GPUs. As a result Ivy Bridge versus Llano isn't nearly as lop-sided as the paper specs tell us; Ivy Bridge won't be able to keep up in most situations, but in DirectCompute it isn't necessarily a goner.

And to prove that point, we have our second compute test: the Fluid Simulation Sample in the DirectX 11 SDK. This program simulates the motion and interactions of a 16k particle fluid using a compute shader, with a choice of several different algorithms. In this case we’re using an O(n²) nearest-neighbor method that is optimized by using shared memory to cache data.
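
The SDK sample itself implements this as a DirectCompute (HLSL) shader using groupshared memory; purely as an illustration of the same shared-memory caching pattern, here is a rough CUDA sketch of an O(n²) interaction pass, with a placeholder pairwise term standing in for the sample's actual fluid math and an assumed particle layout:

```cpp
#include <cuda_runtime.h>

struct Particle { float x, y, z, w; };   // placeholder layout: position + padding

#define TILE 256                          // threads per block = particles staged per tile

// One thread per particle: accumulate interactions against all n particles,
// staging TILE particles at a time in shared memory so each particle is read
// from global memory once per block instead of once per thread.
// Launch (sketch): interactAll<<<(n + TILE - 1) / TILE, TILE>>>(d_particles, d_accel, n);
__global__ void interactAll(const Particle* __restrict__ particles,
                            float3* __restrict__ accel, int n) {
    __shared__ Particle tile[TILE];

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float3 acc = make_float3(0.f, 0.f, 0.f);
    Particle self = (i < n) ? particles[i] : Particle{0, 0, 0, 0};

    for (int base = 0; base < n; base += TILE) {
        int j = base + threadIdx.x;
        tile[threadIdx.x] = (j < n) ? particles[j] : Particle{0, 0, 0, 0};
        __syncthreads();                  // wait until the tile is fully loaded

        for (int k = 0; k < TILE && base + k < n; ++k) {
            // Placeholder pairwise term; a real SPH fluid kernel would evaluate
            // density/pressure from a smoothing function here instead.
            float dx = tile[k].x - self.x;
            float dy = tile[k].y - self.y;
            float dz = tile[k].z - self.z;
            float invDist = rsqrtf(dx * dx + dy * dy + dz * dz + 1e-6f);
            float s = invDist * invDist * invDist;
            acc.x += dx * s; acc.y += dy * s; acc.z += dz * s;
        }
        __syncthreads();                  // don't overwrite the tile while it's in use
    }
    if (i < n) accel[i] = acc;
}
```

The point of the tiling is that each particle's data is fetched from memory once per thread group rather than once per thread, which is why shared memory and cache throughput—rather than raw ALU throughput—dominate this particular test.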

Thanks in large part to its new dedicated L3 graphics cache, Ivy Bridge does exceptionally well here. The framerate of this test is entirely arbitrary, but what isn't is the performance relative to other GPUs; Ivy Bridge is well within the territory of budget-level dGPUs such as the GT 430 and Radeon HD 5570, and for the first time it's ahead of Llano, taking a lead just shy of 10%. The fluid simulation sample is a very special case—most compute shaders won't be nearly this heavily reliant on shared memory performance—but it's the perfect showcase for Ivy Bridge's ideal performance scenario. Ultimately this is just as much a story of AMD losing due to poor DirectCompute performance as it is Intel winning due to a speedy L3 cache, but it shows what is possible. The big question now is what OpenCL performance is going to be like, since AMD's OpenCL performance doesn't have the same kind of handicaps as their DirectCompute performance.

Synthetic Performance

Moving on, we'll take a few moments to look at synthetic performance. Synthetic performance is a poor tool to rank GPUs—what really matters is the games—but by breaking down workloads into discrete tasks it can sometimes tell us things that we don't see in games.

Our first synthetic test is 3DMark Vantage’s pixel fill test. Typically this test is memory bandwidth bound as the nature of the test has the ROPs pushing as many pixels as possible with as little overhead as possible, which in turn shifts the bottleneck to memory bandwidth so long as there's enough ROP throughput in the first place.

It's interesting to note here that as DDR3 clockspeeds have crept up over time, IVB now has as much memory bandwidth as most entry-to-mainstream level video cards, where 128-bit DDR3 is equally common. Or on a historical basis, at this point it's half as much bandwidth as powerhouse video cards of yesteryear, such as the 256-bit GDDR3-based GeForce 8800GT.

Altogether, with 29.6GB/sec of memory bandwidth available to Ivy Bridge with our DDR3-1866 memory, Ivy Bridge ends up being able to push more pixels than Llano, more pixels than the entry-level dGPUs, and even more pixels than budget-level dGPUs such as the GT 440 and Radeon HD 5570, which have just as much dedicated memory bandwidth. Or put in numbers, Ivy Bridge is pushing 42% more pixels than Sandy Bridge and 25% more pixels than the otherwise more powerful Llano. And since pixel fillrates are so memory bandwidth bound, Intel's L3 cache is almost certainly once again playing a role here; however, it's not clear to what extent that's the case.
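
For reference, those bandwidth comparisons are simple back-of-the-envelope math—bus width times transfer rate—as the sketch below shows. Nominal transfer rates are assumed here, so the results land within rounding distance of the figures quoted above rather than matching them exactly:

```cpp
#include <cstdio>

// Theoretical peak bandwidth = (bus width in bytes) * (transfer rate in MT/s).
static double peakGBps(int busWidthBits, double megaTransfersPerSec) {
    return (busWidthBits / 8.0) * megaTransfersPerSec / 1000.0;   // GB/s
}

int main() {
    // Ivy Bridge: dual-channel DDR3-1866 = 2 x 64-bit channels (nominal)
    printf("Ivy Bridge, dual-channel DDR3-1866: %.1f GB/s\n", peakGBps(128, 1866.0));
    // An assumed 128-bit DDR3-1800 configuration, typical of budget dGPUs
    printf("128-bit DDR3-1800 budget dGPU:      %.1f GB/s\n", peakGBps(128, 1800.0));
    // GeForce 8800 GT: 256-bit GDDR3 at 1800MT/s effective
    printf("GeForce 8800 GT, 256-bit GDDR3:     %.1f GB/s\n", peakGBps(256, 1800.0));
    return 0;
}
```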

Moving on, our second synthetic test is 3DMark Vantage’s texture fill test, which provides a simple FP16 texture throughput test. FP16 textures are still fairly rare, but it's a good look at worst case scenario texturing performance.

After Ivy Bridge's strong pixel fillrate performance, its texture fillrate brings us back down to earth. At this point performance is once again much closer to the entry-level GPUs, and also well behind Llano. Here we see that Intel's texture performance also scales linearly with the increase in EUs from Sandy Bridge to Ivy Bridge, indicating that those texture units are being put to good use, but at the same time it means Ivy Bridge has a long way to go to catch Llano's texture performance, achieving only 47% of Llano's performance here. The good news for Intel is that texture size (and thereby texel density) hasn't increased much over the past couple of years in most games; the bad news is that we're finally starting to see that change as dGPUs ship with more VRAM.

Our final synthetic test is the set of settings we use with Microsoft’s Detail Tessellation sample program out of the DX11 SDK. Since IVB is Intel's first iGPU with tessellation capabilities, it will be interesting to see how well it does here, as it's going to be the de facto baseline for DX11+ games in the future. Ideally we want to have enough tessellation performance here so that tessellation can be used on a global level, allowing developers to efficiently simulate their worlds with fewer polygons while still using many polygons on the final render.

The results here are actually pretty decent. Compared to what we've seen with shader and texture performance, where Ivy Bridge is largely tied at the hip with the GT 520, at lower tessellation factors Ivy Bridge manages to clearly overcome both the GT 520 and the Radeon HD 5450. Per unit of compute performance, Intel looks to have more tessellation performance than AMD or NVIDIA, which means Intel is setting a pretty good baseline for tessellation performance. Tessellation performance does dip at higher tessellation factors, however, with Ivy Bridge giving up much of its lead over the entry-level dGPUs, but it still manages to stay ahead of both of its competitors.

Comments

  • Shadowmaster625 - Monday, April 23, 2012

    I would like to start using quicksync, but 2 mbps for a tablet is way too much for me. I just want to quickly take a video and transcode it. There is nothing quick about copying a 1+ gigabyte file onto a tablet or phone. It does no good to be able to transcode faster than you can even copy it LOL. Can quicksync go lower? I want no more than 800 kbps, 400-600 ideally.

    Also, is it possible to transcode and copy at the same time? Is anyone doing that?
  • BVKnight - Tuesday, April 24, 2012

    When you mention "2 mbps," I think you are referring to the bitrate, which is generally synonymous with the quality of the encoding.

    "It does no good to be able to transcode faster than you can even copy" <---I think this is completely false. The transcoding is a separate file conversion step that creates the final version which you will move to your device. Your machine won't even start copying until transcoding is complete, which means that every little bit of speed you can add to the transcoding process will directly reduce the amount of time it takes to get your file on your device.

    Getting quicksync will make a huge difference for your encoding.
  • ncrubyguy - Monday, April 23, 2012

    "Features like VT-d and Intel TXT are once again reserved for regular, non-K-series parts alone."

    Why do they keep doing that?
  • JarredWalton - Monday, April 23, 2012

    Because those are mostly for business users, and business users don't overclock and thus don't need K-series.
  • Old_Fogie_Late_Bloomer - Monday, April 23, 2012

    I have a feeling that the real reason is that, if business users could get those features on a K-series processor, it would largely obviate the need/demand for SB-E. A 2600K/2700K overclocked up to, say, 4.5 GHz--which seems consistently achievable, even conservative--would compare very favorably to the 3930K, given the prices of both.

    Yes, I know you can overclock the 3930K, and yes, I know it has six cores and four memory controllers and more cache. But I bet that overclocked SB or IB with VT-d, &c., would make a lot of sense for a lot of applications, given price/performance considerations.
  • piroroadkill - Monday, April 23, 2012

    I'd be very interested in seeing overclocked 2500K and 2600K benchmarks tossed in, because let's be honest, one of those is the most popular CPU at the high end right now, and anyone with one has bumped it to at least 4.3GHz, often about 4.4-4.5.

    I think it would be nice to have a visual aid to see how that fares, but I understand the impracticality of doing so.
  • Rasterman - Monday, April 23, 2012

    Thank you for including this section, it is great. I think it would be more relevant for people though if it were a much smaller test. I think pretty much anyone is going to know that a project of that size is going to be faster with more cores and speed. What isn't so obvious though are smaller projects, where you are compiling only a few files and debugging. A typical cycle for almost all developers is: making changes, compiling, debugging to test them out. Even though you are only talking times of a few seconds, add this up over 100s-1000s of iterations per day and it makes a difference; I base my entire computer hardware selection around this workflow. For now I use the single threaded benchmarks you post as a guide.
  • iGo - Monday, April 23, 2012

    The features table has put me in a great dilemma. I'm very much interested in running multiple virtual machines on my desktop, for debugging and testing purposes. Although I won't be running these virtual boxes 24x7, it would be great to have processor support for any kind of hardware acceleration that I can get whenever I fire them up for testing. On the other hand, the ability to overclock the K series processor is really tempting, and yes, a decent/modest overclock of say, 4.2-4.5GHz sounds lovely for 24x7 use.

    Anyone using SNB/Intel processors with VT-d can share if it's worth going for a non-K processor to get better virtualization performance? To be more clear, my primary job involves web-application development with UX development, for which I require varied testing under different browsers. Currently I've set up 4 different virtual machines on my desktop with different browsers installed on different Windows OS versions, although these machines will never run 24x7 and never all at once (max 2 at once when testing). Apart from that, I also do a lot of photo editing (RAW files, Lightroom and works) and a bit of video editing/encoding on my desktop (mostly personal projects, rarely commercial work). Is it better to opt for the 3770 for better virtual machine performance or the 3770K with the chance to boost overall performance by overclocking?
  • dcollins - Monday, April 23, 2012

    At the moment, VT-d will not give you any additional performance on your VMs using desktop virtualization programs like VMware Workstation or VirtualBox. Neither supports VT-d right now. Based on progress this year, I expect VT-d support is still a year away in VirtualBox, which is what I use.

    VT-d doesn't help performance in general; instead, VT-d allows VMs to directly access computer hardware. This is essential for high performance networking on servers or for accessing certain hardware like sound cards where low latency is crucial. For your workload, the only advantage will be slightly higher network speeds using native drivers versus a bridged connection. It may facilitate testing GPU accelerated browsers in the future as well.

    If you plan on overclocking, the K series is worth losing VT-d for.
  • iGo - Monday, April 23, 2012

    Thanks, that helps a lot. I've been reading about VT-d, and your comment confirms where my thinking was going. I guess 3770K it is, then. :)
