HD 2500: Compute & Synthetics

While compute functionality could technically be shoehorned into DirectX 10 GPUs such as Sandy Bridge's through DirectCompute 4.x, neither Intel's nor AMD's DX10 GPUs were really meant for the task, and even NVIDIA's DX10 GPUs paled in comparison to what the company has achieved with its DX11 generation. As a result, Ivy Bridge is the first truly compute-capable GPU from Intel. This marks an interesting step in the evolution of Intel's GPUs, as originally projects such as Larrabee Prime were supposed to help Intel bring together CPU and GPU computing by creating an x86-based GPU. With Larrabee Prime canceled, however, that task falls to the latest rendition of Intel's GPU architecture.

With Ivy Bridge, Intel supports not only DirectCompute 5, which is dictated by DX11, but also the more general, compute-focused OpenCL 1.1. Intel has backed OpenCL development for some time and currently offers an OpenCL 1.1 runtime that runs across multiple generations of its CPUs, and now on Ivy Bridge GPUs as well.
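
As an aside, checking whether a given driver stack actually exposes the GPU to OpenCL is easy to do from the standard OpenCL 1.1 host API. The short sketch below is purely illustrative plain C with no Intel-specific extensions; it just enumerates platforms and prints any GPU devices along with the OpenCL version they advertise.

/* Minimal OpenCL 1.1 sketch: enumerate every GPU device the installed
 * runtimes expose, along with the OpenCL version each device reports.
 * Build against the OpenCL headers/ICD loader, e.g. "cc list_gpus.c -lOpenCL". */
#include <CL/cl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    cl_uint numPlatforms = 0;
    clGetPlatformIDs(0, NULL, &numPlatforms);
    cl_platform_id *platforms = malloc(numPlatforms * sizeof(*platforms));
    clGetPlatformIDs(numPlatforms, platforms, NULL);

    for (cl_uint i = 0; i < numPlatforms; i++) {
        char platName[256] = "";
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                          sizeof(platName), platName, NULL);

        cl_uint numDevices = 0;
        if (clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_GPU,
                           0, NULL, &numDevices) != CL_SUCCESS || numDevices == 0)
            continue;                       /* this platform exposes no GPUs */

        cl_device_id *devices = malloc(numDevices * sizeof(*devices));
        clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_GPU, numDevices, devices, NULL);

        for (cl_uint j = 0; j < numDevices; j++) {
            char devName[256] = "", devVersion[64] = "";
            clGetDeviceInfo(devices[j], CL_DEVICE_NAME,
                            sizeof(devName), devName, NULL);
            clGetDeviceInfo(devices[j], CL_DEVICE_VERSION,
                            sizeof(devVersion), devVersion, NULL);
            printf("%s: %s (%s)\n", platName, devName, devVersion);
        }
        free(devices);
    }
    free(platforms);
    return 0;
}

Whether the GPU shows up at all depends on which runtime is installed; Intel's earlier OpenCL packages exposed only CPU devices, so the processor graphics only appears once a driver with GPU OpenCL support is present.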

Our first compute benchmark comes from Civilization V, which uses DirectCompute 5 to decompress textures on the fly. Civ V includes a sub-benchmark that exclusively tests the speed of its texture decompression algorithm by repeatedly decompressing the textures required for one of the game's leader scenes. While games that use GPU compute functionality for texture decompression are still rare, the technique is becoming increasingly common, as it's a practical way to pack textures in whatever manner best suits shipping rather than being limited to the DX texture compression formats.
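
Purely as an illustration of the idea (Civ V's own compression scheme is not public, and the block format below is invented for the example), a compute kernel for this kind of work simply maps one thread per texel, fetches that texel's compressed block, and writes the decoded color to the output texture. This OpenCL C sketch decodes a made-up format of two RGBA8 endpoint colors plus one selection bit per texel in a 4x4 block.

/* Toy on-the-fly texture decompression kernel (illustrative only; this block
 * format is invented for the example and is NOT Civ V's actual scheme).
 * Each 4x4 block = two RGBA8 endpoint colors (8 bytes) + 16 selection bits.
 * Launch as a 2D NDRange matching the texture dimensions (multiples of 4). */
typedef struct { uint color0; uint color1; ushort selectors; ushort pad; } Block;

__kernel void decode_blocks(__global const Block* blocks,
                            __global uint*        outTexels,     /* RGBA8, row-major */
                            const int             widthInBlocks)
{
    int x = get_global_id(0);            /* texel x */
    int y = get_global_id(1);            /* texel y */

    int bx = x / 4, by = y / 4;
    Block b = blocks[by * widthInBlocks + bx];

    int texelIndex = (y % 4) * 4 + (x % 4);
    uint pick = (b.selectors >> texelIndex) & 1u;

    int width = widthInBlocks * 4;
    outTexels[y * width + x] = pick ? b.color1 : b.color0;
}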

Compute: Civilization V

These compute results are mostly academic, as I don't expect anyone to really rely on the HD 2500 for a lot of GPU compute work. With 6 EUs to the HD 4000's 16 (under 40% of the EUs), the HD 2500 delivers under 30% of the performance.

Our second compute test is the Fluid Simulation Sample from the DirectX 11 SDK. This program simulates the motion and interactions of a 16k-particle fluid using a compute shader, with a choice of several different algorithms. In this case we're using an O(n^2) nearest neighbor method that is optimized by using shared memory to cache data.
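
The shared memory optimization referred to here is a classic tiling pattern: each work-group stages a block of particle positions in fast on-chip local memory so that the inner O(n^2) loop reads neighbors from that cache instead of repeatedly fetching from external memory. The kernel below is only an illustrative OpenCL C sketch of that pattern, not the SDK sample's actual shader; the simple density sum stands in for the real SPH math, and GROUP_SIZE is an assumed work-group size.

/* Illustrative OpenCL C sketch of the shared-memory tiling pattern used by
 * O(n^2) particle kernels. Not the DX SDK sample's actual code.
 * Assumes the kernel is enqueued with a local work size of GROUP_SIZE. */
#define GROUP_SIZE 256

__kernel void density_nsquared(__global const float4* pos,      /* particle positions */
                               __global float*        density,
                               const int              numParticles,
                               const float            smoothingRadiusSq)
{
    const int gid = get_global_id(0);
    const int lid = get_local_id(0);
    __local float4 tile[GROUP_SIZE];                 /* staged block of positions */

    float4 myPos = (gid < numParticles) ? pos[gid] : (float4)(0.0f);
    float  sum   = 0.0f;

    /* Walk the whole particle list one work-group-sized tile at a time. */
    for (int base = 0; base < numParticles; base += GROUP_SIZE) {
        int idx = base + lid;
        tile[lid] = (idx < numParticles) ? pos[idx] : (float4)(0.0f);
        barrier(CLK_LOCAL_MEM_FENCE);                /* tile fully loaded */

        /* Inner loop now reads neighbors from fast local memory. */
        for (int j = 0; j < GROUP_SIZE && (base + j) < numParticles; ++j) {
            float4 d  = myPos - tile[j];
            float  r2 = d.x * d.x + d.y * d.y + d.z * d.z;
            if (r2 < smoothingRadiusSq)
                sum += smoothingRadiusSq - r2;       /* stand-in for the SPH kernel */
        }
        barrier(CLK_LOCAL_MEM_FENCE);                /* done with this tile */
    }

    if (gid < numParticles)
        density[gid] = sum;
}

The same structure maps onto a DirectCompute shader using groupshared memory and GroupMemoryBarrierWithGroupSync(), which is the general approach the SDK sample takes.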

DirectX11 Compute Shader Fluid Simulation - Nearest Neighbor

Thanks to its large shared L3 cache, Intel's HD 4000 did exceptionally well here. With significantly fewer EUs, Intel's HD 2500 fares much worse by comparison.

Our last compute test and first OpenCL benchmark, SmallLuxGPU, is the GPU ray tracing branch of the open source LuxRender renderer. We’re now using a development build from the version 2.0 branch, and we’ve moved on to a more complex scene that hopefully will provide a greater challenge to our GPUs.

 

SmallLuxGPU 2.0d4

Intel's HD 4000 does well here for processor graphics, delivering over 70% of the performance of NVIDIA's GeForce GTX 285. The HD 2500 takes a big step backwards though, with less than half the performance of the HD 4000.

Synthetic Performance

Moving on, we'll take a few moments to look at synthetic performance. Synthetic performance is a poor tool to rank GPUs—what really matters is the games—but by breaking down workloads into discrete tasks it can sometimes tell us things that we don't see in games.

Our first synthetic test is 3DMark Vantage's pixel fill test. This test is typically memory bandwidth bound: it has the ROPs push as many pixels as possible with as little overhead as possible, which shifts the bottleneck to memory bandwidth so long as there's enough ROP throughput in the first place.

3DMark Vantage Pixel Fill

It's interesting to note here that as DDR3 clock speeds have crept up over time, IVB now has as much memory bandwidth as most entry-to-mainstream level video cards, where 128-bit DDR3 is equally common. Put in historical terms, that's half as much bandwidth as powerhouse video cards of yesteryear such as the 256-bit GDDR3-based GeForce 8800 GT.
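
To put rough numbers to that comparison: peak DRAM bandwidth is simply bus width times transfer rate. Assuming dual-channel DDR3-1600 for Ivy Bridge and the 8800 GT's 256-bit GDDR3 at an effective 1800 MT/s (illustrative configurations rather than measured figures), the back-of-the-envelope math looks like this:

/* Back-of-the-envelope peak bandwidth: bus width (bits) x transfer rate (MT/s).
 * Assumed configs: dual-channel DDR3-1600 for Ivy Bridge, and 256-bit GDDR3
 * at an effective 1800 MT/s for the GeForce 8800 GT. */
#include <stdio.h>

static double peak_gbps(int bus_bits, double mtps)
{
    return (bus_bits / 8.0) * mtps / 1000.0;   /* bytes per transfer * GT/s = GB/s */
}

int main(void)
{
    printf("Ivy Bridge, 2x64-bit DDR3-1600 : %.1f GB/s\n", peak_gbps(128, 1600.0));
    printf("GeForce 8800 GT, 256-bit GDDR3 : %.1f GB/s\n", peak_gbps(256, 1800.0));
    return 0;
}

That works out to 25.6GB/s for IVB versus 57.6GB/s for the 8800 GT, which is where the roughly "half as much bandwidth" observation comes from.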

Moving on, our second synthetic test is 3DMark Vantage's texture fill test, which measures FP16 texture throughput. FP16 textures are still fairly rare, but this is a good look at worst-case texturing performance: a four-channel FP16 texel is 8 bytes, twice the size of a standard 8-bit RGBA texel.

3DMark Vantage Texture Fill

Our final synthetic test is Microsoft's Detail Tessellation sample program from the DX11 SDK, run at the two detail levels (normal and max) we normally test. Since IVB is the first Intel iGPU with tessellation capabilities, it will be interesting to see how well it does here, as IVB is going to be the de facto baseline for DX11+ games in the future. Ideally we want enough tessellation performance that tessellation can be used on a global level, allowing developers to efficiently simulate their worlds with fewer polygons while still rendering the final image with many polygons.

DirectX11 Detail Tessellation Sample - Normal

DirectX11 Detail Tessellation Sample - Max

The results here are as expected. With far fewer EUs, the HD 2500 falls behind even some of the cheapest discrete GPUs.

GPU Power Consumption

As you'd expect, power consumption with the HD 2500 is tangibly lower than with HD 4000-equipped parts:

GPU Power Consumption Comparison under Load (Metro 2033)

                      Intel HD 2500 (i5-3470)    Intel HD 4000 (i7-3770K)
Intel DZ77GA-70K      76.2W                      98.9W

Running our Metro 2033 test, the HD 4000-based Core i7 drew nearly 30% more power at the wall than the HD 2500-based Core i5.

Comments

  • shin0bi272 - Friday, June 1, 2012 - link

    any gamer with a good quad core doesn't need to upgrade their cpu. Who's going to spend hundreds of dollars to upgrade from another quad core (like let's say my i7 920) to this one for a whopping 7 fps in one game and 1 fps in another? That sounds like something an apple fanboy would do... oh look the new isuch-and-such is out and it's marginally better than the one I spent $x00 on last month, I have to buy it now! no thanks.
  • Sogekihei - Monday, June 4, 2012 - link

    This really depends a lot on what you have (or want) to do with your computer. Architectural differences are obviously a big deal or else instead of an i7-920 you'd probably be rocking a Phenom (1) x4 or Core2 Quad by your logic that having a passable quad core means you don't need to upgrade your processor until the majority of gaming technology catches up.

    Let's take the bsnes emulator as an example here, it focuses on low-level emulation of the SNES hardware to reproduce games as accurately as possible. With most new version releases, the hardware requirements gradually increase as more intricate backend code needs to execute within the same space of time to avoid dropping framerates; being that these games determined their running speed by their framerate and being sub-60 or sub-50 (region-dependent) means running at less than full speed, this could eventually be a problem for somebody wanting to use such an accurate emulator. From what I've heard, most Phenom and Phenom II systems are very bogged down and can barely get any games running at full speed on it these days and from my own experience, Nehalem-based Intel chips either require ludicrous clock speeds or simply aren't capable of running certain games at full speed (such as Super Mario RPG.) Obviously in cases such as this, the performance increases from a new architecture could benefit a user greatly.

    Another example I'll give is based on the probability through my own experiences dealing with other people that the vast majority of gamers DO use their rigs for other tasks too. Any intensive work with maths, spreadsheets, video or image editing and rendering, software development, blueprinting, or anything else you could name that people do on a computer nowadays instead of by hand in order to speed the process will see massive gains when moving to a faster processor architecture. For anybody that has such work to do, be it for a very invested-in hobby, as part of a post-secondary education course, or as part of their career, the few hundred dollars/euros/currency of choice it costs to update their system is easily worth the potentially hundreds or thousands of hours per upgrade cycle they may save through the more powerful hardware.

    I will concede that in today's market, the majority of gaming-exclusive cases don't yield much better results from increasing a processor's power (usually being GPU-limited instead) however that's a very broad statement and doesn't account for things that are heavily multithreaded (like newer Assassin's Creed games) or that are very processor-intensive (which I believe Civilization V can qualify as in mid- to late-game scenarios.)

    There will always be case-specific conditions which will make buying something make sense or not, but do try to keep in mind that a lot of people do have disposable income and will very likely end up putting it into their hobbies before anything else. If their hobbies deal with computers they're likely going to want to always have, to the best extent they can afford, the latest and greatest technology available. Does it mean your system is trash? Of course not. Does it mean they're stupid? No moreso than the man that puts $10 a week into his local lottery and never wins anything. It just comes down to you having different priorities from them.

    The only other thing I want to address is your stance on Apple products. Yes the hipsters are annoying, but you would likely lose the war if you wanted to argue on the upgrade cycle users take with Mac OSX-based computers. New product generations only come about once a year or so and most users wait 2-3 generations before upgrading and quite a few wait much longer than the average Linux/Windows PC user will before upgrading. The ones that don't wait are usually professionals in some sort of graphic arts industry (such as photography) where they need the most processing power, memory, graphics capabilities, and battery life possible and it's a justified business expense.
  • CeriseCogburn - Monday, June 11, 2012 - link

    People usually skip a generation - so from i7 920 we can call it one gen with SB being nearly the same as IB, so you're correct.

    But anyone on core 2 or phenom 2 or athlon 2x or 4x, yeah they could do it and be happy - and don't forget the sata 6 and usb 3 they get with the new board - so it's not just the cpu with IB and SB - you get hard drive and usb speed too.

    So with those extras it could drive a few people in your position - sata 6 card and usb 3 card is half the upgrade cost anyway, so add in pci-e 3 as well. I see some people moving from where you are.
  • ClagMaster - Saturday, June 2, 2012 - link

    The onboard graphics of the Ivy Bridge processors was never seriously intended for playing games. It is intended to replace chipset graphics to support office applications with large LCD monitors. And it adds transcoding capabilities.

    @Anand: If you want to do a more meaningful comparison of graphics performance for those that might be doing gaming, why not test and compare some DX9 games (still being written) from titles available 5 years ago? Real people play these games because they are cheap or free and provide as much entertainment as DX10 or DX11 games. Frame rates will be 60fps or slightly better. Or will your sponsors at nVidia, AMD or Intel not permit this sort of comparison?

    It's ridiculous to compare onboard graphics to discrete graphics performance. A dedicated GPU, optimized for graphics, will always beat an onboard graphics GPU for a given gate size.

    The Ivy Bridge graphics (performance/power consumption), if I interpret these comparisons that have been presented correctly, is also inefficient compared to the processing capabilities of a discrete graphics card.
  • vegemeister - Wednesday, June 6, 2012 - link

    As you mentioned, I'd like to see some mention of the 2D performance. I use Awesome WM on a 3520x1200 X screen, and smooth scrolling can sometimes get choppy with my Graphics My Ass GPU.

    I'd like to upgrade my Core2 duo, but I'm not sure whether the HD2500 graphics in this chip will suffice, or if I need to be looking at higher end CPUs. I don't really care about the difference between shitty 3D and ho-hum 3D.
  • P39Airacobra - Tuesday, July 1, 2014 - link

    That's a shame that they still sell the GT 520 and GT 610 and the ATi 5450. When an integrated GPU like the HD 2500 outperforms a dedicated GPU it's time to retire them from the market. I bought a 3470 and I am running an R9 270 with 8GB of 1600 Ripjaws. I tried out the HD 2500 on the chip just to see how it would do, and it honestly sucked. But for videos and gaming on very low settings it works; it actually surprised me. But I don't think I could ever stand to have an integrated GPU. What's the point in buying an i5 if you are only going to use the integrated gpu? It does not make sense, you may as well keep your old P4 if you are not going to add a real GPU to it. This is why I don't understand the point of an integrated GPU inside a high end processor.