Overclocking Intel’s HD Graphics - It Works...Very Well

The coolest part of my job is being able to work with some ridiculously smart people. One such person gave me the idea to try overclocking the Intel HD graphics core on Clarkdale a few weeks ago. I didn’t get time to do it with the Core i5 661, but today is a different day.

Clarkdale offers three different GPU clocks depending on the model:

Processor             Intel HD Graphics Clock
Intel Core i5-670     733MHz
Intel Core i5-661     900MHz
Intel Core i5-660     733MHz
Intel Core i5-650     733MHz
Intel Core i3-540     733MHz
Intel Core i3-530     733MHz
Intel Pentium G6950   533MHz

The Core i5 661 runs it at the highest speed - 900MHz. The rest of the Core i5 and i3 processors pick 733MHz. And the Pentium G6950 has a 533MHz graphics clock.

Remember that the Intel HD Graphics die is physically separate from the CPU die on Clarkdale. It's a separate 45nm die sharing the CPU package, and I'm guessing it's not all that difficult to make. If AMD can reliably ship GPUs with hundreds of shader processors, Intel can probably make a chip with 12 execution units without much complaining.

So the theory is that these graphics cores are easily overclockable. I fired up our testbed and adjusted the GPU clock. It’s a single BIOS option and without any changes to voltage or cooling I managed to get our Core i3 530’s GPU running at 1200MHz. That’s a 64% overclock!
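
If you want to double-check that figure, the overclock percentage is just the ratio of the two clocks. Here's a quick Python sketch, purely illustrative, using the clocks quoted above:

    # Sketch: overclock percentage for the Core i3 530's graphics core
    stock_mhz = 733   # stock graphics clock (from the table above)
    oc_mhz = 1200     # highest stable clock reached

    overclock_pct = (oc_mhz / stock_mhz - 1) * 100
    print(f"Overclock: {overclock_pct:.0f}%")  # -> Overclock: 64%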

I could push the core as high as 1400MHz and still get into Windows, but the system stopped being able to render any 3D games at that point.

I benchmarked World of Warcraft with the Core i3 running at three different GPU clocks to show the potential for improvement:

CPU (Graphics Clock)              World of Warcraft
Intel Core i5 661 (900MHz gfx)    14.8 fps
Intel Core i3 530 (733MHz gfx)    12.5 fps
Intel Core i3 530 (900MHz gfx)    14.2 fps
Intel Core i3 530 (1200MHz gfx)   19.0 fps

A 64% overclock resulted in a 52% increase in performance. If Intel wanted to, it could easily make its on-package GPU a lot faster than it is today. I wonder if this is what we’ll see with Sandy Bridge and graphics turbo on the desktop.
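
To make the scaling explicit, here's a similar Python sketch using the Core i3 530 numbers from the table above; "scaling efficiency" is just my shorthand for how much of the clock increase shows up as frame rate:

    # Sketch: how well WoW performance scaled with the graphics clock
    stock_clock, oc_clock = 733, 1200   # MHz (from the table above)
    stock_fps, oc_fps = 12.5, 19.0      # World of Warcraft results

    clock_gain = oc_clock / stock_clock - 1   # ~64%
    fps_gain = oc_fps / stock_fps - 1         # ~52%
    print(f"Clock gain: {clock_gain:.0%}, FPS gain: {fps_gain:.0%}")
    print(f"Scaling efficiency: {fps_gain / clock_gain:.0%}")  # ~82%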

107 Comments

  • Anand Lal Shimpi - Friday, January 22, 2010 - link

    I'm a huge fan of competition, but I'm not sure I want to deal with the 3rd party chipset guys again. As the technologies got more sophisticated we started to see serious problems with all of the 3rd party guys (Remember VIA when we got PCIe? Remember the issues with later nForce chipsets?). As more features get integrated onto the CPU, the role of competition in the chipset space is less interesting to me.

    While I'd love to see competition there, I'm not willing to deal with all of the compatibility and reliability nightmares that happened way too often.

    Take care,
    Anand
  • Penti - Sunday, January 24, 2010 - link

    Also, the competition didn't help with K8 chipset prices, even though chipset makers no longer had to do a memory controller and some registers/logic had moved to the CPU, and even though single-chip chipsets were possible and available.

    I agree, I'm glad to see the third party chipsets gone. It would just be bad in the corporate space today, where features such as remote management need to be built in. (Better to just have that removed from the consumer products than to have very different products.) Unless Intel/AMD gives away (or sells) IP blocks to the third parties, I don't see the point. Extra features can perfectly well be implemented/integrated directly on the motherboard, and that would be enough to give incentive and pressure for the chipset maker (Intel or AMD) to better itself. Unless chipsets move to the SoC model, I guess that will do. It's not like VIA, NVIDIA and SiS beat Intel when they made chipsets for the QPB bus/P4. Plus I doubt they could make a cheaper southbridge for the Nehalem/DMI platform: it's still SATA, USB, PCIe, Ethernet MAC/PHY, audio, SPI/LPC and other legacy I/O, as well as the video/display output stuff.
  • geok1ng - Friday, January 22, 2010 - link

    The FPS numbers for this Intel IGP are too good to be true. Intel has cheated before with IGPs that didn't render all polygons and textures in order to achieve "competitive" frames per second numbers.

    Hence I request (once again) side by side screenshots to put aside any doubts that this "competitive" IGP from Intel has image quality similar to that of NVIDIA and ATI integrated graphics.
  • silverblue - Friday, January 22, 2010 - link

    I'm still not convinced by the IGP. Those 661 results are a 900MHz sample vs. the 700MHz HD 3300 on the 790GX board, and the 530 uses a 733MHz IGP. In every single case the AMD solution wins, be it by a small margin (Dragon Age: Origins) or a large one (CoD: MW2) against the 530; in general, AMD's best is probably about 20-25% better, clock for clock, than Intel's, all depending on the title of course. Overclocking the new IGP turns out excellently; however, we're still comparing it to a relatively old IGP from AMD.

    If AMD updates their IGP and bumps up the clock speed, the gap will increase once again. There's nothing to say they can't bring out a 900MHz, 40nm flavour of the 3300 or better now that TSMC have sorted out their production issues. Intel's IGPs are on a 45nm process so AMD may produce something easily superior that doesn't require too much power. However... I'm still waiting.

    Intel are definitely on the right track, though they need to do something about the amount of work per clock.
  • IntelUser2000 - Monday, January 25, 2010 - link

    Silverblue, I don't know how it is on the GMA HD, but at least up to the GMA 4500, the Intel iGPUs were clocked in a similar way to the NVIDIA iGPUs. The mentioned clocks are for the shader core; the rest, like the ROPs and TMUs, likely run lower.

    With AMD, meanwhile, 700MHz is for EVERYTHING.

    Plus, the 780/785/790 has more transistors than the GMA HD. The AMD chipset has 230 million transistors while the GMA HD package has 170 million.

    All in all, the final performance is what matters.
  • Suntan - Friday, January 22, 2010 - link

    I would agree. I question why it wasn't compared against the newer (and usually cheaper) 785G mobos (which are ATI 4200 based systems).

    -Suntan
  • Anand Lal Shimpi - Friday, January 22, 2010 - link

    I chose to compare it to the 790GX because that's currently the fastest AMD IGP on the market. I wanted to put AMD's best foot forward. The 785G offers literally the same performance as the 780G; the more impressive product name comes from its DirectX 10.1 support.

    The 890GX is the next IGP from AMD. In our Clarkdale review I mentioned that Intel simply getting close to AMD's IGP performance isn't really good enough. Unfortunately the current roadmaps have the 890GX IGP running at 700MHz, just like the 790GX. The only question I have is whether or not the 890GX has any more shader cores (currently at 8). We'll find out soon enough.

    Take care,
    Anand
  • Suntan - Friday, January 22, 2010 - link

    [QUOTE]I chose to compare it to the 790GX because that's currently the fastest AMD IGP on the market.[/QUOTE]

    Fair enough. But your own tests in the 785 article show the two being basically identical in performance; the 790 might have some small advantage in certain places (gaming), but if a person wasn't happy with the 785 in that use, I'd wager that they wouldn't be happy with the 790 either.

    At this point, integrated gfx are completely capable for PCs in everything except gaming (and some NLE video editors that use heavy gfx acceleration for pre-visualizing effects). In both those cases, even a $50 dedicated card will far overshadow all the integrated options, to the point that I think it isn't right to concentrate on gaming capability when picking integrated gfx.

    In my opinion, the last area for IGPs to compete in is video acceleration, where the 785 is at parity with or beats the 790 (although they both fall short without digital TrueHD/DTS-HD MA support), and at least the 785 is usually cheaper.

    In any case, the new i3 really doesn't impress me *just* because it has integrated gfx on the chip. It seems that the same thing still holds true, namely: if you're building a computer for anything other than gaming, you can probably build one where an AMD part will result in the same or better performance for less money (TrueHD/DTS-HD MA notwithstanding). If you're building one for gaming, you're probably not going to be using the integrated gfx anyway.

    -Suntan

  • archcommus - Friday, January 22, 2010 - link

    I don't understand what the big deal is about TrueHD/DTS-HD MA bitstreaming. I've been decoding in software and sending 6 channels over 3 analog cables (is this LPCM?) to my 5.1 system ever since I've owned one, and the sound quality is fantastic. Is there really a perceived quality difference with bitstreaming to a high quality receiver if you own a very high end sound system?
  • eit412 - Friday, January 22, 2010 - link

    TrueHD/DTS-HD MA use lossless compression, and if you bitstream them to the receiver instead of decoding in software there is less chance of interference (the longer the signal is in analog form, the more interference is possible). In most cases the difference is not distinguishable, but people love to say that theoretically my stereo is better than yours.
