Overclocking the i3 - 4GHz with the Stock Cooler

I’ve become a fan of stock voltage overclocking over the past few years. As power consumption and efficiency have become more important, and manufacturing processes have improved, pushing a CPU as far as it will go without increasing its core voltage appears to be the most efficient way to overclock. You minimize any increase in power consumption while maximizing performance. You also find out whether or not you’ve been sold a chip that’s artificially binned lower than it could have been.

With Bloomfield, Intel hit a new peak for how far you can expect to push a CPU without increasing voltage. AMD followed with the Phenom II, but Lynnfield took a step back. Thanks to its on-die PCIe controller, Lynnfield needed some amount of additional voltage to overclock well. Clarkdale is somewhere in between. It lacks the crippling on-die PCIe controller, but it’s also a much higher volume part which by definition shouldn’t be as overclockable.

The Core i3 530 runs at 2.93GHz by default, with no available turbo boost. Without swapping coolers or feeding the chip any additional voltage, the most I got out of it was 3.3GHz (150MHz BCLK x 22). Hardly impressive.

I added another 0.16V to the CPU’s core voltage. That’s just under 14%. And here’s what I was able to do:

That’s 4GHz, stable using the stock heatsink/fan. Part of the trick to overclocking this thing was lowering the clock multiplier. Despite always keeping the QPI and memory frequencies in spec, lowering the clock multiplier on the chip improved stability significantly and allowed me to reach much higher frequencies.
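For reference, the core clock is simply the base clock (BCLK) multiplied by the CPU multiplier, which is why dropping the multiplier lets the BCLK climb so much further. The short Python sketch below runs through the arithmetic; the 200MHz x 20 combination and the ~1.175V stock core voltage are illustrative assumptions rather than the exact settings used here.

```python
# Clarkdale overclocking arithmetic: core clock = BCLK x multiplier.
# The 200MHz x 20 combo and the ~1.175V stock Vcore are assumptions for
# illustration; the article only gives the stock 133MHz x 22 setting and
# the 150MHz x 22 stock-voltage result.

def core_clock_mhz(bclk_mhz: float, multiplier: int) -> float:
    """CPU core frequency in MHz for a given base clock and multiplier."""
    return bclk_mhz * multiplier

print(core_clock_mhz(133, 22))  # ~2926 MHz: the i3 530 at stock (2.93GHz)
print(core_clock_mhz(150, 22))  # 3300 MHz: the best stock-voltage result
print(core_clock_mhz(200, 20))  # 4000 MHz: one way to reach 4GHz with a lowered multiplier

# The size of the voltage bump: +0.16V over an assumed ~1.175V stock Vcore
print(f"{0.16 / 1.175:.1%}")    # ~13.6%, i.e. just under 14%
```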

I could push beyond 4GHz, but that would require more voltage and potentially better cooling. With a stable 4GHz overclock, I was happy.

If you’ll remember from my review of the processor, my Phenom II X2 550 BE managed 3.7GHz using the stock cooler and a pound of voltage. Unfortunately, that’s not enough to challenge the overclocked 530.

| CPU | x264 HD 3.03 - 2nd pass | 7-Zip (KB/s) | Batman: AA | Dawn of War II | Dragon Age Origins | World of Warcraft |
|-----|-------------------------|--------------|------------|----------------|--------------------|-------------------|
| Intel Core i3 530 @ 4GHz | 18.4 fps | 2822 | 192 fps | 62.7 fps | 115 fps | 92 fps |
| AMD Phenom II X2 550 @ 3.7GHz | 10.4 fps | 2681 | 170 fps | 50.9 fps | 63 fps | 60.8 fps |
| AMD Phenom II X4 965 (3.4GHz) | 22.2 fps | 3143 | 196 fps | 54.3 fps | 109 fps | 74.1 fps |

Even an overclocked Athlon II X4 630 isn’t going to dramatically change things. It’ll still be faster in multithreaded applications, and it’ll still be the slower gaming CPU overall.
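To put those gaps in perspective, here’s a small Python sketch (not part of the original testing) that computes the overclocked i3 530’s lead over the overclocked Phenom II X2 550 in each benchmark from the table above.

```python
# Relative performance of the two overclocked chips, transcribed from the
# benchmark table above. Higher is better in every test shown.
i3_530_4ghz = {"x264 2nd pass": 18.4, "7-Zip": 2822, "Batman: AA": 192,
               "Dawn of War II": 62.7, "Dragon Age Origins": 115, "WoW": 92}
x2_550_37ghz = {"x264 2nd pass": 10.4, "7-Zip": 2681, "Batman: AA": 170,
                "Dawn of War II": 50.9, "Dragon Age Origins": 63, "WoW": 60.8}

for test, i3_score in i3_530_4ghz.items():
    lead = (i3_score / x2_550_37ghz[test] - 1) * 100
    print(f"{test}: i3 530 leads by {lead:.0f}%")

# Output ranges from a ~5% lead in 7-Zip to ~77% in x264 and ~83% in Dragon Age Origins.
```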

If the Core i3 530 is right for you, overclocking is just going to make it more right.

Comments

  • Anand Lal Shimpi - Friday, January 22, 2010 - link

    I'm a huge fan of competition, but I'm not sure I want to deal with the 3rd party chipset guys again. As the technologies got more sophisticated we started to see serious problems with all of the 3rd party guys (Remember VIA when we got PCIe? Remember the issues with later nForce chipsets?). As more features get integrated onto the CPU, the role of competition in the chipset space is less interesting to me.

    While I'd love to see competition there, I'm not willing to deal with all of the compatibility and reliability nightmares that happened way too often.

    Take care,
    Anand
  • Penti - Sunday, January 24, 2010 - link

    Also, the competition didn't help with K8 chipset prices, even though chipset makers no longer had to do a memory controller and some registers/logic moved to the CPU, and even though single-chip chipsets were possible and available.

    I agree, I'm glad to see the third party chipsets gone. It would just be bad in the corporate space today, where features such as remote management need to be built in. (Better to just have that removed from the consumer products than to have very different products.) Unless Intel / AMD gives away (sells) IP blocks to the third parties, I don't see the point. Extra features can perfectly well be implemented / integrated directly on the motherboard, and that would be enough to give incentive and pressure the chipset maker (Intel or AMD) to better themselves. Unless chipsets move to the SoC model, I guess that will do. It's not like VIA, nVidia and SiS beat Intel when they made chipsets for the QPB bus/P4. Plus I doubt they could make a cheaper southbridge for the Nehalem/DMI platform. It's still SATA, USB, PCIe, Ethernet MAC/PHY, audio and SPI/LPC and other legacy I/O, as well as the video/display output stuff.
  • geok1ng - Friday, January 22, 2010 - link

    FPS numbers for this Intel IGP are too good to be true. Intel has cheated before with IGPs that didn't render all polygons and textures to achieve "competitive" frames per second numbers.

    Hence I request (once again) side-by-side screenshots to put aside any doubts that this "competitive" IGP from Intel has image quality similar to that of NVIDIA and ATI integrated graphics.
  • silverblue - Friday, January 22, 2010 - link

    I'm still not convinced by the IGP. Those 661 results are from a 900MHz sample vs. the 700MHz HD 3300 on the 790GX board, and the 530 uses a 733MHz IGP. In every single case, the AMD solution wins against the 530, be it by a small margin (Dragon Age Origins) or a large one (CoD: MW2), but in general, AMD's best is probably about 20-25% better, clock for clock, than Intel's - all depending on the title of course. Overclocking the new IGP works out excellently; however, we're still comparing it to a relatively old IGP from AMD.

    If AMD updates their IGP and bumps up the clock speed, the gap will increase once again. There's nothing to say they can't bring out a 900MHz, 40nm flavour of the 3300 or better now that TSMC have sorted out their production issues. Intel's IGPs are on a 45nm process so AMD may produce something easily superior that doesn't require too much power. However... I'm still waiting.

    Intel are definitely on the right track, though they need to do something about the amount of work per clock.
  • IntelUser2000 - Monday, January 25, 2010 - link

    Silverblue, I don't know how it is on the GMA HD, but at least up to the GMA 4500, the Intel iGPUs were clocked in a similar way to the Nvidia iGPUs. The quoted clocks are for the shader core; the rest, like the ROPs and TMUs, likely run lower.

    While for AMD, 700MHz is for EVERYTHING.

    Plus, the 780/785/790 has more transistors than the GMA HD. The AMD chipset has 230 million transistors, while the GMA HD package has 170 million.

    All in all, the final performance is what matters.
  • Suntan - Friday, January 22, 2010 - link

    I would agree. I question why it wasn't compared against the newer (and usually cheaper) 785G mobos (which are ATI 4200-based systems).

    -Suntan
  • Anand Lal Shimpi - Friday, January 22, 2010 - link

    I chose to compare it to the 790GX because that's currently the fastest AMD IGP on the market. I wanted to put AMD's best foot forward. The 785G delivers literally the same performance as the 780G; the more impressive product name comes from its DirectX 10.1 support.

    The 890GX is the next IGP from AMD. In our Clarkdale review I mentioned that Intel simply getting close to AMD's IGP performance isn't really good enough. Unfortunately the current roadmaps have the 890GX IGP running at 700MHz, just like the 790GX. The only question I have is whether or not the 890GX has any more shader cores (currently at 8). We'll find out soon enough.

    Take care,
    Anand
  • Suntan - Friday, January 22, 2010 - link

    [QUOTE]I chose to compare it to the 790GX because that's currently the fastest AMD IGP on the market.[/QUOTE]

    Fair enough. But your own tests in the 785 article show the two being basically identical in performance. The 790 might have some small advantage in certain places (gaming), but if a person wasn't happy with the 785 in that use, I'd put wagers that they wouldn't be happy with the 790 either.

    At this point, integrated gfx are completely capable for PCs for everything except gaming (and some NLE video editors that use heavy gfx accel for pre-visualizing EFX). In both of those cases, even a $50 dedicated card will far overshadow all the integrated options, to the point that I don't think it's right to concentrate on gaming capability when picking integrated gfx.

    In my opinion, the last area for IGPs to compete in is video acceleration, where the 785 is at parity with or beats the 790. (Although they both fail without the inclusion of digital TrueHD/DTS-HD MA support.) But at least the 785 is usually cheaper.

    In any case, the new i3 really doesn't impress me *just* because it has integrated gfx on the chip. It seems that the same thing still holds true, namely: if you're building a computer for anything other than gaming, you can probably build one where an AMD chip will deliver the same or better performance for less money (TrueHD/DTS-HD MA notwithstanding). If you're building one for gaming, you're probably not going to be using the integrated gfx anyway.

    -Suntan

  • archcommus - Friday, January 22, 2010 - link

    I don't understand what the big deal is about TrueHD/DTS-HD MA bitstreaming. I've been decoding in software and sending 6 channels over 3 analog cables (is this LPCM?) to my 5.1 system ever since I've owned one, and the sound quality is fantastic. Is there really a perceived quality difference with bitstreaming to a high quality receiver if you own a very high end sound system?
  • eit412 - Friday, January 22, 2010 - link

    TrueHD/DTS-HD MA use lossless compression, and if you bitstream them to the receiver instead of decoding in software there is less of a chance of interference (the longer the signal is in analog form, the more interference is possible). In most cases the difference is not distinguishable, but people love to say that theoretically my stereo is better than yours.
