Final Words

I’m a fan of Haswell, even on the desktop. The performance gains over Ivy Bridge depend on workload, but in general you’re looking at anywhere from low single digits to just under 20%. We saw particularly good scaling in many of our FP-heavy benchmarks as well as in our Visual Studio compile test. If you’re upgrading from Sandy Bridge you can expect an average improvement of just under 20%, while coming from an even older platform like Nehalem will yield closer to a 40% increase in performance at the same clocks. As always, annual upgrades are tough to justify, although Haswell may be able to pull that off in mobile.

Even on the desktop, idle power reductions are apparent both at the CPU level and at the platform level. Intel focused on reducing CPU power, and its motherboard partners appear to have done the same. Under load Haswell can draw more power than Ivy Bridge, but it typically makes up for it with better performance.

Overclockers may be disappointed that Haswell is really no better an overclocker (on air) than Ivy Bridge. Given the more mobile-focused nature of the design, and an increased focus on eliminating wasted power, I don’t know that we’ll ever see a return to the heyday of overclocking.

If the inability to easily extract tons of additional frequency headroom at a marginal increase in CPU voltage is the only real downside to the platform, then I’d consider Haswell a success on the desktop. You get more performance and a better platform at roughly the same prices as Ivy Bridge a year ago. It’s not enough to convince folks who just bought a PC over the past year or two to upgrade again, but if you’re coming from even a three-year-old machine the performance gains will be significant.

Comments

  • chizow - Saturday, June 1, 2013 - link

    The other big problem with the CPU space, besides power consumption and frequency, is that Intel long ago stopped spending the extra transistor budget from each new process node on the actual CPU portion of the die. Most of the increased transistor budget afforded by a new process goes straight to the GPU. We probably won't see this stop for some time, not until Intel reaches discrete-GPU performance equivalency.
  • Jaybus - Monday, June 3, 2013 - link

    Well, I don't know. Cache sizes have increased dramatically.
  • chizow - Monday, June 3, 2013 - link

    Not per core; these parts are still 4C/8MB, same as my Nehalem-based i7. Some of the SB-E parts have more cache per core, 4C/10MB on the 3820 and 6C/15MB on the 3960/3970, but the extra bit results in a negligible difference over the 2MB per core on the 3930K.
  • Boissez - Sunday, June 2, 2013 - link

    I think you've misunderstood me.

    I'm merely pointing out that, in the past 2½ years, we've barely seen any performance improvements in the $250-300 market from Intel. And that is in stark contrast to the developments in mobileland, even though they too are bound by the constraints you mention.

    And please, stop the pompous know-it-all attitude. For the record, power consumption actually rises *linearly* with clock speed and *quadratically* with voltage. If your understanding of Joule's law and Ohm's law were better developed, you would know that.
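    The scaling Boissez describes is the standard CMOS dynamic-power relation, P ≈ C·V²·f. A minimal sketch of that relation, where the capacitance, voltage, and frequency figures are illustrative placeholders rather than measured Haswell values:

    ```python
    def dynamic_power(c_eff, voltage, freq_hz):
        """Approximate CMOS dynamic power: P = C_eff * V^2 * f (watts)."""
        return c_eff * voltage ** 2 * freq_hz

    # Illustrative baseline: 1 nF effective capacitance, 1.0 V, 3.5 GHz
    base = dynamic_power(1e-9, 1.0, 3.5e9)

    # Linear in frequency: doubling f doubles power
    assert dynamic_power(1e-9, 1.0, 7.0e9) == 2 * base

    # Quadratic in voltage: a 20% voltage bump costs ~44% more power
    assert abs(dynamic_power(1e-9, 1.2, 3.5e9) / base - 1.44) < 1e-6
    ```

    In practice overclocking raises frequency and voltage together, which is why power climbs so steeply at the top of the curve.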
  • klmccaughey - Monday, June 3, 2013 - link

    Exactly. And it won't change until we see optical/biological chips or some other such future-tech breakthrough. As it is, the electrons are starting to behave in a light/waveform fashion at higher frequencies, if I remember correctly from my semiconductor classes (of some years ago, I might add).
  • Jaybus - Monday, June 3, 2013 - link

    Yes, but we will first see hybrid approaches. Intel, IBM, and others have been working on them and are getting close. Sure, optical interconnects have been available for some time, but not as an integrated on-chip feature, which is now being called "silicon photonics". Many of the components are already there: micro-scale lenses, waveguides and other optical components, avalanche photodiode detectors able to detect a very tiny photon flux, etc. All of those can be crafted with existing CMOS processes. The missing link is a cheaply made micro-scale laser.

    Think about it. An on-chip optical transceiver at THz frequencies allows optical chip-to-chip data transfer at on-chip electronic bus speeds, or faster. There is no need for L2 or L3 cache. Multiple small dies can be linked together to form a larger virtual die, increasing productivity and reducing cost. What if you could replace a 256 trace memory bus on a GPU with a single optical signal? There are huge implications both for performance and power use, even long before there are photonic transistors. Don't know about biological, but optical integration could make a difference in the not-so-far-off future.
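    Jaybus's 256-trace example can be put in rough back-of-envelope terms. All figures below are illustrative assumptions (a GDDR5-class per-pin rate and a hypothetical wavelength-multiplexed optical link), not data from the article:

    ```python
    # Wide parallel electrical bus: 256 traces at 6 Gb/s per pin
    parallel_bw = 256 * 6e9          # 1.536 Tb/s aggregate

    # One optical waveguide carrying 16 wavelengths (WDM),
    # each modulated at 100 Gb/s
    optical_bw = 16 * 100e9          # 1.6 Tb/s aggregate

    # A single physical optical channel can match the whole bus
    assert optical_bw > parallel_bw
    print(f"{parallel_bw/1e12:.3f} Tb/s vs {optical_bw/1e12:.1f} Tb/s")
    ```

    The point is not the exact numbers but the trace count: one waveguide versus 256 matched-length board traces, with the attendant pin, routing, and signal-integrity savings.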
  • tipoo - Saturday, June 1, 2013 - link

    It's easier to move upwards from where ARM chips started a few years back. A bit like a developing economy showing growth numbers you would never see in a developed one.
  • Genx87 - Saturday, June 1, 2013 - link

    Interesting review, but I'm finding it hard to justify replacing my i5-2500K. I guess next summer, on the next iteration?
  • kyuu - Saturday, June 1, 2013 - link

    Agreed, especially considering Haswell seems to be an even poorer overclocker than Ivy Bridge. My i5-2500k @ 4.6GHz will be just fine for some time to come, it seems.
  • klmccaughey - Monday, June 3, 2013 - link

    Me too. I have a 2500K @ 4.3GHz @ 1.28V and I'm starting to wonder if even the next tick/tock will tempt me to upgrade.

    Maybe if they start doing a K chip with no onboard GPU and use the extra silicon for extra cores? Even then, the cores aren't currently well utilized at four. But maybe concurrency adoption will increase as time goes by.
