Final Words

After the show, many seemed to feel that Intel shortchanged us at this year's IDF when it came to architecture details and disclosures. The problem is perspective. Shortly after I returned home from the show I heard an interesting comparison: Intel detailed quite a bit about an architecture that wouldn't be shipping for another 9 months, while Apple wouldn't say a thing about an SoC that was shipping in a week. That's probably an extreme comparison given that Apple has no motivation to share details about the A6 (yet), but even if you compare Intel's openness at IDF to the rest of the chip makers we cover, there's a striking contrast. We'll always want more from Intel at IDF, but I do hope we won't see a retreat as the rest of the industry seems to be OK with non-disclosure as standard practice.

There are three conclusions to draw about Haswell: its CPU architecture, its platform architecture, and what it means for Intel's future. Two of the three look good from my perspective. The third is not so clear.

Intel's execution has been relentless since 2006. That's over half a decade of iterating architectures, as promised, roughly once a year. Little, big, little, big, process, architecture, process, architecture, over and over again. It's the result of great execution on the architecture side combined with great enabling by Intel's manufacturing group. Haswell will continue to carry the torch in this regard.

The Haswell micro-architecture focuses primarily on widening the execution engine that has been with us, moderately changed, for the past several years. Larger data structures and buffers inside the processor help feed the beast, as does a tremendous increase in cache bandwidth. Support for new instructions via AVX2, along with Intel's TSX, should also pave the way for some big performance gains going forward. Power consumption is also a serious target for Haswell, which must improve performance without dramatically increasing TDP. There will be slight TDP increases across the board for traditional form factors, while ultraportables will obviously shift to lower TDPs. Idle power drops, while active power will obviously be higher than Ivy Bridge.
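
To put the new instructions in context, here's a minimal sketch of what the two extensions expose to software. This is my own illustration rather than anything Intel has published: the function names are hypothetical, and a compiler needs the right flags (e.g. gcc -mavx2 -mrtm) to emit these instructions. AVX2 widens integer SIMD to 256 bits, while TSX's RTM interface (_xbegin/_xend) lets a thread attempt a speculative, lock-free update with a conventional fallback path.

    #include <immintrin.h>
    #include <stddef.h>
    #include <stdint.h>

    /* AVX2: 256-bit integer SIMD -- add two int32 arrays eight lanes at a time. */
    void add_i32_avx2(const int32_t *a, const int32_t *b, int32_t *out, size_t n)
    {
        size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256i va = _mm256_loadu_si256((const __m256i *)(a + i));
            __m256i vb = _mm256_loadu_si256((const __m256i *)(b + i));
            _mm256_storeu_si256((__m256i *)(out + i), _mm256_add_epi32(va, vb));
        }
        for (; i < n; i++)                      /* scalar tail */
            out[i] = a[i] + b[i];
    }

    /* TSX (RTM): try a transactional update; fall back to an atomic add if the
       hardware transaction aborts (transactions can always abort, so a fallback
       path is mandatory). */
    void increment_counter(long *counter)
    {
        if (_xbegin() == _XBEGIN_STARTED) {
            (*counter)++;                       /* runs speculatively, no lock taken */
            _xend();
        } else {
            __sync_fetch_and_add(counter, 1);   /* non-transactional fallback */
        }
    }

The TSX path is where the potential gains get interesting: coarse locks can be elided when threads don't actually touch the same data, but software only benefits if it (or its runtime) is updated to use these primitives.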

You can expect CPU performance to increase by around 5 - 15% at the same clock speed as Ivy Bridge. Graphics performance will see a far larger boost (at least in the high-end GT3 configuration) of up to 2x vs. Intel's HD 4000 in a standard voltage/TDP system. GPU performance in Ultrabooks will increase by up to 30% over HD 4000.

As a desktop or notebook microprocessor, Haswell looks very good. The architecture remains focused and delivers a sensible set of improvements over its predecessor.

As a platform, Haswell looks awesome. While the standard Haswell parts won't drive platform power down considerably, the new Haswell U/ULT parts will. Intel is promising a greater than 20x reduction in platform idle power, and it's planning on delivering it by extending its power reduction efforts beyond the components Intel itself manufactures. Haswell Ultrabooks and tablets will have Intel's influence in many (most?) of the components placed on the motherboard. And honestly, this is something Intel (or one of its OEMs) should have done long ago. Driving down platform power is a problem that extends beyond the CPU or chipset, and it's one that requires a holistic solution. With Haswell, Intel appears committed to delivering that solution, not for purely altruistic reasons, but for the survival of the PC. I remember talking to Vivek a while back about a piece he was doing on the iPad as a notebook replacement. The biggest advantage the iPad offered over a notebook in his eyes? Battery life. Even for light workloads, today's most power-efficient ultraportable notebooks can't touch a good ARM-based tablet. Haswell U/ULT's significant reduction in platform power is intended to fix that. I don't know that we'll get to 10+ hours of battery life on a single charge, but we should be much better off than we are today.

Connected standby is coming to PCs, and it's a truly necessary addition. Haswell's support for active idle states (S0ix) is a game changer for the way portable PCs work. The bigger concern is whether or not the OEMs and ISVs will do their best to really take advantage of what Haswell offers. I know one will, but will the rest? Intel's increasingly hands-on approach to OEM relations seems to be its way of ensuring we'll see Haswell live up to its potential.

Haswell, on paper, appears to do everything Intel needs to evolve the mobile PC platform. What's unclear is how far down the TDP stack Intel will be able to take the architecture. Intel seems to believe that TDPs below 8W are attainable, but it's too early to tell just how low Haswell can go. It's more than likely that Intel knows and just doesn't want to share at this point. I don't believe we'll see fanless Haswell designs, but Broadwell is another story entirely.

There's no diagram for where we go from here. Intel originally claimed that Atom would service an expanded range of TDPs all the way up to 10W. With Core architectures dipping below 10W, I do wonder if that slide was a bit of misdirection. I wonder if, instead, the real goal is to drive Core well into Atom territory. If Intel wants to solve its ARM problem, that would appear to be a very good solution.

Comments

  • TeXWiller - Friday, October 5, 2012

    Perhaps they are also trying to reach lower usable clock frequencies through performance upgrades, and in this way gain some additional voltage scaling, or what is left of it.
  • vegemeister - Saturday, October 6, 2012

    >think loop counters which store an INT for loop iteration then perform some FP calcs

    If updating the loop counter is taking a substantial fraction of the CPU time, doesn't that mean the compiler should have unrolled more?
  • Anand Lal Shimpi - Friday, October 5, 2012

    The high-end desktop space was abandoned quite a while ago. The LGA-2011/Extreme platform remains as a way to somewhat address that market, but I think in reality many of those users simply shifted their sights downward with regard to TDPs. A good friend of mine actually opted for an S-series Ivy Bridge part when building his gaming mini-ITX PC because he wanted a cooler running system in addition to great performance.

    To specifically answer your question though: the common thread since Conroe/Merom has been the belief that designing for power efficiency actually means designing for performance. All architectures since Merom have really been mobile-focused, with versions built for the desktop. I like to think that desktop performance has continued to progress at a reasonable rate despite that, pretty much for the reason I just outlined.

    Take care,
    Anand
  • csroc - Friday, October 5, 2012

    Sandy Bridge-E just seems to price itself out of being reasonable for a lot of people. The boards in particular are rather steep as well.
  • dishayu - Friday, October 5, 2012

    Well, LGA2011 is a bit of a halo product with no real substance. An Ivy Bridge 3770K will stand up to a quad-core LGA2011 part nicely, not to mention it supports PCIe gen 3, so even though it has fewer lanes, it doesn't have a bandwidth disadvantage. Moreover, LGA2011 is still stuck on the Sandy Bridge architecture, so it isn't quite on the bleeding edge either, and as far as I understand, Haswell will come out before IB-E does, so it's two full cycles behind.
  • Kevin G - Friday, October 5, 2012

    For a single discrete GPU, Ivy Bridge would be able to match the bandwidth of Sandy Bridge-E: a single 16-lane PCIe 3.0 connection. Things get interesting when you scale the number of GPUs. There is a small but clear advantage to Sandy Bridge-E in a four-GPU configuration. Ivy Bridge having fewer lanes does make a difference in such high-end scenarios.

    For its target market (mobile, low end desktop), Ivy Bridge is 'good enough'.
  • vegemeister - Saturday, October 6, 2012

    Quad core LGA2011 is kind of a waste though. If you're already paying extra for the socket, my philosophy is go hexcore and 8 DIMMs or go home.
  • Peanutsrevenge - Friday, October 5, 2012

    Given that desktop software hasn't really been pushing for better CPU performance, the direction Intel has taken is not a bad one IMO either.
    It's now possible to build a mighty gaming rig in an mITX case (BitFenix Prodigy), think 3770K and GTX 690 graphics, watercooled.

    A rig like that will likely last 3 years before settings have to be tweaked to keep 60+ fps.

    What's really needed is for software to take advantage of GPUs more (which would play into AMD's hands), but I fear many of the best coders have switched from Windows to Android/iOS development. With Windows 8 shipping shortly, that number will increase further.
  • j_newbie - Saturday, October 6, 2012

    I think that is quite sad.

    I for one always need more FLOPS. MCAD and simulation work depend on two things: memory bandwidth/size and FLOPS. Surprisingly, AMD still offers better value for money in this space, thanks to AVX instructions not yet being widely adopted in most FEA/CFD code and to the additional RAM slots you get with cheaper boards.

    Server components are always overpriced, as we don't need a system to last very long.
    My 3930K setup is about 1.5 times faster than the X6 setup at 3 times the cost... :(
  • Peanutsrevenge - Saturday, October 6, 2012

    You're talking more of a workstation than a desktop. Hence my use of the word 'desktop'.
