Final Words

After the show, many seemed to feel that Intel shortchanged us at this year's IDF when it came to architecture details and disclosures. The problem is one of perspective. Shortly after I returned home from the show I heard an interesting comparison: Intel detailed quite a bit about an architecture that wouldn't ship for another nine months, while Apple wouldn't say a thing about an SoC that was shipping in a week. That's probably an extreme comparison given that Apple has no motivation to share details about the A6 (yet), but even if you compare Intel's openness at IDF to the rest of the chip makers we cover, there's a striking contrast. We'll always want more from Intel at IDF, but I do hope we won't see a retreat toward the non-disclosure the rest of the industry seems to accept as standard practice.

There are three conclusions to draw about Haswell: one about its CPU architecture, one about its platform architecture, and one about what it means for Intel's future. Two of the three look good from my perspective. The third isn't so clear.

Intel's execution has been relentless since 2006. That's over half a decade of iterating architectures, as promised, roughly once a year. Little, big, little, big; process, architecture, process, architecture, over and over again. It's great execution on the architecture side combined with great enabling by Intel's manufacturing group. Haswell will continue to carry the torch in this regard.

The Haswell micro-architecture focuses primarily on widening the execution engine that has been with us, moderately changed, for the past several years. Larger data structures and buffers inside the processor help feed the beast, as does a tremendous increase in cache bandwidth. Support for new instruction set extensions, AVX2 and Intel's TSX, should also pave the way for some big performance gains going forward. Power consumption is a serious target for Haswell as well, given that it must improve performance without dramatically increasing TDP. There will be slight TDP increases across the board for traditional form factors, while ultraportables will obviously shift to lower TDPs. Idle power drops, while active power will obviously be higher than Ivy Bridge's.
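To put the AVX2 point in context, here's a minimal sketch (our own illustration, not Intel sample code) of the kind of integer loop that benefits. AVX2 extends integer SIMD operations from 128 to 256 bits, so Haswell can do eight 32-bit adds per instruction where Ivy Bridge needed two 128-bit operations:

```c
#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative only: sum 32-bit integers eight at a time with AVX2.
   Build with -mavx2; assumes the per-lane partial sums don't overflow. */
static int64_t sum_avx2(const int32_t *a, size_t n)
{
    __m256i acc = _mm256_setzero_si256();
    size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256i v = _mm256_loadu_si256((const __m256i *)(a + i));
        acc = _mm256_add_epi32(acc, v);   /* eight 32-bit adds per instruction */
    }
    int32_t lanes[8];
    _mm256_storeu_si256((__m256i *)lanes, acc);
    int64_t sum = 0;
    for (int j = 0; j < 8; j++) sum += lanes[j];  /* reduce the accumulator */
    for (; i < n; i++) sum += a[i];               /* scalar tail */
    return sum;
}
```

TSX is similar in spirit but targets synchronization rather than data parallelism: its new instructions let the hardware speculate through lock-protected regions instead of serializing on the lock.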

You can expect CPU performance to increase by around 5-15% at the same clock speed as Ivy Bridge. Graphics performance will see a far larger boost, up to 2x over Intel's HD 4000 in a standard voltage/TDP system, at least in the high-end GT3 configuration. GPU performance in Ultrabooks will increase by up to 30% over HD 4000.

As a desktop or notebook microprocessor, Haswell looks very good. The architecture remains focused and delivers a sensible set of improvements over its predecessor.

As a platform, Haswell looks awesome. While the standard Haswell parts won't drive platform power down considerably, the new Haswell U/ULT parts will. Intel is promising a greater than 20x reduction in platform idle power, and it plans to deliver it by extending its power reduction efforts beyond Intel-manufactured components. With Haswell Ultrabooks and tablets, Intel's influence will reach many (most?) of the components placed on the motherboard. And honestly, this is something Intel (or one of its OEMs) should have done long ago. Driving down platform power is a problem that extends beyond the CPU or chipset, and it's one that requires a holistic solution. With Haswell, Intel appears committed to delivering that solution. It's not for purely altruistic reasons, but for the survival of the PC.

I remember talking to Vivek about an iPad-as-a-notebook-replacement piece he was doing a while back. The biggest advantage the iPad offered over a notebook in his eyes? Battery life. Even for light workloads, today's most power-efficient ultraportable notebooks can't touch a good ARM-based tablet. Haswell U/ULT's significant reduction in platform power is intended to fix that. I don't know that we'll get to 10+ hours of battery life on a single charge, but we should be much better off than we are today.

Connected standby is coming to PCs, and it's a truly necessary addition. Haswell's support for active idle states (S0ix) is a game changer for the way portable PCs work. The bigger concern is whether the OEMs and ISVs will do their best to take advantage of what Haswell offers. I know one will, but will the rest? Intel's increasingly hands-on approach to OEM relations seems to be its way of ensuring we'll see Haswell live up to its potential.

Haswell, on paper, appears to do everything Intel needs to evolve the mobile PC platform. What's unclear is how far down the TDP stack Intel will be able to take the architecture. Intel seems to believe that TDPs below 8W are attainable, but it's too early to tell just how low Haswell can go. It's more than likely that Intel knows and just doesn't want to share at this point. I don't believe we'll see fanless Haswell designs, but Broadwell is another story entirely.

There's no diagram for where we go from here. Intel originally claimed that Atom would service an expanded range of TDPs all the way up to 10W. With Core architectures dipping below 10W, I do wonder if that slide was a bit of misdirection. I wonder if, instead, the real goal is to drive Core well into Atom territory. If Intel wants to solve its ARM problem, that would appear to be a very good solution.

Comments

  • Kevin G - Saturday, October 6, 2012

    Losing just the MacBook Air isn't going to hurt Intel much as a whole, but it is doubtful that Apple would move only that product line to ARM; the rest of the lineup would likely follow. The result, by the numbers, would hurt Intel, but not doom the company. Intel does have the rest of the PC industry to fall back upon... except the PC market is shrinking.

    Apple is one of Intel's best gateways into the ultra-mobile market. Apple has indicated that it wants to merge iOS and OS X over the long term, which would likely result in dropping either ARM or x86 hardware to simplify the lineup.

    WinRT is also a threat to Intel and
  • Kevin G - Saturday, October 6, 2012

    (Hrm... got cut off there)

    WinRT is also a threat to Intel, but WinRT has next to zero market share. The threat here is any success it obtains. Apple, on the other hand, controls ~75% of the tablet market, last I checked.

    Android is a bit neutral for Intel, as manufacturers can transition between ARM and x86 versions with relative ease. Intel will just have to offer competitive hardware at competitive prices here. The sub-10W Haswell parts are going to be competitive, but price is a great unknown. The ARM SoCs are far cheaper than what Intel has traditionally been comfortable with. So even if Intel were to acquire all of the Android tablet market, it would be a minority at this time and over the short term (even in the best-case scenario, it'd take time for Android-based tablets to surpass the iPad in terms of market share).

    So ultimately it would be best for Intel to snag Apple's support, given Apple's dominant market share in the tablet space and influential position in the smartphone space.
  • andrewaggb - Friday, October 5, 2012

    Agree with others. Best Anandtech article I've read in a long time.

    Most articles lack the detail and insights that this one has.
  • mrdude - Friday, October 5, 2012

    Great article. Great depth, great info and very thorough. Hats off :)

    But I couldn't shake the feeling that I was missing perhaps the most important bit of information: price.

    Obviously, Intel isn't going to reveal that nine months ahead of the presumed launch date -- though in typical fashion we'll see it leaked early. It's still the biggest question regarding Haswell's, and in turn Intel's, success against ARM.

    I think most consumers are already at that "good enough" stage, where a Tegra 3 or Snapdragon S4 can fulfill all of their computing needs on a tablet or a phone. The biggest drawbacks for productivity purposes aren't necessarily a "lack of CPU performance" but rather the lack of a proper keyboard/mouse, of gaming, and of the rare application or two that's still locked to x86 (Office rings a bell, though not for long). Or I should say, these were drawbacks. Not any longer.

    So is Intel going to cut its margins and go for volume? Or is it just going to keep its massive margins and price itself out of contention? Apple carries a brand name that people want; its products have become more than gadgets, they're fashion accessories. People don't mind paying the Apple tax. I don't think I ever will, but at least I can see the trend. The Intel brand doesn't carry the same cult following, and neither does x86. Unless Intel is willing to compete with ARM on price, lowering the cost of its products below Apple's, I don't think the substantial increases in efficiency and performance will matter all that much.
  • name99 - Friday, October 5, 2012

    "Sandy Bridge made ports 2 & 3 equal class citizens, with both capable of being used for load or store address calculation. In the past you could only do loads on port 2 and store addresses on port 3. Sandy Bridge's flexibility did a lot for load heavy code, which is quite common. Haswell's dedicated store address port should help in mixed workloads with lots of loads and stores."

    The rule of thumb numbers are, on "ordinary" integer type code:
    1/6 instructions are branches
    1/6 are writes
    2/6 are reads
    2/6 are ALU

    This makes it more obvious why Intel moved as it did. You want to sustain as close to 4 ops/cycle as you can; scaled to that target, the mix above works out to roughly 1.33 loads, 0.67 stores, 1.33 ALU ops, and 0.67 branches per cycle. This means that your order of adding abilities should be exactly as Intel has done:
    - first, two ALUs
    - next, two memory ops per cycle (ideal would be any mix of loads/stores, but what Intel gave us is one load + one store per cycle)
    - next, two loads per cycle
    - next, make sure branches aren't throttled (because back-to-back branches are common, and you want branches resolved ASAP)
    - next, make the load/store system wide enough to sustain a MAC per cycle (two loads + one store); a sketch of that pattern follows below
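    A minimal sketch of that MAC pattern (my illustration, assuming a plain SAXPY-style loop):

    ```c
    #include <stddef.h>

    /* Each iteration issues two loads (x[i] and y[i]), one multiply-accumulate,
       and one store (y[i]) -- the two-loads-plus-a-store per cycle that
       Haswell's extra store-address port is meant to sustain. */
    void saxpy(float *y, const float *x, float a, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }
    ```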

    It's hard to see what is left to complain about at this level.
    And of course we have better lock performance. So what's left?

    What I think still has substantial room for improvement (correct me if I'm wrong) is
    (a) TLB coverage
    (b) TLB efficiency.

    TLB coverage could be improved with a 2nd level TLB but (as far as I know) Intel doesn't go in for that, unlike POWER.
    By TLB efficiency, I mean not needing to lose performance due to different address spaces. Unfortunately Intel seems screwed here. The POWER segment scheme (especially the 64-bit scheme) is REALLY powerful here in allowing multiple address spaces to coexist, so that multiple shared libraries, the main app code, IO, and memory mapped files, can all have persistent simultaneous TLB entries. (Note that this has nothing to do with the Intel segment scheme --- different technology, to solve a different problem.)

    As far as I know, right now all Intel has is a single ASID representing a process. Better than no ASID, and having to flush the TLB on every context switch; but not especially good at sharing entries --- so (again as far as I know) shared libraries or shared mem-mapped files being used by multiple processes, even when they are mapped to the same address, have to have separate TLB entries, each one with a different ASID corresponding to the process calling them.
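    (A toy model of that ASID tagging, purely illustrative rather than Intel's actual structure: a hit requires both the virtual page number and the ASID to match, which is why identical shared-library mappings in two processes can't share one entry.)

    ```c
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Toy ASID-tagged TLB lookup. */
    struct tlb_entry { uint16_t asid; uint64_t vpn; uint64_t pfn; bool valid; };

    static bool tlb_lookup(const struct tlb_entry *tlb, size_t n,
                           uint16_t asid, uint64_t vpn, uint64_t *pfn_out)
    {
        for (size_t i = 0; i < n; i++) {
            if (tlb[i].valid && tlb[i].asid == asid && tlb[i].vpn == vpn) {
                *pfn_out = tlb[i].pfn;  /* hit: no page walk needed */
                return true;
            }
        }
        return false;                   /* miss: walk the page tables */
    }
    ```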
  • name99 - Friday, October 5, 2012

    Stupid me. I should have read the entire article. So we do have a (nicely sized) 2nd-level TLB.

    I guess my only remaining complaint now is that ASIDs are too coarse a tool.
    In principle you could dodge some of the problems I mention by using dedicated large pages for particular purposes (e.g. to cover the OS code and data, the equivalent of the frame buffer for modern windowing systems, and some pool of common shared libraries).
    Does anyone know the extent to which both Windows and OSX actually make use of dedicated large pages in this way?
  • Peanutsrevenge - Friday, October 5, 2012

    Great article Anand, but when will Anand cloning be incorporated in CPU designs so we can all have one of you at home to pull out and extract information from at will?

    Although, with that said, I was already made aware of much of this recently from listening in to some random guys babbling about tech stuff on a podcast ;)
  • Rectified - Friday, October 5, 2012

    Anand, you write the best tech articles on the web. As a graduate student in computer engineering, I appreciate the practical yet technical analyses you write on the industry. Keep it up!
  • Crazy1 - Friday, October 5, 2012

    I like the concept of Panel Self Refresh, yet I feel that Intel could implement this themselves. I'm not an expert, but couldn't a buffer be placed on the CPU package between the GPU and panel? This may not be as efficient as if the panel makers did it themselves, and it would probably only work when using the IGP (when it would most likely have the greatest impact), but at least it would be a step in the right direction.

    Additionally, Great Article! Anandtech provides some of the most thorough technology articles. Keep it up.
  • random2 - Saturday, October 6, 2012

    " If all mainstream client computing moves to smartphones,..........."

    Seriously? The idea of all mainstream computing being done on nothing but smartphones stretches the imagination a bit much. Even the most basic of businesses has a computer (made with mainstream components, as in most small and medium-sized businesses) and business software. Don't forget the PC gamers and the people who like larger viewing and typing surfaces. Or the fact that in eight years, home and business PCs will be blindingly fast, with larger displays of much greater pixel density, possibly clear-screen touch surfaces, likely alternative interfaces beyond just a keyboard and mouse, and incredible computing and rendering power.

    The likelihood of the general populace turning all their computing needs over to a palm-sized PC strikes me as a kind of weird fantasy, where people learn to love minute typing interfaces and squint at high-density displays fit into 3.5 by 4.5 inches for long periods of the day without interruption. No, to push the idea of micro computing one must discount all of the other advances in the computer/electronics industries to keep the pet theory viable.
