Haswell's Wide Execution Engine

Conroe introduced the six execution ports that we've seen used all the way up to Ivy Bridge. Sandy Bridge made significant changes to the execution engine to enable 256-bit AVX operations, but it did so without increasing the width of the back end. Haswell does a lot here.

Just as before, I put together a few diagrams that highlight the major differences in the execution engine across the past three generations.


The reorder buffer is one giant tracking structure for all of the micro-ops that are in various stages of execution. How big this buffer can usefully be depends on the accuracy of the branch predictor, as that determines how many instructions can be kept in flight at a given time.

The reservation station holds micro-ops as they wait for the data they need to begin execution. Both of these structures grow by low double-digit percentages in Haswell.

Simply being able to pick from more instructions to execute in parallel is one thing; actually executing more of them at once is another, and we haven't seen an increase in the number of parallel execution ports since Conroe. Haswell changes that.

From Conroe to Ivy Bridge, Intel's Core micro-architecture has supported the execution of up to six micro-ops in parallel. While there are more than six execution units in the system, there are only six ports to stacks of execution units. Three ports are used for memory operations (loads/stores) while three are on math duty. Over the years Intel has added additional types and widths of execution units (e.g. Sandy Bridge added 256-bit AVX operations) but it hasn't strayed from the 6 port architecture.

Haswell finally adds two more execution ports, one for integer math and branches (port 6) and one for store address calculation (port 7). Including both additional compute and memory hardware is a balanced decision on Intel's part.

The extra ALU and port do one of two things: either improve performance for integer-heavy code, or allow integer work to continue while FP math occupies ports 0 and 1. Remember that Haswell, like its predecessors, is an SMT design, meaning each core can see instructions from up to two threads at the same time. Although a single app is unlikely to mix heavy vector FP and integer code, it's quite possible that two applications running at the same time will produce such a varied instruction mix. Having more integer ALUs is never a bad thing.

Also using port 6 is another unit that can handle x86 branch instructions. Branch-heavy code can now enjoy two independent branch units, or if port 0 is occupied with other math the machine can still execute branches on port 6. Haswell moved the original Core branch unit from port 5 over to port 0, the most capable port in the system, so a branch unit on a lightly populated port helps ensure there's no performance regression as a result of the change.

Sandy Bridge made ports 2 & 3 equal class citizens, with both capable of being used for load or store address calculation. In the past you could only do loads on port 2 and store addresses on port 3. Sandy Bridge's flexibility did a lot for load heavy code, which is quite common. Haswell's dedicated store address port should help in mixed workloads with lots of loads and stores.
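
As a concrete illustration of why the extra address port matters (a hypothetical kernel, not from the article; the names are made up), consider a simple C loop that issues two loads and one store per element. On Sandy/Ivy Bridge all three address calculations compete for ports 2 and 3; on Haswell the store address can be generated on port 7, leaving ports 2 and 3 free to feed both loads every cycle.

    /* Hypothetical kernel: two loads + one store per element.
     * Sandy/Ivy Bridge: both load addresses and the store address
     * all share ports 2/3. Haswell: the store address can issue on
     * port 7, so ports 2/3 are dedicated to the two loads. */
    void vec_add(float *dst, const float *a, const float *b, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = a[i] + b[i];  /* store data still flows through port 4 */
    }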

The other major addition to the execution engine is support for Intel's AVX2 instructions, including FMA (Fused Multiply-Add). Ports 0 & 1 now include newly designed 256-bit FMA units. As each FMA operation is effectively two floating point operations, these two units double the peak floating point throughput of Haswell compared to Sandy/Ivy Bridge. A side effect of the FMA units is that you now get two ports' worth of FP multiply units, which can be a big boon to legacy FP code.
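
To put rough numbers on that doubling (a back-of-the-envelope count, assuming 256-bit vectors of single-precision floats):

    Sandy/Ivy Bridge: one 256-bit multiply + one 256-bit add = 8 + 8 = 16 FLOPS per cycle
    Haswell: two 256-bit FMAs (ports 0 and 1) = 2 x 8 x 2 = 32 FLOPS per cycle

Halve both figures for double precision.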

Fused Multiply-Add operations are incredibly handy in all sorts of media processing and 3D work. Rather than having to independently multiply and add values, being able to execute both in tandem via a single execution port increases the effective execution width of the machine. Note that a single FMA operation takes 5 cycles in Haswell, which is the same latency as an FP multiply on Sandy/Ivy Bridge. In the previous generation a floating point multiply+add took 8 cycles, so there's a good latency improvement here as well as the throughput boost from having two FMA units.
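
For the curious, here's a minimal sketch of what tapping those units looks like from software, assuming AVX2/FMA intrinsics and a Haswell-targeted compiler (e.g. gcc -mfma; the function and array names are illustrative):

    #include <immintrin.h>

    /* Accumulate dst[i] += a[i] * b[i], eight single-precision lanes at a time.
     * On Sandy/Ivy Bridge the multiply and add are separate instructions
     * (8 cycles back to back); on Haswell one _mm256_fmadd_ps does both in
     * 5 cycles, and ports 0 and 1 can each start one FMA per cycle. */
    void fma_accumulate(float *dst, const float *a, const float *b, int n)
    {
        int i;
        for (i = 0; i + 8 <= n; i += 8) {
            __m256 va  = _mm256_loadu_ps(a + i);
            __m256 vb  = _mm256_loadu_ps(b + i);
            __m256 acc = _mm256_loadu_ps(dst + i);
            acc = _mm256_fmadd_ps(va, vb, acc);   /* acc = va * vb + acc, fused */
            _mm256_storeu_ps(dst + i, acc);
        }
        for (; i < n; i++)  /* scalar tail for leftover elements */
            dst[i] += a[i] * b[i];
    }

Because each iteration's FMA depends only on its own loads, a loop like this is limited by throughput rather than by the 5-cycle latency.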

Intel focused a lot on adding more execution horsepower in Haswell without creating a power burden for legacy use cases. All of the new units can be shut off when not in use. Furthermore, Intel went in and ensured that this applied to the older execution units as well: in Haswell if you're not doing work, you're not consuming power.

Comments

  • dishayu - Friday, October 5, 2012 - link

    I derived immense pleasure reading the article. Thank you, Anand. Big ups for the comprehensive read.
    My thoughts:
    I think Intel really dropped the ball by not having unlinked clocks for each core, like Qualcomm has for its S4 Pro processors. There are so many times that, for instance, I have a page open with some animated GIFs. They are strictly single-threaded processes and they won't let the processor go to an idle state. And this is a very, VERY common occurrence that can, IMO, only be solved by unlinking the clocks for each core. Three cores can stay in a sleep state (almost perpetually) and the processor runs on a single core at a lowered frequency. THAT would be power efficient.
  • dagamer34 - Friday, October 5, 2012 - link

    Uhh... isn't turning off unused cores and overclocking the 4th core within TDP to perform single-threaded tasks exactly what the Turbo Boost introduced in Sandy Bridge does?
  • know of fence - Friday, October 5, 2012 - link

    Reducing power is great and also inevitable, but Intel's move to compete against everything and everybody is alarming. With everyone trying to follow/please Apple, that means nothing good for the consumer: throw-away luxury electronics for exceptionally well-groomed masses.
    Also, isn't it too early to be hyping this stuff?
  • A5 - Friday, October 5, 2012 - link

    Intel has to compete against ARM to keep them from taking over the "good-enough" computing space.

    As for the rest of it, you're not making any sense.
  • jjj - Friday, October 5, 2012 - link

    The ARM problem is not about the product but about price; long term, CPU/SoC ASPs will drop hard now that there is competition. Servers will keep them on life support for a while, but without fundamental changes to their business model they can't make it.
    Intel should remember how they won the market.
  • dishayu - Friday, October 5, 2012 - link

    It's about both. Intel does not have sufficiently low-power parts at all, regardless of the price point.
  • mrdude - Friday, October 5, 2012 - link

    Regardless of whether they set foot into that end of the spectrum or not (and by Anand's analysis that's more likely with Broadwell and on?), they still need to compete on price.

    It's one thing to make a chip; it's quite another to make it competitive with respect to pricing. What works against a distant AMD won't work against ARM.
  • DesDizzy - Sunday, October 7, 2012 - link

    I agree. This seems to be something that most people overlook when addressing the Wintel monopoly. The costs of Wintel products are high within the PC/laptop space. The prices of ARM hardware/apps are low within the smartphone/tablet space. How does Wintel square this circle without damaging their business model?
  • Krysto - Friday, October 5, 2012 - link

    You may not agree with Charlie, Anand, but reality seems to agree with him:

    http://www.techradar.com/news/computing/apple/appl...

    I really don't know how you can think Apple would ever start using Intel chips in their iPads when Apple has already proven they want to make their own chips with the A6.

    Also, according to Charlie, Haswell will be like 40% more expensive than IVB. Atom tablets already seem to start at like $800. So I wish Intel good luck with that. Ultrabooks and Win8 hybrids won't drop down in price any time soon.

    http://semiaccurate.com/2012/10/03/oems-call-intel...
  • Penti - Friday, October 5, 2012 - link

    I don't know how you could fail so badly at reading comprehension; Anand only said the same flying spaghetti monster-damn form factor. Nothing else. There also must be an ecosystem, but if you can run the same app on a tablet as well as a desktop on x86, with more performance than ARM, why wouldn't you see vendors use it? It is a full system, even capable of building itself. It's not about killing ARM. Intel still uses it; they need fairly high-performance RISC chips for stuff like baseband. They had a large market in smartphones before 2006 and they made the choice to sell it because they had Atom in their lineup. They didn't forget about it.

    It's Microsoft tablets that cost 500-900 dollars even on Atom, but they only need to compete with Windows RT, which is totally retarded as far as corporate customers go, isn't the same system as 8 Pro, and doesn't run the same software. An Android tablet could use a Z2460 (and the coming Z2580, and after that Valleyview SoCs) to build a 240-dollar tablet. There is no price difference to be had as far as hardware is concerned. Windows 8 tablets are a whole other form factor and device to begin with. Most will have a keyboard and multitouch trackpad.

    He only talks about the same form factor, size and battery life here. In the Microsoft ecosystem there is really no reason to go to Windows RT-powered ARM devices, which don't have better performance and run no third-party desktop (Win32/full Windows SDK) software. They also lack the same features in other areas, which makes them devices instead of general computing platforms. Remember, they offer both here. Hell, the built-in email is even worse than the one built into Android since version 3.0 or so; it's a lot worse than third-party mail clients on Android, and worse than the mail clients on BlackBerry 10, Symbian, iOS and so on. If you're replacing a desktop you're not going with ARM here, not on a Windows device at least; Anand only talks about a new breed of DTR tablets and ultra-portables that will fit in the same form factor and battery life scenarios as ARM tablets. Apple certainly doesn't need to participate here.

    Intel certainly has sales to be made if they move Haswell down to low-power Atom territory when it comes out later next year. It could be the only computing device you have (smartphone + hybrid tablet PC), replacing desktops, ARM/Atom tablets, media PCs for your TV (just stream with Miracast), et cetera. ARM devices would just be cheaper, less capable devices there. But these are still different targets. Haswell still targets servers (the enterprise market), desktops and notebooks with larger form factors/power usage, as well as more portable stuff. Atom is still for the handheld stuff you use with one hand.

    ARM has moved quite fast, but they have no reason to target high-performance applications or build 100 W SoCs that are fast without parallel computing. Applications like high-performance routers, for example, still use licensed and custom MIPS and PowerPC chips. There are plenty of markets where a full-featured ARM Cortex or x86 won't work either. ARM is just moving into the multimedia field, replacing custom architectures in TVs, displacing MIPS, PPC, etc. If Apple built a very large custom CPU architecture compatible with the ARM ISA for workstations, notebooks, etc., they would just be in the same position they were in with PowerPC and have to compete with high-performance chips that most can't compete with, even with much larger resources than Apple's. Apple and Samsung have no reason to do so outside handheld devices, low-power servers, consumer-oriented routers and streaming media boxes, which leaves plenty of room for Intel and all the rest. Plus, Wi-Fi and wireless baseband is a huge market in and of itself, and it doesn't matter what the application processor architecture is. Stuff like ARM has competed because you could replace previous products with it easily, thus taking some of the SoC market away from others, but that coincides with the choice to do so.
