Haswell's Wide Execution Engine

Conroe introduced the six execution ports that we've seen used all the way up to Ivy Bridge. Sandy Bridge made significant changes to the execution engine to enable 256-bit AVX operations, but without increasing the width of the back end. Haswell reworks this part of the machine more substantially than any generation since Conroe.

Just as before, I put together a few diagrams that highlight the major differences in the execution engine across the past three generations.


The reorder buffer is one giant tracking structure for all of the micro-ops that are in various stages of execution. How large it makes sense to build this buffer depends directly on the accuracy of the branch predictor, as that determines how many useful instructions can be kept in flight at any given time.

The reservation station holds micro-ops while they wait for the data they need to begin execution. Both of these structures grow by low double-digit percentages in Haswell: the reorder buffer goes from 168 to 192 entries, and the unified reservation station from 54 to 60.

Simply being able to pick from more instructions to execute in parallel is one thing, but we haven't seen an increase in the number of parallel execution ports since Conroe. Haswell changes that.

From Conroe to Ivy Bridge, Intel's Core micro-architecture has supported the execution of up to six micro-ops in parallel. While there are more than six execution units in the system, there are only six ports to stacks of execution units. Three ports are used for memory operations (loads/stores) while three are on math duty. Over the years Intel has added new types and widths of execution units (e.g. Sandy Bridge added 256-bit AVX operations), but it hasn't strayed from the six-port architecture.

Haswell finally adds two more execution ports, one for integer math and branches (port 6) and one for store address calculation (port 7). Including both additional compute and memory hardware is a balanced decision on Intel's part.

The extra ALU and port do one of two things: they either improve performance for integer-heavy code, or allow integer work to continue while FP math occupies ports 0 and 1. Remember that Haswell, like its predecessors, is an SMT design, meaning each core can see instructions from up to two threads at the same time. Although a single app is unlikely to mix heavy vector FP and integer code, it's quite possible that two applications running at the same time will produce such a varied instruction mix. Having more integer ALUs is never a bad thing.
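
To make that second case concrete, here's a minimal sketch (the function and names are mine, not from the article) of a vectorized loop: the 256-bit FP multiplies fill the math ports, while the loop's own integer bookkeeping, i.e. the counter increment, compare, and branch, can now issue on port 6 rather than stealing slots from ports 0 and 1.

```c
#include <immintrin.h>
#include <stddef.h>

/* Hypothetical kernel: scale an array in place. The FP multiplies
 * issue on the math ports, while the loop's integer add, compare,
 * and taken branch can issue on Haswell's new port 6 ALU + branch
 * unit instead of competing with the FP work. */
void scale_inplace(float *x, size_t n, float c)
{
    __m256 vc = _mm256_set1_ps(c);
    size_t i = 0;
    for (; i + 8 <= n; i += 8) {              /* integer + branch: port 6 */
        __m256 v = _mm256_loadu_ps(x + i);    /* load: port 2 or 3 */
        v = _mm256_mul_ps(v, vc);             /* FP multiply: math port */
        _mm256_storeu_ps(x + i, v);           /* store address: port 7 */
    }
    for (; i < n; i++)                        /* scalar tail */
        x[i] *= c;
}
```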

Also using port 6 is another unit that can handle x86 branch instructions. Branch-heavy code can now enjoy two independent branch units, and if port 0 is occupied with other math the machine can still execute branches on port 6. Haswell moved the original Core branch unit from port 5 over to port 0, the most capable port in the system, so a second branch unit on a lightly populated port helps ensure there's no performance regression as a result of the change.

Sandy Bridge made ports 2 & 3 equals, with both capable of handling load or store address calculation. In the past you could only do loads on port 2 and store addresses on port 3. Sandy Bridge's flexibility did a lot for load-heavy code, which is quite common. Haswell's dedicated store address port should help in mixed workloads with lots of loads and stores.
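
A minimal sketch of the kind of loop that benefits (my example, not from the article): two loads and one store per element. On Sandy/Ivy Bridge the store address had to be generated on port 2 or 3, competing with the loads; on Haswell port 7 handles it, leaving both load ports free.

```c
#include <stddef.h>

/* Streaming kernel with two loads and one store per element. On
 * Haswell, ports 2 and 3 can service the two loads while port 7
 * generates the store address, so address generation no longer
 * caps this loop the way it could on Sandy/Ivy Bridge. */
void add_arrays(float *dst, const float *a, const float *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = a[i] + b[i];   /* 2 loads + 1 store per iteration */
}
```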

The other major addition to the execution engine is support for Intel's AVX2 instructions, including FMA (Fused Multiply-Add). Ports 0 & 1 now include newly designed 256-bit FMA units. As each FMA operation is effectively two floating point operations, these two units double Haswell's peak floating point throughput compared to Sandy/Ivy Bridge. A side effect of the FMA units is that you now get two ports' worth of FP multiply units, which can be a big boon to legacy FP code.
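
For illustration, here's what a single FMA looks like through the standard AVX2/FMA intrinsics (a minimal sketch; compile with something like -mfma): one instruction computes a*b + c across eight single-precision lanes, which is how two FMA ports add up to 2 ports x 8 lanes x 2 flops = 32 single-precision flops per cycle per core.

```c
#include <immintrin.h>

/* One fused multiply-add computes a*b + c across eight
 * single-precision lanes: 16 flops in a single operation.
 * With FMA units on both port 0 and port 1, Haswell can
 * issue two of these per cycle, i.e. 32 SP flops/cycle/core. */
__m256 fma8(__m256 a, __m256 b, __m256 c)
{
    return _mm256_fmadd_ps(a, b, c);
}
```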

Fused Multiply-Add operations are incredibly handy in all sorts of media processing and 3D work. Rather than having to independently multiply and add values, being able to execute both in tandem via a single execution port increases the effective execution width of the machine. Note that a single FMA operation takes 5 cycles in Haswell, the same latency as an FP multiply on Sandy/Ivy Bridge. In the previous generation a dependent floating point multiply and add took 8 cycles (5 for the multiply plus 3 for the add), so there's a nice latency improvement here on top of the throughput boost from having two FMA units.
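
One consequence of that 5-cycle latency is worth spelling out (my illustration, not from the article): a reduction that chains every FMA through a single accumulator can only start a new FMA every 5 cycles, so saturating both FMA ports takes roughly 2 ports x 5 cycles = 10 independent accumulator chains. A sketch with two accumulators shows the pattern:

```c
#include <immintrin.h>
#include <stddef.h>

/* Dot product with two independent accumulators. Each accumulator's
 * FMA chain has 5-cycle latency, so more independent chains (up to
 * ~10 on Haswell) are needed to keep both FMA ports busy every
 * cycle; two are shown here for brevity. */
float dot(const float *a, const float *b, size_t n)
{
    __m256 acc0 = _mm256_setzero_ps();
    __m256 acc1 = _mm256_setzero_ps();
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        acc0 = _mm256_fmadd_ps(_mm256_loadu_ps(a + i),
                               _mm256_loadu_ps(b + i), acc0);
        acc1 = _mm256_fmadd_ps(_mm256_loadu_ps(a + i + 8),
                               _mm256_loadu_ps(b + i + 8), acc1);
    }
    /* horizontal sum of both accumulators */
    __m256 acc = _mm256_add_ps(acc0, acc1);
    float tmp[8];
    _mm256_storeu_ps(tmp, acc);
    float sum = tmp[0] + tmp[1] + tmp[2] + tmp[3]
              + tmp[4] + tmp[5] + tmp[6] + tmp[7];
    for (; i < n; i++)          /* scalar tail */
        sum += a[i] * b[i];
    return sum;
}
```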

Intel focused a lot on adding more execution horsepower in Haswell without creating a power burden for legacy use cases. All of the new units can be shut off when not in use. Furthermore, Intel went in and ensured that this applied to the older execution units as well: in Haswell, if you're not doing work, you're not consuming power.
