Broadwell CPU Architecture

We’ll kick off our look at Broadwell-Y with Broadwell’s CPU architecture. As this is a preview, Intel isn’t telling us a great deal about the CPU at this time, but they have given us limited information about Broadwell’s architectural changes and what to expect for performance as a result.

With Broadwell Intel is at the beginning of the next cycle of their tick-tock cadence. Whereas tock products such as Haswell and Sandy Bridge are designed to be the second generation of products on a given process node and as a result focus on architectural changes, tick products such as Ivy Bridge and now Broadwell are the first generation of products on a new process node and derive much (but not all) of their advantage from manufacturing process improvements. Over the years Intel has wavered on just what a tick should contain – it’s always more than simply porting an architecture to a new process node – but at the end of the day Broadwell is clearly derived from Haswell and as a result will take only limited liberties in improving CPU performance.

Intel's Tick-Tock Cadence
Microarchitecture   Process Node   Tick or Tock   Release Year
Conroe/Merom        65nm           Tock           2006
Penryn              45nm           Tick           2007
Nehalem             45nm           Tock           2008
Westmere            32nm           Tick           2010
Sandy Bridge        32nm           Tock           2011
Ivy Bridge          22nm           Tick           2012
Haswell             22nm           Tock           2013
Broadwell           14nm           Tick           2014
Skylake             14nm           Tock           2015

All told, Intel is shooting for a better than 5% IPC improvement over Haswell. This is similar to Ivy Bridge (4%-6%), though at this stage in the game Intel is not talking about expected clockspeeds or the resulting overall performance improvement. Intel has made it clear that they don’t regress on clockspeeds, but beyond that we’ll have to wait for further product details later this year to see how clockspeeds will compare.

To accomplish this IPC increase Intel will be relying on a number of architectural tweaks in Broadwell. Chief among these are bigger schedulers and buffers in order to better feed the CPU cores themselves. Broadwell’s out-of-order scheduling window for example is being increased to allow for more instructions to be reordered, thereby improving IPC. Meanwhile the L2 translation lookaside buffer (TLB) is being increased from 1K to 1.5K entries to reduce address translation misses.

The TLBs are also receiving some broader feature enhancements that should again improve performance. A second TLB miss handler is being added, allowing Broadwell to utilize both handlers at once to walk memory pages in parallel. Meanwhile the addition of a 1GB page mode should pay off particularly well for servers, granting Broadwell the ability to handle these very large pages on top of its existing 4KB and 2MB pages.

Meanwhile, as is often the case Intel is once again iterating on their branch predictor to cut down on missed branches and unnecessary memory operations. Broadwell’s branch predictor will see its address prediction improved for both branches and returns, allowing for more accurate speculation of impending branching operations.

Of course efficiency increases can only take you so far, so along with the above changes Intel is also making some more fundamental improvements to Broadwell’s math performance. Both multiplication and division are receiving a boost thanks to improvements in their respective hardware. Floating point multiplication is seeing a sizable reduction in instruction latency, from 5 cycles to 3 cycles, while division performance is being improved through the use of an even larger Radix-1024 (10-bit) divider. Even vector operations will see some improvements here, with Broadwell implementing a faster version of the vector gather instruction.

Finally, while it’s not clear whether these will be part of AES-NI or another instruction subset entirely, Intel is once again targeting cryptography for further improvements. To that end Broadwell will bring with it improvements to multiple cryptography instructions.

Meanwhile it’s interesting to note that in keeping with Intel’s power goals for Broadwell, throughout all of this Intel put strict power efficiency requirements in place for any architecture changes. Whereas Haswell operated at roughly a 1:1 ratio of performance to power – a 1% increase in performance could cost no more than a 1% increase in power consumption – Broadwell’s architecture improvements were required to come in at 2:1. While a 2:1 mandate is not new – Intel had one in place for Nehalem too – at this point even on the best of days meaningful IPC improvements are hard to come by at 1:1, never mind 2:1. The end result no doubt limited what performance optimizations Intel could integrate into Broadwell’s design, but it also functionally reduces the power requirements for any given performance level, furthering Intel’s goal of getting Core performance into a mobile device. In the case of Broadwell this means its roughly 5% performance improvement comes at a cost of just a 2.5% increase in power consumption.

With that said, Intel has also continued to make further power optimizations to the entire Broadwell architecture, many of which will be applicable not just to Core M but to all future Broadwell products. Broadwell will see further power gating improvements to better shut off parts of the CPU that are not in use, and more generalized design optimizations have been made to reduce power consumption of various blocks as is appropriate. These optimizations coupled with power efficiency gains from the 14nm process are a big part of the driving force in improving Intel’s power efficiency for Core M.

158 Comments
  • AnnonymousCoward - Tuesday, August 12, 2014 - link

    You should look at discrete graphics HW sales.
  • tuxRoller - Tuesday, August 12, 2014 - link

Nvidia, which has around 60% of the discrete GPU market, has a yearly revenue of around $4 billion. So, you're looking at a total market of around $7 billion.
  • Johnmcl7 - Tuesday, August 12, 2014 - link

    "Maybe not obsess, but to characterise the PC gaming market as ridiculously small, is pretty far off the mark...."

    I think the original comment was fairly accurate, even in the PC gaming market there's a large proportion of people using Intel graphics. Looking at the current Steam survey results, 75% are using Intel processors and 20% overall are using Intel graphics, which means around 1 in 3 people with Intel processors on Steam are using the onboard graphics. That means even among the gaming market there's a lot of integrated graphics in use, and that's just one small portion as I'd expect most other areas to mainly be using integrated graphics.

    There are workstation graphics cards but professionals using those are unlikely to be using consumer processors and the enthusiast/workstation processors do not have an integrated graphics card.
  • zepi - Tuesday, August 12, 2014 - link

    I have had Steam on my company laptop with just the internal GPU just to take part in the sales campaigns etc. This makes my contribution 50:50 in terms of dGPU / iGPU, even though 100% of gaming happens with the dGPU.
  • AnnonymousCoward - Tuesday, August 12, 2014 - link

    So....how do NVIDIA and ATI stay in business? Obviously many people use discrete cards. The fact you say "obsess" tells me you probably don't realize the massive performance difference, and it's not limited to gaming. CAD uses 3D.
  • AnnonymousCoward - Wednesday, August 13, 2014 - link

    Doesn't Intel make X-version CPUs that can be overclocked? The OC market is gonna be much smaller than dGPU, and they're already making a dedicated product for that.
  • Krysto - Tuesday, August 12, 2014 - link

    Because they are using that anti-competitive tactic to drive out the discrete competition. They force OEMs to buy them bundled, so more and more people say "why should I pay twice for the GPU...I'll just get the Intel one".

    It's a nasty tactic Intel has been employing for years, and unfortunately it's working. But it's terribly uncompetitive.
  • Krysto - Tuesday, August 12, 2014 - link

    It's akin to Microsoft bundling IE with Windows "Why would I need to get another browser...I'll just use IE". That tactic WORKED for Microsoft. It only stopped working when they became lazy. But they could've held their 90 percent market share with IE for a lot longer, if they hadn't gotten lazy.
  • AnnonymousCoward - Tuesday, August 12, 2014 - link

    I dunno--anyone who plans to get a discrete card is going to get one, regardless of Intel forcing it onto the CPU.

    I wonder what percent of the desktop die will be GPU. Maybe with the GPU disabled, the CPU turbo will work better since there will be less heat.
  • name99 - Tuesday, August 12, 2014 - link

    "we’ll still have to wait to see just how good the resulting retail products are, but there shouldn’t be any technical reason for why it can’t be put into a mobile device comparable to today’s 10”+ tablets. "

    There may not be TECHNICAL reasons, but there are very definite economic reasons.
    People think of tablets as cheap devices --- iPad at the high end, but the mass market at $350 or so. This CPU alone will probably cost around $300. MS is willing to pay that for Surface Pro 4; no-one else is, not for a product where x86 compatibility is not essential.
    We'll see it in ultrabooks (and various ultrabook perversions that bend or slide or pop into some sort of tablet) but we're not going to see a wave of sub-$1000 products using this.
