Sandy Bridge: Inside the Core Microarchitecture

In the modern era, we are talking about chips roughly 100-200 mm2 in size having up to eight high performance cores, on the latest variants of Intel's 14nm process or AMD's processes at GlobalFoundries (with TSMC upcoming). Back with Sandy Bridge, 32nm was a different beast. The manufacturing process was still planar without FinFETs, implemented Intel's second generation High-K Metal Gate, and achieved 0.7x scaling compared to the previous, larger 45nm node. The Core i7-2600K was the largest quad core die, at 216 mm2 and 1.16 billion transistors; by comparison, the latest Coffee Lake processors on 14nm offer eight cores at ~170 mm2 and over 2 billion transistors.

The big leap of the era was in the microarchitecture. Sandy Bridge promised (and delivered) a significant uplift in raw clock-for-clock performance over the previous generation Westmere processors, and forms the basis for Intel's latest chips almost a decade later. A number of key innovations first came to retail with Sandy Bridge, and these have been built upon and iterated many times over to reach the high performance we have today.

Through this page, I have largely used Anand's initial report on the microarchitecture back in 2010 as a base, with additions based on a modern view of this processor design.

A Quick Recap: A Basic Out-of-Order CPU Core

For those new to CPU design, here's a quick run-through of how an out-of-order CPU works. Broadly speaking, a core is divided into a front end and a back end, and instructions first come into the front end.

In the front end, we have the prefetchers and branch predictors that predict and pull in instructions from main memory. The idea here is that if you can predict what data and instructions are needed before they are needed, then you can save time by having that data close to the core when it is required. The instructions are then placed into a decoder, which translates each x86 instruction into a number of 'micro-operations' that the core can then use. There are different types of decoders for simple and complex instructions: simple x86 instructions map easily to one micro-op, whereas more complex instructions can decode to several. The ideal situation is a decode ratio as close to 1:1 as possible, although an instruction is sometimes split into more micro-ops if those micro-ops can be run in parallel (instruction level parallelism, or ILP).
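
To make the decode ratio concrete, here is a toy decoder in Python. The instruction-to-uop mapping is entirely invented for illustration; real x86 decode behavior is far more involved.

```python
# A toy model of x86-style decode, with a made-up instruction-to-uop table.
DECODE_TABLE = {
    "add reg, reg": 1,   # simple ALU op: one micro-op
    "add reg, mem": 2,   # load + ALU: splits into two micro-ops
    "push reg":     2,   # store-address + store-data style split
    "rep movsb":    4,   # complex op: handed off to a microcode sequencer
}

def decode(instructions):
    """Return the micro-op stream and the decode ratio (uops per instruction)."""
    uops = []
    for insn in instructions:
        count = DECODE_TABLE.get(insn, 1)
        uops.extend(f"{insn}#u{i}" for i in range(count))
    return uops, len(uops) / len(instructions)

stream = ["add reg, reg", "add reg, mem", "push reg"]
uops, ratio = decode(stream)
print(f"{len(stream)} instructions -> {len(uops)} uops (ratio {ratio:.2f})")
```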

If the core has a 'micro-operation cache', or uOp cache, then the result of each decoded instruction ends up there. The core can detect, before an instruction is decoded, whether that particular instruction has been decoded recently, and use the result of the previous decode rather than performing a full decode again, which wastes power.
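
In software terms, the uOp cache acts like memoization of the decoder. A toy model, reusing the hypothetical decode() sketch above:

```python
# Memoizing decode results: a hit skips the expensive decoders entirely.
class UopCache:
    def __init__(self):
        self.entries = {}                    # instruction address -> cached uops
        self.hits = self.misses = 0

    def fetch(self, address, insn):
        if address in self.entries:          # hit: decoders stay powered down
            self.hits += 1
            return self.entries[address]
        self.misses += 1                     # miss: pay the full decode cost
        uops, _ = decode([insn])
        self.entries[address] = uops
        return uops
```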

The uOps then sit in an allocation queue, which for modern cores usually means that the core can detect whether the instructions are part of a simple loop, or whether it can fuse uOps together to make the whole stream go quicker. The uOps are then fed into the re-order buffer, which forms the 'back end' of the core.

In the back end, starting with the re-order buffer, uOps can be rearranged depending on when the data each micro-op needs becomes available. This buffer can rename and allocate uOps depending on where they need to go (integer vs. FP), and depending on the core, it can also act as a retire station for completed instructions. After the re-order buffer, uOps are fed into the scheduler in a desired order to ensure data is ready and uOp throughput is as high as possible.

The scheduler passes the uOps into the execution ports (the units that do the compute) as required. Some cores have a unified scheduler across all ports, while others split the scheduler between integer operations and vector-style operations. Most out-of-order cores have anywhere from 4 to 10 ports (or more), and these execution ports do the math required on the data given the instruction passed through the core. Execution ports can take the form of a load unit (load from cache), a store unit (store into cache), an integer math unit, a floating point math unit, vector math units, special division units, and a few others for special operations. Once an execution port is done, the data can be held in a cache for reuse or pushed out to main memory, while the instruction feeds into the retire queue and is finally retired.
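
To tie this together, here is a highly simplified single-cycle issue loop: any uOp whose source operands are ready goes to a free port that supports its operation type. The port map is invented for illustration and is not any real core's assignment.

```python
# Hypothetical port map: which operation kinds each execution port accepts.
PORTS = {0: {"alu", "fp_mul"}, 1: {"alu", "fp_add"}, 2: {"load"}, 3: {"store"}}

def issue_cycle(waiting, ready_operands):
    """One scheduler cycle: issue ready uops to free, capable ports."""
    issued, free_ports = [], set(PORTS)
    for uop in list(waiting):
        if not uop["srcs"] <= ready_operands:     # source data not ready yet
            continue
        for port in sorted(free_ports):
            if uop["kind"] in PORTS[port]:
                issued.append((port, uop["name"]))
                free_ports.remove(port)
                waiting.remove(uop)
                break
    return issued

waiting = [{"name": "mul1", "kind": "fp_mul", "srcs": {"r1", "r2"}},
           {"name": "ld1",  "kind": "load",   "srcs": {"r3"}}]
print(issue_cycle(waiting, {"r1", "r2", "r3"}))   # [(0, 'mul1'), (2, 'ld1')]
```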

This brief overview doesn't touch on some of the mechanisms that modern cores use to help caching and data lookup, such as transaction buffers, stream buffers, tagging, etc., some of which get iterative improvements every generation. But usually when we talk about 'instructions per clock' as a measure of performance, we aim to get as many instructions through the core (through both the front end and the back end) as possible. This relies on the decode strength of the front end, the prefetchers, the re-order buffers, and maximising execution port use, along with retiring as many completed instructions as possible every clock cycle.

With this in mind, hopefully this gives context to some of Anand's analysis from when Sandy Bridge launched.

Sandy Bridge: The Front End

Sandy Bridge’s CPU architecture is evolutionary from a high level viewpoint but far more revolutionary in terms of the number of transistors that have been changed since Nehalem/Westmere. The biggest change for Sandy Bridge (and all microarchitectures since) is the micro-op cache (uOp cache).

https://images.anandtech.com/reviews/cpu/intel/sandybridge/arch/uopcache.jpg

In Sandy Bridge, there's now a micro-op cache that caches instructions as they're decoded. There's no sophisticated algorithm here; the cache simply grabs instructions as they're decoded. When SB's fetch hardware grabs a new instruction, it first checks to see if the instruction is in the micro-op cache; if it is, then the cache services the rest of the pipeline and the front end is powered down. The decode hardware is a very complex part of the x86 pipeline, and turning it off saves a significant amount of power.

The cache is direct mapped and can store approximately 1.5K micro-ops, which is effectively the equivalent of a 6KB instruction cache. The micro-op cache is fully included in the L1 instruction cache and enjoys approximately an 80% hit rate for most applications. You get slightly higher and more consistent bandwidth from the micro-op cache vs. the instruction cache. The actual L1 instruction and data caches haven't changed; they're still 32KB each (for a total of 64KB of L1).
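
As a side note, 1.5K micro-ops being "the equivalent of a 6KB instruction cache" works out to roughly 4 bytes per x86 instruction on average. And because the structure is direct mapped, each address can live in exactly one slot, which keeps lookup trivial at the cost of conflict misses. A minimal sketch with invented geometry (not the uop cache's real dimensions):

```python
# Direct-mapped placement: every address maps to exactly one slot.
NUM_SETS, LINE_BYTES = 32, 64      # invented geometry for illustration

def cache_slot(address):
    line = address // LINE_BYTES   # which cache line the address falls in
    return line % NUM_SETS         # one fixed slot per line: no search needed

# Two hot lines that share a slot evict each other (a conflict miss):
print(cache_slot(0x1000), cache_slot(0x1800))   # both map to slot 0
```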

All instructions that are fed out of the decoder can be cached by this engine, and as I mentioned before, it's a blind cache: all instructions are cached. Least recently used data is evicted as the cache runs out of space. This may sound a lot like Pentium 4's trace cache, but with one major difference: it doesn't cache traces. It really looks like an instruction cache that stores micro-ops instead of macro-ops (x86 instructions).

Along with the new micro-op cache, Intel also introduced a completely redesigned branch prediction unit. The new BPU is roughly the same footprint as its predecessor, but is much more accurate. The increase in accuracy is the result of three major innovations.

https://images.anandtech.com/reviews/cpu/intel/sandybridge/arch/BPU.jpg

The standard branch predictor is a 2-bit predictor. Each branch is marked in a table as taken/not taken with an associated confidence (strong/weak). Intel found that nearly all of the branches predicted by this bimodal predictor have a strong confidence. In Sandy Bridge, the bimodal branch predictor uses a single confidence bit for multiple branches rather than using one confidence bit per branch. As a result, you have the same number of bits in your branch history table representing many more branches, which can lead to more accurate predictions in the future.
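
For reference, the 2-bit saturating-counter scheme is only a few lines of code. This sketch models the classic baseline with one 2-bit counter per table entry, not Sandy Bridge's shared-confidence-bit variant:

```python
# Classic bimodal predictor: per-entry 2-bit counters. States 0-1 predict
# not-taken, 2-3 predict taken; one miss in a "strong" state only weakens
# confidence instead of flipping the prediction outright.
class BimodalPredictor:
    def __init__(self, table_bits=10):
        self.table = [2] * (1 << table_bits)     # start as weakly taken
        self.mask = (1 << table_bits) - 1

    def predict(self, pc):
        return self.table[pc & self.mask] >= 2   # True = predict taken

    def update(self, pc, taken):
        i = pc & self.mask
        self.table[i] = min(3, self.table[i] + 1) if taken else max(0, self.table[i] - 1)
```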

Branch targets also got an efficiency makeover. In previous architectures there was a single size for branch targets; however, it turns out that most targets are relatively close. Rather than storing all branch targets in large structures capable of addressing far away targets, SNB now includes support for multiple branch target sizes. With smaller target sizes there's less wasted space and now the CPU can keep track of more targets, improving prediction speed.
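
The space saving is easy to see in code: store each target in the smallest signed field that holds its displacement from the branch. The field widths here are invented for illustration:

```python
# Pick the smallest signed field width that can encode a branch displacement.
def smallest_target_field(branch_pc, target_pc, widths=(8, 16, 32)):
    displacement = target_pc - branch_pc
    for w in widths:
        if -(1 << (w - 1)) <= displacement < (1 << (w - 1)):
            return w                       # a nearby target needs few bits
    return 64                              # fall back to a full-size entry

print(smallest_target_field(0x401000, 0x401020))    # 8: most targets are close
print(smallest_target_field(0x401000, 0x7fff0000))  # 32: far targets cost more
```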

Finally we have the conventional method of increasing the accuracy of a branch predictor: using more history bits. Unfortunately this only works well for certain types of branches that require looking at long patterns of instructions, and not well for shorter, more common branches (e.g. loops, if/else). Sandy Bridge's BPU partitions branches into those that need a short history and those that need a long history for accurate prediction.
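
A common way to use history bits is a gshare-style predictor, where the program counter is XORed with the recent branch-outcome history to index the counter table. Reusing the BimodalPredictor sketch above, the history length becomes an explicit tuning knob: longer histories capture long patterns but alias and dilute short, common branches, which is exactly the tension the partitioning addresses.

```python
# Gshare-style global-history predictor built on the bimodal sketch above.
class GsharePredictor(BimodalPredictor):
    def __init__(self, table_bits=10, history_bits=8):
        super().__init__(table_bits)
        self.history = 0                             # last N branch outcomes
        self.hmask = (1 << history_bits) - 1

    def predict(self, pc):
        return self.table[(pc ^ self.history) & self.mask] >= 2

    def update(self, pc, taken):
        i = (pc ^ self.history) & self.mask
        self.table[i] = min(3, self.table[i] + 1) if taken else max(0, self.table[i] - 1)
        self.history = ((self.history << 1) | int(taken)) & self.hmask
```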

A Physical Register File

Compared to Westmere, Sandy Bridge moves to a physical register file. In Core 2 and Nehalem, every micro-op had a copy of every operand that it needed. This meant the out-of-order execution hardware (scheduler/re-order buffer/associated queues) had to be much larger, as it needed to accommodate the micro-ops as well as their associated data. Back in the Core Duo days that was 80 bits of data. When Intel implemented SSE, the burden grew to 128 bits. With AVX, however, we now have potentially 256-bit operands associated with each instruction, and the amount the scheduling/re-ordering hardware would have to grow to support the AVX execution hardware Intel wanted to enable was simply too much.

https://images.anandtech.com/reviews/cpu/intel/sandybridge/arch/PRF.jpg

A physical register file stores micro-op operands in the register file itself; as a micro-op travels down the OoO engine, it only carries pointers to its operands and not the data itself. This significantly reduces the power consumption of the out-of-order execution hardware (moving large amounts of data around a chip eats tons of power), and it also reduces die area further down the pipe. The die savings are translated into a larger out-of-order window.
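
A minimal model of renaming against a physical register file, with illustrative sizes (not Sandy Bridge's real PRF dimensions). The key property is that a renamed uOp carries only small indices, never the operand data itself:

```python
# Register renaming with a physical register file: values live in one central
# file, and in-flight uops carry pointers (indices) into it.
class RenameStage:
    def __init__(self, num_physical=160):
        self.prf = [None] * num_physical             # the only copy of each value
        self.free = list(range(16, num_physical))    # 0-15 hold architectural state
        self.map = {f"r{i}": i for i in range(16)}   # architectural -> physical

    def rename(self, dest, srcs):
        src_ptrs = [self.map[s] for s in srcs]       # read mappings before writing
        self.map[dest] = self.free.pop(0)            # fresh physical reg for dest
        return {"dest": self.map[dest], "srcs": src_ptrs}

r = RenameStage()
print(r.rename("r1", ["r1", "r2"]))   # e.g. {'dest': 16, 'srcs': [1, 2]}
```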

The die area savings are key as they enable one of Sandy Bridge’s major innovations: AVX performance.

AVX

The AVX instructions support 256-bit operands, which as you can guess can eat up quite a bit of die area. The move to a physical register file enabled Intel to increase the OoO buffers to properly feed a higher throughput floating point engine. Intel clearly believes in AVX, as it extended all of its SIMD units to 256 bits wide. The extension is done at minimal die expense. Nehalem has three execution ports and three stacks of execution units:

https://images.anandtech.com/reviews/cpu/intel/sandybridge/arch/AVX1.jpg

Sandy Bridge allows 256-bit AVX instructions to borrow 128 bits of the integer SIMD datapath. This minimizes the impact of AVX on the execution die area while enabling twice the FP throughput: you get two 256-bit AVX operations per clock (plus one 256-bit AVX load).
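
As a back-of-the-envelope check on what "two 256-bit AVX operations per clock" buys, assume one 256-bit FP add and one 256-bit FP multiply can issue each cycle:

```python
# Peak single-precision throughput under the stated assumption of one 256-bit
# add plus one 256-bit multiply per clock; sustained rates depend on the code.
floats_per_op = 256 // 32     # 8 single-precision lanes per 256-bit operation
ops_per_clock = 2             # one add + one multiply
clock_ghz = 3.4               # i7-2600K base frequency

peak_gflops = floats_per_op * ops_per_clock * clock_ghz
print(f"~{peak_gflops:.1f} GFLOPS single-precision per core")   # ~54.4
```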

https://images.anandtech.com/reviews/cpu/intel/sandybridge/arch/AVX2.jpg

Granted, you can't mix 256-bit AVX and 128-bit integer SSE ops; however, remember that SNB now has larger buffers to help extract more instruction level parallelism (ILP).

Load and Store

The improvements to Sandy Bridge’s FP performance increase the demands on the load/store units. In Nehalem/Westmere you had three LS ports: load, store address and store data.

https://images.anandtech.com/reviews/cpu/intel/sandybridge/arch/memory1.jpg

In SNB, the load and store address ports are now symmetric so each port can service a load or store address. This doubles the load bandwidth compared to Westmere, which is important as Intel doubled the peak floating point performance in Sandy Bridge.
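
The bandwidth arithmetic is straightforward, assuming the 16-byte (128-bit) data path per load port of this generation:

```python
# Loads per cycle: Westmere has one dedicated load port, while in Sandy Bridge
# either of the two symmetric ports can service a load.
bytes_per_load_port = 16      # assuming a 128-bit path per port
print("Westmere:    ", 1 * bytes_per_load_port, "bytes/cycle")   # 16
print("Sandy Bridge:", 2 * bytes_per_load_port, "bytes/cycle")   # 32
```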

https://images.anandtech.com/reviews/cpu/intel/sandybridge/arch/memory2.jpg

There are some integer execution improvements in Sandy Bridge as well, although they are more limited. Add-with-carry (ADC) instruction throughput is doubled, while large multiplies (64 × 64 bit) see a ~25% speedup.

https://images.anandtech.com/reviews/cpu/intel/sandybridge/arch/otherarch.jpg

Comments

  • cwolf78 - Friday, May 10, 2019 - link

    Is there any way you can do a similar comparison with the i5 CPUs? I have a 3570k OC'd to 4.2 GHz and it's starting to struggle in some games. E.g., I can get over 60 fps in AC Odyssey for the most part, but there are all sorts of annoying spikes where the min FPS will tank for whatever reason. I'm running a GTX 970 that's OC'ed pretty close to a 980, and I don't know if it would be worth upgrading that or if my CPU would strangle anything faster. Also, what's the performance difference between an OC 3570k and an OC 3770k in modern games?
  • RSAUser - Saturday, May 11, 2019 - link

    This is mostly due to it being 4 threads; that's also why I wouldn't go with anything <8 threads, as you'll see it happen more and more as we all move to higher core counts.
    Plus Ubisoft has probably got the buggiest/worst-optimized games; the last one I can think of that was all right was Black Flag, mostly because they didn't change the engine and just changed the story line/map.
  • uibo - Friday, May 10, 2019 - link

    At what voltage did you run the 2600k?
  • abufrejoval - Friday, May 10, 2019 - link

    I owned pretty much every iteration of Intel and AMD since the 80286. I pushed them all on relatives and friends to make space for the next iteration.

    But everything since Sandy Bridge stuck around, both because there was no reason to move them out and I had kids to serve. Mine was a 2600 no-K, because I actually wanted to test VT-d and for that you needed to use a Q-chipset and -K was not supported.

    Still drives the gaming rig of one of my sons, while another has the Ivy Bridge (K this time, but not delivering beyond 4 GHz). Got Haswell Xeons, 4 and 18 core, a Broadwell as an 8-core Xeon-D, Skylake in notebooks, Kaby Lake i7-7700Ks in workstations and an i7-7700T in a pfSense.

    Those newer i7s were really just replacing AMD and Core 2 systems being phased out over time, not because I was hoping for extra performance: AT made it very clear for years that that simply won't happen anymore with silicon physics.

    What I really wanted from Intel (more cores instead of a useless iGPU, more PCIe lanes, more memory channels) I eventually got from the e5-2696v3 I scored for less than $700 on eBay.

    Zen simply came a little too late; a couple of Phenom II x4-6 and three generations of APUs taught me not to expect great performance or efficiency from AMD, but at least they were budget and had become reliable (unlike the K2-K3+s).

    With the family all settled and plenty of systems in all sizes and shapes the only reason to buy CPU any time soon would be to replace failed parts. And fail they just don’t, at least not the CPUs.

    And then I must have 100GB or so of DDR3, which I really don't want to buy again as DDR4 or 5. DDR3-2400 is really just fine with Kaby Lake.

    I overclocked a bit here and there, mostly out of curiosity. But I got bitten far too often by reliability issues when I was actually working on the machines and not playing around, so I have kept them very close to stock for years now. And then it's simply not worth the trouble, because the GPU/SSD/RAM is far more important or nothing will help anyway (Windows updates…).

    Nice write-up, Ian, much appreciated and not just because it confirms my own impressions.
  • WasHopingForAnHonestReview - Friday, May 10, 2019 - link

    Nice reply. Thanks. My 2600k is just cranking along as my darknet browsing machine
  • RSAUser - Saturday, May 11, 2019 - link

    The Zen chips actually have pretty good efficiency; I was expecting way worse before they came out, since AMD hadn't been competitive in years. Zen 2 will be quite interesting, mostly due to the node shrink hopefully bringing way lower power envelopes and maybe cheaper CPUs, since we all need those savings for the mess that the GPU market has become.
  • Targon - Tuesday, May 14, 2019 - link

    Don't discount the significant IPC improvements that are expected from the third generation Ryzen processors (not the APUs, which are Zen+ based from what I have read).
  • evilspoons - Friday, May 10, 2019 - link

    Still have a 2600k at 4.6 GHz with proper turbo support (slows down when idle). Went from GTX 680s in SLI to a single GTX 1080 and it plays most games just fine.

    That being said I'd love to throw in a Ryzen 7 2700X but only if one of you pays for it... 😁
  • rocky12345 - Friday, May 10, 2019 - link

    Nice flashback review, thank you. I am still on an i7 2600K@5.1GHz with 32GB DDR3@2400MHz and very tight timings. It took a while to dial in the memory, since Sandy does not really support this speed as gracefully as its newer brothers & sisters do. I have 2 Samsung 512GB SSD drives in RAID zero, so plenty fast for the Windows drive and some games installed, as well as 2 4TB 7200RPM hard drives.

    I think some of the issues you were having with the OC at 4.7GHz were probably due to either memory that was not 100% stable, or the CPU being right at the edge of stability because it wanted just a tad more voltage. On my system I had random problems when it was new due to memory timings and finding just the right voltage for the CPU. After getting all of that dialed in, my system is pretty much 100% stable at 5.1GHz and DDR3@2400MHz and has been running this way since 2011.

    So going from these charts, the gaming results for mine at 5.1GHz would place my system faster than a stock i7 7700K and a slightly overclocked one as well. Though I am 100% sure an i7 7700K fully overclocked would get better FPS, since its IPC is what, 10%-12% better than Sandy clock for clock, and then if you throw in AVX2 my Sandy would get hammered.

    I am going to be upgrading my system this summer, not because I feel my system is slow, but more because I know that due to its age something could fail, such as the main board or CPU, and it would be costly to replace either of those, so it's time for the big upgrade soon. I will probably move this system to secondary duties and have it as a backup gaming system, or there for my friends to use when we get together for a gaming session. I have not fully decided which way to go, but I am leaning towards AMD Ryzen with Zen 2 and at least an 8/16 CPU, and maybe a 12/24 CPU if they release more than 8 cores on mainstream desktops.
  • isthisavailable - Friday, May 10, 2019 - link

    Still running an i5 3450. Runs fine and maintains 60 FPS 95% of the time.
