Execution, Load/Store, INT and FP Scheduling

Micro-ops are filtered into the Integer (INT) and Floating Point (FP) parts of the core, each of which has its own pipes and execution ports. First up is the Integer pipe, which affords a 168-entry register file that feeds four arithmetic logic units (ALUs) and two address generation units (AGUs). This allows the core to schedule six micro-ops per cycle, and each execution port has its own 14-entry scheduler queue.
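
As a rough sketch of why this width matters, consider a loop with several independent dependency chains. The hypothetical C function below (names and workload are ours, not AMD's) keeps four separate accumulators, giving an out-of-order scheduler with four ALUs and two AGUs independent adds and loads it can spread across ports rather than serializing on a single chain.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical illustration: four independent accumulators expose
   instruction-level parallelism that a wide INT back end can exploit. */
uint64_t ilp_demo(const uint64_t *a, const uint64_t *b, size_t n)
{
    uint64_t s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (size_t i = 0; i + 1 < n; i += 2) {
        s0 += a[i];       /* loads are fed through the AGU/load ports   */
        s1 += b[i];
        s2 += a[i + 1];   /* the four add chains are independent, so    */
        s3 += b[i + 1];   /* the scheduler can issue them on separate ALUs */
    }
    return s0 + s1 + s2 + s3;
}
```

This only illustrates the general principle; it is not a claim about how Zen schedules this exact code.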

The INT unit can work on two branches per cycle, but it should be noted that not all the ALUs are equal: only two of the ALUs are capable of branches, one can perform IMUL operations (signed multiply), and only one can do CRC operations. There are other limitations as well, but broadly we are told that the ALUs are symmetric except for a few specific operations; exactly which operations are affected will be disclosed closer to launch.
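
To make that asymmetry concrete, here is a hedged sketch using the standard SSE4.2 CRC32 intrinsic (the function and workload are illustrative, not from AMD). On a design where only one ALU handles CRC and only one handles IMUL, a loop dominated by either operation is bounded by that single port, no matter how many other ALUs sit idle.

```c
#include <stddef.h>
#include <stdint.h>
#include <nmmintrin.h>   /* SSE4.2 CRC32 intrinsics; compile with -msse4.2 */

uint64_t checksum_and_scale(const uint64_t *data, size_t n, uint64_t scale)
{
    uint64_t crc = 0, acc = 0;
    for (size_t i = 0; i < n; i++) {
        crc = _mm_crc32_u64(crc, data[i]); /* only the CRC-capable ALU can take this  */
        acc += data[i] * scale;            /* only the IMUL-capable ALU can take this */
    }
    return crc ^ acc;
}
```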

The INT pipe keeps track of branching instructions with differential checkpoints, to cut down on storing redundant data between branches (saving queue entries and power), but can also perform move elimination. When a simple mov between two registers occurs, instead of burning energy by physically copying the data from one register to another, the core adjusts the register rename pointers and essentially applies a new mapping table, which is a lower-power operation.
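
A minimal software model of the idea, and explicitly not AMD's implementation, is sketched below: the rename map is just a table from architectural to physical registers, and an eliminated mov becomes a table update instead of a trip through an execution port. A real core would also reference-count physical registers before freeing them, which is omitted here.

```c
#include <stdio.h>

#define ARCH_REGS 16

static int rename_map[ARCH_REGS];   /* architectural reg -> physical reg */

/* "mov dst, src" handled at rename: alias dst to src's physical register,
   so no data ever moves through an execution unit. */
static void rename_mov(int dst, int src)
{
    rename_map[dst] = rename_map[src];
}

int main(void)
{
    for (int r = 0; r < ARCH_REGS; r++)
        rename_map[r] = r;           /* identity mapping at reset */

    rename_mov(3, 7);                /* mov r3, r7 -> r3 now reads r7's value */
    printf("r3 -> p%d, r7 -> p%d\n", rename_map[3], rename_map[7]);
    return 0;
}
```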

Both the INT and FP units have direct access to the retire queue, which is 192 entries deep and can retire eight instructions per cycle. In some previous x86 CPU designs, the retire unit was a limiting factor for extracting peak performance, so having it retire faster than the six-wide dispatch should keep the queue relatively empty and away from its limit.

The Load/Store units are accessible from both AGUs simultaneously and support 72 out-of-order loads. Overall, as mentioned before, the core can perform two 16-byte loads (2x128-bit) and one 16-byte store per cycle, with the latter relying on a 44-entry store queue. The data TLB, which caches already translated addresses, is two-level here: the L1 TLB supports 64 entries at all page sizes, while the L2 TLB offers 1.5K entries but no 1G pages. The TLB and data pipes are split in this design, which relies on tags to determine whether the data is in the cache or to start the data prefetch earlier in the pipeline.
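
For a sense of what those TLB sizes cover, the back-of-the-envelope sketch below works out the address reach, assuming "1.5K" means 1536 entries, standard x86 page sizes, and that every entry maps a distinct page.

```c
#include <stdio.h>

int main(void)
{
    const long long l1_entries = 64, l2_entries = 1536;   /* "1.5K" taken as 1536 */
    const long long page_4k = 4LL << 10, page_2m = 2LL << 20;

    printf("L1 DTLB reach: %lld KB (4K pages), %lld MB (2M pages)\n",
           l1_entries * page_4k >> 10, l1_entries * page_2m >> 20);
    printf("L2 DTLB reach: %lld MB (4K pages), %lld GB (2M pages)\n",
           l2_entries * page_4k >> 20, l2_entries * page_2m >> 30);
    return 0;
}
```

In other words, the second-level TLB covers roughly 6 MB of 4K-mapped data before page walks start.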

The data cache also has direct access to the L2 cache at 32 bytes/cycle, with the 512 KB 8-way L2 cache being private to the core and inclusive. Once data is back in the L1, it can be fed to either the INT or FP pipes as required.
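
For context, 32 bytes per cycle at an assumed 3.5 GHz core clock (our figure, not a disclosed spec) works out to roughly 112 GB/s of L1-to-L2 bandwidth per core.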

Moving on to the floating point part of the core, the first thing to notice is that there are two scheduling queues here. These are listed as ‘schedulable’ and ‘non-schedulable’ queues, which allow lower-power operation when certain micro-ops are in play and also let the backup queue sort out parts of the dispatch in advance via the LDCVT. The register file is 160 entries, with direct FP-to-INT transfers as required, as well as support for accelerated recovery on flushes (when data is written to a cache further back in the hierarchy to make room).

The FP unit uses four pipes, rather than the three on Excavator, and we are told that latency for operations is reduced in Zen as well (though more information on this will come at a later date). There are two MUL and two ADD pipes in the FP unit, capable of joining to form two 128-bit FMACs, but not a single 256-bit AVX unit; to execute 256-bit AVX, the unit splits the operation into two 128-bit halves. Alongside this, each core has two AES units for cryptography, as well as decode support for SSE, AVX1/2, SHA, and legacy MMX/x87 code.
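
The practical upshot is that 256-bit AVX code still runs as normal; it is simply cracked into 128-bit halves inside the FP unit. A hedged illustration using standard AVX2/FMA intrinsics (the function below is ours, not from any AMD material):

```c
#include <immintrin.h>   /* AVX2 + FMA intrinsics; compile with -mavx2 -mfma */

/* y[i] = a * x[i] + y[i], 8 floats at a time via 256-bit FMA. */
void saxpy8(float *y, const float *x, float a, int n)
{
    __m256 va = _mm256_set1_ps(a);
    for (int i = 0; i + 8 <= n; i += 8) {
        __m256 vx = _mm256_loadu_ps(x + i);
        __m256 vy = _mm256_loadu_ps(y + i);
        /* one 256-bit vfmadd instruction, executed here as two 128-bit ops */
        _mm256_storeu_ps(y + i, _mm256_fmadd_ps(va, vx, vy));
    }
}
```

On two 128-bit FMAC pipes, this 256-bit loop and an equivalent pair of 128-bit FMA loops would be expected to sustain similar FLOPs per cycle.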

Comments

  • theuglyman0war - Saturday, March 4, 2017

    I'd like to see a lot more of the older i7 Extreme Editions covered, all the way back to Westmere, so I can sell clients on new builds with such a comparison.
  • mapesdhs - Sunday, March 5, 2017

    Which older i7s interest you specifically?
  • theuglyman0war - Saturday, March 4, 2017

    Checking what I paid last month for the i7-7700K at Microcenter...
    Although I did get the motherboard combo price sale they "usually" offer...
    The supposed $60 off for $319 is the cheapest price I found with a quick survey of Newegg, Amazon, etc... and only $20 less than what I paid! Hardly a slashed-price answer shot across the bow by Intel! Not by a long shot!
    I thought I was going to recommend the new cheap price for all my customers' new builds, but I am pushing Ryzen and AM4 for a real combined price that makes a difference. (The cheap price for enthusiast AM4 is enticing, but the loss of PCIe lanes is a concern for an extreme-CPU comparison anyway. Not so much compared to the i7-7700K though, which brings the comparison back to 16-lane parity!)
  • theuglyman0war - Saturday, March 4, 2017

    Could anyone actually point me to the amazing slashed deals that "BEAT" what I couldn't get last month by a long shot?

    (Which was $349 BEFORE rebate. In other words, it's not like there were no sales last month as well. And I see nothing now that really amounts to AMAZING compared to last month?)

    Pretty damn insulting from somewhere in the pipe? Not sure if it's Intel, or resellers clinging on to greedy margins and not reflecting the savings, to save their own asses and bottom lines due to stock considerations? Which is no excuse considering the writing was on the wall. Someone needs to do a lot better. A heck of a lot better. Particularly considering I was thinking I could jes laff off AMD with an Intel savings and now have egg on my face! :)
  • rpns - Saturday, March 4, 2017

    The 'Test Bed Setup' section could do with some more details. E.g. what BIOS version? Windows 10 build version? Any notable driver versions?

    These details aren't just useful now, but also when looking back at the review a few months down the line.
  • jorkevyn - Saturday, March 4, 2017

    Why didn't they go with quad-channel DDR4 memory? I think if they had, it might be a real i7-6950X killer.
  • sedra - Saturday, March 4, 2017

    have a look at this:
    "Many software programmers consider Intel's compiler the best optimizing compiler on the market, and it is often the preferred compiler for the most critical applications. Likewise, Intel is supplying a lot of highly optimized function libraries for many different technical and scientific applications. In many cases, there are no good alternatives to Intel's function libraries.

    Unfortunately, software compiled with the Intel compiler or the Intel function libraries has inferior performance on AMD and VIA processors. The reason is that the compiler or library can make multiple versions of a piece of code, each optimized for a certain processor and instruction set, for example SSE2, SSE3, etc. The system includes a function that detects which type of CPU it is running on and chooses the optimal code path for that CPU. This is called a CPU dispatcher. However, the Intel CPU dispatcher does not only check which instruction set is supported by the CPU, it also checks the vendor ID string. If the vendor string says "GenuineIntel" then it uses the optimal code path. If the CPU is not from Intel then, in most cases, it will run the slowest possible version of the code, even if the CPU is fully compatible with a better version."

    http://www.agner.org/optimize/blog/read.php?i=49&a...
  • HomeworldFound - Saturday, March 4, 2017

    Everyone here already knew that ten years ago.
  • Notmyusualid - Sunday, March 5, 2017

    Indeed it was.
  • sedra - Sunday, March 5, 2017

    It is worth bringing it up now.
