Comparing with Intel's Best

Comparing CPUs in tables is always a risky game: the simple numbers hide a lot of nuance and trade-offs. But if we approach it with caution, we can still extract quite a bit of information from it.

| Feature | IBM POWER8 | Intel Broadwell (Xeon E5 v4) | Intel Skylake |
|---|---|---|---|
| L1-I cache | 32 KB, 8-way | 32 KB, 8-way | 32 KB, 8-way |
| L1-D cache | 64 KB, 8-way | 32 KB, 8-way | 32 KB, 8-way |
| Outstanding L1-cache misses | 16 | 10 | 10 |
| Fetch Width | 8 instructions | 16 bytes (+/- 4-5 x86) | 16 bytes (+/- 4-5 x86) |
| Decode Width | 8 | 4 µops | 5-6* µops (* µop cache hit) |
| Issue Queue | 64 + 15 branch + 8 CR = 87 | 60 unified | 97 unified |
| Issue Width/Cycle | 10 | 8 | 8 |
| Instructions in Flight | 224 (GCT, SMT-8 mode) | 192 (ROB) | 224 (ROB) |
| Architectural regs | 32 (ST), 2x32 (SMT-2) | 16 | 16 |
| Rename regs | 92 (ST), 2x92 (SMT-2) | 168 | 180 |
| Loads per cycle | 4 | 2 | 2 |
| Load bandwidth (per unit) | 16B/cycle | 32B/cycle | 32B/cycle |
| Load queue size | 44 entries | 72 entries | 72 entries |
| Stores per cycle | 2 | 1 | 1 |
| Store bandwidth | 16B/cycle | 32B/cycle | 32B/cycle |
| Store queue size | 40 entries | 42 entries | 56 entries |
| Int. pipeline length | 18 stages | 19 stages; 14 stages from µop cache | 19 stages; 14 stages from µop cache |
| TLB | 2048 entries, 4-way | 128 I + 64 D (L1); 1024-entry L2, 8-way | 128 I + 64 D (L1); 1536-entry L2, 8-way |
| Page support | 4 KB, 64 KB, 16 MB, 16 GB | 4 KB, 2/4 MB, 1 GB | 4 KB, 2/4 MB, 1 GB |
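
If you want to sanity-check the cache figures above against your own system, the Linux kernel exposes the same geometry through sysfs. The short sketch below is our own illustration (not part of IBM's or Intel's documentation): it walks /sys/devices/system/cpu/cpu0/cache/ and prints level, type, size and associativity for every cache visible to cpu0. The paths are standard Linux sysfs, but not every platform populates every attribute.

```c
/*
 * A minimal sketch (our illustration, not from the article): print the cache
 * geometry that the Linux kernel reports for cpu0 via sysfs, so the L1
 * figures in the table above can be checked on your own machine.
 */
#include <stdio.h>

/* Read one short sysfs attribute of a cpu0 cache index into buf. */
static int read_attr(int idx, const char *attr, char *buf, size_t len)
{
    char path[256];
    snprintf(path, sizeof(path),
             "/sys/devices/system/cpu/cpu0/cache/index%d/%s", idx, attr);
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    if (!fgets(buf, (int)len, f))
        buf[0] = '\0';
    fclose(f);
    for (char *p = buf; *p; p++)        /* strip the trailing newline */
        if (*p == '\n')
            *p = '\0';
    return 0;
}

int main(void)
{
    char level[16], type[32], size[32], ways[16];

    for (int idx = 0; idx < 16; idx++) {
        if (read_attr(idx, "level", level, sizeof(level)) < 0)
            break;                      /* no more caches reported for cpu0 */
        read_attr(idx, "type", type, sizeof(type));
        read_attr(idx, "size", size, sizeof(size));
        read_attr(idx, "ways_of_associativity", ways, sizeof(ways));
        printf("L%-2s %-12s %8s  %s-way\n", level, type, size, ways);
    }
    return 0;
}
```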

Both CPUs are very wide, brawny out-of-order (OoO) designs, especially compared to the ARM server SoCs.

Despite the lower decode and issue width, Intel has gone a little bit further than IBM to optimize single-threaded performance. Notice that the IBM core has neither a loop stream detector nor a µop cache to reduce the branch misprediction penalty. Furthermore, the load buffers of the Intel microarchitecture are deeper, and the total number of instructions in flight for one thread is higher. The TLB of the IBM POWER8 has more entries, while Intel favors speedy address translations by offering a small level-1 TLB backed by an L2 TLB. Such a small TLB is less effective if many threads are working on huge amounts of data, but it favors a single thread that needs fast virtual-to-physical address translation.
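
The page-size row in the table is part of the same trade-off: larger pages mean each TLB entry covers more memory. As a rough, Linux-specific illustration of our own (not taken from the article), the snippet below backs a 64 MB buffer with 2 MB huge pages via mmap(MAP_HUGETLB), so the whole working set needs only 32 TLB entries instead of 16,384 with 4 KB pages. It assumes huge pages have been reserved beforehand (e.g. via /proc/sys/vm/nr_hugepages).

```c
/*
 * A hedged, Linux-specific illustration (not from the article) of why the
 * "Page support" row matters: backing a 64 MB buffer with 2 MB huge pages
 * means it needs only 32 TLB entries instead of 16,384 with 4 KB pages.
 * Assumes huge pages were reserved first, e.g.:
 *   echo 64 > /proc/sys/vm/nr_hugepages
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define BUF_SIZE (64UL * 1024 * 1024)   /* 64 MB working set */

int main(void)
{
    void *buf = mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");    /* most likely: no huge pages reserved */
        return 1;
    }
    memset(buf, 0, BUF_SIZE);           /* touch every page once */
    printf("Backed %lu MB with 2 MB huge pages\n", BUF_SIZE >> 20);
    munmap(buf, BUF_SIZE);
    return 0;
}
```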

On the flip side of the coin, IBM has done its homework to make sure that 2-4 threads can really boost the performance of the chip, while Intel's choices may still lead to relatively small SMT-related performance gains in quite a few applications. For example, the instruction TLB, the µop cache (Decoded Stream Buffer) and the instruction issue queues are split in two when two threads are active. This reduces the hit rate in the µop cache, and the 16-byte fetch width looks a little bit on the small side. Let us see what IBM did to make sure a second thread can result in a more significant performance boost.
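
To get a feel for how much those shared and partitioned resources cost a second thread, one can pin two busy threads either to two SMT siblings of one core or to two separate physical cores and compare the runtimes. The sketch below is our own rough illustration of that idea (not the methodology used in this article): it takes two logical CPU IDs on the command line, pins one worker to each with pthread_setaffinity_np, and times a port-hungry integer loop.

```c
/*
 * A rough sketch of our own (not this article's methodology): pin two busy
 * threads to the two logical CPU IDs given on the command line and time them.
 * Run it once with two SMT siblings of one core and once with two separate
 * physical cores to see how much the shared core costs a second thread.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ITERS 200000000UL

static void *worker(void *arg)
{
    int cpu = *(int *)arg;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    /* Four independent multiply-add chains keep the integer ports busy, so
     * two threads on one physical core have to share them. */
    unsigned long a = 1, b = 2, c = 3, d = 4;
    for (unsigned long i = 0; i < ITERS; i++) {
        a = a * 2862933555777941757UL + 3037000493UL;
        b = b * 2862933555777941757UL + 3037000493UL;
        c = c * 2862933555777941757UL + 3037000493UL;
        d = d * 2862933555777941757UL + 3037000493UL;
    }
    return (void *)(a ^ b ^ c ^ d);     /* keep the loop from being optimized away */
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <cpu-id-1> <cpu-id-2>\n", argv[0]);
        return 1;
    }
    int cpus[2] = { atoi(argv[1]), atoi(argv[2]) };
    pthread_t t[2];
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &cpus[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &end);

    double sec = (end.tv_sec - start.tv_sec) +
                 (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("CPUs %d and %d: %.2f s\n", cpus[0], cpus[1], sec);
    return 0;
}
```

Compile with gcc -O2 -pthread; on Linux, /sys/devices/system/cpu/cpuN/topology/thread_siblings_list shows which logical CPUs share a physical core.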


  • JohanAnandtech - Thursday, July 21, 2016

    I don't think so, we just expressed it in ns so you can compare with IBM's numbers more easily. Can you elaborate why you think they are wrong?
  • Taracta - Thursday, July 21, 2016

    Sorry, mixed up cycles with ns especially after reading the part about transition for the Intel from L3 to MEM.
  • Sahrin - Thursday, July 21, 2016

    Yikes. Pictures without captions. Anandtech is terrible about this. ALWAYS caption your pictures, guys.
  • djayjp - Thursday, July 21, 2016

    Are bar graphs not a thing anymore...?
  • Drumsticks - Thursday, July 21, 2016

    Afaik, Anandtech has always used the chart when presenting things like SPEC. I'd guess it'd be for clutter reasons, but the exact reason is up to the editors to mention.
  • JohanAnandtech - Thursday, July 21, 2016

    The reason for me is simply to give you the exact numbers and allow people to do their own comparisons.
  • Drumsticks - Thursday, July 21, 2016

    Just to be clear, the Xeon CPU used today is 3 times more expensive than the Power8 CPU benchmarked? That's really impressive, isn't it? The Power8 has a pretty significant power increase, but if it's 43% faster, that cuts into the perf/w gap.

    I know we've only looked at SPEC so far in round 2, but this looks like a good showing for IBM. How big is the efficiency gap between 22nm SOI and 14nm FinFet? Any estimates?
  • Michael Bay - Thursday, July 21, 2016

    Selling at a loss is hardly impressive, especially in IBM's case. This thing is literally their last chance.
  • tipoo - Friday, July 22, 2016

    Is it at a loss, or is it just not at crazy Intel margins?
  • Michael Bay - Saturday, July 23, 2016

    They'd have to have a healthy margin to offset all the R&D, plus IBM as a whole is not in a good financial position. Consider they sold their fab capability not so long ago.
