The AMD Zen and Ryzen 7 Review: A Deep Dive on 1800X, 1700X and 1700
by Ian Cutress on March 2, 2017 9:00 AM EST

Thoughts and Comparisons
Throughout AMD's road to releasing details on Zen, we had the chance to examine information on the microarchitecture at each point in the design/launch cycle, often earlier than we had expected. Part of this is because, internally, AMD is very proud of its design, but some extra details (such as the extent of XFR, or the size of the micro-op cache) AMD held close to its chest until the actual launch. With the data we have at hand, we can fill out a lot of information for a direct comparison chart against AMD’s last product and Intel’s current offerings.
| CPU uArch Comparison | AMD Zen (8C/16T, 2017) | AMD Bulldozer (4M/8T, 2011) | Intel Skylake / Kaby Lake (4C/8T, 2015/2017) | Intel Broadwell (8C/16T, 2014) |
|---|---|---|---|---|
| L1-I Size | 64KB/core | 64KB/module | 32KB/core | 32KB/core |
| L1-I Assoc | 4-way | 2-way | 8-way | 8-way |
| L1-D Size | 32KB/core | 16KB/thread | 32KB/core | 32KB/core |
| L1-D Assoc | 8-way | 4-way | 8-way | 8-way |
| L2 Size | 512KB/core | 1MB/thread | 256KB/core | 256KB/core |
| L2 Assoc | 8-way | 16-way | 4-way | 8-way |
| L3 Size | 2MB/core | 1MB/thread | 2MB/core | 1.5-3MB/core |
| L3 Assoc | 16-way | 64-way | 16-way | 16/20-way |
| L3 Type | Victim | Victim | Write-back | Write-back |
| L0 ITLB Entries | 8 | - | - | - |
| L0 ITLB Assoc | ? | - | - | - |
| L1 ITLB Entries | 64 | 72 | 128 | 128 |
| L1 ITLB Assoc | ? | Full | 8-way | 4-way |
| L2 ITLB Entries | 512 | 512 | 1536 | 1536 |
| L2 ITLB Assoc | ? | 4-way | 12-way | 4-way |
| L1 DTLB Entries | 64 | 32 | 64 | 64 |
| L1 DTLB Assoc | ? | Full | 4-way | 4-way |
| L2 DTLB Entries | 1536 | 1024 | - | - |
| L2 DTLB Assoc | ? | 8-way | - | - |
| Decode | 4 uops/cycle | 4 Mops/cycle | 5 uops/cycle | 4 uops/cycle |
| uOp Cache Size | 2048 | - | 1536 | 1536 |
| uOp Cache Assoc | ? | - | 8-way | 8-way |
| uOp Queue Size | ? | - | 128 | 64 |
| Dispatch / cycle | 6 uops/cycle | 4 Mops/cycle | 6 uops/cycle | 4 uops/cycle |
| INT Registers | 168 | 160 | 180 | 168 |
| FP Registers | 160 | 96 | 168 | 168 |
| Retire Queue | 192 | 128 | 224 | 192 |
| Retire Rate | 8/cycle | 4/cycle | 8/cycle | 4/cycle |
| Load Queue | 72 | 40 | 72 | 72 |
| Store Queue | 44 | 24 | 56 | 42 |
| ALU | 4 | 2 | 4 | 4 |
| AGU | 2 | 2 | 2+2 | 2+2 |
| FMAC | 2x128-bit | 2x128-bit + 2x MMX 128-bit | 2x256-bit | 2x256-bit |
Bulldozer uses AMD-coined macro-ops, or Mops, which are internal fixed length instructions and can account for 3 smaller ops. These AMD Mops are different to Intel's 'macro-ops', which are variable length and different to Intel's 'micro-ops', which are simpler and fixed-length.
Excavator has a number of improvements over Bulldozer, such as a larger L1-D cache and a 768-entry L1 BTB; however, we were never given a full run-down of that core in similar fashion, and no high-end desktop version of Excavator will be made.
This isn’t an exhaustive list of features by any means (thanks to CPU World, Real World Tech and WikiChip for filling in some blanks), and it doesn’t paint the whole story. On the power side of the equation, for example, AMD states that it can clock gate parts of the core and CCX that are not required, to save power, and that the L3 runs on its own clock domain shared across the cores. The list also omits instruction latencies, which are critical to a workload: it matters whether a MUL operation takes 3, 4 or 5 cycles to complete. We have been told that FPU loads are two cycles quicker, which is something. Cache latency will also feature heavily in performance, and all we are told at this point is that the L2 and L3 are lower latency than in previous designs.
A number of these features we’ve already seen on Intel x86 CPUs, such as move elimination to reduce power, or the micro-op cache. The micro-op cache is a piece of the puzzle we wanted to know more about from day one, especially the hit rate it achieves for a given workload. The use of new instructions will also affect workloads that rely on them. Some users will lament the lack of true single-instruction AVX2 support, though I suspect AMD would argue that the die area cost would be excessive at this time. That’s not to say AMD won’t support it in the future – we were told quite clearly that a number of features originally listed internally for Zen didn’t make it, either due to time constraints or a lack of transistors.
We are told that AMD has a clear internal roadmap for CPU microarchitecture design over the next few generations. As long as AMD does not linger on 14nm the way it did at 28/32nm, and keeps the IO updated over the coming years, a competitive clock-for-clock product (even against Broadwell) with good efficiency will be a welcome return.
574 Comments
mapesdhs - Sunday, March 5, 2017 - link
Yet another example of manipulation which wouldn't be tolerated in other areas of commercial product. I keep coming across examples in the tech world where products are deliberately crippled, prices get hiked, etc., but because it's tech stuff, nobody cares. Media never mentions it.

Last week I asked a seller site about why a particular 32GB 3200MHz DDR4 kit they had listed (awaiting an ETA) was so much cheaper than the official kits for Ryzen (same brand of RAM please note). Overnight, the seller site changed the ETA to next week but also increased the price by a whopping 80%, making it completely irrelevant. I've seen this happen three times with different products in the last 2 weeks.
Ian.
HomeworldFound - Sunday, March 5, 2017 - link
If they were pretty cheap then use your logic, placeholder prices happen. If they had no ETA the chances are that they had no prices. I don't see a shortage of decent DDR4 so it definitely isn't a supply and demand problem. Perhaps you need to talk to the manufacturer to get their guideline prices.

HomeworldFound - Sunday, March 5, 2017 - link
Not really. If developers wanted to enhance AMD platforms, or it was actually worth it, they'd have done it by now. It's now just an excuse to explain either underperformance or an inability to work with the industry.

Notmyusualid - Tuesday, March 7, 2017 - link
@sedra It certainly should not be forgotten, that is for sure.
Rene23 - Monday, March 6, 2017 - link
yet people here mentioned multiple times "settled in 2009"; pretending it is not happening anymore, sick :-/

GeoffreyA - Monday, March 6, 2017 - link
I kind of vaguely knew that benchmarks were often unfairly optimised for Intel CPUs; but I never knew this detailed information before, and from such a reputable source: Agner Fog. I know that he's an authority on CPU microarchitectures and things like that. Intel is evil. Even now with Ryzen, it seems the whole software ecosystem is somewhat suboptimal on it, because of software being tuned over the last decade for the Core microarchitecture. Yet, despite all that, Ryzen is still smashing Intel in many of the benchmarks.

Outlander_04 - Monday, March 6, 2017 - link
Settled in 2009. Not relevant to optimisation for Ryzen in any way.
Rene23 - Monday, March 6, 2017 - link
settled in 2009 does not mean their current compiler and libraries are not doing it anymore, e.g. it could simply not run the best SSE/AVX code path, disguised as simply not matching new AMD CPUs properly.

cocochanel - Saturday, March 4, 2017 - link
One thing that is not being mentioned by many is the increase in savings when you buy a CPU + mobo. Intel knows how to milk the consumer. On their 6-8 core flagships, a mobo with a top chipset will set you back $300-400 or even more. That's a lot for a mobo. Add the overpriced CPU. I expect AMD mobos to offer better value. Historically, they always did.

On top of that, a VEGA GPU will probably be a better match for Ryzen than an Nvidia card, but I say probably and not certainly.
If I were to replace my aging gaming rig for Christmas, this would be my first choice.
mapesdhs - Sunday, March 5, 2017 - link
Bang goes the saving when one asks about a RAM kit awaiting an ETA and the seller hikes the price by 80% overnight (see my comment above).