Building a core like Zen 2 requires more than just building a core. The interplay between the core, the SoC design, and the platform requires different internal teams to come together to create a level of synergy that working separately lacks. What AMD has done with its chiplet design and Zen 2 shows great promise, not only in taking advantage of smaller process nodes, but also in driving one path for the future of compute.

When going down a process node, the main advantage is lower power. That can be spent in a few ways: lower power for operation at the same performance, or a larger power budget to do more. We see this with core designs over time: as more power budget opens up, or as different units within the core get more efficient, that extra power is used to drive cores wider, hopefully increasing the raw instruction rate. It's not an easy equation to solve, as there are many trade-offs: one such example in the Zen 2 core is the reduced L1 instruction cache, which has allowed AMD to double the micro-op cache, a change AMD expects to help both performance and power overall. Going into the minutiae of what might be possible, at least at a high level, is like playing with Lego for these engineers.
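To put rough numbers on that trade-off, here is a minimal illustrative sketch using the textbook dynamic-power relation P ≈ C·V²·f. The capacitance, voltage, and frequency values below are made-up placeholders rather than AMD or TSMC figures, and real silicon adds static leakage on top; the point is only to show the two ways a node shrink can be spent.

```python
# Illustrative only: classic dynamic (switching) power relation P ~ C * V^2 * f.
# All constants are placeholder values, not AMD/TSMC figures.

def dynamic_power(c_eff: float, volts: float, freq_ghz: float) -> float:
    """Approximate switching power in watts for an effective switched
    capacitance (arbitrary units), supply voltage (V), and clock (GHz)."""
    return c_eff * volts ** 2 * freq_ghz

old_node  = dynamic_power(c_eff=10.0, volts=1.20, freq_ghz=4.0)   # ~57.6 W
# Option 1: keep the same clock and take the voltage drop as pure power savings.
same_perf = dynamic_power(c_eff=10.0, volts=1.05, freq_ghz=4.0)   # ~44.1 W
# Option 2: spend the saved budget on a wider core and a higher clock instead.
more_perf = dynamic_power(c_eff=11.5, volts=1.05, freq_ghz=4.4)   # ~55.8 W

print(f"old node:                      {old_node:.1f} W")
print(f"same performance, lower power: {same_perf:.1f} W")
print(f"same power budget, more done:  {more_perf:.1f} W")
```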

All that being said, Zen 2 looks a lot like Zen: it is part of the same family, so the resemblance is expected. What AMD has done with the platform, enabling PCIe 4.0 and putting a design in place to rid the server processors of their NUMA-like environment, is going to help AMD in the long run. The outlook is good for AMD here, depending on how high it can drive the frequency of the server parts, and Zen 2 plus Rome is going to remove a good number of the questions that customers on the fence had about Zen.

Overall, AMD has quoted a +15% core performance improvement with Zen 2 over Zen+. With the core changes, at a high level, that certainly looks feasible. Users focused on performance will love the new 16-core Ryzen 9 3950X, and the processor seems nicely efficient at 105W, so it will be interesting to see what happens at lower power. We're also anticipating a very strong Rome launch over the next few months, especially with features like doubled FP performance and QoS, and the raw multithreaded performance of 64 cores is going to be an interesting disruptor to the market, especially if priced effectively. We'll be getting the hardware on hand soon to present our findings when the processors launch on July 7th.

Comments

  • Teutorix - Tuesday, June 11, 2019

    If TDPs are accurate they should reflect power consumption.

    If a chip needs 95W cooling it's using 95W of power. The heat doesn't come out of nowhere.
  • zmatt - Tuesday, June 11, 2019

    I think technically it would be drawing more than its TDP. The heat generated by electronics is waste due to the inefficiency of semiconductors. If you had a perfect conductor with zero resistance in a perfect world then it shouldn't make any heat. However the TDP cannot exceed power draw as that's where the heat comes from. How much TDP differs from power draw would depend on a lot of things such as what material the semiconductor is made of, silicon, germanium etc. And I'm sure design also factors in a great deal.

    If you read Gamers Nexus, they occasionally measure real power draw on systems, https://www.gamersnexus.net/hwreviews/3066-intel-i...
    And you can see that draw massively exceeds TDP in some cases, especially at the high end. This makes sense, if semiconductors were only 10% efficient then they wouldn't perform nearly as well as they do.
  • Teutorix - Tuesday, June 11, 2019

    "I think technically it would be drawing a more than its TDP"

    Yeah, but if a chip is drawing more power than its TDP it is also producing more heat than its TDP. Making the TDP basically a lie.

    "The heat generated by electronics is waste due to the inefficiency of semi conductors. If you had a perfect conductor with zero resistance in a perfect world then it shouldn't make any heat"

    Essentially yes, there is a lower limit on power consumption but it's many orders of magnitude below where we are today.

    "How much TDP differs from power draw would depend on a lot of things such as what material the semiconductor is made or, silicon, germanium etc. And I'm sure design also factors in a great deal."

    No. TDP = the "intended" thermal output of the device. The thermal output is directly equal to the power input. There's nothing that will ever change that. If your chip is drawing 200W, it's outputting 200W of heat, end of story.

    Intel defines TDP at base clocks, but nobody expects a CPU to sit at base clocks even in extended workloads. So when you have a 9900K, for example, its TDP is 95W, but only when it's at 3.6GHz. If you get up to its all-core boost of 4.7GHz it's suddenly drawing 200W sustained, assuming you have enough cooling.

    Speaking of cooling: if you buy a 9900K with a 95W TDP you'd be forgiven for thinking that a Hyper 212 with a max capacity of 180W would be more than capable of handling this chip. NOPE. Say goodbye to that 4.7GHz all-core boost.

    "If you read Gamers Nexus, they occasionally measure real power draw on systems, https://www.gamersnexus.net/hwreviews/3066-intel-i...
    And you can see that draw massively exceeds TDP in some cases, especially at the high end. This makes sense, if semiconductors were only 10% efficient then they wouldn't perform nearly as well as they do."

    None of that makes any difference. TDP is supposed to represent the cooling capacity needed for the chip. If a "95W" chip can't be sufficiently cooled by a 150W cooler there's a problem.

    Both Intel and AMD need to start quoting TDPs that match the boost frequencies they use to market the chips.
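To see why a base-clock TDP and a sustained all-core boost figure can sit so far apart, here is a rough back-of-the-envelope sketch using the standard P ∝ V²·f scaling for switching power. The voltages below are assumptions for illustration only, since real 9900K voltage/frequency curves vary by chip and board.

```python
# Back-of-the-envelope: scale a base-clock TDP up to an all-core boost estimate
# using P ~ V^2 * f. The voltages are guesses for illustration, not measurements.

base_tdp_w  = 95.0   # Intel's rated TDP at the 3.6 GHz base clock
base_freq   = 3.6    # GHz
boost_freq  = 4.7    # GHz, sustained all-core boost
base_volts  = 1.00   # assumed core voltage at base clock
boost_volts = 1.25   # assumed core voltage needed for the boost clock

scale = (boost_freq / base_freq) * (boost_volts / base_volts) ** 2
print(f"estimated sustained boost power: {base_tdp_w * scale:.0f} W")  # ~194 W
```

Under those assumptions the estimate lands close to the ~200W sustained draw mentioned above, which is why a cooler sized only for the 95W rating can leave boost performance on the table.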
  • Cooe - Tuesday, June 11, 2019

    ... AMD DOES include boost in their TDP calculations (unlike Intel), and always have. They make their methodology for this calculation freely available & explicit.
  • Spoelie - Wednesday, June 12, 2019

    Look at these power tables for 2700X
    https://www.anandtech.com/show/12625/amd-second-ge...

    => You only hit the 'TDP' figure at close to full loading, so "frequency max" is not limited by TDP but by the silicon.
    => Slightly lowering frequency *and voltage* really adds up the power savings over many cores. The load table of the 3700 will look quite different from the 3600X's on the whole. The 3700 will probably lose out in some medium-threaded scenarios (not lightly and not heavily threaded).
  • Gastec - Wednesday, June 12, 2019

    That's not actually the real power consumption. Most likely you will get a 3700X at 70-75 W according to the software readouts, but a bit more if measured with a multimeter. Add to that the inefficiency of the PSU, say 85-90% efficient, and you end up at about 85 W of real power draw at the wall. Somewhat better than my current 110W i7-860 or the 150+W Intel 9000 series parts, I would say :)
  • xrror - Monday, June 10, 2019

    funny you say that. AMD TDP and Intel TDP differ. I think.

    HEY IAN, does AMD still measure TDP as "real" (total) dissipation power or Intel's weaksauce "Typical" dissipation power?
  • Teutorix - Tuesday, June 11, 2019

    Intel rate TDP at base clocks. AMD do something a little more complex.

    Neither of them reflect real world power consumption for sustained workloads.
  • FreckledTrout - Tuesday, June 11, 2019

    In desktops they are simply starting points for the cooling solution needed. They do a lot better in the laptop/tablet space, where TDPs make or break designs.
  • Cooe - Tuesday, June 11, 2019

    Yes they do. A 2700X pulls almost exactly 105W under the kind of conditions you describe. Just because Intel's values are complete nonsense doesn't mean they all are.
