AMD Stock Coolers: Wraith v2

When AMD launched the Wraith cooler last year, bundled with the premium FX CPUs and the highest-performing APUs, it was a refreshing counter to the long-standing assumption that a stock cooler is not worth using if you want any sustained performance. The Wraith, and its 125W/95W silent versions, were built like third-party coolers, with a copper base/core, heatpipes, and a good fan. In our roundup of stock coolers it clearly held the top spot, easily matching $30 coolers on the market, except now it was being given away with the CPUs/APUs that needed that level of cooling.

That was essentially a trial run for the Ryzen set of Wraith coolers. For the Ryzen 7 launch, AMD will have three models in play.

These are iterative designs on the original, with minor tweaks and aesthetic changes, but the concept is still the same: a 65W near-silent design (Stealth), a 95W near-silent design (Spire), and a 95W/125W premium model (Max). The 125W model comes with an RGB light (which can be disabled); however, AMD has stated that the premium model is currently destined for OEM and system integrator (SI) designs only. The other two will be bundled with the CPUs or potentially be available at retail. We have asked to get the set in for review, to add to our Wraith numbers.

Memory Support

Every generation of CPUs comes with a 'maximum supported memory frequency'. This is typically given as a single number, aligned with the industry-standard JEDEC sub-timings. Technically, most processors will run above and beyond this frequency, as the integrated memory controller can support a lot more; however, the manufacturer only officially guarantees operation up to the maximum supported frequency, on qualified memory kits.

For consumer chips, the frequency is usually given as a single number no matter how many memory slots are populated. In reality, putting more memory modules in play places more strain on the memory controller, so there is a higher potential for errors. This is why qualification is important: if the vendor guarantees a speed, any configuration of a qualified kit should work at that speed.

In the server market, a CPU manufacturer might list support a little differently: a supported frequency that depends on how many memory modules are in play, and what type of modules they are. This arguably becomes very confusing when applied at a consumer level, but at a server level it is expected that OEMs can handle the varying degrees of support.

For Ryzen, AMD is taking the latter approach. What we have is DDR4-2666 for the simplest configuration, one single-rank UDIMM per channel, moving down to DDR4-1866 for the most strenuous configuration, two dual-rank UDIMMs per channel. For our testing we ran the memory at DDR4-2400, for lack of a fixed option; we will have memory scaling numbers in due course. At present, ECC is supported.
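To make the tiered support concrete, here is a minimal sketch of that support table as a lookup. The DDR4-2666 and DDR4-1866 endpoints are from AMD as quoted above; the intermediate steps (DDR4-2400 for one dual-rank module per channel, DDR4-2133 for two single-rank modules per channel) follow AMD's published launch guidance, and the helper function itself is purely illustrative:

    # Sketch: officially supported DDR4 speed (MT/s) on Ryzen, by DIMM population.
    # Endpoint speeds are from AMD as quoted above; the intermediate steps follow
    # AMD's launch support table. This helper is illustrative only.
    SUPPORTED_SPEED = {
        (1, "single-rank"): 2666,  # one SR UDIMM per channel
        (1, "dual-rank"):   2400,  # one DR UDIMM per channel
        (2, "single-rank"): 2133,  # two SR UDIMMs per channel
        (2, "dual-rank"):   1866,  # two DR UDIMMs per channel
    }

    def max_supported_ddr4(modules_per_channel: int, rank: str) -> int:
        """Return the officially guaranteed DDR4 speed for a configuration."""
        return SUPPORTED_SPEED[(modules_per_channel, rank)]

    # A fully populated board with dual-rank modules is only guaranteed at
    # DDR4-1866, even though the memory controller can often run faster.
    print(max_supported_ddr4(2, "dual-rank"))  # 1866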

Comments

  • deltaFx2 - Wednesday, March 8, 2017

    @Meteor2: No. Consumer GPUs have poor throughput for double-precision FP, so you can't push those workloads to the GPU (unless you own one of those super-expensive Nvidia compute cards). Apparently, many rendering/video editing programs use GPUs for preview but do the final render on the CPU; quality reasons, apparently, which might be related to DP FP. I'm not the expert, so if you know otherwise, I'd be happy to be corrected and educated. Also, you could make the same argument about AVX-256.

    The quoted paragraph is probably the only balanced statement in that entire review. Compare the tone of that review with the AT review above.

    On an unrelated note, there's the larger question of running games at low res on top-end GPUs and comparing frame rates that far exceed human perception. I know, they have to do something, so why not just do this. The rationale is: "in future, a faster GPU will create a bottleneck". If this is true, it should be easy to demonstrate, right? Just dig through a history of Intel desktop CPUs paired with increasingly powerful GPUs and see how it trends. Not one reviewer has proven that this is true; it's being taken as gospel. OTOH, plenty of folks seem happy with their Sandy Bridge + Nvidia 1080, so clearly the bottleneck isn't here 5 years after SB. Maybe, just maybe, it's because the differences are imperceptible?

    Ryzen clearly has some bottlenecks but the whole gaming thing is a tempest in a tea-cup.
  • theuglyman0war - Thursday, March 9, 2017

    ZBRUSH

    Probably 90% of all 3D assets that are created from concept (NOT SCANNED) went through ZBrush at some point.

    Which means no GPU acceleration at all.
    Renderman, Maxwell, Vray, and Arnold still all use CPU rendering, as do a mountain of other renderers. Arnold will be getting a GPU option, but the two popular GPU renderers are Otoy Octane and Redshift...
    They have their excellent, expensive place. But the majority of rendering out there is still suffered through in software, and that will always be a valid concern as long as these renderers come FREE built into major DCC applications.
  • theuglyman0war - Thursday, March 9, 2017

    Saw that same "GPU trumps CPU rendering" validity comment... and had a good laugh.
    I'll remember to spread that around every time I see Renderman, Vray, Arnold, or Maxwell rendering sans GPU.
    Or the next time a Mercury engine update negates all non-Quadro GPU acceleration.

    To be fair, a lot of creative pros and tech artists seem to disagree with me, but...
    The only time, between pulling verts in Maya and brushing a surface in ZBrush, that I really feel I am suffering buckets of tears and desire a new CPU (still on an i7-980X) is when I am cussing out a progress bar that is teasing me with its slow progress. And that means CORES! Encoding... uncompressing... rendering! Otherwise I could probably not notice day to day on a ten-year-old CPU (excluding CPU-bound gaming of course... talking about day-to-day vert pulling).
    I was just as productive in 2007 as I am today.
  • MaidoMaido - Saturday, March 4, 2017

    Been trying to find a review including practical benchmarks for common video editing / motion graphics applications like After Effects, Resolve, Fusion, Premiere, Element 3D.

    In a lot of these tasks the multithreading is not always the best; as a result, the quad-core 6700K often outperforms the more expensive Xeon and 5960X, etc.
  • deltaFx2 - Saturday, March 4, 2017

    I would recommend this response to the GamersNexus hit piece: https://www.reddit.com/r/Amd/comments/5xgonu/analy...

    The i5 level performance is a lie.
  • Notmyusualid - Saturday, March 4, 2017

    @ deltaFx2

    Sorry, not reading a 4k-word response. I'll wait for Anand to finish its Ryzen reviews before I draw any final conclusions.
  • Meteor2 - Tuesday, March 7, 2017

    @deltaFX2 RE: the 4k-word Reddit 'rebuttal': what that person seems to be saying is that once you've converted your $500 Ryzen 1800X into an 8C/8T chip, _then_ it beats a $240 i5, while still falling short of the $330 i7. Out of the box, it has worse gaming performance than either Intel chip.

    That's not exactly a ringing endorsement.

    The analysis in the Anandtech forums, which concludes that in a certain narrow and low power band a heavily down-clocked 1800X happens to get excellent performance/W, isn't exactly thrilling either.
  • deltaFx2 - Wednesday, March 8, 2017

    @Meteor2: The Anandtech forum thing: perf/watt matters for servers and laptops. Take a look at the IPC numbers too. His average is that Zen == Broadwell IPC, and ~10% behind Sky/Kaby Lake (except for AVX-256 workloads). That's not too shabby at all for a $300 part.

    You completely missed the point of the reddit rebuttal. The GN reviewer drops i5s from plenty of tests citing "methodological reasons", but then says R7==i5 in gaming. The argument is that plenty of games use >4 threads and that puts i5 at a disadvantage.
  • tankNZ - Sunday, March 5, 2017

    Yes, I agree; it's even better than okay for gaming: http://smsh.me/li3a.png
  • deltaFx2 - Monday, March 6, 2017

    You may wish to see this though: https://forums.anandtech.com/threads/ryzen-strictl... It is way, way more detailed than any tech media review site can hope to get. No, it's got nothing to do with gaming; gaming isn't the story here. AMD's current situation in x86 market share has little to do with gaming, and a lot to do with perf/watt.

    I'll quote the author: "850 points in Cinebench 15 at 30W is quite telling. Or not telling, but absolutely massive. Zeppelin can reach absolutely monstrous and unseen levels of efficiency, as long as it operates within its ideal frequency range."
