An Unusual Launch Cycle: OEMs Now, Individual Units Later

The launch of Bristol Ridge APUs for desktop takes a slightly different strategy from previous AMD launches. Typically we expect to see CPUs/APUs and OEM systems with that hardware launched on the day of the announcement, with stock reaching shelves over the following weeks. In order to do this, AMD needs to work with all the OEMs (HP, Lenovo, Dell), the platform partners (ASUS, GIGABYTE, MSI, ASRock) and potentially the memory manufacturers (Crucial, Kingston, G.Skill, ADATA, etc.) to synchronize a launch with the expected hardware, platform control and settings.

This time around, AMD has focused on the OEMs first, with all-in-one PCs and desktop systems. The big OEMs typically develop their own PCBs and manage the full gamut of support, while remaining mindful of firmware that can be a work in progress right up to the launch date. This allows the launch to focus on a few models of complete, polished systems, rather than the comparative free-for-all of custom-built machines. One might argue that the standard motherboard designers take longer to design their products, as each board carries their brand on offer, whereas HP or Lenovo sells the system as a brand, so not every stage has to be promoted, advertised and polished in the same way.

Of course, from an enthusiast perspective, I would prefer everything to come out on day one, along with a deep dissection of the platform. But because Bristol Ridge shares a platform with the upcoming new microarchitecture, Zen, AMD has to balance the wishes of OEMs against product expectations. As a result, the base announcement from AMD was a somewhat brief overview, and we delayed writing this piece until we were able to source certain nuggets of information that make sense when individual units (and motherboards) are on sale for DIY users, as well as some insights into what Zen might offer.

By focusing on OEMs first, however, AMD makes it more difficult for us to source review units! Watch this space; we're working on it.

The CPU Roadmap

A lot of the recent talk regarding AMD's future in the desktop CPU space has revolved around its next-generation CPU architecture, Zen. In August, AMD disclosed a significant part of the underlying Zen microarchitecture, detailing a micro-op cache, a layered memory hierarchy, dual schedulers and other information. Nonetheless, Zen is initially aimed at the high-end desktop (HEDT) market, and AMD has always stated that Zen will share the AM4 platform with new mainstream CPUs, under the Bristol Ridge and Stoney Ridge names, initially based on an updated Excavator microarchitecture.

AMD's roadmap puts the latest announcements in that mainstream segment: the company is moving from a three-socket line-up of AM3, FM2+ and AM1 to a single AM4 platform from top to bottom, with the budget element perhaps becoming more embedded-focused. This has positives and negatives, which is part of the reason why AMD is staggering the release of Bristol Ridge and the 7th Generation APUs between OEMs and PIBs (boxed processors for DIY builders).

The positive of a unified platform is that AMD's OEM customers get a one-size-fits-all solution spanning budget to premium, which makes OEM designs easier to translate from a high-powered platform to a budget system. The downside is variety and compatibility: if a vendor designs a platform purely for a budget system, with fewer safeguards, then a user cannot simply drop in the most powerful CPU/APU available. Luckily we are told that all AM4 systems should be dual-channel, which moves away from the single-channel Carrizo/Carrizo-L problem we saw in notebooks late last year.

Comments

  • ddriver - Saturday, September 24, 2016 - link

    Hey, at least Trump is only preposterous and stupid. Hillary is all that PLUS crazy and evil. She is just as racist as Trump, if not more so, but she is not in the habit of being honest; she'd prefer to claim the votes of minorities.

    Politics is a joke and the current situation is a very good example of it. People deserve all the shit that's coming their way if they still put faith in the political process after this.
  • ClockHound - Friday, September 23, 2016 - link

    +101

    Particularly enjoyed the term: "walled garden spyware milking station" model

    OK, not really "enjoyed"; cringed at the accuracy, however. ;-)
  • msroadkill612 - Wednesday, April 26, 2017 - link

    An adage I liked: "If it's free, YOU are the product."
  • hoohoo - Friday, September 23, 2016 - link

    I see what you did there! Nicely done.
  • patrickjp93 - Saturday, September 24, 2016 - link

    No, they aren't. If Geekbench optimized for x86 the way it does for ARM, the difference in per-clock performance would be nearly 5x.
  • ddriver - Saturday, September 24, 2016 - link

    You have no idea what you are talking about. Geekbench is very much optimized; there are basically three types of optimization:

    optimization done by the compiler - it eliminates redundant code, vectorizes loops and all that good stuff, and it happens automatically

    optimization by using intrinsics - doing manually what the compiler does automatically; sometimes you can do better, but in general compiler optimizations are very mature and very good at what they do

    "optimization" of the type "if (CPUID != INTEL) doWorse()" - harmful optimization that doesn't really optimize anything in the true sense of the word, but deliberately chooses a less efficient code path to purposely harm the performance of a competitor - such optimizations are ALWAYS in the favor of the TOP DOG - be that intel or nvidia - companies who have excess of money to spend on such idiotic things. Smaller and less profitable companies like amd or arm - they don't do that kind of shit.

    Finally, performance is not magic; you can't "optimize" and suddenly get 5x the performance. Process and TDP are limiting factors: there is only so much performance you can get out of a chip produced on a given process for a given thermal budget, and that assumes a perfectly efficient design. A 5 W 20 nm x86 chip could not possibly be any faster than a 5 W 20 nm ARM chip. Intel has always had a slight edge in process, but if you manufacture an ARM and an x86 chip on an identical process (not just the claimed node size) with the same thermal budget, the ARM chip will be a tad faster, because the architecture is less bloated and more efficient.

    It is part of a dummy's belief system that ARM chips are somehow fundamentally incapable of running professional software - on the contrary, hardware-wise they are perfectly capable; it's just that nobody bothers to write professional software for them.
  • patrickjp93 - Saturday, September 24, 2016 - link

    I have a Bachelor's in computer science and specialized in high-performance parallel, vectorized, and heterogeneous computing. I've disassembled Geekbench on x86 platforms, and it doesn't even use SSE or anything higher - and SSE is an ancient Pentium III-era instruction set.

    It does not happen automatically if you don't use the right compiler flags and don't have your data aligned to allow the instructions to work.
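    As a sketch of those two prerequisites - flags and alignment - in C (hypothetical helper, assuming C11 and GCC or Clang):

        #include <stdlib.h>

        /* Flags: build with vector instructions enabled, e.g. gcc -O3 -mavx2;
           at -O0 no auto-vectorization happens at all. */

        /* Alignment: 32-byte alignment suits AVX loads/stores. C11's
           aligned_alloc wants the size rounded up to a multiple of the
           alignment, hence the rounding below. */
        float *alloc_f32(size_t n) {
            size_t bytes = ((n * sizeof(float) + 31) / 32) * 32;
            return aligned_alloc(32, bytes);
        }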

    You need intrinsics for a lot of things. Clang and GCC both have huge compiler bug forums filled with examples of where people beat the compilers significantly.

    Yes, you can get 5x the performance by optimizing. Geekbench only handles 1 datum at a time on Intel hardware vs. the 8 you can do with AVX and AVX2. Assuming you don't choke on bandwidth, you can get an 8x speedup.
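    For a concrete sense of "1 datum vs. 8", a minimal sketch in C (hypothetical kernel; assumes 32-byte-aligned buffers, n divisible by 8, and a build with e.g. gcc -O2 -mavx):

        #include <immintrin.h>
        #include <stddef.h>

        /* Scalar: one float per loop iteration. */
        void saxpy_scalar(float *y, const float *x, float a, size_t n) {
            for (size_t i = 0; i < n; i++)
                y[i] = a * x[i] + y[i];
        }

        /* AVX: eight floats per loop iteration. */
        void saxpy_avx(float *y, const float *x, float a, size_t n) {
            __m256 va = _mm256_set1_ps(a);            /* broadcast a into 8 lanes */
            for (size_t i = 0; i < n; i += 8) {
                __m256 vx = _mm256_load_ps(x + i);    /* load 8 floats */
                __m256 vy = _mm256_load_ps(y + i);
                vy = _mm256_add_ps(_mm256_mul_ps(va, vx), vy);
                _mm256_store_ps(y + i, vy);           /* store 8 floats */
            }
        }

    That is 8 lanes per instruction on paper; in practice memory bandwidth usually caps the gain well below 8x, which is the caveat above.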

    ARM is not more efficient on merit, and x86 is not bloated by any stretch. Both use microcode now. ARM is no longer RISC by any strict definition.

    Cavium has. Oracle has. Google has. Amazon has. In all cases ARM could not keep up with Avoton and Xeon D in performance/watt/$ and thus the industry stuck with Intel instead of Qualcomm or Cavium.
  • Toss3 - Sunday, September 25, 2016 - link

    This is a great post, and I just wanted to post an article by PC World where they discussed these things in simpler terms: http://www.pcworld.com/article/3006268/tablets/tes...

    As you can see, the performance gains aren't really that great when it comes to real-world usage, and as such we should probably start to use other benchmarks as well, not just Geekbench or browser JavaScript performance, as indicators of the actual performance of these SoCs, especially when comparing one platform to another.
  • amagriva - Sunday, September 25, 2016 - link

    Good post. For anyone interested, a good paper on the subject: http://etn.se/images/expert/FD-SOI-eQuad-white-pap...
  • ddriver - Sunday, September 25, 2016 - link

    I've been using GCC mostly, and in most cases, after doing explicit vectorization, I found no performance benefit; analyzing the assembly afterwards revealed that the compiler had done a very good job of vectorizing wherever possible.
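    One way to check that without reading the disassembly by hand is GCC's vectorizer report (a sketch; the -fopt-info flag assumes GCC 4.9 or newer):

        #include <stddef.h>

        /* Compile with: gcc -O3 -march=native -fopt-info-vec-optimized vmul.c
           GCC then prints a note for each loop it managed to vectorize,
           along the lines of "vmul.c:8:5: note: loop vectorized". */
        void vmul(float *c, const float *a, const float *b, size_t n) {
            for (size_t i = 0; i < n; i++)
                c[i] = a[i] * b[i];    /* simple elementwise loop, auto-vectorizable */
        }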

    However, I am highly skeptical of your claims; I'll believe it when I see it. I can't find the link now, but last year I read a detailed analysis showing that the A9X's performance per watt is better than Skylake's over most of the A9X's clock range. And not in Geekbench, but in SPEC.

    As for Geekbench, you make it sound as if they actually disabled vectorization explicitly, which would be an odd thing. It's not entirely clear what you mean by "1 datum at a time", but if you mean they are using scalar rather than vector instructions, that would be quite odd too. Luckily, I have better things to do than rummage about in Geekbench machine code, so I will take your word that it is not properly optimized.

    And sure, 256-bit-wide SIMD will have higher throughput than 128-bit SIMD, but nowhere near 8 or even 5 times. And that doesn't make ARM chips any less capable of powering devices that are more than useless toys. Those chips are more powerful than workstations were some 10 years ago, but their usability is nowhere near that. As the benchmarks from the link Toss3 posted indicate, the A9X is only some ~40% slower than the i5-4300U in the "true/real world benchmarks", and that's a 15-watt chip, whereas the A9X is what, 5-ish watts or something like that? And ARM is definitely more efficient once you account for Intel's process advantage. This will become obvious if Intel ever dares to manufacture ARM cores on the same process as its own products. And it is not because of ISA bloat but because of design bloat.

    Naturally, ARM chips are a low-margin product; one cannot expect a $50 chip to outperform a $300 chip. But the gap appears to be closing, especially keeping in mind the brick wall that process scaling is going to hit in the next decade. A $50 chip running equal to a $300 (and much wider design) chip from two years ago opens up a lot of possibilities, but I am not seeing any of them being realized by the industry.
