AMD’s Industry Problem

A significant number of small form factor and portable devices have been sold since the start of the century - this includes smartphones, tablets, laptops, mini-PCs and custom embedded designs. Each of these markets is separated by numerous facets: price, performance, mobility, industrial design, application, power consumption, battery life, style, marketing and regional influences. At the heart of all these applications is the CPU, which takes input, performs logic, and provides output depending on both the nature of the device and the interactions made. Both the markets for the devices and the effort placed into manufacturing the processors are large and complicated. As a result we have several multinational companies hiring hundreds or thousands of engineers and investing billions of dollars each year into processor development, design, fabrication and implementation. These companies, either by developing their own intellectual property (IP) or by licensing and then modifying the IP of others, aim to make unique products with elements that differentiate them from everyone else. The goal is then to distribute and sell, so their products end up in billions of devices worldwide.

The market for these devices is worth several hundred billion dollars every year, so to say competition is fierce is somewhat of an understatement. There are several layers between designing a processor and the final product: marketing the processor, establishing a relationship with an original equipment manufacturer (OEM) to create a platform in which the processor is applicable, finding an entity that will sell the platform under its own name, and then having the resources (distribution, marketing) at the end of the chain to get the devices into the hands of the end user (or enterprise client). This level of chain complexity is not unique to the technology industry and is a fairly well established route for many industries, although some take a more direct approach and keep each stage in house, designing the IP and the device before distribution (Samsung smartphones) or handling distribution internally (Tesla Motors).

In all the industries that use semiconductors, however, the fate of the processor, especially in terms of perception and integration, is often a result of what happens at the end of the line. If a user - in this case either an end user or a corporate client investing millions into a platform - tries multiple products with the same processor and has a bad experience, they will typically direct that negativity, and ultimately their purchase decision, toward both the device manufacturer and the manufacturer of the processor. Thus it tends to be in the best interest of all parties concerned to develop devices suitable for the end user in question and avoid negative feedback, in order to develop market share, recoup investment in research and design, and then generate a profit for the company, the shareholders, and potential future platforms. Unfortunately, with many industries suffering a race to the bottom, cheap designs often win due to budgetary constraints, which then provide a bad user experience, creating a negative feedback loop until the technology moves from 'bearable' to 'suitable'.

Enter Carrizo

One such platform, released in 2015, is AMD's Carrizo APU (accelerated processing unit). The Carrizo design is the fourth generation of the Bulldozer architecture, originally released in 2011. The base design of the microarchitecture differs from the classical design of a processor: at a high level, rather than one core having one logic pipeline with one scheduler, one integer calculation port and one floating point calculation port, giving one thread per core, we get a compute module with two logic pipelines sharing two schedulers, two integer calculation ports and only one floating point pipeline, giving two threads per module (although the concept of a module has since been presented as a dual-core segment). With the idea that the floating point pipeline is used infrequently in modern software and compilers, sharing one between two threads aims to save die area and cost, with additional optimizations therein.
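The shared-FPU trade-off described above can be illustrated with a toy model. This is a hypothetical sketch, not a description of AMD's actual hardware: the `Module` class, its pipe counts and the cycle arithmetic are all simplifying assumptions, used only to show why two integer-heavy threads scale cleanly on a Bulldozer-style module while two floating-point-heavy threads contend for the single shared FP unit.

```python
# Toy model (assumed, not AMD's real microarchitecture): a "module" has two
# integer pipelines (one per thread) but only one shared floating point unit.
from dataclasses import dataclass


@dataclass
class Module:
    int_pipes: int = 2  # one integer pipeline per thread
    fp_pipes: int = 1   # a single FP pipeline shared by both threads

    def cycles_for(self, threads):
        """Cycles to retire all ops, assuming integer work overlaps perfectly
        across the integer pipes while FP work serializes on the shared pipe."""
        int_demand = sum(t["int_ops"] for t in threads)
        fp_demand = sum(t["fp_ops"] for t in threads)
        int_cycles = -(-int_demand // self.int_pipes)  # ceiling division
        fp_cycles = -(-fp_demand // self.fp_pipes)
        return max(int_cycles, fp_cycles)


m = Module()
int_heavy = [{"int_ops": 100, "fp_ops": 0}, {"int_ops": 100, "fp_ops": 0}]
fp_heavy = [{"int_ops": 0, "fp_ops": 100}, {"int_ops": 0, "fp_ops": 100}]

print(m.cycles_for(int_heavy))  # 100: each thread runs on its own integer pipe
print(m.cycles_for(fp_heavy))   # 200: both threads queue for the one FP pipe
```

Under this (deliberately simplified) model, integer-dominated workloads pay no penalty for the shared design, which is exactly the bet the architecture makes.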

The deeper reasons for this design lie in typical operating system dynamics: the majority of logic operations that do not involve mathematical interpretation are integer based, so an optimization of the classical core design can redirect the resources and die area that would normally go into a standard core toward other, more critical operations. This is not new - we have had IP blocks in both the desktop and mobile space that share silicon resources, such as video decode codecs sharing pipelines, or hybrid memory controllers covering two memory types, to save die area while enabling both features in the market at once.

While interesting in concept, the launch of Bulldozer was muted due to its single threaded performance compared to that of AMD's previous generation product, as well as to AMD's direct competitor, Intel, whose products could ultimately process a higher number of instructions per clock per thread. AMD countered by offering more cores for the same die area, improving multithreaded performance for high-throughput workloads, but other issues plagued the launch. AMD also ran at higher frequencies to narrow the performance deficit, and the voltage required to maintain those frequencies resulted in higher power consumption compared to the competition. This was a problem for AMD as Intel started to pull ahead in processor manufacturing technology, taking advantage of lower operating voltages, especially in mobile devices.

AMD also had an issue with operating system support. Due to the shared-resource module design of the processor, Microsoft Windows 7 (the latest at the time) had trouble distinguishing between modules and threads, often failing to allocate work to the most suitable module at runtime. In some situations, two threads would run on a single module while the other modules sat idle. This issue was fixed via an optional update and in future versions of Microsoft Windows, but the fix still resulted in multiple modules being on 'active duty', affecting power consumption.
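The scheduling problem can be sketched in a few lines. This is a hypothetical illustration of the behavior described above, not Windows code: both placement functions are invented for the example, and the "two slots per module" layout is an assumption about how the logical processors were enumerated.

```python
# Hypothetical sketch of module-unaware vs module-aware thread placement.
# Assumption: each module exposes two logical slots, enumerated in order,
# so slots 0-1 belong to module 0, slots 2-3 to module 1, and so on.

def naive_placement(n_threads, n_modules):
    """Module-unaware: fill logical slots in enumeration order, so the first
    two threads land on the same module while other modules stay idle."""
    assert n_threads <= 2 * n_modules
    return [slot // 2 for slot in range(n_threads)]


def module_aware_placement(n_threads, n_modules):
    """Module-aware: give each thread its own module first; only share a
    module's pipelines once every module already has one thread."""
    assert n_threads <= 2 * n_modules
    return [i % n_modules for i in range(n_threads)]


# Two runnable threads on a four-module chip:
print(naive_placement(2, 4))         # [0, 0]: both threads share module 0
print(module_aware_placement(2, 4))  # [0, 1]: one module each, no contention
```

Note the trade-off the paragraph above hints at: the module-aware policy avoids pipeline contention but powers up two modules instead of one, which is why the fix still affected power consumption.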

As a result, despite the innovative design, AMD's level of success was determined by the ecosystem, which was rather unforgiving in both the short and long term. The obvious example is in platforms where power consumption is directly related to battery life, and maintaining the level of performance required for those platforms is always a balancing act against battery concerns. Ultimately the price of the platform is also a consideration, and in line with historical trends, in order to function in this space as a viable alternative AMD had to use aggressive pricing and adjust the platform's focus, potentially reducing profit margins, affecting future developments and shareholder return, and subsequently investment.

175 Comments

  • karakarga - Friday, February 5, 2016 - link

    Including all, AMD and nVidia are both in a funereal state! They cannot possibly open 22, 14, 10 etc. nanometer fabs.

    Intel spent 5 billion dollars to open their new Arizona factory, and they will run lower process nodes there as well. AMD and nVidia cannot make even a billion dollars of profit in these years. It is impossible for them to spend that much money on a new leading-edge factory.

    Those little tweaks cannot help them survive....
  • testbug00 - Friday, February 5, 2016 - link

    They don't build factories. TSMC and Samsung (and GloFo to a lesser extent) build factories and do R&D for these processes. Nvidia, AMD, Samsung, Qualcomm, MediaTek and many other companies design chips to the standards of TSMC/Samsung/GloFo and pay money for wafers and for running the wafers through the fab.

    The cost per wafer is meant to earn all that money back in a few years. And then the process keeps on running, sometimes for over 10 years.

    It is getting more expensive to get to smaller nodes, and the performance increases and power decreases are getting smaller. It also costs more to design chips and run wafers. So it is getting harder to find the funds to shrink, which is one of the reasons Intel has delayed their 10nm process.
  • yannigr2 - Friday, February 5, 2016 - link

    Thanks for this review. It was really needed for some time - it was missing from the internet, not just AnandTech.

    As for the laptops, they say as much as there is to tell. Small Chinese makers that no one knows exist would build better laptops than these. HP, Toshiba and Lenovo in this case - multibillion-dollar international giants that seem to have all the technicians and the R&D funds necessary - end up producing laptops with "strange" limitations, bad choices and low quality parts, and in the end set prices that, even with all those bad choices and limitations, are NOT lower than those of Intel alternatives. It's almost as if Intel makes the choices for the parts in those laptops. Maybe there is a "trololol" sticker hidden on them somewhere, addressed to AMD. I guess that way those big OEMs don't make Intel too angry, and at the same time, if there is another legal battle between AMD and Intel in the future, they will have enough excuses to show the judge in their defense if accused of supporting a monopoly.
  • ToTTenTranz - Friday, February 5, 2016 - link

    This article is what makes Anandtech great. Just keep being like this guys, your work is awesome!
    I'm going to spend some time clicking your ads, you deserve it :)

    As for the "poll" about who's to blame, IMHO it is:

    1 - AMD, for letting OEMs place Carrizo in designs with terrible panels and single-channel memory. It's just not good for the brand. "You can't pair a Carrizo with single-channel cheap RAM, because that's not how it was designed. You want to build a bottom-of-the-barrel laptop? We have Carrizo-L for you."
    I'm pretty sure Intel has this conversation regarding Core M and Atom/Pentium/Celeron solutions. I know AMD is in a worse position to negotiate, but downplaying Carrizo like this isn't good for anyone but Intel.
    In the end, what AMD needs is a guy who can properly sell their product. Someone who can convince the OEMs that good SoCs need to be paired with decent everything-else.
    $500 is plenty for a 12/13" IPS/VA screen (even if it's 720/800p), a 128GB SSD and 4+4GB DDR3L. Why not pull a Microsoft Surface and build a decent SKU for that price range so that other OEMs can follow? Contract one OEM to make the device they envisioned, sell it and see all the others follow suit.

    2 - OEMs, for apparently not having this ONE guy who calls the shots and knows that selling a crappy system automatically means losing customers. And this ONE other guy (or the same) for not knowing that constantly favoring Intel with their solutions is bound to make the whole company's life miserable if Intel's only competitor kicks the bucket. The consumer isn't meant to know these things, but the OEMs certainly are.
    It's 2016. We're way past the age of tricking the customer into buying a terrible user experience through big numbers (like "1TB drive woot"). He/she will feel like the money just wasn't well spent, and next time will buy a Mac.
    Want a $300-400 price point? Get a Carrizo-L with a 128GB SSD and a 720p IPS panel. Want a $500-700 price point? Get a Carrizo with dual-channel memory, a 256GB SSD and a 900p/1080p IPS screen.
  • joex4444 - Friday, February 5, 2016 - link

    Anything under 1080p is simply not usable. All these 1366x768 panels are just awful. I have an old netbook with one (12.1") and I've put a small SSD in there and loaded it with Ubuntu. I cannot have a Google Hangouts window open and a web browser open wide enough to view most pages. Basic web browsing + IM - 1366x768 completely fails at the task.
  • testbug00 - Friday, February 5, 2016 - link

    768p panels are fine, if they are good quality, in 11" laptops.
    900p is good up to 13", and 1080p should be the minimum for 14"+.

    Honestly I wish we had stayed with 8:5 (16:10): 1440x900, 1680x1050, 1920x1200.
  • jabber - Saturday, February 6, 2016 - link

    Indeed, 768p is fine on my 11" Samsung Chromebook but I would not tolerate it on anything bigger. IMO 1600x900 should be the minimum screen res for budget machines. 1080p for midrange and whatever you like for higher end.
  • jjpcat@hotmail.com - Monday, February 8, 2016 - link

    Resolution is not as important as the quality of the panel. I used a Lenovo X1 Carbon. It has a 14" 1080p screen. But it's a TN panel and that just makes it a pain in the ass. I am amazed that Lenovo uses such a lousy panel in its $1k+ laptop while some 10" sub-$200 tablets use IPS.
  • testbug00 - Friday, February 5, 2016 - link

    Toshiba can make a $400 chromebook with a good 1080p display. Fully agreed.

    Give it a 1080p panel, make it thicker so you can fit a larger battery and so the laptop can handle up to 35W from the APU, and do dual channel.

    When plugged in, set the APU power mode to 35W; on battery, make it 15W. It could probably be done for $500 for a 15" laptop with an A8, with a $50/100 upgrade to a 128/256GB SSD and a $50/100 upgrade to an A10/FX.
  • Dobson123 - Friday, February 5, 2016 - link

    "The APU contains integrated ‘R6’ level graphics based on GCN 1.0, for 384 streaming processors at a frequency of 533 MHz."

    Isn't it GCN 1.1?
