Final Words

In many cases, AMD's FX-8150 is able to close the gap between the Phenom II X6 and Intel's Core i5 2500K. Given the right workload, Bulldozer is actually able to hang with Intel's fastest Sandy Bridge parts. We finally have a high-end AMD CPU with power gating as well as a very functional Turbo Core mode. Unfortunately, the same complaints we've had about AMD's processors over the past few years still apply today: in lightly threaded scenarios, Bulldozer simply does not perform. To make matters worse, in some heavily threaded applications the improvement over the previous-generation Phenom II X6 simply isn't enough to justify an upgrade for existing AM3+ platform owners. AMD has released a part that is generally more competitive than its predecessor, but not consistently so. AMD also makes you choose between good single-threaded and good multithreaded performance, a tradeoff we honestly shouldn't have to make in the era of power gating and turbo cores.

Bulldozer is an interesting architecture for sure, but I'm not sure it's quite ready for prime time. AMD clearly needed higher clocks to really make Bulldozer shine, and for whatever reason it was unable to attain them. With Piledriver due out next year boasting at least 10-15% performance gains at the core level, it seems that AMD plans to aggressively address the shortcomings of this architecture. My only concern is whether a 15% improvement at the core level will be enough to close some of the gaps we've seen here today. Single-threaded performance is my biggest concern: compared to the FX-8150, the Sandy Bridge based i5 2500K enjoys a good 40-50% advantage. My hope is that future derivatives of the FX processor (perhaps based on Piledriver) will boast much more aggressive Turbo Core frequencies, which would do wonders toward eating into that advantage.
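A quick back-of-the-envelope check, treating the 10-15% figure as a compounding per-generation gain (and assuming, unrealistically, that Intel stands still):

\[ 1.15^1 = 1.15, \qquad 1.15^2 \approx 1.32, \qquad 1.15^3 \approx 1.52 \]

Two Piledriver-sized steps compound to only about a 32% gain, so closing a 40-50% single-threaded deficit takes roughly three such iterations even before Intel moves the target.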

AMD also shared with us that Windows 7 isn't really optimized for Bulldozer. Given AMD's unique multi-core module architecture, the OS scheduler needs to know when to place threads on a single module (with shared caches) vs. on separate modules with dedicated caches. Windows 7's scheduler isn't aware of Bulldozer's architecture and as a result places threads wherever it sees fit, regardless of optimal placement. Windows 8 is expected to correct this; however, given the short lead time on Bulldozer reviews, we weren't able to do much experimenting with Windows 8 performance on the platform. There's also the fact that Windows 8 isn't expected until the end of next year, at which point we'll likely see an upgraded successor to Bulldozer.
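To make the scheduling point concrete, here is a minimal sketch of what module-aware placement looks like when done by hand on Windows. The affinity masks assume logical processors are enumerated in module pairs (0 and 1 sharing a module, 2 and 3 the next, and so on); that enumeration and the toy workload are illustrative assumptions, not AMD guidance.

/* Pin two threads to separate Bulldozer modules on Windows, working
 * around a scheduler that isn't module-aware. Assumes logical processors
 * come in module pairs (0/1 share a module, 2/3 the next); verify with
 * GetLogicalProcessorInformation on real hardware. */
#include <windows.h>
#include <stdio.h>

static DWORD WINAPI worker(LPVOID arg)
{
    (void)arg;
    volatile unsigned long long sum = 0;   /* stand-in for real work */
    for (unsigned long long i = 0; i < 100000000ULL; i++)
        sum += i;
    return 0;
}

int main(void)
{
    HANDLE t[2];
    /* 0x1 and 0x4 = logical processors 0 and 2: separate modules with
     * dedicated caches. 0x1 and 0x2 would share a single module instead. */
    DWORD_PTR masks[2] = { 0x1, 0x4 };

    for (int i = 0; i < 2; i++) {
        t[i] = CreateThread(NULL, 0, worker, NULL, CREATE_SUSPENDED, NULL);
        SetThreadAffinityMask(t[i], masks[i]);   /* place before it runs */
        ResumeThread(t[i]);
    }
    WaitForMultipleObjects(2, t, TRUE, INFINITE);
    printf("done\n");
    return 0;
}

Which placement wins depends on the workload: threads sharing data benefit from one module's shared caches, while cache-hungry independent threads want separate modules. That is exactly the judgment call a module-aware scheduler has to make on every thread, and what Windows 7 currently cannot do for Bulldozer.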

So what do you do if you're buying today? If you have an existing high-end Phenom II system, particularly an X4 970 or above or an X6 of any sort, I honestly don't see much of a reason to upgrade. You're likely better off waiting for the next (and final) iteration of the AM3+ lineup if you want to stick with your current platform. If you're buying new, the 2500K is the better overall part: you get more predictable performance across the board regardless of application type or workload mix, and you get features like Quick Sync. Bulldozer's clear wins come where AMD has always done well: heavily threaded applications. If you predominantly run well-threaded workloads, Bulldozer will typically give you performance around or above Intel's 2500K.

I was hoping for Bulldozer to address AMD's weaknesses rather than simply continue to play to its strengths. I suspect this architecture will do quite well in the server space, but for client computing we may have to wait a bit longer for a more competitive part from AMD. The true culprit behind Bulldozer's lackluster single-threaded performance is difficult to track down. The easy answer would seem to be clock speed. We've heard of issues at GlobalFoundries, and perhaps Bulldozer is the latest victim. If AMD's clock targets were 30% above Phenom II, it simply didn't hit them with the FX-8150. I've heard future derivatives will focus more on increasing IPC independent of process technology and clock speed, but if you asked me for the one limit to success, I would say clock speed. As a secondary factor, AMD appears to have made tradeoffs to maintain a reasonable die size at 32nm; even then, Bulldozer can hardly be considered svelte. I suspect that as AMD transitions to smaller transistor geometries, it will be able to address some of Bulldozer's physical shortcomings.

The good news is that AMD has a very aggressive roadmap ahead of it; here's hoping it can execute against it. We all need AMD to succeed. We've seen what happens without a strong AMD as a competitor: we get artificially limited processors and severe restrictions on overclocking, particularly at the value end of the segment. We're denied choice simply because there's no alternative. I don't believe Bulldozer is a strong enough alternative to force Intel back into an ultra-competitive mode, but we absolutely need it to become one. I have faith that AMD can pull it off, but there's still a lot of progress to be made. AMD can't simply rely on its GPU architecture superiority to sell APUs; it needs to ramp on the x86 side as well, and more specifically, AMD needs better single-threaded performance. Bulldozer didn't deliver that, and I'm worried that Piledriver alone won't be enough. But if AMD can stick to a yearly cadence and execute well with each iteration, there's hope. It's no longer a question of whether AMD will return to the days of the Athlon 64; it simply must. Otherwise you can kiss choice goodbye.

Comments

  • THizzle7XU - Wednesday, October 12, 2011 - link

    Well, why would you target the variable PC segment when you can program for a well-established platform with a large user base and a single configuration, and make a ton more money with probably far less QA work, since there's only one set of hardware to test (two for multi-platform PS3 games)?

    And it's not like 360/PS3 games suddenly look like crap 5-6 years into their cycles. Think about how good PS2 games looked 7 years into that system's life cycle (God of War 2). Devs are just now getting the most out of the hardware. It's a great time to be playing games on 360/PS3 (and PC!).
  • GatorLord - Wednesday, October 12, 2011 - link

    Consider what AMD is and isn't, and where computing is headed, and this chip really begins to make sense. While these benches seem frustrating to those of us on a desktop today, I think a slightly deeper dive shows that there is a whole world of hope here...with these chips, not something later.

    I dug into the deal with Cray and Oak Ridge, and Cray is selling ORNL massively powerful computers (think petaflops) using Bulldozer CPUs controlling Nvidia Tesla GPUs which perform the bulk of the processing. The GPUs do vastly more and faster FPU calculations and the CPU is vastly better at dishing out the grunt work and processing the results for use by humans or software or other hardware. This is the future of High Performance Computing, today, but on a government scale. OK, so what? I'm a client user.

    Here's what: AMD is actually best at making GPUs...no question. They have been in the GPGPU space as long as Nvidia...except AMD's engineers can collaborate on both CPU and GPU projects simultaneously without a bunch of awkward NDAs and antitrust BS getting in the way. So if they can turn humble server chips into supercomputers by harnessing the many cores on a graphics card, how much more than we've seen is possible on our lowly desktops when this rebranded server chip enslaves the Ferraris on the PCI bus next door...the GPUs?

    I get it...it makes perfect sense now. Don't waste real estate on FPUs when the ones next door are hundreds or thousands of times faster. This is not the beginning of the end of AMD, but the end of the beginning (to shamelessly quote Churchill). Now all that cryptic talk about a supercomputer in your tablet makes sense...think Llano with a so-so CPU and a big GPU on the same die, with some code tweaks to schedule the GPU as a massive FPU, and the picture starts taking shape.

    Now imagine a full-blown server chip (BD) harnessing full-blown GPUs...Radeon 6XXX or 7XXX...and we are talking about performance improvements of orders of magnitude, not percentage points. Is AMD crazy? I'm thinking crazy like a fox.

    Oh...as a disclaimer, while I'm long AMD, I'm just an enthusiast like the rest of you and not a shill...I want both companies to make fast chips that I can use to do Monte Carlos and linear regressions...it just looks like AMD has figured out how to play the hand it's holding, for a change...here's to the future for us all.
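For readers unfamiliar with the pattern GatorLord describes above, here is a minimal sketch of the CPU-dispatches/GPU-computes split, using OpenCL purely as an illustrative API; the saxpy kernel, the sizes, and the omitted error handling are all simplifications for the sketch, not anything AMD has shown.

/* Host (CPU) stages data and queues work; the GPU does the FP math. */
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void saxpy(float a, __global const float *x,\n"
    "                    __global float *y) {\n"
    "    size_t i = get_global_id(0);\n"
    "    y[i] = a * x[i] + y[i];\n"
    "}\n";

int main(void)
{
    enum { N = 1 << 20 };
    float *x = malloc(sizeof(float) * N), *y = malloc(sizeof(float) * N);
    for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "saxpy", NULL);

    /* Copy inputs to the device; the CPU never touches the math itself. */
    cl_mem bx = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof(float) * N, x, NULL);
    cl_mem by = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                               sizeof(float) * N, y, NULL);
    float a = 3.0f;
    clSetKernelArg(k, 0, sizeof(float), &a);
    clSetKernelArg(k, 1, sizeof(cl_mem), &bx);
    clSetKernelArg(k, 2, sizeof(cl_mem), &by);

    size_t global = N;                      /* one work-item per element */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, by, CL_TRUE, 0, sizeof(float) * N, y, 0, NULL, NULL);

    printf("y[0] = %f (expect 5.0)\n", y[0]); /* 3*1 + 2 */
    return 0;
}

The host's role ends at staging buffers and enqueueing the kernel; every floating-point operation runs on the GPU, the same division of labor the Cray/ORNL machines apply at petaflop scale.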
  • Menoetios - Wednesday, October 12, 2011 - link

    I think you bring up a very good point here. This chip looks like it's designed to be very closely paired with a highly programmable GPU, which is where the GPU roadmaps are leading over the next year and a half. While the apples-to-apples nature of this review draws a disappointing picture, I'm very curious how AMD's "Fusion" products next year will look, as the various compute elements of the CPU and GPU become more tightly integrated. Bulldozer appears to fit perfectly in an ecosystem that we don't quite have yet.
  • GatorLord - Wednesday, October 12, 2011 - link

    Exactly. Ecosystem...I like it. This is what it must feel like to pick up a flashlight at the entrance to the tunnel when all you're used to is clubs and torches. Until you find the switch, it just seems worse than either...then voilà!
  • actionjksn - Wednesday, October 12, 2011 - link

    Wow, I hope that made you feel better about the crappy chip also known as "Man With A Shovel".
    I was just hoping AMD would quit forcing Intel to keep crippling its chips just to keep them from putting AMD out of business. AMD better fix this abortion quick; this is getting old.
  • GatorLord - Thursday, October 13, 2011 - link

    Feeling fine. Not as good about the short run, but better about the long run. Unfortunately, due to constraints, it takes AMD too long to get stuff dialed in, and by the time they do, Intel has already made an end run and beaten them to the punch.

    Intel can do that, they're 40x as big as AMD. Actually, and this may sound crazy until you digest it, the smartest thing Intel could do is spin off a couple of really good dev labs as competitors. Relying on AMD to drive your competition is risky in that AMD may not be able to innovate fast enough to push Intel where it could be if they had more and better sharks in the water nipping at their tails.

    You really need eight or more highly capable, highly aggressive competitors to create a fully functioning market free of monopolistic and oligopolistic sluggishness and BS hand signalling between them. This space is too capital-intensive for that for the time being, with current chip-making technology what it is.
  • yankeeDDL - Wednesday, October 12, 2011 - link

    Just to be the devil's advocate ...
    The launch event in London sported two PCs side by side, running Cinebench.
    One had the Core i5-2500K, the other the FX-8150.
    Of course, these systems are prepared by AMD, so the results from Anand are clearly more reliable (at least all the conditions are documented).
    Nevertheless, it is clear that in the demo from AMD, the FX runs faster. Not by a lot, but it is clearly faster than the i5.
    Video: http://www.viddler.com/explore/engadget/videos/335...

    Even so, assuming this was a valid data point, things won't change too much: the i5-2500K is cheaper and (would be) slightly slower than the FX-8150 in the most heavily threaded benchmark. But the FX-8150 would come out slightly better than Anand's results show.
  • KamikaZeeFu - Wednesday, October 12, 2011 - link

    "Nevertheless, it is clear that in the demo from AMD, the FX runs faster. Not by a lot, but it is clearly faster than the i5."

    Check the review's Cinebench R11.5 multithreaded chart.
    Anand's numbers mirror AMD's. Multithreaded workloads are the only case where the 8150 will outperform an i5 2500K, because it can process twice as many threads.

    Really disappointed in AMD here, but I expected subpar performance because AMD was eerily quiet about the FX line as far as performance went.

    Desktop BD is a full failure: they were aiming for high clock speeds and made sacrifices to get there, yet still missed their objective. By the time the process is mature and 4GHz 'dozers hit the channel, Ivy Bridge will be out.

    As far as server performance goes, I'm not even sure they will succeed there.
    As seen in the review, clock-for-clock performance isn't up compared to the previous generation, and in some cases it's actually slower. Considering that servers run at lower clocks in the first place, I don't see BD being any threat to Intel's server lineup.

    4 years to develop this chip, and their motto seemed to be "we'll do NetBurst, but in not-fail".
  • medi01 - Wednesday, October 12, 2011 - link

    So the CPU is a bottleneck in your games, eh?
  • TekDemon - Wednesday, October 12, 2011 - link

    It's not, but people don't buy CPUs for today's games; generally you want your system to be future-proof, so the more extra headroom there is in these CPU benchmarks, the better it holds up over the long term. Look back at CPU benchmarks from 3-4 years ago and you'll see that the CPUs that barely passed muster back then easily bottleneck you, whereas CPUs that had extra headroom are still usable for gaming. For example, the Core 2 Duo E8400 or E8500 is still a very capable gaming CPU, especially when given a mild overclock, and frankly, in games that only use a few threads (like Starcraft 2) it gives Bulldozer a run for its money.
    I'm not a fanboy either way, since I own that E8400 as well as a Phenom II (unlocked to X4, OC'ed to 3.9GHz) and an i5 2500K, but if I were building a new system I sure as heck would want extra headroom for future-proofing.
    That said? Of course these chips will be more than enough for general use. They're just not going to be good for high-end systems. But in a general-use situation the problem is that the power consumption is just crappy compared to the Intel solutions; even if you can argue that it's more than enough power for most people, why would you want to use more electricity?
