Gaming Performance

AMD clearly states in its reviewer's guide that CPU bound gaming performance isn't going to be a strong point of the FX architecture, likely due to its poor single threaded performance. However, it is useful to look at both CPU and GPU bound scenarios to paint an accurate picture of how well a CPU handles game workloads, as well as what sort of performance you can expect in present day titles.

Civilization V

Civ V's lateGameView benchmark presents us with two separate scores: average frame rate for the entire test as well as a no-render score that only looks at CPU performance.

Civilization V—1680 x 1050—DX11 High Quality

While we're GPU bound in the full render score, AMD's platform appears to have a bit of an advantage here. We've seen this in the past where one platform will hold an advantage over another in a GPU bound scenario, and it's always tough to explain. Within each family, however, there is no advantage to a faster CPU; everything is simply GPU bound.

Civilization V—1680 x 1050—DX11 High Quality (No Render)

Looking at the no render score, the CPU standings are pretty much as we'd expect. The FX-8150 is thankfully a bit faster than its predecessors, but it still falls behind Sandy Bridge.

Crysis: Warhead

Crysis Warhead Assault Benchmark—1680 x 1050 Mainstream DX10 64-bit

In CPU bound environments in Crysis Warhead, the FX-8150 is actually slower than the old Phenom II. Sandy Bridge continues to be far ahead.

Dawn of War II

Dawn of War II—1680 x 1050—Ultra Settings

We see similar results under Dawn of War II. Lightly threaded performance is simply not a strength of AMD's FX series, and as a result even the old Phenom II X6 pulls ahead.

DiRT 3

We ran two DiRT 3 benchmarks to get an idea of CPU bound and GPU bound performance. First the CPU bound settings:

DiRT 3—Aspen Benchmark—1024 x 768 Low Quality

The FX-8150 doesn't do so well here, again falling behind the Phenom IIs. Under more real world GPU bound settings however, Bulldozer looks just fine:

DiRT 3—Aspen Benchmark—1920 x 1200 High Quality

Dragon Age

Dragon Age Origins—1680 x 1050—Max Settings (no AA/Vsync)

Dragon Age is another CPU bound title; here the FX-8150 falls behind once again.

Metro 2033

Metro 2033 is pretty rough even at lower resolutions, but with more of a GPU bottleneck the FX-8150 equals the performance of the 2500K:

Metro 2033 Frontline Benchmark—1024 x 768—DX11 High Quality

Metro 2033 Frontline Benchmark—1920 x 1200—DX11 High Quality

Rage vt_benchmark

While id's long awaited Rage title doesn't exactly have the best benchmarking abilities, there is one unique aspect of the game that we can test: MegaTexture. MegaTexture works by dynamically pulling texture data from disk and constructing texture tiles for the engine to use, a major component in allowing id's developers to uniquely texture the game world. However, because of the heavy use of unique textures (id says the original game assets total over 1TB), id needed to get creative in compressing the game's textures to make them fit within the roughly 20GB the game was allotted.

The result is that Rage doesn't store textures in a GPU-usable format such as DXTC/S3TC, instead storing them in an even more compressed format (JPEG XR), as S3TC maxes out at a 6:1 compression ratio. As a consequence, whenever Rage loads a texture it needs to transcode it from its storage codec to S3TC on the fly. This is a constant process throughout the entire game, and the transcoding is a significant burden on the CPU.

The benchmark: vt_benchmark flushes the transcoded texture cache and then times how long it takes to transcode all the textures needed for the current scene, from 1 thread to X threads. Thus when you run vt_benchmark 8, for example, it will benchmark from 1 to 8 threads (the default appears to depend on the CPU you have). Since transcoding is done by the CPU, this is a pure CPU benchmark. I present the best case transcode time at the maximum number of concurrent threads each CPU can handle:

Rage vt_benchmark—1920 x 1200

The FX-8150 does very well here, but so does the Phenom II X6 1100T. Both are faster than Intel's 2500K, but not quite as good as the 2600K. If you want to see how performance scales with thread count, check out the chart below:

Starcraft 2

Starcraft 2

Starcraft 2 has traditionally done very well on Intel architectures and Bulldozer is no exception to that rule.

World of Warcraft

World of Warcraft

Comments

  • Hrel - Wednesday, October 12, 2011 - link

    yes, I use it as a term that means being cheap. My friends often call me jewish cause I hunt for bargains pretty relentlessly.
  • silverblue - Friday, October 14, 2011 - link

    I get called Scottish for the same thing. ;)
  • poohbear - Wednesday, October 12, 2011 - link

    so disappointing to read this. What on earth were they doing all this time?? AMD's NEW cpu can't even outperform its OLD CPU? well atleast i can stick with my PhenomII X6 till Ivy Bridge comes out & thank goodness i didnt buy a pricey AM3+ before reading reviews.:p So sad to see AMD has come to this.....
  • OutsideLoopComputers - Wednesday, October 12, 2011 - link

    I think when Anand publishes benchmarks with a couple of Bulldozers working together in a dual or quad-socket board (Opteron), THEN we will see why AMD designed it the way they did. If the FX achieves parity and sometimes superiority in heavily multithreaded apps vs Sandy Bridge in a single socket, then imagine how two or four of these working together will do in server applications vs Sandy Bridge Xeon. I'll bet we see superiority in most server disciplines.

    I don't think this silicon was designed to go after Intel desktop processors, but to perform directly with dual and quad socket Xeon.

    Its intended to be an Opteron right now, and as an afterthought-to be sold as an FX desktop single socket part, to bridge the gap between A-series and Opteron.
  • JohanAnandtech - Wednesday, October 12, 2011 - link

Indeed. The market for high-end desktop parts is very small, with low margins, and shrinking! The mobile market is growing, so AMD A6 and A8 CPUs make a lot more sense.

The server market keeps growing, and the profit margins are excellent because a large percentage of the market wants high end parts (compare that to the desktop market, where almost everyone wants midrange and budget parts). The zip and encryption benchmarks show that Bulldozer is definitely not a complete failure. We'll see :-)
  • g101 - Wednesday, October 12, 2011 - link

    Good to see an intelligent reviewer that knows how to do more than run synthetic benchmarks and games.

    It's funny seeing all the uneducated gamer "complete failure" comments.
  • bassbeast - Thursday, February 9, 2012 - link

    I'm sorry but you are wrong sir and here is why: They are marketing this chip at the CONSUMER and NOT the server, which makes it a total Faildozer.

    If they would have kept P2 for the consumer and kept BD for the Opteron then you sir would have been 100% correct, but by killing their P2 they have just admitted they are out of the desktop CPU business and for a company that small that is a seriously DUMB move. Their Athlon and P2 have been the "go to" chip for many of us system builders because it gave "good enough" performance in the apps that people use, but Faildozer is a hot pig of a chip that is worse for consumer loads in every. single. way. over the P2.

    I'm just glad i bought an X6 when i did, but when i can no longer get the P2 and Athlon II for new builds i'll be switching to intel, the BD simply is worthless for the consumer market and NEVER should have been marketed to it in the first place! so please get off your high horse and admit the truth, the BD chip should have never been sold for anything but servers.
  • haplo602 - Wednesday, October 12, 2011 - link

    This is a server CPU abused for the desktop.

    Have a look at FPU performance. Almost clock for clock (3.3G vs 3.6G) it beats 6 FPU units in Phenom X6. That's quite nice.

    Once they do some optimisations on a mature process, this will achieve SB performance levels. However until then I am going for 2389 optys ....
  • GourdFreeMan - Wednesday, October 12, 2011 - link

You introduce the fact that AMD lengthened the pipeline transitioning to Bulldozer without explicitly mentioning the pipeline length. How many stages exactly is Bulldozer's pipeline?
  • duploxxx - Wednesday, October 12, 2011 - link

Well there clearly seems to be something wrong with the usage of the modules in combination with the way too high latency on any cache and memory. Single threaded performance is hit by that, and so gaming performance suffers as well.

    So I hope anandtech can have a clear look at the following thread and continue to seek further:
    http://www.xtremesystems.org/forums/showthread.php...

secondly during OC just like previous gen, do something more with NB oc instead of just upping the GHz, there is more to an architecture than just the GHz....
