Gaming Performance

AMD clearly states in its reviewer's guide that CPU bound gaming performance isn't going to be a strong point of the FX architecture, likely due to its poor single threaded performance. However, it is useful to look at both CPU bound and GPU bound scenarios to paint an accurate picture of how well a CPU handles game workloads, as well as what sort of performance you can expect in present-day titles.

Civilization V

Civ V's lateGameView benchmark presents us with two separate scores: average frame rate for the entire test as well as a no-render score that only looks at CPU performance.

Civilization V—1680 x 1050—DX11 High Quality

While we're GPU bound in the full render score, AMD's platform appears to have a bit of an advantage here. We've seen this in the past, where one platform will hold an advantage over another in a GPU bound scenario, and it's always tough to explain. Within each family, however, there is no advantage to a faster CPU; everything is simply GPU bound.

Civilization V—1680 x 1050—DX11 High Quality

Looking at the no-render score, the CPU standings are pretty much as we'd expect. The FX-8150 is thankfully a bit faster than its predecessors, but it still falls behind Sandy Bridge.

Crysis: Warhead

Crysis Warhead Assault Benchmark—1680 x 1050 Mainstream DX10 64-bit

Under CPU bound settings in Crysis Warhead, the FX-8150 is actually slower than the old Phenom II. Sandy Bridge remains far ahead.

Dawn of War II

Dawn of War II—1680 x 1050—Ultra Settings

We see similar results under Dawn of War II. Lightly threaded performance is simply not a strength of AMD's FX series, and as a result even the old Phenom II X6 pulls ahead.

DiRT 3

We ran two DiRT 3 benchmarks to get an idea of both CPU bound and GPU bound performance. First, the CPU bound settings:

DiRT 3—Aspen Benchmark—1024 x 768 Low Quality

The FX-8150 doesn't do so well here, again falling behind the Phenom IIs. Under more real-world, GPU bound settings, however, Bulldozer looks just fine:

DiRT 3—Aspen Benchmark—1920 x 1200 High Quality

Dragon Age

Dragon Age Origins—1680 x 1050—Max Settings (no AA/Vsync)

Dragon Age is another CPU bound title; here the FX-8150 falls behind once again.

Metro 2033

Metro 2033 is pretty rough even at lower resolutions, but with more of a GPU bottleneck the FX-8150 equals the performance of the 2500K:

Metro 2033 Frontline Benchmark—1024 x 768—DX11 High Quality

Metro 2033 Frontline Benchmark—1920 x 1200—DX11 High Quality

Rage vt_benchmark

While id's long-awaited Rage doesn't exactly have the best benchmarking abilities, there is one unique aspect of the game that we can test: Megatexture. Megatexture works by dynamically pulling texture data from disk and constructing texture tiles for the engine to use, a major component in allowing id's developers to uniquely texture the game world. However, because of the heavy use of unique textures (id says the original game assets are over 1TB), id needed to get creative in compressing the game's textures to make them fit within the roughly 20GB the game was allotted.

The result is that Rage doesn't store textures in a GPU-usable format such as DXTC/S3TC, instead storing them in an even more heavily compressed format (JPEG XR), as S3TC maxes out at a 6:1 compression ratio. As a consequence, whenever a texture is loaded, Rage needs to transcode it from its storage codec to S3TC on the fly. This is a constant process throughout the entire game, and the transcoding is a significant burden on the CPU.
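
To give a rough idea of what that work looks like, here's a minimal sketch of the transcode step. The structure and names (TexturePage, decode_storage_codec, compress_to_dxt5) are illustrative assumptions on my part and the codec math is only simulated; this isn't id's actual code, just the shape of the decode-then-recompress step.

```cpp
// Minimal sketch of the on-the-fly transcode step described above (not id's code).
// A virtual-texture page stored in a high-ratio codec (JPEG XR in Rage) is decoded
// to raw pixels and then block-compressed into a GPU-usable S3TC/DXT format.
// The codec work here is placeholder arithmetic, not real JPEG XR or DXT encoding.

#include <cstdint>
#include <cstdio>
#include <vector>

struct TexturePage {
    std::vector<uint8_t> storage_bytes;   // page as stored on disk
    int width = 128, height = 128;        // virtual-texture pages are small fixed-size tiles
};

// Simulated decode: storage codec -> RGBA8 pixels (stand-in for a JPEG XR decoder).
static std::vector<uint8_t> decode_storage_codec(const TexturePage& page) {
    std::vector<uint8_t> rgba(static_cast<size_t>(page.width) * page.height * 4);
    for (size_t i = 0; i < rgba.size(); ++i)
        rgba[i] = static_cast<uint8_t>(page.storage_bytes[i % page.storage_bytes.size()] ^ i);
    return rgba;
}

// Simulated encode: RGBA8 -> DXT5 blocks (each 4x4 texel block becomes 16 bytes).
static std::vector<uint8_t> compress_to_dxt5(const std::vector<uint8_t>& rgba, int w, int h) {
    std::vector<uint8_t> blocks(static_cast<size_t>(w / 4) * (h / 4) * 16);
    for (size_t i = 0; i < blocks.size(); ++i)
        blocks[i] = rgba[(i * 7) % rgba.size()];   // placeholder work, not real block compression
    return blocks;
}

// One unit of the work the game performs constantly while streaming the megatexture.
std::vector<uint8_t> transcode_page(const TexturePage& page) {
    std::vector<uint8_t> rgba = decode_storage_codec(page);
    return compress_to_dxt5(rgba, page.width, page.height);
}

int main() {
    TexturePage page;
    page.storage_bytes.assign(4096, 0xAB);   // pretend this page came from the texture archive
    std::vector<uint8_t> gpu_ready = transcode_page(page);
    std::printf("transcoded one page into %zu bytes of DXT5 data\n", gpu_ready.size());
}
```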

The benchmark: vt_benchmark flushes the transcoded texture cache and then times how long it takes to transcode all the textures needed for the current scene, from 1 thread to X threads. Thus, when you run vt_benchmark 8, for example, it will benchmark from 1 to 8 threads (the default appears to depend on the CPU you have). Since transcoding is done by the CPU, this is a pure CPU benchmark. The chart below presents the best-case transcode time at the maximum number of concurrent threads each CPU can handle.
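
To illustrate the kind of measurement vt_benchmark performs, here's a sketch of a scaling loop that times the same fixed batch of page transcodes at each thread count. The workload is simulated and the names are my own; it's only meant to show the shape of the benchmark, not reproduce id's implementation.

```cpp
// Sketch of a vt_benchmark-style scaling measurement (not the game's actual code):
// time how long a fixed batch of page transcodes takes at 1..N worker threads.
// transcode_one() is a simulated stand-in for the real JPEG XR -> DXT work.

#include <atomic>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>

static std::atomic<uint32_t> g_sink{0};   // shared sink so the dummy work isn't optimized away

// Simulated per-page transcode cost.
static void transcode_one(uint32_t seed) {
    uint32_t x = seed;
    for (int i = 0; i < 200000; ++i)
        x = x * 1664525u + 1013904223u;   // cheap LCG loop as dummy CPU work
    g_sink += x;
}

int main() {
    const int total_pages = 512;   // "all the textures needed for the current scene"
    unsigned max_threads = std::thread::hardware_concurrency();
    if (max_threads == 0) max_threads = 4;   // hardware_concurrency() may report 0

    for (unsigned n = 1; n <= max_threads; ++n) {
        const auto start = std::chrono::steady_clock::now();

        // Split the batch of pages across n worker threads.
        std::vector<std::thread> workers;
        for (unsigned t = 0; t < n; ++t) {
            workers.emplace_back([t, n, total_pages] {
                for (int p = static_cast<int>(t); p < total_pages; p += static_cast<int>(n))
                    transcode_one(static_cast<uint32_t>(p));
            });
        }
        for (auto& w : workers) w.join();

        const double seconds = std::chrono::duration<double>(
            std::chrono::steady_clock::now() - start).count();
        std::printf("%u thread(s): %.3f s\n", n, seconds);
    }
}
```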

Rage vt_benchmark—1920 x 1200

The FX-8150 does very well here, but so does the Phenom II X6 1100T. Both are faster than Intel's 2500K, but not quite as good as the 2600K. If you want to see how performance scales with thread count, check out the chart below:

Starcraft 2

Starcraft 2

Starcraft 2 has traditionally done very well on Intel architectures and Bulldozer is no exception to that rule.

World of Warcraft

World of Warcraft

Comments

  • saneblane - Wednesday, October 12, 2011 - link

    What was the CPU usage like? I have a sinking feeling that CPU usage was low for most of the review. I heard rumors that AMD is working on a patch; it would make sense, because Zambezi loses to the Athlon X4 sometimes, and that doesn't make any sense to me at all. There has to be a performance loss somewhere, whether it's in the CPU itself or a design that is hard for Windows to handle. This processor can't be this slow.
  • punchcore47 - Wednesday, October 12, 2011 - link

    Look back at when the first Phenom hit the street. I think AMD will right the ship, updating over time and fixing any problems. The gaming performance really looks sad, though.
  • bhima - Wednesday, October 12, 2011 - link

    AMD will have to drop BD's prices pretty hard to compete given these benchmarks. These chips are designed for an even smaller niche than gamers: people who use heavily threaded applications all day.

    I also don't see why anyone would ever put these procs into a server, with over 100 extra watts of heat running through your system compared to the i5 and i7. Interlagos may be more efficient, but the architecture is already very power hungry compared to Intel's offerings.

    Really great way to end the review though, Anand: AMD must return to its glory days so Intel doesn't continue to jack up prices on consumers. Hell, after these benchmarks I could see Intel INCREASING its prices instead of decreasing them.
  • haukionkannel - Thursday, October 13, 2011 - link

    Hmm... It seems that BD leaks a lot of energy when running at high frequency! But I am quite sure it is very good in the low 95W range, at lower frequency. So I think BD is actually a really good, low-energy CPU for server use, but the desktop usage is very problematic indeed.

    Seems to be a lot like the Phenom release. A lot of current leakage, and you get either good power and weak performance, or a little better performance and really bad power consumption... The next BD update can remedy a lot of this, but it cannot work miracles...

    I am quite sure that BD will become a reasonable CPU with updates and tinkering, but is it enough? The 32nm production technology will get better in time, so the power usage will improve and they can raise frequencies. Single-threaded speed is the main problem... If, by some divine intervention, programmers really learn to use multiple cores and streams, the future is bright... But most probably the golden number of cores will stay at 2-4 into the far distant future (not counting some special programs), and that is bad. It would require a lot of re-engineering of BD to make it better in single-stream applications, and that may be too expensive at this moment. There is some real potential in BD, but it would require too much from the software side to harness that power when Intel has such a huge lead in single-core speed... The same reason Intel buried its "multicore" GPU project some time ago...

    We can only hope that Fusion and the GPU department keep AMD afloat long enough... Or we will have to face the long dark of an Intel monopoly... That would be the worst-case scenario.
  • Shining Arcanine - Wednesday, October 12, 2011 - link

    Anand, your compilation benchmark tests only single threaded improvements. Would it be possible to do a multithreaded benchmark? Just do a compilation on Linux with MAKEOPTS=-j9.

    Also, most of your benchmarks only test floating point performance. It was obvious to me that Bulldozer would be bad at that, and I am not surprised. Is it possible to test parallel, integer-heavy workloads like a LAMP server? Compilation is another one, but I mentioned that above.
  • know of fence - Wednesday, October 12, 2011 - link

    Here's hoping that the reviews to follow will offer at least some perspective on why single thread performance is still important, instead of just harping on it (as the reviews before this one did).

    Everybody can run a benchmark, but it's the broad context and perspective that I came to appreciate reading about in AnandTech reviews, beyond "I suspect this architecture will do quite well in the server space". Mind you, I'm not referring to the big AMD vs. Intel broad strokes, but the nitty-gritty.
  • geforce912 - Wednesday, October 12, 2011 - link

    Honestly, I think AMD would have been better off shrinking Phenom II to 32nm and slapping on two more cores.
  • tech4tac - Wednesday, October 12, 2011 - link

    Agreed. An enhanced 8-core Phenom II X8 on a 32nm process would have used ~1.2B transistors on a ~244mm^2 die (smaller than Deneb and about the size of Gulftown), as opposed to the monstrous ~2B and 315mm^2 of an 8-core Bulldozer. Given the same clock speed, my estimates have it outperforming the i7-2600 in most multi-threaded applications. And, with a few tweaks for more aggressive turbo under single core workloads, it would have at least been somewhat competitive in games.

    Bulldozer is a BIG disappointment! It would need at least another 4 cores (2 modules) tacked on to be worthwhile for multi-threaded applications. AMD has stated it is committed to providing as many cores as Intel has threads (Gulftown has 12 threads, so a 12-core Bulldozer?), so maybe this will happen. Still... nothing can help its abysmal single core performance. If they can do a 12-core Bulldozer for less than $300, I might get one for a work machine but stick with an Intel chip for my gaming rig.
  • Shadowmaster625 - Wednesday, October 12, 2011 - link

    Companies this incompetent should not be allowed to survive. They bought a GPU company 5 years ago and have done absolutely nothing to create any type of fusion between the CPU and GPU. You still have huge multi-layer, multi-company software bloat separating the two pieces of hardware. They have done nothing to address this, and it is clear they never will. Which makes the whole concept a failure. It was a total waste of money.
  • HalloweenJack - Wednesday, October 12, 2011 - link

    And the day after, Intel triples its CPU prices... is that what you want?

    $500 entry-level CPUs?
