Gaming Performance

AMD clearly states in its reviewer's guide that CPU bound gaming performance isn't going to be a strong point of the FX architecture, likely due to its poor single threaded performance. However, it is useful to look at both CPU and GPU bound scenarios to paint an accurate picture of how well a CPU handles game workloads, as well as what sort of performance you can expect in present-day titles.

Civilization V

Civ V's lateGameView benchmark presents us with two separate scores: average frame rate for the entire test as well as a no-render score that only looks at CPU performance.

Civilization V—1680 x 1050—DX11 High Quality

While we're GPU bound in the full render score, AMD's platform appears to have a bit of an advantage here. We've seen this in the past, where one platform holds an advantage over another in a GPU bound scenario, and it's always tough to explain. Within each family, however, there is no advantage to a faster CPU; everything is just GPU bound.

Civilization V—1680 x 1050—DX11 High Quality

Looking at the no render score, the CPU standings are pretty much as we'd expect. The FX-8150 is thankfully a bit faster than its predecessors, but it still falls behind Sandy Bridge.

Crysis: Warhead

Crysis Warhead Assault Benchmark—1680 x 1050 Mainstream DX10 64-bit

In CPU bound environments in Crysis Warhead, the FX-8150 is actually slower than the old Phenom II. Sandy Bridge continues to be far ahead.

Dawn of War II

Dawn of War II—1680 x 1050—Ultra Settings

We see similar results under Dawn of War II. Lightly threaded performance is simply not a strength of AMD's FX series, and as a result even the old Phenom II X6 pulls ahead.

DiRT 3

We ran two DiRT 3 benchmarks to get an idea of both CPU bound and GPU bound performance. First, the CPU bound settings:

DiRT 3—Aspen Benchmark—1024 x 768 Low Quality

The FX-8150 doesn't do so well here, again falling behind the Phenom IIs. Under more real-world, GPU bound settings, however, Bulldozer looks just fine:

DiRT 3—Aspen Benchmark—1920 x 1200 High Quality

Dragon Age

Dragon Age Origins—1680 x 1050—Max Settings (no AA/Vsync)

Dragon Age is another CPU bound title; here the FX-8150 falls behind once again.

Metro 2033

Metro 2033 is pretty rough even at lower resolutions, but with more of a GPU bottleneck the FX-8150 equals the performance of the 2500K:

Metro 2033 Frontline Benchmark—1024 x 768—DX11 High Quality

Metro 2033 Frontline Benchmark—1920 x 1200—DX11 High Quality

Rage vt_benchmark

While id's long-awaited Rage title doesn't exactly have the best benchmarking abilities, there is one unique aspect of the game that we can test: Megatexture. Megatexture works by dynamically pulling texture data from disk and constructing texture tiles for the engine to use, a major component in allowing id's developers to uniquely texture the game world. However, because of the heavy use of unique textures (id says the original game assets are over 1TB), id needed to get creative in compressing the game's textures to make them fit within the roughly 20GB the game was allotted.

The result is that Rage doesn't store textures in a GPU-usable format such as DXTC/S3TC; instead it stores them in an even more compressed format (JPEG XR), since S3TC maxes out at a 6:1 compression ratio. As a consequence, whenever you load a texture, Rage needs to transcode it from its storage codec to S3TC on the fly. This is a constant process throughout the entire game, and this transcoding is a significant burden on the CPU.
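To make that pipeline concrete, here is a minimal sketch of the transcode-on-load idea, assuming a simple tile cache; loadCompressed, transcodeToS3TC, and the tile name are hypothetical stand-ins for illustration, not id's actual code:

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// A transcoded, GPU-ready texture tile (S3TC/DXTC block data).
struct Texture { std::vector<std::uint8_t> s3tcBlocks; };

// Placeholder: stands in for reading a tile's heavily compressed payload
// (the JPEG XR-style storage codec) from the game's files on disk.
std::vector<std::uint8_t> loadCompressed(const std::string& tileId) {
    (void)tileId;
    return std::vector<std::uint8_t>(64 * 1024, 0);
}

// Placeholder: stands in for the CPU-heavy step of decoding the storage
// codec and re-compressing the pixels into S3TC blocks.
Texture transcodeToS3TC(const std::vector<std::uint8_t>& src) {
    return Texture{std::vector<std::uint8_t>(src.size() / 4, 0)};
}

class TranscodeCache {
public:
    // A cache miss pays the full CPU transcode cost before the GPU can
    // sample the tile; a hit returns already-transcoded data for free.
    const Texture& get(const std::string& tileId) {
        auto it = cache_.find(tileId);
        if (it == cache_.end())
            it = cache_.emplace(tileId, transcodeToS3TC(loadCompressed(tileId))).first;
        return it->second;
    }
    // Clearing the cache forces every tile to be transcoded again,
    // analogous to what vt_benchmark does before a timed run.
    void flush() { cache_.clear(); }
private:
    std::unordered_map<std::string, Texture> cache_;
};

int main() {
    TranscodeCache cache;
    cache.get("tile_0042");  // miss: CPU transcode happens here
    cache.get("tile_0042");  // hit: no extra CPU work
    cache.flush();
}
```

The miss path is where the CPU time goes, which is why the transcoding load never really goes away while you play.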

The benchmark: vt_benchmark flushes the transcoded texture cache and then times how long it takes to transcode all the textures needed for the current scene, from 1 thread to X threads. Thus when you run vt_benchmark 8, for example, it will benchmark from 1 to 8 threads (the default appears to depend on the CPU you have). Since transcoding is done by the CPU, this is a pure CPU benchmark. I present the best-case transcode time at the maximum number of concurrent threads each CPU can handle in the chart below.
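Structurally, that kind of 1-to-N thread scaling run can be sketched as follows; the busy-work transcodeTile function and every other name here are assumptions for illustration, not the game's actual code:

```cpp
#include <algorithm>
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

// Busy-work stand-in for transcoding one texture tile on the CPU.
static void transcodeTile() {
    volatile double acc = 0.0;
    for (int i = 1; i < 200000; ++i) acc = acc + 1.0 / i;
}

// Time how long `threads` workers take to chew through `jobs` tiles,
// pulling work items off a shared atomic counter.
static double runBatch(int threads, int jobs) {
    std::atomic<int> next{0};
    const auto start = std::chrono::steady_clock::now();
    std::vector<std::thread> pool;
    for (int t = 0; t < threads; ++t)
        pool.emplace_back([&] {
            while (next.fetch_add(1) < jobs) transcodeTile();
        });
    for (auto& th : pool) th.join();
    const std::chrono::duration<double> elapsed =
        std::chrono::steady_clock::now() - start;
    return elapsed.count();
}

int main() {
    const int jobs = 512;  // fixed batch, like one scene's worth of tiles
    const int maxThreads = std::max(1u, std::thread::hardware_concurrency());
    // Run the same batch with 1..maxThreads workers and report each time,
    // mirroring how vt_benchmark sweeps thread counts.
    for (int t = 1; t <= maxThreads; ++t)
        std::printf("%2d thread(s): %.3f s\n", t, runBatch(t, jobs));
    return 0;
}
```

Built with a typical g++ or clang invocation (add -pthread), it prints one wall-clock time per thread count, which is the same shape of data the chart below reports.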

Rage vt_benchmark—1920 x 1200

The FX-8150 does very well here, but so does the Phenom II X6 1100T. Both are faster than Intel's 2500K, but not quite as good as the 2600K. If you want to see how performance scales with thread count, check out the chart below:

Starcraft 2

Starcraft 2

Starcraft 2 has traditionally done very well on Intel architectures and Bulldozer is no exception to that rule.

World of Warcraft

World of Warcraft

Comments

  • saneblane - Thursday, October 13, 2011 - link

    Man, you have a lot of optimism. I am a big AMD fan, but even I can't remain optimistic after this mess. I mean, how do you make a chip that is slow, expensive and loses to its older brothers? Barcelona was a huge success compared to this; it only seemed bad because expectations were high. This time around, though, they became even higher because no one expected AMD to actually go backwards in performance. Wow, that's all I can say. Wow.
  • Pipperox - Thursday, October 13, 2011 - link

    I don't understand why you all think it's slower than its older brothers.
    It's not, it's faster than Thuban in practically all benchmarks...

    Or do you really care about stuff like SuperPi?
  • Pipperox - Thursday, October 13, 2011 - link

    But maybe you guys think that it's slower "clock for clock" or "core for core".
    It doesn't matter how you achieve performance.
    What matters is the end performance.

    Bulldozer architecture allows it to have higher clock speed and more *threads* than Phenom.
    The penalty is single threaded performance.

    Again, you can't compare it to a hypothetical 8-core 4.0GHz Thuban, because they couldn't have made it (and made any money out of it).

    I'll repeat, the FX-8150 is NOT an 8-core CPU.
    Otherwise the i7-2600K is also an 8-core CPU... both can execute 8 threads in parallel, but each pair of threads shares execution resources.

    The main difference is that Sandy Bridge can "join" all the resources of 2 threads to improve the performance of a single thread, while Bulldozer cannot.
    They probably didn't do it to reduce HW complexity and allow easier scalability to more threads and higher clock speed.

    Because the future (and to a large extent, the present) is heavily multithreaded, and because Bulldozer is targeted mainly at servers. (and the proof is its ridiculous cache)
  • bryman - Thursday, October 13, 2011 - link

    How about some BIOS screenshots? Is there a way in the BIOS to disable the northbridge in the chip and use the northbridge on the motherboard? Possibly better performance, or maybe a new ability to crossfire northbridges? (Yeah, I'm a dreamer.) IMO, I don't think adding the northbridge to the CPU was a good idea, especially if it pulls away from other resources on the chip. I understand what adding the northbridge to the processor does, but does it turn off the northbridge that's already on the motherboard? The northbridge on the chip makes sense for an APU but not for a performance CPU, so why is the northbridge even in there? I myself would rather see the northbridge on the motherboard, utilizing that space instead of the space on the CPU.
    If there isn't a way to turn off the northbridge on the CPU in the BIOS, I think the motherboard manufacturers should include the ability to turn off the northbridge on the CPU and add the ability to use the onboard northbridge in their BIOS, so you can at least get BIOS or firmware updates to the northbridge and perhaps get more performance out of the CPU/GPU.
    When the new Radeon 7000 series video cards come out, if I buy this CPU with the 6000 series northbridge in it, am I going to take a performance hit, or am I going to have to buy a new processor with the 7000 series northbridge in it? Or will they come out with a 7000 series motherboard that utilizes a 7000 series northbridge and turns off the 6000 series northbridge in the chip, which in turn makes it useless anyway? I don't like the fact that if I buy this product and want to upgrade my northbridge/motherboard, I might have to buy a new processor and perhaps a new motherboard. Or am I just paranoid or not understanding something?

    Who knows, maybe in the next couple of weeks, Microsoft and/or AMD will come out with a performance driver for the new processors.
    If they had come out with this processor when originally planned, it really would have kicked butt. Instead we get conglomerated ideas from over the five-year period, which looks like the beginning idea thrown onto a 2011 die.
    I am a die-hard AMD fanboy and always will be, just kinda disappointed, so excuse my rants. I will be buying a 4-core when they hit the streets, hopefully in a couple of weeks.
  • saneblane - Thursday, October 13, 2011 - link

    From the caching issues, to the bad GloFo process, to the Windows scheduler, I reckon this processor wasn't ready for prime time. AMD didn't have any choice; I mean, they already took almost an entire extra year, for Pete's sake. Even though my i5 2500 is on its way, I'm not stupid enough to believe this is the best the arch can do. There is a good reason that Interlagos cannot be bought in stores: AMD knows for a fact that they cannot sell this CPU to server makers, so they are busy working on it. I expect it might take one or even two more steppings to fix this processor; the multithreaded performance is there, so they only need a mature 32nm process to crank up the speeds and keep power consumption in check. IMO
  • arjuna1 - Thursday, October 13, 2011 - link

    Reviews at other sites like Tom's Hardware and Guru3D are starting to make this look bad. How come everyone but Anand got to review it with water cooling? Is this site on such bad terms with AMD?
  • B3an - Thursday, October 13, 2011 - link

    Water cooling isn't magically going to help performance or power consumption in any way, so why does it matter? When you buy this CPU it comes with air cooling, and Anand was right to use that for this review.
  • marcelormt - Thursday, October 13, 2011 - link

    http://www.tomshardware.com/reviews/does-amds-athl...

    Patrick: The 6000+ is the fastest Athlon 64 X2 dual core processor ever, but what happened to the FX family?

    Damon: Patrick, you are right. The X2 6000+ is the fastest AMD64 dual-core processor ever... so why isn't it called FX? To answer that I have to explain what FX is all about... pushing the boundaries of desktop PCs. FX-51 did that right out of the gate, with multiple advantages over other AMD processors, and a clear lead on the competition. Move forward a bit to where AMD put high-performance, native dual-core computing into a single socket with the FX-60. Fast forward again and you see FX pushing new boundaries as "4x4" delivers four high-performance cores with a direct-connect, SLI platform that is ready to be upgraded to 8 cores later this year
  • Ryomitomo - Thursday, October 13, 2011 - link

    I'm a little surprised you only posted Win7/Win8 comparison figures for the FX-8150. It would give a much more complete picture if you also posted an i7-2600K Win7/Win8 comparison.
  • czerro - Thursday, October 13, 2011 - link

    I think Anand handled this review fine. Bulldozer is a little underwhelming, but we still don't know where the platform is going to go from here. Is everyone's memory so short that they don't remember the rocky Sandy Bridge start?
