Cache and Memory Performance

I mentioned earlier that cache latencies are higher in order to accommodate the larger caches (8MB L2 + 8MB L3) as well as the high frequency design. We turned to our old friend cachemem to measure these latencies in clocks:

Cache/Memory Latency Comparison (in clocks)

                                   L1    L2    L3    Main Memory
AMD FX-8150 (3.6GHz)                4    21    65            195
AMD Phenom II X4 975 BE (3.6GHz)    3    15    59            182
AMD Phenom II X6 1100T (3.3GHz)     3    14    55            157
Intel Core i5 2500K (3.3GHz)        4    11    25            148
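
For readers curious about how this kind of measurement works in practice, below is a minimal pointer-chasing sketch in C. To be clear, this is not cachemem and makes no attempt to replicate its methodology; the buffer sizes, iteration count, and timing approach are illustrative assumptions, and a real latency tool has to work much harder to defeat prefetchers and TLB effects.

```c
/* Minimal pointer-chasing latency sketch (not cachemem; sizes, iteration
 * count and timing are illustrative assumptions). Each load depends on the
 * previous one, so the average time per load approximates access latency.
 * Build with: gcc -O2 -std=c99 -o latency latency.c */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Walk a random cyclic chain of pointers through a buffer of `bytes` bytes
 * and return the average nanoseconds per dependent load. Divide by the core
 * clock period to express the result in cycles, as in the table above. */
static double chase_ns(size_t bytes, size_t iters)
{
    size_t n = bytes / sizeof(void *);
    void **buf = malloc(n * sizeof(void *));
    size_t *idx = malloc(n * sizeof(size_t));

    /* Random permutation of slot indices (Fisher-Yates; rand() is crude but
     * good enough for a sketch). The random order keeps the hardware
     * prefetchers from hiding the latency we want to see. */
    for (size_t i = 0; i < n; i++) idx[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }

    /* Link the slots into one big cycle: each slot stores the address of the
     * next slot in the shuffled order. */
    for (size_t i = 0; i + 1 < n; i++) buf[idx[i]] = &buf[idx[i + 1]];
    buf[idx[n - 1]] = &buf[idx[0]];

    void **p = &buf[idx[0]];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < iters; i++)
        p = (void **)*p;                 /* serialized, dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    if (p == NULL) puts("unreachable"); /* keep the chain live under -O2 */

    free(idx);
    free(buf);
    return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / (double)iters;
}

int main(void)
{
    /* Working-set sizes meant to land roughly in L1, L2, L3 and main memory
     * on the parts discussed above; adjust for the hierarchy being tested. */
    size_t sizes_kb[] = { 16, 512, 4096, 65536 };
    for (int s = 0; s < 4; s++)
        printf("%6zu KB: %.1f ns per load\n", sizes_kb[s],
               chase_ns(sizes_kb[s] * 1024, 20u * 1000 * 1000));
    return 0;
}
```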

Cache latencies are up significantly across the board, which is to be expected given the increase in pipeline depth as well as cache size. But is Bulldozer able to overcome the increase through higher clocks? To find out we have to convert latency in clocks to latency in nanoseconds:
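
As a quick worked example of that conversion: latency in nanoseconds is simply cycles divided by the clock speed in GHz. The short snippet below applies this to the L3 and main memory columns of the table above at each chip's stock clock (turbo disabled); it is just arithmetic on the published figures, not a new measurement.

```c
/* Convert the measured latencies from core clocks to nanoseconds using
 * ns = cycles / frequency_in_GHz. Figures are taken from the table above
 * (turbo disabled, stock clocks). */
#include <stdio.h>

int main(void)
{
    struct { const char *cpu; double ghz; int l3_clk, mem_clk; } parts[] = {
        { "AMD FX-8150",             3.6, 65, 195 },
        { "AMD Phenom II X4 975 BE", 3.6, 59, 182 },
        { "AMD Phenom II X6 1100T",  3.3, 55, 157 },
        { "Intel Core i5 2500K",     3.3, 25, 148 },
    };

    for (int i = 0; i < 4; i++)
        printf("%-24s  L3: %4.1f ns   Memory: %4.1f ns\n", parts[i].cpu,
               parts[i].l3_clk / parts[i].ghz, parts[i].mem_clk / parts[i].ghz);
    return 0;
}
```

Run over the table, that works out to roughly 54ns to main memory for the FX-8150 at 3.6GHz versus about 51ns for the Phenom II X4 975 BE and 45ns for the Core i5 2500K, which is the relationship the charts below illustrate.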

Memory Latency

We disable turbo in order to get predictable clock speeds, which lets us accurately calculate memory latency in ns. The FX-8150 at 3.6GHz has a longer trip down memory lane than its predecessor, also at 3.6GHz. The higher latency caches play a role in this as they are necessary to help drive AMD's frequency up. What happens if we turn turbo on and peg the FX-8150 at 3.9GHz? Memory latency goes down. Bulldozer still isn't able to get to main memory as quickly as Sandy Bridge, but thanks to Turbo Core it's able to do so better than the outgoing Phenom II.

L3 Cache Latency

L3 access latency is effectively a wash compared to the Phenom II thanks to the higher clock speeds enabled by Turbo Core. Latencies haven't really improved though, and Bulldozer has a long way to go before it reaches Sandy Bridge access latencies.

430 Comments

  • medi01 - Thursday, October 13, 2011

    The slightest "problem" imaginable with AMD GPUs would make it into article titles.

    An nVidia article would go with comparing a cherry-picked overclocked board vs. a standard one from AMD, with laughable "explanations" of "oh, nVidia marketing asked us to do it, we kinda refused, but then we thought that since we've already kinda refused, we might still do what they've asked".

    "Objectively", are you kidding me?
  • JKflipflop98 - Thursday, October 13, 2011

    Anand runs the test, then writes down the number. Then he runs the test on the other PC, and writes down the number.

    If your number is lower, then it's physics "badmouthing" your precious, and not the site.
  • actionjksn - Wednesday, October 12, 2011

    @medi01 Considering the results, I think Anand was more than kind enough to AMD.
  • medi01 - Thursday, October 13, 2011

    I recall low-power AMD CPUs being tested on 1000W PSUs on this very site. How normal was that, cough? Or iPhones "forgotten in pocket" (author's comment) in comparison photos where they would look unfavourable.

    The thing with tests is, you have games that favour one manufacturer, then other games that favour another. Choose the "right" set of games, and voilà...

    The move with a 1000W PSU on a 35W TDP CPU is TOO DAMN LOW and should never happen.

    On top of it, the absolute majority of games are more GPU-sensitive than CPU-sensitive. Now, one could reduce the resolution to ridiculously low levels so that the CPU becomes the bottleneck, but then, who on earth would care whether you get 150 or 194 frames per second at a resolution you'll never use?
  • Stas - Thursday, October 13, 2011

    Not sure what the deal is with PSUs or what article you're referring to. I'm assuming it made AMD power consumption look worse than it was because a 1kW PSU was running at 10% load, thus way out of its efficiency range. But w/e. My comment is mostly about CPU performance in games. Just because you don't run a game on a top-end CPU with $800 of GPUs in multi-GPU tandem at the lowest settings doesn't mean that setup shouldn't be used to determine CPU performance. By making the CPU the bottleneck, you make it do as much as it can while the GPU sits alongside spitting out frames, whistling tunes and picking its fingernails. There is more load on the CPU than the GPU. Whichever CPU is faster will provide more FPS. Simple as that.
    Sure, no one will see a 20-30% performance difference at more appropriate resolution and quality settings. But we're enthusiasts; we want to see peak performance differences and extreme loads. Most synthetic tests are irrelevant in everyday use, but performance has been measured that way for decades.
  • jleach1 - Friday, October 14, 2011

    I haven't seen one single sentence that was questionable in an AMD graphics review. In fact I'm glad to say that I'm a big fan of Intel CPU and and hour combos, and have never seen so much as a hint of bias.

    At the risk of over-exaggerating, in an age where we're all stuffing multiple cards in our systems, AMD cards are efficient, reliable, powerful, and they run cool. Yes, the drivers have sucked in the past, but they don't really anymore.

    (emphasis on the word seem)

    nVidia cards have just seemed clunky and hot as hell since the 400 series. I don't feel like gaming next to a space heater. And I definitely don't want to pay 40 percent more for ten percent more performance just to have a space heater and bragging rights.

    It's like AMD graphics are similar to Intel's CPU lineup: they're great performance-per-dollar parts, and they're efficient. But nVidia and Intel graphics are like AMD CPUs: they're either inefficient, or they're good at only a few things.

    The moral? What the *$&*, AMD....you might as well write off the whole desktop business if the competition IS fifty percent faster and gaining ground....that 15 percent you're promising next year had better be closer to 50, or I'm going to forget about your processors altogether.
  • jleach1 - Friday, October 14, 2011

    Intel CPU and AMD combos*....sorry for the bad grammar. Writing on a tablet with Swype.
  • CeriseCogburn - Wednesday, March 21, 2012

    40% more cost and 10% more performance?
    You said that's across the board.
    I'm certainly glad you aren't the reviewer here on anything. I mean really that was over the top.
  • CeriseCogburn - Friday, June 8, 2012

    They went full-blown favor-the-bullsnoozer by using the GPU-limited AMD HD 5870 to make the stupid AMD CPU look good.

    Thank your lucky stars they did that much for you.
  • MJEvans - Thursday, October 13, 2011

    I think your later point is exactly why the FPU support isn't as strong. (Most) tasks that use the FPU appear to be operating on large matrices of data, while the sequential-processing side seems to have a good design behind it (even if the implementation is a little immature and a little early), but is held back by the slower L1/L2 cache access. I hope that's an area that will be addressed by the next iteration.
