Original Link: https://www.anandtech.com/show/2477



It was literally a week before we received our Phenom samples that many within AMD learned of a serious erratum in the processor that could potentially have a significant impact on system stability or performance, depending on how it was handled. Microprocessor errata are quite common - no CPU is perfect, and many errata are patched through BIOS updates over the life of the CPU.

However, every now and then an erratum comes along that is a little more dangerous, its impact a little more serious, and that's when microprocessors either get recalled or are immediately tackled by a software workaround. Phenom hardly had a smooth launch and its traction in the marketplace has been nearly nonexistent, partially because of the TLB issue but also because of a relative inability to compete, in many cases even with AMD's own dual-core products.

AMD is looking to relaunch Phenom this year with a new revision of the core and higher clock speeds. This new core was designed specifically to address the TLB erratum that cropped up late last year, and we managed to get our hands on a pre-release sample from one of AMD's partners before final production samples shipped. What follows is a quick explanation of the erratum and a look at how, and whether, the B3 stepping core actually fixes things.

Phenom needs help, and B3 would at least be the first step towards giving it some much-needed aid.



The "TLB Bug" Explained

Phenom is a monolithic quad-core design: each of the four cores has its own internal L2 cache, and the die has a single L3 cache that all of the cores share. As programs run, instructions and data are brought into the L2 cache, and page table entries are cached as well.

Virtual memory translation is commonplace in all modern OSes, and the premise is simple: each application thinks it has contiguous access to all memory locations, so it doesn't have to worry about complex memory management. When an application attempts to load or store something in memory, the address it thinks it's accessing is simply a virtual address - the actual location in memory can be something very different.

The OS stores a table of all of these mappings from virtual to physical addresses, and the CPU can cache frequently used mappings so that memory accesses happen much more quickly.

If the CPU didn't cache page table entries, each memory access would proceed as follows (sketched in code after this list):

1) Read a page table directory entry
2) Read a page table entry
3) Read the translated address and access memory
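
Below is a minimal sketch in C of what such an uncached, multi-level walk looks like. The two-level layout, sizes and names here are illustrative assumptions rather than Phenom's actual page table format (AMD64 hardware walks four levels); the point is simply that every step in the list above is an extra trip to memory.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative two-level page tables; real AMD64 hardware walks four levels. */
#define PAGE_SHIFT  12
#define INDEX_BITS  10
#define INDEX_MASK  ((1u << INDEX_BITS) - 1)

/* directory[i] points at a second-level table; table[j] holds a physical frame number. */
uint64_t translate(uint64_t **directory, uint64_t virtual_addr)
{
    size_t dir_index = (virtual_addr >> (PAGE_SHIFT + INDEX_BITS)) & INDEX_MASK;
    size_t tbl_index = (virtual_addr >> PAGE_SHIFT) & INDEX_MASK;

    /* 1) Read the page table directory entry (one memory access). */
    uint64_t *table = directory[dir_index];

    /* 2) Read the page table entry (a second memory access). */
    uint64_t frame = table[tbl_index];

    /* 3) Form the physical address; the actual data access is a third trip to memory. */
    return (frame << PAGE_SHIFT) | (virtual_addr & ((1u << PAGE_SHIFT) - 1));
}
```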

Then there's the Translation Lookaside Buffer (TLB), which stores a small number of complete virtual-to-physical translations, so for the addresses it holds you don't even need to perform a cache lookup - the translation comes straight out of the TLB. The TLB is much smaller than the cache, so it can only hold so many entries, but with a good replacement algorithm it can achieve very high hit rates. If a mapping isn't in the TLB, the CPU has to look for the page table entry in cache, and if it's not there either, it has to go out to main memory to figure out the actual physical address it wants to access.
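
To make the TLB's role concrete, here is a deliberately simplified, direct-mapped software model in C. The entry count, indexing and fields are our own illustrative assumptions - a real TLB is a set-associative hardware structure - but the hit/miss flow matches the description above.

```c
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 64          /* illustrative size; real TLBs vary by level and page size */
#define PAGE_SHIFT  12

typedef struct {
    bool     valid;
    uint64_t virtual_page;      /* tag: which virtual page this entry translates */
    uint64_t physical_frame;    /* the cached translation */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* Returns true on a hit and writes the physical address to *physical.
 * On a miss the caller falls back to the page table walk - ideally
 * serviced from cache, otherwise from main memory. */
bool tlb_lookup(uint64_t virtual_addr, uint64_t *physical)
{
    uint64_t vpn = virtual_addr >> PAGE_SHIFT;
    tlb_entry_t *e = &tlb[vpn % TLB_ENTRIES];   /* direct-mapped index */

    if (e->valid && e->virtual_page == vpn) {
        *physical = (e->physical_frame << PAGE_SHIFT)
                  | (virtual_addr & ((1u << PAGE_SHIFT) - 1));
        return true;            /* hit: no cache lookup or page walk needed */
    }
    return false;               /* miss: go look up the page table entry */
}
```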

Page table entries eventually have to be updated; for example, the OS may decide to move a block of data to another physical location, in which case all of the affected virtual-to-physical mappings need to be updated to reflect the new address.

When page table entries are updated, the cached copies stored in a core's L2 cache also need to be updated. Page table entries are a special case in the L2: not only does the cache controller have to modify the data in the entries to reflect their new values, it also needs to set a couple of status bits in the page table entries to mark that the data has been modified.

Page table entries in cache are very different from normal data. With normal data you simply modify it and the cache coherency protocol takes care of making sure everything knows the data has been modified. With page table entries the cache controller must mark them manually by setting accessed and dirty bits, because page tables and TLBs are actually managed by the OS. The cache line has to be pulled, have a couple of bits set and then be put back into the cache - an exception to the standard operating procedure. And herein lies the infamous TLB erratum.
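
For reference, the accessed and dirty bits in question live inside the page table entry itself; in the x86/AMD64 page table format they are bits 5 and 6. The tiny helper below (the function name is ours) just shows what "setting a couple of status bits" means at the data level.

```c
#include <stdint.h>

/* Standard x86/AMD64 page table entry status bits. */
#define PTE_PRESENT  (1ull << 0)
#define PTE_ACCESSED (1ull << 5)   /* set when the page has been read or written */
#define PTE_DIRTY    (1ull << 6)   /* set when the page has been written to */

/* What the cache controller logically has to do to a cached page table entry:
 * pull the 64-bit value, OR in the status bits, and write it back. */
static inline uint64_t pte_mark_accessed_dirty(uint64_t pte)
{
    return pte | PTE_ACCESSED | PTE_DIRTY;
}
```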

When a page table entry is modified, the core's cache controller is supposed to take the cached entry, place it in a register, modify it and then put it back in the cache. However, there is a corner case: if, while the core is in the middle of this modification and about to set the accessed/dirty bits, some other activity hits the same cache index the page table entry is stored at, and that entry happens to be marked as the next line to be evicted, the data will be evicted from the L2 cache and placed into the L3 cache in the middle of the modification process. The line that lands in L3 is missing its accessed and dirty bits, and is technically incorrect.

Meanwhile the update operation is still taking place, and when it finishes setting the appropriate bits, the same page table data is placed into the core's L2 cache again. Now the L2 and L3 caches hold the same line, which shouldn't happen given AMD's exclusive cache hierarchy.

If the line in L2 gets evicted once more, it'll be sent off to the L3, where it conflicts with the copy already there and creates an L3 protocol error. But the more dangerous situation is what happens if another core requests the data.

If another core requests the data, it will first check for it in L3 and of course find it there, not knowing that an adjacent core also has the data in its L2. The second core will pull the data from L3 and place it in its own L2 cache, but keep in mind that this copy is marked as unmodified, while the first core has its copy marked as modified in its L2.

At this point everything is still OK, since the data in both L2 caches is identical, but if the second core then modifies the page table data you end up in a dangerous situation where two cores hold different versions of data that is expected to be the same. The result can be a hard lock of the system or, even worse, silent data corruption.
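
The failure sequence is easiest to follow as a series of states. The toy model below is purely conceptual - none of these structs correspond to real cache hardware or AMD's implementation - it simply replays the scenario described above and ends with two cores holding different values for what should be a single page table entry.

```c
#include <stdint.h>
#include <stdio.h>

#define PTE_ACCESSED (1ull << 5)
#define PTE_DIRTY    (1ull << 6)

/* Conceptual stand-in for a cached line: the PTE payload plus a flag for
 * whether this cache believes it holds the line in a modified state. */
typedef struct {
    uint64_t pte;
    int      modified;
} cache_line_t;

int main(void)
{
    /* Core 0 finished its update, but a premature eviction left a duplicate
     * of the line in L3, violating the exclusive hierarchy. */
    uint64_t entry = 0x12345000ull | PTE_ACCESSED | PTE_DIRTY;
    cache_line_t core0_l2 = { entry, 1 };   /* marked modified   */
    cache_line_t l3_copy  = { entry, 0 };   /* marked unmodified */

    /* Core 1 misses in its own L2, finds the duplicate in L3 and caches it,
     * unaware that core 0 also holds this line as modified. */
    cache_line_t core1_l2 = l3_copy;

    /* The danger starts when core 1 writes its copy, e.g. clearing the dirty bit. */
    core1_l2.pte &= ~PTE_DIRTY;
    core1_l2.modified = 1;

    /* Two cores now disagree about data that must be identical. */
    if (core0_l2.pte != core1_l2.pte)
        printf("coherency violated: 0x%llx vs 0x%llx\n",
               (unsigned long long)core0_l2.pte,
               (unsigned long long)core1_l2.pte);
    return 0;
}
```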



The BIOS Fix

The workaround for B2 stepping Phenoms is a BIOS fix that tells the TLB it can't look in the cache for page table entries on a lookup. Obviously this drives memory latencies up significantly, as it adds additional memory requests to all page table accesses.

The hardware fix implemented in B3 Phenoms is that whenever a page table entry is modified, it's evicted out of L2 and placed in L3. There's a very minor performance penalty because of this, but nowhere near as bad as the software/BIOS TLB fix mentioned above.

AMD gave us two confirmed situations where the TLB erratum would rear its ugly head in real world usage:

1) Windows Vista 64-bit running SPEC CPU 2006
2) Xen Hypervisor running Windows XP and an unknown configuration of applications

AMD insisted that the TLB erratum was a highly random event that would not occur during normal desktop usage, and we've never encountered it during our testing of Phenom. Regardless, the two scenarios listed above aren't that rare, and there could be more that trigger the problem, which makes a strong case for fixing it in hardware.



The First B3 Stepping Phenom

We managed to get our hands on a 2.2GHz engineering sample of the B3 stepping Phenom.

AMD will begin shipping production B3 Phenoms later this quarter, presumably at higher clock speeds than the 2.2GHz - 2.3GHz launch parts. Our B3 sample was very similar to our B2 chip in that we could get it stable at 2.6GHz but didn't have much luck getting it to run comfortably any faster. We suspect that it'll take a move to 45nm before AMD can really start to push the clock speed on Phenom.
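
For readers who want to confirm which revision a given chip is, the CPU reports its family, model and stepping through the CPUID instruction. The sketch below uses GCC/Clang's <cpuid.h> helper; our assumption, which should be checked against AMD's Family 10h revision guide, is that B2 Phenoms report stepping 2 and B3 parts report stepping 3.

```c
#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang helper for the x86 CPUID instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 1 not supported\n");
        return 1;
    }

    unsigned int stepping = eax & 0xF;
    unsigned int model    = ((eax >> 4) & 0xF) | ((eax >> 12) & 0xF0);  /* base + extended model */
    unsigned int family   = (eax >> 8) & 0xF;
    if (family == 0xF)
        family += (eax >> 20) & 0xFF;                                   /* extended family */

    printf("Family 0x%X, Model 0x%X, Stepping %u\n", family, model, stepping);

    /* Assumption to verify against AMD's revision guide:
     * B2 Phenom = Family 10h, Model 2, Stepping 2; B3 = Stepping 3. */
    if (family == 0x10 && model == 0x2)
        printf("Phenom revision: %s\n",
               stepping >= 3 ? "B3 (TLB erratum fixed in hardware)"
                             : "B2 (TLB erratum present)");
    return 0;
}
```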

To get an idea of how much of a performance hit the software/BIOS TLB fix incurs, we took a small selection of our normal CPU tests and ran them with the patch enabled and disabled on a B2 stepping Phenom 9600 (2.3GHz):


| | SYSMark 2007 | DivX | CineBench R10 | 3dsmax 9 | WinRAR |
|---|---|---|---|---|---|
| AMD Phenom 9600 (B2 Stepping) - TLB Fix Disabled | 117 | 74.3 fps | 7396 | 7.20 | 1348 KB/s |
| AMD Phenom 9600 (B2 Stepping) - TLB Fix Enabled | 105 | 72.0 fps | 7031 | 6.47 | 367 KB/s |
| Performance Impact | -10.3% | -3.1% | -4.9% | -10.1% | -72.8% |

The smallest performance impact was a meager 3.1% reduction, but we suspect that 10%+ would be far more typical. WinRAR is a particularly extreme case where performance dropped by over 70%, which AMD indicated would happen given the heavy memory access nature of file decompression applications.
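
The performance impact row is simply the relative change between the two configurations; the short snippet below reproduces the table's percentages from the raw scores.

```c
#include <stdio.h>

/* Recompute the "Performance Impact" row from the results above:
 * impact = (enabled - disabled) / disabled * 100 */
int main(void)
{
    const char  *test[]     = { "SYSMark 2007", "DivX", "CineBench R10", "3dsmax 9", "WinRAR" };
    const double disabled[] = { 117.0, 74.3, 7396.0, 7.20, 1348.0 };
    const double enabled[]  = { 105.0, 72.0, 7031.0, 6.47,  367.0 };

    for (int i = 0; i < 5; i++) {
        double impact = (enabled[i] - disabled[i]) / disabled[i] * 100.0;
        printf("%-14s %+.1f%%\n", test[i], impact);   /* e.g. WinRAR -72.8% */
    }
    return 0;
}
```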

The new B3 stepping Phenom shouldn't perform any differently from a B2 stepping chip with the TLB fix disabled, but to confirm we ran the most extreme test once more:

| | WinRAR |
|---|---|
| AMD Phenom 9600 (B2 Stepping) - TLB Fix Disabled | 1348 KB/s |
| AMD Phenom 9600 (B2 Stepping) - TLB Fix Enabled | 367 KB/s |
| AMD Phenom B3 @ 2.3GHz | 1357 KB/s |

As expected, all is good with B3. The TLB Fix option actually disappeared from the Gigabyte 780G's BIOS upon inserting a B3 chip; it's as if the problem never existed.

Final Words

With the TLB erratum fixed in B3, AMD is one step closer to a competitive Phenom part. Unfortunately Phenom still suffers from low clock speeds and that's something AMD will be working on in the coming months. It will take a combination of higher clock speeds and very competitive pricing to really save Phenom.
