Barts: The Next Evolution of Cypress

At the heart of today’s new cards is Barts, the first member of AMD’s Northern Islands family of GPUs. As we quickly hinted at earlier, Barts is a very direct descendant of Cypress. This is both a product of design and a product of circumstance.

It should come as no surprise that AMD was originally looking to produce what would become the Northern Islands family on TSMC’s 32nm process; as originally scheduled this would have lined up with the launch window AMD wanted, and a half-node shrink is easier for AMD than trying to do a full-node shrink. Unfortunately, the 32nm process was quickly doomed for a number of reasons.

Economically, 32nm was going to be more expensive per transistor than the 40nm process, which is a big problem when you’re trying to make an economical chip like Barts. Technologically, 32nm was following TSMC’s troubled 40nm process; TSMC’s troubles ended up being AMD’s troubles when they launched the 5800 series last year, as yields were low and wafers were few, right at a time when AMD needed every chip they could get to capitalize on their lead over NVIDIA. 32nm never reached completion, so we can’t really speak to its yields, but suffice it to say that TSMC had its hands full fixing 40nm and bringing up 28nm without also worrying about 32nm.

Ultimately 32nm was canceled around November of last year, but even before that AMD had made the hard choice to change course and move what would become Barts to 40nm. As a result, AMD had to make some sacrifices and design compromises to make Barts possible on 40nm, and to get it to market in a short period of time.

For these reasons, architecturally Barts is very much a rebalanced Cypress, and with the exception of a few key changes we could talk about Barts in the same way we talked about Juniper (the 5700 series) last year.


Barts continues AMD’s DirectX 11 legacy, building upon what they’ve already achieved with Cypress. At the SPU level, Barts, like Cypress and every AMD DX10 design before it, continues to use AMD’s VLIW5 design. 5 stream processors – the w, x, y, z, and t units – work together with a branch unit and a set of GPRs to process instructions. The 4 simple SPs can work together to process 4 FP32 MADs per clock, while the t unit can either do FP32 math like the other units or handle special functions such as transcendentals. Here is a breakdown of what a single Barts SPU can do in a single clock cycle (with a small illustrative sketch after the list):

  • 4 32-bit FP MAD per clock
  • 4 24-bit Int MUL or ADD per clock
  • SFU: 1 32-bit FP MAD per clock
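
To make the VLIW5 arrangement a bit more concrete, here is a toy Python sketch (ours, not AMD’s shader compiler) of how a stream of independent operations could be packed into 5-wide issue bundles matching the per-clock capabilities above; treat it purely as an illustration under the assumption that every operation is independent.

    # Toy illustration of VLIW5 issue: pack independent scalar ops into
    # bundles of up to 4 "simple" slots (w/x/y/z) plus 1 t-unit slot.
    def pack_vliw5(ops):
        bundles = []
        simple, t_slot = [], None

        def flush():
            nonlocal simple, t_slot
            bundles.append((simple, t_slot))
            simple, t_slot = [], None

        for op in ops:
            if op == "transcendental":            # only the t unit handles these
                if t_slot is not None:
                    flush()
                t_slot = op
            elif len(simple) < 4:                 # w/x/y/z: FP32 MAD or 24-bit int
                simple.append(op)
            elif op == "mad" and t_slot is None:  # the t unit can also do FP32 math
                t_slot = op
            else:
                flush()
                simple.append(op)
        if simple or t_slot:
            flush()
        return bundles

    # Five independent FP32 MADs fill a single bundle -> one clock on one SPU.
    print(pack_vliw5(["mad"] * 5))
    # Four MADs plus a transcendental also issue together in one clock.
    print(pack_vliw5(["mad", "mad", "transcendental", "mad", "mad"]))

In practice, of course, finding five independent operations every clock is the hard part, which is exactly why real-world utilization of a VLIW5 design varies so much from shader to shader.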

Compared to Cypress, you’ll note that FP64 performance is not quoted, and this isn’t a mistake. Barts isn’t meant to be a high-end product (that would be the 6900 series), so FP64 has been shown the door in order to bring the size of the GPU down. AMD is still a very gaming-centric company compared to NVIDIA’s philosophy of GPU computing everywhere, so this makes sense for AMD’s position; meanwhile NVIDIA’s comparable products still offer FP64, if only for development purposes.

Above the SPs and SPUs we have the SIMD, which remains unchanged from Cypress: 80 SPs make up a SIMD, and each SIMD retains 16KB of L1 texture cache, 8KB of L1 compute cache, and 4 texture units.

At the macro level AMD maintains the same 32 ROP design (which, combined with Barts’ higher clocks, actually gives it an advantage over Cypress). Attached to the ROPs are AMD’s L2 cache and memory controllers; there are 4 128KB blocks of L2 cache (for a total of 512KB of L2) and 4 64-bit memory controllers that give Barts a 256-bit memory bus.
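
For reference, here is a quick back-of-the-envelope Python calculation that ties these per-block figures together for the fully enabled chip in the 6870. The 14 SIMD count comes up again later in this article, and the 900MHz core clock is the 6870’s reference clock, so treat the totals as a rough sketch rather than official specifications.

    # Back-of-the-envelope totals for Barts XT (the 6870), assembled from the
    # per-block figures above; 14 SIMDs and a 900MHz core clock assumed.
    simds        = 14
    sps_per_simd = 80                     # unchanged from Cypress
    tex_per_simd = 4

    sps       = simds * sps_per_simd      # 1120 stream processors
    tex_units = simds * tex_per_simd      # 56 texture units
    core_hz   = 900e6                     # HD 6870 reference core clock

    # Each SP retires one FP32 MAD (2 FLOPs) per clock.
    peak_fp32_tflops = sps * 2 * core_hz / 1e12   # ~2.0 TFLOPS

    l2_kb     = 4 * 128                   # four 128KB L2 blocks -> 512KB
    bus_width = 4 * 64                    # four 64-bit controllers -> 256-bit

    print(f"{sps} SPs, {tex_units} texture units, {peak_fp32_tflops:.2f} TFLOPS FP32, "
          f"{l2_kb}KB L2, {bus_width}-bit bus")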

Barts is not just a simple Cypress derivative, however. For non-gaming/compute uses, UVD and the display controller have both been overhauled. Meanwhile for gaming, Barts did receive one important upgrade: an enhanced tessellation unit. AMD has responded to NVIDIA’s prodding about tessellation, at least in part, by equipping Barts with a tessellation unit that in the best-case scenario can double its tessellation performance compared to Cypress. AMD has a whole manifesto on tessellation that we’ll get into, but for now we’ll work with the following chart:

[Chart: AMD’s tessellation performance scaling by tessellation factor, Barts (6870) vs. Cypress (5870)]

AMD has chosen to focus on tessellation performance at lower tessellation factors, as they believe these are the most important factors for gaming purposes. In their own testing the advantage over Cypress approaches 2x between factors 6 and 10, and is closer to 1.5x both below that range and above it up to a factor of 13 or so. At the highest tessellation factors Barts’ tessellation unit falls to performance roughly in line with Cypress’, squeezing out a small advantage due to the 6870’s higher clockspeed. Ultimately this means tessellation performance is improved on AMD products at lower tessellation factors, but AMD’s tessellation performance is still going to more or less collapse at high factors, when an extreme amount of triangle subdivision is going on.

So with all of this said, Barts ends up being 25% smaller than Cypress, yet in terms of performance we’ve found the 6870 to be only 7% slower than the 5870. How AMD accomplished this is the rebalancing we mentioned earlier.

Based on AMD’s design decisions and our performance data, it would appear that Cypress has more computing/shading power than it necessarily needs. True, Barts is slower, but it’s a bit slower and a lot smaller. AMD’s various compute ratios, such as compute:geometry and compute:rasterization, would appear to be less than ideal on Cypress. So Barts changes the ratios.

Compared to Cypress, and factoring in 6870/5870 clockspeeds, Barts has about 75% of the compute/shader/texture power of Cypress. However it has more rasterization, tessellation, and ROP power than Cypress; in other words, Barts is less of a compute/shader GPU and a bit more of a traditional rasterizing GPU with a dash of tessellation thrown in. Even in the worst-case scenarios from our testing, the drop-off at 1920x1200 is only 13% compared to Cypress/5870, so while Cypress had a great deal of compute capability, it’s clearly difficult to make extremely effective use of it even in the most shader-heavy games of today.
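
That roughly-75% figure is easy to reproduce from published unit counts and reference clocks (1120 SPs at 900MHz for the 6870 versus 1600 SPs at 850MHz for the 5870); the short Python sketch below is just that arithmetic, not a measurement.

    # Rough throughput ratios, 6870 (Barts) vs. 5870 (Cypress), using published
    # unit counts and reference clocks. Illustrative only.
    barts   = {"sps": 1120, "tex": 56, "rops": 32, "clock_mhz": 900}  # HD 6870
    cypress = {"sps": 1600, "tex": 80, "rops": 32, "clock_mhz": 850}  # HD 5870

    def ratio(unit):
        return (barts[unit] * barts["clock_mhz"]) / (cypress[unit] * cypress["clock_mhz"])

    print(f"shader/compute: {ratio('sps'):.0%}")   # ~74% -> the "about 75%" above
    print(f"texturing:      {ratio('tex'):.0%}")   # ~74%
    print(f"ROP throughput: {ratio('rops'):.0%}")  # ~106% -> ahead of Cypress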

However it’s worth noting that internally AMD was throwing around 2 designs for Barts: a 16 SIMD (1280 SP), 16 ROP design, and the 14 SIMD (1120 SP), 32 ROP design that they ultimately went with. The 14/32 design was faster, but only by 2%. This, along with the ease of porting the design from Cypress, made it the right choice for AMD, but it also means that Cypress/Barts is not exclusively bound on either the shader/texture side or the ROP/raster side.

Along with selectively trimming functional blocks from Cypress and removing FP64 support, AMD made one other major change to improve efficiency on Barts: they’re using Redwood’s memory controller. In the past we’ve talked about the inherent complexity of driving GDDR5 at high speeds, but until now we’ve never known just how complex it is. It turns out that Cypress’s memory controller is nearly twice as big as Redwood’s! By reducing their target memory speed from 4.8GHz to 4.2GHz, AMD was able to reduce the size of the memory controller by nearly 50%. Admittedly we don’t know exactly how much die space this design choice saved AMD overall, but from our discussions with them it’s clearly significant. It also perfectly highlights just how hard it is to drive GDDR5 at 5GHz and beyond, and why both AMD and NVIDIA cited their memory controllers as some of their biggest issues when bringing up Cypress and GF100, respectively.
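
The bandwidth side of that trade-off is simple to quantify; assuming a 256-bit bus on both cards, with the 6870 at a 4.2Gbps data rate and the 5870 at 4.8Gbps, a quick sketch of the calculation (reference clocks, not measured figures) looks like this:

    # Memory bandwidth trade-off from the lower GDDR5 data rate (illustrative).
    def bandwidth_gbs(bus_bits, data_rate_gbps):
        """Peak memory bandwidth in GB/s for a bus width and per-pin data rate."""
        return bus_bits * data_rate_gbps / 8

    barts_bw   = bandwidth_gbs(256, 4.2)   # HD 6870: ~134.4 GB/s
    cypress_bw = bandwidth_gbs(256, 4.8)   # HD 5870: ~153.6 GB/s

    print(f"6870: {barts_bw:.1f} GB/s, 5870: {cypress_bw:.1f} GB/s "
          f"({barts_bw / cypress_bw - 1:+.1%})")   # about -12.5%

A roughly 12.5% drop in peak bandwidth is a real cost, but given that overall performance only trails Cypress by around 7%, it was evidently a cost AMD could afford in exchange for the die space.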

Ultimately all of these efficiency changes are necessary for AMD to continue to compete in the GPU market, particularly in the face of NVIDIA and the GF104 GPU powering the GTX 460. Case in point: in the previous quarter AMD’s graphics division made only $1 million in profit. While Barts was in design years before that quarter, the situation still succinctly showcases why it’s important to target each market segment with an appropriate GPU; harvested GPUs are only a stop-gap solution, and in the end purposely crippling good GPUs is a good way to cripple a company’s gross margin.

Comments

  • Setsunayaki - Friday, October 22, 2010 - link

    There was a graph where a 4XXX series card beat the 6XXX series card... There were many where the 5XXX series was higher... Tessellation performance is higher on the 460 GTX, and SLI scales better than CrossFire...

    What the tessellation performance graph really means is that if you were to take a 460 GTX and a 6870, turn off tessellation and play a game... the 6870 gets a higher framerate. But if you turn on tessellation on both cards and go full force with tessellation and other features (considering that Nvidia has support for PhysX and most games now have some physics implementation)... the outcome shows the 6870 taking such a performance hit that, as far as framerates go... a 460 actually matches it or beats it outright.

    What ATI/AMD really needs to work on is integrating more technologies into its cards to actually have more options during a game. No physics processing, just optimizations for AA and AF... and tessellation performance that doesn't come close to a 460, along with horrible Linux support... I really wonder and hope that their flagship card shows something stellar...

    Not to argue against it, but for the deserving ATI/AMD fans who have stuck with them over the years. ^_^
  • Alilsneaky - Friday, October 22, 2010 - link

    Prices are high for both in my country (Belgium).

    199 euros for the 6850 and 279 euros (in the cheaper shops, up to 350 in others) for the 6870.

    Very bland release for us, nothing to get excited about at that price point.

    I also take offense at the naming scheme; why pick a name that will inevitably deceive many people into buying a sidegrade?
  • Pastuch - Friday, October 22, 2010 - link

    There was not nearly enough discussion of DTS-HD MA and TrueHD pass-through in this article. Gaming is 50% of the reason to upgrade; the rest of my focus is HTPC use. Please compare the GTX 460 vs. the 6870 regarding bitstreaming, video quality and hardware decoding.

    Thanks.

    P.S. Nvidia usually does a pathetic job on anything not related to gaming.
  • Scootiep7 - Friday, October 22, 2010 - link

    I think you guys are a little off on calling the 6870 the $200 price point king. The cheapest retail for the card right now is $239.99 for any model, and then you have to add another $5-10 for shipping. That sticks it at $245-$250, which is nowhere near the $200 price point. And with most GTX 460 1GBs sitting at about $170-$190 (w/ shipping), this card is not competing with them on price at all. Maybe in a few months if prices drop, but not now. It's more in the GTX 470 range, and that is much tougher competition. I'm sorry, but the 6870 is NOT the $200 price point king. It's not even close.
  • Lolimaster - Sunday, October 24, 2010 - link

    HD6850 offers better performance than the 460 1GB.
    HD6850 costs $175.

    HD6870 kills both of them, and also beats the 470 on performance/power consumption (80W less).
  • Scootiep7 - Sunday, October 24, 2010 - link

    Ok, I'm sorry, but I have to laugh at this. Where the hell are you finding a 6850 for $175? The cheapest ANYWHERE is $199, and you still have to factor in $8-ish shipping. Re-read my post and realize that the prices I quoted are accurate, and you're still looking at a $30 price difference between the 6850 and the 460 1GB. Yes the performance is better, but it's not amazingly better and I don't think it justifies it. Hey, I'm all for the red team this time around; I picked up a 5770, which is an amazing bang-for-the-buck card. I'm just saying that calling the 6870 or the 6850 the new $200 price point king is wrong. Too many variables.
  • orthancstone - Friday, October 22, 2010 - link

    I'm especially pleased to see the 4870 included in some benchmarks. As someone who owns one and who was never impressed with the performance boost/cost ratio of the 58/59xx lines, I've been wondering how the 6xxx line would compare to the two generation old stuff. I'd love to see it included in the third party 6xxx reviews.
  • Edison5do - Friday, October 22, 2010 - link

    As an owner of an HD 4850 I was planning to get an HD 5770, but at this point the HD 6850 looks like a better option for a few more bucks... or I could wait to see if the HD 5770 drops in price a little more.
  • Sando_UK - Friday, October 22, 2010 - link

    Anandtech is one of my favourite review sites and it's a real shame to see what's happened here. I don't know the reasons why you guys needed to include the 460 OC in this review (does sound like a fine card btw, but this wasn't the place for it) - can't see any reason this wouldn't have been much better compared in a separate article. The fact Tom's hardware did a very similar thing makes the whole thing fishy...

    New generations/architectures don't come along very often and deserve proper comparison and coverage - I'm not an AMD or Nvidia fanboi (happy to go with whichever is best on price/performance/extras at the time), but we rely on you guys to give us the facts on a level playing field. I'm sure you have in this case, but even the suggestion of impropriety damages your (extremely good) reputation, and I think it's something you should really try to avoid in the future - be it AMD or Nvidia reviews.

    Otherwise, thanks for all your hard work.
  • Natfly - Friday, October 22, 2010 - link

    It's sad to say, but this review fucking sucks. UVD and the display controller have been overhauled, but you make no mention of any of the changes. Are there still only 2 RAMDAC clocks? Or can you now use passive DP converters while using both DVI ports?

    And including an OC'd card because nVidia pushed you into it? Way to take a shot to your credibility. And no mention of its clocks or price... AND no overclocking numbers for these new cards when you are specifically comparing it to an OC'd card? I mean wtf, this review is not up to previous Anandtech standards.
