Barts: The Next Evolution of Cypress

At the heart of today's new cards is Barts, the first member of AMD's Northern Islands GPUs. As we hinted at earlier, Barts is a very direct descendant of Cypress. This is both a product of deliberate design and a product of circumstance.

It should come as no surprise that AMD originally intended to build what would become the Northern Islands family on TSMC's 32nm process; as scheduled, that would have lined up with the launch window AMD wanted, and a half-node shrink is easier to execute than a full-node shrink. Unfortunately the 32nm process quickly became doomed for a number of reasons.

Economically, per-transistor it was going to be more expensive than the 40nm process, which is a big problem when you're trying to make an economical chip like Barts. Technologically, 32nm was following TSMC's troubled 40nm process; TSMC's troubles ended up being AMD's troubles when they launched the 5800 series last year, as yields were low and wafers were few, right at a time when AMD needed every chip they could get to capitalize on their lead over NVIDIA. 32nm never reached completion so we can't really speak to its yields, but suffice it to say that TSMC had their hands full fixing 40nm and bringing up 28nm without also worrying about 32nm.

Ultimately 32nm was canceled around November of last year, but even before that AMD had made the difficult decision to change course and move what would become Barts to 40nm. As a result AMD had to make some sacrifices and design compromises to make Barts possible on 40nm, and to bring it to market in a short period of time.

For these reasons, architecturally Barts is very much a rebalanced Cypress, and with the exception of a few key changes we could talk about Barts in the same way we talked about Juniper (the 5700 series) last year.


Barts continues AMD's DirectX 11 legacy, building upon what they've already achieved with Cypress. At the SPU level, Barts, like Cypress and every DX10 AMD design before it, continues to use AMD's VLIW5 design. 5 stream processors – the w, x, y, z, and t units – work together with a branch unit and a set of GPRs to process instructions. The 4 simple SPs can work together to process 4 FP32 MADs per clock, while the t unit can either do FP32 math like the other units or handle special functions such as transcendentals. Here is a breakdown of what a single Barts SPU can do in a single clock cycle (with a quick throughput sketch following the list):

  • 4 32-bit FP MAD per clock
  • 4 24-bit Int MUL or ADD per clock
  • SFU : 1 32-bit FP MAD per clock
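
To put those per-clock figures in chip-wide terms, here is a quick back-of-the-envelope sketch of peak FP32 throughput. It is only a sketch, assuming the 6870's 1120 stream processors and 900MHz core clock, and counting a MAD as 2 FLOPs.

```python
# Rough peak FP32 throughput for a VLIW5 part such as Barts.
# Assumes the 6870's published figures: 1120 SPs (224 VLIW5 SPUs x 5) and a
# 900MHz engine clock; each SP can issue one FP32 MAD per clock (2 FLOPs).

SPS = 1120              # stream processors
CORE_CLOCK_HZ = 900e6   # 900MHz engine clock
FLOPS_PER_MAD = 2       # a multiply-add counts as two floating point operations

peak_flops = SPS * FLOPS_PER_MAD * CORE_CLOCK_HZ
print(f"Peak FP32 throughput: {peak_flops / 1e12:.2f} TFLOPS")  # ~2.02 TFLOPS
```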

Compared to Cypress, you'll note that FP64 performance is not quoted, and this isn't a mistake. Barts isn't meant to be a high-end product (that would be the 6900 series), so FP64 has been shown the door in order to bring the size of the GPU down. AMD remains a very gaming-centric company, in contrast to NVIDIA's philosophy of GPU computing everywhere, so this makes sense for AMD's position; NVIDIA's comparable products still offer FP64, if only for development purposes.

Above the SPs and SPUs we have the SIMD, which remains unchanged from Cypress: 80 SPs make up a SIMD, and each SIMD is still paired with 16KB of L1 texture cache, 8KB of L1 compute cache, and 4 texture units.

At the macro level AMD maintains the same 32 ROP design (which, combined with Barts' higher clocks, actually gives it an advantage over Cypress). Attached to the ROPs are AMD's L2 cache and memory controllers; there are 4 128KB blocks of L2 cache (for a total of 512KB of L2) and 4 64-bit memory controllers that give Barts a 256-bit memory bus.
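
To keep the block counts straight, here is a minimal sketch that tallies the chip-wide totals from those per-block figures; the 14-SIMD count is the fully enabled 6870 configuration, which we will come back to below.

```python
# Minimal tally of Barts' block layout (6870 configuration) as described above.
from dataclasses import dataclass

@dataclass
class BartsLayout:
    simds: int = 14                 # SIMD engines in the fully enabled 6870
    sps_per_simd: int = 80          # 16 VLIW5 SPUs x 5 SPs
    tex_units_per_simd: int = 4
    rops: int = 32
    l2_blocks: int = 4
    l2_block_kb: int = 128
    mem_controllers: int = 4
    mem_controller_bits: int = 64

    def totals(self) -> dict:
        return {
            "stream processors": self.simds * self.sps_per_simd,        # 1120
            "texture units": self.simds * self.tex_units_per_simd,      # 56
            "L2 cache (KB)": self.l2_blocks * self.l2_block_kb,         # 512
            "memory bus (bits)": self.mem_controllers * self.mem_controller_bits,  # 256
        }

print(BartsLayout().totals())
```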

Barts is not just a simple Cypress derivative, however. For non-gaming/compute uses, UVD and the display controller have both been overhauled. Meanwhile for gaming Barts did receive one important upgrade: an enhanced tessellation unit. AMD has responded to NVIDIA's prodding about tessellation at least in part, equipping Barts with a tessellation unit that in the best-case scenario can double its tessellation performance compared to Cypress. AMD has a whole manifesto on tessellation that we'll get into later, but for now we'll work with their own tessellation performance data:

AMD has chosen to focus on tessellation performance at lower tessellation factors, as they believe these are the most important factors for gaming purposes. From their own testing, the advantage over Cypress approaches 2x between tessellation factors 6 and 10, while the gain is closer to 1.5x below factor 6 and again from factor 10 up to around 13. At the highest tessellation factors Barts' tessellation unit falls to performance roughly in line with Cypress', squeezing out a small advantage due to the 6870's higher clockspeed. Ultimately this means tessellation performance is improved on AMD products at lower tessellation factors, but AMD's tessellation performance is still going to more or less collapse at high factors, when they're doing an extreme amount of triangle subdivision.
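
To see why the highest factors are so punishing, it helps to look at how quickly subdivision piles up. The sketch below is only a rough approximation (exact Direct3D 11 triangle counts depend on the partitioning mode), but the roughly quadratic growth in triangles per patch is the point.

```python
# Approximate triangle counts per patch as the tessellation factor rises.
# With all edge and inside factors set to N, triangle output grows roughly
# with N^2; exact D3D11 counts vary with the partitioning mode.

def approx_triangles(tess_factor: int) -> int:
    return tess_factor ** 2

for factor in (1, 6, 10, 15, 32, 64):
    print(f"factor {factor:2d} -> ~{approx_triangles(factor):4d} triangles per patch")
```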

So with all of this said, Barts ends up being 25% smaller than Cypress, yet in terms of performance we've found it to be only 7% slower when comparing the 6870 to the 5870. How AMD accomplished this comes down to the rebalancing we mentioned earlier.

Based on AMD's design decisions and our performance data, it would appear that Cypress has more computing/shading power than it necessarily needs. True, Barts is slower, but it's a bit slower and a lot smaller. AMD's various ratios, such as compute:geometry and compute:rasterization, were less than ideal on Cypress. So Barts changes the ratios.

Compared to Cypress and factoring in 6870/5870 clockspeeds, Barts has about 75% of the compute/shader/texture power of Cypress. However it has more rasterization, tessellation, and ROP throughput than Cypress; in other words, Barts is less of a compute/shader GPU and a bit more of a traditional rasterizing GPU with a dash of tessellation thrown in. Even in the worst-case scenarios from our testing, the drop-off at 1920x1200 is only 13% compared to Cypress/5870, so while Cypress had a great deal of compute capability, it's clearly difficult to make truly effective use of it even in the most shader-heavy games of today.
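
That 75% figure is easy to sanity-check from the two cards' published unit counts and clockspeeds (6870: 1120 SPs, 56 texture units, 32 ROPs at 900MHz; 5870: 1600 SPs, 80 texture units, 32 ROPs at 850MHz). A quick sketch of the throughput ratios:

```python
# Throughput ratios of Barts (6870) versus Cypress (5870), using each card's
# published unit counts and core clocks.

barts   = {"sps": 1120, "tex": 56, "rops": 32, "clock_mhz": 900}  # Radeon HD 6870
cypress = {"sps": 1600, "tex": 80, "rops": 32, "clock_mhz": 850}  # Radeon HD 5870

def ratio(unit: str) -> float:
    return (barts[unit] * barts["clock_mhz"]) / (cypress[unit] * cypress["clock_mhz"])

print(f"shader/compute: {ratio('sps'):.0%} of Cypress")   # ~74%
print(f"texturing:      {ratio('tex'):.0%} of Cypress")   # ~74%
print(f"ROPs:           {ratio('rops'):.0%} of Cypress")  # ~106%
```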

However it's worth noting that internally AMD was throwing around two designs for Barts: a 16 SIMD (1280 SP), 16 ROP design, and the 14 SIMD (1120 SP), 32 ROP design they ultimately went with. The 14/32 design was faster, but only by 2%. This, along with the ease of porting the design from Cypress, made it the right choice for AMD, but it also means that Cypress/Barts is not exclusively bound on either the shader/texture side or the ROP/raster side.

Along with selectively reducing functional blocks from Cypress and removing FP64 support, AMD made one other major change to improve efficiency for Barts: they’re using Redwood’s memory controller. In the past we’ve talked about the inherent complexities of driving GDDR5 at high speeds, but until now we’ve never known just how complex it is. It turns out that Cypress’s memory controller is nearly twice as big as Redwood’s! By reducing their desired memory speeds from 4.8GHz to 4.2GHz, AMD was able to reduce the size of their memory controller by nearly 50%. Admittedly we don’t know just how much space this design choice saved AMD, but from our discussions with them it’s clearly significant. And it also perfectly highlights just how hard it is to drive GDDR5 at 5GHz and beyond, and why both AMD and NVIDIA cited their memory controllers as some of their biggest issues when bringing up Cypress and GF100 respectively.
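
The bandwidth cost of that decision is straightforward to work out, assuming the 256-bit bus described earlier and treating the quoted 4.8GHz and 4.2GHz figures as effective per-pin data rates:

```python
# Memory bandwidth at the two GDDR5 data rates on a 256-bit bus.
BUS_WIDTH_BITS = 256

def bandwidth_gbs(effective_gbps_per_pin: float) -> float:
    # data rate per pin (Gbps) x bus width (bits) / 8 bits per byte = GB/s
    return effective_gbps_per_pin * BUS_WIDTH_BITS / 8

print(f"Cypress/5870 @ 4.8Gbps: {bandwidth_gbs(4.8):.1f} GB/s")  # 153.6 GB/s
print(f"Barts/6870   @ 4.2Gbps: {bandwidth_gbs(4.2):.1f} GB/s")  # 134.4 GB/s
```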

Ultimately all of these efficiency changes are necessary for AMD to continue to compete in the GPU market, particularly in the face of NVIDIA and the GF104 GPU powering the GTX 460. Case in point: in the previous quarter AMD's graphics division made only $1 million in profit. While Barts was in design years before that quarter, the situation still succinctly showcases why it's important to target each market segment with an appropriate GPU; harvested GPUs are only a stop-gap solution, and in the end purposely crippling good GPUs is a sure way to cripple a company's gross margins.

Comments

  • Donkey2008 - Friday, October 22, 2010

    Can you provide a link to your website so I can read your review of the cards? That would be awesome.
  • Natfly - Friday, October 22, 2010

    Sure, right here: http://tinyurl.com/36ag36d
  • BlendMe - Friday, October 22, 2010

    So you're telling me I can get two 6870s, spend less money, use less power, and have more performance than a GTX 480? I like the idea of going back to what made the 48xx cards so great: small, cheap, and expandable.

    Can't wait for the rest of the lineup.
  • tpurves - Friday, October 22, 2010

    How is it that the NVIDIA cards go UP in framerate when you increase the resolution from 1650 to 1920 and add 4xAA? Did you mix up some test run numbers?
  • mapesdhs - Friday, October 22, 2010

    It's a pity the charts don't include SLI results for the EVGA 460. I would like to have seen how close it came to 470 SLI, given the 470's inferior power, GPU load temp and noise results. The 470 GPU load temps under Crysis for just one card are particularly scary; the idea of using two 470s in SLI, and even more so oc'ing them, seems like a recipe for thermal mayhem - alien astronomers with IR telescopes would wonder what the heck they've spotted. :D

    The price drop on the 470 is interesting, but the EVGA 460 still looks like a better buy because of the power/heat issues, especially so for those considering SLI (as I am), and also the fact that the EVGA is as good as or better than the 6870. This graph is the one that interests me:

    http://images.anandtech.com/graphs/graph3987/33232...

    The stock 460 SLI is clearly nowhere near as good as 6870 CF or 470 SLI, but given a single EVGA 460 matches the 6870, I'd really like to know how two EVGAs perform. Any chance you could add the data later?

    On the other hand, one could assume the 6870 should have some oc'ing headroom, but Tom's review didn't show that much of a gain from oc'd 6870s.

    The 6870 here in the UK seems to be about 200 UKP (Aria, Scan), though the XFX version looks to be an exception (178 from Scan). The EVGA is 174 (Scan, but no stock yet). For those who don't want to spend that much, the 800MHz Palit Sonic Platinum 460 has dropped down to only 163 (last week it was 183). I almost bought two of the Palit cards last week, so I'm glad I waited.

    Obviously the pricing is all over the place atm, and likely to wobble all over again when the next 6xxx cards are released. Either way, despite the lack of major performance increases atm, at least there's finally some pricing/value competition. I think I'll wait until the dust settles re pricing, then decide. Quite likely many others will do the same.

    Ian.
  • AtenRa - Friday, October 22, 2010

    Why did you run at 1920x1200 and not 1920x1080?

    Most 1920x1200 monitors have disappeared from the market, and 1920x1080 is becoming the de facto resolution.
  • Lunyone - Friday, October 22, 2010

    Well, with bowing down to NVIDIA on the selection of "what" GPU to use, you have lost all credibility in my eyes. Even Tom's Hardware took the higher road: they agreed to use the "hand picked" GPU but limited the clocks to near-stock settings, so there was a more "real world" comparison. Who knows if this isn't the first time that this has happened at Anandtech. I notice no rebuttals on Anand's part, so I'm guessing they're quite amazed that people are seeing how one-sided this issue is. This article wouldn't affect my purchase, since I look at several sites to draw a conclusion from. But my confidence in the quality and fairness of Anandtech's reviews has been compromised, IMHO. I don't know if I will put any merit in any of Anand's reviews; time will tell.
  • Sunburn74 - Friday, October 22, 2010

    Gee. What's all this about Anandtech losing credibility? NVIDIA specifically asked them to test one card, and the consumer benefits from having this information. It's not like Anandtech didn't include the reference GTX 460 as well. Anything that tells the consumer more about how valuable his dollar really is, is a good thing imo.

    I currently have an OC'd Radeon 5850, and it annoyed the hell out of me trying to justify whether or not the extra 30 bucks I eventually ended up paying for it was worth it. There weren't any reviews at the time, you see...
  • SandmanWN - Friday, October 22, 2010

    You can't gauge the value of an overclocked card against a stock card. You have no idea what the other card can do. What you're saying is nonsense if you really put two seconds into thinking about what you just said.
  • mindbomb - Friday, October 22, 2010

    We're talking about factory OC'd cards. It's not like Anand was playing around in RivaTuner.
