Refresher: The 6800 Series’ New Features

Back in October AMD launched the first 6000 series cards, the Barts-based Radeon HD 6800 series. At their core they are a refreshed version of the Cypress GPU that we saw on the 5800 series, but AMD used the opportunity to make some enhancements over standard Cypress. All of these enhancements apply throughout the 6000 series, including the 6900 series. As such, for those of you who didn’t pay much attention to the 6800 series, we’re going to quickly recap what’s new in order to lay the groundwork for further comparisons of the 6900 series against the 5800 series.

We’ll start with the core architecture. Compared to Cypress, Barts is nearly identical save one difference: the tessellator. For Barts AMD implemented what they call their 7th generation tessellator, which focuses on delivering improved tessellation performance at the lower tessellation factors AMD felt were more important. Cayman takes this one step further and implements AMD’s 8th generation tessellator, which as the naming convention implies is the 7th generation tessellator with even further enhancements (particularly those necessary for load balancing).

The second change we saw with Barts and the 6800 series was AMD’s refined texture filtering engine. AMD’s texture filtering engine from the 5800 series set new standards by offering angle-independent filtering, but it had an annoying quirk with highly regular/noisy textures, where it didn’t do a good enough job blending together the various mipmaps, resulting in visible transitions between them. For the 6800 series AMD fixed this, and it can now properly blend together noisy textures. At the same time, in a controversial move, AMD tweaked its default filtering optimizations for both the 5800 series and the entire 6000 series, leading to these cards producing images subtly different (and depending on who you ask, subtly worse) than what the 5800 series produced prior to the Catalyst 10.10 drivers.
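The mipmap problem described above comes down to how samples from two adjacent mip levels are blended as a surface recedes. A sketch of a generic trilinear crossfade illustrates the idea (purely illustrative; this is not AMD’s filtering hardware, and `sample_mip` stands in for a real bilinear texture fetch):

```python
import math

def trilinear(sample_mip, u, v, lod):
    """Blend samples from the two mip levels bracketing `lod`.

    A filter that weighted the two levels poorly (or snapped between
    them) would produce exactly the kind of visible transition between
    mipmaps described above.
    """
    lo = math.floor(lod)              # finer mip level
    frac = lod - lo                   # how far into the transition we are
    a = sample_mip(lo, u, v)          # sample from the finer level
    b = sample_mip(lo + 1, u, v)      # sample from the coarser level
    return a * (1 - frac) + b * frac  # smooth crossfade between levels
```

With a fake `sample_mip` that simply returns the level number, a `lod` of 1.25 lands a quarter of the way between levels 1 and 2, giving the smooth level-to-level gradient a correct filter should show.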

[Image comparison: Radeon HD 5870 vs. Radeon HD 6870 vs. GeForce GTX 480]

The third change we saw was the introduction of a new anti-aliasing mode, initially launched on the 6800 series and backported to the 5800 series shortly thereafter. Morphological Anti-Aliasing (MLAA) is a post-processing filter that works on any (and all) images, looking for high-contrast edges (jaggies) and blending them to reduce the contrast. Implemented as a compute shader, it works with all games. As it’s a post-processing filter the results can vary – the filter has no knowledge of depth, polygons, or other attributes of the rendered world beyond the final image – so it’s prone to blending everything that merely looks like aliasing. On the plus side it’s cheap to use, as it was originally designed for consoles and their limited resources; by not consuming large amounts of memory and memory bandwidth like SSAA/MSAA, it usually carries a low performance hit.
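The general shape of such a post-process filter can be sketched in a few lines. This is a toy detect-then-blend pass over a grayscale image, not AMD’s actual compute shader; the threshold and blend weights are invented for illustration:

```python
# Toy morphological AA pass: find high-contrast edges in the final
# image and blend across them. Real MLAA classifies edge shapes and
# computes coverage-based blend weights; this sketch only shows the
# "detect high contrast, then blend" structure.
def mlaa_pass(img, threshold=0.1):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            # Compare luminance against the right and bottom neighbours.
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and abs(img[y][x] - img[ny][nx]) > threshold:
                    # High-contrast edge found: pull both samples toward
                    # their average to soften the transition.
                    avg = (img[y][x] + img[ny][nx]) / 2
                    out[y][x] = (out[y][x] + avg) / 2
                    out[ny][nx] = (out[ny][nx] + avg) / 2
    return out
```

Note that the filter only ever sees final pixel values, which is exactly why it will happily soften text or high-frequency textures that merely look like aliasing.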

Last but not least, AMD made a number of changes to their display hardware. The Universal Video Decoder (UVD) was upgraded to version 3, bringing full decode support for MPEG-2, MPEG-4 ASP, and H.264 MVC (packed frame video for 3D movies). For the 6900 series this is not of great importance as MPEG-2 and MPEG-4 ASP are low complexity codecs, but it does play an important role for AMD’s future APU products and low-end GPUs, where offloading these low complexity codecs is still going to be a big relief for the slower CPUs they’re paired with. And on that note the first public version of the DivX codec with support for UVD3 will be shipping today, letting 6800/6900 series owners finally take advantage of this functionality.



The second of the major display changes was the addition of support for the DisplayPort 1.2 standard. DP1.2 doubles DisplayPort’s bandwidth to 21.6Gbps, finally giving DisplayPort a significant bandwidth lead over dual-link DVI. With double the bandwidth it’s now possible to drive multiple monitors off of a single root DisplayPort connection, a technology called Multi Stream Transport (MST). AMD is heavily banking on this technology, as the additional bandwidth, coupled with the fact that DisplayPort doesn’t require a clock source for each monitor/stream, means AMD can drive up to 6 monitors off of a single card using only a pair of mini-DP ports. AMD is so cutting edge here that, like the 6800 series, the 6900 series is technically only DP1.2 ready – there won’t be any other devices available for compliance testing until 2011.
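Some back-of-the-envelope math shows why a pair of ports suffices. The HBR2 link rate and 8b/10b coding overhead below come from the DisplayPort spec; the display timing ignores blanking intervals, so real-world requirements run somewhat higher:

```python
# Rough DP 1.2 bandwidth budget. HBR2 runs 5.4 Gbps per lane over 4
# lanes; 8b/10b line coding leaves 80% of that for actual pixel data.
LANES = 4
HBR2_GBPS_PER_LANE = 5.4
RAW_GBPS = LANES * HBR2_GBPS_PER_LANE    # ~21.6 Gbps raw
EFFECTIVE_GBPS = RAW_GBPS * 8 / 10       # ~17.28 Gbps usable

def stream_gbps(width, height, hz, bpp=24):
    # Pixel data rate for one display stream, ignoring blanking.
    return width * height * hz * bpp / 1e9

per_display = stream_gbps(1920, 1200, 60)   # ~3.32 Gbps per monitor
print(RAW_GBPS, EFFECTIVE_GBPS, per_display)
# Three such MST streams fit comfortably on one connector...
print(3 * per_display < EFFECTIVE_GBPS)
# ...but six would not, which is why six displays need two ports.
print(6 * per_display > EFFECTIVE_GBPS)
```

The same arithmetic explains the dual-link DVI comparison: a single DL-DVI link tops out well below what one DP1.2 connection can carry.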

Finally, the 6800 series also introduced support for HDMI 1.4a and support for color correction in linear space. HDMI 1.4a support is fairly straightforward: the 6000 series can drive 3D televisions in either the 1080p24 or 720p60 3D modes. Meanwhile support for color correction in linear space allows AMD to offer accurate color correction for wide gamut monitors; previously there was a loss of accuracy, as color correction had to be applied in the gamma color space, which is meant only for display purposes. This is particularly important for integrating wide gamut monitors into traditional-gamut workflows, as sRGB is misinterpreted on a wide gamut monitor without color correction.
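The linear-space pipeline boils down to decode, correct, re-encode. A minimal sketch, using the standard sRGB transfer functions; the 3×3 matrix would come from a measured monitor profile, so the identity matrix here is just a placeholder:

```python
# Color correction in linear space: the sRGB transfer curve is
# nonlinear, so matrix math applied directly to gamma-encoded values
# distorts colors. Correct approach: linearize, apply the matrix,
# then re-encode for the display.
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def correct(rgb, matrix):
    lin = [srgb_to_linear(c) for c in rgb]                            # decode
    out = [sum(matrix[i][j] * lin[j] for j in range(3)) for i in range(3)]  # correct
    return [linear_to_srgb(max(0.0, min(1.0, c))) for c in out]       # re-encode

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # placeholder gamut matrix
```

Doing the matrix step on the gamma-encoded values instead would skew mid-tones most of all, which is exactly the accuracy loss the old gamma-space path suffered from.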

While all of these features were introduced on the 6800 series, they’re fundamental parts of the entire 6000 series, meaning they’re part of the 6900 series too. This provides us with a baseline set of improvements over AMD’s 5800 series, on top of which come the additional improvements that Cayman and AMD’s VLIW4 architecture bring.

168 Comments

  • AnnihilatorX - Thursday, December 16, 2010 - link

    I disagree with you rarson

    This is what sets Anandtech apart, it has quality over quantity.
    Anandtech is the ONLY review site which offers me comprehensive information on the architecture, with helpful notes on the expected future gaming performance. It mentions that AMD intended the 69xx to run on 32nm and made sacrifices. If you go to Guru3D's review, the editor stated in the conclusion that he doesn't know why the performance lacks the wow factor. Anandtech answered that question with the process node.

    If you want to read reviews only, go on Google and search for 6850 review, or go to DailyTech's daily recent hardware review post; you can find over 15 plain reviews. Even easier, just use the Quick Navigation menu or the Table of Contents on the freaking first page of the article. This laziness does not entice sympathy.
  • Quidam67 - Thursday, December 16, 2010 - link

    Rarson's comments may have been a little condescending in tone, but I think the criticism was actually constructive in nature.

    You can argue the toss about whether the architecture should be in a separate article or not, but personally speaking, I actually would prefer it was broken out. I mean, for those who are interested, simply provide a hyper-link, that way everyone gets what they want.

    In my view, a review is a review; an analysis of the architecture can complement that review but should not actually be part of the review itself. A number of other sites follow this formula and provide both, but don't merge them together as one super-article, and there are other benefits to this if you read on.

    The issue of spelling and grammar is trivial, but it could in fact be symptomatic of a more serious problem, such as the sheer volume of work Ryan has to perform in the time-frame provided, and the level of QA being squeezed in with it. Given the nature of NDAs, perhaps it might take the pressure off if the review came first and the architecture second, so the time pressures weren't quite so restrictive.

    Lastly, employing a professional proof-reader is hardly an insult to the original author. It's no different than being a software engineer (which I am) and being backed up by a team of quality test analysts. It certainly makes you sleep better when stuff goes into production. Why should Ryan shoulder all the responsibility?
  • silverblue - Thursday, December 16, 2010 - link

    I do hope you're joking. :) (can't tell at this early time)
  • Arnulf - Thursday, December 16, 2010 - link

    "... unlike Turbo which is a positive feedback mechanism."

    Turbo is a negative feedback mechanism. If it were a positive feedback mechanism (= a consequence of an action resulting in further action in the same direction), the CPU would probably burn up almost instantly after Turbo triggered, as its clock would increase indefinitely, ever more following each increase: the higher the temperature, the higher the frequency. This is not how Turbo works.

    A negative feedback mechanism is the result of an action producing a reaction (= an action in the opposite direction). In the case of CPUs and Turbo, it's this reaction to temperature that keeps CPU frequency under control. The higher the temperature, the lower the frequency. This is how Turbo and PowerTune work.

    The fact that Turbo starts at lower frequency and ramps it up and that PowerTune starts at higher frequency and brings it down has no bearing on whether the mechanism of control is called "positive" or "negative" feedback.
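The sign of the response is easy to see in a toy control loop (all the constants here are invented; this just shows negative feedback settling at an equilibrium instead of running away):

```python
# Toy negative-feedback clock controller: the hotter the chip runs,
# the lower the clock, so the system converges. Flip the sign of the
# 10.0 gain and you get positive feedback, i.e. thermal runaway.
BASE, LIMIT = 1100.0, 90.0   # MHz ceiling and temperature limit (made up)
temp = 60.0
for _ in range(50):
    freq = BASE + 10.0 * (LIMIT - temp)  # above the limit -> clock DOWN
    temp += 0.05 * freq - 55.0           # crude thermal model: heat in minus heat out
print(round(freq), round(temp))          # settles at the equilibrium point
```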

    Considering your fondness for Wikipedia (as displayed by the reference in the article) you might want to check out these:

    http://en.wikipedia.org/wiki/Negative_feedback
    http://en.wikipedia.org/wiki/Positive_feedback

    and more specifically:

    http://en.wikipedia.org/wiki/Negative_feedback#Con...
  • Ryan Smith - Thursday, December 16, 2010 - link

    Hi Arnulf;

    Fundamentally you're right, so I won't knock you. I guess you could say I'm going for a very loose interpretation there. The point I'm trying to get across is that Turbo provides a performance floor, while PowerTune is a performance ceiling. People like getting extra performance for "free" more than they like "losing" performance. Hence one experience is positive and one is negative.

    I think in retrospect I should have used positive/negative reinforcement instead of feedback.
  • Soda - Thursday, December 16, 2010 - link

    Has anyone noticed the edge missing on the board's 8-pin power connector?

    Apparently AMD made a mistake in the reference design of the board and didn't calculate the space needed by the cooler.

    If you look closely on the power connector in http://images.anandtech.com/doci/4061/6970Open.jpg you'll notice the missing edge.

    For a full story on the matter you can go to http://www.hardwareonline.dk/nyheder.aspx?nid=1060...
    For the english speaking people I suggest the googlish version here http://translate.google.com/translate?hl=da&sl...

    There are some pictures to back up the claim that AMD made the mistake here.

    Though it hasn't been confirmed by AMD whether this is a mistake only on the review boards or on all cards of the 69xx series.
  • versesuvius - Thursday, December 16, 2010 - link

    I have a 3870, on a 17 inch monitor, and everything is fine as far as games go. The hard disk gets in the way sometimes, but that is just about it. All the games run fine. No problem at all. Oh, there's more: they run better on the lousy XBOX. Why the new GPU then? Giant monitors? Three of them? Six of them? (The most fun I had on Anandtech was looking at pictures of AT people trying to stabilize them on a wall.) Oh, the "Compute GPU"? Wouldn't that fit on a small PCI card and act like the old 486 coprocessor, for those who have some use for it? Or is it just a silly excuse for not doing much at all, or rather not giving much to the customers, and still charging the same?

    The "High End"! In an ideal world the prices of things go down, and more and more people can afford them. That lovely capitalist idea was turned on its head sometime in the eighties of the last century, and instead the notion of value was reinvented. You get more value for the same price. You still have to pay $400 for your graphics card, even though you do not need the "Compute GPU", and you do not need the aliased superduper antialiasing that nobody yet knows how to achieve in software. Can we have a cheap 4870? No, that is discontinued. The 58 series? Discontinued. There are hundreds of thousands, or to be sure millions, of people who will pay 50 dollars for one. All ATI or Nvidia need to do is fine-tune the drivers and reduce power consumption. Then again, that must be another "High End" story.

    In fact the only tale that is being told and retold is "High End"s and "Fool"s (i.e. "We can do whatever we want with the money that you don't have"). Until better, saner times. For now, long live the console. I am going to buy one instead of this stupid monstrosity and its equally stupid competitive monstrosity. Cheaper, and gets the job done in more than one way.

    End of Rant.
    God Bless.
  • Necc - Thursday, December 16, 2010 - link

    So True.
  • Ananke - Thursday, December 16, 2010 - link

    Agree. I have a 5850 and it works fine, and I got it on day one at a huge discount, but still - it is kind of worthless. Our entertainment comes more and more exclusively from consoles, and a discrete high-end card that commands an above-$100 price tag is worthless to me. It is a nice touch, but I have no application for it in everyday life, and several months later it is already outdated or discontinued.

    My guess: graphics integrated into the CPU will take over, and mass-market discrete cards will share the fate of the dinosaurs very soon.
  • Quidam67 - Thursday, December 16, 2010 - link

    Wonderfully subversive commentary. Loved it.

    Still, the thing I like about the High end (I'll never buy it until my Mortgage is done with) is that it filters down to the middle/low end.

    Yes, lots of discontinued product lines, but for example, I thought the HD5770 was a fantastic product. It gave ample performance for mainstream gamers in a small form factor (you can even get it in single-slot) with low heat and power requirements, meaning it was a true drop-in upgrade to your existing rig, with a practical upgrade path to CrossFireX.

    As for the Xbox, that hardware is so outdated now that even the magic of software optimisation (a seemingly lost art in the world of PCs) cannot disguise the fact that new games are not going to look any better, or run any faster, than those that came out at launch. I was watching a GT5 demo the other day, and with all the hype about how realistic it looks (and plays), I really couldn't get past the massive amount of jaggies on screen. Also, the very limited damage modelling is, in my view, a nod to hardware limitations rather than a game-design consideration.
