
  • linuxgtwindos3gtmucs - Sunday, June 07, 2009 - link

    Does it run CUDA?
  • Obsy - Thursday, June 04, 2009 - link

    Does tessellation becoming a standard apply to my DX10.1 card, a Radeon HD 4850? Or do I have to get a DX11 card?
  • philosofool - Thursday, June 04, 2009 - link

    But shouldn't this article be talking about ATI, not AMD? I realize the former is a subsidiary of the latter, but this seems similar to talking about what Disney Corp. says about a Miramax movie.
  • DerekWilson - Thursday, June 04, 2009 - link

    ATI is actually more like a brand owned by AMD than a subsidiary. The company doing the work is actually AMD; the products they are making are ATI Radeon video cards. At least, that's my understanding of it.
  • GourdFreeMan - Wednesday, June 03, 2009 - link

    DirectX 11 shipped with the Windows 7 Release Candidate. You can install the DirectX 11 SDK and run sample projects (such as the one from AMD's presentation) today... albeit with software rendering, as no one outside of AMD has their hands on "Evergreen" yet. I imagine AMD will be getting samples out to select software development partners (e.g. Valve) as soon as validation is completed. Given typical development times, consumers could then expect the earliest DirectX 11 games to actually make significant use of DX11 in late 2010 or early 2011.

    It looks like they are running SubD11 from the DirectX 11 SDK. Too bad they are not using the character model that ships with the SDK, or you could compare reference rasterizer performance vs. hardware performance in one of the older DirectX 10 sample projects and do some back-of-the-envelope calculations to get performance estimates. It probably wouldn't be worth your time anyway, though, as SubD11 should be using the hardware tessellator and not just shaders like some of the other SDK sample projects.
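    For anyone wondering what a tessellator actually buys you, here's a toy sketch of the basic idea (plain Python, nothing to do with the real D3D11 API or the SubD11 sample): uniform subdivision splits every triangle into four via edge midpoints, so a coarse control mesh is amplified into many small triangles on the fly rather than stored and transferred at full detail.

```python
# Toy illustration of geometry amplification: each subdivision pass
# splits every triangle into 4 via edge midpoints, so n passes turn
# 1 input triangle into 4**n output triangles.

def midpoint(a, b):
    return tuple((ai + bi) / 2 for ai, bi in zip(a, b))

def subdivide(tris):
    out = []
    for a, b, c in tris:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        # Replace one triangle with its four children.
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

tris = [((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))]
for _ in range(3):
    tris = subdivide(tris)
print(len(tris))  # 64 triangles from a single input triangle
```

    A real tessellator does this in fixed-function hardware (with fractional, non-uniform factors and a displacement step in the domain shader), but the amplification math is the same flavor.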
  • pattycake0147 - Wednesday, June 03, 2009 - link

    Does anybody know if the next round of GPUs, such as Evergreen, will support bitstreaming of the TrueHD and DTS-HD Master Audio formats over HDMI?
  • Drazick - Wednesday, June 03, 2009 - link

    I hope ATI will optimize the chip as a general-purpose GPU.
    Easier to program, better optimized for OpenCL.

    Any info on that?
  • Natfly - Wednesday, June 03, 2009 - link

    The question is, how is TSMC's 40nm process coming along? The yield for the HD 4770 seems so poor that no retailer seems able to keep them in stock. Are they really going to be able to mass-produce mainstream 40nm products any time soon?
  • doncerdo1 - Wednesday, June 03, 2009 - link

    As always, Wilson shows how little he knows about GPUs and proves he is only good at benching tons of hardware (in other words, his opinion means crap).

    Mr. Wilson, check out AnandTech (yes, the site you work at) to see how long tessellation has been available on ATI's hardware, in pre-DX9 cards. But since you have never researched anything, you simply regurgitate what AMD sells you. Sad and pathetic.
  • AnnonymousCoward - Thursday, June 04, 2009 - link

    > "I love AT but seriously guys like this bring the content quality down."

    doncerdo1, guys like YOU bring the quality down. Fine if you want to question the R600 thing, but why be a complete jerk in the process?
  • DerekWilson - Wednesday, June 03, 2009 - link

    TruForm was horrible.

    It was tacked-on tessellation that users could enable through the driver, and it caused very bad-looking artifacts. Developers could modify their games so TruForm didn't look horrible (which it did most of the time), but they had no real, useful, workable control over the tessellation and had to design models in a restricted way to make it work right.

    ATI always tries to add TruForm to their list of reasons they know tessellation, but if I were them I would certainly want to distance myself from that technological horror.

    Developers have had the option of building their games to include sane, good, and appropriate tessellation with good control over what and how something is tessellated since the R600. Despite the fact that it was not available in DX and developers did not do it as a rule, it was possible to implement tessellation in a game on R600 and later hardware.

    I stand by my statement. Anyone could put a geometry amplifier in their hardware, close their eyes and hope it works out for the best. Truform did a great job of showing that is not a good idea.

    I am sorry about the grammar errors, though. I hope I've fixed all the errors by now.
  • Goty - Wednesday, June 03, 2009 - link

    TruForm (implemented since the Radeon 8500 days) is tessellation.

    Nice try, now go troll somewhere else, please.

  • doncerdo1 - Wednesday, June 03, 2009 - link

    Your answer misses the point. My whole point was precisely ATI's TruForm (available, as you said, since the R200), showing that Wilson's "tessellation has been an option since R600" statement is false (as is most of the crap he writes, basically because he never cares to check facts). So seriously, dude, WTF with your answer?

    I love Anandtech but seriously Wilson makes this site look really bad.
  • TA152H - Wednesday, June 03, 2009 - link

    The grammar in this article is horrible too. It was really poorly thrown together, and just plain bad reading. I resent it when people write articles and do not put forth the effort to write them properly. For something this small, it's inexcusable. I am sure they realize how many people read these articles. Why write them to be sub-literate like this? He's surely able to do better, so it's a lack of effort. It's not good.
  • samspqr - Wednesday, June 03, 2009 - link

    a) relax, people!

    b) I didn't think the article was written any worse than the average internet text; the bar is so low, I just got used to it: unless it's truly impossible, my brain transparently translates everything to a language it can understand

    c) in any case, you have to understand that he's (most probably) at computex, so it's either rush the story in a hurry, or wait a week and post it when you're back home; I prefer the first option

    d) with respect to tessellation, I think Wilson is referring to the fact that R600 included most of what's going to be DX11's tessellation, because it was supposed to be part of DX10 but got dropped from the spec at the last moment because of NVIDIA pressure; I don't know about TruForm, but I'd be quite surprised if R200 were compatible with DX11's tightly defined tessellation
  • SilentSin - Wednesday, June 03, 2009 - link

    In response to d), not even the R7xx implementations of tessellation will be compatible with the newer DX11 spec because of slight differences. I've read about it in various articles where AMD responds to changes with DX11, but I can't recall exactly where atm.

    It might have been good proof of concept testing for AMD to put in their chips, but I wouldn't hold my breath hoping that game devs will branch their code paths just to make tessellation work with AMD's DX10 hardware no matter how close it is to DX11. AMD might try to get that kind of backwards functionality via drivers, just as some DX10.1 features (mostly AA stuff) have been made possible on NV hardware with driver revisions, but I don't think it's likely.

    I think you're right in saying the only real reason we see this on the 2000-4000 series is because of how DX10 was supposed to be. They didn't have a real reason to take it out and it probably wasn't taking up much room so they just left it in for R7xx. I guess it also gave PC devs the option of having that feature if they were porting an X360 game, but I don't think that has ever been used.

    If you look at the actual slides that AMD showed at this conference (see or google), you will see they put TruForm in the same evolutionary line as the more modern implementations of tessellation. They also give two real-world usage examples which are, lo and behold, both from X360 games. They mention Mass Effect; I'm not sure if the tessellation carried over to the PC for ATI hardware, but I have a feeling it didn't.

    At any rate, with DX11 tessellation should start to take off. AMD can at least say they've been around that block once or twice, but with something like this I'm not too sure that will give them much of a leg up on NVidia once they get the ball rolling too.
  • bobvodka - Wednesday, June 03, 2009 - link

    The R600+ tessellation hasn't shown up simply because until early this year there has been NO way to program it.

    DX9 doesn't have support, DX10 doesn't have support, and it was only recently that the OpenGL implementation started exposing the extension string to get at it. And given that, game-wise, OpenGL is basically dead on the PC (I say this as an ex-OpenGL developer; id are the only company of note to use it, and even WoW ships with D3D enabled by default on Windows), no one has bothered to deal with it.

    As for DX11 itself, it's important for a number of reasons beyond the extra hardware:
    - multi-threading is MUCH easier: while the final draw still has to happen on the main thread, other threads can submit resources and build 'display lists' to be quickly executed later
    - feature levels: DX9's cap bits without the explosion... IIRC this will even allow the DX11 API to be used with DX9 hardware

    There are a few more, but it's late and my memory has escaped me on the matter.
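    The multi-threading point is easiest to see as a pattern rather than API calls. Here's a rough pure-Python stand-in (this is an illustration of the record-in-parallel/execute-on-one-thread model, not the real D3D11 deferred-context API; all names are made up):

```python
# Worker threads record independent command lists in parallel;
# only the main thread "executes" (submits) them, in a fixed order.
import threading

def record_commands(scene_chunk):
    # Workers only build lists; they never touch the "device".
    return ["draw " + obj for obj in scene_chunk]

chunks = [["terrain", "water"], ["characters"], ["particles", "ui"]]
command_lists = [None] * len(chunks)

def worker(i):
    command_lists[i] = record_commands(chunks[i])

threads = [threading.Thread(target=worker, args=(i,))
           for i in range(len(chunks))]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Main thread flattens the recorded lists in submission order, so the
# frame is deterministic regardless of thread scheduling.
frame = [cmd for cl in command_lists for cmd in cl]
print(frame)
```

    The expensive part (walking the scene and encoding state changes) scales across cores, while the single point of submission keeps draw order deterministic, which is the same trade-off the DX11 design makes.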
  • doncerdo1 - Wednesday, June 03, 2009 - link

    a) I am ;)

    b) That's what I don't like about AT, part 1: the whole idea of this site was the QUALITY of its content... maybe the internet is like this, but AT was one of the few bastions free of mediocrity

    c) Does being rushed justify a piss-poor reporting job?

    d) Inferring what he might have meant is not the best way to go. In the end, you end up giving a guy like Mr. Wilson credibility he doesn't deserve... seriously, he's pretty bad whenever he states an opinion. At most he should be a tester, and let someone else do the writing.

    In the end I insist, I love AT but seriously guys like this bring the content quality down. Besides AT has a big influence and is very well respected as a professional source of information. Why should mediocrity be justified?
  • Viditor - Thursday, June 04, 2009 - link

    "The whole idea of this site was the QUALITY of its content"

    And the idea of these comments is to give usable feedback, not rude tirades. Seriously, I don't know if you had a valid point or not, because your comments were so nasty that to me they were unreadable.

    You should attempt constructive criticism on specifics if you really want to accomplish anything more than just venting.

    Specifically, lines like:
    "As always, Wilson shows how little he knows about GPUs and proves he is only good at benching tons of hardware"

    What good does that do except to show that you can be an asshole?
  • sbuckler - Wednesday, June 03, 2009 - link

    d) I am sure Mr. Wilson knows perfectly well what TruForm is. Most people who've been around graphics cards for a few years know about it, mainly because ATI fanboys banged on about how great it was for years.

    Hence all you do is sound like a bitter fanboy upset that the rest of the world never embraced TruForm.

    The point I thought he was trying to make in the article is that in DX11, tessellation is now part of the standard. Be happy that it might therefore actually start to get used, and stop nitpicking about the past.
  • Goty - Wednesday, June 03, 2009 - link

    I attribute any weirdness of my answer to the sentence structure in your first post. =P
  • MadBoris - Wednesday, June 03, 2009 - link

    DX9 is still here to stay and will be more popular until the next console refresh.

    Unfortunately consoles drive the state of hardware adoption and game integration. DX11 still won't be really utilized or leveraged until the next gen console comes out.

    Glad to see technology still moving forward but it must be hard to produce technology and hardware knowing it's only a building block towards something else in the future when the next gen consoles set a new hardware standard.

    New GPUs need to provide considerably more performance along with enhanced features (not DX11 features that devs won't use) before I'll consider a new GPU.
    My 8800 still plays all multiplatform games fine, so finding a good justification for upgrading is still difficult.

    I wonder how long it will take until these GPU companies realize they can't spit out hardware like this without losing money when developers are now tuned to console product cycles.
  • Griswold - Wednesday, June 03, 2009 - link

    Hello clueless!
  • iwodo - Wednesday, June 03, 2009 - link

    Ah, I forgot to mention how NVIDIA did so well while ATI was split into three groups designing chips for the Xbox 360, the Wii, and the PC. After everything was finished, ATI seems to have hit back very hard. Any thoughts?
  • seriouscat - Wednesday, June 03, 2009 - link

    After all the promises of Cinema 2.0, and then having NONE of the demos released to the public, I'll sit back and wait on this one.
  • JAKra - Wednesday, June 03, 2009 - link

    The core looks so symmetrical, like a dual-core CPU.
    Anyway, it looks way different than RV770.
    180mm^2 for a performance part? I'm not used to this... :D
    Eager to see the performance. :)
  • iwodo - Wednesday, June 03, 2009 - link

    I forgot NVIDIA as well, since my beloved Apple is in bed with NVIDIA. I seriously hope NVIDIA has something good to counter it. On paper, Greenland looks like it will offer some very good performance/watt/price...
  • iwodo - Wednesday, June 03, 2009 - link

    Looks like it will be MUCH more profitable than RV770. The yield problems mentioned above should be improved by the time they ship the first batch.

    However, I have doubts about the performance improvement, considering the extra transistors are needed for DirectX Compute and DirectX 11 features. Apart from clock speed, I don't see any speed improvement over RV770. (I hope I am wrong.)

    I hope we get UVD improvements as well.
  • plonk420 - Wednesday, June 03, 2009 - link

    I'm more interested in GPGPU/DX11/OpenCL capability... and just a WEEEE bit of gaming ;)
  • Dante80 - Wednesday, June 03, 2009 - link

    So, from the looks of it (I assume we are seeing RV870 and not RV840 there), RV870 would be a tweaked-for-DX11 RV790 at 40nm, without an increase in shader count.

    If AMD again goes for the best value/cost in the performance segment (employing the same binning scalability the RV770 strategy used), they will have the same competitive products in the $200-300 range, with better margins per chip (but lower yields are implied, at least atm, due to TSMC's difficulties with the process).

    Can that be enough to counter the Nvidia loch ness monster?...

    Also, will it blend?...XD

    ps: First post in AT :)
  • samspqr - Wednesday, June 03, 2009 - link

    this new chip should have around 13% more transistors than RV790

    if what Charlie Demerjian says is true, that most of the DX11 features were already implemented in old ATI chips, there could be some resource increases (more shaders)

    yet, if that's not to come, and clock rates are not improved by much either because of TSMC's problems, it'd be hard to try to sell this stuff for $300
  • psychobriggsy - Wednesday, June 03, 2009 - link

    Maybe ATI is going for clock speed increases over shader increases.

    This Evergreen might have, say, 960 shaders as opposed to the 800 in RV790 (I'm sure someone can do a 40nm die analysis to work out what percentage of the RV740 die its 640 shaders occupy, then work out how many there would be on the bigger RV870 die, assuming the non-shader stuff stays pretty much the same).

    But maybe the shader clocks are higher (as with NVIDIA's offerings), so AMD can offer good performance with small (and hence cheap) die sizes.
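    That die-analysis estimate is easy to run as arithmetic. Every input here is an assumption for illustration, not a measured figure: RV740 taken as a ~137 mm^2 40nm die with 640 shaders, the shader array guessed at ~40% of the die, and the new chip taken as ~180 mm^2 as mentioned above.

```python
# Back-of-the-envelope shader-count estimate: size the non-shader
# logic from a known 40nm die, hold it fixed, and spend the rest of
# the bigger die on shader ALUs at the same area per shader.
rv740_die_mm2 = 137.0          # assumed RV740 die size
rv740_shaders = 640
shader_area_fraction = 0.40    # pure guess at shader share of the die
new_die_mm2 = 180.0            # assumed size of the chip pictured

shader_mm2 = rv740_die_mm2 * shader_area_fraction
non_shader_mm2 = rv740_die_mm2 - shader_mm2   # assumed constant
mm2_per_shader = shader_mm2 / rv740_shaders

shader_budget_mm2 = new_die_mm2 - non_shader_mm2
estimate = int(shader_budget_mm2 / mm2_per_shader)
print(estimate)
```

    With these (rough) inputs the estimate lands around 1100-1200 ALUs, which mostly shows how sensitive the answer is to the guessed shader-area fraction; nudge it and the result swings by hundreds.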
  • jessicafae - Tuesday, June 16, 2009 - link

    It does look more and more like a 40nm, DX11 RV790. Even in the DX11/Evergreen video demo that ATI did, the speaker slipped and mentioned "800 giga processors" at 34 seconds in.

    And about a year ago ATI/AMD was talking about how the next-generation chips were going to be MCMs (multi-chip modules), like the first Intel quad cores (2 dies in a single chip carrier).
    It would be very easy to put two of these Evergreen GPUs into an MCM chip carrier and have a 1600-shader, 80-texture, 32-ROP "chip". Using the MCM approach would give better yields than building a 2-billion-transistor monster chip to do the same thing. The extra transistors in Evergreen may be a type of interconnect/bridge to better enable the MCM, and maybe some extra DX11 logic.

    At least, this is my guess based on what was said.
