Depression Sets in but the Team Goes On

The entire RV770 design took around three years, which means that while we were beating ATI up over the failure that was R600, those very engineers had to go into work and be positive about RV770. And that was tough to do; after all, ATI had just completely lost the crown with R600, and Carrell, Rick Bergman and others were asking the team to ignore what had happened with R600, ignore the fact that they had lost the halo, and try to build a GPU aimed at a lower market segment.

Through all of my interviews, the one thing that kept coming up was how impressed ATI was with the 770 team. Never once did the team fall apart; despite disagreements and a shaky direction, the team powered through.

The decision not to go for the king-of-the-hill part made a lot of sense within ATI, but there was so much history about what would happen if you didn't get the halo part. It took very strong discipline to cast that history aside and do what the leads felt was right, but the team did it without question.

The discipline required wasn't just to ignore history, but also to fight the natural tendency for chips to grow without limits during their design phase. What ATI achieved with RV770 reminded me a lot of Intel's Atom design team: each member of that team had strict limits on how big their blocks could be, and those limits didn't waver.

Adversity tends to bring out the best in people. The best stories I've been told in this industry are about teams like these: the Intel folks who made Banias and the ATIers responsible for RV770 put their hearts and souls into their work, despite being beaten down. Passion has a funny way of being a person's strongest ally.

The Power Paradigm

We were all guilty of partaking in the free lunch. Intel designed nearly five years' worth of processors without any real concern for power consumption, and the GPU guys were no different.

In the R300 and R420 days ATI was almost entirely ignoring power; estimates of how much power the parts would use were so far off from the final product that the designers just didn't care. It was such a non-issue in those days that ATI didn't even have a good way to estimate power had it wanted to, so it was impossible to design for a specific TDP. Today ATI's tools are a lot better: targeting a specific TDP is now no different from aiming for a specific clock speed or die size. It's just another variable that can be controlled.
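To see what "TDP as just another design variable" means in practice, here is a minimal sketch using the classic switching-power model. The model itself (P = a·C·V²·f) is standard, but the capacitance, voltage, and activity numbers are made up for illustration and have nothing to do with ATI's actual tooling:

```python
def dynamic_power(c_eff_farads, v_volts, f_hz, activity=0.2):
    """Classic switching-power estimate: P = a * C * V^2 * f."""
    return activity * c_eff_farads * v_volts**2 * f_hz

def max_clock_for_tdp(tdp_watts, c_eff_farads, v_volts, activity=0.2):
    """Invert the model: the highest clock that fits a given power budget."""
    return tdp_watts / (activity * c_eff_farads * v_volts**2)

# Hypothetical GPU: 500 nF effective switched capacitance, 1.2 V core.
budget = 110.0  # watts
f_max = max_clock_for_tdp(budget, 500e-9, 1.2)
print(f"Max clock under {budget:.0f} W budget: {f_max / 1e6:.0f} MHz")
```

Once a model like this is trustworthy, the power budget can be solved for clock speed (or vice versa) just like any other constraint, which is the shift the paragraph above describes.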

These days power budgets don't change much; the thermal envelopes carved out over the past couple of years are pretty much stationary (ever wonder why high-end CPUs always fall around 130W?). Everyone designs up to their power envelope and stays there. What matters now is increasing performance every year or two while staying within the same power budget. Our processors, both CPUs and GPUs, are getting more athletic, rather than just putting on pounds to be able to lift more weight.

One of the more interesting things about architecting for power is that simply moving data around these ~1 billion transistor chips takes up a lot of power. Carrell told me that by the time ATI is at 45nm and 32nm, it will take as much power to move the data to the FPU as it does to do the multiply.
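Carrell's point can be sanity-checked with a back-of-envelope calculation. The per-operation energy figures below are assumptions chosen purely for illustration (the article gives no real numbers), but they show the shape of the problem: wire energy scales with distance, while the multiply costs the same wherever the data came from:

```python
# Assumed, illustrative energy figures; not from ATI.
fma_energy_pj = 20.0          # energy for one 32-bit multiply-add
wire_energy_pj_per_mm = 0.2   # energy to move a 32-bit word 1 mm on-chip

for distance_mm in (1, 10, 100):
    move_pj = wire_energy_pj_per_mm * distance_mm
    print(f"move {distance_mm:3d} mm: {move_pj:6.1f} pJ "
          f"({move_pj / fma_energy_pj:.2f}x the multiply)")
```

Under these assumed numbers, shuttling an operand across the full width of a large die costs as much as the math itself, which is exactly the crossover Carrell describes.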

Given that data movement is an increasingly power-hungry task, a big focus going forward is going to be keeping data local whenever possible, in nearby registers and on-chip caches, and minimizing how far it has to travel. We may see more local register files and more multi-tiered memory hierarchies; as chips get more complex, keeping the register file in one central location becomes a problem.

ATI admitted to making a key manufacturing mistake with R600. The transistor technology selected for R600 was performance focused, designed to reach high clock speeds, and it yielded a part that didn't have good performance per watt - something we noticed in our review. ATI has since refocused somewhat away from the bleeding edge and now opts for more power efficiency within a given transistor node. With leakage a growing problem as transistors get smaller, it's not worth being super leaky to gain a few picoseconds. If you've got a 100W GPU, do you want to waste 40W of that budget on leakage? Or would you rather do 80W of real work and only waste 20W? It's the same realization Intel came to during the Pentium 4's tenure, and it's the mentality that gave us the Core microarchitecture. It's an approach that just makes sense.
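The arithmetic behind that 100W example is worth spelling out, since the power budget is fixed either way; leakage comes straight out of the work you can do. A trivial sketch:

```python
# Same fixed budget, split two ways between leakage (wasted)
# and switching power (real work), per the 100 W example above.
budget_w = 100.0

for leakage_w in (40.0, 20.0):
    useful_w = budget_w - leakage_w
    print(f"leakage {leakage_w:.0f} W -> {useful_w:.0f} W of real work "
          f"({useful_w / budget_w:.0%} of budget)")
```

Halving leakage from 40W to 20W buys a third more useful work (60W to 80W) without touching the cooler, the board, or the power delivery, which is why trading a few picoseconds of switching speed for lower leakage pays off.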


  • MrSpadge - Saturday, December 6, 2008 - link

    Exactly what I was thinking! That's why I got an 8500LE back then, when Geforce 4 was not in (public) sight yet.
  • FireSnake - Wednesday, December 3, 2008 - link

    ... which one is Anand (on the picture at the beginning of the article)?

    I always wondered what he looks like ... I guess the one on the right.
  • 3DoubleD - Wednesday, December 3, 2008 - link

    I've had Anandtech as my home page for 5 years and I've read almost every article since (and even some of the older ones). This is by far one of your greatest works!

    Thanks
  • hellstrider - Wednesday, December 3, 2008 - link

    Kudos to Anand for such a great article, extremely insightful. I may even go out and purchase AMD stock now :)

    I love AMD even when it’s on the bottom, I own 780G + X2 + hd4850, in hopes that Deneb (or AM3 processors for that matter) will come in time to repeat the success of rv770 launch, at which point I will upgrade my obsolete X2 and have a sweet midrange machine.

    My only concern is that Nvidia is looking at all this smirking and planning an onslaught with the 55nm refresh. There is a very “disturbing” article at Xbitlabs that Nvidia is stock-piling the 55nm GT200 parts; seems like that’s something they would do – start selling those soon and undercut 4800 series badly.
    I’m just a concerned hd4850 owner and I don’t want to see my card obsolete within a couple of months. I don’t really see AMD’s answer to 55nm GT200 in such a short period of time?!?!

    Any thoughts?
  • Goty - Wednesday, December 3, 2008 - link

    I don't think you'll have to worry too badly about the 55nm G200s. NVIDIA won't drop prices much, if at all; they're already smarting from the price drops enacted after the RV770 launch. There's also the fact that the 4850 isn't in the same market space as any of the G200 cards, so they're not really competitive anyhow.
  • ltcommanderdata - Wednesday, December 3, 2008 - link

    I always imagined designing GPUs would be very stressful given you're trying to guess things years in advance, but this inside look at how things are done was very informative.

    On GDDR5, it's interesting to read that ATI was pushing so hard for this technology and they felt it was their only hope for the RV770. What about GDDR4? I thought ATI was a big supporter of it too and was the first to implement it. I'm pretty sure Samsung announced GDDR4 that could run at 3.2GBit/s in 2006 which isn't far from the 3.6GBit/s GDDR5 used in the 4870, and 4GBit/s GDDR4 was available in 2007. I guess there are still power savings to be had from GDDR5, but performance-wise I don't think it would have been a huge loss if GDDR5 had been delayed and ATI had to stick with GDDR4.

    And another interesting point in your article was definitely about the fate of the 4850. You report that ATI felt that the 4870 was perfectly specced and wasn't changed. I guess that meant they were always targeting the 750MHz core frequency that it launched with. Yet ATI was originally targeting the 4850 at 500MHz clock. With the 4870 being clocked 50% faster, I think it should be obvious to anyone just looking at the clock speed that there would be a huge performance gap between the 4850 and 4870. I believe the X1800XL and X1800XT had a similarly large performance gap. Thankfully Dave Baumann convinced them to clock the 4850 up to a more reasonable 625MHz core.

    One thing that I feel was missing from the article was how the AMD acquisition affected the design of the RV770. Perhaps there wasn't much change or the design was already set so AMD couldn't have changed things even if they wanted to, but they must have had an opinion. AMD was probably nervous that they bought ATI at its height when the R580 was out and on top, but once acquired, the R600 came out and underperformed. Would be interesting to know what AMD's initial opinion of ATI's small-die, non-top-tier strategy was, although it now seems to be more consistent with AMD's CPU strategy since they aren't targeting the high-end there anymore either.
  • hooflung - Wednesday, December 3, 2008 - link

    The final frontier market share wise is to steal a major vendor like eVGA. If they can get an eVGA, BFG or XFX to just sell boards with their warranties AMD would be really dominant.
  • JonnyDough - Wednesday, December 3, 2008 - link

    The best thing I've ever read on a tech site. This is why you're better than THG.

    Only one typo! It was a "to" when it should have been a "too."

    Chalk one up for the red team. This makes my appreciation for AMD rise even more. Anyone willing to disclose internal perspectives about the market like this is a team with less secrecy that I will support with my hard earned cash. So many companies could stand up and take a lesson here from this (i.e. Apple, MS).

    Keep articles like this coming, and I'll keep coming back for more.

    Sincerely,

    ~Ryan
  • epyon96 - Wednesday, December 3, 2008 - link

    I have been an avid reader of this site for close to 8 years. I used to read almost every CPU, GPU and novelty gadget articles page to page. But over the years, my patience is much lower and I realize I get just as much enjoyment and information from just reading the first page and last page and skimming a few benchmarks.

    However, this is the first article in a while that I spent reading all of it and I thoroughly enjoyed it. These little back stories with a human element in one of the most interesting recent launches provides a refreshing change from boring benchmark-oriented articles.

    I hope to find an article based on Nehalem of a similar nature and other Intel launches.

  • GFC - Wednesday, December 3, 2008 - link

    Wow, all I can say is that I loved this review. It was really enjoyable to read, and I must give my thanks to Anandtech and Carrell!
