Adjusting Trajectory & Slipping Schedule

Carrell didn’t believe in building big chips anymore. It wasn’t that it was too difficult; it was that it took too long for a $600 GPU to turn into a $200 GPU. AMD believed that the most important market was the larger (in both volume and revenue) performance mainstream segment.

Rather than making the $200 - $300 market wait for new technology, Carrell wanted to deliver it there first and then scale up/down to later address more expensive/cheaper markets.

The risk in RV770 was architecture and memory technology. The risk in RV870 was architecture and manufacturing process, the latter of which was completely out of AMD’s control.

Early on, Carrell believed that TSMC’s 40nm wasn’t mature enough, and that when it was ready, its cost was going to be much higher than expected. While he didn’t elaborate on this at the time, Carrell told me that there was a lot of information tuning that made TSMC’s 40nm look cheaper than it ended up being. I'll touch on this more later in the article.

Carrell reluctantly went along with the desire to build a 400+ mm2 RV870 because he believed that when engineering woke up and realized that this wasn’t going to be cheap, they’d be having another discussion.
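Carrell's intuition about big dies has a simple arithmetic behind it: as die area grows, you get fewer candidate dies per wafer *and* each die is more likely to catch a fatal defect, so cost per good die climbs much faster than area. The sketch below illustrates this with the standard dies-per-wafer approximation and a Poisson yield model; the wafer price and defect density are purely hypothetical numbers, not figures from AMD or TSMC.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """First-order approximation: wafer area over die area,
    minus a correction for partial dies lost at the wafer edge."""
    r = wafer_diameter_mm / 2
    return (math.pi * r**2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(die_area_mm2, wafer_cost, defect_density_per_mm2):
    """Wafer cost divided by yielded dies, using a Poisson yield model:
    yield = exp(-area * defect_density)."""
    yield_rate = math.exp(-die_area_mm2 * defect_density_per_mm2)
    return wafer_cost / (dies_per_wafer(die_area_mm2) * yield_rate)

# Hypothetical inputs: $5,000 per 40nm wafer, 0.004 defects per mm^2.
big   = cost_per_good_die(400, 5000, 0.004)  # a 400+ mm2 die, as first planned
small = cost_per_good_die(180, 5000, 0.004)  # a smaller, RV770-class die

print(f"400mm2: ${big:.0f}/die  180mm2: ${small:.0f}/die  "
      f"ratio: {big/small:.1f}x")
```

With these (made-up) inputs, roughly doubling the die area multiplies the cost per good die several times over, which is the economic argument for leading with a mid-size chip and scaling up with dual-GPU boards instead.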

In early 2008, going into February, TSMC started dropping hints that ATI might not want to be so aggressive about what it thought 40nm was going to cost. ATI’s cost estimates might have been, at the time, a little optimistic.

Engineering came back and said that RV870 was going to be pretty expensive and suggested looking at the configuration a second time.

Which is exactly what they did.

The team met and stuck with Rick Bergman’s compromise: the GPU had to be at least 2x RV770, but the die size had to come down. ATI changed the configuration for Cypress (high end, single GPU RV870) in March of 2008.

And here’s where the new ATI really showed itself. Here was a company that had decided both to 1) not let schedule slip, and 2) stop designing the biggest GPU possible. Yet in order to preserve the second principle, it had to sacrifice the first.

You have to understand: changing a chip configuration that late in the game, 1.5 years before launch, screws everything up. By the time RV770 came out, RV870 was set in stone; any change even a year before that resets a lot of clocks. You have to go back and redo the floorplan and configuration, and there’s a lot of adjusting that happens. It takes at least a couple of weeks, sometimes a couple of months. It impacted schedule, and ATI had to work extremely hard to minimize that where possible. The Radeon HD 5870 was around 30 - 45 days late because of this change.

Remember ATI’s nothing-messes-with-schedule policy? It took a lot of guts on the part of the engineering team and Rick Bergman to accept a month+ hit on redesigning RV870. If you don’t show up to the fight, you lose by default, and that’s exactly what ATI was risking by agreeing to a redesign of Cypress.

This is also super important to understand, because it implies that at some point, NVIDIA made a conscious decision to be late with Fermi. ATI wasn’t the only one to know when DX11 and Windows 7 were coming. NVIDIA was well aware, and it prioritized features that delayed Fermi rather than aligning with this market bulge. GPUs don’t get delayed without forewarning. AMD risked being late in order to make a smaller chip; NVIDIA risked being late to make a bigger one. These two companies are diverging.


The actual RV870

Engineering was scrambling. RV870 had to be a lot smaller yet still deliver 2x the computational power of RV770. Features had to go.

Comments (132)

  • AdiQue - Sunday, February 14, 2010 - link

    I fully subscribe to the point raised by a few previous posters. Namely, the article being such a worthy read, it actually justifies creating an account for the sheer reason of expressing appreciation for your fantastic work, which stands out in the otherwise well-saturated market of technology blogs.
  • geok1ng - Sunday, February 14, 2010 - link

    "I almost wonder if AMD’s CPU team could learn from the graphics group's execution. I do hope that along with the ATI acquisition came the open mindedness to learn from one another"

    it would be a true concern if based on mere observation, but the hard facts are so much worse: AMD fired tons of ATI personnel, hence ATI drivers are years behind NVIDIA's - we are still begging for centered timings on ATI cards, a feature that NVIDIA has offered for 6 generations! ATI produces cards that are gameless. DirectX 10.1?! There was a single game with DirectX 10.1 support, and NVIDIA made the game developer REMOVE DirectX 10.1 features with a game patch that "increased" performance. DirectX 11?! ATI has to put money into its driver development team and spend TONS of cash on game development.

    I would be a happier customer if the raw performance of my 4870X2 was paired with the seamless driver experience of my previous 8800GT.

    And another game where AMD was too late is the netbook and ultra-low-voltage mobile market. A company with expertise in integrated graphics and HTPC GPUs has ZERO market share in this segment?! Give me a break!
  • LordanSS - Monday, February 15, 2010 - link

    Funny... after the heaps of problems I had with drivers, stability and whatnot with my old 8800GTS (the original one, 320MB), I decided to switch to ATI with a 4870. Don't regret doing that.

    My only gripe with my current 5870 is a driver bug: the stupid giant mouse cursor. The Catalyst 9.12 hotfix got rid of it, but it came back in 10.1... go figure. Other than that, I haven't had problems with it and have been getting great performance.
  • blackbrrd - Monday, February 15, 2010 - link

    I think the reason he had issues with the X2 is that it's a dual card. I think most graphics card driver problems come from dual cards in any configuration (dual, CrossFire, SLI).

    The reason you had issues with the 320MB card is that it had some real issues because of the half-memory. The 320MB cards were originally intended as GTX cards, but were binned as GTS cards that then got binned as 320MB cards instead of 640MB cards. Somehow NVIDIA didn't test these cards well enough.
  • RJohnson - Sunday, February 14, 2010 - link

    Please get back under your bridge troll...
  • Warren21 - Sunday, February 14, 2010 - link

    Are you kidding me? Become informed before you spread FUD like this. I've been able to choose centered timings in my CCC since I've had my 2900 Pro back in fall 2007. Even today on my CrossFire setup you can still use it.

    As for your DX10.1 statement, thank NVIDIA for that. You must remember that THEY are the 600lb gorilla of the graphics industry - I fail to see how the exact instance you cite does anything other than prove just that.

    As for the DX11 statement, if NVIDIA had it today I bet you'd be singing a different tune. The fact that it's here today is because of Microsoft's schedule which both ATI and NVIDIA follow. NV would have liked nothing more than to have Fermi out in 2009, believe that.
  • Kjella - Sunday, February 14, 2010 - link

    "AMD fired tons of ATI personnel, hence ATI drivers are years behind NVIDIA-"

    Wow, you got it backwards. The old ATI drivers sucked horribly, they may not be great now either but whatever AMD did or didn't do the drivers have been getting better, not worse.
  • Scali - Sunday, February 14, 2010 - link

    It's a shame that AMD doesn't have its driver department firing on all cylinders like the hardware department is.
    The 5000-series are still plagued with various annoying bugs, such as the video playback issues you discovered, and the 'gray screen' bug under Windows 7.
    Then there's OpenCL, which still hasn't made it into a release driver (while nVidia has been winning over many developers with Cuda and PhysX in the meantime, while also offering OpenCL support in release drivers, which support a wider set of features than AMD's, and better performance).
    And through the months that I've had my 5770 I've noticed various rendering glitches as well, although most of them seem to have been solved with later driver updates.
    And that's just the Windows side. Linux and OS X aren't doing all that great either. FreeBSD isn't even supported at all.
  • hwhacker - Sunday, February 14, 2010 - link

    I don't log in and comment very often, but had to for this article.

    Anand, these types of articles (RV770, 'RV870', and SSD) are beyond awesome. I hope it continues for Northern Islands and beyond. Everything from the RV870 jar tidbit to the original die spec to the SunSpotting info. It's great that AMD/ATi allows you to report this information, and that you have the journalistic chops to inquire/write about it. Cannot provide enough praise. I hope Kendell and his colleagues (like Henri Richard) continue this awesome 'engineering honesty' PR into the future. The more they share, within understandable reason, the more I believe a person can trust a company and therefore support it.

    I love the little dropped hints BTW. Was R600 supposed to be 65nm, but early TSMC problems caused it to revert to 80nm like was rumored? Was Cypress originally planned as ~1920 shaders (2000?) with a 384-bit bus? Would sideport have helped the scaling issues with Hemlock? I don't know these answers, but the fact all of these things were indirectly addressed (without upsetting AMD) is great to see explored, as it affirms my belief I'm not the only one interested in them. It's great to learn the informed why, not just the unsubstantiated what.

    If I may preemptively pose an inquiry, please ask whomever at AMD, when NI is briefed, whether TSMC canceling their 32nm node and moving straight to 28nm had anything to do with redesigns of that chip. There are rumors it caused them to rethink what the largest chip should be, and perhaps revert back to the original Cypress design (as hinted in this article?) for that chip, causing a delay from Q2-Q3 to Q3-Q4, not unlike the 30-45 day window you mention about redesigning Cypress. I wonder if NI was originally meant to be a straight shrink?
  • hwhacker - Sunday, February 14, 2010 - link

    I meant Carrell above. Not quite sure why I wrote Kendell.
