Dave Baumann Saves the Radeon HD 4850

ATI had this habit of finding good reviewers and bringing them on staff. Our first Graphics Editor, Matthew Witheiler, went to work for ATI after graduating from Duke. He was with AnandTech for a good three years before ATI snagged him; he ended up being ATI’s youngest Product Manager (congrats on the engagement, Matthew). Another prominent reviewer ATI grabbed was Dave Baumann of Beyond3D fame, brought on to do technical marketing.

One of Baumann’s strong points was his ability to analyze the competitive landscape, given that’s exactly what he did at Beyond3D before joining ATI. One of Dave’s first major tasks at ATI was to compare R600 to G80 internally, which wasn’t exactly the best job to have in November of 2006. Obviously, G80 had a significant impact on RV770. While the architecture was set in stone, clock speeds, board layout and memory sizes were all variable until early 2008.

Initially, RV770 was targeted at 1.5x the performance of R600, which, looking back, would not have been enough. Over the next 1.5 years that target grew to 2x R600 and finally settled at 2.5x the speed of R600, at a price in the $200 - $300 range.

Dave became a product manager on RV770 by February 2008, which was a big deal given that he hadn’t been with ATI that long and this was a very important product. RV670 had seen ATI return to competitiveness the year prior, but RV770 needed to put ATI back on top.

When Dave took the 770 under his wing, a lot of the product had already been mapped out; the chip was back from the fab, and at this point ATI’s engineering team wasn’t ready or eager to make any changes. The RV770 XT (the internal name for the Radeon HD 4870) sat well with Mr. Baumann; in his words, “the specifications were perfect”. There was a late change to the 4870 that gave it its second PCIe power connector, but that’s it. The arguably more important version, the RV770 Pro that would become the Radeon HD 4850, concerned him - it was a bit under-spec’d.

Here’s a quick put-yourself-in-ATI’s-shoes test. Your engineering team has spent the past three years on a product that may fail miserably because it’s a radical departure from how you’ve designed GPUs in the past. Your last major GPU architecture launch failed miserably (R600), and the last refresh (RV670) did OK but still didn’t really snag much mindshare from NVIDIA. You’ve just finished this radical new design, and this young new PM with an accent comes in three months before you’re supposed to enter production and tells you that you need to make changes. It was a ballsy move by Baumann, but he wasn’t interested in saving face; he was trying to help his team win. The engineers could’ve just as easily cast him aside, but they listened and they worked - oh, did they work. The final stretch is rarely the quickest or the easiest, and that was very true of RV770.

The Radeon HD 4850 was originally a 256MB card with a 500MHz core clock and 900MHz memory clock. Dave insisted that the card needed 512MB of GDDR3 and a 625MHz core / 993MHz memory clock. And it’s not just that he insisted; it’s that he convinced the engineers to make such a late change. Dave took the engineers through his reasoning for where ATI needed to be in the competitive landscape, and by the end of the discussion he didn’t need to persuade them anymore - the board and ASIC teams were championing the changes.

Had it not been for these modifications, the 4850 would not have put as much pressure on NVIDIA’s GeForce 9800 GTX, and the 9800 GTX’s pricing wouldn’t have needed to fall so quickly.

Thanks, Dave.

Comments

  • Sahrin - Monday, January 25, 2010

    Anand,

    I love this piece. Not sure if you'll get notified, but while doing some research on the performance of Hybrid Crossfire, I came back - it was interesting to see the tone of the piece, and to hear the guys at ATI talking vaguely about what would become the 5870. Fascinating stuff. I've got to put a bookmark in my calendar to remind me to come back to this next year when RV970 is released (pending no further difficulties).

    Any chance of a follow-up piece with the guys in SC?
  • caldran - Wednesday, December 24, 2008

    The GPU industry is squeezing in more and more transistors (SPs or whatever). It would be more energy efficient if the GPU could disable some of its cores under light load, rather than just reducing the clock frequency and dropping into 2D mode - just like the latest AMD processors do. An HD 4350 would consume less power than an HD 4850 at idle, right?
  • bupkus - Wednesday, December 10, 2008

    I couldn't put it down until I had finished.

    Extremely enjoyable write-up!
  • yacoub - Tuesday, December 9, 2008

    "that R580 would be similar in vain"

    You want vein. Not vain, not vane. Vein. =)
  • CEO Ballmer - Sunday, December 7, 2008

    You people don't mention their alliance with MS!

    http://fakesteveballmer.blogspot.com
  • BoFox - Sunday, December 7, 2008

    LOL!!!!!
  • BoFox - Sunday, December 7, 2008

    Great article--a nice read!

    However...

    From how I remember history:

    In 2006, when the legendary X1900XTX took the world by surprise, actually beating the scarce and coveted 7800GTX-512, I bought it. It was king of the hill from January 2006 until the 7950GX2 stole the crown back for the fastest "single-slot" solution about 6 months later, around June 2006, only a few months after the smaller 90nm 7900GTX was *finally* released in April 2006. Everybody started hailing Nvidia again, although it was really an SLI dual-GPU solution sandwiched into one PCI-E slot. Perhaps it was the quad-GPU thingy that sounded so cool. It was obviously over-hyped but really took the attention away from ATI.

    GDDR4 on the X1950XTX hardly did any good, since it was a bit late (Sept 2006) with only something like a 3-4% performance increase over the X1900. Well, then the 8800GTX came in Nov 2006 and had a similar impact to the one the 9700Pro had.

    Everybody wanted to see how the R600 would do, but it was delayed, and it disappointed hugely at its May 2007 launch. The 8800GTX/Ultra kept on selling for around $600 for nearly 12 months straight, making history. 80nm just did not cut it for the R600, so ATI wanted to take its dual-GPU, single-card REVENGE against Nvidia. And it would be even better this time since it was done on a single PCB, not a sandwiched solution like Nvidia's 7900GX2. Hence the tiny RV670 chips made on the unexpected 55nm process! The 3870X2 did beat the 8800GTX in most reviews, but it had to use Crossfire, with the same caveats as SLI. Also, the 3870X2 only used GDDR3, unlike the single 3870 with its fast GDDR4.

    But Nvidia still took the attention away from the 3870 series by tossing an 8800GT up for grabs. When the 3870X2 came out in Jan 2008, Nvidia touted its upcoming 9800GX2 (to be released one month afterwards). So, Nvidia stopped ATI with an ace up its sleeve.

    Round 2 for ATI's revenge: the 4870X2. And it worked this time! There was no way that Nvidia could expect the 4870 to be *that much* better than the 3870. Everybody was saying the 4870 would be 50% faster, and Nvidia yawned at that, thinking that the 4870 still couldn't touch the 9800GTX, or the 9800GX2 when Crossfired. Plus, Nvidia expected the 4870 to still have the "AA bug", since the 3870 did not fix it from the 2900XT and the 4870 had a similar architecture. Boy, was Nvidia ever wrong there! The 4870 actually ended up being *50%* faster than the 9800GTX in some games.

    So, now ATI has gotten its vengeance with the same kind of single-slot dual-GPU solution that Nvidia had with its 7900GX2 and 9800GX2 a while ago. With the 4870X2 destroying the GTX 280, ATI does indeed have its crown, or "halo".

    Unfortunately, quad-Crossfire hardly does well against GTX 280s in SLI. We now know that quad-GPU solutions give a far lower "bang per GPU" due to poor driver optimizations, etc. So most enthusiast gamers with the money and a 2560x1600 monitor are running two GTX 280s right now instead of two 4870X2s... oh well!

    One thing not mentioned about GDDR5 is that it eats power like mad! The memory alone consumes 40W, even at idle, and that is one of the reasons why the 4870 does not idle so well. If ATI clocks the memory down low enough, it messes up the Aero graphics in Vista. It would have been nice if ATI had released an intermediate 4860 version with GDDR4 memory at 2800+MHz effective.

    Now, I cannot even begin to guess what the RV870 will be like. I think Nvidia is going to really want its own revenge this time around, having been hurt financially by the whole 9800 - GTX 200 range, plus being unable to release a 55nm version of G200 to this day. Nvidia just cannot beat the 4870X2 with a dual G200 on 55nm, and this is the reason for the re-spins (delays) in an attempt to reduce power consumption while maintaining the necessary clock speed. Pardon me for pointing out the obvious...

    Hope my mini-article was a nice supplement to the main article! :)
  • CarrellK - Sunday, December 7, 2008

    Not bad at all.

    BTW, 55nm has less to do with how good the RV770 is than the re-architecture & re-design our engineers did post-RV670.

    To illustrate, scale the RV770 from 55nm to 65nm (only core scales, not pads & analog) and see how big it is. Now compare that to anything else in 65nm.

    Pretty darned good engineers, I'd say.

  • BoFox - Sunday, December 7, 2008

    True, and nowhere in the article was it pointed out that since the AA algorithm relied on the shaders, simply upping the shader count from 320 to a whopping 800 completely solved the weak AA performance that plagued the 2900s and 3870s. It did not cost too much die size or power consumption, either. ATI certainly did design the R600 with the future in mind (by moving AA to the shader units, with future expansion in view). Now the 4870 does amazingly well with 8x FSAA, even beating the GTX 280 in some games.

    I wanted to edit my above post by adding that a dual G200 would need low enough power consumption that it could still be cooled effectively in a single-slot sandwich cooling solution. The 4870X2 has a dual-slot cooler, but Nvidia just cannot engineer the G200 onto a single PCB with the architecture they are currently using (a monster chip die size, and 16 memory chips to scale from 448-bit to 512-bit bandwidth, instead of 8 memory chips delivering the same bandwidth). That is why Nvidia must make the move to GDDR5 memory, or else re-design the memory architecture to a greater degree. Just my thoughts... I still have no idea what we'll be seeing in 2009!

  • papapapapapapapababy - Saturday, December 6, 2008

    More like uber mega retards, right? If they are so smart... why do they keep making such terrible, horrible, shitty drivers?

    why?

    I really, really, really want to buy a 4850, I really do. But I'm not going to do it. I'm going to go and buy the 9800GT. And I know it's just a rebranded 8800GT. And I know Nvidia is making shitty, explosive hardware (my 8600GT just died). And I know that GPU is slower, older, overclocked 65nm tech. And that Nvidia is pushing gimmicky tricks ("physics") and buying devs. But guess what? NVIDIA = good, clean drivers. New game? New drivers, right there. Fast. Un-bloated drivers that work. Is it that hard, ATI? Really. Or maybe you guys just suck?

    I'm going to pick the older tech because of that. That's how much I fkn hate your bloated and retarded drivers, ATI. Installing the MS .NET Framework for a broken control panel? Stupid. And what's up with all those unnecessary services eating my memory and CPU cycles? ATI Hotkey Poller? ATI Smart? ATI2EVXX.exe, ATI2EVXX.exe, .NET 2.0? Always there, and the damn thing takes forever to load. Nvidia doesn't use any bloated crap, so why do you feel entitled to pollute my PC with your bloated drivers?

    AGAIN, HORRIBLE DRIVERS, ATI! I DON'T WANT A SINGLE EXTRA SERVICE! I just built a PC for a friend. I chose the HD4670 - beautiful card, really cool, fast, efficient. I love it. I want one for myself. But the drivers? Arg. I ended up installing just the display driver, and still the memory consumption was utterly retarded compared to my Nvidia card.

    So, geniuses: move your asses and fix your drivers.

    Thanks, and good job with your hardware.
