There's rarely a dull moment in computer graphics land, and 2008 has been no exception. The merged ATI/AMD seems to be in a constant state of flux, and NVIDIA hasn't had the best run of luck lately. Even so, we've seen quite a few exciting developments in hardware in recent months.

Until this past summer, AMD hadn't put out a clearly compelling graphics product. Some of the RV6xx hardware wasn't bad, but it didn't offer the complete package. Even though RV770 isn't the fastest single GPU out there, AMD's strategy seems to be working quite well.

The R700 (4870 X2), despite its shortcomings, has proven to be the fastest single-card solution around. If single-card multi-GPU solutions will be AMD's continuing focus, we hope to see fundamentally better multi-chip architectures down the line (once again, we need a shared framebuffer).

But the real key to AMD's recent success has been pricing. They've consistently brought out their products at incredibly aggressive prices. Value has always been our major focus, and having a halo product (the absolute top performer) doesn't go as far as having the best product at any given price point below the top.

We do have to give NVIDIA credit where it's due, though: after seeing the RV770 hit, NVIDIA very quickly and adequately reduced pricing on its GT200 hardware. We certainly appreciated that (good competition is great for the consumer), but it can't have been pleasant to cut so deeply into the margins on a brand new architecture.

Of course, it hasn't been quite enough to stave off the onslaught AMD has thrown at them. While NVIDIA continues to release G9x hardware in new products (essentially rebadging the GeForce 8000 series), AMD recently pulled off something quite impressive. For the first time in years, we have seen a graphics card company complete a top-to-bottom rollout of a new architecture in about three months. Everything from $40 to over $500 from AMD is now built on the RV7xx architecture.

We've been asking for a move like this for a long time. We have speculated that when Intel gets into the game we might actually see a full top-to-bottom launch all at once (as it currently does with CPUs), and that such a move could be a wake-up call to AMD and NVIDIA. It looks like AMD is already compressing its launch time frames, and we absolutely hope it continues down this path (and that NVIDIA gets the memo).

As far as NVIDIA goes, the GTX 260 and GTX 280 are very solid parts. If you want the best of the best, you'll need to pick up three GTX 280 cards (though we have yet to test this, we are sure it'll be the fastest solution around). Of course, this isn't really a wise buy, as your return on investment is limited (you won't get anywhere near 3x the performance for the 3x price tag) and your power bill will skyrocket as well.

From a purely geek perspective, I appreciate the design approach of NVIDIA's GT200. NVIDIA has done a pretty good job minimizing the swing between worst, average, and best case scenarios, while the alternate route AMD took (though less consistent) delivers very good average and best cases, and the potentially very poor worst case doesn't come up often. And that's what engineering is all about: taking your design constraints and hitting a price and performance target.

I've always said I'd much rather be a scientist than an engineer. I don't like the walls that cost and resources keep us bound to, but that doesn't mean they don't exist. And no one would ever sell a product without engineers who can figure out how to package amazing innovations in affordable products.

But I digress.

We still only have two and a half GT200-based parts on the market (the Core 216 variant of the GTX 260 is the odd man out right now, and doesn't quite give NVIDIA what it needs to compete with the 1GB 4870). And these parts aren't really the killer products NVIDIA needs. The loss of the competitive advantage NVIDIA held for so long has really hurt, especially in light of the chip failures on some products, which NVIDIA has pledged $200M to fixing.

These and other realities caused NVIDIA to eliminate 360 jobs a week and a half ago. That isn't an attractive position to be in, but we hope the reorganization will help the company focus on delivering its very well designed products at more realistic prices when the next launch comes around. If NVIDIA can't pull this off in the near term, it will be even more important to make value a goal for the next architecture revision.

We do hope NVIDIA will continue to push down the Tesla and CUDA path, as even if those specific technologies don't take off, HPC and consumer-level GPU computing are areas of major potential benefit to everyone on the planet. The key word, though, is potential. NVIDIA would love it if we would mention PhysX or CUDA as an advantage of NVIDIA hardware in every review we do, but the fact is that there just isn't anything compelling out there right now that adds real, immediate value to the product for the end user.

We'd love to see PhysX and CUDA take off, but until they do, they are just checkboxes on a feature list that may or may not gain support. Even DirectX 10.1, despite its current lack of value to the end user, is more relevant, as the industry is driven by DirectX at this point. Whatever Microsoft does to direct the future of the DirectX API, all game developers and hardware engineers are eventually going to have to follow. It's an unfortunate reality, but there it is. There is nothing, other than inherent potential and possibly NVIDIA's money, that will otherwise compel developers to include PhysX and CUDA code in their projects. We want to see NVIDIA's push into GPU computing pay off, but we can't rely on that to recommend products for our readers.

Moving away from NVIDIA, as I alluded to earlier, everything hasn't been coming up roses for AMD on all fronts either. After the merger, neither AMD GPUs nor CPUs were doing very well, and it took until recently for the RV770 to really turn that around. Since the merger, we've seen personnel move around and bits of the business sold off (like the TV division of ATI). We are even hearing rumblings that more pieces of AMD might split off, but we don't have any confirmation here. One of the rumors is a possible spin-off of the fab business, enabling it to compete with the likes of TSMC as an independent silicon manufacturer while leaving the rest of AMD to become a fabless design house.

Nothing is very straightforward at this point. Certainly the success AMD has had with its latest graphics card lineup will help, but nothing is ever certain in this industry until it has already happened. We are certainly thankful that the turbulence AMD and NVIDIA sometimes face in the business world hasn't taken away their ability to develop and deliver amazing products.

And remember also that this generation's products and business decisions don't necessarily predict the future. From the Radeon 9700 and GeForce FX 5800 through today, we've seen ATI and NVIDIA trade blows: one year one will lead, the next it will switch. In recent memory, the only real exception has been G80, which remained on top for a ridiculous amount of time and led a field of products that were all more attractive than the rival's. We might chalk some of this up to the ATI/AMD merger and the time it took to sort everything out, and maybe part of the reason NVIDIA isn't on top once again is that it took a little too much time to bask in the glory of its creation.

No matter what happens, and despite all the industry turmoil, we expect the next round of hardware to be even more exciting for the consumer than this one. Maybe we won't see leaps and bounds in performance, but the new game in town is delivering as much as possible for the lowest reasonable price.

Which is frankly very exciting to us.

Here's to competition. And here's to hoping Intel strikes enough fear into the hearts of AMD and NVIDIA that they just can't help but over deliver on performance (as if that were even possible).

Man, I love this industry.

Comments

  • jeffrey - Tuesday, September 30, 2008 - link

    Specifically in two areas:

    #1 Memory Technology
    -- ATI/AMD has now implemented and shipped numerous boards with GDDR4 and GDDR5 memory. NVIDIA has been stuck at GDDR3 memory during the time ATI/AMD has led the way with GDDR4 and GDDR5. This is a huge disadvantage for NVIDIA, considering they have to route a 512-bit memory bus in order to provide bandwidth which ATI/AMD can provide with a 256-bit memory bus


    #2 Process Technology
    -- ATI/AMD now has two generations of products at 55nm, during which time NVIDIA has been stuck at 65nm. This is a huge disadvantage for NVIDIA considering the enormous transistor count on their chips.
  • anonymous x - Saturday, October 4, 2008 - link

    about your #2
    My GTX+ is 55nm...
    anyway, it's faster than its main competitor, the 4850, so I don't really care whether it's an old architecture or not
  • Goty - Sunday, October 5, 2008 - link

    It would be a good idea to qualify your statement by saying "fast in certain scenarios."
  • hemmy - Wednesday, October 1, 2008 - link

    Because they were complacent with their dominance since the release of G80 at the end of 2006.

    And saying using GDDR3 is a 'huge' disadvantage is a retarded statement.

    You could go a few years back and ask why ATI was behind in multi-gpu technology, lack of PS3.0, etc.

    It is just a cycle, each having their ups and downs.
  • jeffrey - Wednesday, October 1, 2008 - link

    "And saying using GDDR3 is a 'huge' disadvantage is a retarded statement."

    512-bit bus due to using GDDR3. A big, expensive chip on relatively older process technology (ATI has used 55nm for two generations) with a huge bus to route is a huge disadvantage vs. a smaller process, a more efficient design, and a bus size that requires half the routing.

    I can see passing over GDDR4 due to limited benefit, but passing over GDDR5 just amazed me for their GT200 chips. A brand new billion-transistor chip on a generation-old process technology with two-generation-old memory?
  • jmurbank - Wednesday, October 1, 2008 - link

    512-bit bus is just a bus size. It has nothing to do with memory technologies that are used. Just about any memory technology can be used. Just put it in perspective a 512-bit bus uses eight (8) memory chips that has 64-bit data bus. You can say it is an eight channel memory bus. The use of GDDR5 compared to previous GDDR generations is just providing an illusion of performance to the customer. Does GDDR5 perform better than previous generations? I do not think so since each generation introduces new latency specs. Graphics cards needs the lowest latency memory technology to provide the best performance.

    nVidia did not get behind. They just picked the wrong stuff to go further with their products. AMD/ATI picked the right stuff to go further with their graphic cards and at the same time they picked the right stuff to make their high-end models cheaper than we as consumers have thought.
  • jeffrey - Wednesday, October 1, 2008 - link

    "512-bit bus is just a bus size. It has nothing to do with memory technologies that are used. Just about any memory technology can be used. Just put it in perspective a 512-bit bus uses eight (8) memory chips that has 64-bit data bus."

    My point is exactly what you have stated. NVIDIA needed to route 8 memory chips, each with a 64-bit data bus. If GDDR5 were used, eight 32-bit data buses would provide approximately the same bandwidth.

    ATI Radeon HD4870
    GDDR5 256-bit bus 900MHz 115.2GB/s

    NVIDIA GTX 260
    GDDR3 448-bit bus 999MHz 111.9GB/s

    ATI is able to achieve higher bandwidth with a smaller bus width and a lower memory clock speed than NVIDIA.
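As a side note, the arithmetic behind the bandwidth figures in that last comment is easy to check: peak bandwidth is bus width (in bytes) times memory clock times transfers per clock, where GDDR5 moves four transfers per clock and GDDR3 two. Here is a minimal sketch of that calculation (the `bandwidth_gbs` helper name is ours, purely for illustration):

```python
def bandwidth_gbs(bus_bits: int, clock_mhz: float, transfers_per_clock: int) -> float:
    """Peak memory bandwidth in GB/s from bus width, memory clock, and pump rate."""
    bytes_per_transfer = bus_bits / 8          # bus width in bytes
    # MHz * MT/clock * bytes = MB/s; divide by 1000 for GB/s
    return bytes_per_transfer * clock_mhz * transfers_per_clock / 1000

# Radeon HD 4870: 256-bit GDDR5 at 900MHz, 4 transfers per clock
hd4870 = bandwidth_gbs(256, 900, 4)   # 115.2 GB/s
# GeForce GTX 260: 448-bit GDDR3 at 999MHz, 2 transfers per clock
gtx260 = bandwidth_gbs(448, 999, 2)   # ~111.9 GB/s

print(f"HD 4870: {hd4870:.1f} GB/s, GTX 260: {gtx260:.1f} GB/s")
```

This reproduces the numbers quoted above: the narrower GDDR5 bus at a lower clock slightly out-delivers the much wider GDDR3 bus, which is exactly the routing advantage being argued.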
