Meet the GTX 480 and GTX 470, Cont

Moving beyond the GF100 GPU itself, we have the boards. With NVIDIA once more forgoing a power-of-two memory bus on their leading part, the number of memory chips and the total memory size of the GTX 400 series is once again an odd amount. On the GTX 480 there are twelve 128MB GDDR5 memory chips for a total of 1536MB of VRAM, while the GTX 470 has 10 chips for a total of 1280MB. This marks the first big expansion in memory capacity we’ve seen out of NVIDIA in quite some time; after introducing the 8800 GTX in 2006 with 768MB of RAM, they haven’t done anything over the years besides bump their 256-bit/512-bit memory bus parts up to 1GB. And with 256MB GDDR5 chips due in volume later this year, we wouldn’t be surprised to see NVIDIA push a 3GB part before the year is out.
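To see where those capacities and bus widths come from, note that each GDDR5 chip of this generation sits on its own 32-bit channel, so both numbers follow directly from the chip count. A quick sketch (the 3GB configuration is our speculation, assuming 256MB chips on the same 12-chip board):

```python
# Each GDDR5 chip occupies a 32-bit channel; capacity and bus width
# both scale linearly with the number of chips on the board.
CHIP_BUS_BITS = 32

def board_config(num_chips, mb_per_chip):
    """Return (total VRAM in MB, memory bus width in bits)."""
    return num_chips * mb_per_chip, num_chips * CHIP_BUS_BITS

print(board_config(12, 128))  # GTX 480: 1536MB on a 384-bit bus
print(board_config(10, 128))  # GTX 470: 1280MB on a 320-bit bus
print(board_config(12, 256))  # hypothetical 3GB part: 3072MB, 384-bit
```

This is also why the totals look "odd": 12 and 10 channels give 384-bit and 320-bit buses rather than a tidy power-of-two 256-bit or 512-bit design.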

Meanwhile, in a stark difference from the GTX 200 series, the GTX 400 series does not share a common design and cooler. This leads to the GTX 480 and GTX 470 being remarkably different cards. The GTX 470 is essentially a slight variation on the GTX 200 series design, utilizing a fully shrouded design similar to those cards. The GTX 480, on the other hand, is positively alien coming from the GTX 200 series in two distinct ways. The first is the 4 heatpipes protruding from the top of the card (with a 5th one staying within the card), and the second is the fully exposed heatsink grill on the front of the card. That’s exactly what it looks like, folks – that is the top of the heatsink on the GTX 480. At this point it’s mostly an intellectual curiosity (we have no idea whether it makes the GTX 480’s cooler any better), but we did learn the hard way that it’s not just cosmetic: it can get very hot.

One new thing that both cards do share in common is that the shroud is no longer a single large device; on the GTX 480 and GTX 470 the top of the shroud can be snapped on and off, allowing easy access to the heatsink and fan assemblies. We can’t imagine that most users will ever want to remove the top of the shroud, but this is one of the cooler design elements we’ve seen in a video card in recent years. It’ll be interesting to see if this proves to be beneficial for aftermarket coolers, as this should make installation/uninstallation much more expedient.

One other common element between the cards is that they have a cut-out PCB for pulling in air both from the front side and the back side of the card. We’ve seen this before on the GTX 295, but this is the first time we’ve seen this design element on a single-GPU card.

For those of you working with cramped cases, these cards should be a pleasant surprise. The GTX 470 is 9.5”, making it the same length as the Radeon 5850 (and nearly 1” shorter than the GTX 200 series). The GTX 480 measures 10.5”, which is ever so slightly longer than the GTX 200 series, which we measure at 10.45”. We’re also happy to report that NVIDIA put the PCIe power plugs on the top of both cards, rather than on the rear of the card as AMD did on the Radeon 5850. Practically speaking, both of these cards should fit into a wider array of cases than AMD’s respective cards.

Even though these cards will fit into smaller cases, airflow will be paramount due to their high TDPs. NVIDIA’s own reviewers’ guide goes so far as to recommend spacing your cards as far apart as possible for SLI use. This isn’t a bad idea no matter what cards are involved, since it ensures that neither card is restricted by the other; however, given that not every board with a 3rd PCIe x16 slot offers full bandwidth to that slot, it’s not a practical suggestion for all cases. If you can’t separate your cards, you’re going to want great airflow instead, such as putting a fan directly behind the cards.

Up next is the port layout of the GTX 400 series. Unlike AMD, NVIDIA’s TDP is too high to go with a half-slot vent, so NVIDIA is limited in what ports they can fit on a single full slot. In this case their reference design is a pair of DVI ports and a mini-HDMI port (these being the first cards with that port in our labs). Bear in mind that GF100 doesn’t have the ability to drive 3 displays with a single card, so while there are 3 DVI-type outputs here, you can only use two at once.

After having seen DisplayPort on virtually every AMD card in our labs, we were caught a bit off guard by the fact that NVIDIA didn’t do the same and go with something like a mini-DisplayPort here for a 2x DVI + 1x mini-DP configuration like we’ve seen on the Radeon 5970. NVIDIA tells us that while they could do such a thing, their market research has shown that even their high-end customers are more likely to purchase a monitor with HDMI than with DP, hence the decision to go with mini-HDMI. This is somewhat academic since DVI can easily be converted to HDMI, but this allows NVIDIA’s partners to skip the dongles and makes it easier to do audio pass-through for monitors with built-in speakers.

Speaking of audio, let’s quickly discuss the audio/video capabilities of the GTX 400 series. GF100 has the same audio/video capabilities as the 40nm GT 200 series parts launched late last year, so this means NVIDIA’s VP4 for video decoding (H.264/MPEG-2/VC-1/MPEG-4 ASP) and internal passthrough for audio. Unfortunately the latter means that the GTX 400 series (and other first-generation Fermi derivatives) won’t be able to match AMD’s Radeon 5000 series in audio capabilities – NVIDIA can do compressed lossy audio (DD/DTS) and 8-channel uncompressed LPCM, but not lossless compressed audio formats such as DTS-HD and Dolby TrueHD. This leaves the HTPC crown safely in AMD’s hands for now.

Finally we have bad news: availability. This is a paper launch; while NVIDIA is launching today, the cards won’t be available for another two and a half weeks at a minimum. NVIDIA tells us that the cards are expected to reach retailers during the week of April 12th, which hopefully means the start of that week and not the end of it. In either case we have to chastise NVIDIA for this; they’ve managed hard launches in the past without an issue, so we know they can do better than this. This is a very bad habit to get into.

Once these cards do go on sale, NVIDIA is telling us that the actual launch supply is going to be in the tens-of-thousands of units. How many tens-of-thousands? We have no idea. For the sake of comparison AMD had around 30,000 units for the 5800 series launch, and those were snapped up in an instant. We don’t think NVIDIA’s cards will sell quite as quickly due to the pricing and the fact that there’s viable competition for this launch, but it’s possible to have tens-of-thousands of units and still sell out in a heartbeat. This is something we’ll be watching intently in a couple of weeks.

The availability situation also has us concerned about card prices. NVIDIA is already starting off behind AMD in terms of pricing flexibility; 500mm²+ dies and 1.5GB of RAM do not come cheap. If NVIDIA does manage to sell the GTX 400 series as fast as they can send cards out, then there’s a good chance there will be a price hike. AMD is in no rush to lower prices, and NVIDIA’s higher costs mean that if they can get a higher price they should go for it. With everything we’ve seen from NVIDIA and AMD, we’re not ready to rule out any kind of price hike, or to count on any kind of price war.

196 Comments

  • henrikfm - Tuesday, March 30, 2010 - link

    Now it would be easier to believe only idiots buy ultra-high end PC hardware parts.
  • ryta1203 - Tuesday, March 30, 2010 - link

    Is it irresponsible to use benchmarks designed for one card to measure the performance of another card?

    Sadly, the "community" tries to hold the belief that all GPU architectures are the same, which is of course not true.

    The N-queen solver is poorly coded for ATI GPUs, so of course, you can post benchmarks that say whatever you want them to say if they are coded that way.

    Personally, I find this fact invalidates the entire article, or at least the "compute" section of this article.
  • Ryan Smith - Wednesday, March 31, 2010 - link

    One of the things we absolutely wanted to do starting with Fermi is to include compute benchmarks. It's going to be a big deal if AMD and NVIDIA have anything to say about it, and in the case of Fermi it's a big part of the design decision.

    Our hope was that we'd have some proper OpenCL/DirectCompute apps by the time of the Fermi launch, but this hasn't happened. So our decision was to go ahead with what we had, and to try to make it clear that our OpenCL benchmarks were to explore the state of GPGPU rather than to make any significant claims about the compute capabilities of NVIDIA or AMD's GPUs. We would rather do this than to ignore compute entirely.

    It sounds like we didn't make this clear enough for your liking, and if so I apologize. But it doesn't make the results invalid - these are OpenCL programs and this is what we got. It just doesn't mean that these results will carry over to what a commercial OpenCL program may perform like. In fact if anything it adds fuel to the notion that OpenCL/DirectCompute will not be the great unifier we had hoped for them to be if it means developers are going to have to basically write paths optimized around NVIDIA and AMD's different shader structure.
  • ryta1203 - Tuesday, March 30, 2010 - link

    The compute section of this article is just nonsense. Is this guy a journalist? What does he know about programming GPUs?
  • Firen - Tuesday, March 30, 2010 - link

    Thanks for this comprehensive review, it covers some very interesting topics between Team Green and Team Red.

    Yet, I agree with one of the comments here, you missed how easily the ATI 5850 and 5870 can be overclocked thanks to their light design; a 5870 can easily deliver more or less the same performance as a 480 card while still running cooler and consuming less power.

    Some people might point out that our new 'champion' card can be overclocked as well... that's true... however, doesn't it feel terrifying to have a graphics card running hotter than boiling water!
  • Fulle - Tuesday, March 30, 2010 - link

    I wonder what kind of overclocking headroom the 470 has.... since someone with a 5850 can easily bump the voltage up a smidge, and get about a 30% overclock with minimal effort... people who tinker can usually safely reach about 1GHz core, for about a 37% overclock.

    Unless the 470 has a bit of overclocking headroom, someone with a 5850 could easily overclock to have superior performance, lower heat, lower noise, and lower power consumption.

    After all these months and months of waiting, Nvidia has basically released a few products that ATI can defeat by just binning their current GPUs and bumping up the clockspeed? *sigh* I really don't know who would buy these cards.
  • Shadowmaster625 - Tuesday, March 30, 2010 - link

    You're being way too kind to Nvidia. Up to 50% more power consumption for a very slight (at best) price/performance advantage? This isn't a repeat of the AMD/Intel thing. This is a massive difference in power consumption. We're talking about approximately $1 a year per hour a week of gaming. If you game for 20 hours a week, expect to pay $20 a year more for using the GTX 470 vs a 5850. May as well add that right to the price of the card.

    But the real issue is what happens to these cards when they get even a modest coating of dust in them? They're going to detonate...

    Even if the 470 outperformed the 5850 by 30%, I don't think it would be worth it. I can't stand loud video cards. It is totally unacceptable to me. I again have to ask the question I find myself asking quite often: what kind of world are you guys living in? Nvidia should get nothing more than a poop-in-a-box award for this.
  • jujumedia - Wednesday, March 31, 2010 - link

    With those power draws and the temps it reaches in daily operation, I see GPU failure rates being high on the GTX 480 and 470, as they are already faulty from the fab lab. I'll stick with ATI for 10 fps less.
  • njs72 - Wednesday, March 31, 2010 - link

    I've been holding on for months to see what Fermi would bring to the world of GPUs. After reading countless reviews of this card I don't think it's a justifiable upgrade for my GTX 260. I mean, yeah, the performance is much higher, but in most reviews of benchmarks with games like Crysis this card barely wins against the 5870, and buying this card I would need to upgrade the PSU and possibly get a new case for ventilation. I keep loading up Novatech's website and almost adding a 5870 to the basket, instead of pre-ordering the GTX 480 like I was intending. What puts me off more than anything with the new Nvidia card is its noise and temps. I can't see this card living for very long.

    I've been an Nvidia fan ever since the first GeForce card came out, which I still have tucked away in a drawer somewhere. I find myself thinking of switching to ATI, but I've read too many horror stories about their driver implementation that put me off. Maybe I should just wait for Nvidia to refresh its new card and keep hold of my 260 for a bit longer. I really don't know :-(
  • Zaitsev - Wednesday, March 31, 2010 - link

    There is an error with the Bad Company 2 image mouse overs for the GTX 480. I think the images for 2xAA and 4xAA have been mixed up. 2xAA clearly has more AA than the 4xAA image.

    Compare GTX 480 2x with GTX 285 4x and they look very similar. Also compare 480 4x with 285 2x.

    Very nice article, Ryan! I really enjoyed the tessellation tests. Keep up the good work.
