Meet the GTX 480 and GTX 470, Cont

Moving beyond the GF100 GPU itself, we have the boards. With NVIDIA once more forgoing a power-of-two sized memory bus on their leading part, the number of memory chips and the total memory size of the GTX 400 series is once again an unusual amount. On the GTX 480 there are twelve 128MB GDDR5 memory chips for a total of 1536MB of VRAM, while the GTX 470 has ten chips for a total of 1280MB. This marks the first big expansion in memory capacity we’ve seen out of NVIDIA in quite some time; after introducing the 8800 GTX in 2006 with 768MB of RAM, they haven’t done anything over the years besides bump their 256-bit/512-bit memory bus parts up to 1GB. And with 2Gb (256MB) GDDR5 chips due in volume later this year, we wouldn’t be surprised to see NVIDIA push a 3GB part before the year is out.
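
For the curious, the chip counts fall directly out of the bus width: each GDDR5 chip occupies its own 32-bit slice of the memory bus, so the GTX 480’s 384-bit bus calls for 12 chips and the GTX 470’s 320-bit bus for 10. Here is a minimal sketch of the arithmetic in Python (the vram_config helper is our own illustration, not anything from NVIDIA; the bus widths are the cards’ published specs):

    # Each GDDR5 chip sits on its own 32-bit slice of the memory bus, so
    # chip count = bus width / 32, and total VRAM = chips * per-chip capacity.
    def vram_config(bus_width_bits, chip_capacity_mb):
        chips = bus_width_bits // 32
        return chips, chips * chip_capacity_mb

    print(vram_config(384, 128))  # GTX 480: (12, 1536) -> 12 chips, 1536MB
    print(vram_config(320, 128))  # GTX 470: (10, 1280) -> 10 chips, 1280MB
    print(vram_config(384, 256))  # 2Gb chips on the same bus: (12, 3072) -> a 3GB part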

Meanwhile, in a stark difference from the GTX 200 series, the GTX 400 series does not share a common design and cooler, which leads to the GTX 480 and GTX 470 being remarkably different cards. The GTX 470 is essentially a slight variation on the GTX 200 series design, utilizing a similar fully shrouded cooler as those cards. The GTX 480, on the other hand, is positively alien compared to the GTX 200 series in two distinct ways. The first is the 4 heatpipes protruding from the top of the card (with a 5th one staying within the card), and the second is the fully exposed heatsink grille on the front of the card. That’s exactly what it looks like, folks – that’s the top of the heatsink on the GTX 480. At this point it’s mostly an intellectual curiosity (we have no idea whether it makes the GTX 480’s cooler all that much better), but we did learn the hard way that it’s not just cosmetic: it can get very hot.

One new thing both cards do share is that the shroud is no longer a single large device; on the GTX 480 and GTX 470 the top of the shroud can be snapped on and off, allowing easy access to the heatsink and fan assemblies. We can’t imagine that most users will ever want to remove the top of the shroud, but this is one of the cooler design elements we’ve seen in a video card in recent years. It will be interesting to see whether this proves beneficial for aftermarket coolers, as it should make installation/uninstallation much more expedient.

One other common element between the cards is a cut-out PCB for pulling in air from both the front and the back of the card. We’ve seen this before on the GTX 295, but this is the first time we’ve seen this design element on a single-GPU card.

For those of you working with cramped cases, these cards should be a pleasant surprise. The GTX 470 is 9.5”, making it the same length as the Radeon 5850 (or nearly 1” shorter than the GTX 200 series). On the other hand, the GTX 480 measures 10.5”, ever so slightly longer than the GTX 200 series, which we measure at 10.45”. We’re also happy to report that NVIDIA put the PCIe power plugs on the top of both cards, rather than on the rear of the card as AMD did on the Radeon 5850. Practically speaking, both of these cards should fit into a wider array of cases than AMD’s respective cards.

Even though these cards will fit into smaller cases, airflow will be paramount due to their high TDPs. NVIDIA’s own reviewer’s guide even goes so far as to recommend spacing your cards out as far as possible for SLI use. This actually isn’t a bad idea no matter what cards are involved, since it ensures neither card is restricted by the other; however, given that not every board with a 3rd PCIe x16 slot offers full bandwidth to that slot, it’s not a practical suggestion in all cases. If you can’t separate your cards, you’re going to want great airflow instead, such as putting a fan directly behind the cards.

Up next is the port layout of the GTX 400 series. Unlike AMD, NVIDIA’s TDP is too high to go with a half-slot vent, so NVIDIA is limited to what ports they can fit on a single full slot. In this case their reference design is a pair of DVI ports and a mini-HDMI port (these being the first cards with that port in our labs). Bear in mind that GF100 doesn’t have the ability to drive 3 displays with a single card, so while there are 3 outputs here, you can only use two at once.

After having seen DisplayPort on virtually every AMD card in our labs, we were caught a bit off guard that NVIDIA didn’t follow suit with something like a mini-DisplayPort for a 2x DVI + 1x mini-DP configuration, as we’ve seen on the Radeon 5970. NVIDIA tells us that while they could do such a thing, their market research has shown that even their high-end customers are more likely to purchase a monitor with HDMI than with DP, hence the decision to go with mini-HDMI. This is somewhat academic since DVI can easily be converted to HDMI, but going with mini-HDMI allows NVIDIA’s partners to skip the dongles and makes it easier to do audio pass-through for monitors with built-in speakers.

Speaking of audio, let’s quickly discuss the audio/video capabilities of the GTX 400 series. GF100 has the same audio/video capabilities as the 40nm GT 200 series parts launched late last year, which means NVIDIA’s VP4 for video decoding (H.264/MPEG-2/VC-1/MPEG-4 ASP) and internal passthrough for audio. Unfortunately the latter means that the GTX 400 series (and other first-generation Fermi derivatives) won’t be able to match AMD’s Radeon 5000 series in audio capabilities – NVIDIA can do lossy compressed audio (DD/DTS) and 8-channel uncompressed LPCM, but not lossless compressed audio formats such as DTS-HD and Dolby TrueHD. This leaves the HTPC crown safely in AMD’s hands for now.

Finally, we have bad news: availability. This is a paper launch; while NVIDIA is launching today, the cards won’t be available for another two and a half weeks at a minimum. NVIDIA tells us that the cards are expected to reach retailers during the week of April 12th, which hopefully means the start of that week and not the end of it. In either case we have to chastise NVIDIA for this; they’ve managed to pull off hard launches in the past without an issue, so we know they can do better. This is a very bad habit to get into.

Once these cards do go on sale, NVIDIA tells us that the actual launch supply is going to be in the tens of thousands of units. How many tens of thousands? We have no idea. For the sake of comparison, AMD had around 30,000 units for the 5800 series launch, and those were snapped up in an instant. We don’t think NVIDIA’s cards will sell quite as quickly due to the pricing and the fact that there’s viable competition for this launch, but it’s possible to have tens of thousands of units and still sell out in a heartbeat. This is something we’ll be watching intently in a couple of weeks.

The availability situation also has us concerned about card prices. NVIDIA is already starting off behind AMD in terms of pricing flexibility; 500mm²+ dies and 1.5GB of RAM do not come cheap. If NVIDIA does manage to sell the GTX 400 series as fast as they can send cards out, then there’s a good chance there will be a price hike. AMD is in no rush to lower prices, and NVIDIA’s higher costs mean that if they can get a higher price, they should go for it. With everything we’ve seen from NVIDIA and AMD, we’re not ready to rule out any kind of price hike, nor to count on any kind of price war.

Comments

  • randfee - Sunday, March 28, 2010 - link

    go ahead and try Crysis with 3.33GHz and 4.x GHz; minimum fps scale strangely with the CPU.
  • palladium - Saturday, March 27, 2010 - link

    Clock for clock, the 920 is faster than the 860 thanks to its triple-channel memory (the 860 is faster at stock because of its aggressive turbo mode). Using the P55/860 would limit cards to PCIe x8 bandwidth when benchmarking SLI/CF (unless of course you get a board with an nF200 chip), which can be more significant (especially with high-end cards) than OC'ing a CPU from 3.33GHz to 4GHz.
  • Roland00 - Saturday, March 27, 2010 - link

    It doesn't really add to the framerates, and having a 4GHz CPU could in theory bring stability issues.

    http://www.legionhardware.com/articles_pages/cpu_s...
  • B3an - Friday, March 26, 2010 - link

    You're good at making yourself look stupid.

    A 920 will reach 4GHz easily. I've got one to 4.6GHz. And a 920 is on the superior X58 platform with tri-channel memory.
  • Makaveli - Friday, March 26, 2010 - link

    I have to agree with that guy.

    Your post is silly; everyone knows the X58 platform is the superior chipset in the Intel lineup. Secondly, do you honestly think 3.33GHz vs 4GHz is going to make that much of a difference at those high resolutions?

  • randfee - Friday, March 26, 2010 - link

    Sorry guys, but I know what I'm talking about. Using Crysis for instance, I found that minimum fps scale quite nicely with CPU clock, whereas the difference a quad core makes is not so big (only 2 threads in the game AFAIK). FarCry 2: huge improvements with higher-end (= higher-clocked) CPUs. The Core i7 platform has a clear advantage, yes, but the clock counts quite a bit.

    As I said... no offense intended, and no, I'm not arguing against my favorite site AnandTech ;). Just stating what I and others have observed. I'd just always try to minimize other possible bottlenecks.
  • randfee - Friday, March 26, 2010 - link

    well... why not test using the 920 @ 4.x GHz? Why possibly bottleneck the system at the CPU by using "only" 3.3GHz?

    No offense intended, but I find it a valid question. Some games really are CPU bound, even at high settings.
  • Ph0b0s - Friday, March 26, 2010 - link

    These new cards from ATI and NVIDIA are very nice, and for a new PC build it is a no-brainer to pick up one of them. But for those like me with decent cards from the last generation (GTX 285 SLI), I don't really feel a lot of pressure to upgrade.

    Most current PC games are DirectX 9 Xbox 360 ports that last-gen cards can handle quite well. Even DirectX 10 games are not too slow. The real driver for these cards is DirectX 11 games, the number of which I can count on one hand, with not very many upcoming.

    Those that are out don't really bring much over DX10, so I don't really feel like I am missing anything yet. I think Crysis 2 may change this, but by its release date there will probably be updated/shrunk versions of these new GPUs available.

    Hence why NVIDIA and ATI need really ecstatic reviews to convince us to buy their new cards when there is not a lot of software that (in my opinion) really needs them.
