AMD's Radeon HD 6970 & Radeon HD 6950: Paving The Future For AMD
by Ryan Smith on December 15, 2010 12:01 AM EST

The Test
For the launch of the Radeon HD 6900 series, AMD supplied us with a 6900-enabled version of the Catalyst 10.11 driver, version 8.79.6.2RC2. This is older than the Catalyst 10.12 preview released Monday, which is based on the newer 8.8xx branch.
Otherwise our test setup has not significantly changed from the GTX 570 launch last week. For our existing AMD cards we’re still using Catalyst 10.10e, while for NVIDIA it’s a mix of ForceWare 262.99 and 263.09. Note that we do not yet have a second GTX 570 for GTX 570 SLI comparisons; given how close the GTX 570 and GTX 480 are in performance, the GTX 480 in SLI is a reasonable stand-in.
Finally, all tests were done with the default driver settings unless otherwise noted.
CPU: Intel Core i7-920 @ 3.33GHz
Motherboard: Asus Rampage II Extreme
Chipset Drivers: Intel 9.1.1.1015
Hard Disk: OCZ Summit (120GB)
Memory: Patriot Viper DDR3-1333 3 x 2GB (7-7-7-20)
Video Cards:
    AMD Radeon HD 6970
    AMD Radeon HD 6950
    AMD Radeon HD 6870
    AMD Radeon HD 6850
    AMD Radeon HD 5970
    AMD Radeon HD 5870
    AMD Radeon HD 5850
    AMD Radeon HD 5770
    AMD Radeon HD 4870
    NVIDIA GeForce GTX 580
    NVIDIA GeForce GTX 570
    NVIDIA GeForce GTX 480
    NVIDIA GeForce GTX 470
    NVIDIA GeForce GTX 460 1GB
    NVIDIA GeForce GTX 460 768MB
    NVIDIA GeForce GTS 450
    NVIDIA GeForce GTX 285
    NVIDIA GeForce GTX 260 Core 216
Video Drivers:
    NVIDIA ForceWare 262.99
    NVIDIA ForceWare 263.09
    AMD Catalyst 10.10e
    AMD Catalyst 8.79.6.2RC2
OS: Windows 7 Ultimate 64-bit
168 Comments
B3an - Thursday, December 16, 2010 - link
Very stupid, uninformed, and narrow-minded comment. People like you never look to the future, which anyone should do when buying a graphics card, and you completely lack any imagination. There are already tons of uses for GPU computing, many of which the average computer user can make use of, even if it's simply encoding a video faster. And it will be used a LOT more in the future.

Most people, especially ones that game, don't even have 17" monitors these days. The average monitor on any new computer is at least 21" at 1680 res. Your whole comment reads as if everyone has the exact same needs as YOU. You might be happy with your ridiculously small monitor, playing games at low res on lower settings, and it might get the job done, but lots of people don't want this; they have standards and large monitors and can make use of these new GPUs. I can't exactly see many people buying these cards with a 17" monitor!
CeepieGeepie - Thursday, December 16, 2010 - link
Hi Ryan,

First, thanks for the review. I really appreciate the detail and depth on the architecture and compute capabilities.
I wondered if you had considered using some of the GPU benchmarking suites from the academic community to give even more depth for compute capability comparisons. Both SHOC (http://ft.ornl.gov/doku/shoc/start) and Rodinia (https://www.cs.virginia.edu/~skadron/wiki/rodinia/... look like they might provide a very interesting set of benchmarks.
Ryan Smith - Thursday, December 16, 2010 - link
Hi Ceepie,

I've looked into SHOC before. Unfortunately it's *nix-only, which means we can't integrate it into our Windows-based testing environment. NVIDIA and AMD both work first and foremost on Windows drivers for their gaming card launches, so we rarely (if ever) have Linux drivers available for the launch.
As for Rodinia, this is the first time I've seen it. But it looks like their OpenCL codepath isn't done, which means it isn't suitable for cross-vendor comparisons right now.
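For anyone wondering why the OpenCL codepath is the sticking point: OpenCL is the one compute API where a single binary can target both vendors' GPUs, which is what makes apples-to-apples comparisons possible. As a minimal sketch (error handling omitted for brevity), this is roughly the platform/device enumeration any cross-vendor benchmark has to start with:

    /* Enumerate every OpenCL platform (AMD, NVIDIA, ...) present on the
       system and count its GPU devices. Illustrative sketch only; a real
       benchmark checks every return code. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platforms[8];
        cl_uint platform_count = 0;
        clGetPlatformIDs(8, platforms, &platform_count);

        for (cl_uint i = 0; i < platform_count; i++) {
            char name[256];
            clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                              sizeof(name), name, NULL);

            cl_device_id devices[8];
            cl_uint device_count = 0;
            clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_GPU,
                           8, devices, &device_count);

            printf("%s: %u GPU device(s)\n", name, device_count);
        }
        return 0;
    }

Built against either vendor's OpenCL SDK, the same executable will see both a Radeon and a GeForce; a CUDA benchmark, by contrast, is NVIDIA-only by construction.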
IdBuRnS - Thursday, December 16, 2010 - link
"So with that in mind a $370 launch price is neither aggressive nor overpriced. Launching at $20 over the GTX 570 isn’t going to start a price war, but it’s also not so expensive to rule the card out. "At NewEgg right now:
Cheapest GTX 570 - $509
Cheapest 6970 - $369
$30 difference? What are you smoking? Try $140 difference.
IdBuRnS - Thursday, December 16, 2010 - link
Oops, $20 difference. Even worse.

IdBuRnS - Thursday, December 16, 2010 - link
570...not 580.../hangsheadinshame
epyon96 - Thursday, December 16, 2010 - link
This was a very interesting discussion in the article for me.

I'm curious whether AnandTech might expand on this further in a future dedicated article comparing what NVIDIA is using to AMD's designs.
Is NVIDIA's architecture also closer to VLIW4 or to VLIW5?
Can someone else shed some light on it?
Ryan Smith - Thursday, December 16, 2010 - link
We wrote something almost exactly like what you're asking for in our Radeon HD 4870 review: http://www.anandtech.com/show/2556
AMD and NVIDIA's compute architectures are still fundamentally the same, so just about everything in that article still holds true. The biggest break is VLIW4 for the 6900 series, which we covered in our article this week.
But to quickly answer your question: GF100/GF110 don't map cleanly onto either VLIW4 or VLIW5. NVIDIA is using a pure scalar architecture, which has a number of fundamental differences from any VLIW architecture.
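If a toy analogy helps, here is ordinary C standing in for shader instructions; this is not anything either GPU actually executes, just an illustration of the scheduling problem VLIW creates and scalar designs avoid:

    /* VLIW4/VLIW5 relies on the compiler finding independent operations to
       fill 4 or 5 execution slots every clock; a scalar design like GF110's
       issues one operation per thread and has no slots to fill. */
    #include <stdio.h>

    int main(void)
    {
        float a = 1.f, b = 2.f, c = 3.f, d = 4.f, e = 5.f;

        /* Five independent multiplies: a best case for VLIW5, since all
           five slots of one bundle can be filled in a single clock. */
        float r0 = a * a, r1 = b * b, r2 = c * c, r3 = d * d, r4 = e * e;

        /* A dependent chain: each multiply needs the previous result, so a
           VLIW machine can fill only one slot per clock while the rest sit
           idle; a scalar machine loses nothing here. */
        float s = ((((a * b) * c) * d) * e);

        printf("%f %f %f %f %f %f\n", r0, r1, r2, r3, r4, s);
        return 0;
    }

That slot-utilization problem is a big part of why AMD's shader compiler matters so much, and why average utilization drove the move from VLIW5 to VLIW4 in the 6900 series.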
dustcrusher - Thursday, December 16, 2010 - link
The cheap insults are nothing but a detriment to what is otherwise an interesting argument, even if I don't agree with you.

As far as the intellect of AnandTech readers goes, this is one of the few sites where almost all of the comments are worth reading; most sites are the opposite: one or two tiny bits of gold in a big pan of mud.
I'm not going to "vastly overestimate" OR underestimate your intellect, though; instead I'm going to assume that you got caught up in the moment. This isn't Tom's or DailyTech; a little snark is plenty.
Arnulf - Thursday, December 16, 2010 - link
When you launch an application (say, a game), it is likely to be the only active thread running on the system, or perhaps one of very few active threads. A CPU with a Turbo function will clock up as high as possible to run this main thread. When the application launches further threads, the CPU will inevitably increase its power consumption and consequently clock down.

While CPU manufacturers don't advertise this functionality in this manner, it is really no different from PowerTune.
Would PowerTune make you feel any better if it were marketed the other way around, the way CPU turbo is? (i.e., quoting a lower base frequency plus a boost clock that applies as long as the thermal cap hasn't been reached)
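To make the analogy concrete, here is a hypothetical controller loop in C. The numbers are invented for illustration (only the 880MHz clock and 250W cap match the 6970's published specs), and AMD's real algorithm is internal and far more sophisticated; the point is just that "advertise the top clock, throttle under a power cap" and "advertise a base clock, boost under a power cap" are the same mechanism described from opposite ends:

    /* Toy model of a power-capped clock controller. BASE_MHZ and the
       estimate_power() coefficients are made up for illustration. */
    #include <stdio.h>

    #define BASE_MHZ   500        /* hypothetical floor */
    #define MAX_MHZ    880        /* 6970's advertised core clock */
    #define POWER_CAP  250.0      /* 6970's PowerTune cap, watts */

    /* toy estimate: power grows with clock and load (0..1) */
    static double estimate_power(int mhz, double load)
    {
        return 80.0 + 0.22 * mhz * load;
    }

    static int next_clock(int mhz, double load)
    {
        if (estimate_power(mhz, load) > POWER_CAP)          /* throttle */
            return mhz - 10 > BASE_MHZ ? mhz - 10 : BASE_MHZ;
        if (mhz < MAX_MHZ)                                  /* boost */
            return mhz + 10;
        return mhz;
    }

    int main(void)
    {
        int mhz = MAX_MHZ;
        /* Under a FurMark-like 100% load the clock steps down until the
           power estimate fits under the cap, then hovers just beneath it. */
        for (int i = 0; i < 12; i++) {
            mhz = next_clock(mhz, 1.0);
            printf("step %2d: %d MHz, est. %.0f W\n",
                   i, mhz, estimate_power(mhz, 1.0));
        }
        return 0;
    }

Marketed CPU-style, the same hardware could have been sold as a lower-clocked part with "up to 880MHz boost"; PowerTune simply quotes the other end of the range.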