Crysis

Crysis is a game that can beat down any card. We're once again using the High settings with shaders at Very High, and even at a fairly tame resolution of 1680x1050 only 8 cards manage to get past the magical 30fps mark, with nearly half of those just squeaking by. Crysis is particularly punishing on the HD3000 series cards at these kinds of settings, where only the HD3870 X2 was competitive without resorting to CrossFire. This makes the placement of the HD4000 series all the more important.



What we see with the 4870 is very promising. Here it leapfrogs the $100 more expensive GTX 260, delivering 7% more performance at manageable framerates. It does struggle a bit to separate itself from its cheaper sibling, the 4850, however, with only a 20% boost in performance for a 50% boost in price. This isn't unexpected, of course; it's almost exactly in line with the difference in shader power between the two, and we've known for some time that this test is shader-bound. Still, we're not seeing the extra memory bandwidth make even a slight difference here, even at the more unplayable 1920x1200 resolution.
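The price/performance point above can be put in rough numbers. This is a sketch; the $199/$299 launch prices are assumed here for illustration, and the ~20% performance gap is the figure from the benchmarks:

```python
# Rough price/performance sketch for the HD 4850 vs HD 4870 in Crysis.
# Launch prices ($199 / $299) are assumed; the ~20% performance gap
# is the figure quoted in the text.
price_4850, price_4870 = 199.0, 299.0
perf_4850 = 1.00              # normalized Crysis performance
perf_4870 = perf_4850 * 1.20  # ~20% faster per the benchmarks

price_premium = price_4870 / price_4850 - 1  # ~0.50 (50% more money)
perf_gain = perf_4870 / perf_4850 - 1        # 0.20 (20% more speed)

print(f"{price_premium:.0%} more money buys {perf_gain:.0%} more performance")
```

On a pure frames-per-dollar basis the 4850 comes out ahead; the 4870's premium buys headroom rather than proportional speed here.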

Neither HD4000 card can crack 30fps at higher resolutions, however; beyond 1680x1050 you either need to turn the settings down or start throwing additional cash at one of NVIDIA's more expensive cards or a CrossFire/SLI solution. Either way, this situation highlights the fact that, on a dollar-for-dollar basis, the HD4000 series has negated the commanding lead NVIDIA held with Crysis merely two weeks ago.


215 Comments


  • BusterGoode - Sunday, June 29, 2008 - link

    Thanks, great article. By the way, Anandtech is my first stop for reviews.
  • jay401 - Wednesday, June 25, 2008 - link

    Good but I just wish AMD would give it a full 512-bit memory bus bandwidth. Tired of 256-bit. It's so dated and it shows in the overall bandwidth compared to NVidia's cards with 512-bit bus widths. All that fancy GDDR4/5 and it doesn't actually shoot them way ahead of NVidia's cards in memory bandwidth because they halve the bus width by going with 256-bit instead of 512-bit. When they offer 512-bit the cards will REALLY shine.
  • Spoelie - Thursday, June 26, 2008 - link

    Except that when R600 had a 512-bit bus, it didn't show any advantage over RV670 with a 256-bit bus. And that was GDDR3 vs GDDR3, not GDDR5 like in RV770's case.
  • JarredWalton - Thursday, June 26, 2008 - link

    R600 was 512-bit ring bus with 256-bit memory interface (four 64-bit interfaces). Read about it here for a refresher: http://www.anandtech.com/showdoc.aspx?i=2552&p... Besides being more costly to implement, it used a lot of power and didn't actually end up providing provably better performance. I think it was an interesting approach that turned out to be less than perfect... just like NetBurst was an interesting design that turned out to have serious power limitations.
  • Spoelie - Thursday, June 26, 2008 - link

    Except that it was not, that was R520 ;) and R580 is the X19x0 series. That second one proved to be the superior solution over time.

    R600 is the x2900xt, and it had a 1024bit ring bus with 512bit memory interface.
  • DerekWilson - Sunday, June 29, 2008 - link

    yeah, r600 was 512-bit

    http://www.anandtech.com/showdoc.aspx?i=2988&p...

    looking at external bus width is an interesting challenge ... and gddr5 makes things a little more crazy in that clock speed and bus width can be so low with such high data rates ...

    but the 4870 does have 16 memory modules on it ... so that's a bit of a barrier to higher bit width busses ...
  • JarredWalton - Wednesday, June 25, 2008 - link

    I'd argue that the 512-bit memory interface on NVIDIA's cards is at least partly to blame for their high pricing. All things being equal, a 512-bit interface costs a lot more to implement than a 256-bit interface. GDDR5 at 900MHz is effectively the same as GDDR3 at 1800MHz... except no one is able to make 1800MHz GDDR3. Latencies might favor one or the other solution, but latencies are usually covered by caching and other design decisions in the GPU world.
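    The per-pin equivalence Jarred describes can be sanity-checked with a quick bandwidth calculation. A sketch: GDDR3 moves 2 bits per pin per clock while GDDR5 moves 4, and the 900MHz GDDR5 / 1107MHz GDDR3 clocks are the cards' published launch specs, assumed here rather than taken from the article.

```python
# Peak bandwidth from bus width, memory clock, and transfers per clock.
# GDDR3 is double data rate (2/clock); GDDR5 transfers 4 bits per pin
# per clock, so GDDR5 at 900MHz matches GDDR3 at 1800MHz per pin.
def bandwidth_gbps(bus_bits, clock_mhz, transfers_per_clock):
    """Peak memory bandwidth in GB/s."""
    return bus_bits / 8 * clock_mhz * transfers_per_clock / 1000

hd4870 = bandwidth_gbps(256, 900, 4)   # GDDR5, 3600 MT/s effective
gtx280 = bandwidth_gbps(512, 1107, 2)  # GDDR3, 2214 MT/s effective

print(f"HD 4870: {hd4870:.1f} GB/s, GTX 280: {gtx280:.1f} GB/s")
```

    In other words, the narrower 256-bit bus with GDDR5 lands in the same neighborhood as a 512-bit GDDR3 design, which is the crux of the bus-width debate in this thread.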
  • geok1ng - Wednesday, June 25, 2008 - link

    The tests showed what I feared: my 8800GT is getting too old to drive my Apple display at 2560x1600, even without AA! But the tests also showed that the 512MB of GDDR5 on the 4870 justifies the higher price tag over the 4850, something the 3870/3850 pair failed to demonstrate. The question remains: will 1GB of GDDR5 dethrone NVIDIA and rule the 30-inch realm of single-GPU solutions?
  • IKeelU - Wednesday, June 25, 2008 - link

    "It is as if AMD and NVIDIA just started pulling out hardware and throwing it at each other"

    This makes me crack up... I just imagine two bruised and sweaty middle-aged CEOs flinging PCBs at each other, like children in a snowball fight.
  • Thorsson - Wednesday, June 25, 2008 - link

    The heat is worrying. I'd like to see how aftermarket coolers work with a 4870.
