Performance Preview

Since we don't have hardware, we are left with the charts that ATI provided. Obviously, you need to take these benchmark results with a huge grain of salt, but for now it's all we have to go on. ATI provided results comparing performance of their new and old Performance (5650 vs. 4650) and Enthusiast (5870 vs. 4870) solutions, an additional chart looking at WUXGA single vs. CrossFire performance, and two more charts comparing performance of high-end and midrange ATI vs. NVIDIA. Here's what we have to look forward to, based on their testing.

[Slides: ATI-provided performance comparisons - 5650 vs. 4650 and 5870 vs. 4870, WUXGA single vs. CrossFire scaling, and the ATI vs. NVIDIA matchups]

The performance is about what we would expect based on the specifications. Other than adding DX11 support, there's nothing truly revolutionary going on. The new 5870 part has a higher core clock and more memory bandwidth than the standard HD 4870 - 27% more processing power and 11% more bandwidth, to be exact. The average performance improvement appears to be 20-25%, which is right in line with those figures. Crysis appears to hit some memory bandwidth constraints, which is why its performance increase isn't as high as in other titles (see the 5650 slide for a better example of this).
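
To make the arithmetic explicit, here's a minimal Python sketch of where those percentages come from. The clocks are our assumptions based on commonly published Mobility specs, not values taken from the slides themselves:

```python
# Sanity check of the 5870 vs. 4870 scaling figures quoted above. The
# clocks are assumptions based on commonly published Mobility specs;
# both parts have 800 stream processors, so the core clock alone sets
# the processing-power ratio.
hd4870 = {"core_mhz": 550, "mem_mhz": 900}   # 128-bit GDDR5
hd5870 = {"core_mhz": 700, "mem_mhz": 1000}  # 128-bit GDDR5

def bandwidth_gbps(mem_mhz, bus_bits=128):
    # GDDR5 transfers four times per memory clock.
    return mem_mhz * 1e6 * 4 * (bus_bits / 8) / 1e9

core_gain = hd5870["core_mhz"] / hd4870["core_mhz"] - 1
bw_gain = (bandwidth_gbps(hd5870["mem_mhz"])
           / bandwidth_gbps(hd4870["mem_mhz"]) - 1)
print(f"Core: +{core_gain:.0%}, Bandwidth: +{bw_gain:.0%}")  # +27%, +11%
```

A 20-25% average gain landing between the +11% bandwidth figure and the +27% core figure is what you'd expect when games are partly core-bound and partly bandwidth-bound.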

On the midrange ("Performance") parts, the waters are a little murkier. The highest clocked 5600/5700 part (the 5750 or 5770) can run at 650MHz and provides a whopping 62% boost in core performance relative to the 4650, but that's only 10% more than the HD 4670. With GDDR5, the 5750 also offers a potential 100% increase in memory bandwidth. That said, in this slide we're not looking at the 5750 or 4670; instead we have the 5650 clocked at 550MHz with 800MHz DDR3, and ATI compares it to a 4650 with unknown clocks (550MHz core with 800MHz DDR3 is typical). That makes the 5650 25% faster in theoretical core performance with the same memory bandwidth. The slide shows a performance improvement of 20-25% once more, as expected, except Crysis clearly hits a memory bottleneck this time. On a side note, running a midrange GPU at 1920x1200 is going to result in very poor performance in any of these titles, so while the 5650 is 20% faster, we might be looking at 12 FPS vs. 10 FPS.
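
For the slide's actual comparison, that 25% figure falls straight out of the shader counts; this quick sketch assumes the usual 400 SPs for the 5650 and 320 SPs for the 4650, since neither count is stated on the slide:

```python
# The 5650 vs. 4650 core-throughput comparison, assuming 400 SPs for
# the 5650 and 320 SPs for the 4650 (each ATI stream processor does
# one multiply-add, i.e. 2 FLOPs, per clock).
def gflops(shaders, core_mhz):
    return shaders * 2 * core_mhz / 1000.0

hd5650 = gflops(400, 550)  # 440 GFLOPS at the 550MHz tested clock
hd4650 = gflops(320, 550)  # 352 GFLOPS at the typical 550MHz clock
print(f"5650 advantage: +{hd5650 / hd4650 - 1:.0%}")  # +25%
```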

Moving to the ATI vs. NVIDIA slides, it's generally pointless to compare theoretical GFLOPS between the ATI and NVIDIA architectures, as they're not the same. The 5870 has a theoretical ~100% GFLOPS advantage over the GTX 280M but only 5% more bandwidth. The average performance advantage of the 5870 is around 25%, with BattleForge showing a ~55% improvement. Obviously, the theoretical 100% GFLOPS advantage isn't showing up here. The midrange showdown between the 5650 and GT 240M shows closer to a ~30% performance increase, with BattleForge, L4D, and UT3 all showing >50% improvements. The theoretical performance of the 5650 is 144% higher than the GT 240M's, but memory bandwidth is essentially the same (the GT 240M has a 1% advantage). There may be cases where ATI can make better use of their higher theoretical performance, but these results suggest that NVIDIA "GFLOPS" are around 70% more effective than ATI "GFLOPS".
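
Here's a rough sketch of that effectiveness estimate; the performance ratios below are simply the averages quoted above, not new measurements:

```python
# Back-of-the-envelope "GFLOPS effectiveness" from the slide data.
# The performance ratios are the rough averages quoted above; the
# GFLOPS ratios are the theoretical 2.0x (~100%) and 2.44x (144%)
# advantages.
matchups = {
    "5870 vs. GTX 280M": (1.25, 2.00),  # (perf ratio, GFLOPS ratio)
    "5650 vs. GT 240M": (1.30, 2.44),
}
for name, (perf_ratio, gflops_ratio) in matchups.items():
    # ATI performance per theoretical GFLOP, normalized to NVIDIA = 1.0.
    ati_per_gflop = perf_ratio / gflops_ratio
    print(f"{name}: NVIDIA GFLOPS ~{1 / ati_per_gflop - 1:.0%} more effective")
```

The two matchups work out to roughly 60% and 88% respectively, which is how we arrive at the ~70% ballpark.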

For reference, ATI uses a desktop system for the ATI vs. ATI charts and some form of 2.5GHz Core 2 Duo setup for the ATI vs. NVIDIA charts. There's nothing particularly fishy about this, considering there are as yet no laptops with the new ATI hardware and no one offers an identical laptop with support for both ATI and NVIDIA GPUs (Alienware's M17x is about as close as we get). Here are the test details:

[Slides: test system details]

The first slide states a clock speed of 450MHz on the 5650, but the second says 550MHz. Given the figures in the ATI performance comparison, we think the 550MHz clock is correct.

Comments

  • Hauk - Thursday, January 7, 2010 - link

    That's exactly what I got. Here's what it does with Win 7 Index: http://i133.photobucket.com/albums/q62/steelsix/De...

    Games run okay, but at this res it's medium settings at best on demanding titles. It needs a bit more power, and the 58xx card would do it, methinks.

    I plan to investigate and will post any good details in the forum.
  • SlyNine - Friday, January 8, 2010 - link

    Are you aware of the throttling issues when plugged in with the 90 watt AC adapter? Apparently it throttles down to very low clock speeds when using AC power and the laptop screen.

    It runs at full speed on battery, though, and Dell promises a fix.


    Huge 250 page thread here, very helpful as well.
  • Hauk - Friday, January 8, 2010 - link

    Mine works fine. On delivery I flashed to the latest BIOS, installed an Intel SSD, and reinstalled the OS. Idles at 931 core, runs well.
  • SlyNine - Saturday, January 9, 2010 - link

    Glad to hear it. Supposedly mine is on the way, but Dell keeps switching from shipped to delivery preparation. I've been reading that it throttles when plugged in to the 90 watt adapter. Have you checked it while running Prime95 and some sort of video intensive task (stay away from FurMark, that is potentially dangerous to your card), maybe a Windows game so you can eye your clock speeds?
  • SlyNine - Saturday, January 9, 2010 - link

    Sorry, meant to say a windowed game, not a Windows game.
  • SlyNine - Friday, January 8, 2010 - link

    http://forum.notebookreview.com/showthread.php?t=4...
  • R3MF - Thursday, January 7, 2010 - link

    Is the midrange 56xx series a native 400 shader part, or is it cut down from a ~480 shader original die?

    Keeping the low end 54xx series at 80 shaders is deeply disappointing.
  • AlB80 - Thursday, January 7, 2010 - link

    1. Rumors say that 56xx or 55xx desktop parts will have 400 SPs.
    2. RS880 (785G) has 40 SPs, so 80 SPs with DX11 is not a bad start for a discrete card.
  • R3MF - Thursday, January 7, 2010 - link

    1. Cheers for the info.

    2. But we have had 40 shaders in AMD's integrated graphics for years now, and it has long since been surpassed by the NVIDIA 9300/Ion, and even by Intel's new Clarkdale IGPs.

    Given that high end mobile GPUs have 800 shaders and midrange cards have 400, I would have hoped that the low end would at least have advanced to 160 shaders.
  • zicoz - Thursday, January 7, 2010 - link

    Does this support bitstreaming like its desktop brother?
