Compute Performance & Synthetics

While the GT 430 isn’t meant to be a computing monster and you won’t see NVIDIA presenting it as such, it’s still a member of the Fermi family and possesses the family’s compute capabilities. This includes the Fermi cache structure, along with the 48 CUDA core SM that was introduced with GF104/GTX 460. It also means the card shows greater variation in performance than past-generation NVIDIA cards: the need to extract instruction-level parallelism (ILP) means it performs somewhere between a 64 CUDA core card and a 96 CUDA core card, depending on the application.

Meanwhile, being based on the GF104-style SM, the GT 430 is FP64 capable at 1/12th FP32 speed (~20 GFLOPS FP64), a first for a card of this class.
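As a quick sanity check on that ~20 GFLOPS figure, peak shader throughput can be estimated as 2 FLOPs (one fused multiply-add) per CUDA core per shader clock. The sketch below assumes GT 430's reference 1400MHz shader clock (a spec not stated in the text above); it also shows the 64-core "floor" that applies when ILP can't be extracted.

```python
# Back-of-the-envelope peak FLOPS estimate for GT 430 (assumed reference specs).
def peak_gflops(cuda_cores, shader_clock_mhz, flops_per_core_per_clock=2):
    """2 FLOPs/core/clock counts one fused multiply-add as two operations."""
    return cuda_cores * shader_clock_mhz * flops_per_core_per_clock / 1000.0

fp32 = peak_gflops(96, 1400)        # all 96 CUDA cores kept busy (ideal ILP)
fp32_floor = peak_gflops(64, 1400)  # worst case: ILP extraction fails, ~64 cores effective
fp64 = fp32 / 12                    # GF104-style SMs run FP64 at 1/12th the FP32 rate

print(f"FP32 peak: {fp32:.1f} GFLOPS (floor ~{fp32_floor:.1f})")
print(f"FP64 peak: {fp64:.1f} GFLOPS")
```

On these assumptions FP32 peaks at roughly 269 GFLOPS (with a floor near 179 GFLOPS), and FP64 at about 22 GFLOPS, in line with the ~20 GFLOPS figure quoted above.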

For our look at compute performance we’ll turn to our trusty benchmark copy of Folding @ Home. We’ve also included the GT 240, a last-generation 96 CUDA core card just like the GT 430. This affords us an interesting opportunity to see the performance of Fermi compared to GT200 with the same number of CUDA cores in play, although GT 430 has a clockspeed advantage here that should, in theory, give it higher performance.

The results are interesting, but also a bit distressing. GT 430’s performance is quite a bit lower than the GTS 450’s, but that much is expected. GT 240, however, manages to pull ahead by nearly 17%, which is quite likely a manifestation of Fermi’s more variable performance. This makes the GT 220 comparison all the more appropriate: if Fermi’s CUDA cores are weaker on average, then GT 430 can’t hope to keep pace with GT 240.

To take a second look at CUDA core performance, we’ve also busted 3DMark Vantage out of the vault. As we’ve mentioned before, we’re not huge fans of synthetic tests like 3DMark, since they encourage non-useful driver optimizations for the benchmark instead of real games, but the purely synthetic tests do serve a useful purpose when trying to get to the bottom of certain performance situations.

We’ll start with the Perlin Noise test, which is supposed to be computationally bound, similar to Folding @ Home.

Once more we see the GT 430 come in behind the GT 240, even though the GT 430 has the theoretical advantage due to clockspeed. The loss isn’t nearly as great as it was under Folding @ Home, but it lends more credence to the theory that Fermi shaders are less efficient than GT21x CUDA cores. As a card for development, GT 430 still has a number of advantages such as the aforementioned FP64 support and C++ support in CUDA, but if we were trying to use it as a workhorse card it looks like it wouldn’t be able to keep up with GT 240. Based on our gaming results earlier, this would seem to carry over to shader-bound games, too.

Moving on, we also used this opportunity to look at 3DMark Vantage’s color fill test, which is a ROP-bound test. With only 4 ROPs on the GT 430, this is the perfect synthetic test for seeing if having fewer ROPs really is an issue when we’re comparing GT 430 to older cards.

And the final verdict? A not-very-useful yes and no. GT 220 and GT 240 both have 8 ROPs, with GT 220 holding the clockspeed advantage, which is why GT 220 ends up ahead of GT 240 here by less than 100 MPixels/sec. On the other hand, GT 430 has a clockspeed advantage of its own while possessing half the ROPs. The end result is that GT 430 is effectively tied with these previous-generation cards, which is quite a remarkable feat for a card with half the ROPs.
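For context on the color fill result, theoretical fill rate is simply ROPs multiplied by core clock. The sketch below uses assumed reference core clocks (625MHz for GT 220, 550MHz for GT 240, 700MHz for GT 430; these figures are not taken from the text) to show why a tie is remarkable: on paper GT 430 has little more than half the fill rate of the older cards.

```python
# Theoretical peak color fill rate: ROPs x core clock (MHz) = MPixels/sec.
# Clock figures are assumed reference specs, not taken from the review text.
cards = {
    "GT 220": {"rops": 8, "core_mhz": 625},
    "GT 240": {"rops": 8, "core_mhz": 550},
    "GT 430": {"rops": 4, "core_mhz": 700},
}

for name, c in cards.items():
    fill = c["rops"] * c["core_mhz"]
    print(f"{name}: {fill} MPixels/sec theoretical")
```

On paper GT 430 should trail badly in a pure fill test; the measured tie implies that each Fermi ROP is doing substantially more work per clock than its GT21x counterparts.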

NVIDIA worked on making the Fermi ROPs more efficient, and it has paid off by letting them use 4 ROPs to do what took 8 in the last generation. With this data in hand, NVIDIA’s position that 4 ROPs is enough is much more defensible, as they’re at least delivering last-generation ROP performance on a die not much larger than GT216 (GT 220). This still isn’t enough data to say whether the ROPs alone are the biggest culprit in the GT 430’s poor gaming performance, but it does mean we can’t rule out less efficient shaders either.

Do note, however, that while Fermi ROPs are more efficient than GT21x ROPs, that’s only a saving grace when comparing against past-generation architectures. GT 430 still has only 1/4 the ROP power of the GTS 450, which definitely hurts the card compared to its more expensive sibling.

120 Comments


  • heflys - Monday, October 11, 2010 - link

    Seriously?
  • Belard - Tuesday, October 12, 2010 - link

    Overall, this card isn't impressive at all... the pros are there, and AMD does need 3D and physics abilities.

    But at $80, it goes against the 5650 cards and easily loses.

    About HDMI 1.4b... it doesn't really matter. HDMI is dead... faster than it should be, but there is no future in it. CAT6-A/V will start replacing HDMI in 2011... all the big TV players are on board - they don't have to pay the licensing fees or use the special expensive connectors and cabling of HDMI.

    And HTPCs will not get very popular until the cable companies loosen up about people accessing channels like HBO, SHO, etc. Windows 7 Media Center is nice, but the interface is still rather weak for power users compared to some of the others out there. For example, the program grid is HORRIBLE... when others allow 2~4 hours of blocks and around 20 channels at a time... none of this 1.5hr / 6 channel junk. Oh, and the DRM of Media Center makes archiving your shows near impossible - like if you have to reinstall the OS or do a system upgrade.
  • heflys - Tuesday, October 12, 2010 - link

    According to most review sites, things like PhysX and 3d vision are nothing but gimmicks that contribute little to actual performance. Instead, most view them as pointless system hogs.
  • Belard - Tuesday, October 12, 2010 - link

    er... PhysX and 3D have never been about improving performance. They're about adding to the visual experience. Like Avatar looks great in 2D and 3D... but 3D sucks you in a bit more.

    Games like Mirror's Edge become more realistic with PhysX, even though it doesn't improve gameplay one bit.

    Those technologies are new, and until PhysX becomes shared/standard on all video cards - it will be more gimmick than standard. But who knows...

    Hmmm... back around 1988 when computers were 8~16 MHz, only Macs and Amigas pretty much had a native GUI OS; MS had horrible MS-DOS with 8.3 file names, no multi-tasking, horrible graphics, and forget about sound. Someone from the DOS camp said "Who needs graphics and sound, those are for toys. PCs are REAL computers".

    Uh huh. And now we have 1000 MHz cell phones with 16GB of RAM.

    The 1986-vintage Amiga had graphics, sound and multi-tasking... was it a gimmick?
  • heflys - Tuesday, October 12, 2010 - link

    "Performance" was a typo on my part, since I clearly indicated that it was a system hogs. Physx, in most cases-as displayed titles such as Mafia II- contribute little to nothing (in some games) towards graphics. Most players won't even notice such things as enhanced physics or improved decals. In fact, the most noticeable thing displayed in Mafia II was the presence of debris. Players will, however, notice the impressive amount of lag brought on by such features.

    3D Vision, as displayed in one review, reduced the GPU (a GTX 460 1GB) to unplayable frame rates. It essentially required the player to go SLI. Which brings me to another point..... Why are you bringing up PhysX or 3D Vision in regards to this product? You seriously think this cheap HTPC card could handle any of the above features, particularly when a 1GB 460 struggles to?

    And are we seriously comparing the Amiga to such an insignificant thing as cheesy video game effects? You can't be serious. Particularly when there are other physics engines (Havok being one of the most prominent) doing some of the same things.

    However, please tell me how Physx made Mirror's Edge a more realistic experience. Particularly since that game, like Mafia II, only added physics to debris.
  • Belard - Tuesday, October 12, 2010 - link

    I agree with you on the first paragraph. We want constant visual abilities, but without the cost of general performance.

    This was one of the arguments of 3dfx's Voodoo3 vs. TNT cards -
    performance with 16-bit graphics vs. NVIDIA's 32-bit.

    When I played the JSF game around 1999-2000, the 16-bit limitation was noticeable BIG time on my Voodoo1, but the frame rate murdered the ATI card I had. It was a trade-off. This is a constant battle with our GPUs... remember when AA was added? Even today, AA affects the performance of every single video card - but unlike 8 years ago, it no longer renders most cards useless.

    Yeah, 3D Vision & PhysX are useless on the GT 430... pretty much like ATI's Eyefinity tech doesn't belong on every ATI card (reduce the cost by $10, improve airflow) - especially for the low-end, though it's very handy for business users.

    You said: "And are we seriously comparing the Amiga to such an insignificant thing as cheesy video game effects?"

    Yes, in that PhysX and 3D tech is still baby tech. In a few years, we'll start seeing 3D TVs that don't require glasses. PhysX or Havok or another engine becomes more standard - or perhaps MS adds it to DX12. It's going to be years before we see results of the latest technology. Just like the PC folks of the 80's who said the Amiga was a toy and computers didn't need graphics and sound. And yes, my Amigas still work.

    "please tell me how Physx made Mirror's Edge a more realistic experience." Look up the various side-by side videos. It adds cloth effects, broken glass and yes, debris. A side by same example: http://www.youtube.com/watch?v=w0xRJt8rcmY and check out batman too.

    Of course, that didn't help to actually POPULATE the city of Mirror's Edge with people... funny, a huge modern city with only a few people and police, with all that construction - where are the workers? Another example: a burger that is just meat and bread is bland... but add some tomatoes, lettuce and cheese, and it becomes a better meal.
  • heflys - Tuesday, October 12, 2010 - link

    Thanks for the civil discussion....I half expected you to call me an idiot for some reason......Don't know why.....

    I think ATI's just going to bide its time with the 3d/Physics display, since at this point, they don't really need to invest in that platform. Maybe in the future.
  • Belard - Wednesday, October 13, 2010 - link

    Would it make you feel better if I did? :)

    I've been into computers for a long long time - and I do my best to NOT be a fanboy. Give credit where credit is due... Apple, Intel, MS, AMD, Nvidia, Opera, FireFox, etc.

    What gives me/us the best deal at the time of purchase.

    In 2015, our graphics on consoles (don't know about computers) will make today's GPUs look like a GeForce 5900/ATI 9700 in terms of performance and abilities.

    We'll see. Perhaps Archive this page?
  • drjonz - Tuesday, October 12, 2010 - link

    Why no comparison to integrated Intel Clarkdale? Many of us with HTPCs went with that since we're not gamers. I've been really happy with it. Maybe once per Blu-ray viewing, I'll get a stutter. Not sure if it's because I'm underpowered or what. Would be cool to see what more I'd get for $100.
  • ganeshts - Tuesday, October 12, 2010 - link

    We mentioned the HQV score for Clarkdale (Intel HD Graphics) as 133, much lower than the 5570's and slightly lower than the 430's.

    Please take a look at the Core 100 review we carried a few months back. It covered the Arrandale platform for HTPCs, and that platform is quite good for casual HTPC users.
