Compute and Synthetics

With gaming performance covered, we have our customary look at compute performance. Kepler’s compute performance has been hit and miss, as we’ve seen with GK104 cards, so it will be interesting to see how GK107 fares.

Our first compute benchmark comes from Civilization V, which uses DirectCompute to decompress textures on the fly. Civ V includes a sub-benchmark that exclusively tests the speed of their texture decompression algorithm by repeatedly decompressing the textures required for one of the game’s leader scenes. Note that this is a DX11 DirectCompute benchmark.

Because this is a compute benchmark, the massive increase in ROPs going from the GT 440 to the GT 640 doesn’t help the GT 640, which means the GT 640 is relying on the smaller increase in shader performance. The end result is that the GT 640 neither greatly improves on the GT 440 nor is it competitive with the 7750. Compared to the GT 440, compute shader performance only improves by 28%, and the 7750 is some 50% faster here. I suspect memory bandwidth is still a factor, so we’ll have to see what GDDR5 cards are like.
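
To put a rough number on that bandwidth gap, theoretical memory bandwidth is simply the effective data rate multiplied by the bus width. The back-of-the-envelope calculation below assumes the reference memory configurations for both cards (retail boards may be clocked differently), so treat the figures as approximate:

// Back-of-the-envelope theoretical memory bandwidth, assuming reference configs.
// Both cards use a 128-bit (16 byte) memory bus; the clocks are assumed values.
#include <cstdio>

int main()
{
    const double bus_bytes   = 128.0 / 8.0;  // 128-bit bus
    const double gt640_rate  = 1.782e9;      // DDR3, ~1782 MT/s effective (assumed)
    const double hd7750_rate = 4.5e9;        // GDDR5, 4.5 GT/s effective (assumed)

    printf("GT 640 (DDR3):   %.1f GB/s\n", gt640_rate  * bus_bytes / 1e9);  // ~28.5 GB/s
    printf("HD 7750 (GDDR5): %.1f GB/s\n", hd7750_rate * bus_bytes / 1e9);  // ~72.0 GB/s
    return 0;
}

On those assumptions the 7750 has roughly 2.5x the memory bandwidth of the DDR3 GT 640, which lines up with the suspicion that bandwidth, rather than shader throughput, is the limiting factor here.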

Our next benchmark is SmallLuxGPU, the GPU ray tracing branch of the open source LuxRender renderer. We’re now using a development build from the version 2.0 branch, and we’ve moved on to a more complex scene that hopefully will provide a greater challenge to our GPUs.

NVIDIA’s poor OpenCL performance under Kepler doesn’t do them any favors here. Even the GT 240 – a DX10.1 card that doesn’t have the compute enhancements of Fermi – manages to beat the GT 640 here. And the GT 440 is only a few percent behind the GT 640.

For our next benchmark we’re looking at AESEncryptDecrypt, an OpenCL AES encryption routine that AES encrypts/decrypts an 8K x 8K pixel square image file. The result of this benchmark is the average time to encrypt the image over a number of iterations of the AES cipher.
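
Mechanically, this style of benchmark boils down to launching the same kernel repeatedly and averaging the GPU time per iteration. Below is a minimal sketch of that measurement loop; it uses CUDA events for consistency with the other sketches here (the actual AESEncryptDecrypt sample is OpenCL and uses its own timers), and the kernel body and iteration count are placeholders rather than the sample’s real AES implementation:

// Minimal sketch: average GPU time over N iterations of the same kernel.
// aes_kernel is a stand-in, not the AESEncryptDecrypt sample's actual code.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void aes_kernel(const unsigned char* in, unsigned char* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] ^ 0x2A;   // placeholder for the real AES rounds
}

int main()
{
    const int n = 8192 * 8192 * 4;      // 8K x 8K RGBA image, as in the benchmark
    const int iterations = 10;          // assumed iteration count
    unsigned char *d_in, *d_out;
    cudaMalloc(&d_in, n);
    cudaMalloc(&d_out, n);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Time all iterations in one span, then divide by the iteration count.
    cudaEventRecord(start);
    for (int it = 0; it < iterations; ++it)
        aes_kernel<<<(n + 255) / 256, 256>>>(d_in, d_out, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("Average time per iteration: %.2f ms\n", ms / iterations);

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}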

The GT 640 is at the very bottom of the chart. NVIDIA’s downplaying of OpenCL performance is a deliberate decision, but it’s also a decision with consequences.

Our fourth benchmark is once again looking at compute shader performance, this time through the Fluid simulation sample in the DirectX SDK. This program simulates the motion and interactions of a 16K particle fluid using a compute shader, with a choice of several different algorithms. In this case we’re using an O(n^2) nearest neighbor method that is optimized by using shared memory to cache data.
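
To illustrate the shared memory optimization the sample relies on, here is a minimal sketch of a shared-memory-tiled O(n^2) interaction pass. It’s written as a CUDA kernel rather than the sample’s actual DirectCompute shader, and the particle structure and pairwise term are placeholders rather than the SDK’s code; the point is simply that each tile of particle data is fetched from memory once per thread block and then reused many times out of on-chip storage:

// Minimal sketch of a shared-memory-tiled O(n^2) particle interaction pass.
// Written in CUDA for consistency; the DirectX SDK sample uses a compute shader.
// Particle, interact(), and TILE are illustrative names, not the sample's.
#include <cuda_runtime.h>

#define TILE 256   // launch with TILE threads per block

struct Particle { float4 pos; };

__device__ float4 interact(float4 a, float4 b, float4 accum)
{
    // Placeholder pairwise term; the real sample evaluates SPH density/forces.
    float3 d = make_float3(b.x - a.x, b.y - a.y, b.z - a.z);
    float r2 = d.x * d.x + d.y * d.y + d.z * d.z + 1e-6f;
    accum.x += d.x / r2;
    accum.y += d.y / r2;
    accum.z += d.z / r2;
    return accum;
}

__global__ void nsquared_pass(const Particle* __restrict__ in, float4* out, int n)
{
    __shared__ float4 tile[TILE];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float4 mine = (i < n) ? in[i].pos : make_float4(0, 0, 0, 0);
    float4 acc  = make_float4(0, 0, 0, 0);

    // Walk the particle list one tile at a time: each tile is read from
    // DRAM once per block, then reused TILE times out of shared memory.
    for (int base = 0; base < n; base += TILE) {
        int j = base + threadIdx.x;
        tile[threadIdx.x] = (j < n) ? in[j].pos : make_float4(0, 0, 0, 0);
        __syncthreads();
        for (int k = 0; k < TILE && base + k < n; ++k)
            acc = interact(mine, tile[k], acc);
        __syncthreads();
    }
    if (i < n) out[i] = acc;
}

Because each block performs TILE times as many interaction evaluations as it does global memory reads, a workload structured this way leans on shader throughput and on-chip storage rather than raw memory bandwidth, which fits the behavior described below.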

All indications are that our fluid simulation benchmark is light on memory bandwidth usage and heavy on cache usage, which makes this a particularly exciting benchmark. Our results back this theory, as for the first and only time the GT 640 shoots past the GTS 450 and comes close to tying the GTX 550 Ti. The 7750 still handily wins here, but based on the specs of GK107 I believe this is the benchmark most representative of what GK107 is capable of when it’s not facing such a massive memory bandwidth bottleneck. It will be interesting to see what GDDR5 GK107 cards do here, if only to further validate our assumptions about this benchmark’s memory bandwidth needs.

Our final benchmark is a look at CUDA performance, based on a special benchmarkable version of the CUDA Folding@Home client that NVIDIA and the Folding@Home group have sent over. Folding@Home and similar projects remain among the most popular consumer compute workloads, so it’s something NVIDIA wants their GPUs to do well at.

Folding@Home has historically pushed both shader performance and memory bandwidth, so it’s not particularly surprising that the GT 640 splits the difference. It’s faster than the GT 440 by 32%, but the GTS 450 still has a 25% lead in spite of the fact that the GT 640 has the greater theoretical compute performance. This is another test that will be interesting to revisit once GDDR5 cards hit the market.

Synthetics

Jumping over to synthetic benchmarks quickly, it doesn’t look like we’ll be able to tease much more out of GK107 at this time. The GT 640 looks relatively good under 3DMark in both the Pixel Fill and Texel Fill tests, but as we’ve seen, real-world performance doesn’t match that. Given that the GT 640 does this well with DDR3, however, it’s another sign that a GDDR5 card may be able to significantly improve on the DDR3 GT 640.

Tessellation performance is also quite poor here; however, there’s no evidence that this is a memory bandwidth issue. The culprit appears to be the scalability of NVIDIA’s tessellation design – it scales down just as readily as it scales up, leaving cards with a low number of SMXes with relatively low tessellation performance. NVIDIA’s improvements to their PolyMorph Engines do shine through here, as evidenced by the GT 640’s performance improvement relative to the GT 440, but it’s not a complete substitute for simply having more PolyMorph Engines.

Comments

  • HighTech4US - Wednesday, June 20, 2012

    At least when the GT240 was released it came in both DDR3 and GDDR5.
  • UNhooked - Wednesday, June 20, 2012

    I wish there were some sort of video encoding benchmark. I have been told AMD/ATI cards aren't very good when it comes to video encoding.
  • mosu - Thursday, June 21, 2012

    Who told you that kind of crap? Please check the internet.
  • Rumpelstiltstein - Thursday, June 21, 2012

    Did this low-end offering really manage to pull off these kinds of numbers? I'm impressed. Not something I would buy personally, but I would have no problems recommending this to someone else.
  • Samus - Thursday, June 21, 2012

    DDR3... ruined a perfectly good chip.
  • Deanjo - Thursday, June 21, 2012

    "Really the only thing we don’t have a good handle on for HTPC usage right now is video encoding through NVENC. We’ve already seen NVENC in action with beta-quality programs on the GTX 680, but at this point we’re waiting on retail programs to ship with support for both NVENC and VCE so that we can better evaluate how well these programs integrate into the HTPC experience along with evaluating the two encoders side-by-side. But for that it looks like we won’t have our answer next month."

    Noooooo! Come on, post some benchmarks as it is right now. Some of us do not want to wait for AMD to get their VCE in order. People have been waiting for VCE for months and there is no valid reason to hold off on NVENC waiting for their competitor to catch up. When and if VCE support comes out, run a comparison then.
  • ganeshts - Thursday, June 21, 2012

    NVIDIA indicated that official NVENC support in CyberLink / ArcSoft transcoding applications would come in July only. Till then, it is beta, and has scope for bugs.
  • Deanjo - Thursday, June 21, 2012

    So? That didn't prevent them from benchmarking Trinity and its encoding capabilities despite it all being beta there.

    http://www.anandtech.com/show/5835/testing-opencl-...
  • drizzo4shizzo - Thursday, June 21, 2012

    So... do these new cards still support HDTV 1080i analog signals for those of us who refuse to give up our 150lb 34" HDTV CRTs?

    I.e., do they ship with a breakout dongle cable that plugs into the DVI-I port? If they don't ship with one, can anyone tell me if they are compatible with a 3rd party solution? For it to work the card has to convert to the YUV colorspace. My old 7600gt *did* support this feature, but none of the new cards mention it...

    Upgrading my TV also means buying a new receiver for HDMI switching to the projector, fishing cable in walls, and all manner of other unacceptable tradeoffs. Plus monay.

    Thanks!
  • philipma1957 - Thursday, June 21, 2012

    I have a sapphire hd7750 ultimate passive cooled card.

    This card seems to be worse in every case except it is 1 slot not 2.

    The passive hd7750 is 125 usd; this one is 110 usd.

    I am not sure that I would want this until they make a passive version.
