Compute

Our final set of performance benchmarks is compute performance, which for dual-GPU cards is always a mixed bag. Unlike gaming, where the somewhat genericized AFR process is applicable to most games, when it comes to compute the ability for a program to make good use of multiple GPUs lies solely in the hands of the program’s authors and the algorithms they use.

At the same time, while we’re covering compute performance for completeness, the 295X2’s high price and unconventional cooling apparatus are likely to deter most serious compute users.

In any case, our first compute benchmark is LuxMark 2.0, the official benchmark of SmallLuxGPU 2.0. SmallLuxGPU is an OpenCL accelerated ray tracer that is part of the larger LuxRender suite. Ray tracing has become a stronghold for GPUs in recent years, as it maps well to GPU pipelines, allowing artists to render scenes much more quickly than with CPUs alone.

Compute: LuxMark 2.0

As one of the few compute tasks that’s generally multi-GPU friendly, ray tracing is going to be the best case scenario for compute performance for the 295X2. Under LuxMark AMD sees virtually perfect scaling, with the 295X2 nearly doubling the 290X’s performance under this benchmark. No other single card is currently capable of catching up to the 295X2 in this case.
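
To illustrate why ray tracing fits GPUs so well, below is a minimal, hypothetical sketch of a GPU ray tracer, and emphatically not LuxMark’s actual code: every pixel’s ray is handled by an independent OpenCL work-item, and because rays never interact, the application can hand each GPU in the system its own band of image rows. The kernel, the "trace" name, and the single-sphere scene are our own illustrative assumptions; only the standard Khronos OpenCL C++ bindings are used, and multi-GPU concurrency is deliberately left out for brevity.

    // Illustrative sketch only: one OpenCL work-item per pixel, with the image
    // split into bands of rows so every GPU in the context gets its own band.
    // Assumes the Khronos C++ bindings (<CL/cl.hpp>) and at least one OpenCL GPU.
    #include <CL/cl.hpp>
    #include <cstring>
    #include <iostream>
    #include <vector>

    static const char* kSrc = R"CLC(
    __kernel void trace(__global float* img, int width, int height) {
        int x = get_global_id(0), y = get_global_id(1);   // global offset selects the band
        if (x >= width || y >= height) return;
        // One independent camera ray per work-item.
        float3 dir  = normalize((float3)((x - width * 0.5f) / width,
                                         (y - height * 0.5f) / height, 1.0f));
        float3 orig = (float3)(0.0f, 0.0f, -3.0f);
        // Intersect the ray against a unit sphere at the origin.
        float b = dot(orig, dir);
        float c = dot(orig, orig) - 1.0f;
        img[y * width + x] = (b * b - c > 0.0f) ? 1.0f : 0.0f;  // hit = white, miss = black
    }
    )CLC";

    int main() {
        const int W = 1280, H = 720;
        cl::Context ctx(CL_DEVICE_TYPE_GPU);               // grabs every GPU on one platform
        std::vector<cl::Device> devs = ctx.getInfo<CL_CONTEXT_DEVICES>();
        cl::Program::Sources src(1, std::make_pair(kSrc, std::strlen(kSrc)));
        cl::Program prog(ctx, src);
        prog.build(devs);
        cl::Kernel kernel(prog, "trace");
        cl::Buffer img(ctx, CL_MEM_WRITE_ONLY, sizeof(float) * W * H);
        kernel.setArg(0, img);
        kernel.setArg(1, W);
        kernel.setArg(2, H);

        std::vector<float> out(W * H);
        int band = H / (int)devs.size();                   // rows per GPU (assumes an even split)
        for (size_t d = 0; d < devs.size(); ++d) {
            cl::CommandQueue q(ctx, devs[d]);
            int y0 = (int)d * band;
            // Launch only this GPU's band of rows via a global work offset.
            q.enqueueNDRangeKernel(kernel, cl::NDRange(0, y0), cl::NDRange(W, band));
            // Blocking read of the same band; the in-order queue finishes the kernel first.
            q.enqueueReadBuffer(img, CL_TRUE, sizeof(float) * y0 * W,
                                sizeof(float) * band * W, out.data() + y0 * W);
        }
        std::cout << "Rendered " << W << "x" << H << " across "
                  << devs.size() << " GPU(s)" << std::endl;
    }

A real renderer would keep both GPUs busy concurrently (per-device buffers or events rather than the serialized loop above), but the division of labor is the point: it’s the application, not the driver, that decides how to carve up the frame, which is exactly why compute scaling is so dependent on the software.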

Our second compute benchmark is Sony Vegas Pro 12, an OpenGL and OpenCL video editing and authoring package. Vegas can use GPUs in a few different ways, the primary uses being to accelerate the video effects and compositing process itself, and the video encoding step. With video encoding being increasingly offloaded to dedicated DSPs these days, we’re focusing on the editing and compositing process, rendering to a low CPU overhead format (XDCAM EX). This specific test comes from Sony, and measures how long it takes to render a video.

Compute: Sony Vegas Pro 12 Video Render

Sony Vegas Pro, on the other hand, sees no advantage from multiple GPUs. The 295X2 does just as well as the other Hawaii cards at 22 seconds, sharing the top of the chart, but the second GPU goes unused.

Our third benchmark set comes from CLBenchmark 1.1. CLBenchmark contains a number of subtests; we’re focusing on the most practical of them, the computer vision test and the fluid simulation test. The former is a useful proxy for computer imaging tasks where systems are required to parse images and identify features (e.g. humans), while fluid simulations are common in professional graphics work and games alike.
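
To give an idea of the sort of work the fluid simulation test represents, here’s a deliberately simplified, hypothetical sketch of a single Jacobi-style diffusion step on a 2D grid; it is not CLBenchmark’s actual algorithm. The key property is that each cell’s new value depends only on its neighbors from the previous iteration, so every cell can be updated independently, which is exactly the kind of data parallelism GPUs are built for.

    // Hypothetical sketch of one diffusion step on a 2D grid (interior cells only).
    // 'src' holds the previous iteration; 'dst' receives the new values. Because no
    // cell reads another cell's *new* value, every cell could be its own GPU thread.
    #include <cstdio>
    #include <vector>

    void diffuse_step(const std::vector<float>& src, std::vector<float>& dst,
                      int w, int h, float k /* diffusion rate, 0..1 */) {
        for (int y = 1; y < h - 1; ++y) {
            for (int x = 1; x < w - 1; ++x) {
                float neighbors = src[(y - 1) * w + x] + src[(y + 1) * w + x] +
                                  src[y * w + (x - 1)] + src[y * w + (x + 1)];
                dst[y * w + x] = (1.0f - k) * src[y * w + x] + 0.25f * k * neighbors;
            }
        }
    }

    int main() {
        const int W = 8, H = 8;
        std::vector<float> a(W * H, 0.0f), b(W * H, 0.0f);
        a[(H / 2) * W + W / 2] = 1.0f;       // a single "drop" in the middle of the grid
        for (int i = 0; i < 10; ++i) {       // iterate, ping-ponging the two buffers
            diffuse_step(a, b, W, H, 0.5f);
            a.swap(b);
        }
        std::printf("center cell after 10 steps: %f\n", a[(H / 2) * W + W / 2]);
    }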

Compute: CLBenchmark 1.1 Fluid Simulation

Compute: CLBenchmark 1.1 Computer Vision

Like Vegas Pro, the CLBenchmark sub-tests we use here don't scale with additional GPUs. So the 295X2 can only match the performance of the 290X on these benchmarks.

Moving on, our fourth compute benchmark is FAHBench, the official Folding @ Home benchmark. Folding @ Home is the popular Stanford-backed research and distributed computing initiative that distributes work to millions of volunteer computers over the internet, each of which is responsible for a tiny slice of a protein folding simulation. FAHBench can test both single precision and double precision floating point performance, with single precision being the most useful metric for most consumer cards due to their low double precision performance. Each precision has two modes, explicit and implicit, the difference being whether water atoms are included in the simulation, which adds quite a bit of work and overhead. This is another OpenCL test, as Folding @ Home has moved exclusively to OpenCL this year with FAHCore 17.

Compute: Folding @ Home: Explicit, Single Precision

Compute: Folding @ Home: Explicit, Double Precision

Unlike most of our compute benchmarks, Folding@Home does see some degree of multi-GPU scaling. However, the outcome is a mixed bag: single-precision performance ends up being a wash (if not a slight regression), while double-precision performance sees sub-50% scaling.

Wrapping things up, our final compute benchmark is an in-house project developed by our very own Dr. Ian Cutress. SystemCompute is our first C++ AMP benchmark, utilizing Microsoft’s simple C++ extensions to allow the easy use of GPU computing in C++ programs. SystemCompute in turn is a collection of benchmarks for several different fundamental compute algorithms, as described in this previous article, with the final score represented in points. DirectCompute is the compute backend for C++ AMP on Windows, so this forms our other DirectCompute test.
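
Since C++ AMP doesn’t get much exposure, a quick hypothetical example, not taken from SystemCompute itself, shows how little ceremony it requires: an array_view wraps ordinary host memory, and a restrict(amp) lambda passed to parallel_for_each is compiled down to DirectCompute, one GPU thread per element.

    // Hypothetical C++ AMP example (not SystemCompute's code): element-wise vector
    // addition offloaded to the default accelerator. Requires Visual C++ 2012+.
    #include <amp.h>
    #include <iostream>
    #include <vector>

    int main() {
        const int N = 1024;
        std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N);

        // array_view wraps host memory; data is copied to the GPU on demand.
        concurrency::array_view<const float, 1> av(N, a);
        concurrency::array_view<const float, 1> bv(N, b);
        concurrency::array_view<float, 1> cv(N, c);
        cv.discard_data();                      // don't bother copying c's old contents over

        // The restrict(amp) lambda runs on the GPU, one thread per element of the extent.
        concurrency::parallel_for_each(cv.extent,
            [=](concurrency::index<1> i) restrict(amp) {
                cv[i] = av[i] + bv[i];
            });

        cv.synchronize();                       // copy the results back into the host vector
        std::cout << "c[0] = " << c[0] << std::endl;   // expect 3
    }

Note that nothing in this model automatically spreads the parallel_for_each across a second GPU; work goes to a single accelerator unless the program explicitly targets more than one, which lines up with the flat multi-GPU results below.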

Compute: SystemCompute v0.5.7.2 C++ AMP Benchmark

Our final compute benchmark has the 295X2 and 290X virtually tied once again, as this is another benchmark that doesn’t scale up with multiple GPUs.

Comments

  • mickulty - Wednesday, April 9, 2014

    Well, Arctic's 6990 cooler wasn't far off. The Arctic Mono is good for 300W and it should be possible to fit two such heatsinks on one card. So it's possible. The resulting card would be absolutely huge though, and wouldn't be nearly as popular with gaming PC boutiques (i.e. the target market).

    Oh, VRM cooling might be an issue too. I guess a thermaltake-style heatpipe arrangement would fix that.
  • SunLord - Tuesday, April 8, 2014

    Huh, looking at that board and the layout of the cooling setup, you could swap in two independent closed-loop coolers pretty easily and try to overclock it if you want. And since you're rich if you buy this, it's totally viable for any owner.
  • nsiboro - Tuesday, April 8, 2014

    Ryan, thank you for a wonderfully written and informative review. Appreciate much.
  • behrouz - Tuesday, April 8, 2014

    Ryan Smith, please confirm this:

    The new NVIDIA driver does overclock the GTX 780 Ti, from 928MHz to 1019MHz. If so, temperatures should be increased.
  • behrouz - Tuesday, April 8, 2014

    And also power consumption.
  • Ryan Smith - Tuesday, April 8, 2014

    Overclock GTX 780 Ti? No. I did not see any changes in clockspeeds or temperatures that I can recall.
  • PWRuser - Tuesday, April 8, 2014

    I have an Antec Signature 850W sitting in the closet. Is the 295X2 too much for it?
    It's this one: http://www.jonnyguru.com/modules.php?name=NDReview...
  • Dustin Sklavos - Tuesday, April 8, 2014

    Word of warning: do not use daisy-chained PCIe power connectors (i.e. one connection to the power supply and two 8-pins to the graphics card). If AMD wasn't going over the per-connector power spec it wouldn't be an issue, but they are, which means you can melt the connector at the power supply end. Those daisy-chained PCIe connectors are meant for 300W max, not 425W.

    We've been hearing about this from a bunch of partners and I believe end users should be warned.
  • PWRuser - Tuesday, April 8, 2014

    Thank you. According to the specs my PSU could handle these GPUs separately; I guess utilizing 2 PCIe slots via 2 separate cards alleviates the strain.
  • extide - Tuesday, April 8, 2014

    No, it has nothing to do with how many cards or slots. It's how many CABLES from the PSU.

    Sometimes you can have a single cable with two PCIe connectors on the end, one daisy-chained off the other. What he is saying is: don't use connectors like that, use two individual cables instead.

    Although, unless the PSU you are using has really crappy (thin) power cables, it should be OK even with a single cable. But yeah, it's definitely a good idea to use two!
