The Latest CUDA App: MotionDSP’s vReveal

NVIDIA had more slides in its GTX 275 presentation about non-gaming applications than it did about how the 275 performs in games. One such application is MotionDSP’s vReveal, a CUDA-enabled video post-processing application that can clean up poorly recorded video.

The application’s interface is simple:

Import your videos (pretty much anything with a codec your system supports) and then select Enhance.

You can auto-enhance with a single click (super useful), or go into the advanced mode and tweak individual sliders and settings on your own.

The changes you make to the video are visible on the fly, but the real-time preview is faster on an NVIDIA GPU than if you rely on the CPU alone.

When you’re all done, simply hit save to disk and the video is re-encoded with your changes applied. The encoding can run entirely on the GPU, though it will also work on a CPU.
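vReveal’s filters are proprietary, so the sketch below is only an illustration of the kind of data-parallel, per-pixel work a tool like this can offload to CUDA. The kernel name, parameters, and the simple gain/bias “enhancement” are my own assumptions, not MotionDSP’s code:

```
// Hypothetical per-pixel enhancement pass: a minimal sketch of the sort of
// data-parallel work a video clean-up tool can offload to the GPU.
// This is NOT MotionDSP's code; names and parameters are illustrative only.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

__global__ void enhanceFrame(const unsigned char* in, unsigned char* out,
                             int width, int height, float gain, float bias)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;              // one 8-bit luma sample per thread
    float v = gain * in[idx] + bias;      // simple contrast/brightness tweak
    out[idx] = (unsigned char)fminf(fmaxf(v, 0.0f), 255.0f);
}

int main()
{
    const int width = 640, height = 480;
    const size_t bytes = (size_t)width * height;

    // A stand-in "frame": in a real tool this would come from the decoder.
    unsigned char* hostFrame = (unsigned char*)malloc(bytes);
    for (size_t i = 0; i < bytes; ++i) hostFrame[i] = (unsigned char)(i % 256);

    unsigned char *dIn, *dOut;
    cudaMalloc(&dIn, bytes);
    cudaMalloc(&dOut, bytes);
    cudaMemcpy(dIn, hostFrame, bytes, cudaMemcpyHostToDevice);

    // One thread per pixel: this embarrassingly parallel structure is why
    // the GPU preview and encode paths outrun a CPU doing the same work.
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    enhanceFrame<<<grid, block>>>(dIn, dOut, width, height, 1.2f, 10.0f);
    cudaMemcpy(hostFrame, dOut, bytes, cudaMemcpyDeviceToHost);

    printf("Processed %dx%d frame, first pixel = %d\n", width, height, hostFrame[0]);

    cudaFree(dIn);
    cudaFree(dOut);
    free(hostFrame);
    return 0;
}
```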

First, let’s look at the end results. We took three videos: one recorded on Derek’s wife’s BlackBerry, and two that I shot in my office on a Canon HD camera (but at low resolution).

I relied on vReveal’s auto-tune to fix the videos, and I’ve posted the originals and the vReveal versions on YouTube. The videos are below:

In every single instance, the resulting video looks better. It’s not quite the magical “enhance” technology you see in shows like 24, but it does make your videos look better, and it does so fairly quickly. There’s no real support for video editing here, and I’m not familiar enough with the video post-processing market to say whether or not better alternatives exist, but vReveal does what it says it does. And it uses the GPU.

Performance is also very good, even on a reasonably priced GPU. The GeForce GTX 260 took 51 seconds to save the first test video; my Dell Studio XPS 435’s Core i7 920 took just over 3 minutes to do the same task.

It’s a neat application and it works as advertised, but it only works on NVIDIA hardware. Will it make me want to buy an NVIDIA GPU over an ATI one? Not on its own. If all else is equal (price, power, and gaming performance), then perhaps. But if ATI provides the better gaming experience, I don’t believe vReveal is compelling enough to tip the balance.

First, the software isn’t free; it’s an added expense. Badaboom costs $30, and vReveal costs $50. That’s not the most expensive software in the world, but it’s not free either.

And second, what happens if your next GPU isn’t from NVIDIA? vReveal will continue to work, but you no longer get GPU acceleration. A vReveal-like application written in OpenCL would run accelerated on all three vendors’ hardware, as long as their drivers support OpenCL.
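That fallback behavior is easy to picture: an application can probe for a CUDA-capable device at startup and route work to its CPU path when none is found, which is effectively what you get the day your GPU is no longer an NVIDIA one. The check below is a hypothetical sketch (again, not vReveal’s actual code; cudaAvailable is my own helper name):

```
// Hypothetical startup check: take the GPU path only if a CUDA device exists.
// On a non-NVIDIA system cudaGetDeviceCount reports zero devices (or an error),
// and the application quietly falls back to its CPU implementation.
#include <cuda_runtime.h>
#include <cstdio>

bool cudaAvailable()
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    return (err == cudaSuccess) && (count > 0);
}

int main()
{
    if (cudaAvailable())
        printf("Using the CUDA-accelerated path.\n");
    else
        printf("No CUDA device found; using the CPU path.\n");
    return 0;
}
```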

If NVIDIA really wants to take care of its customers, it could start by giving vReveal (and Badaboom) away to people who buy these high-end graphics cards. If you want to add value, don’t tell users they should want these things; give the software to them. The burden of proof is on NVIDIA to show that these CUDA-enabled applications are worth supporting rather than waiting for cross-vendor OpenCL versions.

Do you feel any differently?

Comments (294)

  • 7Enigma - Thursday, April 2, 2009 - link

    Deja vu again, and again, and again. I've posted in no fewer than 3 other articles how bad some of the conclusions have been. There is NO possible way you could conclude the 275 is the better card at anything other than the 30" display resolution. Not only that, but it appears that with the latest Nvidia drivers things are getting worse.

    Honestly, does anyone else see the parallel between the original OCZ SSD firmware and these new Nvidia drivers? It seems like they were willing to sacrifice 99% of their customers for the 1% who have 30" displays (and who probably wouldn't even be looking at the $250 price point). Nvidia, take a note from OCZ's situation: trading lower performance at 30" for better performance at 22-24" resolutions would serve you much better in the $250 price segment. You shot yourselves in the foot on this one...
  • Gary Key - Thursday, April 2, 2009 - link

    The conclusion has been clarified to reflect the resolution results. It falls right in line with your thoughts and others', as well as our original thoughts that did not make it through the edits correctly.
  • 7Enigma - Thursday, April 2, 2009 - link

    Yup, I responded to Anand's post with a thank you. We readers just like to argue, and when something doesn't make sense we're quick to go on the attack, but we're also quick to understand and appreciate a correction.
  • duploxxx - Thursday, April 2, 2009 - link

    Just some thoughts:

    There is only 1 benchmark out of 7 where the 275 has better frame rates than the 4890 at the 1680 and 1920 resolutions, and yet your final words are that you favor the 275???? Only at 2560 is the 275 clearly the better choice. Are you already in the year 2012, where 2560 might be the standard resolution? It is only very recently that 1680 became standard, and even that resolution is high for the global OEM market. 2560 isn't even a few percent of the market.

    I think you have to clarify your final words a bit more. Perhaps if we saw power consumption, fan noise, etc., that would add value to the choice, but for now TWIMTBP is really not enough of a push to prefer the card. I am sure the red team will improve their drivers as usual, too.

    Anything else I missed in your review that could counter my thoughts?
  • SiliconDoc - Monday, April 6, 2009 - link

    Derek has been caught up in the "2560 wins it all no matter what" mindset after months on end of ATI taking that cake since the 4870 release. Lower resolutions didn't matter for squat since ATI lost there, so you'll have to excuse his months-long brainwashing.
    Thankfully Anand checked in and smacked it out of his review just in time for the red fanboys to start enjoying lower-resolution wins while Nvidia takes the high-resolution crown, which is, well... not a win here anymore.
    Congratulations, red roosters.
  • duploxxx - Thursday, April 2, 2009 - link

    Just as an add-on: I also checked some other reviews (yes, I always read AnandTech first as my main source of info) and saw that it runs cooler than a 4870 and actually consumes 10% less than a 4870, so this can't be the reason either, while the 275 stays at the same power consumption as the 280. Also, OC parts have already been shown with the GPU above 1000...
  • cyriene - Thursday, April 2, 2009 - link

    I would have liked to see some information on heat output and the temperatures of the cards while gaming.
    Otherwise, nice article.
  • 7Enigma - Thursday, April 2, 2009 - link

    This is an extreme omission. The fact that the 4890 is essentially an overclocked 4870, with virtually nothing changed, means you HAVE to show the temps. I still stick by my earlier comment that the Vapo-chill model of the Sapphire 4870 is possibly a better card, since its temps are significantly lower than the stock 4870's while it is already overclocked. I could easily imagine that for $50-60 less you could have the performance of the 4890 at cooler temps (by OC'ing the Vapo-chill further).

    C'mon guys, you have to give some thought to this!
  • SiliconDoc - Monday, April 6, 2009 - link

    Umm, they (you know, the AT bosses) don't like the implications of that. So many months, even years, spent screeching like women about Nvidia rebranding has put them in a very difficult position.
    Besides, they have to keep up the illusion of superior red power usage, so only after demand will they put up the power chart.
    They tried to get away without it, but they couldn't.
  • initialised - Thursday, April 2, 2009 - link

    GPU-Z lists the RV790 as having a die area of 282mm² while the RV770 has 256mm², but both are listed as having the same transistor count.
