Gaming Performance

Sure, compute is useful. But be honest: you came here for the 4K gaming benchmarks, right?

Battlefield 1 - 3840x2160 - Ultra Quality

Battlefield 1 - 99th Percentile - 3840x2160 - Ultra Quality

Ashes of the Singularity: Escalation - 3840x2160 - Extreme Quality

Ashes: Escalation - 99th Percentile - 3840x2160 - Extreme Quality

Battlefield 1 (DX11) and Ashes (DX12) already make it clear that the Titan V is not a monster gaming card, though it is still faster than the Titan Xp. This is not unexpected, as the Titan V's focus is much further from gaming than that of the previous Titan cards.

Doom - 3840x2160 - Ultra Quality

Doom - 99th Percentile - 3840x2160 - Ultra Quality

Ghost Recon Wildlands - 3840x2160 - Very High Quality

Deus Ex: Mankind Divided - 3840x2160 - Ultra Quality

Grand Theft Auto V - 3840x2160 - Very High Quality

Grand Theft Auto V - 99th Percentile - 3840x2160 - Very High Quality

Total War: Warhammer - 3840x2160 - Ultra Quality

Despite generally staying ahead of the Titan Xp, the Titan V is clearly suffering from a lack of gaming optimization, and for that matter the launch drivers definitely have bugs as far as gaming is concerned. On the Titan V, Deus Ex produced small black box artifacts during the benchmark, Ghost Recon Wildlands experienced sporadic but persistent hitching, and Ashes occasionally suffered from fullscreen flickering.

And despite the impressive triple-digit framerates in the Vulkan-powered DOOM, the card actually falls behind the Titan Xp in 99th percentile framerates. At such high average framerates, even a 67 fps 99th percentile can reduce perceived smoothness. Meanwhile, running the Titan V under DX12 in Deus Ex and Total War: Warhammer resulted in lower performance. With immature gaming drivers, however, it is too early to say whether these results are representative of low-level API performance on Volta itself.
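
For context on the 99th percentile numbers, such a figure is typically derived from a frame-time log rather than reported by the game itself. The short Python sketch below illustrates the arithmetic; the frame times are hypothetical placeholders, not our captured benchmark data.

    import numpy as np

    # Hypothetical per-frame render times in milliseconds; a real run would use
    # a captured frame-time log (e.g. from a tool like PresentMon), not these values.
    frame_times_ms = np.array([8.3, 9.1, 8.7, 14.9, 8.9, 9.4, 21.0, 8.8, 9.0, 8.6])

    # Average framerate: total frames divided by total rendering time.
    avg_fps = 1000.0 * len(frame_times_ms) / frame_times_ms.sum()

    # 99th percentile framerate: take the 99th-percentile (near-worst) frame time
    # and invert it, so the slowest ~1% of frames drags the figure down.
    p99_frame_time_ms = np.percentile(frame_times_ms, 99)
    p99_fps = 1000.0 / p99_frame_time_ms

    print(f"average: {avg_fps:.1f} fps, 99th percentile: {p99_fps:.1f} fps")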

Overall, the Titan V averages out to around 15% faster than the Titan Xp, excluding 99th percentiles, but with the aforementioned caveats: its high average framerates in DOOM and Deus Ex are somewhat marred by stagnant 99th percentiles and minor but noticeable artifacting, respectively.
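
As for how a single "percent faster" figure falls out of a suite of games, the sketch below walks through the averaging with placeholder FPS values rather than the measured results charted above; a geometric mean of the per-game ratios is used here, though a plain arithmetic mean tells a similar story for a suite this small.

    from statistics import geometric_mean

    # Placeholder per-game average FPS for each card; these are stand-in numbers,
    # not the results from the charts above.
    titan_v  = {"Battlefield 1": 90.0, "Doom": 140.0, "GTA V": 62.0}
    titan_xp = {"Battlefield 1": 79.0, "Doom": 124.0, "GTA V": 54.0}

    # Per-game speedup of Titan V over Titan Xp.
    speedups = [titan_v[game] / titan_xp[game] for game in titan_v]

    # Geometric mean keeps one outlier title from dominating the overall figure.
    overall = geometric_mean(speedups)
    print(f"Titan V is ~{(overall - 1) * 100:.0f}% faster on average")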

So as a pure gaming card, our preview results indicate that this would not be the best gaming purchase at $3000. Typically, an $1800 premium for around 10-20% faster gaming over the Titan Xp wouldn't be enticing, but it seems there are always some who insist.


112 Comments


  • mode_13h - Sunday, December 31, 2017 - link

    True that CUDA seems to dominate HPC. I think Nvidia did a good job of cultivating the market for it.

    The trick for them now is that most deep learning users use frameworks that aren't tied to any Nvidia-specific APIs. I know they're pushing TensorRT, but it's certainly not dominant in the way CUDA dominates HPC.
  • tuxRoller - Monday, January 1, 2018 - link

    The problem is that even the GPU-accelerated neural network frameworks are still largely built CUDA-first. Torch, Caffe, and TensorFlow offer varying levels of OpenCL support (generally between some and none).
    Why is this still a problem? Well, where are the OpenCL 2.1+ drivers? Even 2.0 support is super patchy (mainly due to Nvidia not officially supporting anything beyond 1.2). Add to this the recent announcements about merging OpenCL into Vulkan and you have yourself an explanation for why CUDA continues to dominate.
    My hope is that Khronos announces Vulkan 2.0, with OpenCL subsumed into it, very soon. That would mean vendors only have to maintain a single driver (with everything consuming SPIR-V), and Nvidia would basically be forced to offer OpenCL-next. Bottom line: if they can bring the OpenCL functionality into Vulkan without massively increasing driver complexity, I'd expect far more interest from the community.
  • mode_13h - Friday, January 5, 2018 - link

    Your mistake is focusing on OpenCL support as a proxy for AMD support. Their solution was actually developing MIOpen as a substitute for Nvidia's cuDNN. They have forks of all the popular frameworks to support it - hopefully they'll get merged in, once ROCm support exists in the mainline Linux kernel.

    Of course, until AMD can answer the V100 on at least power-efficiency grounds, they're going to remain an also-ran in the training market. I think they're a bit more competitive for inferencing workloads, however.
  • CiccioB - Thursday, December 21, 2017 - link

    What are you suggesting?
    GPUs are very customized pieces of silicon, and you have to code for them with per-architecture optimizations if you want to exploit them to the maximum.
    If you think people buy $10,000 cards to put in $100,000 racks for multi-million-dollar servers just to run unoptimized, unsupported, unguaranteed open source code in order to make AMD fanboys happy, well, no, that's not how the industry works.
    Grow up.
  • mode_13h - Wednesday, December 27, 2017 - link

    I don't know if you've heard of OpenCL, but there's no reason why a GPU needs to be programmed in a proprietary language.

    It's true that OpenCL has some minor issues with performance portability, but the main problem is Nvidia's stubborn refusal to support anything past version 1.2.

    Anyway, lots of businesses know about vendor lock-in and would rather avoid it, so it sounds like you have some growing up to do if you don't understand that.
  • CiccioB - Monday, January 1, 2018 - link

    Grow up.
    I repeat: no one is wasting millions on uncertified, unsupported libraries, let alone entire frameworks.
    If you think that researchers with budgets of millions are nerds working in a garage, with lock-in avoidance as their first thought in the morning, well, grow up, kid.
    Nvidia provides the resources that let them exploit their expensive hardware to the fullest of its potential, reducing time and other associated costs, including when upgrading to newer hardware. That's what counts when investing millions in a job.
    For your homemade AI joke of a project, kid, you can use whatever alpha library with zero support and certification. Others have already grown up.
  • mode_13h - Friday, January 5, 2018 - link

    No kid here. I've shipped deep-learning based products to paying customers for a major corporation.

    I've no doubt you're some sort of Nvidia shill. Employee? Maybe you bought a bunch of their stock? Certainly sounds like you've drunk their kool aid.

    Your line of reasoning reminds me of how people used to say businesses would never adopt Linux. Now, it overwhelmingly dominates cloud, embedded, and underpins the Android OS running on most of the world's handsets. Not to mention it's what most "researchers with budgets of millions" use.
  • tuxRoller - Wednesday, December 20, 2017 - link

    "The integer units have now graduated their own set of dedicates cores within the GPU design, meaning that they can be used alongside the FP32 cores much more freely."

    Yay! Nvidia caught up to gcn 1.0!
    Seriously, this goes to show how good the gcn arch was. It was probably too ambitious for its time as those old gpus have aged really well it took a long time for games to catch up.
    Reply
  • CiccioB - Thursday, December 21, 2017 - link

    "Nvidia caught up to GCN 1.0!"
    Yeah! It is known to the entire universe that it is Nvidia that trails AMD in performance.
    Luckily they managed to get Volta out in time, before the bankruptcy.
  • tuxRoller - Wednesday, December 27, 2017 - link

    I'm speaking about architecture, not performance.
