But Can It Run Crysis?

Even if the Titan V isn't a major leap in gaming performance, we couldn't help ourselves. We have a Titan, we have Crysis. The ultimate question must be answered. Can it run Crysis?

Crysis: Warhead (DX10) - 3840x2160 - Enthusiast Quality, 4xSSAA

Yes, it can run Crysis.

And in fact, it is the only Titan that can reach the coveted 60fps mark. Perhaps Titan V is the card that can finally run Crysis the way it's meant to be played: maximum resolution, maximum details, and maximum anti-aliasing. At the end of the day, only one Titan stands above the rest when it comes to Crytek's testament to graphical intensity.

111 Comments

  • maroon1 - Wednesday, December 20, 2017 - link

    Correct me if I'm wrong, but Crysis Warhead running at 4K with 4xSSAA means it is rendering at 8K (4 times as many pixels as 4K) and then downscaling to 4K?
  • Ryan Smith - Wednesday, December 20, 2017 - link

    Yes and no. Under the hood it's actually using a rotated grid, so it's a little more complex than just rendering it at a higher resolution.

    The resource requirements are very close to 8K rendering, but it avoids some of the quality drawbacks of scaling down an actual 8K image.
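A minimal sketch of the idea above, assuming a generic 4x "four rooks" rotated-grid pattern rather than Crytek's actual sample positions: each 4K pixel is shaded at four sub-pixel offsets and the results are averaged, so the per-frame shading work lands at the same sample count as an 8K frame.

```python
# Minimal sketch of 4x rotated-grid supersampling (RGSS); the offsets and
# the `shade` callback are illustrative, not Crysis: Warhead's actual resolve.

WIDTH, HEIGHT = 3840, 2160  # 4K output resolution

# A common 4x rotated-grid ("four rooks") pattern: sub-pixel offsets chosen
# so no two samples share the same row or column within the pixel.
RGSS_OFFSETS = [(0.375, 0.125), (0.875, 0.375), (0.125, 0.625), (0.625, 0.875)]

def resolve_pixel(shade, x, y):
    """Average four rotated-grid samples for the output pixel at (x, y).

    `shade` stands in for the renderer's shading function: it takes
    sub-pixel coordinates and returns an (r, g, b) tuple.
    """
    r = g = b = 0.0
    for dx, dy in RGSS_OFFSETS:
        sr, sg, sb = shade(x + dx, y + dy)
        r, g, b = r + sr, g + sg, b + sb
    n = len(RGSS_OFFSETS)
    return (r / n, g / n, b / n)

# This is the cost maroon1 is describing: four samples per 4K pixel is
# exactly the pixel count of an 8K (7680x4320) frame.
print(WIDTH * HEIGHT * len(RGSS_OFFSETS))  # 33,177,600 samples per frame
print(7680 * 4320)                         # 33,177,600 pixels in an 8K frame
```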
  • Frenetic Pony - Wednesday, December 20, 2017 - link

    A hell of a lot of "it works great, but only if you buy and program exclusively for Nvidia!" stuff here. It reminds me of Sony's penchant for exclusive lock-in over a decade ago, back when they were dominant. That didn't work out for Sony then, and this is worse for customers, as they'll need to spend money on both dev and hardware.

    I'm sure some will be shortsighted enough to do so. But with Google straight up outbidding Nvidia for AI researchers (reportedly up to, or over, 10 million for just a 3-year contract), it's not a long-term bet I'd make.
  • tuxRoller - Thursday, December 21, 2017 - link

    I assume you've not heard of CUDA before?
    NVIDIA has long been the only game in town when it comes to GPGPU HPC.
    They're really a monopoly at this point, and researchers have no interest in making their jobs harder by moving to a new ecosystem.
  • mode_13h - Wednesday, December 27, 2017 - link

    OpenCL is out there, and AMD has had some products in the past that were more than competitive with Nvidia's. I think Nvidia won HPC dominance by bribing lots of researchers with free/cheap hardware and funding CUDA support in popular software packages. It's only with Pascal that their hardware really surpassed AMD's.
  • tuxRoller - Sunday, December 31, 2017 - link

    OpenCL exists, but CUDA has MUCH higher mindshare. It's the de facto HPC framework, used and taught in schools.
  • mode_13h - Sunday, December 31, 2017 - link

    True that CUDA seems to dominate HPC. I think Nvidia did a good job of cultivating the market for it.

    The trick for them now is that most deep learning users use frameworks which aren't tied to any Nvidia-specific APIs. I know they're pushing TensorRT, but it's certainly not dominant in the way CUDA dominates HPC.
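As a rough illustration of that point, here is a minimal sketch assuming PyTorch (the model and batch are placeholders): the user code targets an abstract device rather than a vendor API, and the framework picks whatever backend is available underneath.

```python
# Minimal sketch, assuming PyTorch: user code addresses an abstract "device",
# so the same script runs whether the backend is CUDA or plain CPU.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(128, 10).to(device)   # placeholder model
batch = torch.randn(32, 128, device=device)   # placeholder input batch
output = model(batch)
print(output.shape, "computed on", device)
```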
  • tuxRoller - Monday, January 1, 2018 - link

    The problem is that even the GPU-accelerated NN frameworks are still largely built CUDA-first. Torch, Caffe, and TensorFlow offer varying levels of OpenCL support (generally between some and none).
    Why is this still a problem? Well, where are the OpenCL 2.1+ drivers? Even 2.0 support is super patchy (mainly due to Nvidia not officially supporting anything beyond 1.2). Add to this Khronos' recent announcements about merging OpenCL into Vulkan and you have yourself an explanation for why CUDA continues to dominate.
    My hope is that Khronos announces Vulkan 2.0, with OpenCL being subsumed, very soon. Doing that means vendors only have to maintain a single driver (with everything consuming SPIR-V), and Nvidia would basically be forced to offer OpenCL-next. Bottom line: if they can bring the OpenCL functionality into Vulkan without massively increasing driver complexity, I'd expect far more interest from the community.
  • mode_13h - Friday, January 5, 2018 - link

    Your mistake is focusing on OpenCL support as a proxy for AMD support. Their solution was actually developing MIOpen as a substitute for Nvidia's cuDNN. They have forks of all the popular frameworks to support it - hopefully they'll get merged in once ROCm support exists in the mainline Linux kernel.

    Of course, until AMD can answer the V100 on at least power-efficiency grounds, they're going to remain an also-ran in the training market. I think they're a bit more competitive for inferencing workloads, however.
  • CiccioB - Thursday, December 21, 2017 - link

    What are you suggesting?
    GPUs are very customized pieces of silicon, and you have to code for them with optimizations for each individual architecture if you want to exploit them to the fullest.
    If you think people buy $10,000 cards to put in $100,000 racks for multi-million-dollar servers just to use unoptimized, unsupported, unguaranteed open source code in order to make AMD fanboys happy, well, no, that's not how the industry works.
    Grow up.
