Hitman

The final game in our 2016 benchmark suite is the 2016 edition of Hitman, the latest title in the stealth-action franchise. The game offers two rendering paths, DirectX 11 and DirectX 12, with the DirectX 12 path having been added after the fact. As with past Hitman games, the latest entry delivers a good mix of scenery and high model counts to stress modern video cards.

Hitman - 3840x2160 - Ultra Quality

Hitman - 2560x1440 - Ultra Quality

Hitman - 1920x1080 - Ultra Quality

Because Hitman supports both DX11 and DX12, for the moment we’ve gone ahead and benchmarked it with both. In practice the performance impact of DX12 is very mixed: NVIDIA cards prior to Pascal lose performance, Pascal cards can either gain or lose, and AMD cards tend to gain. The image quality is the same with both renderers, so it’s simply a matter of picking the render path that produces the best performance for a given card.
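
In other words, the decision reduces to a per-card framerate comparison. Below is a minimal sketch of that logic in Python; the framerates passed in are placeholders for illustration, not our measured results.

def best_path(dx11_fps, dx12_fps):
    # Return the faster render path and its advantage in percent.
    # Image quality is the same either way, so higher average fps wins.
    if dx12_fps >= dx11_fps:
        return "DirectX 12", (dx12_fps / dx11_fps - 1.0) * 100.0
    return "DirectX 11", (dx11_fps / dx12_fps - 1.0) * 100.0

# Placeholder numbers purely to show the mechanics, not measured results.
path, gain = best_path(dx11_fps=72.0, dx12_fps=68.0)
print(f"Prefer {path} (+{gain:.1f}% over the slower path)")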

In any case, the GTX 1080 continues to top the charts here. 60fps still isn’t attainable at 4K, but the card delivers a reasonably playable 49fps. Alternatively, at 1440p it clears 85fps. Meanwhile the GTX 1070 isn’t a great option at 4K, but at 1440p it can easily stay north of 60fps, delivering 69.4fps.

Thanks in part to the DX12 code path, this is another game where the GTX 1070 performs as expected relative to the GTX 1080, but still can’t hold on to second place. Rather, the Radeon Fury X takes second place at all but 1080p.

Looking at our generational comparisons one last time, this final game has the Pascal cards performing better than expected. At 1440p and above, the GTX 1080 delivers 86% better performance than the GTX 980 under DirectX 11, and the GTX 1070 bests the GTX 970 by an average of 63% in the same circumstances. As best I can tell, there is just something about the Pascal cards that is slightly more in tune with this game than the Maxwell 2 cards were, leading to the performance we’re seeing here. Otherwise the gap between the GTX 1080 and GTX 1070 is pretty typical at about 25% at the higher resolutions.
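
The percentage figures above are simple framerate ratios. Here is a quick sketch of that arithmetic, using invented framerates chosen only so the output matches the style of uplift quoted above; they are not measured results.

def pct_uplift(new_fps, old_fps):
    # Generational uplift as a percentage: (new / old - 1) * 100.
    return (new_fps / old_fps - 1.0) * 100.0

# Invented framerates for illustration only, not measured results.
print(round(pct_uplift(93.0, 50.0), 1))  # 86.0 -> an "86% better" style gap
print(round(pct_uplift(90.0, 72.0), 1))  # 25.0 -> a "roughly 25%" style gap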

Finally, checking in on the GTX 680 one last time, the GTX 1080 offers a commanding performance improvement: it is 4.1x faster under DirectX 11, reinforcing just how much progress NVIDIA has made in 4 years and a single full manufacturing node upgrade.
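
As a rough consistency check, if the 86% GTX 1080-over-GTX 980 uplift noted above held at the same settings as this 4.1x figure, it would imply the GTX 980 was itself a bit over 2x the GTX 680 in this game; that is an assumption, not something we measured separately.

# Rough back-of-the-envelope check under the assumption stated above.
pascal_over_kepler = 4.1      # GTX 1080 vs GTX 680, DirectX 11
pascal_over_maxwell = 1.86    # GTX 1080 vs GTX 980 (86% better)
print(round(pascal_over_kepler / pascal_over_maxwell, 2))  # ~2.2x implied, GTX 980 vs GTX 680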

Comments

  • Robalov - Tuesday, July 26, 2016

    Feels like it took 2 years longer than normal for this review :D
  • extide - Wednesday, July 27, 2016

    The Venn diagram is wrong -- for GP104 it says 1:64 speed for FP16 -- it is actually 1:1 for FP16 (i.e. same speed as FP32) (NOTE: GP100 has 2:1 FP16 -- meaning FP16 is twice as fast as FP32)
  • extide - Wednesday, July 27, 2016

    EDIT: I might be incorrect about this actually as I have seen information claiming both .. weird.
  • mxthunder - Friday, July 29, 2016

    It's really driving me nuts that a 780 was used instead of a 780 Ti.
  • yhselp - Monday, August 8, 2016

    Have I understood correctly that Pascal offers a 20% increase in memory bandwidth from delta color compression over Maxwell? As in a total average of 45% over Kepler just from color compression?
  • flexy - Sunday, September 4, 2016

    Sorry, late comment. I just read about GPU Boost 3.0 and this is AWESOME. What they did is expose what was previously only doable with BIOS modding - e.g. assigning the CLK bins different voltages. The problem with overclocking Kepler/Maxwell was NOT so much that you got stuck with the "lowest" overclock as the article says, but that you simply added a FIXED amount of clocks across the entire range of clocks, as you would do with Afterburner etc. where you simply add, say, +120 to the core. What happened here is that you may be "stable" at the max overclock (CLK bin), but since you added more clocks to EVERY clock bin, the assigned voltages (in the BIOS) for each bin might not be sufficient. Say you have CLK bin 63, which is set to 1304MHz in a stock BIOS. Now you use Afterburner and add 150MHz, and all of a sudden this bin amounts to 1454MHz BUT STILL at the same voltage as before, which is too low for 1454MHz. You had to manually edit the table in the BIOS to shift clocks around, especially since not all Maxwell cards allowed adding voltage via software.
  • Ether.86 - Tuesday, November 1, 2016

    Astonishing review. That's the way Anandtech should be, not like the mobile section, which sucks...
  • Warsun - Tuesday, January 17, 2017

    Yeah, looking at the bottom here. The GTX 1070 is on the same level as a single 480 4GB card. So that graph is wrong.
    http://www.hwcompare.com/30889/geforce-gtx-1070-vs...
    Remember this is from GPU-Z, based on hardware specs. No amount of configuration in the drivers changes this. They either screwed up, or I am calling shenanigans.
  • marceloamaral - Thursday, April 13, 2017

    Nice Ryan Smith! But, my question is, is it truly possible to share the GPU with different workloads in the P100? I've read in the NVIDIA manual that "The GPU has a time sliced scheduler to schedule work from work queues belonging to different CUDA contexts. Work launched to the compute engine from work queues belonging to different CUDA contexts cannot execute concurrently."