Hitman

The final game in our 2016 benchmark suite is the 2016 edition of Hitman, the latest title in the stealth-action franchise. The game offers two rendering paths, DirectX 11 and DirectX 12, with the DirectX 12 path having been added after the game's initial release. As with past Hitman games, the latest entry offers a good mix of scenery and high model counts to stress modern video cards.

Hitman - 3840x2160 - Ultra Quality (DX11)

Hitman - 2560x1440 - Ultra Quality (DX11)

Wrapping things up on the gaming side, we have Hitman. While several DX11 games have added DX12 over the last year, Hitman is perhaps the most interesting case, both for driver optimization purposes and for what the developers have been able to wring out of the new API. For the latest generation of cards, the game’s DX12 performance is more or less a wash; it’s not consistently better than DX11 in GPU-bound scenarios. However, once we get CPU-bound, the threading and CPU overhead improvements of DX12 make themselves felt, improving performance on even the all-powerful GTX 1080 Ti at 1440p.

By the numbers then, under DX12 the GTX 1080 Ti picks up 24% over the GTX 1080, which is actually a smaller than average gain. Against the GTX 980 Ti however, NVIDIA’s latest card leads by 83%, making for a very strong generational improvement.


  • Jon Tseng - Thursday, March 9, 2017 - link

    Launch day Anandtech review?

    My my wonders never cease! :-)
  • Ryan Smith - Thursday, March 9, 2017 - link

    For my next trick, watch me pull a rabbit out of my hat.
  • blanarahul - Thursday, March 9, 2017 - link

    Ooh.
  • YukaKun - Thursday, March 9, 2017 - link

    /claps

    Good article as usual.

    Cheers!
  • Yaldabaoth - Thursday, March 9, 2017 - link

    Rocky: "Again?"
  • Ryan Smith - Thursday, March 9, 2017 - link

    No doubt about it. I gotta get another hat.
  • Anonymous Blowhard - Thursday, March 9, 2017 - link

    And now here's something we hope you'll really like.
  • close - Friday, March 10, 2017 - link

    Quick question: shouldn't the memory clock in the table on the first page be expressed in Hz instead of bps, being a clock and all? Or you could go with throughput, but that would be just shy of 500GBps I think...
  • Ryan Smith - Friday, March 10, 2017 - link

    Good question. Because of the various clocks within GDDR5(X)*, memory manufacturers prefer that we list the speed as bandwidth per pin instead of frequency. The end result is that the unit is in bps rather than Hz.

    * http://images.anandtech.com/doci/10325/GDDR5X_Cloc...
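    The per-pin figure also makes the commenter's "just shy of 500GBps" estimate easy to reproduce: total bandwidth is just the per-pin data rate times the bus width. A minimal sketch, using the GTX 1080 Ti's published specs (11 Gbps GDDR5X on a 352-bit bus):

    ```python
    # Sketch: why quoting GDDR5(X) speed as bandwidth per pin (bps) is
    # convenient -- total bandwidth follows directly from the per-pin rate
    # and the bus width, with no need to untangle the memory's internal clocks.

    def memory_bandwidth_gbs(per_pin_gbps: float, bus_width_bits: int) -> float:
        """Total memory bandwidth in GB/s from per-pin rate (Gbps) and bus width (bits)."""
        return per_pin_gbps * bus_width_bits / 8  # 8 bits per byte

    # GTX 1080 Ti: 11 Gbps per pin, 352-bit memory bus
    print(memory_bandwidth_gbs(11, 352))  # 484.0 GB/s
    ```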
  • close - Friday, March 10, 2017 - link

    Probably due to the QDR part that's not obvious from reading just the frequency. Thanks.
