The Division

The final shooter in our benchmark suite, The Division is an online-only, third-person shooter powered by Ubisoft’s Snowdrop engine. The game’s design focuses on detailed urban environments and utilizes dynamic global illumination for parts of its lighting. For our testing we use the game’s built-in benchmark, which cycles through a number of scenes/areas of the game.

[Chart: The Division - 3840x2160 - Ultra Quality]

[Chart: The Division - 3840x2160 - High Quality]

[Chart: The Division - 2560x1440 - Ultra Quality]

As Snowdrop is a bit of an unknown quantity as an engine, we went ahead and benchmarked this game at 4K with both Ultra and High settings to see how performance is affected by reducing the image quality. The result is that even at High quality, the GTX 1080 isn’t going to hit 60fps. When it comes to The Division at 4K, your options are to either put up with a framerate in the mid-40s or make greater image quality sacrifices. That said, the GTX 1080 does get the distinction of being the only card to even crack 40fps at 4K; the GTX 1070 isn’t doing much better than 30fps.

More than anything else, this game is unexpectedly sensitive to the differences between the GTX 1080 and GTX 1070. Normally the GTX 1080 leads by 25% or so, but in The Division that becomes a 33% to 40% lead. It’s more than you’d expect given the differences between the two cards’ configurations, and while I suspect it’s a combination of memory bandwidth and ALU throughput differences, I’m also not 100% convinced it’s not a bug of some kind. So we’ll have to see if this changes at all.
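
As a rough sanity check on that gap, it's worth comparing the two cards' reference specifications. The short sketch below uses NVIDIA's published reference boost clocks and memory speeds; it's back-of-the-envelope math rather than anything measured in our testing.

```python
# Back-of-the-envelope spec comparison for GTX 1080 vs GTX 1070 (reference clocks).
# Peak FP32 rate = 2 ops per FMA * CUDA cores * boost clock.

def fp32_tflops(cuda_cores, boost_mhz):
    return 2 * cuda_cores * boost_mhz / 1e6  # TFLOPS

gtx1080_tflops = fp32_tflops(2560, 1733)  # ~8.9 TFLOPS
gtx1070_tflops = fp32_tflops(1920, 1683)  # ~6.5 TFLOPS

gtx1080_bw = 10 * 256 / 8  # 10Gbps GDDR5X on a 256-bit bus -> 320 GB/s
gtx1070_bw = 8 * 256 / 8   #  8Gbps GDDR5  on a 256-bit bus -> 256 GB/s

print(f"ALU advantage:       {gtx1080_tflops / gtx1070_tflops - 1:.0%}")  # ~37%
print(f"Bandwidth advantage: {gtx1080_bw / gtx1070_bw - 1:.0%}")          # 25%
```

A 33% to 40% lead lands much closer to the raw ALU ratio than to the bandwidth ratio, so if nothing else the numbers are consistent with The Division leaning unusually hard on shader throughput.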

In any case, the wider gap between the Pascal cards means that while GTX 1080 is comfortably in the lead, this is one of the only cases where GTX 1070 isn’t at least at parity with GTX 980 Ti. The gap narrows as the resolution drops, but at every point GTX 1070 comes up short. It’s not a total wash for the GTX 1070, since it’s both significantly cheaper and significantly more energy efficient than GTX 980 Ti, but it’s very rare for the card not to hang relatively close to GTX 1080.

Looking at the generational differences, GTX 1080 enjoys a solid lead over GTX 980; with the exception of 1440p, it improves on its direct predecessor by 60% or more. Meanwhile GTX 1070, despite trailing GTX 1080 by a wider margin than usual, is consistently 50% or more faster than GTX 970.

Comments

  • Robalov - Tuesday, July 26, 2016 - link

    Feels like it took 2 years longer than normal for this review :D
  • extide - Wednesday, July 27, 2016 - link

    The venn diagram is wrong -- for GP104 it says 1:64 speed for FP16 -- it is actually 1:1 for FP16 (ie same speed as FP32) (NOTE: GP100 has 2:1 FP16 -- meaning FP16 is twice as fast as FP32)
  • extide - Wednesday, July 27, 2016 - link

    EDIT: I might be incorrect about this actually as I have seen information claiming both .. weird.
  • mxthunder - Friday, July 29, 2016 - link

    It's really driving me nuts that a 780 was used instead of a 780 Ti.
  • yhselp - Monday, August 8, 2016 - link

    Have I understood correctly that Pascal offers a 20% increase in memory bandwidth from delta color compression over Maxwell? As in a total average of 45% over Kepler just from color compression?
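
Taking the figures in that question at face value, the two gains compound multiplicatively rather than adding, so the implied Maxwell-over-Kepler contribution can be backed out directly. This is purely illustrative arithmetic on the numbers quoted above, not additional test data:

```python
# If Pascal's delta color compression is worth ~20% effective bandwidth over Maxwell,
# and the cumulative gain over Kepler is ~45%, the Maxwell-over-Kepler share is the
# remaining multiplicative factor.
pascal_over_maxwell = 1.20
pascal_over_kepler = 1.45

maxwell_over_kepler = pascal_over_kepler / pascal_over_maxwell
print(f"Implied Maxwell-over-Kepler gain: {maxwell_over_kepler - 1:.0%}")  # ~21%
```
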
  • flexy - Sunday, September 4, 2016 - link

    Sorry, late comment. I just read about GPU Boost 3.0 and this is AWESOME. What they did is expose what previously was only doable with BIOS modding - e.g. assigning the CLK bins different voltages. The problem with overclocking Kepler/Maxwell was NOT so much that you got stuck with the "lowest" overclock as the article says, but that you simply added a FIXED amount of clocks across the entire range of clocks, as you would do with Afterburner etc., where you simply add, say, +120 to the core. What happened here is that you may be "stable" at the max overclock (CLK bin), but since you added more CLKs to EVERY clock bin, the assigned voltages (in the BIOS) for each bin might not be sufficient. Say you have CLK bin 63 which is set to 1304MHz in a stock BIOS. Now you use Afterburner and add 150MHz; all of a sudden this bin amounts to 1454MHz BUT STILL at the same voltage as before, which is too low for 1454MHz. You had to manually edit the table in the BIOS to shift clocks around, especially since not all Maxwell cards allowed adding voltage via software.
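
To illustrate the distinction flexy is drawing, here is a minimal, purely hypothetical model of a voltage/frequency table. A traditional fixed offset raises every bin's clock while leaving each bin's assigned voltage untouched, whereas per-bin offsets in the style of GPU Boost 3.0 let each point on the curve be tuned individually. The bin numbers, clocks, and voltages below are invented for illustration, and none of this touches a real driver or BIOS API.

```python
# Hypothetical V/F table: clock bin -> (clock in MHz, assigned voltage in mV).
# All values are invented purely to illustrate the concept.
vf_curve = {
    61: (1254, 1025),
    62: (1279, 1043),
    63: (1304, 1062),  # the bin from flexy's example
}

def fixed_offset(curve, offset_mhz):
    """Old-style overclock: the same offset is added to every bin, so each bin
    runs faster while keeping the voltage assigned for its original clock --
    which is where instability can creep in."""
    return {b: (clk + offset_mhz, mv) for b, (clk, mv) in curve.items()}

def per_bin_offset(curve, offsets_mhz):
    """GPU Boost 3.0-style overclock: every bin gets its own offset, so the extra
    clocks granted at a given voltage can be tuned point by point."""
    return {b: (clk + offsets_mhz.get(b, 0), mv) for b, (clk, mv) in curve.items()}

print(fixed_offset(vf_curve, 150))   # bin 63 jumps to 1454 MHz but stays at 1062 mV
print(per_bin_offset(vf_curve, {61: 150, 62: 120, 63: 80}))  # smaller bump where headroom is tight
```
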
  • Ether.86 - Tuesday, November 1, 2016 - link

    Astonishing review. That's the way Anandtech should be, not like the mobile section, which sucks...
  • Warsun - Tuesday, January 17, 2017 - link

    Yeah, looking at the bottom here, the GTX 1070 is on the same level as a single 480 4GB card, so that graph is wrong.
    http://www.hwcompare.com/30889/geforce-gtx-1070-vs...
    Remember, this is from GPU-Z, based on hardware specs. No amount of configuration in the drivers changes this. They either screwed up or I am calling shenanigans.
  • marceloamaral - Thursday, April 13, 2017 - link

    Nice Ryan Smith! But, my question is, is it truly possible to share the GPU with different workloads in the P100? I've read in the NVIDIA manual that "The GPU has a time sliced scheduler to schedule work from work queues belonging to different CUDA contexts. Work launched to the compute engine from work queues belonging to different CUDA contexts cannot execute concurrently."
