The Division

The final shooter in our benchmark suite, The Division is an online-only third-person shooter powered by Ubisoft's Snowdrop engine. The game's design focuses on detailed urban environments, and it uses dynamic global illumination for part of its lighting. For our testing we use the game's built-in benchmark, which cycles through a number of the game's scenes and areas.

[Chart: The Division - 3840x2160 - Ultra Quality]

[Chart: The Division - 3840x2160 - High Quality]

[Chart: The Division - 2560x1440 - Ultra Quality]

Since the Snowdrop engine is a bit of an unknown quantity, we went ahead and benchmarked this game at 4K with both Ultra and High settings to see how performance was impacted by reducing the image quality. The result is that even at High quality, the GTX 1080 isn't going to be able to hit 60fps. When it comes to The Division at 4K, your options are to either put up with a framerate in the mid-40s or make greater image quality sacrifices. That said, the GTX 1080 does earn the distinction of being the only card to even crack 40fps at 4K; the GTX 1070 isn't doing much better than 30fps.

More than anything else, this game is unexpectedly sensitive to the differences between the GTX 1080 and GTX 1070. Normally the GTX 1080 leads by 25% or so, but in The Division that becomes a 33% to 40% lead. It's more than you'd expect given the differences between the two cards' configurations, and while I suspect it's a combination of memory bandwidth and ALU throughput differences, I'm also not 100% convinced it isn't a bug of some kind. So we'll have to see whether this changes down the line.
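
To put rough numbers on that suspicion, the theoretical gaps can be worked out from the cards' published specifications. Below is a quick back-of-the-envelope sketch in Python; the figures are NVIDIA's public reference boost clocks and bandwidth numbers rather than anything measured for this review, and sustained clocks in practice will differ:

    # Theoretical ALU and bandwidth gaps from published reference specs.
    specs = {
        # card:        (CUDA cores, boost clock in GHz, memory bandwidth in GB/s)
        "GTX 1080":   (2560, 1.733, 320),
        "GTX 1070":   (1920, 1.683, 256),
        "GTX 980 Ti": (2816, 1.075, 336),
        "GTX 980":    (2048, 1.216, 224),
        "GTX 970":    (1664, 1.178, 196),
    }

    def fp32_gflops(card):
        """Peak FP32 rate in GFLOPS: cores x 2 FLOPs per clock (FMA) x GHz."""
        cores, clock_ghz, _ = specs[card]
        return cores * 2 * clock_ghz

    def advantage(a, b):
        """Theoretical ALU and memory bandwidth advantage of card a over card b."""
        alu = fp32_gflops(a) / fp32_gflops(b) - 1
        bw = specs[a][2] / specs[b][2] - 1
        return alu, bw

    for a, b in [("GTX 1080", "GTX 1070"), ("GTX 1070", "GTX 980 Ti"),
                 ("GTX 1080", "GTX 980"), ("GTX 1070", "GTX 970")]:
        alu, bw = advantage(a, b)
        print(f"{a} vs {b}: {alu:+.0%} ALU throughput, {bw:+.0%} bandwidth")

    # Output:
    # GTX 1080 vs GTX 1070: +37% ALU throughput, +25% bandwidth
    # GTX 1070 vs GTX 980 Ti: +7% ALU throughput, -24% bandwidth
    # GTX 1080 vs GTX 980: +78% ALU throughput, +43% bandwidth
    # GTX 1070 vs GTX 970: +65% ALU throughput, +31% bandwidth

A purely bandwidth-bound game would track the ~25% figure, while a purely ALU-bound one would approach ~37%, so The Division's 33% to 40% spread sits at the ALU end of that range. The same arithmetic is useful context for the next two paragraphs: the GTX 1070's theoretical 24% bandwidth deficit against the GTX 980 Ti helps explain why it comes up short here, and the 60%+ and 50%+ generational gains fall within theoretical bounds.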

In any case, the larger gap between the Pascal cards means that while the GTX 1080 is comfortably in the lead, this is one of the only cases where the GTX 1070 isn't at least at parity with the GTX 980 Ti. The gap narrows as the resolution drops, but at every point the GTX 1070 comes up short. It's not a total wash for the GTX 1070, since it's both significantly cheaper and significantly more energy efficient than the GTX 980 Ti, but it's very rare for the card not to hang relatively close to the GTX 1080.

Looking at the generational differences, the GTX 1080 enjoys a solid lead over the GTX 980; with the exception of 1440p, it improves on its direct predecessor by 60% or more. Meanwhile the GTX 1070, despite its larger handicap here, is consistently 50%+ faster than the GTX 970.

Comments

  • patrickjp93 - Wednesday, July 20, 2016

    That doesn't actually support your point...
  • Scali - Wednesday, July 20, 2016

    Did I read a different article?
    Because the article that I read said that the 'holes' would be pretty similar on Maxwell v2 and Pascal, given that they have very similar architectures. However, Pascal is more efficient at filling the holes with its dynamic repartitioning.
  • mr.techguru - Wednesday, July 20, 2016

    Just ordered the MSI GeForce GTX 1070 Gaming X, way better than the 1060 / 480. NVidia nailed it :)
  • tipoo - Wednesday, July 20, 2016

    " NVIDIA tells us that it can be done in under 100us (0.1ms), or about 170,000 clock cycles."

    Is my understanding right that Polaris, and I think even earlier with late GCN parts, could seamlessly interleave per-clock? So 170,000 times faster than Pascal in clock cycles (less in total time, but still above 100,000 times faster)?
  • Scali - Wednesday, July 20, 2016

    That seems highly unlikely. Switching to another task is going to take some time, because you also need to swap out all the registers and buffers, caches need to be re-filled, etc.
    The only way to avoid most of that is to duplicate the whole register file, like HyperThreading does. That's doable on an x86 CPU, but a GPU has way more registers.
    Besides, as we can see, nVidia's approach is fast enough in practice. Why throw tons of silicon at making context switching faster than it needs to be? You want to avoid context switches as much as possible anyway.

    Sadly AMD doesn't seem to go into any detail, but I'm pretty sure it's going to be in the same ballpark.
    My guess is that what AMD calls an 'ACE' is actually very similar to the SMs and their command queues on the Pascal side.
  • Ryan Smith - Wednesday, July 20, 2016

    Task switching is separate from interleaving. Interleaving takes place on all GPUs as a basic form of latency hiding (GPUs are very high latency).

    The big difference is that interleaving uses different threads from the same task; task switching by its very nature loads up another task entirely.
  • Scali - Thursday, July 21, 2016

    After re-reading AMD's asynchronous shader PDF, it seems that AMD also speaks of 'interleaving' when they switch a graphics CU to a compute task after the graphics task has completed. So 'interleaving' at task level, rather than at instruction level.
    Which would be pretty much the same as NVidia's Dynamic Load Balancing in Pascal.
  • eddman - Thursday, July 21, 2016

    The more I read about async computing in Polaris and Pascal, the more I realize that the implementations are not much different.

    As Ryan pointed out, it seems that the reason that Polaris, and GCN as a whole, benefit more from async is the architecture of the GPU itself, being wider and having more ALUs.

    Nonetheless, I'm sure we're still going to see comments like "Polaris does async in hardware. Pascal is hopeless with its software async hack".
  • Matt Doyle - Wednesday, July 20, 2016

    Typo in the lead sentence of HPC vs. Consumer: Divergence paragraph: "Pascal in an architecture that..."

    "is" instead of "in"
  • Matt Doyle - Wednesday, July 20, 2016

    Feeding Pascal page, "GDDR5X uses a 16n prefetch, which is twice the size of GDDR5’s 8n prefect."

    Prefect = prefetch
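
Circling back to the preemption numbers debated in the comments above: the 170,000-cycle figure quoted by tipoo follows directly from NVIDIA's stated 100us bound and a roughly 1.7GHz Pascal boost clock. A minimal sketch of the arithmetic in Python, where the clock value is an assumption for illustration rather than a number from the article:

    # Cycle-count arithmetic behind Pascal's stated <100us preemption bound.
    clock_hz = 1.7e9         # assumed ~1.7 GHz Pascal boost clock (illustrative)
    preempt_time_s = 100e-6  # NVIDIA's stated upper bound: 100 microseconds

    cycles = clock_hz * preempt_time_s
    print(f"{cycles:,.0f} cycles")  # prints: 170,000 cycles

    # A hypothetical per-clock switch would take a single cycle, which is where
    # the "170,000x" comparison comes from. As Ryan notes above, per-clock
    # interleaving of threads within a task and preempting to a different task
    # are different mechanisms, so the two numbers measure different operations.
    print(f"ratio vs. a one-cycle switch: {cycles:,.0f}x")  # prints: 170,000x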
