The Division 2

[Benchmark charts: The Division 2 at Ultra Quality — average and 99th-percentile framerates at 3840x2160, 2560x1440, and 1920x1080]
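For reference, the "99th PCTL" figures in these charts report the framerate corresponding to the 99th-percentile frame time, i.e. the boundary of the slowest 1% of frames. A minimal sketch of that computation, using made-up frame times rather than data from the charts:

```python
import math

# 99th-percentile framerate: convert the 99th-percentile *frame time* into fps.
# The frame-time list below is made-up data, not figures from the charts.
def pctl99_fps(frame_times_ms):
    """Nearest-rank 99th percentile of frame times, reported as fps."""
    times = sorted(frame_times_ms)
    idx = max(0, math.ceil(0.99 * len(times)) - 1)
    return 1000.0 / times[idx]

frame_times = [16.7] * 97 + [25.0, 33.3, 40.0]  # mostly ~60 fps, a few slow frames
print(f"99th percentile: {pctl99_fps(frame_times):.1f} fps")
```

A run is dominated by ~60 fps frames, but the handful of slow frames pulls the 99th-percentile figure down to about 30 fps, which is why this metric captures stutter that the average hides.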

135 Comments

  • eastcoast_pete - Tuesday, July 9, 2019 - link

Actually, I like the idea used in the graph you linked to: $ per fps, averaged across 18 games, all at 1080p Very High settings. It allows a value comparison all the way from low-end to high-end cards.
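A cost-per-frame metric of the kind described can be sketched as follows; the card names, prices, and fps numbers are illustrative placeholders, not figures from the linked graph:

```python
# Dollars-per-fps value metric: card price divided by average fps across a game suite.
# All cards, prices, and fps numbers below are illustrative placeholders.
def dollars_per_fps(price_usd, fps_by_game):
    """Average fps across all games in the suite, then divide price by it."""
    avg_fps = sum(fps_by_game) / len(fps_by_game)
    return price_usd / avg_fps

cards = {
    "Card A": (350, [60.0, 75.0, 90.0]),
    "Card B": (500, [80.0, 95.0, 110.0]),
}
# Rank cards best value first (lowest dollars per fps).
for name, (price, fps) in sorted(cards.items(), key=lambda kv: dollars_per_fps(*kv[1])):
    print(f"{name}: ${dollars_per_fps(price, fps):.2f} per fps")
```

Averaging fps first and dividing once (rather than averaging per-game $/fps values) matches the "averaged across 18 games" description and keeps a single outlier title from dominating the ratio.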
  • Meteor2 - Monday, July 8, 2019 - link

    C'mon jjj you're better than that.
  • sgkean - Monday, July 8, 2019 - link

How does enabling the various advanced features (ray tracing, AMD FidelityFX, AMD Image Sharpening) affect the game scores? With the performance being so close, and these new features/technologies being the main difference, it would be nice to see what effect they have on performance.
  • Wardrop - Monday, July 8, 2019 - link

I assume the noise of these is such due to the use of a blower? I'm guessing we'll have to wait for custom PCBs and coolers to get something quieter, or otherwise go with water cooling.
  • xrror - Tuesday, July 9, 2019 - link

    Argh... yet again, it seems like AMD is pushing beyond the sweet spot of the process node to try and force as much raw performance out as they can.

    I really don't want to be yet another person bashing Raja. He probably did get a bit short-changed on personnel resources at AMD, as Ryzen really DID need to succeed or AMD would have died. And he did deliver good GPU compute cores for the higher-margin workstation markets.

    But... it just feels like AMD needs to come to terms with their fabrication node and figure out how to get GPU cores that "kickith the butt" beyond beating Intel IGP graphics.

    Which... feels unfair in a way. The only reason AMD "sucks" is that nVidia right now is so stupidly dominant in discrete graphics (and major kudos to nVidia for managing that on an "older node", even). I mean, even Intel had really bad problems porting its IGP graphics to 10nm Cannon Lake.

    But all that said, the RX 5700 really feels like it's fighting against the process node not to suck. Intel may (hopefully, might) actually get its s**t together and bring forth a competitive discrete card (and if they "fail", guess what: that failure will hammer the lower-end market), and nVidia...

    well, nVidia even two process nodes behind at this rate would probably still be faster. Which is stupid. All credit to nVidia; it's just that I really hoped for a few more process "rabbits out of the hat" before GPUs slammed into the silicon stagnation wall.

    I just wish we could have gotten maybe a doubling of graphics performance for VR before "market forces" determined that a VR/4K-capable video setup is going to cost you over $1000.
  • Meteor2 - Tuesday, July 9, 2019 - link

    "RX 5700 really feels like it's fighting against the process node to not suck." -- what are you talking about?
  • peevee - Thursday, July 11, 2019 - link

    Actually, for GPUs, with their practically linear scaling of performance with ALU count, using the densest nodes is the right approach. They probably should have used a denser, low-power variant (libraries) of TSMC's "7nm" process and added more ALUs in the same area at the expense of frequency, but that would differ from what Ryzen 3 uses, so add the extra expense to R&D.
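The wide-and-slow vs. narrow-and-fast trade-off described above can be illustrated with a toy model. All constants here are made-up assumptions for illustration, not TSMC library data or actual AMD specs; the point is only that dynamic power scales roughly with voltage squared times frequency, while throughput scales linearly with both ALU count and frequency:

```python
# Toy model of the "more ALUs at lower clocks" trade-off. Every number below is
# an illustrative assumption, not a real TSMC, AMD, or Navi figure.
def throughput_gflops(alus, freq_ghz, ops_per_alu_per_cycle=2):
    """Peak throughput: ALUs * frequency * ops per ALU per cycle (FMA counts as 2)."""
    return alus * freq_ghz * ops_per_alu_per_cycle

def dynamic_power(alus, freq_ghz, volts):
    """Dynamic power ~ switched capacitance * V^2 * f; capacitance grows with ALU count.
    Result is in arbitrary units, only useful for comparing the two configs."""
    return alus * volts ** 2 * freq_ghz

# Narrow and fast: fewer ALUs, but the higher clock demands a higher voltage.
narrow = dict(alus=2560, freq_ghz=1.8, volts=1.0)
# Wide and slow: more ALUs at a lower clock and voltage (denser, low-power libraries).
wide = dict(alus=3840, freq_ghz=1.2, volts=0.8)

for name, cfg in [("narrow/fast", narrow), ("wide/slow", wide)]:
    tp = throughput_gflops(cfg["alus"], cfg["freq_ghz"])
    pw = dynamic_power(**cfg)
    print(f"{name}: {tp:.0f} GFLOPS, {pw:.0f} power units, {tp / pw:.2f} GFLOPS/unit")
```

Under these assumptions the two configurations reach identical peak throughput, but the wide, low-voltage design spends roughly a third less dynamic power, which is the efficiency argument for trading frequency for ALUs.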
  • CiccioB - Tuesday, July 9, 2019 - link

    In a few words, AMD used a lot of transistors and watts just to get near Pascal's efficiency.
    Thanks to the new 7nm process they managed to create something that looks acceptable.
    But as we have already seen in the past, they have only closed the gap because Nvidia is still waiting for the new process to become cheaper.
    Once it does, Nvidia's next architecture is going to leave this useless piece of engineering in the dust, even if it's just a Turing shrink with no other enhancements.
    10 billion transistors to improve IPC by about 1.25x and save just a few watts thanks to the 7nm process, only to end up on par with Pascal. 10 billion transistors without support for a single advanced feature Turing has: no VRS (which is going to improve performance a lot in future games and is going to be Nvidia's real trump card against this late Pascal), no mesh shading or similar, no concurrent FP+INT, no RT, and no tensor cores that can be used for many things, including advanced AI.
    10 billion transistors that simply show GCN is problematic and really needs a lot of workarounds to perform well. 4.4 billion transistors used to improve GCN efficiency, and that resulted in a mere 1.25x.
    10 billion transistors spent on fixing a crap architecture, and even that wouldn't have been enough to make it look good if the frequency/W curve hadn't also been ignored completely, making this chip consume as much as a rival on an older process. Just like all the previous failed architectures, starting from Tahiti.

    In the end this architecture is an attempt to fix an un-fixable GCN, and it relies only on Nvidia's delay in adopting 7nm. On the same node it would have been judged the same as Polaris or Vega: big, hot, and worthless to the point of being sold with no margins.
    As we can see it is just as much a waste of transistors and watts, and it was discounted even before launch. A worthless piece of engineering that will be "steamrolled" by the next Nvidia architecture, which will set the basic path for all future graphics evolution while extending what is already available today through Turing.
    AMD still has to add all those missing features, and it already has a really big transistor budget to manage today. 7nm, through some revisions, is here to stay for a long time. If AMD does not change RDNA completely, they will only be able to compete by skipping support for the more advanced features in the coming years, and they will enjoy this performance parity for just a few months. Of course the missing features will be called useless until AMD eventually catches up. And they still have the console weapon to help them stall the market, as they are quite behind what the market could otherwise provide in the next few years. RT is just the tip of the iceberg; so are advanced geometry features like mesh shading, which could already boost scene complexity to the moon. But we just learned that with NAVI, AMD only managed to match Maxwell's geometry capacity. A worthless piece of silicon, already discounted before launch.
  • Meteor2 - Tuesday, July 9, 2019 - link

    "In few words, AMD just used a lot of transistors and W just to get near Pascal efficiency." -- that makes no sense at all.

    Didn't bother reading the rest of your comment, sorry not sorry.
  • CiccioB - Wednesday, July 10, 2019 - link

    I just wonder what you have been looking at.
    NAVI gets the same perf/W that Pascal has, and the same exact features.
    No RT, no tensor cores, no VRS, no mesh shading, no voxel acceleration (which was already in Maxwell), no double projection (for VR).
    7nm and 10 billion transistors to be just a bit faster than a 1080, which is based on a 5.7-billion-transistor chip, and using more power to do so.

    Don't bother reading. It's clear you can't understand what's written.
