CPU Scaling

When it comes to how well a game scales with a processor, DirectX 12 is somewhat of a mixed bag, for two reasons. On one hand, it allows GPU commands to be issued by every CPU core, removing the single-core performance limit that hindered a number of DX11 titles and helping configurations with fewer cores or lower clock speeds. On the other hand, because it allows all the threads in a system to issue commands, it can pile on the work during heavy scenes, either moving the cliff edge further down the line for high-powered cards or making the visual effects at the high end much more impressive, which is perhaps something a benchmark run like this won't capture.
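
To illustrate what that multi-threaded submission looks like from the application side, here is a minimal, hypothetical Direct3D 12 sketch: each worker thread records its own command list, and the main thread submits them all to one queue. The function name, the thread-count handling and the empty recording body are illustrative assumptions, not code from the Fable Legends benchmark.

    // Minimal sketch of DX12-style multi-threaded command recording. Each worker
    // thread gets its own allocator + command list (neither is thread-safe), records
    // its slice of the frame, and the main thread submits everything in one call.
    // Assumes 'device' and 'queue' were created elsewhere; error handling omitted.
    #include <d3d12.h>
    #include <wrl/client.h>
    #include <thread>
    #include <vector>

    using Microsoft::WRL::ComPtr;

    void RecordAndSubmit(ID3D12Device* device, ID3D12CommandQueue* queue, unsigned threadCount)
    {
        std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(threadCount);
        std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
        std::vector<std::thread>                       workers;

        for (unsigned i = 0; i < threadCount; ++i)
        {
            device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                           IID_PPV_ARGS(&allocators[i]));
            device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                      allocators[i].Get(), nullptr, IID_PPV_ARGS(&lists[i]));

            workers.emplace_back([&, i]
            {
                // Real draw/dispatch calls for this thread's share of the scene go here.
                lists[i]->Close();          // a list must be closed before execution
            });
        }
        for (auto& w : workers) w.join();

        // Submission still goes through one queue, but the expensive recording work
        // above was spread across all available CPU cores - the DX11 bottleneck this
        // API change removes.
        std::vector<ID3D12CommandList*> raw;
        for (auto& l : lists) raw.push_back(l.Get());
        queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    }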

For our CPU scaling tests, we took the two high-end cards tested and placed them in each of our Core i7 (6C/12T), Core i5 (4C/4T) and Core i3 (2C/4T) environments, at three resolution/setting combinations similar to those on the previous page, and recorded the results.

Fable Legends Beta: GTX 980 Ti Scaling

Fable Legends Beta: GTX 980 Ti Scaling %

Looking solely at the GTX 980 Ti to begin with, we see that for now the Fable benchmark only scales with the CPU at the low resolution and graphics quality settings. Moving up to 1080p or 4K gives similar performance regardless of the processor – perhaps even a slight decrease at 4K, but this is well within a 2% variation.

Fable Legends Beta: AMD Fury X Scaling

Fable Legends Beta: AMD Fury X Scaling %

On the Fury X, the tale is similar and yet stranger. The Fable benchmark is canned, so it should be running the same workload each time – but in all three circumstances the Core i7 trails the Core i5. Perhaps in this instance there are too many threads on the processor contesting for bandwidth, giving some slight cache pressure (one wonders if some eDRAM might help). But again we see no real scaling improvement moving from the Core i3 to the Core i7 at 1920x1080 and 3840x2160.

Fable Legends Beta: Other CPU Scaling %, 720p

As we’ve seen in previous reviews, the effects of CPU scaling with regard to resolution depend on both the CPU architecture and the GPU architecture, with each GPU manufacturer performing differently and even two models in the same silicon family differing in their scaling results. To that end, we actually see a boost at 1280x720 with the AMD 7970 and the GTX 680 when moving from the Core i3 to the Core i7.
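
As a side note on how the 'Scaling %' figures can be read, one straightforward way to derive such percentages is to normalise each CPU's average frame rate against a chosen baseline, as in the short sketch below. The frame rates in it are placeholder values for illustration, not our measured results, and the choice of the Core i3 as the baseline is an assumption made for the example.

    // Expressing CPU scaling as a percentage: each configuration's average frame
    // rate is divided by a baseline (the Core i3 here). The frame rates below are
    // hypothetical placeholders, not the benchmark results from the charts.
    #include <cstdio>

    int main()
    {
        const char*  cpus[] = { "Core i3", "Core i5", "Core i7" };
        const double fps[]  = { 60.0, 72.0, 75.0 };   // hypothetical 1280x720 averages

        const double baseline = fps[0];               // normalise against the Core i3
        for (int i = 0; i < 3; ++i)
            std::printf("%-7s : %5.1f%%\n", cpus[i], fps[i] / baseline * 100.0);
        return 0;
    }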

If we look at the rendering time breakdown between GPUs in the high-end configurations, we get the following data. Numbers here are listed in milliseconds, so lower is better:

Fable Legends Beta: Render Sub-Results (Core i7, 3840x2160)

Looking at the 980 Ti and Fury X, we see that NVIDIA is significantly faster at GBuffer rendering, Dynamic Global Illumination, and Compute Shader Simulation & Culling. Meanwhile AMD takes narrower leads in every other category, including the ambiguous 'other'.

Fable Legends Beta: Render Sub-Results (Core i7, 3840x2160)

Dropping down a couple of tiers to the GTX 970 and R9 290X, we see some minor variations. The R9 290X has good leads in dynamic lighting and 'other', with smaller leads in Compute Shader Simulation & Culling and Post Processing, while the GTX 970 benefits significantly in dynamic global illumination.

What do these numbers mean? Overall it appears that NVIDIA has a strong hold on deferred rendering and global illumination, while AMD has the advantage in dynamic lighting and compute.
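
Since these sub-results are reported per pass, a quick sanity check is that the individual passes (plus 'other') should roughly sum to the total frame time, which converts to a frame rate as 1000 divided by the milliseconds. The sketch below walks through that conversion with placeholder pass times rather than the measured values from the charts.

    // How per-pass render times relate to overall frame rate: the pass costs (ms)
    // sum to the frame time, and FPS = 1000 / frame time. Placeholder values only.
    #include <cstdio>

    int main()
    {
        const double passesMs[] = { 6.0, 9.0, 4.5, 3.5, 2.5, 5.0 };  // hypothetical pass costs
        double frameMs = 0.0;
        for (double p : passesMs)
            frameMs += p;                                            // total frame time in ms

        std::printf("frame time: %.1f ms -> %.1f FPS\n", frameMs, 1000.0 / frameMs);
        return 0;
    }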

141 Comments

  • TheJian - Saturday, September 26, 2015 - link

    "There is a big caveat to remember, though. In power consumption tests, our GPU test rig pulled 449W at the wall socket when equipped with an R9 390X, versus 282W with a GTX 980. The delta between the R9 390 and GTX 970 was similar, at 121W. "

    You seem to see through rose-colored glasses. At these kinds of watt differences you SHOULD dominate everything...LOL. Meanwhile NV guys have plenty of watts to OC and laugh. You're completely ignoring the cost of watts these days, when we're talking a 100W bulb running for hours on end over the 3-7 years many of us keep our cards. You're also forgetting that most cards can hit Strix speeds anyway, right? NOBODY buys stock when you can buy an OC version from all vendors for not much more.

    "Early tests have shown that the scheduling hardware in AMD's graphics chips tends to handle async compute much more gracefully than Nvidia's chips do. That may be an advantage AMD carries over into the DX12 generation of games. However, Nvidia says its Maxwell chips can support async compute in hardware—it's just not enabled yet. We'll have to see how well async compute works on newer GeForces once Nvidia turns on its hardware support."

    You also seem to ignore that, from your own link (techreport), they even state NV has async turned off for now. I'm guessing they're just waiting for all the DX12 stuff to hit, seeing if AMD can catch them, then boom, hello more perf...LOL.

    https://techreport.com/review/28685/geforce-gtx-98...
    "Thanks in part to that humongous cooler, the Strix has easily the highest default clock speeds of any card in this group, with a 1216MHz base and 1317MHz boost"
    A little less than you say, but yes, NV gives you free room to run to WHATEVER your card can do within the allowed limit. Unlike AMD's UP TO crap, with NV you get a GUARANTEED X, and more if available. I prefer the latter. $669 at Amazon for the STRIX, so for $20 I'll take the massive gain in perf (cheapest at Newegg is $650 for a 980 Ti). I'll get it back in watts saved on electricity in no time. You completely ignore Total Cost of Ownership, not to mention DRIVERS and how RARE AMD driver drops are. NV puts out a WHQL driver monthly or more.

    https://techreport.com/review/28685/geforce-gtx-98...
    Any time you offer me ~15% perf for 3% cost I'll take it. If you tell me electricity costs mean nothing, in the same sentence I'll tell you $20 means nothing then, on the price of a card most live with for years.

    Frostbite is NOT brand agnostic. Cough, Mantle, 8mil funding, cough... The fact that MANY games run better in DX11 for NV is just DRIVERS and time spent with DEVS (Witcher 3, Project Cars etc. - the devs said this). This should be no surprise when R&D has been down for 4 years at AMD while the reverse is true at NV (who now spends more on R&D than AMD, who has a larger product line).

    Shocker ASHES looks good for AMD when it was a MANTLE engine game...ROFL. Jeez, guy... Even funnier that once NV optimized for Star Swarm they had massive DX12 improvements and BEAT AMD in it, not to mention the massive DX11 improvement too (which AMD ignored). Gamers should look at who has the funding to keep up in DX11 for a while too, correct? AMD seems to have moved on to DX12 (not good for those poor gamers who can't afford new stuff, right?). You seem to only see the arguments for YOUR side. Near as I can see, NV looks good until you concentrate on where I will not play (1280x720, or crap CPUs). Also, you're basing all your conclusions on BETA games and the current state of drivers before any of this stuff is real...LOL. You can call the Unreal 4 engine unrealistic, but I'll remind you Unreal has been used in TONS of games over the last two decades, so AMD had better be good here at some point. You can't repeatedly lose in one of the most prolific engines on the planet, right? You can't just claim "that engine is biased" and ignore the fact that it is REALITY that it will be used a LOT. If all engines were BIASED towards AMD, I would buy AMD no matter what NV put out, if AMD wins everything...ROFL. I don't care about the engine, I care about the result of the cards running on the games I play. IF NV pays off every engine designer, I'll buy NV because...well, DUH. You can whine all you want, but GAMERS are buying 82% NV for a reason. I bought an INTEL i7 for a REASON. I don't care if they cheat, pay someone off, use proprietary tech etc., as long as they win, I'll buy it. I might complain about the cheating, but if it wins, I'll buy it anyway...LOL.

    IE, I don't have to LIKE Donald Trump to understand he knows how to MAKE money, unlike most of congress/Potus. He's pretty famous for FIRING people too, which again, congress/potus have no idea how to get done apparently. They also have no idea how to manage a budget, which again, TRUMP does. They have no idea how to protect the border, despite claiming they'll do it for a decade or two. I'll take that WALL please trump (which works in israel, china, etc), no matter how much it costs compared to decades of welfare losses, education dropping, medical going to illegals etc. The wall is CHEAP (like an NV card over 3-7yrs of usage at 120w+ or more savings as your link shows). I can hate trump (or Intel, or NV) and still recognize the value of his business skills, negotiation skills, firing skills, budget skills etc. Get it? If ZEN doesn't BURY Intel in perf, I'll buy another i7 for my dad...LOL.

    http://www.anandtech.com/show/9306/the-nvidia-gefo...
    Even AnandTech hit Strix speeds with the reference card. Core clocks of 250MHz free on top of 1000MHz? OK, sign me up. Four months later, likely everything does this or more, as manufacturing only improves over time. All of NV's cards OC well except for the bottom rungs. Call me when AMD wins where most gamers play (above 720p and with good CPUs). Yes, DX12 bodes well for poor people and AMD's crap CPUs. But I'm neither. Hopefully ZEN fixes the CPU side so I can buy AMD again. They still have a shot at my die-shrunk GPU next year too, but not if they completely ignore DX11, keep failing to put out game-ready drivers, lose the watt war, etc. ZEN's success (or not) will probably influence my GPU purchase too. If ZEN benchmarks suck there will probably be no profits to make my GPU drivers better etc. Think BIGGER.
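
For what it's worth, the electricity-cost argument in the comment above is easy to put numbers on. The sketch below uses the ~120W gap quoted from the review; the hours of gaming per day, the years of ownership and the price per kWh are assumptions for illustration, not figures from the article or the comment.

    // Back-of-the-envelope cost of a ~120W power gap while gaming. Only the wattage
    // delta comes from the quoted review; usage hours, years and electricity price
    // are assumed values for illustration.
    #include <cstdio>

    int main()
    {
        const double deltaWatts  = 120.0;   // quoted gap between the two cards under load
        const double hoursPerDay = 4.0;     // assumed gaming time
        const double years       = 3.0;     // assumed ownership period
        const double usdPerKwh   = 0.12;    // assumed electricity price

        const double kwh = deltaWatts * hoursPerDay * 365.0 * years / 1000.0;
        std::printf("%.0f kWh over %.0f years -> roughly $%.0f\n", kwh, years, kwh * usdPerKwh);
        return 0;
    }
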
  • anubis44 - Friday, October 30, 2015 - link

    As already mentioned, nVidia pulled out the seats, the parachutes and anything else they could unscrew and threw them out of the airplane to lighten the load. Maxwell's low power usage comes at a price, like having no hardware-based scheduler, and DX12 games will now frequently make use of one for context switching and dynamic reallocation of shaders between rendering and compute. Why? Because the XBOX One and the PS4, having AMD Radeon GCN graphics cores, can do this. So in the interest of getting the power usage down, nVidia left out a hardware feature even the PS4 and XBOX One GPUs have. Does that sound smart? It's called 'marketing': "Hey look! Our card uses LESS POWER than the Radeon! It's because we're using super-duper, secret technologies!" No, you're leaving stuff off the die. No wonder it uses less power.
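
For context on what 'async compute' means at the API level, the hypothetical Direct3D 12 sketch below creates a separate compute queue alongside the graphics queue; whether work on the two queues actually overlaps is up to the GPU and driver scheduling being debated here. The function name and structure are illustrative assumptions, not code from any shipping game.

    // Creating a dedicated compute queue next to the graphics queue - the app-side
    // half of "async compute". Whether the GPU truly overlaps the two queues depends
    // on its scheduler. Assumes 'device' already exists; error handling omitted.
    #include <d3d12.h>
    #include <wrl/client.h>

    using Microsoft::WRL::ComPtr;

    void CreateQueues(ID3D12Device* device,
                      ComPtr<ID3D12CommandQueue>& gfxQueue,
                      ComPtr<ID3D12CommandQueue>& computeQueue)
    {
        D3D12_COMMAND_QUEUE_DESC desc = {};

        desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;    // graphics (also accepts compute/copy)
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfxQueue));

        desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // compute-only queue for async work
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));
    }
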
  • RussianSensation - Thursday, September 24, 2015 - link

    A 925MHz HD 7970 is beating the GTX 960 by 32%. The R9 280X currently sells for $190 on Newegg and has another 13.5% increase in GPU clocks, which implies it would beat the 960 by a whopping 40-45%!

    The R9 290X beating the 970 by 13% in a UE4 engine is extremely uncharacteristic. I can't recall this ever happening. Also, other sites are showing the $280 R9 390 on the heels of the $450 GTX 980.

    http://www.pcgameshardware.de/DirectX-12-Software-...

    That's an extremely bad showing for NV in each competing pricing segment, except for the 980Ti card. And because UE4 has significantly favoured NV's cards under DX11, this is actually a game engine that should have favoured NV's Maxwell as much as possible. Now imagine DX12 in a brand agnostic game engine like CryEngine or Frostbite?

    In the end it's not going to matter to gamers who upgrade every 2 years, but budget gamers who cannot afford to do so should pay attention.
  • CiccioB - Friday, September 25, 2015 - link

    "A 925MHz HD 7970 is beating the GTX 960 by 32%"

    Ahahahah... and that should prove what? That a chip twice as big and consuming twice the energy can perform 32% better than another?
    Oh, sorry, you were talking about prices... yes... so you are just pointing out that that power-sucking beast has a hard time selling like the winning micro-hero that is filling nvidia's pockets, while the competing card can only be obtained when a stock-clearing operation is on?
    I can't really understand these kinds of comparisons. The GTX 960 runs against the Radeon 285, or now the 380. It performs fantastically for the size of its die and the power it draws, and it has pretty much cornered AMD's margins on boards that mount a beefy GPU like Tahiti or Tonga.
    The only hope for AMD to come out of this pitiful situation is that with the next generation and a new PP (process node) the performance/die-space ratios are closer to the competition, or they won't gain a single cent out of the graphics division for a few years yet again.
  • The_Countess - Friday, September 25, 2015 - link

    Ya, you seem to have forgotten that the HD 7970 is 3+ years old while the GTX 960 was released this year. And it has only ~30% more transistors (~4.3 billion vs ~3 billion).

    And the only reason nvidia's power consumption is better is that they cut double-precision performance on all their cards down to nothing.
  • MapRef41N93W - Saturday, September 26, 2015 - link

    So wrong it's not even funny. Maybe you aren't aware of this, but small die Kepler already had DP cut. Only GK100/GK110 had full DP with Kepler. That has nothing to do with why GM204/206 have such low power draw. The Maxwell architecture is the main reason.
  • Azix - Saturday, September 26, 2015 - link

    cut hardware scheduler?
  • Asomething - Sunday, September 27, 2015 - link

    Sorry to burst your bubble, but nvidia didn't cut DP completely on small Kepler. They cut it down some from Fermi but disabled the rest so they could keep DP on their Quadro series; there were softmods to unlock that DP. For Maxwell they did actually completely cut DP to save on die space and power consumption. AMD did the same for GCN 1.2's Fiji in order to get it onto 28nm.
  • CiccioB - Monday, September 28, 2015 - link

    I don't really care how old Tahiti is. I know it was used as a comparison with a chip which is half its size and power consumption ON THE SAME PP. So how old it is doesn't really matter. Same process, so what should be important is how good both architectures are.
    What counts is that AMD has not done anything radical to improve its architecture. It replaced Tahiti with a similarly beefy GPU, Tonga, which didn't really stand a chance against Maxwell. Those were the new proposals from both companies, Maxwell vs GCN 1.2. See the results.
    So again, go and look at how big GM206 is and how much power it draws. Then compare it with Tonga, and the only thing you can see as similar is the price. nvidia's solution beats AMD's from every point of view, bringing AMD's margins to nothing, even though nvidia is still selling its GPU at a higher price than it really deserves.
    In reality one should compare Tahiti/Tonga with GM204 for size and power consumption. The results simply put AMD's GCN architecture into the toilet. The only reasonable move was to lower the price so much that they could sell a higher-tier GPU into a lower series of boards.
    Performance per die area and per watt doesn't make GCN a hero in anything; it has only worsened AMD's position even further compared with the old VLIW architecture, where AMD fought with similar performance but smaller dies (and lower power consumption).
  • CiccioB - Monday, September 28, 2015 - link

    I forgot... about double precision... I still don't care about it. Do you use it in your everyday life? How many professional boards is AMD selling that justify the inclusion of DP units in such GPUs?
    Just for numbers on a nicely painted box? So DP is not a necessity for 99% of users.

    And apart from that, nvidia's DP units were not present on GK104/GK106 either, so the big efficiency gain came from improving their architecture (from Kepler to Maxwell), while AMD just moved from GCN 1.0 to GCN 1.2 with almost no efficiency gains.
    The problem is not whether DP units are present or not. It is that AMD could not make its already struggling architecture better in absolute terms with respect to the old version. And with Fiji they demonstrated that they could even do worse, if anyone had any doubts.
