Conclusions

After testing for this review, one thing is clear in my mind: the performance of CPUs paired with a single GPU is hitting a limit. As games get more complex, those designing the graphics and physics engines know that shifting calculations onto the GPU gives a greater boost in performance. If an engine is written to take advantage of the GPU, then the CPU does not matter much for the most part. If you can transfer textures over to the GPU and keep them in its memory, the work of the CPU is essentially done apart from light maintenance and interfacing with the network.
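
As a rough sketch of that pattern (minimal desktop OpenGL; the GL calls are standard, everything around them is illustrative), the CPU pays the upload cost once at load time, and per-frame work shrinks to binding and drawing:

```cpp
#include <GL/gl.h>   // header location varies by platform
#include <vector>

// One-time CPU-to-GPU copy at load time; the driver keeps the texels in video memory.
GLuint uploadTexture(int width, int height, const std::vector<unsigned char>& rgba)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    return tex;
}

void drawFrame(GLuint tex)
{
    // Per-frame CPU cost is now trivial: bind the resident texture and draw.
    glBindTexture(GL_TEXTURE_2D, tex);
    // ... issue draw calls; the heavy lifting happens on the GPU ...
}
```

Once assets are resident like this, a faster CPU mostly buys you faster loading screens rather than faster frames.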

Perhaps a better test would have been with more mid-range GPUs, such as 660 Tis or 7790s; with limited memory on the GPU itself, having a faster CPU and faster DDR3 memory might make a bigger difference. However, the practical upshot may be that a gamer can buy a good GPU and not have to worry that the CPU is a little underpowered. Unless you need the performance of a big CPU for other work, the big GPU should be the main priority, as the CPU becomes less of a concern at higher GPU counts and resolutions.

There is also scope for those using less powerful GPUs, where the CPU could matter a lot more. With limited GPU memory, the CPU would have to organize more texture copies between main memory and the GPU, causing other aspects of the system to become the limiting factor. This is very important when interpreting our results. With that caveat in place, our results for our testing scenarios show several points worth noting.

Firstly, it is important to test accurately, fairly, and in good faith. Performing a comparative test while misleading the audience, or without understanding how things work underneath, is a poor game to play. Leave the bias at home and let the results do the talking.

In three of our games, having a single GPU makes almost no difference to which CPU performs the best. Civilization V was the sole exception; it also has issues scaling when you add more GPUs unless you have one of the most expensive CPUs on the market. For Civilization V, I would suggest having only a single GPU and trying to get the best out of it.

In DiRT 3, Sleeping Dogs and Metro 2033, almost every CPU performed the same in a single GPU setup. Moving up the GPU count, DiRT 3 leaned towards PCIe 3.0 above two GPUs, Metro 2033 started to lean towards Intel CPUs, and Sleeping Dogs needed more CPU power when scaling up.

Above three GPUs, the extra horsepower from the single-threaded performance of an Intel CPU starts to make sense, with as much as a 70 FPS difference in DiRT 3. Sleeping Dogs also starts to become sensitive to CPU choice.

We Know What Is Missing

On my list of future updates to this article are an i5-3570K, dual- and tri-module Piledriver parts, and an i7-920 for a roundup. I will have a short window soon to rummage through a large storeroom of processors, which will be a prime opportunity to pick up some of the harder-to-acquire CPUs. Haswell is just around the corner and should provide an interesting update to data points across the spectrum, in most of its desktop forms. From now on I will aim to cover all the different PCIe lane allocations in a chipset, as well as some of the odd ones created by PLX chips.

If you have a specific processor you would like me to test for a future article, please leave a note below in the comments, and we will try to cover it. :) Top of that list is the i5-3570K, followed by Haswell, then some more AMD cores. I have 29 more processors on my 'ideal' list (if I can get them), but if anyone has any suggestions that I may not have thought of, please let me know. If I am able to get hold of some Titans, I may be in a position to retest across the board for NVIDIA results, meaning another benchmark or two as well (BioShock Infinite, perhaps).

Recommendations for the Games Tested at 1440p/Max Settings

A CPU for Single GPU Gaming: A8-5600K + Core Parking updates

If I were gaming today on a single GPU, the A8-5600K (or its non-K equivalent) would strike me as a price-competitive choice for frame rates, as long as you are not a big Civilization V player and don’t mind the single-threaded performance. The A8-5600K scores within a percentage point or two across the board in single GPU frame rates with both a HD7970 and a GTX580, and it feels the same in the OS as an equivalent Intel CPU. The A8-5600K will also overclock a little, giving a boost, and comes in at a modest $110, meaning that some of those dollars can go towards a beefier GPU or an SSD. The only downside is if you are planning some heavy non-gaming work: if the software is Piledriver-aware all might be well, although most software is not, in which case an i3-3225 or FX-8350 might be worth a look.

A CPU for Dual GPU Gaming: i5-2500K or FX-8350

Looking back through the results, moving to a dual GPU setup obviously has some issues. Various AMD platforms are not certified for dual NVIDIA cards, for example, meaning that while they may excel with AMD cards, you cannot recommend them for Team Green. There is also the dilemma that while in certain games you can be fairly GPU limited (Metro 2033, Sleeping Dogs), there are others where having the CPU horsepower can double the frame rate (Civilization V).

After the overview, my recommendation for dual GPU gaming lands at the feet of the i5-2500K. This may seem an odd recommendation: these chips are not the latest from Intel, but chances are that pre-owned they will hit a nice price point, especially if/when people move over to Haswell. If you were buying new, the obvious answer would be the i5-3570K on Ivy Bridge rather than the 2500K, so consider this suggestion a minimum CPU recommendation.

On the AMD side, the FX-8350 puts up a good show across most of the benchmarks, but falls spectacularly behind in Civilization V. If that is not the game you are aiming for and you want to invest in AMD, the FX-8350 is a good choice for dual GPU gaming.

A CPU for Tri-GPU Gaming: i7-3770K with an x8/x4/x4 (AMD) or PLX (NVIDIA) motherboard

By moving up in GPU power we also have to boost the CPU power in order to see the best scaling at 1440p. It might be a sad thing to hear, but the only CPU in our testing that provides the top frame rates at this level is the top-line Ivy Bridge model. For a comparison point, the Sandy Bridge-E six-core results were often very similar, but the price jump to such a setup is prohibitive to all but the sturdiest of wallets.

As noted in the introduction, using 3-way on NVIDIA with Ivy Bridge will require a PLX motherboard in order to get enough lanes to satisfy the SLI requirement of a minimum of x8 per card. This also raises the bar in terms of price, as PLX motherboards start around the $280 mark. For a 3-way AMD setup, an x8/x4/x4 enabled motherboard performs similarly to a PLX enabled one, and ahead of the slightly crippled x8/x8 + x4 variations. However, investing in a PLX board would ease a move to a 4-way setup, should that be your intended goal. In either scenario, at stock clocks, the i7-3770K is the processor of choice from our testing suite.

A CPU for Quad-GPU Gaming: i7-3770K with a PLX motherboard

A four-way GPU configuration is for those insane few users who have both the money and a genuine need for that much pixel power. We are all aware of the law of diminishing returns, and more often than not adding that fourth GPU is taking the biscuit at most resolutions. Despite this, even at 1440p we see awesome scaling in games like Sleeping Dogs (gaining 73% of a single card's performance when moving from three to four cards), and more recently I have seen four-way GTX680s give BF3 at Ultra settings a healthy 35 FPS minimum on a 4K monitor. So while four-way setups are insane, there is clearly a usage scenario where it matters to have card number four.

Our testing was pretty clear about which CPUs are needed at 1440p with fairly powerful GPUs. While the i7-2600K was nearly there in all our benchmarks, only two sets of CPUs ensured the highest frame rates: the i7-3770K and the six-core Sandy Bridge-E parts. As mentioned in the three-way conclusion, the price barrier to SB-E is a big step for most users (even if they are splashing out $1500+ on four big cards), giving the nod to an Ivy Bridge configuration. Of course, that i7-3770K will have to be paired with a PLX enabled motherboard as well.

One could argue that with overclocking the i7-2600K could come into play, and I don’t doubt that is the case. People building three- and four-way GPU monsters are more than likely to run extra cooling and overclock. Unfortunately that adds plenty of variables and extra testing, which will have to wait for a later date. For now our recommendation at stock, for 4-way at 1440p, is the i7-3770K.

What to Take Away From Our Testing

Ultimately the scope for testing this sort of thing is huge. The minute you deal with multiple GPUs in a system, then multiply that by different GPUs, different resolutions, and different quality settings, and extrapolate those across the normal array of benchmarks we apply to a GPU test, we might as well spend a month just looking at a single CPU platform!
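
To put a rough number on that, here is a back-of-the-envelope sketch; every count in it is an illustrative assumption rather than our actual test plan:

```cpp
#include <cstdio>

int main()
{
    // Illustrative counts only: CPUs, GPU models, cards per system,
    // resolutions, quality presets, and games under test.
    const long long cpus = 20, gpuModels = 4, cardCounts = 4;
    const long long resolutions = 3, presets = 3, games = 5;
    const long long configs = cpus * gpuModels * cardCounts
                            * resolutions * presets * games;
    // Assume ~10 minutes of machine time per benchmark run.
    std::printf("%lld configurations, ~%lld hours of runtime\n",
                configs, configs * 10 / 60);
    return 0;
}
```

Even with those modest counts the matrix comes to 14,400 runs and roughly 2,400 hours of machine time, which is why any single review has to narrow the scope.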

We know the testing done here today looks at a niche scenario: 1440p at maximum settings using very powerful GPUs. The trend in gaming, as I see it, will be towards higher resolution panels, and with Korean 27" monitors coming into the market, if you are OK with that sort of monitor it is one direction to take to improve your gaming experience. 4K is on the horizon, and at 2.25x the pixels of 1440p it means either more pixel-pushing power or lower resolutions/settings if you want the quality. I like testing at 1440p/max settings because it pushes the GPU and, hopefully, the rest of the system. If you are a gamer, you want the best experience, and finding the hardware to deliver it is one of the most important parts of that process (after getting good at the game you want to play).

So these results are offered to aid a purchasing decision, based on our small sample size. No sample size is ever going to be big enough (unless you are able to test in Narnia), but we hope to expand on this in the future. Consider the data and read our conclusions; you may have a different interpretation of the data. Let us know what you think!

242 Comments

  • Pheesh - Wednesday, May 8, 2013 - link

    "2) Min FPS falls under the issue of statistical reporting. If you run a game benchmark (Dirt3) and in one scene of genuine gameplay there is a 6-car pileup, it would show the min FPS of that one scene. So if that happened on an FX-8350 and min-FPS was down to 20 FPS when others didn't have this scene were around 90 FPS for minimum, how is that easily reported and conveyed in a reasonable way to the public? A certain amount of acknowledgement is made on the fact that we're taking overall average numbers, and that users would apply brain matter with regard to an 'average minimum'."

    The point of a benchmark is to provide a consistent test that can be replicated exactly on multiple systems. If you're not able to do that then you aren't really benchmarking anything. That's why 99% of games are not tested in multiplayer but rather in single player experiences they can strictly control (i.e. with test demos). If for some reason the game engine is just that unpredictable even in a strictly controlled test situation, you could do multiple trials to take a minimum average.

    Minimum FPS is an extremely necessary test and it's easily possible to do. Other sites include it with all of their gaming benchmarks.
  • Spunjji - Wednesday, May 8, 2013 - link

    That doesn't necessarily mean that the numbers they give you are worth a damn...
  • beginner99 - Thursday, May 9, 2013 - link

    "Minimum FPS is an extremely necessary test and its easily possible to do. Other sites include it with all of their gaming benchmarks."

    Or you could do 5 runs, discard the worst and best and average the rest (min, max average FPS).

    http://en.wikipedia.org/wiki/Truncated_mean

    But yeah, statistics is extremely complex and error prone. I once read that a large proportion of the statistics in scientific publications contain errors to some degree (but not necessarily errors that make the results and conclusions completely wrong!).

    Or if you actually know such a "special scene" can happen, discard all tests where it happened.
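
    For illustration, a minimal C++ sketch of the truncated mean described above (the five minimum-FPS runs are placeholder numbers, with one freak-scene outlier):

    ```cpp
    #include <algorithm>
    #include <cstdio>
    #include <numeric>
    #include <vector>

    int main()
    {
        // Five minimum-FPS runs; one run hit a freak 6-car-pileup scene.
        std::vector<double> runs = {88.0, 91.0, 20.0, 90.0, 89.0};
        std::sort(runs.begin(), runs.end());
        // Truncated mean: drop the best and worst run, average the middle three.
        double sum = std::accumulate(runs.begin() + 1, runs.end() - 1, 0.0);
        std::printf("truncated mean: %.1f FPS\n", sum / (runs.size() - 2));
        return 0;
    }
    ```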
  • beginner99 - Thursday, May 9, 2013 - link

    The main issue here is actually available time, i.e. the amount of work. Averages over 3 runs aren't really that great; if you could run everything 100 times, such "special scenes" would be irrelevant.
  • mapesdhs - Monday, May 20, 2013 - link


    Ian,

    P55 boards can offer very good RAID0 performance with SSDs, or more importantly
    RAID1 or RAID10 (I hope those with RAID0 have some kind of sensible backup
    strategy). See my results:

    http://www.sgidepot.co.uk/misc/ssd_tests.txt

    One will obviously get more out of newer SSDs using native SATA3 mbds for the
    sequential tests, but newer tech won't help 4K numbers that much. In reality few
    would notice the difference between each type of setup. This is especially true
    given how many later mbds use the really awful Marvell controllers for most of the
    SATA3 ports (such a shame only a couple are normally controlled by the Intel or
    other chipset); performance would be better with an older Intel SATA2. I expect
    many just use the non-Marvell ports only if they can.

    What matters is to have an SSD setup of some kind in the 1st place. My P55 system
    (875K) boots very quickly with a Vertex3, gives a higher 3DMark13 physics score than
    a 3570K, and GPU performance with two 560 Tis is better than a stock 680. It's
    really the previous gen of hw which can present more serious bottlenecks (S775,
    AM2, DDR2, etc.), but even then results can often be surprisingly decent, eg. oc'd
    Ph2 965, etc.

    Also, RAID0 with SSDs often negates the potential of small I/O performance.
    Depending on the game/task, this means SSD RAID0 might at times be slower than a
    single good SSD.

    Dribble is right in that respect, improvements are often not as significant as
    people think or expect (I've read sooo many posts from those who have been
    disappointed with their upgrades), though it does vary by game, settings, etc.
    Games which impose a heavier CPU loading (physics, multiplayer, AI etc.) might see
    more useful speedups from a better CPU, but not always. There are so many factors
    involved, it can become complicated very quickly.

    Ian.
  • Felix_Ram - Sunday, May 26, 2013 - link

    Your 120 Hz screen has a frame latency of about 8 ms, meaning it effectively can't show you more than 60 new fps. Anything above that, it shows you the same pixels twice. So basically you are watching reruns, and anyone who states that he can tell a difference between 60 fps and 60+ fps is basically kidding himself.

    http://forums.anandtech.com/showthread.php?t=23049...

    http://forums.steamgames.com/forums/showthread.php...
  • Felix_Ram - Sunday, May 26, 2013 - link

    Can't edit. A screen latency of about 16 ms*
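
    For reference, the refresh interval is just the reciprocal of the refresh rate, so a 120 Hz panel refreshes roughly every 8.3 ms and a 60 Hz panel every 16.7 ms; a minimal sketch of the arithmetic:

    ```cpp
    #include <cstdio>

    int main()
    {
        // Refresh interval in milliseconds = 1000 / refresh rate in Hz.
        for (double hz : {60.0, 120.0})
            std::printf("%.0f Hz -> one refresh every %.2f ms\n", hz, 1000.0 / hz);
        return 0;
    }
    ```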
  • tehh4ck3r - Wednesday, May 8, 2013 - link

    You should test a Phenom II X4 965 and an i5-3570K.
  • B-Unit1701 - Wednesday, May 8, 2013 - link

    And throw in a 45nm Core 2, preferably over 3.0 GHz.
  • boulard83 - Wednesday, May 8, 2013 - link

    Really great review and testing. As for CPUs to add to the list, you could add some very cheap solutions like the G1610 and G2020 to see how these $40-60 chips perform against all the other chips, or simply compare them to an older E6700 like the one in the test. Other than that, you could also add a 3820 to the testing simply to lower the cost of the X79 setup, making it a little more mainstream vs a $600 3930K.
