One question when building or upgrading a gaming system is which CPU to choose – does it matter if I have a quad core from Intel, or a quad module from AMD? Perhaps something simpler will do the trick, and I can spend the difference on the GPU. And if you are running a multi-GPU setup, does the CPU have a bigger effect? These are the questions I set out to help answer.

A few things before we start:

This set of results is by no means extensive or exhaustive. For the sake of expediency I could not select 10 different gaming titles across a variety of engines and then test them in seven or more different configurations per game and per CPU, nor could I test every different CPU made. As a result, on the gaming side, I limited myself to one resolution, one set of settings, and four very regular testing titles that offer time demos: Metro 2033, DiRT 3, Civilization V and Sleeping Dogs. This is obviously not Skyrim, Battlefield 3, Crysis 3 or Far Cry 3, which may be more relevant in your setup.

The arguments for and against time demo testing, as well as the arguments for taking FRAPS measurements of gameplay sequences, are well documented (time demos might not be representative, versus the consistency and realism of FRAPS-ing a repeated run); however, all of our tests can be run on home systems to get a feel for how a system performs. Below is a discussion of AI, one of the common uses for a CPU in a game, and how it affects the system. Of our benchmarks, DiRT 3 plays out an actual race, so AI is included in the result, while the turn-based Civilization V has no direct AI concern beyond the time between turns.

All this combines with my position as the senior motherboard editor here at AnandTech – the position gives me access to a wide variety of motherboard chipsets, lane allocations and a fair number of CPUs. GPUs are not in large supply on my side of the reviewing area, but ASUS and ECS have provided my test beds with HD 7970s and GTX 580s respectively, and those cards have been integral parts of my test beds for 12 and 21 months. The task set before me in this review would be almost a career in itself if we were to expand to more GPUs and more multi-GPU setups, so testing up to 4x 7970 and up to 2x GTX 580 is a more than reasonable place to start.

Where It All Began

The most important point to note is how this set of results came to pass. Several months ago I came across a few sets of testing by other review websites that floored me – simple CPU comparison tests for gaming that were spreading like wildfire among the forums, with some results contradicting the prevailing opinion on the topic. These results were pulling all sorts of lurking forum users out of the woodwork to offer an opinion, and being the well-adjusted scientist I am, I set forth to confirm whether the results were, at least in part, valid.

What came next was a shock – some of these tests had no real explanation of the hardware setups. While a basic overview of the hardware was supplied, there was no rundown of the settings used, and no attempt to justify the findings, which had obviously caused quite a stir. Needless to say, I was stunned by the lack of thorough testing, as well as by the results and a lot of the conversation that followed, particularly from avid fans of Team Blue and Team Red. I planned to right this wrong the best way I know how – with science!

The other reason for pulling together the results in this article is perhaps the one I originally started with – the need to update drivers every so often. Since the Ivy Bridge release, I have been using Catalyst 12.3 and GeForce 296.10 WHQL on my test beds. This causes problems – older drivers are not optimized, readers sometimes complain when older drivers are used, and new games cannot be added to the test bed because they might not scale correctly on those older drivers. So while some reviews on the internet update drivers between tests and keep the old numbers (leading to skewed results), actually taking time out to retest a number of platforms solely on the new drivers, just for more data points, is a large undertaking.

For example, testing new drivers over six platforms (CPU/motherboard combinations) would mean: six platforms, four games, seven different GPU configurations, ~10 minutes per test plus 2+ hours to set up each platform and install a new OS/drivers/set up benchmarks. That makes 40+ hours of solid testing (if all goes without a second lost here or there), or just over a full working week – more if I also test the CPU performance for a computational benchmark update, or exponentially more if I include multiple resolutions and setting options.
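
To put that arithmetic in one place, here is a minimal back-of-envelope sketch in Python using the figures above; the numbers are the estimates from the text, not measured values.

```python
# Rough time budget for a driver-update retest, using the article's estimates.
platforms = 6          # CPU/motherboard combinations
games = 4              # benchmark titles
gpu_configs = 7        # GPU arrangements per platform
minutes_per_test = 10  # approximate run time per benchmark
setup_hours = 2        # OS/driver/benchmark install per platform

testing_hours = platforms * games * gpu_configs * minutes_per_test / 60
total_hours = testing_hours + platforms * setup_hours

print(f"Benchmarking: {testing_hours:.0f} h, with setup: {total_hours:.0f} h")
# Benchmarking: 28 h, with setup: 40 h
```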

If this is all that is worked on that week, it means no new content – so it happens rarely, perhaps once a year or before a big launch. That time was now, and when I started this testing I was moving to Catalyst 13.1 and GeForce 310.90, which by the time this review goes live will have already been superseded! In reality, I have been slowly working on this data set for the best part of 10 weeks while also reviewing other hardware (while keeping those reviews on consistent driver comparisons). In total this review encapsulates 24 different CPU setups with up to six different GPU configurations each, meaning 430 data points, 1375 benchmark loops and over 51 hours of GPU benchmarks alone, without considering setup time or driver issues.

What Does the CPU do in a Game?

A lot of game developers use customized versions of game engines, such as the EGO engine for driving games or the Unreal engine. The engine provides the underpinnings for a lot of the code, and the optimizations therein. The engine also decides what in the game gets offloaded onto the GPU.

Imagine the code that makes up a game as a linear sequence of events; to go through the game quickly, we would simply need the fastest single-core processor available. Of course, games are not like this – much of the work can be parallelized, such as the vector calculations for graphics, which were the first to be moved from the CPU to the GPU. Over time, more parts of the code have made the move, with physics and compute being the main additions in recent months and years.

The GPU is good at independent, simple tasks – calculating which color goes in which pixel is an example of this, along with additional processing and post-processing features (FXAA and so on). If a task is linear, it lives on the CPU, such as loading textures into memory or negotiating which data to transfer between memory and the GPUs. The CPU also takes on independent but complex tasks, as the CPU is the one best suited to complicated logic.

Very few parts of a game fall under this heading of ‘independent yet complex’. Anything suitable for the GPU but not yet ported over will be here, and the big one usually quoted is artificial intelligence. Deciding where an NPC is going to run, shoot or fly could be considered a very complex set of calculations, ideal for fast CPUs. The counter argument is that games have had complex AI for years – the number of times I personally was destroyed by a Dark Sim in Perfect Dark on the N64 is testament either to my uselessness or to the fact that complex AI can be achieved without much CPU power. Either way, AI is unlikely to be a limiting factor in frame rates due to CPU usage.

What is most likely to be the limiting factor is how the CPU manages data. As engines evolve, they try to move data between the CPU, memory and GPUs less – if textures can be kept on the GPU, they will stay there. But some engines are not as perfect as we would like them to be, leaving the CPU as the limiting factor. As CPU performance increases, and as the people who write the engines games are built on come to understand the ecosystem, the CPU should become less of an issue over time. All roads point towards the PS4, of course, and its 8-core Jaguar processor. Is this all that is needed for a single GPU, albeit in an HSA environment?

Multi-GPU Testing

Another angle I wanted to test beyond most other websites is multi-GPU. There is content online dealing mostly with single GPU setups, with a few for dual GPU. Even though the number of multi-GPU users is actually quite small globally, the enthusiast markets are clearly geared for it. We get motherboards with support for four GPU cards; we have cases that will support a dual processor board as well as four double-height GPUs. Then there are GPUs being released with two sets of silicon on a PCB, wrapped in a double or triple width cooler.

More often than not on a forum, people will ask ‘what GPU for $xxx’, and some of the suggestions will be for two GPUs at half the budget each, as this commonly offers more performance than a single GPU if the game and the drivers all work smoothly (at the cost of power, heat, and bad-driver scenarios). The ecosystem supports multi-GPU setups, so I felt it right to test at least one four-way setup. Although with great power comes great responsibility – there was no point testing 4-way 7970s at 1080p.

Typically in this price bracket, users will go for multi-monitor setups along the lines of 5760x1080, or big-monitor setups like 1440p or 1600p, and the mega-rich might try 4K. Ultimately the high-end enthusiast with cash to burn is going to gravitate towards 4K, and I cannot wait until that becomes a reality. So as a median point in all of this, we are testing at 1440p and maximum settings. This will put a strain on our Core 2 Duo and Celeron G465 samples, but should be easy pickings for our multi-processor, multi-GPU beast of a machine.

A Minor Problem In Interpreting Results

Throughout testing for this review, there were clearly going to be some issues to consider. Chief among these is the question of consistency, in particular when something like Metro 2033 decides to have an ‘easy’ run that reports 3% higher than normal. For that specific example we get around the problem by double testing: as the easy run typically appears in the first batch, we run two or three batches of four and disregard the first batch.
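
As an illustration only, that batch-and-discard protocol might look something like the sketch below; run_time_demo is a hypothetical stand-in for launching a game's built-in time demo and reading back its average FPS.

```python
from statistics import mean

def run_time_demo() -> float:
    """Hypothetical stand-in: launch the game's time demo and return its average FPS."""
    raise NotImplementedError

def benchmark(batches: int = 3, runs_per_batch: int = 4) -> float:
    """Run several batches of time demos and discard the first batch,
    where the anomalous 'easy' run tends to appear."""
    results = [[run_time_demo() for _ in range(runs_per_batch)]
               for _ in range(batches)]
    kept = [fps for batch in results[1:] for fps in batch]  # drop batch 0
    return mean(kept)
```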

The other, perhaps bigger, issue is interpreting results. If I get 40.0 FPS on a Phenom II X4-960T, 40.1 FPS on an i5-2500K, and then 40.2 FPS on a Phenom II X2-555 BE, does that make the results invalid? The important points to recognize here are statistics and system state.

System State: We have all had times booting a PC when it feels sluggish, but this sluggish behavior disappears on reboot. The same thing can occur with testing, usually as a result of bad initialization or a bad cache optimization routine at boot time. As a result, we try to spot these circumstances and re-run. With more time we would take 100 different measurements of each benchmark, with reboots, and discard the outliers. Time constraints outside of academia unfortunately do not give us this opportunity.

Statistics: System state aside, frame rate values will often fluctuate around an average. This means (depending on the benchmark) that the result could be +/- a few percentage points on each run. So what happens if you have a run of four time demos, and each of them is +2% above the ‘average’ FPS? From the outside, as you will not know the true average, you cannot say whether the result is valid, because the data set is extremely small. If we take more runs, we can calculate the variance (in the statistical sense) and the standard deviation, and perhaps report the mean, median and mode of the set of results.

As always, the main constraint in articles like these is time – the quicker the publication, the less the testing, the larger the error bars and the higher the likelihood that some results are skewed because a particular benchmark run just happened to be good or bad. So the example given above of the X2-555 getting the best result comes down to interpretation – each result might be +/- 0.5 FPS on average, and because they are all so similar we are actually GPU limited; what matters is whether the GPU had a good or bad run in this circumstance.

For this example, I batched 100 runs of my common WinRAR test from motherboard testing on an i5-2500K CPU with a Maximus V Formula. Results varied between 71 seconds and 74 seconds, with a large gravitation towards the lower end. To represent this statistically, we normally use a histogram, which separates the results into ‘bins’ (e.g. 71.00 seconds to 71.25 seconds) whose width reflects how finely the final result needs to be resolved. Here is an initial representation of the data (time vs. run number), and a few histograms of that data, using bin sizes of 1.00 s, 0.75 s, 0.50 s, 0.33 s, 0.25 s and 0.10 s.


As we get down to the smaller bin sizes, there are two large groupings of results, around ~71 seconds and ~72 seconds. The overall mean of the data is 71.88 seconds due to the outliers around 74 seconds, with the median at 72.04 seconds and a standard deviation of 0.660 seconds. What is the right value to report? The overall mean? The peak? The mean +/- the standard deviation? With the results skewed around two values, what happens if I do 1-3 runs and get ~71 seconds and none around ~72 seconds?
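
For anyone who wants to run the same kind of analysis on their own timing data, a minimal sketch is below; it assumes the 100 completion times have been saved one per line in a hypothetical winrar_runs.txt and uses NumPy for the binning and summary statistics.

```python
import numpy as np

# Load the 100 WinRAR completion times (seconds), one result per line.
times = np.loadtxt("winrar_runs.txt")  # hypothetical log of the runs

print(f"mean   = {times.mean():.2f} s")
print(f"median = {np.median(times):.2f} s")
print(f"stdev  = {times.std(ddof=1):.3f} s")  # sample standard deviation

# Histograms at the bin widths used in the charts above.
for width in (1.00, 0.75, 0.50, 0.33, 0.25, 0.10):
    bins = np.arange(times.min(), times.max() + width, width)
    counts, _ = np.histogram(times, bins=bins)
    print(f"bin width {width:.2f} s -> counts {counts.tolist()}")
```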

Statistics is clearly a large field, and without a large sample size, most numbers can be one-off results that are not truly reflective of the data. It is important to ask yourself every time you read a review with a result – how many data points went into that final value, and what analysis was performed?

For this review, we typically take four runs of each of our GPU tests, except for Civilization V, which is extremely consistent (+/- 0.1 FPS). The result reported is the average of those four values, minus any results we feel are inconsistent. At times runs have been repeated in order to confirm a value, but this will not be noted in the results.
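
A hedged sketch of that reporting rule is below – average the runs, but throw out any run that strays too far from the median. The 5% threshold here is an illustrative choice of mine, not a figure from the testing.

```python
from statistics import mean, median

def report_fps(runs: list[float], tolerance: float = 0.05) -> float:
    """Average a set of benchmark runs, excluding any run that deviates from
    the median by more than `tolerance` (fractional, illustrative value)."""
    mid = median(runs)
    kept = [r for r in runs if abs(r - mid) / mid <= tolerance]
    return mean(kept)

print(report_fps([40.1, 40.3, 39.9, 46.0]))  # the 46.0 outlier is dropped -> ~40.1
```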

The Bulldozer Challenge

Another purpose of this article was to tackle the problem surrounding Bulldozer and its derivatives, such as Piledriver and thus all Trinity APUs. The architecture is such that Windows 7, by default, does not assign new threads to fresh modules – the ‘freshly installed’ behavior is to double up on threads per module before moving to the next. By installing a pair of Windows Updates (which do not show up in Windows Update automatically), we get an effect called ‘core parking’, which assigns each of the first series of threads to its own module, giving it access to a module’s pair of INT units and its FP unit rather than having pairs of threads competing for the prize. This affects variably threaded loads the most, particularly from 2 to 2N-2 threads, where N is the number of modules in the CPU (thus 2 to 6 threads on an FX-8150). It should come as no surprise that games fall into this category, so we test with and without the core parking updates in our benchmarks.
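
To make the difference concrete, here is a small illustrative sketch (my own simplification, not anything Windows exposes directly) of how the first few threads map to modules under the default policy versus the hotfixed one-thread-per-module-first policy, using a four-module FX-8150 as the example.

```python
def default_policy(thread_idx: int, modules: int = 4) -> int:
    """Pre-hotfix behavior: fill both integer cores of a module before
    moving to the next, so threads 0 and 1 share module 0, and so on."""
    return (thread_idx // 2) % modules

def parked_policy(thread_idx: int, modules: int = 4) -> int:
    """Post-hotfix ('core parking') behavior: spread the first N threads
    across the N modules before doubling up."""
    return thread_idx % modules

for t in range(6):
    print(f"thread {t}: default -> module {default_policy(t)}, "
          f"parked -> module {parked_policy(t)}")
```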

Hurdles with NVIDIA and 3-Way SLI on Ivy Bridge

Users who have been keeping up to date with motherboard options on Z77 will understand that there are several ways to put three PCIe slots onto a motherboard. The majority of sub-$250 motherboards will use three PCIe slots in a PCIe 3.0 x8/x8 + PCIe 2.0 x4 arrangement (meaning x8/x8 from the CPU and x4 from the chipset), allowing either two-way SLI or three-way Crossfire. Some motherboards use a different Ivy Bridge lane allocation option such that we get a PCIe 3.0 x8/x4/x4 layout, giving three-way Crossfire but only two-way SLI. In fact, in this arrangement, populating the final x4 slot with a sound/RAID card disables two-way SLI entirely.

This is due to a not widely publicized requirement of SLI – it needs at least an x8 lane allocation per card in order to work (either PCIe 2.0 or 3.0). Anything less than this on any GPU and you will be denied in the software. So putting in that third card causes the second slot to drop to x4, disabling two-way SLI. There are motherboards with a switch to change to x8/x8 + x4 in this scenario, but we are still capped at two-way SLI.
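
The rule reduces to a simple check – every card participating in SLI needs at least eight lanes. A minimal sketch, assuming you already know the lane width each populated slot ends up with:

```python
def sli_supported(lane_widths: list[int], min_lanes: int = 8) -> bool:
    """SLI requires every participating GPU to sit on at least an x8 link
    (PCIe 2.0 or 3.0); Crossfire has no such restriction."""
    return all(width >= min_lanes for width in lane_widths)

print(sli_supported([8, 8]))      # True  - x8/x8 allows two-way SLI
print(sli_supported([8, 4, 4]))   # False - x8/x4/x4 blocks SLI entirely
print(sli_supported([16, 8, 8]))  # True  - PLX-enabled x16/x8/x8 three-way SLI
```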

The only way to get three-way or four-way SLI is via a PLX 8747-enabled motherboard, which adds significantly to the cost of the board. This should be kept in mind when dealing with the final results.

Power Usage

It has come to my attention that even if the results were to come out X > Y, some users may call out that the better processor draws more power, which at the end of the day costs more money if you add it up over a year. For the purposes of this review, we are of the opinion that if you are gaming on a budget, then high-end GPUs such as the ones used here are not going to be within your price range.

Simple, fun gaming can be had on a low-resolution, limited-detail system for not much money – for example, at a recent LAN I enjoyed 3-4 hours of TF2 fun on my AMD netbook with integrated HD 3210 graphics, even though I had to install the ultra-low-resolution texture pack and mods to get 30+ FPS. But I had a great time, and thus the beauty of the high-definition graphics of bigger systems might not be a concern as long as the frame rates are good.

But if you want the best, you will pay for the best, even if it comes at the electricity cost. Budget gaming is fine, but this review is designed to focus on 1440p with maximum settings, which is not a budget gaming scenario.

Format Of This Article

On the next couple of pages, I will go through our hardware for this review in detail, including CPUs, motherboards, GPUs and memory. Then we will move to the actual hardware setups, detailing CPU speeds and memory timings (on motherboards that actually enable XMP). Also important to note are the motherboards being used – for completeness I have tested several CPUs in two different motherboards because of GPU lane allocations.

We are living in an age where PCIe switches and additional chips are used to expand GPU lane layouts, so much so that there are up to 20 different configurations for Z77 motherboards alone. Sometimes the lane allocation makes a difference, and it can make a large difference using three or more GPUs (x8/x4/x4 vs. x16/x8/x8 with PLX), even with the added latency sometimes associated with the PCIe switches. Our testing over time will include the majority of the PCIe lane allocations on modern setups, but for our first article we are looking at the major ones we are likely to come across.

The results pages will start with a basic CPU analysis, running through my regular motherboard tests on the CPU. This should give us a feel for how much power each CPU has in dealing with mathematics and real world tests, both for integer operations (important on Bulldozer/Piledriver/Radeon) and floating point operations (where Intel/NVIDIA seem to perform best).

We will then move to each of our four gaming titles in turn, in our six different GPU configurations. As mentioned above, in GPU-limited scenarios it may seem odd when a sub-$100 CPU scores higher than one north of $300, but we hope to explain the tide of results as we go.

I hope this will be an ongoing project here at AnandTech, and over time we can add more CPUs, 4K testing, perhaps even show four-way Titan should that be available to us. The only danger is that on a driver or game change, it takes another chunk of time to get data! Any suggestions of course are greatly appreciated – drop me an email at ian@anandtech.com. Our next port of call will most likely be Haswell, which I am very much looking forward to testing.


Comments


  • TheJian - Thursday, May 16, 2013

    Am I supposed to not respond now? You just said I have no manners, am uncivilized, have no objectivity, and previously I’m offensive and it’s ok to HATE me…ROFL. POT – MEET KETTLE. If you were to take your own advice, shouldn’t you have just said “you could word it differently but I agree with the data” and left it at that? No, you took it much further with what amounts to an ad hominem attack on ME. You posted 333 words yourself to do it. :) But thanks for recognizing the work I put in :) I can type 60+wpm though so, not that much effort really and two to three times that with Dragon Naturally Speaking premium easily (pick up a copy if you can't keep up - 1600 words in about 9 minutes...ROFL v12.5 rocks). The homework takes time, but that was already done before they wrote this article as I read everything I can find on stocks I track and parts I'm interested in.

    I've watched this site (and toms) since they were born. 1997 I think here. I did leave toms when Tom Pabst himself forced out Van Smith over the sysmark crap years ago (and removed his name from ALL of his articles he wrote there, putting "tomshardware staff" or some such in Van's name's place). That was AWFUL to watch and I loved reading Tom Pabst's stuff for years. Millions of people were snowed there while they made AMD look like crap in articles with sysmark flagging Intel chips and turning off SSE on AMD. Eventually people like Van, I and others said enough that people took notice and it devalued his site before he sold it. Rightfully so if you ask me, as he was basically an Intel shill at that point as many had pointed out by then.

    At some point somebody has to stand up and tell the truth like Van tried to do. It cost him his job, but the message made it through. Someone has to be willing to “take the hate” for other people's benefit. :) Or nothing will ever get fixed right? People reviewing stuff for millions need some kind of checks and balances right? There are NONE right now in our govt and look what’s happening there as they spend us into bankruptcy amid scandal after scandal kicking our financial future down the road time and again. If we had checks and balances for REAL our president would be in jail along with many dirty congress members on both sides (he just got caught wiretapping the AP – freedom of speech is being trampled, gun rights assaulted, our constitution is attacked at every turn!). People are DEAD possibly because this guy did NOTHING to save them in Benghazi for 7 hours under attack. What happened in Boston? Etc…I'm seeing the same stuff happen here that happened at Tomshardware. Someone has to correct them instead of congratulating them right? Otherwise so many people will make the wrong purchasing decisions based on bad advice from influential and supposedly trusted people (I still like this site, just want back to the neutral stance it used to have for years). In this economy I'd be thanking anyone who takes the time and effort to attempt to save me from buying a piece of junk with my hard earned money. In a nutshell this is why I take the time to show another side for people to consider. They don’t have to believe me, that’s the point of the links, quotes from those links etc. I WANT you to look at the data and make up your own minds. Either it costs this site tons of hits eventually and wakes them up or they need to be put out of business. If nobody ever complained about Win8 how long would we get crap like that? Look how fast it got an 8.1 version as a response and the product manager fired. Put their feet to the fire or they don’t stop ever.

    Anand would have to be seeing his sites traffic go down.
    http://www.alexa.com/siteinfo/anandtech.com#
    If someone takes the time to prove you’re putting up bad data article after article and there is no defense put up (because there isn’t a defense) you are eventually taken down. Jared attacked me in Aug 2012. Pity you can’t go back a year but you can see this site is sliding at least at alexa for the last 6 months. Until they quit yanking our chains I’ll keep yanking theirs if my time allows! Toms went from 10mil to 2mil in just a couple years. I’m not sure what he sold for but it was far less than he’d have gotten before attacking Van, the article shenanigans etc.

    Tell me, what parts of my comments were UNCIVILIZED or RUDE? Did I call anyone a name? Say they are stupid? Did I attack ANYONE personally? Did I do what you did? Actually I did quite the opposite. I said they are NOT ignorant and know exactly what they're doing here (hmm, insinuated intelligence…That’s a good comment right?). I even let Ian off multiple times (he's just doing what he's told no doubt) and noted from the get go he did a lot of work, but due to "someone" pushing bad data to hide AMD's faults it's all wasted. I attacked the crap this site is pushing (crap too harsh for you?), not any of the people themselves (who I'm sure are probably nice guys - well, I can't say that about them all, Jarred attacked ME not the data when I buried Ryan's conclusions and benchmarks). Did I swear at someone? Did I spew hate like the guy who gave a one liner to me? He's claiming its ok to HATE me? When did I ever cross a line like that? Is a debate of the facts worthy of HATE today?

    If you hate the length of my post don't read it. Take your own advice, move along please. Was it necessary for you to post 1000 words back? :) I'd say even the HATERS took me seriously (the only ones that responded besides Tential – what 2 total plus a polite tential?) and saw the arguments were valid and listened. ALL of them did in their own way. Only the first below wasn’t rude as you say and just discussed what I was saying- tential - no flare up from him, just good old fashioned debate:
    "I don't agree with your analysis on consoles but everything else sure. Gaming for 98% of people is 1080p."

    Tential clearly got the message despite our console differences (they weren’t the point really). I’m sure tons of others did even if they’re silent about it. I used to be SILENT. You can’t argue with steampowered.com’s data, nor everyone else showing the res you SHOULD be running here. You can confirm via techreport, hardocp, tomshardware, etc I gave plenty of links and quotes for people to analyze.

    "We might all hate this guy (for good reason) but the words he writes regarding CPU performance in this article have a lot of truth."

    WOW...But at least he saw the truth, and his name is hilarious to me :) Did I attack back? NOPE. Even when he seriously crossed a line IMHO I did nothing but a polite rebuttal with some questions – still waiting for why he thinks it’s ok to HATE people for simple comments, but I don’t mind either way, even he got the message. Worse you agreed with the hate...LOL

    Here’s you:
    "Agreed. What he wrote is offending, emotional and hardly objective. However, there's a truth hidden in there somewhere. Consider the following scenario."

    Comic, I said nothing bad about people, just their data. But to you, it's OK to hate me for it and then toss comments about my character...This goes back to the double standard I mentioned in my previous posts.

    There is nothing wrong with a vigorous debate of the facts in any case and I was CIVIL though critical. This was an article about the proper choice of a GAMER cpu. As presented the data is lies as they presented a situation that doesn’t exist (as even you pointed out in your scenario basically). It would be just "incorrect" if they didn't know what they were doing. But they DO know. They know they’re hiding FCAT data as I pointed out. AMD only talks to them as Guru3d recently pointed out (hilbert did). Odd, yes?

    I find it funny I already answered your questions before with comments like this (but why not do another 1600 word essay for you) :) :
    “People will eventually JUDGE this site accordingly if you keep this stuff up. I sincerely hope this site returns to good neutral data soon.”

    This doesn’t tell you why I’m doing it? I claim OTHER websites I pointed to are OBJECTIVE and VALID. I piled on with my own observations, but I was merely quoting others who all disagree with this site. That’s not subjective that’s FACT. It’s not my point of view; it is the same one as EVERY other site reporting this type of data. Hardocp, Techreport, PCper, Tomshardware. How many do I need before you call me objective? I can give more sites and another 1000 words of quotes…LOL. I can scientifically claim the resolution they chose here to make all cpu’s show the same perf because the gpu is bottlenecking everything, represents less than 1% of the population and I will be RIGHT. Introducing a variable that totally invalidates the entire premise of the experiment is not subjective, it’s misleading at best and easily proved wrong as I have done. My message travelled far enough as nobody missed it as far as I can tell. Mission accomplished, gentle or NOT ;)

    If you don’t like my posts, To quote you:
    “why can't you just look past them? What is your problem?”
    “why don't you just... leave?”
    :) Gee, it seems I've upset you ;)

    "What are you doing here - are you some sort of freedom fighter for objective data on the internet?"

    Already answered and YES, why not :) What are you doing here? Are you some kind of smart alec that objects to people voicing their RELEVANT opinions in a "comment" section? Silly me, I thought that's what this section is for. Can we get back to discussing the data now? You've distracted us all from the topic at hand long enough and it isn't changing the data one bit.
  • OwnedKThxBye - Thursday, May 16, 2013

    Sorry for seriously crossing the line good sir but I still reserve the right to hate you if I choose. A wise man once wrote “We are FAR to sensitive today. It's like nobody can take a criticism these days and the person who gives it is evil...LOL.” <--- this is you =). Keep in mind I was also the first one to agree with you… What you write never fails to bring a smile to my face TheJian, and I hope you don’t stop pointing out the truth any time soon. Just try to keep the next comment shorter so we can read it without so much scrolling..... we don't all own LCDs with 1440+ vertical pixels like we are told to. In the end all we can pray for is a few less gamers to run out and buy an A8-5600K for their HD7970 and for a few of your points to be taken into consideration next time round.
  • yhselp - Sunday, May 26, 2013

    First of all, I’d like to apologize for this long-delayed response – I simply didn’t have the time.

    Truly epic. To start off, you haven't upset me, really; not before and not now - I was genuinely curious as to what it is that you think you're accomplishing by all this (not just this article, others as well). Thus, I set forth to playfully provoke you into responding. Success. Now that you've answered, and to be fair – more clearly than expected, I have a better understanding of what urges you to do what you do. Such a peculiar case you are, I am fascinated – are you a troll or aren't you? Somewhere in between I guess. The arguments you provide are sound, although I still think they're a bit… let's not use a word as I'm sure you will twist it into a meaning of your choosing (not originally intended); and most of what you say is, well, adequate – all that makes you not-troll after all. Despite the fact that you would've probably responded to anything anyway, I still feel that a 'thank you' on my side is necessary for your taking the time to respond; and I'm not being ironic here.

    Now, let's get a few things out of the way. Note that I'm neither defending nor criticizing AnandTech, I'm simply voicing an opinion just the way you are. Very important – I never said it was okay to hate you or anybody for that matter, you deduced that yourself. I simply agreed with the gist of what OwnedKThxBye said. You cannot cling to every word you read online; I don't think anybody here truly feels hate, certainly not me. People just throw words around in the heat of the moment just the way you debate vigorously, I'm sure you understand that. The semantic field of the word 'hate' in 21st century contemporary English is huge, especially when used in this type of discourse.

    Why would you blame me for distracting “us all” from the topic at hand when you are the King of Sidetracking? Gotta love your insights on US politics – it's like watching one of those documentaries on History and the like. My favorite part is about “gun rights” – nice, so eloquently put. The only reason we still have the Second Amendment is because the US cannot just change the Bill of Rights, which is part of the oldest acting constitution in the world – it's a matter of national pride. The reason it was written is a historical occurrence no longer valid. During Colonial times the settlers had to harbor British soldiers who often mistreated them, and so the settlers needed a means of protection. That is how the Second Amendment came to be. Obviously, this is no longer the case. You could argue the right to bear arms is part of Americanness, but this doesn't change the fact that the original, intended reason for the Second Amendment is a thing of the past.

    Checks and balances for the consumer computer industry – so amusing. Manufacturers, Reviewers and Consumers each checking on the others; that is such a utopian concept. You say it doesn't work for a country's government; how do you expect it to work for an industry where money is king? There will always be hidden agendas, you can't stop that.

    I believe I’ve discovered a new form of entertainment, and that is reading Jian’s comments. You, sir, are crazy. I don’t mean this as an insult. Keep on fighting the good fight, I can’t wait to read more of your comments; and, please, never stop sidetracking and using internet abbreviations such as LOL.
  • azdood - Wednesday, May 15, 2013

    Hi Ian, have you ever considered testing time between turns on Civ5? CPU makes a HUGE difference especially as you get deep into a game.
  • tential - Thursday, May 16, 2013

    This is aimed partially at that Jian guy and partially at everyone. I understand the desire for high-end GPU reviews, but going by your OWN earlier posts, you stated that the majority of people game at 1080p. If that's the case, what's the point of pushing for a 7990, Titan, FCAT review when quite frankly NO ONE HAS THOSE CARDS, according to your own data and posts from the previous page?

    To me it seems like you're just trolling; however, because you brought up the point of affordability, I think that's where the majority of reviews should be targeted. YES I want to see how the 7970 and the GTX 680 perform, yes I want to see the next gen too, but I really don't think we should waste so much time on multi-GPU setups that under 1% of the gaming community has.

    How about more reviews on upgrade paths, price to performance, and how to get the most performance at a reasonable price point? That's what I care to see. Any review in which the hardware being tested exceeds $2k (I mean additional hardware) is, to me, just boring because at the end of the day I'm not buying two Titans, or two 7990s, or even three 7970s.

    This is of course my PERSONAL opinion, but considering data backs it up, I'd like to see some more reviews cater to the average (when I say average I mean average in terms of the gamer who reads reviews and makes educated price to performance ratio choices) gamer.

    This review kind of tries to do that, but in all reality we aren't gaming at 1440p, so more reviews on how to get the best performance at 1080p for a good price, while leaving us a decent upgrade path, would be nice.
  • FriedZombie - Friday, May 17, 2013

    Could you possibly go to some slightly older processors and GPUs? In particular, the i7-990X would be a great start, and the lower and upper end of AMD's 6000 series would be nice too (it seems a LOT of people upgraded from the 5000 series to the 7000 series this year). A benchmark of The Witcher 2 would be nice as well, as max settings with Ubersampling turned on are extremely taxing on both CPU and GPU because of how inefficient CDProjekt's RED engine is.
  • ol1bit - Friday, May 17, 2013

    All I can say is WOW!

    Nice work!
  • qulckgun - Sunday, May 19, 2013

    62 years old, play ~150 hours a month. Ready to build a new PC. Know next to nothing about building a new PC. I read various forums and articles and find the comment sections are great at clearing up some of what I didn't understand in the main article. That being said, this is one of the most entertaining comment sections I've read in a while, and it was pretty informative. It's helped me put my hardware choices into perspective. Please let's agree to disagree, but in a respectable manner. Thank you all for your comments and responses; it's an education.
  • Rob94hawk - Sunday, May 19, 2013

    This was a great article! I'm surprised you didn't use a QX9770 for socket 775. Any reason for that?
  • bds71 - Wednesday, May 22, 2013

    Ian - since the new 4K TVs are out, I think these types of reviews are very indicative of what we can expect once we are able to hook a PC up (using multiple outputs, such as Eyefinity or NVIDIA Surround) to a single-input 4K TV. For those who don't know, the new 4K standard (3840x2160) is equivalent to Eyefinity or NVIDIA Surround at 1080p, but with 4 monitors instead of 3, and in a normal 16:9 format rather than the super-wide 3-screen setups, i.e. --|--|-- vs ==|== (note: equivalent resolution, but not actually 4 monitors :))

    Can't wait for THAT testing to begin. Assuming an owner can turn off overscan (so you can see the taskbar at the bottom), I do indeed intend to purchase one (likely soon) and would definitely want to hook my PC to it. My GTX 690 would likely do OK at such a resolution, but I would eventually want to get another 690 - as soon as I could figure out how to utilize the second card with only a single HDMI input on the TV.

    As far as Blu-ray content goes - if you wait... it will come :)
