#CPUOverload: What is Realistic?

Truth be told, the concept of a project to benchmark 700-900 processors has been rattling around in my head for a few years. I actually wrote the first segment of this article way back in 2016. However, over the course of 2016 and 2017, building new testing suites took longer than expected, priorities changed, and the project didn’t so much get shelved as get pushed down the order on a semi-permanent basis until there was an ideal opening. Those of you who have followed the site may have noticed my responsibilities increase over time, darting 200k miles a year around the world. It can be difficult to keep a large project buoyant without constant attention.

Between 2016 and today, we’ve still been churning through the tests on the hardware, and updating our benchmark database with as many chips as we can find, even if it wasn’t under a governed project. The most recent version of our CPU2019 Bench has 272 CPUs, with data recorded on up to 246 benchmark data points for each, just to showcase what one person can do in a given year. However, Bench being a specific project wasn’t necessarily a primary target of the site. With the launch of our Bench2020 suite, with a wider variety of tests and analysis, we’re going to put this into action. That’s not to say I have more time than normal (I might have to propose what we can do about getting an intern), but with the recent pandemic keeping me on the ground, it does give me a chance to take stock of what users are really after.

With #CPUOverload, the goal is to do more than before, and highlight the testing we do. This is why I’ve spent the best part of 25-30 pages talking about benchmark sustainability, usefulness, automation, and why every benchmark is relevant to some of our user base. Over the last decade, as a hardware tester providing results online for free, one obvious change in the requests from our readers has been to include specific benchmarks that target them, rather than generic ones related to their field. That’s part of what this project is, combined with testing at scale.

Users also want to find their exact CPU, and compare it to an exact potential upgrade – a different model, at least under today’s naming conventions, might have different features. So being able to compare exactly what you want is always going to be better – seeing how the Intel Core i5-2380P in that Dell OEM system you have had for seven years compares to a newer Ryzen 7 2700E or Xeon E-2274G is all part of what makes this project exciting. That essence of scale, testing as many different CPU variants as possible, is going to be a vital part of this project.

Obviously the best place to start with a project like this is two-fold: popular processors and modern processors. These get the most attention, so covering the key parts from Coffee Lake, Kaby Lake, Ryzen and HEDT is going to be high on our list to start. The hardware that we’re testing for review also gets priority, which is why you might start seeing some Zhaoxin or Xeon/EPYC data enter Bench very soon. One funny element is that if you were to start listing what might be ‘high importance processors’, you very easily come back with a list of between 25-100 SKUs, with various i9/i7/i5/i3 and R7/R5/R3/APU parts as well as Intel/AMD HEDT and halo parts in there – that’s already 10 segments! Some users might want us to focus on the cheap Xeon parts coming out of China too. Whatever our users want to see tested, we want to hear about it.

As part of this project, we are also expecting to look at some retrospective performance. Future articles might include ‘how well does an Ivy Bridge i5 perform today’, or, given AMD and Intel’s tendency to compare five-year-old products to each other, we are looking to do that too, in both short and longer form articles.

When I first approached AMD and Intel’s consumer processor divisions about this project, wondering how much interest there would be in it, both came back to me with positive responses. They filled in a few of my hardware gaps, but cautioned that even internal PR teams won’t have access to most chips, especially the older ones. This means that as we process through the hardware, we might start reaching out to other partners in order to fill in the gaps.

Is testing 900 CPUs ultimately realistic? Based on the hardware I have today, if I had access to Narnia, I could provide data for about 350 of the CPUs. In reality, with our new suite, each CPU takes 20-30 hours to test on the CPU benchmarks, and another 10 hours for the gaming tests. Going for 50-100 CPUs/month might be a tough ask, but let’s see how we get on. We have these dozen or so CPUs in the graphs here to start.
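The throughput math above can be sketched quickly. Here is a minimal back-of-envelope estimate using the per-CPU hours quoted in this article; the number of parallel test beds and the assumption of round-the-clock utilization are mine, not figures from the article, so treat the output as an optimistic upper bound (real-world cadence, as the comments below note, works out far slower):

```python
# Rough schedule estimate for the #CPUOverload project.
# Per-CPU hours come from the article; bed count and 24/7 uptime are assumptions.

HOURS_PER_CPU = 25 + 10        # midpoint of the 20-30 h CPU suite, plus ~10 h of gaming tests
TEST_BEDS = 4                  # assumed number of test beds running in parallel
HOURS_PER_MONTH = 30 * 24      # wall-clock hours in a month, assuming no downtime

cpus_per_month = TEST_BEDS * HOURS_PER_MONTH / HOURS_PER_CPU
total_cpus = 900
months = total_cpus / cpus_per_month

print(f"~{cpus_per_month:.0f} CPUs/month, ~{months:.0f} months for {total_cpus} CPUs")
```

Even under these generous assumptions, four always-on beds land near the low end of the 50-100 CPUs/month target, which is why a single tester with other responsibilities ends up closer to a dozen a month.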

Of course, comments are always welcome. If there’s a CPU, old or new, you want to see tested, then please drop a comment below. It will help me decide which test beds get priority.

110 Comments


  • ruthan - Monday, July 27, 2020 - link

    Well, lots of bla, bla, bla. I checked the graphs in the article and they are classic, just a few entries. There is a link to your benchmark database, but there I see some Crysis benchmark preselected, which is not part of the article, and it doesn’t lead to some ultimate lots-of-CPUs graphs. So it needs much more streamlining.

    I usually use old Geekbench for CPU tests, and there I can usually compare what I want. Not with real applications and games, but it’s quick too. Otherwise I usually have enough knowledge to know whether some CPU is good enough for some games or not, so I don’t need very old and very new comparisons. Something can be found at Phoronix.
    These benchmarks will always lose relevancy with new updates, unless all CPUs were in their own machines, updated, running, and retesting constantly – which could be quite a waste of power and money.
    Maybe the golden path is some simple multithreaded testing utility with two benchmarks, one for integers and one for floats.
  • Ian Cutress - Wednesday, August 5, 2020 - link

    When you're in Bench, check the drop-down menu on your left for the individual tests.
  • hnlog - Wednesday, July 29, 2020 - link

    > For our testing on the 2020 suite, we have secured three RTX 2080 Ti GPUs direct from NVIDIA.
    Congrats!
  • Koenig168 - Saturday, August 1, 2020 - link

    It would be more efficient to focus on the more popular CPUs. Some of the less popular SKUs which differ only by clock speed can have their performance extrapolated. Testing 900 CPUs sounds nice but quickly hits diminishing returns in terms of usefulness after the first few hundred.

    You might also wish to set some minimum performance standards using just a few tests. Any CPU which fails to meet those standards should be marked as "obsolete, upgrade already dude!" and be done with, rather than spending the full 30 to 40 hours testing each of them.

    Finally, you need to ask yourself "How often do I wish to redo this project, and how many resources will I be able to devote to it?" Bear in mind that with new drivers, games etc., the database needs to be updated periodically to stay relevant. This will provide a realistic estimate of how many CPUs to include in the database.
  • Meteor2 - Monday, August 3, 2020 - link

    I think it's a labour of love...
  • TrevorX - Thursday, September 3, 2020 - link

    My suggestion would be to bench the highest performing Xeons that supported DDR3 RAM. Why? Because DDR3 RDIMMs are so amazingly cheap (as in, less than 10% of the price) compared with DDR4. I personally have a Xeon E5-1660 v2 @ 4.1 GHz with 128GB of DDR3-1866 RDIMMs that's the most rock-stable PC I've ever had. Moving up to a DDR4 system with similar memory capacity would be eye-wateringly expensive. I currently have 466 tabs open in Chrome, plus Outlook, Photoshop, Word, and several Excel spreadsheets, and I'm only using 31.3% of physical RAM. I don't game, so I would be genuinely interested in what actual benefit would be derived from an upgrade to Ryzen / Threadripper.

    Also very keen to see server/hypervisor testing of something like Xeon E5-2667v2 vs Xeon W-1270P or Xeon Silver 4215R for evaluation of on-prem virtualisation hosts. A lot of server workloads are being shifted to the cloud for very good reasons, but for smaller businesses it might be difficult to justify the monthly expense of cloud hosting (and Azure licensing) when they still have a perfectly serviceable 5yo server with plenty of legs left on it. It would be great to be able to see what performance and efficiency improvements can be had jumping between generations.
  • Tilmitt - Thursday, October 8, 2020 - link

    When is this going to be done?
  • Mil0 - Friday, October 16, 2020 - link

    Well they launched with 12 results if I count correctly, and currently there are 38 listed, that's close to 10/month. With the goal of 900, that would mean over 7 years (in which ofc more CPUs would be released)
  • Mil0 - Friday, October 16, 2020 - link

    Well they launched with 12 results if I count correctly, and currently there are 44 listed, that's about a dozen a month. With the goal of 900, that would mean 6 years (in which ofc more CPUs would be released)
  • Mil0 - Friday, October 16, 2020 - link

    Caching hid my previous comment from me, so instead of a follow up there are now 2 pretty similar ones. However, in the mean time I found Ian is actually updating on twitter, which you can find here: https://twitter.com/IanCutress/status/131350328982...

    He actually did 36 CPUs in 2.5 months, so it should only take 5 years! :D
