Core: Performance vs. Today

Looking back at Anand’s original review, written at a time when CPU performance made a real difference to gaming frame rates at 1600x1200, the conclusion was quite startling.

Intel's Core 2 Extreme X6800 didn't lose a single benchmark in our comparison; not a single one. In many cases, the $183 Core 2 Duo E6300 actually outperformed Intel's previous champ: the Pentium Extreme Edition 965. In one day, Intel has made its entire Pentium D lineup of processors obsolete.

Imagine something like that happening today. (Actually, if you believe what we’ve been told, AMD’s upcoming AM4 platform with Zen and Bristol Ridge might make its current desktop platform obsolete, but that’s a slightly different discussion because of how integrated graphics has shifted the landscape for CPU-focused silicon.)

That’s Intel vs. Intel, though; against AMD the verdict was just as damning.

Compared to AMD's Athlon 64 X2 the situation gets a lot more competitive, but AMD still doesn't stand a chance. The Core 2 Extreme X6800, Core 2 Duo E6700 and E6600 were pretty consistently in the top 3 or 4 spots in each benchmark, with the E6600 offering better performance than AMD's FX-62 flagship in the vast majority of benchmarks.

However, Core 2 Duo has now been out for 10 years. I’ve pulled up some benchmark data from our database to see if we have any matches against processors that cost $214 today. The Core i5-6600 fits the bill perfectly, and there are two benchmarks which match up. I’ve also dotted the graphs with a range of more recent AMD and Intel processors to show the progression.

[Graph: 3D Particle Movement, Single Threaded]
[Graph: 3D Particle Movement, Multi-Threaded]
[Graph: FastStone Image Viewer 4.9]

Our 3D Particle Movement test is more of an idealized synthetic workload, whereas FastStone is all about image conversion and favors high-frequency, strong single-threaded performance.
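For readers unfamiliar with the test, the gist is easy to sketch. The following C++ snippet is a rough, hypothetical approximation of a 3DPM-style workload rather than our actual benchmark source: each particle takes a series of random unit-length steps, and the same work is timed once on a single thread and once split across all available hardware threads. All names and parameters here are illustrative only.

```cpp
// Minimal sketch of a 3DPM-style particle random-walk benchmark (illustrative,
// not the AnandTech benchmark code). Single-threaded vs multi-threaded timing.
#include <algorithm>
#include <chrono>
#include <cmath>
#include <cstdint>
#include <functional>
#include <iostream>
#include <random>
#include <thread>
#include <vector>

struct Particle { double x = 0, y = 0, z = 0; };

constexpr double kTwoPi = 6.283185307179586;

// Advance particles [begin, end) by `steps` random unit-length moves each.
static void move_particles(std::vector<Particle>& p, std::size_t begin,
                           std::size_t end, int steps, std::uint32_t seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> angle(0.0, kTwoPi);
    std::uniform_real_distribution<double> cosTheta(-1.0, 1.0);
    for (std::size_t i = begin; i < end; ++i) {
        for (int s = 0; s < steps; ++s) {
            double phi = angle(rng);
            double ct  = cosTheta(rng);                 // cos of polar angle
            double st  = std::sqrt(1.0 - ct * ct);      // sin of polar angle
            p[i].x += st * std::cos(phi);
            p[i].y += st * std::sin(phi);
            p[i].z += ct;
        }
    }
}

int main() {
    const std::size_t nParticles = 1 << 16;
    const int steps = 200;
    std::vector<Particle> particles(nParticles);

    // Single-threaded pass: one core does all the work.
    auto t0 = std::chrono::steady_clock::now();
    move_particles(particles, 0, nParticles, steps, 1);
    auto t1 = std::chrono::steady_clock::now();

    // Multi-threaded pass: split the particle array across hardware threads.
    unsigned nThreads = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = nParticles / nThreads;
    std::vector<std::thread> workers;
    auto t2 = std::chrono::steady_clock::now();
    for (unsigned t = 0; t < nThreads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == nThreads) ? nParticles : begin + chunk;
        workers.emplace_back(move_particles, std::ref(particles), begin, end,
                             steps, 100 + t);
    }
    for (auto& w : workers) w.join();
    auto t3 = std::chrono::steady_clock::now();

    auto ms = [](auto a, auto b) {
        return std::chrono::duration<double, std::milli>(b - a).count();
    };
    std::cout << "single-threaded: " << ms(t0, t1) << " ms\n"
              << "multi-threaded (" << nThreads << " threads): "
              << ms(t2, t3) << " ms\n";
}
```

The single-threaded pass isolates per-core throughput (where clock speed and IPC dominate), while the multi-threaded pass shows why core count separates the results in the second graph.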

Naturally, modern processors nearing 4.00 GHz have a large advantage over the 2.13 GHz version of Core 2 Duo, helped along by multiple generations of improved microarchitecture and smaller lithography nodes for power efficiency. However, has any processor family had as much nostalgic longevity as the consumer launch of Core? One could argue that while Core put Intel back on top of the heap, Sandy Bridge was a more important shift in design; as a result, many users went from Conroe to Sandy Bridge and have stayed there.

Comments

  • Jon Tseng - Wednesday, July 27, 2016 - link

    Great chip. Only just upgraded from my QX6850 last month. Paired with a GTX 970 it was doing just fine running all new games maxed out at 1080p. Amazing for something nearly a decade old!!
  • Negative Decibel - Wednesday, July 27, 2016 - link

    my E6600 is still kicking.
  • tarqsharq - Wednesday, July 27, 2016 - link

    My dad still uses my old E8400 for his main PC. He's getting my old i7-875k soon though.
  • jjj - Wednesday, July 27, 2016 - link

    You can't do DRAM in glasses, not in a real way, and glasses are what mobile becomes by 2025.
    On-package DRAM is next year or soon, not 2025.
    You can't have big cores either, and you need ridiculous GPUs and extreme efficiency. Parallelism and accelerators are where computing needs to go, from mobile to server.
    We need 10-20 mm³ chips, not 100 cm² boards, new NV memories instead of DRAM, and so on.
    Will be interesting to see who goes 3D first with logic on logic, and then who makes 3D the default in the most advanced process.

    At the end of the day, even if the shrinking doesn't stop, 2D just can't offer enough for the next form factor. Much higher efficiency is needed, the size of a planar chip would be far too big to fit in the device, and the costs would be mad. Much more is needed, for robots too. The costs and efficiency need to scale, and with planar the gains are small at best.
  • wumpus - Thursday, August 4, 2016 - link

    On-package DRAM seems to be a "forever coming" tech. AMD's Fury X basically shipped it, and it went nowhere. I'm guessing it will be used whenever Intel or IBM feels it offers a serious advantage on some high-core server chip, or possibly when Intel wants to build a high-speed DRAM cache (with a high-speed bus) and use 3D XPoint for "main memory".

    The slow rollout is shocking. I'm guessing Nvidia eventually gave up on it and went with tiling (see the Kanter demo on the left, but ignore the thread: nothing but fanboys beating their chests).
  • willis936 - Wednesday, July 27, 2016 - link

    I'm certainly no silicon R&D expert but I'm very skeptical of those projections.
  • Mr.Goodcat - Wednesday, July 27, 2016 - link

    Typo:
    "On the later, we get the prediction that 450nm wafers should be in play at around 2021 for DRAM"
    450nm wafers would be truly interesting ;-)
  • wumpus - Thursday, August 4, 2016 - link

    I like the rapidly falling static safety. Don't breathe on a 2030 chip.
  • faizoff - Wednesday, July 27, 2016 - link

    My first Core 2 Duo was an E4400 that I bought in 2007, I believe; the thing lasted me up to 2011, when I upgraded to an i5-2500K. I should've kept that C2D just for nostalgia's sake. I used it intermittently as a Plex server and it worked great on FreeNAS. The only issue was that it was really noisy and would get hot.
  • Notmyusualid - Thursday, July 28, 2016 - link

    I've got a few old servers kicking around, all with valid Windows Server licenses, but due to UK electricity costs I just can't bring myself to have them running at home 24/7 just to serve a backup, or yet another Breaking Bad viewing session... :) which we can do locally now.
