It's actually quite scary how closely the hardware enthusiast resembles the performance automobile enthusiast when you really think about it.  Both spend their free time gawking at the fastest devices around and dreaming about someday sitting in the driver's seat of one of those bad boys or running a quick timedemo on an Athlon.  The similarities are uncanny: while followers of performance cars can find worlds of difference between a car that accelerates a mere second faster than another, hardware enthusiasts can find worlds of difference in a system that is just 5 frames per second faster in a game.  Maybe that's why so many hardware guys turn out to be huge car fanatics as well.

Needless to say, we're here to review computer hardware, not cars (although sometimes we wish it were the other way around), so what's the topic at hand for today?  Not the car that can complete the quarter-mile in 15 seconds, but rather the CPU that can give you the performance you want at a price you're willing to pay.  As a consumer, you carry a tremendous load on your shoulders the minute you decide to upgrade your computer or invest in a new piece of hardware.  Not only do you have to look out for unscrupulous vendors that will do their best to make sure you pay the most you can for the worst part possible, but you also have to keep in mind that manufacturers are out there trying to make their products seem like the best buy in town even if they aren't (which, in most cases, they aren't).  So what are you to do?

Luckily, we've put together a (hopefully) helpful guide on which CPU to choose, based on the idea that although having the fastest processor out there would be wonderful, it isn't a realistic option for each and every user.  So instead of focusing on a single processor dominating all of the tests, this comparison will focus on the fastest, slowest, and best buys in the desktop processor market, in an attempt to help you make an intelligent purchasing decision about which processor is right for you.  Because, after all, that's what we're here for, isn't it?

The Problem with Benchmarks

It all started when Intel released the Pentium III, on February 22, 1999 to be specific.  The problem from Intel's point of view was that the Pentium III was being released at two clock speeds, 450MHz and 500MHz, the former being equivalently clocked to the fastest Pentium II processor available at the time.  That would have been just fine if the Pentium III actually held a noticeable performance advantage over the Pentium II; however, in most applications (and thus in most benchmarks) the only benefit the Pentium III had to offer, its SSE instructions, was not taken advantage of.  This made the Pentium III 450 look like a fancy way of dressing up the old Pentium II 450 that had been out for months.

What did Intel do?  They developed the Intel Performance Measurement Utility and sent it out with all evaluation samples of their Pentium III.  The Performance Measurement Utility was nothing more than a set of benchmarks that showed off the performance advantage the Pentium III had over the Pentium II, and even the advantage both the Pentium II and Pentium III had over the Celeron.  Of course, most of you already know that for a normal user, the Pentium II is generally just as fast as the Pentium III, and the Celeron isn't too far behind the two either (it's even faster in some cases).  What Intel was essentially doing was providing the reviewer with a set of benchmarks that illustrated what their processors could do using real-world applications.

The only problem with this approach is that although the applications Intel used were real-world applications, they weren't used in the manner your average Joe would use them.  It seems to be a fad to blame Intel for the world's problems these days, and while their Performance Measurement Utility isn't the most accurate indicator of real-world performance, they didn't do anything "wrong"; they just showed the world an application of SSE...and they aren't the only ones to pull something like that.

The industry's favorite underdog, AMD, is guilty of a similar offense.  The AMD Benchmark Suite supplied for use by reviewers with their Athlon evaluation systems wasn't the most representative of real-world performance either.  Honestly, the AMD suite was much less AMD-biased than Intel's suite was Intel-biased, but quite a few of the benchmarks produced interesting results that, although neat to look at, aren't necessarily reflective of real-world performance.  So what are these benchmarks good for?

Well, although making an AMD vs. Intel comparison using either one of the suites isn't the best idea, using the Intel suite to compare AMD processors and the AMD suite to compare Intel processors is an interesting approach that we decided to experiment with.  For example, while using the Intel Performance Measurement Utility to compare the Pentium III 500 to the Athlon 500 would most likely put the Pentium III in a better light than the Athlon, the benchmark utility can't make the Athlon 500 seem any faster or slower than it naturally is when compared to the higher-clocked Athlon 550.  Make sense?
