Windows 7 Gaming Performance

Our Bench suite is getting a little long in the tooth, so I added a few more gaming tests under Windows 7 with a new group of processors. We'll be adding some of these tests to Bench in the future, but the number of data points is obviously going to be small as we build up the results.

Batman is an Unreal Engine 3 game, and a fairly well-received one at that. Performance is measured using the built-in benchmark at the highest image quality settings without AA enabled.

Gaming performance is competitive, but we don't see any huge improvements under Batman.

Dragon Age: Origins is another very well-received game. The third-person RPG gives our CPUs a different sort of workload to enjoy:

Dragon Age, on the other hand, shows an 11.6% gain vs. the Core i5 760 and equal performance to the Core i7 880. Given that the i5 2400 is slated to be cheaper than the i5 760, I can't complain.
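
For those curious, the relative gains quoted here (and in the Starcraft II teaser below) are simple frame-rate ratios. A minimal sketch of the arithmetic, using hypothetical placeholder numbers rather than our actual benchmark output:

```python
# Minimal sketch of the percentage-gain arithmetic used throughout
# these results. The frame rates below are hypothetical placeholders.
def percent_gain(new_fps, old_fps):
    """Relative speedup of new_fps over old_fps, in percent."""
    return (new_fps / old_fps - 1.0) * 100.0

i5_2400_fps = 55.8  # hypothetical
i5_760_fps = 50.0   # hypothetical
print(f"{percent_gain(i5_2400_fps, i5_760_fps):.1f}% gain")  # 11.6% gain
```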

World of Warcraft needs no introduction. An absurd number of people play it, so we're here to benchmark it. Our test favors repeatability over real-world frame rates, so our results will be higher than what you'd see in the real world under heavy server load. But what our results will tell you is which CPU is best for playing WoW:

Performance in our WoW test is top notch. The i5 2400 is now the fastest CPU we've ever run through our WoW benchmark, the Core i7 980X included.

We've been working on putting together Starcraft II performance numbers, so here's a quick teaser:

A 12% advantage over the Core i7 880 and an 18% improvement over the Core i5 760.

Comments

  • overzealot - Saturday, August 28, 2010 - link

    Now, that's a name I've not heard in a long time. A long time.
  • mapesdhs - Saturday, August 28, 2010 - link

    Seems to me Intel is slowly locking up the overclocking scene because it has no
    competition. If so, and Intel continues in that direction, then it would be a great
    chance for AMD to win back overclocking fans with something that just isn't
    locked down in the same way.

    Looking at the performance numbers, I see nothing which suggests a product that
    would beat my current 4GHz i7 860, except for the expensive top-end unlocked
    option which I wouldn't consider anyway given the price.

    Oh well, perhaps my next system will be a 6-core AMD.

    Ian.
  • LuckyKnight - Saturday, August 28, 2010 - link

Do we have something more precise on the release date? Q1 is what - Jan/Feb/March/April?

    Looking to upgrade a core 2 duo at the moment - not sure whether to wait
  • mino - Saturday, August 28, 2010 - link

Q1 (in this case) means trickle amounts in Jan/Feb, mainstream availability in Mar/April, and worth-buying mature mobos in the May/June timeframe.
  • tatertot - Saturday, August 28, 2010 - link

    Intel has already announced that shipments for revenue will occur in Q4 of this year. So, January launch.

    They've also commented that Sandy Bridge OEM demand is very strong, and they are adjusting the 32nm ramp up to increase supply. So January should be a decent launch.

Not surprising: these parts have been in silicon since LAST summer.
  • chrsjav - Saturday, August 28, 2010 - link

    Do modern clock generators use a quartz resonator? How would that be put on-die?
  • iwodo - Saturday, August 28, 2010 - link

Since you didn't get this chip directly from Intel, I suspect there were no review guidelines for you to follow, like which tests to run and which not to run, etc.

    Therefore those game benchmark results weren't the product of special driver optimizations. Which is great, because drivers matter much more than hardware in GPUs. If this is only an early indication of what Intel's new GPU can do, I expect there is more to extract from the drivers.

    You mention a 2-core GPU (12 EU) versus a 1-core GPU (6 EU). Any guess as to what "E" stands for? And it seems like an SLI-like tech rather than actually having more EUs in one chip. The difference being that SLI or CrossFire doesn't get any advantage unless drivers and games work together, which greatly reduces the chances of it running at full performance.

    It also seems everyone fails to realize that one of the greatest performance gains will come from AVX. AVX will be like MMX again when we had the Pentium. I can't think of any other SSE extension as important to performance as AVX. Once software is specially optimized for AVX, we should get another major lift in performance.

    I've also heard rumors that 64-bit will work much better in Sandy Bridge, but I don't know if there's anything we could test this with.

    The OpenCL situation sounds like an Intel management decision rather than a technical one. Maybe Intel will provide, or work with Apple to provide, OpenCL on these GPUs?

    You also mention that Intel somehow supports PCI Express 2.0 with 1.0 performance. I don't get that bit. Could you elaborate? 2.5GT/s for the G45 chipset??

    If Intel ever decides to finally work on their drivers, then their GPUs will be great for entry level.

    Is dual-channel DDR3-1333 enough for a quad-core CPU + GPU? Or even a dual-core CPU?
    Is the GPU memory-bandwidth limited?

    Any update on the hardware decoder? And what about the transcoding part?

    Would there be a way to lock the GPU at its turbo clock all the time? Or does the GPU get higher priority in turbo, etc.?

    How big is the die?

    P.S. - (Any news on the Intel G3 SSD? I'm getting worried that next-gen SandForce is too good for Intel.)
  • ssj4Gogeta - Saturday, August 28, 2010 - link

    I believe EU means execution units.
  • DanNeely - Sunday, August 29, 2010 - link

    "You also mention that Intel somehow supports PCI Express 2.0 with 1.0 performance. I don't get that bit. Could you elaborate? 2.5GT/s for the G45 chipset??"

    PCIe 2.0 included other low-level protocol improvements in addition to the doubled clock speed. Intel only implemented the protocol improvements, probably because the doubled clock would have strangled the DMI bus.

    "Is dual-channel DDR3-1333 enough for a quad-core CPU + GPU? Or even a dual-core CPU?"

    Probably. The performance gain vs. the previous generation isn't that large, and dual-channel DDR3-1333 was enough for anything except pathological test cases (e.g. memory benchmarks). If it weren't, there's no reason Intel couldn't officially support DDR3-1600 on its locked chipsets to give a bit of extra bandwidth.
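
For reference, the theoretical peak at stake in that exchange works out directly from the channel width; a minimal sketch using standard DDR3 arithmetic (each channel is 64 bits, i.e. 8 bytes, per transfer), not measured numbers:

```python
# Theoretical peak bandwidth of a DDR3 memory configuration.
# Each DDR3 channel is 64 bits wide, i.e. 8 bytes per transfer.
def ddr3_peak_gb_s(transfer_rate_mt_s, channels):
    bytes_per_transfer = 8
    return transfer_rate_mt_s * bytes_per_transfer * channels / 1000.0

print(ddr3_peak_gb_s(1333, 2))  # dual-channel DDR3-1333: ~21.3 GB/s
print(ddr3_peak_gb_s(1600, 2))  # dual-channel DDR3-1600: ~25.6 GB/s
```

Note that this peak is shared between the CPU cores and the on-die GPU, which is what the question above is really weighing.
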
  • chizow - Saturday, August 28, 2010 - link

    @Anand

Could you clarify and expand on this comment, please? Is this true for all Intel chipsets that claim support for PCIe 2.0?

"The other major (and welcome) change is the move to PCIe 2.0 lanes running at 5GT/s. Currently, Intel chipsets support PCIe 2.0 but they only run at 2.5GT/s, which limits them to a maximum of 250MB/s per direction per lane. This is a problem with high bandwidth USB 3.0 and 6Gbps SATA interfaces connected over PCIe x1 slots. With the move to 5GT/s, Intel is at feature parity with AMD’s chipsets and more importantly the bandwidth limits are a lot higher. A single PCIe x1 slot on a P67 motherboard can support up to 500MB/s of bandwidth in each direction (1GB/s bidirectional bandwidth)."

If this is true, current Intel chipsets do not support PCIe 2.0, as 2.5GT/s and 250MB/s is actually the same effective bandwidth as PCIe 1.1. How did you come across this information? I was looking for ways to measure PCIe bandwidth but only found obscure proprietary tools that aren't publicly available.

    If Intel chipsets are only running at PCIe 1.1 regardless of what they're claiming externally, that would explain some of the complaints/concerns about bandwidth on older Intel chipsets.
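
The 250MB/s figure in the quoted passage falls straight out of the link arithmetic: PCIe 1.x and 2.0 both use 8b/10b encoding, so a lane's usable bandwidth is 80% of its raw transfer rate, which is why a 2.5GT/s "2.0" link delivers exactly PCIe 1.1 bandwidth. A minimal sketch:

```python
# Per-lane, per-direction PCIe bandwidth in MB/s.
# PCIe 1.x and 2.0 both use 8b/10b encoding: every 10 bits on the
# wire carry 8 bits of payload.
def pcie_lane_mb_s(transfer_rate_gt_s):
    encoding_efficiency = 8 / 10
    payload_bits_per_s = transfer_rate_gt_s * 1e9 * encoding_efficiency
    return payload_bits_per_s / 8 / 1e6

print(pcie_lane_mb_s(2.5))  # 250.0 MB/s (same as PCIe 1.1)
print(pcie_lane_mb_s(5.0))  # 500.0 MB/s (full-rate PCIe 2.0)
```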
