The Test

As was the case with Lynnfield, the current Sandy Bridge CPUs Intel is sampling are slightly different from what will be sold. The Core i5 2400 runs at 3.1GHz and has four cores and 6MB of L3 cache, but no Hyper-Threading. To help Intel's partners test HT functionality, however, the i5 2400s being sampled right now have Hyper-Threading enabled. For the purposes of our test I’ve run with HT both enabled (to give you an idea of higher-end SB parts) and disabled (to give you an idea of i5 2400 performance).
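
For readers replicating this kind of A/B testing, one quick way to sanity-check whether Hyper-Threading is actually enabled is to compare physical and logical core counts. A minimal sketch, assuming the third-party psutil library (our choice for illustration, not part of Intel's tooling):

    import psutil

    physical = psutil.cpu_count(logical=False)  # physical cores
    logical = psutil.cpu_count(logical=True)    # hardware threads (2x with HT)

    if logical == 2 * physical:
        print(f"HT appears enabled: {physical} cores / {logical} threads")
    else:
        print(f"HT appears disabled: {physical} cores / {logical} threads")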

The other major difference between what’s out today and what’s coming in Q1 is turbo. Early Sandy Bridge samples, ours included, do not have turbo enabled. The CPU simply runs at 3.1GHz all the time, regardless of workload. The final retail 2400 will be able to run at up to 3.4GHz.
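
Verifying this on a sample is straightforward: load a core and watch the reported clock, which should never rise above the base frequency. A rough sketch, again assuming psutil (frequency reporting varies by OS, so treat it as illustrative):

    import time
    import psutil
    from multiprocessing import Process

    def spin(seconds):
        # busy-loop to keep one core fully loaded
        end = time.time() + seconds
        while time.time() < end:
            pass

    if __name__ == "__main__":
        worker = Process(target=spin, args=(6,))
        worker.start()
        for _ in range(5):
            time.sleep(1)
            # our early sample stays pinned at ~3100MHz; retail silicon
            # with turbo enabled should report up to ~3400MHz here
            print(f"current: {psutil.cpu_freq().current:.0f} MHz")
        worker.join()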

In other words, what we show here should be indicative of final performance, but it's probably slower than what will ship in Q1.


On the GPU side, the part I’m testing appears to be the single-core GPU configuration (6 EUs). Intel hasn’t released any info as to which parts will get the dual-core/12 EU GPU configuration, although it may make sense for Intel to use the 12 EU parts in notebooks given the importance of integrated graphics in the mobile market. Update: The part we're looking at may actually have been a lower clocked 12 EU part; we're still waiting for additional confirmation.

Our test platform was an H67-based motherboard running with 4GB of DDR3-1333, the same memory we use in our Lynnfield testbeds.

I’m comparing to four other CPUs: the Core i7 980X for a high-end comparison, the Core i7 880 for a near clock-for-clock comparison (albeit with HT enabled), the Core i5 760 for a potential price comparison and the Phenom II X6 1090T. The latter should be AMD’s fastest offering (or close to it) when Sandy Bridge ships. Update: Note the Core i5 650 is actually the predecessor to the Core i5 2400; however, I didn't feel a dual-core vs. quad-core comparison was fair. The i5 760 will actually go head-to-head with the higher clocked i5 2500 when it launches in Q1.

Motherboard: ASUS P7H57D-V EVO (Intel H57)
             Intel DP55KG (Intel P55)
             Intel DX58SO (Intel X58)
             Intel DX48BT2 (Intel X48)
             Gigabyte GA-MA790FX-UD5P (AMD 790FX)
Chipset Drivers: Intel 9.1.1.1015 (Intel)
                 AMD Catalyst 8.12
Hard Disk: Intel X25-M SSD (80GB)
Memory: Corsair DDR3-1333 4 x 1GB (7-7-7-20)
        Corsair DDR3-1333 2 x 2GB (7-7-7-20)
Video Card: eVGA GeForce GTX 280 (Vista 64)
            ATI Radeon HD 5870 (Windows 7)
Video Drivers: ATI Catalyst 9.12 (Windows 7)
               NVIDIA ForceWare 180.43 (Vista 64)
               NVIDIA ForceWare 178.24 (Vista 32)
Desktop Resolution: 1920 x 1200
OS: Windows Vista Ultimate 32-bit (for SYSMark)
    Windows Vista Ultimate 64-bit
    Windows 7 x64

Comments

  • seapeople - Sunday, August 29, 2010

    So you're saying that integrated graphics should either be able to handle high-resolution gaming using at least medium settings on the upper echelon of current games, or they should not be included? That's fairly narrow-minded. The bottom line is that most people will never need a better graphics card than SB provides, and the people who do are probably going to buy a $200+ graphics card anyway and replace it every summer, so are they really going to care if the integrated graphics drive the price of their $200 processor up by $10-20? Alternatively, this chip is begging for some sort of Optimus-like option, which would allow hardcore gamers to buy the graphics card they want, AND not have to chew up 100W of graphics power while browsing the web or watching a movie.

    Regardless, for people who aren't hardcore gamers, the IGP on SB replaces the need to buy something like a Radeon HD 5450, ultimately saving them money. This seems like a positive step to me.
  • chizow - Sunday, August 29, 2010

    No, I'm saying that if this is being advertised as a suitable discrete GPU replacement, it should be compared to discrete GPUs at resolutions and settings you would expect a discrete GPU to handle, not to IGPs that we already know are too slow to matter. 1024x768 with everything at lowest settings doesn't fit that criterion. Flash and web-based games don't either, since they don't even require a 3D accelerator to run (Intel's workaround Broadcom chip would be fine).

    Again, this card wouldn't even hold a candle to a mid-range $200 GPU from 3 years ago; the 8800GT would still do cartwheels all over it. You can buy such cards for much less than $100; the GT 240 or 4850, for example, have been selling for less than $50 after MIR and would be much more capable gaming cards.

    Also, you're badly mistaken if you think this GPU is free by any means, as the cost of integrating a GPU onto SB's die comes at the expense of what could've been more actual CPU. So instead of better CPU performance this generation, you give that up for mediocre graphics performance. There is a price to pay for that relatively massive IGP whether you think so or not; you are paying for it.
  • wut - Sunday, August 29, 2010

    You don't know what you're talking about. You pretend that you do, but you don't.

    The telling sign is your comment about L2/L3 cache.
  • chizow - Sunday, August 29, 2010

    Actually it sounds like you don't know what you're talking about or you didn't read the article:

    "Only the Core i7 2600 has an 8MB L3 cache, the 2400, 2500 and 2600 have a 6MB L3 and the 2100 has a 3MB L3. The L3 size should matter more with Sandy Bridge due to the fact that it’s shared by the GPU in those cases where the integrated graphics is active. I am a bit puzzled why Intel strayed from the steadfast 2MB L3 per core Nehalem’s lead architect wanted to commit to. I guess I’ll find out more from him at IDF :)"

    You might've also missed it clearly stated in the tables that only the 2600 keeps the same 8MB L3, or 2MB per core, as previous parts like Bloomfield/Lynnfield/Westmere/Clarkdale. The rest have 6MB or 3MB, which is less than the 8MB or 4MB L3 used on the previous generation of chips.

    This may change with the high-end/enthusiast platform, but again, the amount of L3 cache is actually going to be a downgrade on many of these Sandy Bridge SKUs for anyone who already owns a Nehalem/Westmere-based CPU.
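
    (To put numbers on that, the per-core arithmetic is sketched below; the cache sizes come from the article, while the core counts are assumptions about the lineup, e.g. the 2100 being a dual-core part.)

        # L3 per core: cache sizes from the article, core counts assumed
        skus = {
            "Core i7 2600": (8, 4),
            "Core i5 2400/2500": (6, 4),
            "Core i3 2100": (3, 2),
            "Lynnfield Core i7": (8, 4),  # previous generation baseline
        }
        for name, (l3_mb, cores) in skus.items():
            print(f"{name}: {l3_mb}MB / {cores} cores = {l3_mb / cores:.1f}MB per core")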
  • wut - Friday, September 10, 2010

    You're parroting Anand and his purely number-based guess. Stop pretending.
  • mac2j - Saturday, August 28, 2010

    The 990X is a Gulftown part on LGA 1366 that's 130MHz faster than the 980X. It will cost $1000 and come out around the same time as the 2600 (which will cost roughly half as much and deliver 90% of the performance), and at most a couple of months before the i7-2800K, which will cost less and trounce it performance-wise.

    You'd have to REALLY want those extra cores to buy a 990X on a lame-duck socket at that point!
  • wut - Sunday, August 29, 2010

    Someone has to buy those chips to populate the uppermost echelons of the 3DMark scoreboards. It's an expensive hobby.
  • hybrid2d4x4 - Saturday, August 28, 2010

    Anand, can you provide some more info on the system configuration used for the power tests? The test setup lists two video cards and it's not clear which was used when deriving the power graphs. Also, what PSU was used?
    Just wondering, since if it was a 1200W behemoth, the 63W idle might really be 30W on a more reasonable PSU (assuming no video cards)...
    As always, thanks for the article!
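
    (For reference, the back-of-envelope arithmetic behind that guess, with purely illustrative efficiency assumptions:)

        # wall power vs. actual DC load under assumed PSU efficiencies
        wall_watts = 63.0        # idle figure measured at the wall in the review
        eff_oversized = 0.50     # assumption: a 1200W PSU at ~5% load can be this poor
        eff_right_sized = 0.85   # assumption: a smaller unit at the same DC load

        dc_watts = wall_watts * eff_oversized      # actual DC draw of the system
        better_wall = dc_watts / eff_right_sized   # what a right-sized PSU might show
        print(f"~{dc_watts:.0f}W DC; ~{better_wall:.0f}W at the wall on a right-sized PSU")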
  • smilingcrow - Saturday, August 28, 2010

    Was HT enabled for the power tests and what application was used to load the cores?
  • semo - Saturday, August 28, 2010

    No USB 3.0 support and a half-baked SATA3 implementation. I may be a bit too harsh on the latter (I can't say whether SATA3 on a 6-series chipset will perform poorly or not), but why are they going with only two 6Gb/s ports? I understand that most people are likely to buy only one SSD or so in the near future, but what about in a few years when these things become mainstream? At least AMD took SATA3 seriously, even if they couldn't quite make it work initially (we need a follow-up on the 8-series chipsets' SATA performance!)

    Not only is Intel overlooking advances in technologies other than CPUs (which are important to most consumers, whether they're aware of it or not), but it's also shutting out other companies that might have more focus in those areas. I wonder if Nvidia or anyone else will bother to release a chipset for Intel's latest and greatest.
