Last week we published our preview of Intel's 2011 Core microarchitecture update, codenamed Sandy Bridge. In the preview we presented a conservative estimate of what shipping Sandy Bridge performance will look like in Q1 2011. I call it conservative because we were dealing with an early platform, with turbo disabled, compared to fairly well established competitors with their turbo modes enabled.

It shouldn't come as a surprise to you that this performance preview, ~5 months before launch, wasn't officially sanctioned or supported by Intel. All companies like to control the manner in which information about their products is released, regardless of whether the outcome is good or bad. We acquired the chip on our own, ran the benchmarks on our own, and published the article on our own.

As a result, a number of questions remained unanswered. I measured significantly lower L3 cache latencies on Sandy Bridge than on Westmere/Nehalem, but I had no idea why. I suspect many of these questions will be answered at IDF, but the point is that we were flying blind on this one.

A big unknown was the state of Sandy Bridge graphics. As I mentioned in the preview, there will be two types of integrated graphics enabled on Sandy Bridge parts: 1-core and 2-core configurations, which Intel refers to as GT1 and GT2, respectively. GT1 parts have 6 execution units (EUs), while GT2 parts have 12.

Only some desktop parts will feature GT2, but all notebook parts (at launch) will. Based on the information I had while running our tests, it looked like the Sandy Bridge sample was a GT1 part. With no official support from Intel and no way to tell how many EUs the sample had, I had no way to confirm. Since publication I've received more information that points to our sample being a GT2 part. It's not enough for me to confirm 100% that it's GT2, but that's what it looks like at this point.

If it is indeed a GT2 part, the integrated graphics performance in our preview is indicative of the upper end of what you can expect for desktops and in the range of what we'd expect from SB notebooks (graphics turbo may move the numbers up a bit, but it's tough to tell at this point since our sample didn't have turbo enabled). As soon as I got this information I updated the articles to indicate our uncertainty. I never like publishing something I'm not 100% sure of, and for that I owe you an apology. We trusted that our sources on the GT1/6 EU information were accurate, and in this case they may not have been. We all strive to be as accurate as possible on AnandTech, and when any of us fails to live up to that standard, regardless of the reasoning, it hurts. Thankfully the CPU and GPU performance data are both accurate; we're simply unsure whether the GPU performance will apply to the i5 2400 or not (it should be indicative of notebook SB GPU performance and some desktop SB GPU performance).

The desktop Sandy Bridge GPU rollout is less clear. I've heard that the enthusiast K-SKUs will have GT2 graphics while the more mainstream parts will have GT1. I'm not sure this makes sense, but we'll have to wait and see.

Many of you have been drawing the comparison to Llano and asking how it will do vs. Sandy Bridge. Llano is supposed to be based on a modified version of the current generation Phenom II architecture. Clock for clock, I'd expect that to be slower than Sandy Bridge. But clock for clock isn't what matters; performance per dollar and performance per watt are what count. AMD has already made it clear that it can compete on the former, and it's too early to tell what Llano's performance per watt will be. On the CPU side it's probably safe to say that Intel will have the higher absolute performance, but AMD may be competitive at certain price points (similar to how it is today). Intel likes to maintain certain profit margins, and AMD doesn't mind dropping below them to remain competitive; it's why competition is good.

Llano's GPU performance is arguably the more interesting comparison. While Intel had to do a lot of work to get Sandy Bridge to where it is today, AMD has an easier time on the graphics side (given ATI's experience). The assumption is that Llano's GPU will be more powerful than what Intel has in Sandy Bridge. If that's the case, then we're really going to have an awesome set of entry-level desktops/notebooks next year.

Comments
  • mmatis - Wednesday, September 1, 2010 - link

    Perhaps you can make up for your failure by offering an additional giveaway to your readers. I suspect most of them would be satisfied with a complete home theater system...
    }:-]
  • iwod - Wednesday, September 1, 2010 - link

    I don't know if I am disappointed, since I have always wanted a SUPER efficient integrated GPU inside the CPU while a more powerful one sits outside. Hence I really like the idea of a southbridge + GPU that I have been calling for for nearly two years.

    The GPU inside the CPU would be enough to do (hopefully) hardware video decode, graphics acceleration for web browsing, desktop acceleration, etc. And in this scenario, not supporting OpenCL won't matter because that job is left to the other GPU. The Intel GPU won't have to waste transistors on compute things and can focus solely on graphics.

    But I am disappointed in some ways, because I saw some leaks that point to Intel spending nearly half of the chip on the GPU (including the memory controller). If this is the case, then I can see that Intel's GPU is not really power/transistor efficient.

    A GT220 (the 340M in MacBooks) is roughly 2-4 times faster than a 5450, which is roughly the same as the Intel GPU.
  • IntelUser2000 - Wednesday, September 1, 2010 - link

    You can't count the memory controller as part of the GPU because it's shared. The GPU portion only takes ~40mm2 or so. That will turn out to be the most efficient GPU in terms of die size.
  • mino - Thursday, September 2, 2010 - link

    Umm, maybe.
    But measuring microelectronics efficiency by mm2 is kind of pointless.

    Also, when you remove the memory controller and video block from the 5450's 73 mm2 (at 40nm!), you get a comparable if not smaller GPU die.
  • Wayne321 - Thursday, September 2, 2010 - link

    In the case of desktop chips, how difficult would it be to write a driver that uses only the integrated GPU during certain workloads (or by user choice) and passes the video feed to the monitor through the dedicated video card while keeping it almost completely powered down? Considering the amount of power video cards draw even at idle, there is a good case for saving some power if the integrated graphics can do the job.
  • iwod - Thursday, September 2, 2010 - link

    NVIDIA Optimus with its drivers already lets you do that (only on laptops at the moment...).
  • Drag0nFire - Wednesday, September 1, 2010 - link

    Thank you for fessing up to the mistake. Your high journalistic standards are what make this site so amazing! Many other sites wouldn't make such a big point of correcting a published article...

    Keep up the great work!
  • anandreader106 - Thursday, September 2, 2010 - link

    Now if only Anand could somehow convince DailyTech (which doesn't have any journalistic standards whatsoever) to do the same for their articles. That would be amazing!!!
  • DigitalFreak - Thursday, September 2, 2010 - link

    Hear, hear!
  • bah12 - Thursday, September 2, 2010 - link

    It would indeed. On DT, if you even dare point out a typo, you (usually) get down-rated and flamed. AT welcomes the correction and thanks the user for finding it, the way a true journalism site should.

    Case in point.
    http://www.dailytech.com/Plugin+Electric+Vehicle+S...

    The picture caption says Nissan Volt instead of Leaf; it is still not corrected 24 hours later even though it was pointed out in the comments.
