Why Sandy Bridge Matters for Notebooks

To say that we were caught off guard by Intel’s announcement last Monday of a flaw in their 6-series chipsets would be an understatement. Bad as that was, it’s the OEMs and system builders that are really feeling the pain—not to mention all the money Intel is losing on this “not a recall”. We’ve seen plenty of manufacturer statements about what they’re doing to address the problem, and we’ve also been talking with our notebook contacts trying to find out how the problem will impact availability.

We’ve also had more than a few delayed/canceled reviews while we wait for a fix. While we’ve looked at a generic Sandy Bridge notebook and a few motherboards, there was still plenty more we wanted to discuss. One such notebook came with a “low-end” i7-2630QM processor and a GTX 460M GPU, packed into a 15.6” chassis and sporting a 1080p LCD and RAID 0 hard drives. The manufacturer asked us to hold off on the full review, and we’ve returned the notebook, but not before we ran it through our suite of mobile benchmarks. Rather than complete a full review of a notebook that may or may not be available, we thought it would be interesting to look at what another SNB notebook can do in comparison to the previous-generation parts.

Update: We just got word back, and MSI has given the okay to reveal that the notebook in question is the MSI GT680R; we should hopefully see it return to market in a couple months.

In terms of specs, the notebook in question was very similar to the ASUS G73Jw we reviewed last year. Change the CPU to an i7-2630QM in place of the old i7-740QM, use a different battery and chassis, and you’re set. So exactly what can the 2630QM do relative to the 740QM? We’ve added the complete benchmark results to our Mobile Bench area, so you can quickly see how the two stack up.

If you’re only interested in gaming performance, it’s no surprise that we’re mostly GPU limited with the GTX 460M. The majority of titles are 2-8% faster with the Sandy Bridge setup, but we’re also dealing with updated drivers so the performance increase may come at least in part from NVIDIA. That said, there are a couple of outliers: 900p STALKER: Call of Pripyat shows a massive performance increase, as does 900p StarCraft II. How much of that comes from drivers and how much from the CPU? Since we don’t have the G73Jw around to retest, it’s impossible to say for certain, but we can look at the CPU tests to see how much faster Sandy Bridge can be compared to Clarksfield.

PCMark as usual is heavily influenced by the storage subsystem, so RAID 0 versus a single HDD gives the unnamed system an inherent advantage. The use of Western Digital’s Scorpio Black drives versus a Seagate Momentus 7200.4 is another benefit in the storage area—WD has generally come out on top of the HDD heap with their Black series (though SSDs are still much faster). Ignoring PCMark, though, we still see a large advantage for the 2630QM. Single-threaded performance is 21% faster in Cinebench 10/11.5, which in our experience correlates well with general Windows use. In the heavily multithreaded tests, the gap increases to 47-58% in Cinebench and x264 encoding.

It’s not just about performance either. While the 2630QM notebook has a larger 87Wh battery, factoring that into the equation, we still see relative battery life improved over the G73Jw by 17% at idle, 40% in H.264 playback, and 42% in Internet surfing. Looking at the comparison with the 2820QM and its HD Graphics 3000, the GTX 460M still clearly takes a toll on battery life (less than half the relative battery life), but it’s good to see more than three hours of mobility from a gaming laptop.
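As a back-of-the-envelope illustration of the normalization used above: dividing runtime by battery capacity gives minutes per watt-hour, so notebooks with different battery sizes can be compared fairly. The sketch below uses made-up placeholder numbers, not our measured results.

```python
# Illustrative sketch of relative (capacity-normalized) battery life.
# The runtimes and the 75Wh figure here are hypothetical placeholders.

def relative_battery_life(minutes: float, capacity_wh: float) -> float:
    """Battery life normalized to capacity, in minutes per watt-hour."""
    return minutes / capacity_wh

# Hypothetical example: a notebook with an 87Wh pack vs. an older 75Wh system.
new_rel = relative_battery_life(240, 87.0)   # ~2.76 min/Wh
old_rel = relative_battery_life(180, 75.0)   # 2.40 min/Wh
improvement = (new_rel / old_rel - 1) * 100  # percent gain after normalizing

print(f"{improvement:.0f}% better relative battery life")
```

The point of the normalization is that a bigger battery alone can't explain the gains; only after dividing out capacity does a runtime difference reflect platform efficiency.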

We’re curious to see if anyone is willing to do Optimus with a 460M (or higher) GPU and a quad-core SNB processor, as that will only serve to further increase battery life. Of course, we still see occasional glitches with Optimus that might make OEMs slow to use it on high-end gaming systems. For instance, Empire: Total War won’t let you select higher than “Medium” detail defaults (because it queries the IGP capabilities rather than the dGPU). Left 4 Dead 2 also had some oddities with the latest driver update—you can’t max out the graphics settings and have it run properly with a GT 420M Optimus in our experience; you have to drop the “Paged Pool Memory Available” setting to Low instead of High/Medium or it will exit to the desktop. The result is lower performance/compatibility relative to discrete GPUs, but I’d be willing to deal with the occasional bug for dramatically improved battery life.

So far the Sandy Bridge discussion has been quad-core SNB vs. quad-core Clarksfield, and that’s the other looming question: just how good will the dual-core SNB chips be? We expect better than Arrandale performance and better than Arrandale and Core 2 Duo battery life, but we haven’t been able to test any dual-core SNB systems yet. Unfortunately, the chipset bug/recall/whatever-you-want-to-call-it means we won’t be able to characterize dual-core SNB performance for at least another month, probably two. It appears the revised chipset allocation is going to go first to the big OEMs (i.e. Dell, HP, etc.), and it would seem Intel is prioritizing the mobile chipset fix over the desktop chipset. Several manufacturers have indicated they expect laptops with the revised chipset to hit the market in the late-March to early-April time frame.

Comments

  • vikingrinn - Tuesday, February 8, 2011 - link

    @BWMerlin You might be right, but a 17.3" display in a 15.6"-size chassis isn't entirely implausible (although I'm not sure if they slimmed down the chassis of the G73 for the G73SW release?), as the M17x R3 had been slimmed to almost the same size chassis as the M15x and also had 900p as a display option.
  • JarredWalton - Tuesday, February 8, 2011 - link

    Note that I updated the article. MSI said I could pass along the fact that the testing was done with their GT680R. It's certainly fast enough for gaming, though there are some areas that could be improved (unless you like glossy plastic). Now we wait for PM67 version 1.01....
  • vikingrinn - Tuesday, February 8, 2011 - link

    @JarredWalton Thanks for the update - looking forward to a review of both the M17x R3 and G73SW soon then! ;)
  • stmok - Monday, February 7, 2011 - link

    "What we know of Llano is that it will combine a K10.5 type CPU architecture with a midrange DX11 GPU (something like the HD 5650), integrated into a single chip."

    Firstly, AMD's Llano will be marketed as its "A-series" APU line. (Where G-series, E-series and C-series belong to their Bobcat-based lines.)

    Llano is a modified version of the Athlon II series with Radeon HD 5550 GPU as its IGP. The APU will feature Turbo Core 2.0 Technology (power gating, etc). It will use DDR3-1600 memory.

    Llano's x86 cores are codenamed "Husky".

    The IGP in Llano has two versions:
    One is codenamed "Winterpark" => Only in dual-core versions of APU.
    One is codenamed "Beavercreek". => Only in triple and quad-core versions of APU.

    For TDP spec, there will be two distinct lines for the desktop version of Llano.
    => 65W (dual-cores and low power quad-cores) and 100W (triple and quad-cores).

    As well the solution will allow for Hybrid-Crossfire configuration.
    => Llano IGP + Radeon HD 6570 or HD 6670 video cards.

    Performance wise...(According to AMD's presentation I saw.)

    Dual-core Llano
    => Overall, lags slightly behind Athlon II X2 250 (3.0Ghz) and Pentium E6500 (2.93Ghz)

    Quad-core Llano
    => It's slightly slower than a current Athlon II X4 630 with Radeon HD 5550 discrete video card.

    So in the end...

    Sandy Bridge => Far better CPU side. Not as good with IGP.
    Llano => Far better IGP. Not as good on CPU side.

    If you want an APU that will be revolutionary, it's best if you wait for "Trinity" in 2012.
  • Taft12 - Monday, February 7, 2011 - link

    This is great detail, more than I have ever seen about Llano before now (and thanks a bunch for it!)

    Is this from publicly available AMD documentation? You said this was from a presentation you saw...
  • Kiijibari - Monday, February 7, 2011 - link

    First, you wrote APU, even though there is no Bulldozer APU yet. Zambezi and Interlagos/Valencia are normal CPUs. You correctly mentioned Trinity later, which is an APU, but that is already Bulldozer v2.0, and it is far off, due in 2012.

    Second, you stated that cache sizes are unknown - they are not:
    See AMD's blog, link removed due to SPAM detection bot.

    Third, you speculate about a launch similar to the K8's in 2003; however, it is already known that desktop parts will launch *prior* to server parts in Q2:
    <Link removed due to SPAM detection, just read the analyst day slides again>
  • JarredWalton - Monday, February 7, 2011 - link

    I've corrected some of the text to clarify the meaning. Orochi is the eight-core design, with "Zambezi" for desktops and "Valencia" destined for servers. AFAICT, it's the same chip with different packages depending on the market (and I'd guess AMD is using the extra time between desktop and servers to do extra validation). Zambezi is also apparently a name for the desktop platform in general, unless the "four core and six core Zambezi" won't get a separate name.

    Given the purported size of the Orochi core, I can see four-core and six-core being harvested die, but they're still going to be huge. Right now, it appears the eight-core will have 16MB total L2 cache (2MB per core!) and an additional 8MB L3 cache. Long-term, the four-core and six-core should get separate designs so they don't have to be quite so large. Those are the chips that I expect won't be out for desktops until Q3/Q4.
  • Cow86 - Tuesday, February 8, 2011 - link

    Sorry there Jarred, first time poster, long time reader, but I hád to correct you on this :P Two things are wrong in what you say:

    1) The 8 core, 4 module bulldozer chip will have 8 MB of L2 cache (2 MB shared per MODULE, not core), and 8 MB L3 cache. This has been confirmed by Fruehe in discussions plenty of times, and you'll find it all over the web.

    2) Whilst you can indeed expect the 6-core to be harvested (as it will also keep the 8 MB of L3 cache), it is rather clear the 4-core will be separate, like the dual-core Athlon II is now as well. The clue to this is the fact that the 4-core chip will only have 4 MB of L3 cache.

    http://www.techpowerup.com/134739/AMD-Zambezi-Bull...

    Look at the roadmap :)
  • JarredWalton - Wednesday, February 9, 2011 - link

    Oh, I guess I read the "2MB per module" wrong -- thought they had said 2MB per core. Somewhere else said 16MB cache, and that then made sense, but if it's 16MB cache total that also works. Anyway, long-term it would be potentially useful to have separate die for 3-module and 2-module as well as the standard 4-module, because even the 6-core is still going to have 2MB cache and 2 cores disabled. However, the time to do such a redesign might make it too costly, so maybe not. There's nothing to prevent AMD from disabling part of the L3 cache as well as the cores for a 4-core version though -- we've already seen Athlon X2 that were harvested Phenom X4 for instance. That's definitely not something you want to do a lot if you can avoid it, obviously.
  • DanNeely - Monday, February 7, 2011 - link

    "There’s actually a lot more work involved in moving a Redwood GPU architecture to 32nm, as most of the Intellectual Property (IP) related to GPUs targets the so-called half-nodes (55nm, 40nm, and in the future 28nm). It’s one reason we expect AMD to eventually move all of their CPU and GPU production to such nodes, but that's a ways off and Llano will use the same process size as Intel’s current CPUs."

    What's actually different between the two? I assumed it was just a case of what they picked as the next scaling point. There've been a number of GPUs in the past that have dropped from half to full to half node again as each one became widely available. I'd've assumed the main engineering challenge would be optimizing for the quirks in GF's processes instead of TSMC's.
