Why Sandy Bridge Matters for Notebooks

To say that we were caught off guard by Intel’s announcement last Monday of a flaw in their 6-series chipsets would be an understatement. Bad as that was, it’s the OEMs and system builders that are really feeling the pain—not to mention all the money Intel is losing on this “not a recall”. We’ve seen plenty of manufacturer statements about what they’re doing to address the problem, and we’ve also been talking with our notebook contacts trying to find out how the problem will impact availability.

We’ve also had more than a few delayed/canceled reviews while we wait for a fix. While we’ve looked at a generic Sandy Bridge notebook and a few motherboards, there was still plenty more we wanted to discuss. One such notebook came with a “low-end” i7-2630QM processor and a GTX 460M GPU, packed into a 15.6” chassis and sporting a 1080p LCD and RAID 0 hard drives. The manufacturer asked us to hold off on the full review, and we’ve returned the notebook, but not before we ran it through our suite of mobile benchmarks. Rather than complete a full review of a notebook that may or may not be available, we thought it would be interesting to look at what another SNB notebook would do in comparison to the previous generation parts.

Update: We just got word back, and MSI has given the okay to reveal that the notebook in question is the MSI GT680R; we should hopefully see it return to market in a couple months.

In terms of specs, the notebook in question was very similar to the ASUS G73Jw we reviewed last year. Change the CPU to an i7-2630QM in place of the old i7-740QM, use a different battery and chassis, and you’re set. So exactly what can the 2630QM do relative to the 740QM? We’ve added the complete benchmark results to our Mobile Bench area, so you can quickly see how the two stack up.

If you’re only interested in gaming performance, it’s no surprise that we’re mostly GPU-limited with the GTX 460M. The majority of titles are 2-8% faster with the Sandy Bridge setup, but we’re also dealing with updated drivers, so the performance increase may come at least in part from NVIDIA. That said, there are a couple of outliers: 900p STALKER: Call of Pripyat shows a massive performance increase, as does 900p StarCraft II. How much of that comes from drivers and how much from the CPU? Since we don’t have the G73Jw around to retest, it’s impossible to say for certain, but we can look at the CPU tests to see how much faster Sandy Bridge is compared to Clarksfield.

PCMark, as usual, is heavily influenced by the storage subsystem, so RAID 0 versus a single HDD gives the unnamed system an inherent advantage. The use of Western Digital’s Scorpio Black drives versus a Seagate Momentus 7200.4 is another benefit in the storage area—WD has generally come out on top of the HDD heap with their Black series (though SSDs are still much faster). Ignoring PCMark, though, we still see a large advantage for the 2630QM. Single-threaded performance is 21% faster in Cinebench 10/11.5, which in our experience correlates well with general Windows use. In the heavily multithreaded tests, the gap increases to 47-58% in Cinebench and x264 encoding.
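
As a quick sanity check on how those percentages work out, here’s a minimal sketch; the scores below are hypothetical stand-ins rather than our actual Cinebench results:

```python
# Minimal sketch of how relative CPU gains are computed.
# The scores below are hypothetical stand-ins, not our measured Cinebench results.

def percent_gain(new_score: float, old_score: float) -> float:
    """Percentage improvement of new_score over old_score."""
    return (new_score / old_score - 1) * 100

# Hypothetical single-threaded and multithreaded scores (higher is better)
single_2630qm, single_740qm = 4840, 4000
multi_2630qm, multi_740qm = 17600, 12000

print(f"Single-threaded gain: {percent_gain(single_2630qm, single_740qm):.0f}%")  # ~21% with these numbers
print(f"Multithreaded gain:   {percent_gain(multi_2630qm, multi_740qm):.0f}%")    # ~47% with these numbers
```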

It’s not just about performance, either. The 2630QM notebook has a larger 87Wh battery, but even after factoring capacity into the equation, relative battery life improves over the G73Jw by 17% at idle, 40% in H.264 playback, and 42% in Internet surfing. Compared with the 2820QM running on HD Graphics 3000, the GTX 460M still clearly takes a toll on battery life (less than half the relative battery life), but it’s good to see more than three hours of mobility from a gaming laptop.
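
For those wondering how we factor battery capacity into the equation: relative battery life is simply runtime normalized by pack capacity (minutes per Wh), so the larger 87Wh battery doesn’t automatically win. Here’s a minimal sketch, using placeholder runtimes and an assumed 75Wh capacity for the G73Jw rather than our measured figures:

```python
# Minimal sketch: relative battery life = runtime normalized by battery capacity.
# Runtimes are placeholders and the 75Wh G73Jw capacity is an assumption,
# not our measured figures.

def relative_battery_life(minutes: float, capacity_wh: float) -> float:
    """Battery runtime in minutes per watt-hour of capacity."""
    return minutes / capacity_wh

g73jw_idle = relative_battery_life(minutes=180, capacity_wh=75)    # Clarksfield + GTX 460M
gt680r_idle = relative_battery_life(minutes=245, capacity_wh=87)   # Sandy Bridge + GTX 460M

improvement = (gt680r_idle / g73jw_idle - 1) * 100
print(f"Relative idle battery life improvement: {improvement:.0f}%")  # ~17% with these numbers
```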

We’re curious to see if anyone is willing to do Optimus with a 460M (or higher) GPU and a quad-core SNB processor, as that will only serve to further increase battery life. Of course, we still see occasional glitches with Optimus that might make OEMs slow to use it on high-end gaming systems. For instance, Empire: Total War won’t let you select higher than “Medium” detail defaults (because it queries the IGP capabilities rather than the dGPU). Left 4 Dead 2 also had some oddities with the latest driver update—you can’t max out the graphics settings and have it run properly with a GT 420M Optimus in our experience; you have to drop the “Paged Pool Memory Available” setting to Low instead of High/Medium or it will exit to the desktop. The result is lower performance/compatibility relative to discrete GPUs, but I’d be willing to deal with the occasional bug for dramatically improved battery life.

So far the Sandy Bridge discussion has been quad-core SNB vs. quad-core Clarksfield, and that’s the other looming question: just how good will the dual-core SNB chips be? We expect better than Arrandale performance and better than Arrandale and Core 2 Duo battery life, but we haven’t been able to test any dual-core SNB systems yet. Unfortunately, the chipset bug/recall/whatever-you-want-to-call-it means we won’t be able to characterize dual-core SNB performance for at least another month, probably two. It appears the revised chipset allocation will go first to the big OEMs (e.g. Dell and HP), and Intel seems to be prioritizing the mobile chipset fix over the desktop chipset. Several manufacturers have indicated they expect laptops with the revised chipset to hit the market in the late-March to early-April time frame.

Comments

  • JarredWalton - Monday, February 7, 2011 - link

    There's a lot of licensed technology in most GPUs, and most of that exists on the half-nodes right now. Back in the 90nm and 65nm days it didn't really matter, but when TSMC went to 55nm and then 40nm a lot of the companies doing design work on various modules went that route rather than sticking with the CPU nodes. So it's not just a quick and dirty process shrink, but the end result could be very interesting.
  • DanNeely - Monday, February 7, 2011 - link

    That didn't answer my question: what is different between half and full nodes that makes it more than just a process shrink?
  • JarredWalton - Monday, February 7, 2011 - link

    Sorry... AFAIK, nothing is different, other than the extra work involved in porting IP from 40nm (ATI's current target) to 32nm.
  • DanNeely - Tuesday, February 8, 2011 - link

    In that case, why are you expecting AMD to move everything to half-node processes?
  • JarredWalton - Tuesday, February 8, 2011 - link

    Because when everything else moves to 28nm, AMD would have their IP on 32nm; then next will be 20nm and 22nm. In my talks with AMD and GlobalFoundries at CES, they didn't outright state that they would move over, but right now the only ones really doing things on the "full nodes" are AMD and Intel. If you want to get in on the smartphone and tablet stuff -- or other SoC designs -- it makes it far easier to be able to license chunks of the design from others.
  • Soleron2 - Monday, February 7, 2011 - link

    "Anand guessed at a Q3/Q4 2011 launch for desktop Bulldozer, which means Bulldozer might not join the mobile party until Q4’11 or perhaps even 2012."

    Desktop Bulldozer is Q2 '11 according to AMD, officially. John Fruehe has confirmed this multiple times. Server Bulldozer is Q3 '11.
  • JarredWalton - Monday, February 7, 2011 - link

    I clarified the text... high-end desktop will be first, but it's basically the server chip. I think the "mainstream" desktop stuff will come later, so basically we're getting Athlon FX equivalent first, then Opteron, and then regular Athlon (to draw parallels with the K8 rollout).
  • icrf - Monday, February 7, 2011 - link

    "multithreaded tasks like video encoding and 3D rendering generally need more floating-point performance"

    My understanding is video encoding is very integer intensive, or at least any DCT-based ones. I'm told x264 spends most of its time in integer SIMD, so I'm not sure standard integer cores matter much, as the vector hardware is where everything is happening.
  • JarredWalton - Monday, February 7, 2011 - link

    I believe video encoding apps have been optimized to use a lot of SSE code, which means even if they're doing INT work in SSE, it still uses the FP/SSE registers. Anyway, without hardware we really just can't say for sure how Bulldozer will perform -- or what sort of power it will require. I'm guessing it will be competitive with Sandy Bridge on some things, faster in pure INT workloads, and slower in FP/SSE. But for mobility, I think it might use a lot more power than most notebooks can provide. We'll see in a few months.
  • SteelCity1981 - Monday, February 7, 2011 - link

    Clock speed also makes a difference, seeing as the i7-2630QM is 270MHz faster than the i7-740QM.

    AnandTech should underclock an i7-2630QM to match the i7-740QM's clock speed in one of its tests to see how much faster the i7-2630QM is clock for clock.
