Original Link: http://www.anandtech.com/show/5626/ivy-bridge-preview-core-i7-3770k
The Ivy Bridge Preview: Core i7 3770K Tested
by Anand Lal Shimpi on March 6, 2012 8:16 PM EST
Note: This preview was not sanctioned or supported by Intel in any way.
I still remember hearing about Intel's tick-tock cadence and not having much faith that the company could pull it off. Granted Intel hasn't given us a new chip every 12 months on the dot, but more or less there's something new every year. Every year we either get a new architecture on an established process node (tock), or a derivative architecture on a new process node (tick). The table below summarizes what we've seen since Intel adopted the strategy:
|Intel's Tick-Tock Cadence|
|Microarchitecture||Process Node||Tick or Tock||Release Year|
|Conroe/Merom||65nm||Tock||2006|
|Penryn||45nm||Tick||2007|
|Nehalem||45nm||Tock||2008|
|Westmere||32nm||Tick||2010|
|Sandy Bridge||32nm||Tock||2011|
|Ivy Bridge||22nm||Tick||2012|
|Haswell||22nm||Tock||2013 (expected)|
Last year was a big one. Sandy Bridge brought a Conroe-like increase in performance across the board thanks to a massive re-plumbing of Intel's out-of-order execution engine and other significant changes to the microarchitecture. If you remember Conroe (the first Core 2 architecture), what followed it was a relatively mild upgrade called Penryn that gave you a little bit in the way of performance and dropped power consumption at the same time.
Ivy Bridge, the tick that follows Sandy Bridge, would typically be just that: a mild upgrade that inched performance ahead while dropping power consumption. Intel's microprocessor ticks are usually very conservative on the architecture side, which limits the performance improvement. Being less risky on the architecture allows Intel to focus more on working out the kinks in its next process node, in turn delivering some amount of tangible power reduction.
Where Ivy Bridge shakes things up is on the graphics side. For years Intel has been able to ship substandard graphics in its chipsets based on the principle that only gamers needed real GPUs and Windows ran just fine on integrated graphics. Over the past decade that philosophy required adjustment. First it was HD video decode acceleration, then GPU accelerated user interfaces and, more recently, GPU computing applications. Intel eventually committed to taking GPU performance (and driver quality) seriously, setting out on a path to significantly improve its GPUs.
As Ivy is a tick in Intel's cadence, we shouldn't see much of a performance improvement. On the CPU side that's mostly true. You can expect a 5 - 15% increase in performance for the same price as a Sandy Bridge CPU today. A continued desire to be aggressive on the GPU front however puts Intel in a tough spot. Moving to a new manufacturing process, especially one as dramatically different as Intel's 22nm 3D tri-gate node, isn't easy. Any additional complexity outside of the new process simply puts the schedule at risk. That being said, Intel's GPUs continue to lag significantly behind AMD's and, more importantly, they still aren't fast enough by customer standards.
Apple has been pushing Intel for faster graphics for years, having no issues with including discrete GPUs across its lineup or even prioritizing GPU over CPU upgrades. Intel's exclusivity agreement with Apple expired around Nehalem, meaning every design win can easily be lost if the fit isn't right.
With Haswell, Intel will finally deliver what Apple and other customers have been asking for on the GPU front. Until then Intel had to do something to move performance forward. A simple tick wouldn't cut it.
Intel calls Ivy Bridge a tick+. While CPU performance steps forward, GPU performance sees a more significant improvement - in the 20 - 50% range. The magnitude of improvement on the GPU side is more consistent with what you'd expect from a tock. The combination of a CPU tick and a GPU tock is how Intel arrives at the tick+ naming. I'm personally curious to see how this unfolds going forward. Will GPUs and CPUs go through alternating tocks, or will Intel try to synchronize them? Do we see innovation on one side slow down as the other increases? Does tick-tock remain on a two year cadence now that there are two fairly different architectures that need updating? These are questions I suspect we won't see answered until after Haswell. For now, let's focus on Ivy Bridge.
Ivy Bridge Architecture Recap
At IDF Intel disclosed much of Ivy's CPU architecture, but below is a quick summary:
- 4-wide front end with µOp cache from Sandy Bridge
- OoO execution engine from Sandy Bridge
- Data structures previously statically shared between threads can now be dynamically shared (e.g. DSB queue), improves single threaded performance
- FP/integer divider delivers 2x throughput compared to Sandy Bridge
- MOV instructions no longer occupy an execution port, potential for improved ILP when MOVs are present
- Power gated DDR3 interface
- DDR3L support
- Max supported DDR3 frequency is now 2800MHz (up from 2133MHz); memory speed can now be adjusted in 200MHz increments
- Lower system agent voltage options, lower voltages at intermediate turbo frequencies, power aware interrupt routing
- Power efficiency improvements related to 22nm
- Configurable TDP
I've highlighted the three big items from a CPU performance standpoint. Much of the gains you'll see will come from those areas coupled with more aggressive turbo frequencies.
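The new memory granularity is easy to picture; a quick sketch (the function name and range endpoints are illustrative - only the 200MHz step and the 2800MHz cap come from the list above):

```python
# Enumerate the DDR3 data rates (in MT/s) settable under Ivy Bridge's
# 200MHz stepping, up to the new 2800MHz ceiling noted above.
def ivb_memory_steps(start=1600, stop=2800, step=200):
    return list(range(start, stop + 1, step))

print(ivb_memory_steps())
# [1600, 1800, 2000, 2200, 2400, 2600, 2800]
```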
On the GPU, the improvements are more significant. Some of the major changes are below:
- DirectX 11 Support
- More execution units (16 vs 12) for GT2 graphics (Intel HD 4000)
- 2x MADs per clock
- EUs can now co-issue more operations
- GPU specific on-die L3 cache
- Faster QuickSync performance
- Lower power consumption due to 22nm
Intel will initially launch quad-core SKUs on the desktop. Ivy Bridge will be branded as Intel's 3rd generation Core microarchitecture and use model numbers below 3800. The 3800 - 3900 series are reserved for Sandy Bridge E for the time being, while the 2000 series refers to last year's Sandy Bridge parts. Just like we saw with Sandy Bridge, Ivy will be available in fully unlocked (K-series), partially unlocked (any part with Turbo support) and fully locked (anything without Turbo support) SKUs.
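The unlock tiers above reduce to a simple rule; a hypothetical sketch (the function and model strings are mine - the K/Turbo rules come from the paragraph above):

```python
# K-series parts are fully unlocked; any non-K part with Turbo support is
# partially unlocked; parts without Turbo are fully locked.
def unlock_tier(model: str, has_turbo: bool) -> str:
    if model.endswith("K"):
        return "fully unlocked"
    return "partially unlocked" if has_turbo else "fully locked"

print(unlock_tier("i7-3770K", True))   # fully unlocked
print(unlock_tier("i5-3450", True))    # partially unlocked
```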
What we know about the lineup today is summarized in the table below:
|Processor||Core Clock||Cores / Threads||L3 Cache||Max Turbo||Intel HD Graphics||TDP||Price|
|Intel Core i7 3960X||3.3GHz||6 / 12||15MB||3.9GHz||N/A||130W||$990|
|Intel Core i7 3930K||3.2GHz||6 / 12||12MB||3.8GHz||N/A||130W||$555|
|Intel Core i7 3820||3.6GHz||4 / 8||10MB||3.9GHz||N/A||130W||$285|
|Intel Core i7 3770K||3.5GHz||4 / 8||8MB||3.9GHz||4000||77W||$332 est|
|Intel Core i7 3770||3.4GHz||4 / 8||8MB||3.9GHz||4000||77W||$294 est|
|Intel Core i5 3570K||3.4GHz||4 / 4||6MB||3.8GHz||4000||77W||TBD|
|Intel Core i5 3570||3.4GHz||4 / 4||6MB||3.8GHz||2500||77W||TBD|
|Intel Core i5 3550||3.3GHz||4 / 4||6MB||3.7GHz||2500||77W||TBD|
|Intel Core i5 3470||3.2GHz||4 / 4||6MB||3.6GHz||2500||77W||TBD|
|Intel Core i5 3450||3.1GHz||4 / 4||6MB||3.5GHz||2500||77W||TBD|
|Intel Core i5 3330||3.0GHz||4 / 4||6MB||3.2GHz||2500||77W||TBD|
|Intel Core i7 2700K||3.5GHz||4 / 8||8MB||3.9GHz||3000||95W||$332|
|Intel Core i7 2600K||3.4GHz||4 / 8||8MB||3.8GHz||3000||95W||$317|
|Intel Core i7 2600||3.4GHz||4 / 8||8MB||3.8GHz||2000||95W||$294|
|Intel Core i5 2500K||3.3GHz||4 / 4||6MB||3.7GHz||3000||95W||$216|
|Intel Core i5 2500||3.3GHz||4 / 4||6MB||3.7GHz||2000||95W||$205|
Unlike the initial Sandy Bridge launch, both fully and partially unlocked Ivy Bridge parts will ship with Intel HD 4000 graphics - although that's still reserved for the high-end on the desktop. I am also seeing movement towards removing core-count restrictions on turbo frequencies. Today max turbo is defined in most cases by the highest frequency you can reach with only one core active. I would not be surprised to see Intel eventually move to a setup where max turbo can be reached regardless of number of active cores and just base it on current power consumption and thermal conditions.
Ivy Bridge uses the same LGA-1155 socket as Sandy Bridge. Provided there's BIOS/UEFI support from your board maker, you can use Ivy Bridge CPUs in older 6-series motherboards. Doing so won't give you access to some of the newer 7-series chipset features like PCIe Gen 3 (some 6-series boards are claiming 3.0 support), native USB 3.0 (many 6-series boards have 3rd party USB 3.0 controllers) and Intel's Rapid Start Technology.
|Chipset||Intel Z77||Intel Z75||Intel H77||Intel Z68||Intel P67||Intel H67|
|CPU PCIe Config||1 x16, 2 x8, or 1 x8 + 2 x4 (PCIe 3.0)||1 x16 or 2 x8 (PCIe 3.0)||1 x16 (PCIe 3.0)||1 x16, 2 x8, or 1 x8 + 2 x4||1 x16 or 2 x8||1 x16|
|Processor Graphics Support||Yes||Yes||Yes||Yes||No||Yes|
|Intel SRT (SSD caching)||Yes||No||Yes||Yes||No||No|
|USB 2.0 Ports (3.0)||14 (4)||14 (4)||14 (4)||14||14||14|
|SATA Total (Max Number of 6Gbps Ports)||6 (2)||6 (2)||6 (2)||6 (2)||6 (2)||6 (2)|
|PCIe Lanes||8 (5GT/s)||8 (5GT/s)||8 (5GT/s)||8 (5GT/s)||8 (5GT/s)||8 (5GT/s)|
The big change this year is that all 7-series chipsets support processor graphics, while last year Intel had the silly P vs. H split until Z68 arrived and simplified everything.
The State of Ivy Bridge Silicon
Intel finally delivered production quality Ivy Bridge silicon to its partners last month. The launch is still scheduled for this Spring; however, there has been a delay of approximately three weeks. Remember what I said earlier about the risks of doing too much on the architecture side while shifting to a new process node.
We were able to spend some time with the new high-end Ivy Bridge desktop SKU: Intel's Core i7 3770K. What follows is a preview of its performance. Keep in mind that this is a preview using early drivers and an early Z77 motherboard. The numbers here could change. This preview was not supported or sanctioned by Intel in any way.
To keep the preview length manageable we're presenting a subset of our results here. For all benchmark results and even more comparisons be sure to use our performance comparison tool: Bench.
|Motherboards:||ASUS P8Z68-V Pro (Intel Z68), ASUS Crosshair V Formula (AMD 990FX), Intel DX79SI (Intel X79), Intel Z77 chipset based motherboard|
|Hard Disks:||Intel X25-M SSD (80GB), Crucial RealSSD C300, OCZ Agility 3 (240GB)|
|Memory:||4 x 4GB G.Skill Ripjaws X DDR3-1600 9-9-9-20|
|Video Cards:||ATI Radeon HD 5870 (Windows 7), AMD Processor Graphics, Intel Processor Graphics|
|Video Drivers:||AMD Catalyst 12.2 Preview|
|Desktop Resolution:||1920 x 1200|
|OS:||Windows 7 x64|
SYSMark 2007 & 2012
Although not the best indication of overall system performance, the SYSMark suites do give us a good look at lighter workloads than we're used to testing. SYSMark 2007 is a better indication of low thread count performance, although 2012 isn't tremendously more threaded in that regard.
As the SYSMark suites aren't particularly thread heavy, there's little advantage to the 6-core Sandy Bridge E CPUs. The 3770K however manages to slot in above all of the other Sandy Bridge parts, coming in between 5 and 20% faster than the 2600K. The biggest advantages show up in either the lightly threaded tests or the FP heavy benchmarks. Given what we know about Ivy's enhancements, this is exactly what we'd expect.
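For reference, the "X% faster" figures quoted throughout this preview follow the usual convention; a minimal sketch (the example numbers are made up):

```python
# For higher-is-better scores, speedup = new/old - 1.
# For lower-is-better timings (seconds), speedup = old/new - 1.
def percent_faster(new, old, lower_is_better=False):
    ratio = old / new if lower_is_better else new / old
    return (ratio - 1.0) * 100.0

print(round(percent_faster(230, 200)))          # score: 15% faster
print(round(percent_faster(11.0, 12.0, True)))  # seconds: ~9% faster
```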
Content Creation Performance
Adobe Photoshop CS4
To measure performance under Photoshop CS4 we turn to the Retouch Artists’ Speed Test. The test does basic photo editing; there are a couple of color space conversions, many layer creations, color curve adjustment, image and canvas size adjustment, unsharp mask, and finally a gaussian blur performed on the entire image.
The whole process is timed and thanks to the use of Intel's X25-M SSD as our test bed hard drive, performance is far more predictable than back when we used to test on mechanical disks.
Time is reported in seconds and the lower numbers mean better performance. The test is multithreaded and can hit all four cores in a quad-core machine.
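The measurement itself is straightforward wall-clock timing; a rough sketch of the methodology in Python (the workload is a stand-in, not the actual Photoshop action list, and Python threads won't scale like native code - only the timing pattern is the point):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Integer-heavy stand-in for one filter pass over an image tile.
def busy(n):
    acc = 0
    for i in range(n):
        acc += i * i
    return acc

# Time a multithreaded run end to end; lower is better, as in the article.
def timed_run(threads=4, work=200_000):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=threads) as pool:
        list(pool.map(busy, [work] * threads))
    return time.perf_counter() - start

print(f"{timed_run():.3f} seconds")
```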
Our Photoshop test is well threaded but it doesn't peg all cores constantly. Instead you get burstier behavior. With the core count advantage out of the way, SNB-E steps aside and allows the 3770K to step up as the fastest CPU we've tested here. The performance advantage over the 2600K is around 9%.
Today's desktop processors are more than fast enough to do professional level 3D rendering at home. To look at performance under 3dsmax we ran the SPECapc 3dsmax 8 benchmark (only the CPU rendering tests) under 3dsmax 9 SP1. The results reported are the rendering composite scores.
In another FP heavy workload we see a pretty reasonable gain for Ivy Bridge: 8.5% over a 2600K. This isn't enough to make you want to abandon your Sandy Bridge, but it's a good step forward for a tick.
Created by the Cinema 4D folks we have Cinebench, a popular 3D rendering benchmark that gives us both single and multi-threaded 3D rendering results.
The single threaded Cinebench test shows a 9% performance advantage for the 3770K over the 2600K. The gap increases slightly to 11% as we look at the multithreaded results:
If you're running a workload that can really stress multiple cores, the 6-core Sandy Bridge E parts will remain unstoppable but in the quad-core world, Ivy Bridge leads the pack.
Video Transcoding Performance
x264 HD 3.03 Benchmark
Graysky's x264 HD test uses x264 to encode a 4Mbps 720p MPEG-2 source. The focus here is on quality rather than speed, thus the benchmark uses a 2-pass encode and reports the average frame rate in each pass.
In the second pass of our x264 test we see a nearly 14% increase over the 2600K. Once again, there's no replacement for more cores in these types of workloads, but delivering better performance at a lower TDP than last year's quad-core is great news for more thermally constrained desktops.
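A 2-pass encode like the one benchmarked here is driven by x264's `--pass` and `--stats` options; a hedged sketch that just assembles the command lines (file names and the bitrate target are illustrative):

```python
# Build the two x264 invocations for a 2-pass, bitrate-targeted encode.
# Pass 1 writes a stats file (and discards the video); pass 2 reads the
# stats back to distribute bits for quality - hence "2-pass".
def two_pass_cmds(src="source.mpg", out="out.mkv", kbps=4000):
    common = ["x264", "--bitrate", str(kbps), "--stats", "x264.log"]
    pass1 = common + ["--pass", "1", "-o", "/dev/null", src]
    pass2 = common + ["--pass", "2", "-o", out, src]
    return pass1, pass2

p1, p2 = two_pass_cmds()
print(" ".join(p1))
print(" ".join(p2))
```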
Software Development Performance
Compile Chromium Test
You guys asked for it and finally I have something I feel is a good software build test. Using Visual Studio 2008 I'm compiling Chromium. It's a pretty huge project that takes over forty minutes to compile from the command line on a Core i3 2100. But the results are repeatable and the compile process will stress all 12 threads at 100% for almost the entire time on a 980X so it works for me.
Ivy Bridge shows more traditional tick gains in our VS2008 benchmark - performance moves forward by a few percent, but nothing significant. We are seeing a somewhat compressed dynamic range in this particular compiler workload; it's quite possible that other bottlenecks are beginning to creep in as microarchitectures get even faster.
Compression & Encryption Performance
By working with a small dataset, the 7-zip benchmark gives us an indication of multithreaded integer performance without being IO limited:
Although real world compression/decompression tests can be heavily influenced by disk IO, the CPU does play a significant role. Here we're showing a 15% increase in performance over the 2600K. In the real world you'd see something much smaller as workloads aren't always so well threaded. The results here do have implications for other heavily compute bound integer workloads however.
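The same idea - compute-bound compression with IO kept out of the picture - can be reproduced with any compressor; a small sketch using Python's lzma module (the dataset and preset are arbitrary):

```python
import lzma
import time

# ~4MB of in-memory data so disk IO never enters the measurement.
data = bytes(range(256)) * 16384

start = time.perf_counter()
packed = lzma.compress(data, preset=6)
elapsed = time.perf_counter() - start

# Sanity check: the round trip must be lossless.
assert lzma.decompress(packed) == data
print(f"compressed {len(data)} -> {len(packed)} bytes in {elapsed:.2f}s")
```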
TrueCrypt is a very popular encryption package that offers full AES-NI support. The application also features a built-in encryption benchmark that we can use to measure CPU performance:
Our TrueCrypt test scales fairly well with clock speed; I suspect what we're seeing here is due in part to Ivy's ability to maintain higher multi-core turbo frequencies despite having max turbo frequencies similar to Sandy Bridge's.
Discrete GPU Gaming Performance
Gaming performance with a discrete GPU does improve in line with the rest of what we've seen thus far from Ivy Bridge. It's definitely a step ahead of Sandy Bridge, but not enough to warrant an upgrade in most cases. If you haven't already made the jump to Sandy Bridge however, the upgrade will do you well.
Dragon Age Origins
DAO has been a staple of our CPU gaming benchmarks for some time now. The third/first person RPG is well threaded and is influenced both by CPU and GPU performance. Our benchmark is a FRAPS runthrough of our character through a castle.
Dawn of War II
Dawn of War II is an RTS title that ships with a built in performance test. I ran at Ultra quality settings at 1680 x 1050:
World of Warcraft
Our WoW test is run at High quality settings on a lightly populated server in an area where no other players are present to produce repeatable results. We ran at 1680 x 1050.
Starcraft II
We have two Starcraft II benchmarks: a GPU and a CPU test. The GPU test is mostly a navigate-around-the-map test, as scrolling and panning around tends to be the most GPU bound in the game. Our CPU test involves a massive battle of 6 armies in the center of the map, stressing the CPU more than the GPU. At these low quality settings however, both benchmarks are influenced by CPU and GPU. We'll get to the GPU test shortly, but our CPU test results are below. The benchmark runs at 1024 x 768 at Medium Quality settings with all CPU influenced features set to Ultra.
Metro 2033
We're using the Metro 2033 benchmark that ships with the game. We run the benchmark at 1024 x 768 for a more CPU bound test as well as 1920 x 1200 to show what happens in a more GPU bound scenario.
DiRT 3
We ran two DiRT 3 benchmarks to get an idea for CPU bound and GPU bound performance. First the CPU bound settings:
Civilization V
Civ V's lateGameView benchmark presents us with two separate scores: average frame rate for the entire test as well as a no-render score that only looks at CPU performance. We're looking at the no-render score here to isolate CPU performance alone:
Power Consumption
Intel isn't really exploiting 22nm for significantly higher default or max turbo frequencies. While it does seem like you'll hit turbo frequencies more often with Ivy, most of what 22nm offers will be realized as power savings.
At idle, all cores are already power gated so there's not much more Ivy Bridge can offer. We see savings of a couple of watts at most over the 2600K but otherwise it's nothing significant. In notebooks I'd expect to see implementations of DDR3L help keep power consumption down, but at idle there's really not much that can be done.
Under load however the power savings are significant. The Core i7 3770K pulls 27 fewer watts while delivering better performance than the 2600K. Again, translating this to what you can expect in notebooks I'd say that peak battery life likely won't be affected, but battery life under load will be better with Ivy.
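The notebook implication is simple division; a back-of-the-envelope sketch (the 77.5Wh capacity and platform draw figures are assumptions for illustration, not measurements):

```python
# Runtime under sustained load is just battery capacity divided by draw.
def runtime_hours(battery_wh, draw_w):
    return battery_wh / draw_w

old = runtime_hours(77.5, 31.0)  # hypothetical SNB notebook under load
new = runtime_hours(77.5, 27.0)  # same platform drawing a few watts less
print(f"{old:.1f}h -> {new:.1f}h under sustained load")
```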
Intel HD Graphics 4000 Performance
With respectable but still very tick-like performance gains on the CPU, our focus now turns to Ivy Bridge's GPU. Drivers play a significant role in performance here and we're still several weeks away from launch so these numbers may improve. We used the latest available drivers as of today for all other GPUs.
We'll start with Crysis, a title that no one would have considered running on integrated graphics a few years ago. Sandy Bridge brought playable performance at low quality settings (Performance defaults) last year, but how much better does Ivy do this year?
At our highest quality (Mainstream) settings, Intel's HD Graphics 4000 is 55% faster than the HD 3000 graphics in Sandy Bridge. While still tangibly slower than AMD's Llano (Radeon HD 6550D), Ivy Bridge is a significant step forward. Drop the quality down a bit and playability improves significantly:
Over 50 fps at 1680 x 1050 from Intel integrated graphics is pretty impressive. Here we're showing a 41% increase in performance compared to Sandy Bridge, with Llano maintaining a 33% advantage over Ivy. I would've liked to have seen an outright doubling of performance, but this is a big enough step forward to be noticeable on systems with no discrete GPU.
The heaviest test we'll show in this suite, Metro 2033 still requires a discrete GPU for reasonable performance. That being said, it's still an interesting measure of how much more GPU horsepower exists in Ivy Bridge:
Here the Llano gap shrinks to about 13 - 25% depending on the resolution/quality settings. AMD still has the clear advantage in GPU performance, but Intel does step closer. The performance advantage over Sandy Bridge ranges from 20 - 40%. With these sorts of numbers it's clear why Intel views Ivy Bridge as a tick+. Generational performance improvements on the CPU side generally fall in the 20 - 40% range. As you've just seen, Ivy Bridge offers a 7 - 15% increase in CPU performance over Sandy Bridge - making it a bona fide tick from a CPU perspective. The 20 - 40% increase on the graphics side is what blurs the line between a conventional tick and what we have with Ivy Bridge.
DiRT 3 is another DirectX 11 game that's able to run in DX11 mode on Ivy Bridge. The medium quality settings default to a DX11 codepath although Sandy Bridge falls back to DX9 in this test. Although racing games are typically very CPU limited at lower resolutions, DiRT 3 pushes the envelope and these GPUs are enough of a limit to make things interesting:
DiRT 3 remains in line with the other tests we've run. Ivy Bridge (HD 4000) offers a 30 - 40% increase in performance over Sandy Bridge (HD 3000), while AMD's Llano (6550D) holds onto a 27% advantage over Ivy.
Starcraft 2 can vary wildly from being very CPU limited to very GPU limited. As a result we've had to create two separate benchmarks. Heavy battles are often CPU limited, while scrolling around a map tends to stress the GPU more.
SC2 shows us a smaller advantage over Sandy Bridge - generally around 20%. Llano is able to maintain a much larger 40 - 50% advantage as a result.
The Elder Scrolls V: Skyrim
Skyrim is one of the more important titles for processor graphics to handle well, as it isn't a game aimed exclusively at hardcore FPS gamers. We tested with the medium quality defaults and AA/AF disabled.
Ivy Bridge does very well in Skyrim - not able to reach 60 fps, but staying above 30 fps at resolutions up to 1680 x 1050. The gains over Sandy Bridge are significant at nearly 50%.
Anisotropic Filtering Quality
At IDF last year Intel promised an improvement in its anisotropic filtering quality compared to Sandy Bridge. Personally I didn't believe SNB's GPU was fast enough to warrant turning on AF in most modern titles, but as Intel's GPU performance improves it must take image quality more seriously.
I wouldn't put a ton of faith in these early results as things can change, but AF quality does appear to be much better than Sandy Bridge:
The peculiar radial lines that were present in SNB's algorithm remain here, although they are more muted. Again it's too early to tell if we're looking at final image quality or something that will improve over time. If we are to judge based on this result alone, I'd say it mirrors what we saw in our performance investigation: Ivy is a step towards AMD in the GPU department, but not a step ahead.
DirectX 11 Compute Performance
As Ivy Bridge is Intel's first DirectX 11 GPU architecture, we're actually able to run some DX11 workloads on it without having them fall back to DX10. We'll do a much more significant investigation into GPU compute performance in our full Ivy Bridge review, but as a teaser we've got our standard DirectX 11 Compute Shader Fluid Simulation test from the DX11 SDK:
Ivy Bridge does extremely well here, likely due in no small part to its excellent last level cache. The Fluid Simulation we run looks at shared memory performance, which allows Ivy to do quite well. We're seeing over 3.2x the performance of Sandy Bridge here, and even a slight advantage over Llano.
Quick Sync Performance
Late last year we mentioned that Intel would be bringing improved QuickSync to Ivy Bridge. Details were scarce, but we theorized that improvements to Ivy Bridge's video decoder might be part of the reason. At least with the drivers we tested with, improved video decoder speed was largely responsible for the advantage you can see below.
For this test we transcoded a 3GB AVC main profile movie using ArcSoft's 720p 3Mbps YouTube HD profile:
Once again we're showing a ~40% improvement over Sandy Bridge - a big step forward. The QuickSync side of IVB is one of the lesser known aspects of the architecture; I suspect we'll have to wait for the launch to truly understand what we can expect from Ivy. Needless to say, QuickSync is even faster this year.
Final Words
Based on these early numbers, Ivy Bridge is pretty much right where we expected it on the CPU side. You're looking at a 5 - 15% increase in CPU performance over Sandy Bridge at a similar price point. I have to say that I'm pretty impressed by the gains we've seen here today. It's quite difficult to get tangible IPC improvements from a modern architecture these days, particularly on such a strict nearly-annual basis. For a tick in Intel's cadence, Ivy Bridge is quite good. It feels a lot like Penryn did after Conroe, but better.
The improvement on the GPU side is significant, although not nearly the jump we saw going to Sandy Bridge last year. Ivy's GPU finally puts Intel's processor graphics into the realm of reasonable for a system with low-end GPU needs. Based on what we've seen, discrete GPUs below the $50 - $60 mark don't make sense if you've got Intel's HD 4000 inside your system. The discrete market above $100 remains fairly safe however.
With Ivy Bridge you aren't limited to playing older titles, although you are still limited to relatively low quality settings on newer games. If you're willing to trade off display resolution you can reach a much better balance. We are finally able to deliver acceptable performance at or above 1366 x 768. With the exception of Metro 2033, the games we tested even showed greater than 30 fps at 1680 x 1050. The fact that we were able to run Crysis: Warhead at 1680 x 1050 at over 50 fps on free graphics from Intel is sort of insane when you think about where Intel was just a few years ago.
Whether or not this is enough for mainstream gaming really depends on your definition of that segment of the market. Being able to play brand new titles at reasonable frame rates and realistic resolutions is a bar that Intel has safely met. I hate to sound like a broken record but Ivy Bridge continues Intel's march in the right direction when it comes to GPU performance. Personally, I want more, and I suspect that Haswell will deliver much of that. It is worth pointing out that Intel is progressing at a faster rate than the discrete GPU industry at this point. Admittedly the gap is still downright huge, but from what I've heard even the significant gains we're seeing here with Ivy will pale in comparison to what Haswell provides.
What Ivy Bridge does not appear to do is catch up to AMD's A8-series Llano APU. It narrows the gap, but for systems whose primary purpose is gaming AMD will still likely hold a significant advantage with Trinity. The fact that Ivy Bridge hasn't progressed enough to challenge AMD on the GPU side is good news. The last thing we need is for a single company to dominate on both fronts. At least today we still have some degree of competition in the market. To Intel's credit however, it's just as unlikely that AMD will surpass Intel in CPU performance this next round with Trinity. Both sides are getting more competitive, but it still boils down to what matters more to you: GPU or CPU performance. Similarly, there's also the question of which one (CPU or GPU) approaches "good enough" first. I suspect the answer to this is going to continue to vary wildly depending on the end user.
The power savings from 22nm are pretty good on the desktop. Under heavy CPU load we measured a ~30W decrease in total system power consumption compared to a similar Sandy Bridge part. If this is an indication of what we can expect from notebooks based on Ivy Bridge I'd say you shouldn't expect significant gains in battery life under light workloads, but you may see improvement in worst case scenario battery life. For example, in our Mac battery life suite we pegged the Sandy Bridge MacBook Pro at around 2.5 hours of battery life in our heavy multitasking scenario. That's the number I'd expect to see improve with Ivy Bridge. We only had a short amount of time with the system and couldn't validate Intel's claims of significant gains in GPU power efficiency but we'll hopefully be able to do that closer to launch.
There's still more to learn about Ivy Bridge, including how it performs as a notebook chip. If the results today are any indication, it should be a good showing all around. Lower power consumption and better performance at the same price as last year's parts - it's the Moore's Law way. There's not enough of an improvement to make existing SNB owners want to upgrade, but if you're still clinging to an old Core 2 (or earlier) system, Ivy will be a great step forward.