Original Link: http://www.anandtech.com/show/5771/the-intel-ivy-bridge-core-i7-3770k-review



The times, they are changing. In fact, the times have already changed, we're just waiting for the results. I remember the first time Intel brought me into a hotel room to show me their answer to AMD's Athlon 64 FX—the Pentium 4 Extreme Edition. Back then the desktop race was hotly contested. Pushing the absolute limits of what could be done without a concern for power consumption was the name of the game. In the mid-2000s, the notebook started to take over. Just like the famous day when Apple announced that it was no longer a manufacturer of personal computers but a manufacturer of mobile devices, Intel came to a similar realization years prior when these slides were first shown at an IDF in 2005:


IDF 2005


IDF 2005

Intel is preparing for another major transition, similar to the one it brought to light seven years ago. The move will once again be motivated by mobility, and the transition will be away from the giant CPUs that currently power high-end desktops and notebooks to lower power, more integrated SoCs that find their way into tablets and smartphones. Intel won't leave the high-end market behind, but the trend towards mobility didn't stop with notebooks.

The fact of the matter is that everything Charlie has said on the big H is correct. Haswell will be a significant step forward in graphics performance over Ivy Bridge, and will likely mark Intel's biggest generational leap in GPU technology of all time. Internally Haswell is viewed as the solution to the ARM problem. Build a chip that can deliver extremely low idle power, to the point where you can't tell the difference between an ARM tablet running in standby and one with a Haswell inside. At the same time, give it the performance we've come to expect from Intel. Haswell is the future, and this is the bridge to take us there.

In our Ivy Bridge preview I applauded Intel for executing so well over the past few years. By limiting major architectural shifts to known process technologies, and keeping designs simple when transitioning to a new manufacturing process, Intel took what was once a five year design cycle for microprocessor architectures and condensed it into two. Sure, the changes every two years were simpler than what we used to see every five, but as with most things in life, smaller but more frequent progress often works better than putting big changes off for a long time.

It's Intel's tick-tock philosophy that kept it from having a Bulldozer, and the lack of such structure that left AMD in the situation it is today (on the CPU side at least). Ironically what we saw happen between AMD and Intel over the past ten years is really just a matter of the same mistake being made by both companies, just at different times. Intel's complacency and lack of an aggressive execution model led to AMD's ability to outshine it in the late K7/K8 days. AMD's similar lack of an execution model and executive complacency allowed the tides to turn once more.

Ivy Bridge is a tick+, as we've already established. Intel took a design risk and went for greater performance all while transitioning to the most significant process technology it has ever seen. The end result is a reasonable increase in CPU performance (for a tick), a big step in GPU performance, and a decrease in power consumption.

Today is the day that Ivy Bridge gets official. Its name truly embodies its purpose. While Sandy Bridge was a bridge to a new architecture, Ivy connects a different set of things. It's a bridge to 22nm, warming the seat before Haswell arrives. It's a bridge to a new world of notebooks that are significantly thinner and more power efficient than what we have today. It's a means to the next chapter in the evolution of the PC.

Let's get to it.

Additional Reading

Intel's Ivy Bridge Architecture Exposed
Mobile Ivy Bridge Review
Undervolting & Overclocking on Ivy Bridge

Intel's Ivy Bridge: An HTPC Perspective



The Lineup: Quad-Core Only for Now

Very telling of how times have changed is that today's Ivy Bridge launch only comes with a single Extreme Edition processor—the Core i7-3920XM, a mobile part. There are some great enthusiast desktop parts of course, but as with Sandy Bridge the desktop Extreme Edition is reserved for another platform. In this case, we're talking about LGA-2011 which won't launch in an Ivy flavor until the end of this year/early next year at this point.


From left to right: Clarkdale, Sandy Bridge, Ivy Bridge, Sandy Bridge E

Somewhat contrary to everything I've been saying thus far, however, is the nature of the launch: only quad-core parts will be available at first. The dual-core and, more importantly for Ivy Bridge, the ultra low voltage parts won't come until May/June. That means the bigger notebooks and the performance desktops will arrive first, followed by the ultraportables, Ultrabooks and more affordable desktops. This strategy makes sense, as the volumes for expensive quad-core notebooks and performance desktops are generally lower than those for cheaper dual-core notebooks/desktops. From what I've heard, the move to 22nm has been the most challenging transition Intel's fab teams have ever faced, which obviously constrains initial supplies.

Intel 2012 CPU Lineup (Standard Power)
Processor Core Clock Cores / Threads L3 Cache Max Turbo Intel HD Graphics TDP Price
Intel Core i7 3960X 3.3GHz 6 / 12 15MB 3.9GHz N/A 130W $999
Intel Core i7 3930K 3.2GHz 6 / 12 12MB 3.8GHz N/A 130W $583
Intel Core i7 3820 3.6GHz 4 / 8 10MB 3.9GHz N/A 130W $294
Intel Core i7 3770K 3.5GHz 4 / 8 8MB 3.9GHz 4000 77W $313
Intel Core i7 3770 3.4GHz 4 / 8 8MB 3.9GHz 4000 77W $278
Intel Core i5 3570K 3.4GHz 4 / 4 6MB 3.8GHz 4000 77W $212
Intel Core i5 3550 3.3GHz 4 / 4 6MB 3.7GHz 2500 77W $194
Intel Core i5 3450 3.1GHz 4 / 4 6MB 3.5GHz 2500 77W $174
Intel Core i7 2700K 3.5GHz 4 / 8 8MB 3.9GHz 3000 95W $332
Intel Core i5 2550K 3.4GHz 4 / 4 6MB 3.8GHz N/A 95W $225
Intel Core i5 2500 3.3GHz 4 / 4 6MB 3.7GHz 2000 95W $205
Intel Core i5 2400 3.1GHz 4 / 4 6MB 3.4GHz 2000 95W $195
Intel Core i5 2320 3.0GHz 4 / 4 6MB 3.3GHz 2000 95W $177

There are five 77W desktop parts launching today, three 65W parts and one 45W part. The latter four are either T or S SKUs (lower leakage, lower TDP and lower clocked parts), while the first five are traditional, standard power parts. Note that max TDP for Ivy Bridge on the desktop has been reduced from 95W down to 77W thanks to Intel's 22nm process. The power savings do roughly follow that 18W decrease in TDP. Despite the power reduction, you may see 95W labels on boxes and OEMs are still asked to design for 95W as Ivy Bridge platforms can accept both 77W IVB and 95W Sandy Bridge parts.

We've already gone through Ivy's architecture in detail so check out our feature here for more details if you haven't already.

Intel 2012 Additional CPU Features (Standard Power)
Processor GPU Clock (base) GPU Clock (max) PCIe 3.0 Intel SIPP Intel vPro Intel VT-d Intel TXT
Intel Core i7 3770K 650MHz 1150MHz Yes No No No No
Intel Core i7 3770 650MHz 1150MHz Yes Yes Yes Yes Yes
Intel Core i5 3570K 650MHz 1150MHz Yes No No No No
Intel Core i5 3550 650MHz 1150MHz Yes Yes Yes Yes Yes
Intel Core i5 3450 650MHz 1100MHz Yes Yes Yes Yes Yes

The successful K-series SKUs are front and center in the Ivy lineup. As you'll remember from Sandy Bridge, anything with a K suffix ships fully unlocked. Ivy Bridge K-series SKUs support multipliers of up to 63x, an increase from the 57x maximum on Sandy Bridge. The higher ceiling won't matter for most users unless you're doing some exotic cooling, however.

If you don't have a K in your product name then your part is either partially or fully locked. Although this doesn't apply to any of the CPUs launching today, Ivy Bridge chips without support for turbo are fully locked and cannot be overclocked.

If your chip does support turbo boost, then you can overclock by increasing the turbo ratios by as much as 4 bins above their standard settings. For example, the Core i5 3550 has a max turbo frequency of 3.7GHz with a single core active. Add another four bins (4 x 100MHz) and you get a maximum overclock of 4.1GHz with one core active. The other turbo ratios can also be increased by up to four bins.
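The turbo-bin arithmetic above is simple enough to sketch. In the snippet below, the 100MHz base clock and the four-bin limit come straight from the text; treat it as an illustration of the rule rather than a tuning tool:

```python
# Sketch of the partial-unlock overclocking rule described above.
# BCLK and the 4-bin headroom are from the text; the function simply
# applies "stock turbo ratio + up to 4 bins" at 100MHz per bin.
BCLK_MHZ = 100      # Ivy Bridge base clock
EXTRA_BINS = 4      # maximum increase allowed for non-K turbo ratios

def max_overclock_mhz(stock_turbo_ratio: int) -> int:
    """Highest frequency after adding the allowed extra bins."""
    return (stock_turbo_ratio + EXTRA_BINS) * BCLK_MHZ

# Core i5 3550: 37x (3.7GHz) max turbo with one core active
one_core_oc = max_overclock_mhz(37)
print(one_core_oc)  # 4100, i.e. the 4.1GHz single-core ceiling from the text
```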

Sandy Bridge vs. Ivy Bridge Pricing
Sandy Bridge Price Price Ivy Bridge
Core i7 2700K $332 $313 Core i7 3770K
Core i7 2600 $294 $278 Core i7 3770
Core i5 2550K $225 $212 Core i5 3570K
Core i5 2500 $205 $194 Core i5 3550
Core i5 2400 $184 $174 Core i5 3450

The 3770K is the new king of the hill and it comes in $19 cheaper than the hill's previous resident: the Core i7 2700K. The non-K version saves you $16 compared to Sandy Bridge. The deltas continue down the line, ranging from $10 to $19.
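As a sanity check, those deltas can be recomputed directly from the pricing table above (launch list prices in USD):

```python
# Sandy Bridge -> Ivy Bridge launch price deltas, taken from the
# comparison table above. Pure arithmetic on figures in the article.
pairs = {
    "2700K -> 3770K": (332, 313),
    "2600  -> 3770":  (294, 278),
    "2550K -> 3570K": (225, 212),
    "2500  -> 3550":  (205, 194),
    "2400  -> 3450":  (184, 174),
}
deltas = {k: snb - ivb for k, (snb, ivb) in pairs.items()}
print(deltas)  # every Ivy part undercuts its Sandy Bridge predecessor
print(min(deltas.values()), max(deltas.values()))  # the $10-$19 range
```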

Unlike the Sandy Bridge launch, Intel is offering its high-end GPU on more than just K-series desktop parts right away. It is also differentiating K from non-K by adding another 100MHz to the base clock of K-series parts. While the Core i7 2600K and Core i7 2600 both ran at 3.4GHz, the 3770 runs at 3.4GHz compared to the 3770K's 3.5GHz. It's a small difference, but one that Intel hopes will help justify the added cost of the K.

Classic feature segmentation is alive and well with Ivy Bridge. In the quad-core lineup, only Core i7s get Hyper Threading—Core i5s do not. When the dual-core Core i3s show up in the coming months they will once again do so without support for turbo boost. Features like VT-d and Intel TXT are once again reserved for regular, non-K-series parts alone.

 



Die Size and Transistor Count

At IDF last year we got word of Ivy Bridge's transistor count (1.4 billion); today we also learn its die size: 160mm2. That's 75% the size of a quad-core Sandy Bridge, but with 20% more transistors.

This marks the first time since 2006 that Intel is offering a high-end desktop CPU with this small of a die. I'm excluding the 6-core parts from the discussion since that line isn't really aimed at the same market anymore. The comparison is even more striking when you consider that the Ivy Bridge die includes an integrated GPU alongside four of the highest performance x86 cores Intel has ever shipped. Remove the GPU and Ivy Bridge is even smaller than Conroe. A hypothetical GPU-less Ivy Bridge would measure roughly 113mm2 on Intel's 22nm process, making it smaller than any high-end Intel CPU since the days of the Pentium III.
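A quick back-of-the-envelope check of those claims, using only the die sizes and transistor counts quoted in this article:

```python
# Verifying the "75% the size, 20% more transistors" claim from the
# figures in the text, plus the implied transistor-density gain of
# the 32nm -> 22nm transition. Arithmetic only; no assumed data.
ivb_area_mm2, ivb_transistors = 160, 1.4e9   # quad-core Ivy Bridge
snb_area_mm2, snb_transistors = 216, 1.16e9  # quad-core Sandy Bridge

area_ratio = ivb_area_mm2 / snb_area_mm2                    # ~0.74 -> "75% the size"
transistor_growth = ivb_transistors / snb_transistors - 1   # ~0.21 -> "20% more"
density_gain = (ivb_transistors / ivb_area_mm2) / (snb_transistors / snb_area_mm2)

print(round(area_ratio, 2), round(transistor_growth, 2), round(density_gain, 2))
```

The density gain works out to roughly 1.6x, short of the ideal ~2.1x area scaling of a full node shrink, which is typical once I/O, analog blocks, and design-rule overhead are accounted for.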

CPU Specification Comparison
CPU Manufacturing Process Cores Transistor Count Die Size
AMD Bulldozer 8C 32nm 8 1.2B 315mm2
Intel Ivy Bridge 4C 22nm 4 1.4B 160mm2
Intel Sandy Bridge E (6C) 32nm 6 2.27B 435mm2
Intel Sandy Bridge E (4C) 32nm 4 1.27B 294mm2
Intel Sandy Bridge 4C 32nm 4 1.16B 216mm2
Intel Lynnfield 4C 45nm 4 774M 296mm2
Intel Sandy Bridge 2C (GT1) 32nm 2 504M 131mm2
Intel Sandy Bridge 2C (GT2) 32nm 2 624M 149mm2

Ivy Bridge is tiny—but what does this mean? For starters, it means the obvious—Intel has little competition in the desktop space. I'm always hard on AMD in my meetings with them because of this reason alone. A less than competitive AMD means we get a less aggressive Intel.

More importantly however, a tiny Ivy means that Intel could have given us a much bigger GPU without breaking the bank. I hinted at this possibility in our Ivy Bridge architecture article. Unfortunately at the time only Apple was interested in a hypothetical Ivy Bridge GT3 and rumor has it that Otellini wasn't willing to make a part that only one OEM would buy in large quantities. We will eventually get the GPU that Apple wanted, but it'll be next year, with Haswell GT3. And the GPU that Apple really really wanted? That'll be GT4, with Broadwell in 2014.

All of this being said however, we must keep in mind that Ivy Bridge is both faster than Sandy Bridge and no more expensive. If we look at the supply and pricing constraints that accompany TSMC's 28nm process, the fact that Intel is able to ramp up 22nm and ship the first products without any price increase is something we shouldn't take for granted.



Overclocking and 22nm

In the old days, whenever Intel transitioned to a new manufacturing process the move was accompanied by increased overclocking headroom, thanks to the reduction in power consumption and increase in switching speed afforded by the new transistors. To be honest, it's surprising that ride even lasted this long.

Intel's 22nm process (P1270) is the most ambitious yet. The non-planar "3D" transistors promise to bring a tremendous increase in power efficiency by increasing the surface area of the transistor's inversion layer. It's the vehicle that will bring Intel into new form factors in mobile, but we're around a year away from Haswell's introduction. Rather than 22nm being a delivery platform for Ivy Bridge, it feels like Ivy Bridge is being used to deliver 22nm.

The process is still young and likely biased a bit towards the lower leakage characteristics of lower voltage/lower wattage CPUs, such as those that would be used in Ultrabooks. These two factors combined with some architectural decisions focused on increasing power efficiency result in what many of you may have heard by now: Ivy Bridge won't typically overclock as high as Sandy Bridge on air.

The frequency delta isn't huge. You'll still be able to hit 4.4—4.6GHz without resorting to exotic cooling, but for most, success in the 4.8—5.0GHz range will require at least water cooling. Ivy Bridge is also far more sensitive to voltage than Sandy Bridge. Heat dissipation climbs steeply as a function of voltage, so you'll want to stay below 1.3V in your overclocking attempts.
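That voltage sensitivity is what the standard first-order CMOS dynamic power model (P proportional to f x V squared) predicts. The sketch below uses a hypothetical 1.05V stock voltage purely for illustration; actual stock voltages vary from chip to chip, and leakage is ignored entirely:

```python
# First-order CMOS dynamic power model: P ~ f * V^2. A back-of-the-
# envelope sketch of why voltage matters so much when overclocking.
# The 1.05V stock voltage is an illustrative assumption, not an
# Intel specification; leakage current is ignored.
def relative_power(f_ghz: float, v: float, f0_ghz: float, v0: float) -> float:
    """Dynamic power relative to a (f0, v0) baseline."""
    return (f_ghz / f0_ghz) * (v / v0) ** 2

# 3.5GHz @ 1.05V -> 4.5GHz @ 1.30V
increase = relative_power(4.5, 1.30, 3.5, 1.05)
print(round(increase, 2))  # ~1.97x: a ~29% frequency bump nearly doubles power
```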

Dr. Ian Cutress, our own Senior Motherboard Editor, put Ivy Bridge through a pretty exhaustive investigation if you want more details on exactly how the chip behaves when overclocking and how best to overclock it.

For the past few years I've been focused on power efficient overclocking. I'm looking for the best gains I can get without significant increases in core voltage. With my 3770K I was able to reliably hit 4.5GHz with only a 140mV increase in core voltage:

The end result is a 15—28% overclock, accompanied by a 32% increase in power consumption. The relationship between overclocked speed and power consumption hasn't really changed since Sandy Bridge, at least based on this datapoint.

Ivy Bridge Overclocking
Intel Core i7 3770K Stock 4.6GHz Overclock % Increase
Load Power Consumption 146.4W 204W 39.3%
x264—2nd Pass 41.8 fps 49.5 fps 18.4%
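The percentage column above follows directly from the raw measurements; a quick check, which also derives the performance-per-watt change the table doesn't list:

```python
# Reproducing the percentage column of the overclocking table above
# from the raw measurements (stock vs. 4.6GHz Core i7 3770K).
stock_power_w, oc_power_w = 146.4, 204.0
stock_fps, oc_fps = 41.8, 49.5

power_increase = (oc_power_w / stock_power_w - 1) * 100   # load power, %
perf_increase = (oc_fps / stock_fps - 1) * 100            # x264 2nd pass, %
# Perf/W drops when power grows faster than performance:
perf_per_watt_change = (oc_fps / oc_power_w) / (stock_fps / stock_power_w) - 1

print(round(power_increase, 1),              # 39.3
      round(perf_increase, 1),               # 18.4
      round(perf_per_watt_change * 100, 1))  # -15.0: efficiency falls ~15%
```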

As always, your mileage may vary depending on the particular characteristics of your chip. Ivy Bridge can be overclocked, but at least initially it's not going to be as good an overclocker as Sandy Bridge. Over time I expect this to improve somewhat as Intel's 22nm process matures, but by how much remains an open question. It's unclear just how much of these limits are by design vs. a simple matter of process maturity.



The 7 Series Chipset & USB 3.0

The platform story around Ivy Bridge is far better than it was when Sandy Bridge launched. There are a ton of chipsets, but the delineation makes sense this time. All chipsets support Ivy's processor graphics, however only Z75/Z77 support CPU overclocking. The good news for current 6-series/Sandy Bridge owners: with a BIOS update, your boards can support Ivy Bridge as well, making for a better upgrade path down the road.


Intel Z77 PCH

Intel Z68 PCH

 

The new 7-series platform features PCIe 3.0 support, but only when used with an Ivy Bridge CPU, and only on the lanes that branch off of the CPU itself—the PCH lanes are still PCIe 2.0. Ivy's processor graphics, when combined with a 7-series chipset, also enables support for three independent displays (up from 2 with Sandy Bridge/6-series). Other than those two items, the only remaining feature is USB 3.0 support. Intel's 7-series PCH finally has native support for up to 4 USB 3.0 ports.

Performance of Intel's USB 3.0 controller is very good as you'll see in our upcoming Ivy Bridge motherboard roundup.

Intel's 7-series chipset does support Thunderbolt when paired with an external Cactus Ridge Thunderbolt controller, however the Thunderbolt for PCs launch has been pushed back to late May so we'll have to wait a bit before diving into that.

While most enthusiasts will focus on Z77, you can give up SSD caching, some flexibility on the PCIe side (and Thunderbolt support) and go for Intel's Z75 chipset. The chipset itself isn't much cheaper, but boards built around it will likely target lower price points and be lighter on features.



The Test

It turns out that our initial preview numbers were quite good. The shipping 3770K performs identically to what we tested last month. To keep the review length manageable we're presenting a subset of our results here. For all benchmark results and even more comparisons be sure to use our performance comparison tool: Bench.

Motherboard: ASUS P8Z68-V Pro (Intel Z68)
ASUS Crosshair V Formula (AMD 990FX)
Intel DX79SI (Intel X79)
Intel DZ77GA-70K (Intel Z77)
Hard Disk: Intel X25-M SSD (80GB)
Crucial RealSSD C300
OCZ Agility 3 (240GB)
Memory: 4 x 4GB G.Skill Ripjaws X DDR3-1600 9-9-9-20
Video Card: ATI Radeon HD 5870 (Windows 7)
AMD Processor Graphics
Intel Processor Graphics
Video Drivers: AMD Catalyst 12.3
Desktop Resolution: 1920 x 1200
OS: Windows 7 x64

General Performance

SYSMark 2007 & 2012

Although not the best indication of overall system performance, the SYSMark suites do give us a good look at the lighter workloads we don't usually test. SYSMark 2007 is a better indication of low thread count performance, although 2012 isn't tremendously more thread heavy either.

As the SYSMark suites aren't particularly thread heavy, there's little advantage to the 6-core Sandy Bridge E CPUs. The 3770K however manages to slot in above all of the other Sandy Bridge parts, coming in 5—20% faster than the 2600K. The biggest advantages show up in either the lightly threaded tests or the FP heavy benchmarks. Given what we know about Ivy's enhancements, this is exactly what we'd expect.

SYSMark 2012—Overall

SYSMark 2012—Office Productivity

SYSMark 2012—Media Creation

SYSMark 2012—Web Development

SYSMark 2012—Data/Financial Analysis

SYSMark 2012—3D Modeling

SYSMark 2012—System Management

SYSMark 2007—Overall

SYSMark 2007—Productivity

SYSMark 2007—E-Learning

SYSMark 2007—Video Creation

SYSMark 2007—3D

Content Creation Performance

Adobe Photoshop CS4

To measure performance under Photoshop CS4 we turn to the Retouch Artists’ Speed Test. The test does basic photo editing; there are a couple of color space conversions, many layer creations, color curve adjustment, image and canvas size adjustment, unsharp mask, and finally a gaussian blur performed on the entire image.

The whole process is timed and thanks to the use of Intel's X25-M SSD as our test bed hard drive, performance is far more predictable than back when we used to test on mechanical disks.

Time is reported in seconds and the lower numbers mean better performance. The test is multithreaded and can hit all four cores in a quad-core machine.

Adobe Photoshop CS4—Retouch Artists Speed Test

Our Photoshop test is well threaded but it doesn't peg all cores constantly. Instead you get burstier behavior. With the core count advantage out of the way, SNB-E steps aside and allows the 3770K to step up as the fastest CPU we've tested here. The performance advantage over the 2600K is around 9%.

3dsmax 9

Today's desktop processors are more than fast enough to do professional level 3D rendering at home. To look at performance under 3dsmax we ran the SPECapc 3dsmax 8 benchmark (only the CPU rendering tests) under 3dsmax 9 SP1. The results reported are the rendering composite scores.

3dsmax r9—SPECapc 3dsmax 8 CPU Test

In another FP heavy workload we see a pretty reasonable gain for Ivy Bridge: 8.5% over a 2600K. This isn't enough to make you want to abandon your Sandy Bridge, but it's a good step forward for a tick.

Cinebench 11.5

Created by the Cinema 4D folks we have Cinebench, a popular 3D rendering benchmark that gives us both single and multi-threaded 3D rendering results.

Cinebench 11.5—Single Threaded

The single threaded Cinebench test shows a 9% performance advantage for the 3770K over the 2600K. The gap increases slightly to 11% as we look at the multithreaded results:

Cinebench 11.5—Multi-Threaded

If you're running a workload that can really stress multiple cores, the 6-core Sandy Bridge E parts will remain unstoppable but in the quad-core world, Ivy Bridge leads the pack.

Video Transcoding Performance

x264 HD 3.03 Benchmark

Graysky's x264 HD test uses x264 to encode a 4Mbps 720p MPEG-2 source. The focus here is on quality rather than speed, thus the benchmark uses a 2-pass encode and reports the average frame rate in each pass.

x264 HD Benchmark—1st pass—v3.03

x264 HD Benchmark—2nd pass—v3.03

In the second pass of our x264 test we see a nearly 14% increase over the 2600K. Once again, there's no replacement for more cores in these types of workloads, but delivering better performance at a lower TDP than last year's quad-core is great for more thermally conscious desktops.

Software Development Performance

Compile Chromium Test

You guys asked for it and finally I have something I feel is a good software build test. Using Visual Studio 2008 I'm compiling Chromium. It's a pretty huge project that takes over forty minutes to compile from the command line on a Core i3 2100. But the results are repeatable and the compile process will stress all 12 threads at 100% for almost the entire time on a 980X so it works for me.

Build Chromium Project—Visual Studio 2008

Ivy Bridge shows more traditional gains in our VS2008 benchmark—performance moves forward here by a few percent, but nothing significant. We're seeing a somewhat compressed dynamic range in this particular compiler workload; it's quite possible that other bottlenecks are beginning to creep in as microarchitectures get even faster.

Compression & Encryption Performance

7-Zip Benchmark

By working with a small dataset, the 7-zip benchmark gives us an indication of multithreaded integer performance without being IO limited:

7-zip Benchmark

Although real world compression/decompression tests can be heavily influenced by disk IO, the CPU does play a significant role. Here we're showing a 15% increase in performance over the 2600K. In the real world you'd see something much smaller as workloads aren't always so well threaded. The results here do have implications for other heavily compute bound integer workloads however.

TrueCrypt Benchmark

TrueCrypt is a very popular encryption package that offers full AES-NI support. The application also features a built-in encryption benchmark that we can use to measure CPU performance:

AES-128 Performance—TrueCrypt 7.1 Benchmark

Our TrueCrypt test scales fairly well with clock speed. I suspect what we're seeing here is due in part to Ivy's ability to maintain higher multi-core turbo frequencies despite having max turbo frequencies similar to Sandy Bridge's.



Discrete GPU Gaming Performance

Gaming performance with a discrete GPU does improve in line with the rest of what we've seen thus far from Ivy Bridge. It's definitely a step ahead of Sandy Bridge, but not enough to warrant an upgrade in most cases. If you haven't already made the jump to Sandy Bridge however, the upgrade will do you well.

Dragon Age Origins

DAO has been a staple of our CPU gaming benchmarks for some time now. The third/first person RPG is well threaded and is influenced both by CPU and GPU performance. Our benchmark is a FRAPS runthrough of our character through a castle.

Dragon Age Origins—1680 x 1050—Max Settings (no AA/Vsync)

Dawn of War II

Dawn of War II is an RTS title that ships with a built in performance test. I ran at Ultra quality settings at 1680 x 1050:

Dawn of War II—1680 x 1050—Ultra Settings

World of Warcraft

Our WoW test is run at High quality settings on a lightly populated server in an area where no other players are present to produce repeatable results. We ran at 1680 x 1050.

World of Warcraft

Starcraft 2

We have two Starcraft II benchmarks: a GPU and a CPU test. The GPU test is mostly a navigate-around-the-map test, as scrolling and panning around tends to be the most GPU bound in the game. Our CPU test involves a massive battle of 6 armies in the center of the map, stressing the CPU more than the GPU. At these low quality settings however, both benchmarks are influenced by CPU and GPU. We'll get to the GPU test shortly, but our CPU test results are below. The benchmark runs at 1024 x 768 at Medium Quality settings with all CPU influenced features set to Ultra.

Starcraft 2

Metro 2033

We're using the Metro 2033 benchmark that ships with the game. We run the benchmark at 1024 x 768 for a more CPU bound test as well as 1920 x 1200 to show what happens in a more GPU bound scenario.

Metro 2033 Frontline Benchmark—1024 x 768—DX11 High Quality

Metro 2033 Frontline Benchmark—1920 x 1200—DX11 High Quality

DiRT 3

We ran two DiRT 3 benchmarks to get an idea for CPU bound and GPU bound performance. First the CPU bound settings:

DiRT 3—Aspen Benchmark—1024 x 768 Low Quality

DiRT 3—Aspen Benchmark—1920 x 1200 High Quality

Crysis: Warhead

Crysis Warhead Assault Benchmark—1680 x 1050 Mainstream DX10 64-bit

Civilization V

Civ V's lateGameView benchmark presents us with two separate scores: average frame rate for the entire test as well as a no-render score that only looks at CPU performance. We're looking at the no-render score here to isolate CPU performance alone:

Civilization V—1680 x 1050—DX11 High Quality



Intel HD 4000 Explored

What makes Ivy Bridge different from your average tick in Intel's cycle is the improvement to the on-die GPU. Intel's HD 4000, the new high-end offering, is now equipped with 16 EUs, up from 12 in Sandy Bridge (soon to be 40 in Haswell). Intel's HD 2500 is the replacement for the old HD 2000 and retains the same number of EUs (6). Efficiency is up at the EU level as Ivy Bridge is able to dual-issue more instruction combinations than its predecessor. There are a number of other enhancements that we've already detailed in our architecture piece, but a quick summary is below:

— DirectX 11 Support
— More execution units (16 vs 12) for GT2 graphics (Intel HD 4000)
— 2x MADs per clock
— EU can now co-issue more operations
— GPU specific on-die L3 cache
— Faster QuickSync performance
— Lower power due to 22nm
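For a rough sense of how much of the GPU gain comes from efficiency rather than raw width, compare the EU-count times max-clock product of the two GPUs. The HD 4000 figures come from the clock table earlier in this article; the HD 3000's 1350MHz max turbo (the desktop Core i7 2600K figure) is an assumption carried over from the Sandy Bridge generation:

```python
# Raw throughput proxy: EUs x max GPU clock, HD 4000 vs. HD 3000.
# The HD 3000's 1350MHz max turbo (desktop 2600K) is an assumed
# figure from the Sandy Bridge generation, not from this article.
hd4000_eus, hd4000_mhz = 16, 1150
hd3000_eus, hd3000_mhz = 12, 1350

raw_ratio = (hd4000_eus * hd4000_mhz) / (hd3000_eus * hd3000_mhz)
print(round(raw_ratio, 2))  # ~1.14: only ~14% more raw EU-clock product
# Measured gains well above that therefore mostly reflect the per-EU
# improvements listed above (2x MADs, co-issue, GPU L3), not raw width.
```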

Although OpenCL is supported by the HD 4000, Intel has not yet delivered an OpenCL ICD so we cannot test functionality and performance. Update: OpenCL is supported in the launch driver; we are looking into why OpenCL-Z reported otherwise. DirectX 11 is alive and well however:

Image quality is actually quite good, although there are a few areas where Intel falls behind the competition. I don't believe Ivy Bridge's GPU performance is high enough yet for us to start nitpicking image quality, but Intel isn't too far away from being there.


Current state of AF in IVB

Anisotropic filtering quality is much improved compared to Sandy Bridge. There's currently a low precision issue under DirectX 9 that results in the imperfect image above; it has already been fixed in a later driver revision that's awaiting validation. The issue doesn't exist under DX10/DX11.


IVB with improved DX9 AF driver

Game compatibility is also quite good: not perfect, but still on the right path for Intel. It's also worth noting that Intel has been extremely responsive in finding and eliminating bugs whenever we pointed them out in its drivers. One problem Intel does currently struggle with is game developers specifically targeting Intel graphics and treating the GPU as a second-class citizen. I suspect this issue will eventually resolve itself as Intel works to improve the perception of its graphics in the market, but until then Intel will have to suffer a bit.



Our first graphics test is Crysis: Warhead, which in spite of its relatively high system requirements is the oldest game in our test suite. Crysis was the first game to really make use of DX10, and set a very high bar for modern games that still hasn't been completely cleared. And while its age means it's not heavily played these days, it's a great reference for how far GPU performance has come since 2008. For an iGPU to even run Crysis at a playable framerate is a significant accomplishment, and even more so if it can do so at better than performance (low) quality settings.

In our highest quality benchmark (Mainstream) settings, Intel's HD Graphics 4000 is 55% faster than the 3000 series graphics in Sandy Bridge. While still tangibly slower than AMD's Llano (Radeon HD 6550D), Ivy Bridge is a significant step forward. Drop the quality down a bit and playability improves significantly:

Over 50 fps at 1680 x 1050 from Intel integrated graphics is pretty impressive. Here we're showing a 41% increase in performance compared to Sandy Bridge, with Llano maintaining a 33% advantage over Ivy. I would've liked to have seen an outright doubling of performance, but this is a big enough step forward to be noticeable on systems with no discrete GPU.



Our next graphics test is Metro 2033, another graphically challenging game. Since IVB is the first Intel GPU to feature DX11 capabilities, this is the first time an Intel GPU has been able to run Metro in DX11 mode. Like Crysis this is a game that is traditionally unplayable on Intel iGPUs, even in DX9 mode.

Here the Llano gap shrinks to about 13—25% depending on the resolution/quality settings. AMD still has the clear advantage in GPU performance, but Intel does step closer. The performance advantage over Sandy Bridge ranges from 20—40%. With these sorts of numbers it's clear why Intel views Ivy Bridge as a tick+. Generational performance improvements on the CPU side generally fall in the 20—40% range; as you've just seen, Ivy Bridge offers a 7—15% increase in CPU performance over Sandy Bridge, making it a bona fide tick from a CPU perspective. The 20—40% increase on the graphics side is what blurs the line between a conventional tick and what we have with Ivy Bridge.



DiRT 3 is our next DX11 game. Developer Codemasters Southam added DX11 functionality to their EGO 2.0 engine back in 2009 with DiRT 2, and while it doesn't make extensive use of DX11 it does use it to good effect in order to apply tessellation to certain environmental models along with utilizing a better ambient occlusion lighting model. As a result DX11 functionality is very cheap from a performance standpoint, meaning it doesn't require a GPU that excels at DX11 feature performance.

DiRT 3 remains in line with the other tests we've run. Ivy Bridge (HD 4000) offers a 30—40% increase in performance over Sandy Bridge (HD 3000), while AMD's Llano (6550D) holds onto a 27% advantage over Ivy.



Total War: Shogun 2 is the latest installment of the long-running Total War series of turn based strategy games, and alongside Civilization V is notable for just how many units it can put on a screen at once. Adding to the load is the use of DX11 features such as tessellation and high definition ambient occlusion, which means it can give any GPU a run for its money.

At high quality with Shogun 2's basic DX11 functionality, Ivy Bridge can't quite reach playable framerates. At this point it's closer to the entry-level GT 520 and Radeon HD 5450 than it is AMD's Llano, trailing AMD's iGPU by over 50%.

Turning our settings down to medium, and thereby reverting to DX10 functionality, Ivy Bridge picks up some performance. 26fps still isn't good enough to reach the 30fps mark, but at the same time it represents a remarkable improvement for Intel. Ivy Bridge is 61% faster than Sandy Bridge here, which is far greater than the theoretical performance difference between the two GPUs. Nevertheless Ivy Bridge still trails Llano by 55%, keeping Ivy Bridge decidedly in the entry level class of GPUs.

What's interesting here is that this is the first test in our suite that is shader intensive even at lower settings. On paper Ivy Bridge has far more memory bandwidth than GPUs like the GT 520, but only roughly the same amount of shader performance, and the near-tie between the two GPUs is a stark reminder of that. Ivy Bridge greatly improves on Sandy Bridge, but it still has quite some distance to go to catch Llano in shader-bound scenarios.



Portal 2 continues to be the latest and greatest Source engine game to come out of Valve's offices. While Source continues to be a DX9 engine, and hence is designed to allow games to be playable on a wide range of hardware, Valve has continued to upgrade it over the years to improve its quality, and combined with their choice of style you’d have a hard time telling it’s over 7 years old at this point. From a rendering standpoint Portal 2 isn't particularly geometry heavy, but it does make plenty of use of shaders.

Since Portal 2 is another shader heavy game, this ends up being another struggle for Ivy Bridge. Compared to Sandy Bridge, performance has once again rocketed up, this time by 39% at 1366, once more indicating just how much the 33% increase in EUs and the improved turbo have done for Intel's GPU performance. At the same time, as in our previous shader-heavy games, it's just trailing the GT 520, while Llano enjoys a 100% lead. To that end, while we did end up using 4x MSAA at 1366, it's not clear that disabling it would significantly improve performance for Ivy Bridge here, since MSAA doesn't increase the shader workload.

It's worth noting however that this is the one game where we encountered something that may be a rendering error with Ivy Bridge. Based on our image quality screenshots Ivy Bridge renders a distinctly "busier" image than Llano or NVIDIA's GPUs. It's not clear whether this is causing an increased workload on Ivy Bridge, but it's worth considering.



Its popularity aside, Battlefield 3 may be the most interesting game in our benchmark suite for a single reason: it was the first AAA title to require DX10 or later, dropping DX9 support entirely. Consequently it makes no attempt to shy away from pushing the graphics envelope, pushing GPUs to their limits in the process. Even at low settings Battlefield 3 is a handful, and being able to run it on an iGPU would no doubt make quite a few traveling gamers happy.

So does Ivy Bridge reach its goal here? Kind of. 37.3fps is playable in single player, but it's been our experience that multiplayer framerates can bottom out at half our SP benchmark, which means Ivy Bridge would be bottoming out in the high teens.

All things considered however, this is another case of Intel greatly improving their performance. Compared to Sandy Bridge, Ivy Bridge is 48% faster, once again well ahead of the clockspeed and EU count improvements for HD 4000. Intel's improved shader performance plays a large part here, but BF3 provides a good mix of shading, texturing, and geometry stress at low settings, meaning Ivy Bridge is getting a full workout. As a result it's also once again closing the gap on Llano, trailing AMD's iGPU by only 25% here.



Our next game is Starcraft II, Blizzard’s 2010 RTS megahit. Starcraft II is a DX9 game that is designed to run on a wide range of hardware, and given the growth in GPU performance over the years it's often CPU limited before it's GPU limited on higher-end cards.

SC2 shows us a smaller advantage over Sandy Bridge, generally around 20%, and as a result Llano is able to maintain a much larger 40-50% advantage. Still, it's enough to get above 40fps even at 1680. The only thing that's unclear here is whether we're still shader limited at medium quality, which is what the results near the GT 520 would indicate, or simply texture limited.



Bethesda's epic sword & magic game The Elder Scrolls V: Skyrim is our RPG of choice for benchmarking. It's altogether a good CPU benchmark thanks to its complex scripting and AI, but it also can end up pushing a large number of fairly complex models and effects at once. This is a DX9 game so it isn't utilizing any of IVB's new DX11 functionality, but it can still be a demanding game.

Ivy Bridge does very well in Skyrim, not able to reach 60 fps but still above 30 at up to 1680 x 1050. The gains over Sandy Bridge are significant at nearly 50%. Meanwhile Ivy Bridge also gets surprisingly close to Llano, bringing the gap down to 10% at 1366 and 18% at 1680. Seeing as how Skyrim is not particularly shader heavy at these settings, Ivy Bridge looks to be largely shrugging off its biggest bottleneck here in favor of pure pixel pushing power.



Switching gears for the moment we have Minecraft, our OpenGL title. It's no secret that OpenGL usage on the PC has fallen by the wayside in recent years, and as far as major games go Minecraft is one of only a few recently released major titles using OpenGL. Minecraft is incredibly simple (it doesn't even utilize pixel shaders, let alone more advanced hardware), but this doesn't mean it's easy to render. Its use of massive amounts of blocks (and the overdraw that creates) means you need solid hardware and an efficient OpenGL implementation if you want to hit playable framerates with a far render distance. Consequently, as the most successful OpenGL game in quite some number of years (over 5.5 million copies sold), it's a good reminder for GPU manufacturers that OpenGL is not to be ignored.

Our test here is pretty simple: we're looking at a lush forest after the world finishes loading. In spite of the lack of any kind of shader workload, Ivy Bridge is still struggling here. On the one hand this is the single biggest gain over Sandy Bridge we've seen in any of our tests, with Ivy Bridge improving on its predecessor by an incredible 130%; on the other hand it's still only competitive with the entry-level discrete GPUs. Worse, for all of its gains, Ivy Bridge is still only achieving a mere 30% of the performance of Llano here.

Since this is largely a pixel pushing test, we'd expect Llano and Ivy Bridge to be closer than where they are. Given the gains versus Sandy Bridge Intel may still have some ROP bottlenecks that only come out in unusual workloads like Minecraft, but at the same time it's hard to imagine that OpenGL drivers aren't playing a role here. If that's the case, then Intel clearly has some work to do.



Our final game, Civilization V, gives us an interesting look at things that other RTSes cannot match, with a much weaker focus on shading in the game world, and a much greater focus on creating the geometry needed to bring such a world to life. In doing so it uses a slew of DirectX 11 technologies, including tessellation for said geometry, driver command lists for reducing CPU overhead, and compute shaders for on-the-fly texture decompression. There are other games that are more stressful overall, but this is likely the game that most stresses DX11 performance in particular.

Unfortunately this ends up being the worst showing for Ivy Bridge. It's ever so slightly trailing the Radeon HD 5450 here, never mind the faster dGPUs or Llano. As we'll see in our compute benchmarks DirectCompute performance plays a large part here, but even then it doesn't explain why CivV performance is quite this low. As a result Llano is once again well in the lead, with Ivy Bridge only reaching some 30% of Llano's performance.



While compute functionality could technically be shoehorned into DirectX 10 GPUs such as Sandy Bridge through DirectCompute 4.x, neither Intel nor AMD's DX10 GPUs were really meant for the task, and even NVIDIA's DX10 GPUs paled in comparison to what they've achieved with their DX11 generation GPUs. As a result Ivy Bridge is the first true compute capable GPU from Intel. This marks an interesting step in the evolution of Intel's GPUs, as originally projects such as Larrabee Prime were supposed to help Intel bring together CPU and GPU computing by creating an x86 based GPU. With Larrabee Prime canceled however, that task falls to the latest rendition of Intel's GPU architecture.

With Ivy Bridge Intel supports both DirectCompute 5, which is dictated by DX11, and the more general purpose, compute focused OpenCL 1.1. Intel has backed OpenCL development for some time and currently offers an OpenCL 1.1 runtime for its CPUs; however, an OpenCL runtime for Ivy Bridge will not be available at launch. As a result Ivy Bridge is limited to DirectCompute for the time being, which limits just what kind of compute performance testing we can do with Ivy Bridge.

Our first compute benchmark comes from Civilization V, which uses DirectCompute 5 to decompress textures on the fly. Civ V includes a sub-benchmark that exclusively tests the speed of their texture decompression algorithm by repeatedly decompressing the textures required for one of the game’s leader scenes. And while games that use GPU compute functionality for texture decompression are still rare, it's becoming increasingly common as it's a practical way to pack textures in the most suitable manner for shipping rather than being limited to DX texture compression.

As we alluded to in our look at Civilization V's performance in game mode, Ivy Bridge ends up being compute limited here. It's well ahead of the even more DirectCompute anemic Radeon HD 5450 here—in spite of the fact that it can't take a lead in game mode—but it's slightly trailing the GT 520, which has a similar amount of compute performance on paper. This largely confirms what we know from the specs for HD 4000: it can pack a punch in pushing pixels, but given a shader heavy scenario it's going to have a great deal of trouble keeping up with Llano and its much greater shader performance.

But with that said, Ivy Bridge is still reaching 55% of Llano's performance here, thanks to AMD's overall lackluster DirectCompute performance on their pre-7000 series GPUs. As a result Ivy Bridge versus Llano isn't nearly as lop-sided as the paper specs tell us; Ivy Bridge won't be able to keep up in most situations, but in DirectCompute it isn't necessarily a goner.

And to prove that point, we have our second compute test: the Fluid Simulation Sample in the DirectX 11 SDK. This program simulates the motion and interactions of a 16k particle fluid using a compute shader, with a choice of several different algorithms. In this case we're using an O(n^2) nearest neighbor method that is optimized by using shared memory to cache data.

Thanks in large part to its new dedicated L3 graphics cache, Ivy Bridge does exceptionally well here. The framerate of this test is entirely arbitrary, but what isn't is the performance relative to other GPUs; Ivy Bridge is well within the territory of budget-level dGPUs such as the GT 430 and Radeon HD 5570, and for the first time it's ahead of Llano, taking a lead just shy of 10%. The fluid simulation sample is a very special case (most compute shaders won't be nearly this heavily reliant on shared memory performance), but it's the perfect showcase for Ivy Bridge's ideal performance scenario. Ultimately this is just as much a story of AMD losing due to poor DirectCompute performance as it is Intel winning due to a speedy L3 cache, but it shows what is possible. The big question now is what OpenCL performance is going to be like, since AMD's OpenCL performance doesn't have the same kind of handicaps as their DirectCompute performance.
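The tiled pattern at the heart of the SDK sample can be sketched in plain Python. This is a hypothetical illustration of the technique, not the sample's actual HLSL; the function name, the density kernel, and all the numbers here are our own. The idea is that each "workgroup" loads a tile of particle positions into fast shared memory once, then every thread accumulates its O(n^2) interactions against that cached tile instead of re-reading global memory.

```python
import math

def pairwise_density(positions, tile_size=256, radius=0.1):
    """Naive O(n^2) SPH-style density sum, processed tile by tile.

    The tile slice stands in for a shared-memory load: on a GPU, one
    workgroup would cooperatively load `tile` once, then every thread
    reads it many times at shared-memory speed.
    """
    n = len(positions)
    density = [0.0] * n
    for start in range(0, n, tile_size):
        tile = positions[start:start + tile_size]   # "shared memory" load
        for i, (xi, yi) in enumerate(positions):
            for (xj, yj) in tile:
                r = math.hypot(xi - xj, yi - yj)
                if r < radius:
                    # Poly6-like kernel: contribution falls off with distance
                    density[i] += (radius * radius - r * r) ** 3
    return density
```

The tiling changes only the memory access pattern, not the arithmetic, so any tile size produces the same result; on hardware, the win comes from how often the inner loop can hit the fast cache, which is exactly where Ivy Bridge's L3 graphics cache pays off.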

Synthetic Performance

Moving on, we'll take a few moments to look at synthetic performance. Synthetic performance is a poor tool to rank GPUs—what really matters is the games—but by breaking down workloads into discrete tasks it can sometimes tell us things that we don't see in games.

Our first synthetic test is 3DMark Vantage’s pixel fill test. Typically this test is memory bandwidth bound as the nature of the test has the ROPs pushing as many pixels as possible with as little overhead as possible, which in turn shifts the bottleneck to memory bandwidth so long as there's enough ROP throughput in the first place.

It's interesting to note here that as DDR3 clockspeeds have crept up over time, IVB now has as much memory bandwidth as most entry-to-mainstream level video cards, where 128bit DDR3 is equally common. Or on a historical basis, at this point it's half as much bandwidth as powerhouse video cards of yesteryear such as the 256bit GDDR3 based GeForce 8800GT.

Altogether, with 29.6GB/sec of memory bandwidth available from our DDR3-1866 memory, Ivy Bridge ends up being able to push more pixels than Llano, more than the entry-level dGPUs, and even more than budget-level dGPUs such as the GT 440 and Radeon HD 5570, which have just as much dedicated memory bandwidth. Put in numbers, Ivy Bridge is pushing 42% more pixels than Sandy Bridge and 25% more pixels than the otherwise more powerful Llano. And since pixel fillrates are so memory bandwidth bound, Intel's L3 cache is almost certainly playing a role here again; however, it's not clear to what extent.
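For reference, the bandwidth figures above fall out of simple arithmetic. The sketch below is ours (the helper name is made up, and the 8800 GT's 1800MT/s effective GDDR3 data rate is our assumption for the comparison in the text):

```python
# Peak theoretical DRAM bandwidth: data rate x bytes per transfer x channels.
def peak_bandwidth_gbs(transfer_rate_mts, bus_width_bits=64, channels=2):
    """GB/s, defaulting to a dual-channel 64-bit DDR3 IMC like Ivy Bridge's."""
    return transfer_rate_mts * (bus_width_bits / 8) * channels / 1000

# Ivy Bridge with DDR3-1866, dual channel:
print(peak_bandwidth_gbs(1866))  # ~29.9 GB/s, matching the figure above

# A 256-bit GDDR3 GeForce 8800 GT class card, assuming 1800MT/s effective:
print(peak_bandwidth_gbs(1800, bus_width_bits=256, channels=1))  # ~57.6 GB/s
```

The second figure is why we describe Ivy Bridge as having roughly half the bandwidth of yesteryear's powerhouse cards, and the same math shows why a 128-bit DDR3 entry-level card at similar data rates has no bandwidth advantage over the iGPU at all.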

Moving on, our second synthetic test is 3DMark Vantage’s texture fill test, which provides a simple FP16 texture throughput test. FP16 textures are still fairly rare, but it's a good look at worst case scenario texturing performance.

After Ivy Bridge's strong pixel fillrate performance, its texture fillrate brings us back down to earth. At this point performance is once again much closer to the entry-level GPUs, and also well behind Llano. Here we see that Intel's texture performance scales almost exactly linearly with the increase in EUs from Sandy Bridge to Ivy Bridge, indicating that those texture units are being put to good use; but at the same time it means Ivy Bridge has a long way to go to catch Llano's texture performance, achieving only 47% of it here. The good news for Intel is that texture size (and thereby texel density) hasn't increased much over the past couple of years in most games; the bad news is that we're finally starting to see that change as dGPUs get more VRAM.

Our final synthetic test is the set of settings we use with Microsoft’s Detail Tessellation sample program out of the DX11 SDK. Since IVB is the first Intel iGPU with tessellation capabilities, it will be interesting to see how well IVB does here, as IVB is going to be the de facto baseline for DX11+ games in the future. Ideally we want to have enough tessellation performance here so that tessellation can be used on a global level, allowing developers to efficiently simulate their worlds with fewer polygons while still using many polygons on the final render.

The results here are actually pretty decent. Compared to what we've seen with shader and texture performance, where Ivy Bridge is largely tied at the hip with the GT 520, at lower tessellation factors Ivy Bridge manages to clearly overcome both the GT 520 and the Radeon HD 5450. Per unit of compute performance, Intel looks to have more tessellation performance than AMD or NVIDIA, which means Intel is setting a pretty good baseline for tessellation performance. Tessellation performance at high tessellation factors does dip however, with Ivy Bridge giving up much of its performance lead over the entry-level dGPUs, but still managing to stay ahead of both of its competitors.



Power Consumption

Intel isn't really exploiting 22nm for significantly higher default or max turbo frequencies. While it does seem like you'll hit turbo frequencies more often with Ivy, most of what 22nm offers will be realized as power savings.

The data in the charts below is from our original 3770K preview, however I've also provided a table comparing the 3770K to the 2700K using Intel's own Z77 motherboard which is a bit more power hungry than our typical testbed:

Power Consumption Comparison (Intel DZ77GA-70K)
CPU                     Idle      Load (x264 2nd pass)
Intel Core i7 3770K     80.1W     146.4W
Intel Core i7 2700K     79.4W     177.6W

As you can see, there are no savings at idle and a reasonably significant improvement under load.
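Working from the table above, the load delta is straightforward to put in relative terms (illustrative arithmetic only, using the platform-level wall numbers from our DZ77GA-70K testbed):

```python
# Platform power under x264 2nd pass load, from the table above (watts).
snb_load, ivb_load = 177.6, 146.4

delta = snb_load - ivb_load
print(f"{delta:.1f}W ({delta / snb_load:.1%}) lower platform power under load")
# ~31.2W, or roughly a 17-18% reduction at the wall
```

Note that these are whole-platform figures, so the percentage understates the CPU-only improvement: the idle numbers show the board and the rest of the system contribute a large fixed floor.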

The same is echoed on our earlier chip in a more power efficient platform:

Power Consumption—Idle

Power Consumption—Load (x264 HD 3.03 2nd Pass)

I was also curious to see what power consumption would look like compared to other low-end GPUs. For these next results I used the 3770K alone, without a discrete card and measured power consumption. I then added in discrete GPUs from our HD 4000 comparisons and looked at both idle and load power while playing Metro 2033:

GPU Power Consumption—Idle

Obviously at idle it's impossible to beat the HD 4000; the GPU is largely stopped/gated when idle, keeping power consumption to a minimum. Under load is where things get interesting:

GPU Power Consumption—Load (Metro 2033)

Ivy's GPU is much more power efficient than SNB's, however Intel still has a way to go before it starts to equal the power efficiency of modern discrete GPU architectures. Remember the HD 4000 is on Intel's 22nm process here while the GT 440 is built on TSMC's 40nm process.



Quick Sync Image Quality & Performance

Intel obviously focused on increasing GPU performance with Ivy Bridge, but a side effect of that increased GPU performance is more compute available for Quick Sync. As you may recall, Sandy Bridge's secret weapon was an on-die hardware video transcode engine (Quick Sync), designed to keep Intel's CPUs competitive when faced with the onslaught of GPU computing applications. At the time, video transcode seemed to be the most likely candidate for significant GPU acceleration so the move made sense. Plus it doesn't hurt that video transcoding is an extremely popular activity to do with one's PC these days.

The power of Quick Sync was how it leveraged fixed function decode (and some encode) hardware with the on-die GPU's EU array. The combination of the two resulted in some pretty incredible performance gains not only over traditional software based transcoding, but also over the fastest GPU based solutions as well.

Intel put to rest any concerns about image quality when Quick Sync launched, and thankfully the situation hasn't changed today with Ivy Bridge. In fact, you get a bit more flexibility than you had a year ago.

Intel's latest drivers now allow for a selectable tradeoff between image quality and performance when transcoding using Quick Sync. The option is exposed in Media Espresso and ultimately corresponds to an increase in average bitrate. To test image quality and performance, I took the last Harry Potter Blu-ray, stripped it of its DRM and used Media Espresso to make it playable on an iPad 2 (1024 x 768 preset).

In the case of our Harry Potter transcode, selecting the Better Quality option increased average bitrate from 3.86Mbps to 5.83Mbps. The resulting file size for the entire movie increased from 3.78GB to 5.71GB. Both options produced a good quality transcode; picking one over the other really depends on how much time (and space) you have, as well as the screen size of the device you'll be watching it on. For most phone/tablet use I'd say the faster performing option is ideal.
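Those file sizes line up with the quoted bitrates. A quick sanity check (our own arithmetic, assuming the film's roughly 130 minute runtime; the small gap versus the quoted sizes is the audio track and container overhead, which average video bitrate doesn't capture):

```python
# Approximate transcode size from average video bitrate and duration.
def file_size_gb(avg_bitrate_mbps, duration_min):
    """Decimal GB implied by an average bitrate over a given runtime."""
    bits = avg_bitrate_mbps * 1e6 * duration_min * 60
    return bits / 8 / 1e9

print(round(file_size_gb(3.86, 130), 2))  # ~3.76 (quoted: 3.78GB)
print(round(file_size_gb(5.83, 130), 2))  # ~5.68 (quoted: 5.71GB)
```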

[Image quality comparison screenshots: Intel Core i7 3770K (x86), Intel Quick Sync (SNB), Intel Quick Sync (IVB), Intel Quick Sync Better Quality (IVB), NVIDIA GeForce GTX 680, AMD Radeon HD 7970]

While AMD has yet to enable VCE in any publicly available software, NVIDIA's hardware encoder built into Kepler is alive and well. Cyberlink Media Espresso 6.5 will take advantage of the 680's NVENC engine which is why we standardized on it here for these tests. Once again, Quick Sync's transcoding abilities are limited to applications like Media Espresso or ArcSoft's Media Converter—there's still no support in open source applications like Handbrake.

Compared to the output from Quick Sync, NVENC appears to produce a softer image. However, if you compare the NVENC output to what we got from the software/x86 path you'll see that the two are quite similar. It seems that Quick Sync, at least in this case, is sharpening/adding more noise beyond what you'd normally expect. I'm not sure I'd call it bad, but I need to do some more testing before I know whether or not it's a good thing.

The good news is that NVENC doesn't pose any of the horrible image quality issues that NVIDIA's CUDA transcoding path gave us last year. For getting videos onto your phone, tablet or game console I'd say the output of either of these options, NVENC or Quick Sync, is good enough.

Unfortunately AMD's solution hasn't improved. The washed out images we saw last year, particularly in dark scenes prior to a significant change in brightness, are back again. While NVENC delivers acceptable image quality, AMD does not.

The performance story is unfortunately not much different from last year either. The chart below is average frame rate over the entire encode process.

CyberLink Media Espresso 6.5—Harry Potter 8 Transcode

Just as we saw with Sandy Bridge, Quick Sync continues to be an incredible way to get video content onto devices other than your PC. One thing I wanted to make sure of was that Media Espresso wasn't somehow holding x86 performance back to make the GPU accelerated transcodes seem much better than they actually are. I asked our resident video expert, Ganesh, to clone Media Espresso's settings in a Handbrake profile. We took the profile and performed the same transcode; the result is listed above as the Core i7 3770K (Handbrake). You will notice that the Handbrake x86/x264 path is definitely faster than Cyberlink's software path, by over 50%. However, even using Handbrake as a reference, Quick Sync transcodes over 2x faster.

In the tests below I took the same source and varied the output quality with some custom profiles. I targeted 1080p, 720p and 480p at decreasing average bitrates to illustrate the relationship between compression demands and performance:

CyberLink Media Espresso 6.5—Harry Potter 8 Transcode (1080p)

CyberLink Media Espresso 6.5—Harry Potter 8 Transcode (720p)

CyberLink Media Espresso 6.5—Harry Potter 8 Transcode (480p)

Unfortunately NVENC performance does not scale like Quick Sync. When asked to preserve a good amount of data, both NVENC and Quick Sync perform similarly in our 1080p/13Mbps test. However ask for more aggressive compression ratios for lower resolution/bitrate targets, and the Intel solution quickly distances itself from NVIDIA. One theory is that NVIDIA's entropy encode block could be the limiting factor here.

Ivy Bridge's improved Quick Sync appears to be aided both by an improved decoder and the HD 4000's faster/larger EU array. The graph below helps illustrate:

If we rely on software decoding but use Intel's hardware encode engine, Ivy Bridge is 18% faster than Sandy Bridge in this test (1080p 13Mbps output from BD source, same as above). If we turn on both hardware decode and encode, the advantage grows to 29%. In other words, the majority of the performance advantage comes from the improved encode path, with the faster decode engine on Ivy Bridge contributing the rest.
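One way to split the 18%/29% figures above between the two engines (illustrative arithmetic only; attributing gains this way is approximate since the decode and encode stages overlap in a real pipeline):

```python
# IVB vs SNB Quick Sync speedups from the 1080p 13Mbps test above.
encode_only_gain = 0.18   # software decode + hardware encode
full_hw_gain = 0.29       # hardware decode + hardware encode

decode_share = (full_hw_gain - encode_only_gain) / full_hw_gain
print(f"decode engine: {decode_share:.0%} of the total advantage")
# roughly 38%, with the improved encode path accounting for the rest
```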



Final Words

Reviewing a tick in Intel's cadence is always difficult. After Conroe if we didn't see a 40% jump in a generation we were disappointed. And honestly, after Sandy Bridge I felt it would be quite similar. Luckily for Intel, Ivy Bridge is quite possibly the strongest tick it has ever put forth.

Ivy Bridge is unique in that it gives us the mild CPU bump but combines it with a very significant increase in GPU performance. The latter may not matter to many desktop users, but in the mobile space it's quite significant. Ultimately that's what gives Ivy Bridge its appeal. If you're already on Intel's latest and greatest, you won't appreciate Ivy as an upgrade, but you may appreciate it for the role it plays in the industry: as the first 22nm CPU from Intel and as a bridge to Haswell. If you missed last year's upgrade, it'll be Ivy's performance and lower TDP that win you over instead.

Intel has done its best to make this tick more interesting than most. Ivy Bridge is being used as the introduction vehicle to Intel's 22nm process. In turn you get a cooler running CPU than Sandy Bridge (on the order of 20-30W under load), but you do give up a couple hundred MHz on the overclocking side. While I had no issues getting my 3770K up to 4.6GHz on the stock cooler, Sandy Bridge will likely be the better overclocker for most.

With Ivy Bridge and its 7-series chipset we finally get USB 3.0 support. In another month or so we'll also get Thunderbolt support (although you'll have to hold off on buying a 7-series motherboard until then if you want it). This platform is turning out to be everything Sandy Bridge should have been.

Ivy's GPU performance is, once again, a step in the right direction. While Sandy Bridge could play modern games at the absolute lowest quality settings, at low resolutions, Ivy lets us play at medium quality settings in most games. You're still going to be limited to 1366 x 768 in many situations, but things will look significantly better.

The sub-$80 GPU market continues to be in danger as we're finally able to get not-horrible graphics with nearly every Intel CPU sold. Intel still has a long way to go however. The GPUs we're comparing to are lackluster at best. While it's admirable that Intel has pulled itself out of the graphics rut that it was stuck in for the past decade, more progress is needed. Ivy's die size alone tells us that Intel could have given us more this generation, and I'm disappointed that we didn't get it. At some point Intel is going to have to be more aggressive with spending silicon real estate if it really wants to be taken seriously as a GPU company.

Similarly disappointing for everyone who isn't Intel: it's been more than a year since Sandy Bridge's launch and none of the GPU vendors have been able to put forth a better solution than Quick Sync. If you're constantly transcoding movies to get them onto your smartphone or tablet, you need Ivy Bridge. In less than 7 minutes, and with no impact on CPU usage, I was able to transcode a complete 130 minute 1080p video to an iPad friendly format; that's over 15x real time.

While it's not enough to tempt existing Sandy Bridge owners, if you missed the upgrade last year then Ivy Bridge is solid ground to walk on. It's still the best performing client x86 architecture on the planet and a little to a lot better than its predecessor depending on how much you use the on-die GPU.

Additional Reading

Intel's Ivy Bridge Architecture Exposed
Mobile Ivy Bridge Review
Undervolting & Overclocking on Ivy Bridge

Intel's Ivy Bridge: An HTPC Perspective
