Note: This preview was not sanctioned or supported by Intel in any way.

I still remember hearing about Intel's tick-tock cadence and not having much faith that the company could pull it off. Granted, Intel hasn't given us a new chip every 12 months on the dot, but more or less there's something new every year: either a new microarchitecture on an established process node (tock) or a derivative architecture on a new process node (tick). The table below summarizes what we've seen since Intel adopted the strategy:

Intel's Tick-Tock Cadence
Microarchitecture    Process Node    Tick or Tock    Release Year
Conroe/Merom         65nm            Tock            2006
Penryn               45nm            Tick            2007
Nehalem              45nm            Tock            2008
Westmere             32nm            Tick            2010
Sandy Bridge         32nm            Tock            2011
Ivy Bridge           22nm            Tick            2012
Haswell              22nm            Tock            2013

Last year was a big one. Sandy Bridge brought a Conroe-like increase in performance across the board thanks to a massive re-plumbing of Intel's out-of-order execution engine and other significant changes to the microarchitecture. If you remember Conroe (the first Core 2 architecture), what followed it was Penryn: a relatively mild upgrade that delivered a modest performance bump while dropping power consumption at the same time.

Ivy Bridge, the tick that follows Sandy Bridge, would typically be just that: a mild upgrade that inched performance ahead while dropping power consumption. Intel's microprocessor ticks are usually very conservative on the architecture side, which limits the performance improvement. Being less risky on the architecture allows Intel to focus more on working out the kinks in its next process node, in turn delivering some amount of tangible power reduction.

Where Ivy Bridge shakes things up is on the graphics side. For years Intel has been able to ship substandard graphics in its chipsets based on the principle that only gamers needed real GPUs and Windows ran just fine on integrated graphics. Over the past decade that philosophy required adjustment. First it was HD video decode acceleration, then GPU accelerated user interfaces and, more recently, GPU computing applications. Intel eventually committed to taking GPU performance (and driver quality) seriously, setting out on a path to significantly improve its GPUs.

As Ivy is a tick in Intel's cadence, we shouldn't see much of a performance improvement. On the CPU side that's mostly true: you can expect a 5 - 15% increase in performance for the same price as a Sandy Bridge CPU today. A continued desire to be aggressive on the GPU front, however, puts Intel in a tough spot. Moving to a new manufacturing process, especially one as dramatically different as Intel's 22nm 3D tri-gate node, isn't easy. Any additional complexity outside of the new process simply puts the schedule at risk. That being said, Intel's GPUs continue to lag significantly behind AMD's and, more importantly, they still aren't fast enough by customer standards.

Apple has been pushing Intel for faster graphics for years, having no issues with including discrete GPUs across its lineup or even prioritizing GPU over CPU upgrades. Intel's exclusivity agreement with Apple expired around Nehalem, meaning every design win can easily be lost if the fit isn't right.

With Haswell, Intel will finally deliver what Apple and other customers have been asking for on the GPU front. Until then, Intel has to do something to move performance forward; a simple tick wouldn't cut it.

Intel calls Ivy Bridge a tick+. While CPU performance steps forward modestly, GPU performance sees a more significant improvement - in the 20 - 50% range. The magnitude of improvement on the GPU side is more consistent with what you'd expect from a tock. The combination of a CPU tick and a GPU tock is how Intel arrives at the tick+ naming. I'm personally curious to see how this unfolds going forward. Will GPUs and CPUs go through alternating tocks, or will Intel try to synchronize them? Will innovation on one side slow down as the other accelerates? Does tick-tock remain on a two-year cadence now that there are two fairly different architectures that need updating? These are questions we likely won't see answered until after Haswell. For now, let's focus on Ivy Bridge.

Ivy Bridge Architecture Recap
195 Comments

  • fic2 - Wednesday, March 07, 2012 - link

    I totally agree. Intel is again going to hobble the lower end with the HD 2500 graphics so that people who don't need the i7 CPU have to buy a discrete video card. I really wish review sites would hammer Intel for this and pressure them to include the better integrated graphics. It's not like the HD 4000 is so good that people will buy an i7 just for the graphics.
  • Jamahl - Thursday, March 08, 2012 - link

    HD4000 takes up more die space which means it costs them more. That's all intel cares about, they don't give a shit about what people need at the lower end.

    They were forced to start using HD3000 graphics in all their lower end chips because of Llano. The 2105 basically replaced the 2100 at the same money so they would be less embarrassed by Llano. That's what competition does.
  • Death666Angel - Wednesday, March 07, 2012 - link

    I like this tick. The CPU performance goes up by as much as I expected and the iGPU side goes up significantly.

    If I had the spare change to throw around, I'd upgrade from my 3.8GHz i7 860. But as it is now, an upgraded CPU wouldn't do much for me in terms of gaming performance and I rarely do CPU intensive tasks these days. The chipset and native USB 3.0 are nice, but I'll wait for Haswell next year and get a good GPU or two instead.
  • tiro_uspsss - Wednesday, March 07, 2012 - link

    I'm a little confused :/

    the 3770K consistently beat the 3820 (by a very small margin)

    *wait*

    oh.. I found out why.. the specs of the 3820 as listed in the 'line up' are incorrect - the 3820 'only' turbos to 3.8 not 3.9.. is this why the 3770K did a little better?

    aside from the small extra turbo that the 3770K has, the 3820 has more L3, more memory channels & a higher core clock (that's if the core clock listed for the 3770K is correct)

    soooo.. the extra turbo.. is that why the 3770K is slightly better all-round?
  • Death666Angel - Wednesday, March 07, 2012 - link

    You know that they are different CPU generations, right? One is SNB-E on a 32nm process node and the other is IVB on a 22nm node. The review said that IVB has a 5-15% higher IPC.
  • tiro_uspsss - Wednesday, March 07, 2012 - link

    *slaps own forehead* DUH! That's right! I forgot! :D I knew I was forgetting something! :P :D Thanks! Makes sense now! :)
  • BSMonitor - Wednesday, March 07, 2012 - link

    The number scheme is misleading.

    3820 and up are SNB-E.

    3770K is Ivy Bridge.

    An IVB core will perform better than a SNB core clocked at the same speed.

    New architecture wins over cache, memory channels, clock speed.
  • Shadowmaster625 - Wednesday, March 07, 2012 - link

    "Generational performance improvements on the CPU side generally fall in the 20 - 40% range. As you've just seen, Ivy Bridge offers a 7 - 15% increase in CPU performance over Sandy Bridge - making it a bonafide tick from a CPU perspective. The 20 - 40% increase on the graphics side is what blurs the line between a conventional tick and what we have with Ivy Bridge."

    "Being able to play brand new titles at reasonable frame rates at realistic resolutions is a bar that Intel has safely met."
  • hansmuff - Wednesday, March 07, 2012 - link

    The review is good, I really like that you added the compilation benchmark for chromium -- good job!

    I'm a little disappointed in the lack of overclocking information. What is the point of reviewing the K edition of this chip without even doing a simple overclock with a comparison to 2600K in terms of power draw and heat?
  • Silenus - Wednesday, March 07, 2012 - link

    That is because this is NOT a review...it's just a preview. I'm sure they will do some overclocking testing in the full review later. Those results would be more meaningful then anyway, as this is still early hardware/drivers.
