Note: This preview was not sanctioned or supported by Intel in any way.

I still remember hearing about Intel's tick-tock cadence and not having much faith that the company could pull it off. Granted, Intel hasn't given us a new chip every 12 months on the dot, but there has been something new more or less every year: either a new architecture on an established process node (tock), or a derivative architecture on a new process node (tick). The table below summarizes what we've seen since Intel adopted the strategy:

Intel's Tick-Tock Cadence

Microarchitecture   Process Node   Tick or Tock   Release Year
Conroe/Merom        65nm           Tock           2006
Penryn              45nm           Tick           2007
Nehalem             45nm           Tock           2008
Westmere            32nm           Tick           2010
Sandy Bridge        32nm           Tock           2011
Ivy Bridge          22nm           Tick           2012
Haswell             22nm           Tock           2013

Last year was a big one. Sandy Bridge brought a Conroe-like increase in performance across the board thanks to a massive re-plumbing of Intel's out-of-order execution engine and other significant changes to the microarchitecture. If you remember Conroe (the first Core 2 architecture), what followed it was a relatively mild upgrade called Penryn that gave you a little bit in the way of performance and dropped power consumption at the same time.

Ivy Bridge, the tick that follows Sandy Bridge, would typically be just that: a mild upgrade that inched performance ahead while dropping power consumption. Intel's microprocessor ticks are usually very conservative on the architecture side, which limits the performance improvement. Being less risky on the architecture allows Intel to focus more on working out the kinks in its next process node, in turn delivering some amount of tangible power reduction.

Where Ivy Bridge shakes things up is on the graphics side. For years Intel has been able to ship substandard graphics in its chipsets based on the principle that only gamers needed real GPUs and Windows ran just fine on integrated graphics. Over the past decade that philosophy required adjustment. First it was HD video decode acceleration, then GPU accelerated user interfaces and, more recently, GPU computing applications. Intel eventually committed to taking GPU performance (and driver quality) seriously, setting out on a path to significantly improve its GPUs.

As Ivy is a tick in Intel's cadence, we shouldn't see much of a performance improvement. On the CPU side that's mostly true: you can expect a 5 - 15% increase in performance for the same price as a Sandy Bridge CPU today. A continued desire to be aggressive on the GPU front, however, puts Intel in a tough spot. Moving to a new manufacturing process, especially one as dramatically different as Intel's 22nm 3D tri-gate node, isn't easy; any additional complexity outside of the new process simply puts the schedule at risk. That being said, Intel's GPUs continue to lag significantly behind AMD's and, more importantly, they still aren't fast enough by customer standards.

Apple has been pushing Intel for faster graphics for years, having no issues with including discrete GPUs across its lineup or even prioritizing GPU over CPU upgrades. Intel's exclusivity agreement with Apple expired around Nehalem, meaning every design win can easily be lost if the fit isn't right.

With Haswell, Intel will finally deliver what Apple and other customers have been asking for on the GPU front. Until then Intel had to do something to move performance forward. A simple tick wouldn't cut it.

Intel calls Ivy Bridge a tick+. While CPU performance steps forward modestly, GPU performance sees a more significant improvement - in the 20 - 50% range. The magnitude of improvement on the GPU side is more consistent with what you'd expect from a tock. The combination of a CPU tick and a GPU tock is how Intel arrives at the tick+ naming. I'm personally curious to see how this unfolds going forward. Will GPUs and CPUs go through alternating tocks, or will Intel try to synchronize them? Does innovation on one side slow down as the other speeds up? Does tick-tock remain on a two-year cadence now that there are two fairly different architectures that need updating? These are questions we likely won't see answered until after Haswell. For now, let's focus on Ivy Bridge.

Ivy Bridge Architecture Recap


Comments

  • krumme - Wednesday, March 07, 2012 - link

    Well, the dilemma for Anand is apparent. If he stops writing these previews, that is nice for Intel; someone else will get the opportunity and all the info. He can write two bad previews and the info and early chips just stop coming. Intel and Anand both have a business to run, and there is a reason Intel gives Anand the chips (indirectly).

    He has a "deal" with Intel, the same way we have a deal with Anand when we read the review. We get the info - bent/biased - and then we can think for ourselves. I think it's a fair deal :) - we get a lot of good info from this preview. The uninformed get fleeced, but it's always like that. Someone has to pay for the show.
  • chemist1 - Wednesday, March 07, 2012 - link

    The MacBook Pro, for instance, has a discrete GPU, yet can switch to the chip-based GPU to save power when on battery. So having a better chip-based GPU makes sense in this context.
  • Sabresiberian - Wednesday, March 07, 2012 - link

    I'd like to see the discrete graphics card industry make the kind of progress, relatively speaking, Intel has made in the last 2 years.

    Ivy Bridge is a ways from competing with a high-end discrete solution, but if the relative rates of progress don't change, Intel will catch up soon.
  • sixtyfivedays - Wednesday, March 07, 2012 - link

    I use the iGPU on my build for my second monitor and it is quite nice.

    I can watch HD videos on it and it doesn't take away from my dedicated GPU at all.
  • mlkmade - Thursday, March 08, 2012 - link

    Is that even possible? Special hack or software?

    When you install a discrete graphics card, the integrated gpu gets disabled.

    Would love to know how you accomplished this. Is it a desktop or a laptop?
  • mathew7 - Thursday, March 08, 2012 - link

    "When you install a discrete graphics card, the integrated gpu gets disabled."

    It was exclusive in northbridge-IGP units (Core 2 Duo/Quad and older). With Core i-series, it's disabled by default but can be enabled through the BIOS (provided you don't have a P5x/6x chipset).
  • AnnonymousCoward - Wednesday, March 07, 2012 - link

    1. How much faster is Ivy Bridge at single thread versus my Conroe@3GHz?
    2. How much faster is my GTX560Ti than HD4000?
  • dr/owned - Thursday, March 08, 2012 - link

    1) Your 65nm CPU would get the shit blown out of it by IB at the same clock speed in single-threaded applications. Assuming a 15% improvement from each of the tick-tocks since Conroe, a 1.8GHz IB would probably be about the same as a 3GHz Conroe.
    2) Discrete graphics vs. integrated graphics. Intel isn't trying to compete here, so it's a stupid comparison.
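The compounding estimate in the comment above can be sketched as a quick script. This is a back-of-envelope only: the flat 15% per-generation figure and treating every tick/tock equally are the comment's assumptions, not measurements.

```python
# Equivalent Ivy Bridge clock that would match a 3 GHz Conroe on
# single-threaded work, assuming a flat per-clock (IPC) gain per
# generation. The 15% figure is the comment's assumption.

CONROE_CLOCK_GHZ = 3.0
IPC_GAIN_PER_GEN = 0.15  # assumed flat per-generation gain

# Conroe -> Penryn -> Nehalem -> Westmere -> Sandy Bridge -> Ivy Bridge
steps = ["Penryn", "Nehalem", "Westmere", "Sandy Bridge", "Ivy Bridge"]

relative_ipc = (1 + IPC_GAIN_PER_GEN) ** len(steps)   # ~2.01x per clock
equivalent_clock = CONROE_CLOCK_GHZ / relative_ipc    # ~1.49 GHz

print(f"Cumulative per-clock gain: {relative_ipc:.2f}x")
print(f"IB clock matching a 3 GHz Conroe: {equivalent_clock:.2f} GHz")
```

Compounded over all five steps in the cadence table at the top of the article, a flat 15% works out to roughly a 2x per-clock gain, i.e. closer to 1.5GHz than the quoted 1.8GHz, so the comment is implicitly assuming fewer or smaller steps.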
  • AnnonymousCoward - Friday, March 09, 2012 - link

    1. Your "get the shit blown out" is worthless. All I'm looking for is a number, and your effective answer is +67%.

    2. It's not a stupid comparison, because:
    a) I'm interested.
    b) HD4000 is designed for games.
    c) They benchmarked with modern games.
    d) Games are designed around people's performance.
  • AnnonymousCoward - Friday, March 09, 2012 - link

    1. Another website shows the i7 3770K scored 2643 on the Fritz Chess Benchmark with 1 processor. My machine does 2093. That's only 26% different.

    2. I very roughly estimate the GTX560Ti might be 5-6x faster than the HD4000.

    It'd be useful to see a real comparison of these though.
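The arithmetic behind the two percentages in this exchange can be checked directly; the Fritz Chess scores and clock figures below are the commenters' own numbers, not independent measurements.

```python
# Quick check of the percentage claims in this thread.
fritz_ivy = 2643     # reported i7-3770K single-thread Fritz Chess score
fritz_conroe = 2093  # the commenter's Conroe @ 3 GHz

measured_gain = fritz_ivy / fritz_conroe - 1  # measured single-thread gain
implied_gain = 3.0 / 1.8 - 1                  # gain implied by the 1.8 GHz estimate

print(f"Measured single-thread gain: {measured_gain:.0%}")
print(f"Gain implied by the 1.8 GHz estimate: {implied_gain:.0%}")
```

The measured gap (about 26%) is well short of the roughly 67% the earlier 1.8GHz estimate implies, which is the point the commenter is making.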
