Note: This preview was not sanctioned or supported by Intel in any way.

I still remember hearing about Intel's tick-tock cadence and not having much faith that the company could pull it off. Granted, Intel hasn't given us a new chip every 12 months on the dot, but more or less there's something new every year: either a new microarchitecture on an established process node (tock) or a derivative architecture on a new process node (tick). The table below summarizes what we've seen since Intel adopted the strategy:

Intel's Tick-Tock Cadence

Microarchitecture   Process Node   Tick or Tock   Release Year
Conroe/Merom        65nm           Tock           2006
Penryn              45nm           Tick           2007
Nehalem             45nm           Tock           2008
Westmere            32nm           Tick           2010
Sandy Bridge        32nm           Tock           2011
Ivy Bridge          22nm           Tick           2012
Haswell             22nm           Tock           2013

Last year was a big one. Sandy Bridge brought a Conroe-like increase in performance across the board thanks to a massive re-plumbing of Intel's out-of-order execution engine and other significant changes to the microarchitecture. If you remember Conroe (the first Core 2 architecture), what followed it was a relatively mild upgrade called Penryn that delivered a small gain in performance while dropping power consumption at the same time.

Ivy Bridge, the tick that follows Sandy Bridge, would typically be just that: a mild upgrade that inched performance ahead while dropping power consumption. Intel's microprocessor ticks are usually very conservative on the architecture side, which limits the performance improvement. Being less risky on the architecture allows Intel to focus more on working out the kinks in its next process node, in turn delivering some amount of tangible power reduction.

Where Ivy Bridge shakes things up is on the graphics side. For years Intel has been able to ship substandard graphics in its chipsets based on the principle that only gamers needed real GPUs and Windows ran just fine on integrated graphics. Over the past decade that philosophy required adjustment. First it was HD video decode acceleration, then GPU accelerated user interfaces and, more recently, GPU computing applications. Intel eventually committed to taking GPU performance (and driver quality) seriously, setting out on a path to significantly improve its GPUs.

As Ivy is a tick in Intel's cadence, we shouldn't see much of a performance improvement. On the CPU side that's mostly true: you can expect a 5 - 15% increase in performance for the same price as a Sandy Bridge CPU today. A continued desire to be aggressive on the GPU front, however, puts Intel in a tough spot. Moving to a new manufacturing process, especially one as dramatically different as Intel's 22nm 3D tri-gate node, isn't easy. Any additional complexity outside of the new process simply puts the schedule at risk. That being said, Intel's GPUs continue to lag significantly behind AMD's and, more importantly, they still aren't fast enough by customer standards.

Apple has been pushing Intel for faster graphics for years, having no issues with including discrete GPUs across its lineup or even prioritizing GPU over CPU upgrades. Intel's exclusivity agreement with Apple expired around Nehalem, meaning every design win can easily be lost if the fit isn't right.

With Haswell, Intel will finally deliver what Apple and other customers have been asking for on the GPU front. Until then Intel had to do something to move performance forward. A simple tick wouldn't cut it.

Intel calls Ivy Bridge a tick+. While CPU performance takes a modest step forward, GPU performance sees a more significant improvement - in the 20 - 50% range. The magnitude of improvement on the GPU side is more consistent with what you'd expect from a tock. The combination of a CPU tick and a GPU tock is how Intel arrives at the tick+ naming. I'm personally curious to see how this unfolds going forward. Will GPUs and CPUs go through alternating tocks, or will Intel try to synchronize them? Will innovation on one side slow down as the other ramps up? Does tick-tock remain on a two-year cadence now that there are two fairly different architectures that need updating? These are questions I don't expect we'll see answered until after Haswell. For now, let's focus on Ivy Bridge.

Ivy Bridge Architecture Recap
Comments (195)

  • BoFox - Wednesday, March 7, 2012 - link

    There are three different versions of HD 5570 (DDR2, DDR3, and GDDR5 - with the GDDR5 having FIVE times as much bandwidth as the DDR2 version).

    There are also two different versions of HD 5450 (DDR2 and DDR3).

    It would be appreciated if you could let us know which versions were used in the benchmarks in this article. Thanks
  • BoFox - Wednesday, March 7, 2012 - link

    Just let us know which GPU was used for the discrete GPU tests! LOL..
  • KZ0 - Wednesday, March 7, 2012 - link

    "ATI Radeon HD 5870 (Windows 7)", page 4
    :)
  • WiZARD7 - Wednesday, March 7, 2012 - link

There should be a comparison at the same clock speed - Nehalem - Sandy Bridge - Ivy Bridge (@4 GHz).
  • Breach1337 - Wednesday, March 7, 2012 - link

    On page, shouldn't:

    " Doing so won't give you access to some of the newer 7-series chipset features like PCIe Gen 3 (some 6-series boards are claiming 3.0 support), native USB 3.0 (many 6-series boards have 3rd party USB 3.0 controllers) and Intel's Rapid Start Technology."

    say

    " Doing so will give you access to some of the newer 7-series chipset features like PCIe Gen 3 (some 6-series boards are claiming 3.0 support), native USB 3.0 (many 6-series boards have 3rd party USB 3.0 controllers) and Intel's Rapid Start Technology."
  • iwod - Wednesday, March 7, 2012 - link

There are many things not mentioned.

    Intel are strong in Software everywhere except the Gfx drivers department. No wonder why others call Anand a pro Intel site, i dont want to believe it, until all the article continue to label Intel are hard at work on Gfx drivers when they are clearly not. They are better then what they are used to be, but still far from good.

    Graphics quality on Intel IGPs is not even close to what AMD offers.

    Even if Haswell doubles the performance of Ivy they will still be one generation behind AMD.

    I continue to wonder why they use their own GPU on Desktop / Laptop and not on Mobile SoC. They could have used PowerVR on Desktop as well, developing drivers for one hardware will simplify things and hopefully have bigger incentive to increase software R&D.
  • meloz - Wednesday, March 7, 2012 - link

    >>No wonder why others call Anand a pro Intel site
    What should he do, fake the benchmark results to make AMD look better than they are? Anand can only report his findings, he does this truthfully. Some people do not want to accept reality and prefer to shoot the messenger. Direct your frustrations towards AMD, not websites which report results of benchmarks.

    From past benchmarks you can see the results at Anandtech are that different from other websites, AMD is getting destroyed on CPU perfomance and performance / watt metric.

    >>I continue to wonder why they use their own GPU on Desktop / Laptop and not on Mobile SoC. They could have used PowerVR on Desktop as well,

    FYI, they are dumping PowerVR in near future as well. Already covered on many websites, google it. PowerVR was a temporary fix, or rather an attempt at a fix which was more of a hassle and didn't work in the marketplace anyway.

    They are now committed to improving their own iGPU and drivers. This will take time for sure; Intel marches to its own beat.

    The simple fact is that with the much weaker Sandy Bridge iGPU they outsold AMD 15 to 1, so even though the Ivy Bridge iGPU has not surpassed AMD yet, Intel should continue to do really well.

    >>i dont want to believe it, until all the article continue to label Intel are hard at work on Gfx drivers when they are clearly not.

    You can believe whatever you want to believe, this is not about beliefs but facts. As a user of Sandy Bridge and linux I know better than most just how much Intel drivers suck. In fact, their linux iGPU drivers suck much worse than Windows version (hard to imagine, but true) and weren't truly ready until Mesa 8.0, more than a year after release of the hardware.

    But I also know they are working on things like SNA which in early test already offers ~20% performance boost.

    No word on when it will be consumer ready, but Intel are working and steadily improving on drivers side as well. Perhaps not at the pace you want. You do not have to accept reality if it is so difficult for you, don't blame websites for reporting reality, however.

    I am almost grateful Intel is not 'good enough' on GPU side as yet. It keeps AMD alive another year. Hopefully.
  • meloz - Wednesday, March 7, 2012 - link

    >>From past benchmarks you can see the results at Anandtech are that different from other websites

    Should read: From past benchmarks you can see the results at Anandtech are NOT that different from other websites.

    Sigh, allow us to edit posts, if only for 10 minutes or so after making the initial post.
  • ET - Wednesday, March 7, 2012 - link

    PowerVR has lower performance and fewer features, so it would not be a good PC solution. I'm also sure that Intel would rather have its own solution; it's just that it can't yet compete with PowerVR in the low-power arena. I imagine that if Intel succeeds in the mobile space it will try to create its own low-power 3D core.

    As for graphics drivers, I'm sure Intel is hard at work on them, but probably has fewer people than AMD on that. As far as I can see it's no longer the case that reviews with Intel graphics keep talking about what didn't run correctly, which means that things are getting better.
  • Belard - Wednesday, March 7, 2012 - link

    Anyone notice in the Compile Chromium Test in which CORE count actually matters...

    AMD's "8 core" fx8150 doesn't come close to the 3770K, much less the 2500K (4 core/4 thread) CPU.

    But give it to AMD for Llano for easily out-performing intel with built-in graphics, handy for notebooks. AMD should have put GPUs into the fx-line.

    The odd-thing about intel's HD-Graphics is that the lower-end really needs to have the HD4000 more than the higher end.
