Note: This preview was not sanctioned or supported by Intel in any way.

I still remember hearing about Intel's tick-tock cadence and not having much faith that the company could pull it off. Granted, Intel hasn't given us a new chip every 12 months on the dot, but more or less there's something new every year. Every year we either get a new architecture on an established process node (tock) or a derivative architecture on a new process node (tick). The table below summarizes what we've seen since Intel adopted the strategy:

Intel's Tick-Tock Cadence

Microarchitecture   Process Node   Tick or Tock   Release Year
Conroe/Merom        65nm           Tock           2006
Penryn              45nm           Tick           2007
Nehalem             45nm           Tock           2008
Westmere            32nm           Tick           2010
Sandy Bridge        32nm           Tock           2011
Ivy Bridge          22nm           Tick           2012
Haswell             22nm           Tock           2013

Last year was a big one. Sandy Bridge brought a Conroe-like increase in performance across the board thanks to a massive re-plumbing of Intel's out-of-order execution engine and other significant changes to the microarchitecture. If you remember Conroe (the first Core 2 architecture), what followed it was a relatively mild upgrade called Penryn that gave you a little bit in the way of performance and dropped power consumption at the same time.

Ivy Bridge, the tick that follows Sandy Bridge, would typically be just that: a mild upgrade that inched performance ahead while dropping power consumption. Intel's microprocessor ticks are usually very conservative on the architecture side, which limits the performance improvement. Being less risky on the architecture allows Intel to focus more on working out the kinks in its next process node, in turn delivering some amount of tangible power reduction.

Where Ivy Bridge shakes things up is on the graphics side. For years Intel has been able to ship substandard graphics in its chipsets based on the principle that only gamers needed real GPUs and Windows ran just fine on integrated graphics. Over the past decade that philosophy has required adjustment. First it was HD video decode acceleration, then GPU-accelerated user interfaces and, more recently, GPU computing applications. Intel eventually committed to taking GPU performance (and driver quality) seriously, setting out on a path to significantly improve its GPUs.

As Ivy is a tick in Intel's cadence, we shouldn't see much of a performance improvement. On the CPU side that's mostly true: you can expect a 5 - 15% increase in performance for the same price as a Sandy Bridge CPU today. A continued desire to be aggressive on the GPU front, however, puts Intel in a tough spot. Moving to a new manufacturing process, especially one as dramatically different as Intel's 22nm 3D tri-gate node, isn't easy. Any additional complexity outside of the new process simply puts the schedule at risk. That being said, Intel's GPUs continue to lag significantly behind AMD's and, more importantly, they still aren't fast enough by customer standards.

Apple has been pushing Intel for faster graphics for years, having no issues with including discrete GPUs across its lineup or even prioritizing GPU over CPU upgrades. Intel's exclusivity agreement with Apple expired around Nehalem, meaning every design win can easily be lost if the fit isn't right.

With Haswell, Intel will finally deliver what Apple and other customers have been asking for on the GPU front. Until then, Intel has to do something to move performance forward. A simple tick wouldn't cut it.

Intel calls Ivy Bridge a tick+. While CPU performance steps forward, GPU performance sees a more significant improvement - in the 20 - 50% range. The magnitude of improvement on the GPU side is more consistent with what you'd expect from a tock. The combination of a CPU tick and a GPU tock is how Intel arrives at the tick+ naming. I'm personally curious to see how this unfolds going forward. Will GPUs and CPUs go through alternating tocks, or will Intel try to synchronize them? Do we see innovation on one side slow down as the other speeds up? Does tick-tock remain on a two-year cadence now that there are two fairly different architectures that need updating? These are questions we probably won't see answered until after Haswell. For now, let's focus on Ivy Bridge.

195 Comments

  • mrSmigs - Wednesday, March 7, 2012 - link

    The Ivy Bridge 3770K is a direct replacement for the Sandy Bridge 2700K, which is only a small upgrade from the 2600K yet is still missing from the benchmarks, preventing a direct architectural comparison.

    Intel badly needs PowerVR in its graphics core. Will they finally use a multicore Rogue Series 6 core in the next generation (Haswell?) for some decent performance in their IGP? Imagination developed easily the fastest graphics core in the ARM SoC tablets/phones inside the iPad 2/iPhone 4S; now it's time to save Intel (one of ImgTec's biggest shareholders along with Apple). Intel needs to ditch this old, weak IGP core architecture and get with the times.

    AMD's Llano, even with its terribly weak CPU core, still clearly outpaces this new, improved Intel HD 4000 core in these non-GPU-limited tests. If AMD had a faster CPU it would be even further ahead in terms of graphics capabilities, which appear CPU-limited in many cases too (see the discrete GPU tables to get an idea of Intel's CPU advantages).

    Where are the in-game checks on Intel's notoriously poor image quality, much like when Radeons are compared to GeForces, to ensure these are even producing an acceptable image for the performance they give and not cutting corners?

    Happy with the lower power and the CPU performance gains of Ivy Bridge. Disappointed in the weak, old graphics once again, which fail to match Llano even with a far stronger CPU dragging them along...
  • hasseb64 - Wednesday, March 7, 2012 - link

    How about OpenGL support?
  • numberoneoppa - Wednesday, March 7, 2012 - link

    Perhaps because not everybody who needs a lot of CPU power also needs to game or do other GPU-heavy activities.

    Come on, mate. Think.
  • Conficio - Wednesday, March 7, 2012 - link

    "You guys asked for it and finally I have something I feel is a good software build test."

    I just wanted to say thank you for this. Maybe we can add a Maven-based Java test as well, which should give some idea of javac performance (or a "build all" of a large Eclipse code base).
  • Conficio - Wednesday, March 7, 2012 - link

    Uhh, this comment renders funny on Chrome.
  • piesquared - Wednesday, March 7, 2012 - link

    Is this some kind of joke? It may be comical, but it sure ain't funny. Intel themselves had slides circulating around showing at least a 2x performance increase over the last generation. Now they show up with not even half that and Anand falls to his knees in praise. Seems a little fishy to me; where have I seen this before... Right, the primary elections in the US! Same shit: the elite give the mainstream media their marching orders, and the mainstream media sets out to brainwash the mass population with that message. And you continue to lead the charge on downplaying image quality and functionality, ever since you became Intel's mouthpiece. Where are the days of proper image quality comparisons and feature benefits to consumers? That's all dropped off the radar because Intel has abysmal and atrocious graphics capability and know-how. They're the WORST in the industry, and yet here we have good ol' Anand patting his buddy on the bum, ensuring that Intel will never have a need to actually compete. They can just hand off money to the pieces of shit in the world and have them manipulate the perception.
  • Hector2 - Wednesday, March 7, 2012 - link

    Sounds like you have some issues. Maybe you should see a therapist.
  • awg0681 - Wednesday, March 7, 2012 - link

    Maybe I misread the article or read a different one. It came across to me that Anand was mainly comparing the HD 4000 to the HD 3000, in which case there is generally a notable increase in performance. It's not 2x the HD 3000, but doing a quick search trying to find these slides you mention showing such an increase came up with nothing. I only found one on Tom's, which was a leaked slide comparing the HD 2000 to the HD 4000. If you could link some of those, that would be great. Also, in just about every case where the HD 4000 was (almost inevitably) beaten by AMD in graphics performance, it was pointed out.
  • geddarkstorm - Wednesday, March 7, 2012 - link

    I wonder how much of the improvement in the performance-to-power ratio is due to the tri-gate technology. In some ways I was expecting a bigger jump, around 20%, but since they also dropped the power by 30W, that says a lot. Looking at this from the perf/power perspective makes it a bigger deal than it sounds from a 5-15% CPU gain.

    Still... for some reason I feel a little disappointed. I thought tri-gate would change things even more in conjunction with the 22nm process.

    So I can't wait to see what Haswell will do.
  • Exodite - Wednesday, March 7, 2012 - link

    Does it matter though?

    After all, that argument cuts both ways.

    Any iGPU today is good enough for 2D use, browsing, and mainstream gaming - which means stuff like The Sims 3 rather than Crysis.

    The same is true for CPU power.

    Heck, most users would be perfectly happy using their smartphone as a desktop.
