The times, they are changing. In fact, the times have already changed; we're just waiting for the results. I remember the first time Intel brought me into a hotel room to show me its answer to AMD's Athlon 64 FX: the Pentium 4 Extreme Edition. Back then the desktop race was hotly contested, and pushing the absolute limits of what could be done without a concern for power consumption was the name of the game. In the mid-2000s, the notebook started to take over. Much like the famous day when Apple announced that it was no longer a manufacturer of personal computers but a manufacturer of mobile devices, Intel came to a similar realization years earlier, when these slides were first shown at IDF in 2005:


[Slides: IDF 2005]

Intel is preparing for another major transition, similar to the one it brought to light seven years ago. The move will once again be motivated by mobility, and the transition will be away from the giant CPUs that currently power high-end desktops and notebooks to lower power, more integrated SoCs that find their way into tablets and smartphones. Intel won't leave the high-end market behind, but the trend towards mobility didn't stop with notebooks.

The fact of the matter is that everything Charlie has said on the big H is correct. Haswell will be a significant step forward in graphics performance over Ivy Bridge, and will likely mark Intel's biggest generational leap in GPU technology of all time. Internally Haswell is viewed as the solution to the ARM problem. Build a chip that can deliver extremely low idle power, to the point where you can't tell the difference between an ARM tablet running in standby and one with a Haswell inside. At the same time, give it the performance we've come to expect from Intel. Haswell is the future, and this is the bridge to take us there.

In our Ivy Bridge preview I applauded Intel for executing so well over the past few years. By limiting major architectural shifts to known process technologies, and by keeping designs simple when transitioning to a new manufacturing process, Intel condensed what was once a five-year design cycle for microprocessor architectures into two. Sure, the changes that arrived every two years were simpler in nature than what we used to see every five, but as with most things in life, smaller but more frequent progress often works better than putting big changes off for a long time.

It's Intel's tick-tock philosophy that kept it from having a Bulldozer, and the lack of such structure that left AMD in the situation it is in today (on the CPU side at least). Ironically, what we saw happen between AMD and Intel over the past ten years is really the same mistake being made by both companies, just at different times. Intel's complacency and lack of an aggressive execution model allowed AMD to outshine it in the late K7/K8 days. AMD's similar lack of an execution model, and its own executive complacency, allowed the tides to turn once more.

Ivy Bridge is a tick+, as we've already established. Intel took a design risk and went for greater performance all while transitioning to the most significant process technology it has ever seen. The end result is a reasonable increase in CPU performance (for a tick), a big step in GPU performance, and a decrease in power consumption.

Today is the day Ivy Bridge goes official. Its name truly embodies its purpose. While Sandy Bridge was a bridge to a new architecture, Ivy connects a different set of things. It's a bridge to 22nm, warming the seat before Haswell arrives. It's a bridge to a new world of notebooks that are significantly thinner and more power efficient than what we have today. It's a means to the next chapter in the evolution of the PC.

Let's get to it.

Additional Reading

Intel's Ivy Bridge Architecture Exposed
Mobile Ivy Bridge Review
Undervolting & Overclocking on Ivy Bridge

Intel's Ivy Bridge: An HTPC Perspective

Comments

  • Alexo - Wednesday, April 25, 2012 - link

    It will be in Canada once Bill C-11 passes in a couple of months.
  • p05esto - Monday, April 23, 2012 - link

    It would be neat to see older CPUs in these benchmarks. It's always been a pet peeve of mine that these reviews only compare new CPUs against the previous generation and not 2-3 generations back.

    Most people do NOT upgrade with every single CPU release; I'm guessing most upgrade their rigs every 2-3 years. For example, I'm running a Core i7 930 and it's very fast already. I want to upgrade to Ivy and will either way, but I'd love to see how much faster I can expect Ivy to be compared to the ol' 930/920 that tons of people have.

    In my opinion, going back 2-3 generations is exactly what people want to compare against. No one will upgrade from Sandy Bridge (unless rich and a little stupid), but a lot of people will upgrade from the original 920 era, which is a few years old now.

    Just food for thought.
  • Tchamber - Monday, April 23, 2012 - link

    I agree, I have an X58 CPU too, and there was no SB CPU worth upgrading to.
  • Anand Lal Shimpi - Monday, April 23, 2012 - link

    I agree with you and typically try to do just that; time was an issue this round. I was on the road for much of the past month and had to cut out a number of things I wanted to do for this launch.

    Thankfully, we have Bench, with the 3770K included: www.anandtech.com/bench. Feel free to compare away :)

    Take care,
    Anand
  • AmdInside - Monday, April 23, 2012 - link

    Wish you guys would have included BF3 numbers in the discrete GPU benchmarks. It's a game that is CPU-heavy on multiplayer maps with large numbers of players.
  • fic2 - Monday, April 23, 2012 - link

    "One problem Intel does currently struggle with is game developers specifically targeting Intel graphics and treating the GPU as a lower class citizen."

    Well, as long as Intel treats its igp as the bastard red-headed stepchild, I am sure that developers will too.

    If they would actually put the HD3000/4000 into the mainstream parts, developers might pay attention to it. If I was a game developer, why would I pay attention to the HD2000/2500, which isn't really capable of playing crap and is the mainstream Intel IGP? If I was a game developer I would know that anyone buying a 'K' series part is also going to be buying a discrete video card.
  • JarredWalton - Monday, April 23, 2012 - link

    Intel's IGP performance has improved by about 500% since the days of GMA 4500. Is that not enough of an improvement for you? By comparison, Llano is only about 300% faster than the HD 4200 IGP. What's more, Haswell is set to go from 16 EUs in IVB GT2 to 40 EUs in GT3. Along with other architectural tweaks, I expect Haswell's GT3 IGP to be about three times as fast as Ivy Bridge. You'll notice in the gaming tests that 3X HD 4000 performance is going to put discrete GPUs in a tough situation.
  • fic2 - Monday, April 23, 2012 - link

    Yes, but the majority of users will not have an HD3000/4000, since they will have an OEM-built computer. Gamers, conversely, will more than likely have an HD3000/4000 included with their 'K' series part, BUT those same gamers will also have a discrete video card and never use the HD3000/4000.

    Again, if I was a game developer why would I put resources into optimizing for an igp that gamers aren't going to use?

    I give props to Intel for the huge jump in improvement in the 'K' series igp - it went from really crappy to just sort of crappy.
    If Intel would stop doing the stupid igp segmentation and include the HD3000/HD4000 in ALL of their *Bridge CPUs, then game developers might see there is a big market there to optimize for. Until Intel stops shooting itself in the marketing foot, game developers won't pay any attention to its igp. But based on IB, it looks like Haswell will probably do the same brain-damaged thing and put the "good" graphics into CPUs that less than 10% of people buy, and fewer than 10% of that 10% will ever skip a discrete graphics card and run off the igp.

    Oh, and your 500%/300% improvement is still pretty crappy, since HD 4200 was way faster than GMA 4500 to begin with; in absolute terms the 4200->Llano jump was bigger than 4500->3000:
    i.e.
    4500 starts out at 2. A 500% improvement would put it at 10, for an absolute improvement of 8.
    4200 starts out at 6. A 300% improvement would put it at 18, for an absolute improvement of 12.
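
    The same arithmetic in code, for anyone who wants to play with the numbers (2 and 6 are made-up index values, not benchmark scores):

    def jump(start, pct):
        # Reads "X% improvement" as "X/100 times the starting score",
        # the same way the numbers above do.
        end = start * (pct / 100)
        return end, end - start

    intel_end, intel_gain = jump(2, 500)  # GMA 4500 -> HD 3000: 2 -> 10, gain of 8
    amd_end, amd_gain = jump(6, 300)      # HD 4200 -> Llano:    6 -> 18, gain of 12
    print(intel_gain, amd_gain)           # 8.0 12.0: AMD's absolute jump is bigger
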
    So, AMD is still pulling away from Intel on the igp front. And AMD doesn't play the igp segmentation game, so their whole market gets a pretty good igp.
  • JarredWalton - Monday, April 23, 2012 - link

    It's an estimate, and it's pretty clear that AMD did not make the bigger jump. They were much faster than GMA 4500, but not the 3x improvement you suggest. In fact, I tested this several years back: http://www.anandtech.com/show/2818/8

    Even if we count the "failed to run" games as a 0 on Intel, AMD's HD 4200 was only 2.4x faster, and if we only look at games where the drivers didn't fail to work, they were more like 2X faster. So here's the detailed history that you're forgetting:

    1) HD 4200 was much faster than GMA 4500 -- call it twice as fast. Intel = 1, AMD = 2.

    2) Arrandale's HD Graphics really closed the gap with HD 4200 (which AMD continued to ship for far too long). Arrandale's "pathetic" HD Graphics were actually just 10% behind HD 4200, give or take. Intel = 1.9, AMD = 2 (http://www.anandtech.com/show/3773)

    3) Sandy Bridge more than doubled IGP performance on average compared to Arrandale, 130% faster by my tests (http://www.anandtech.com/show/4084/5). Meanwhile, AMD finally came out with a new IGP to replace the horribly outdated HD 4200 with Llano (http://www.anandtech.com/show/4444/11). The A8 GPU ended up being on average 50% faster than HD 3000. Intel = 2.5, AMD = 3.8.

    4) Ivy Bridge comes out and improves by 50% on average over HD 3000 (http://www.anandtech.com/show/5772/6). Intel = 3.8, AMD = 3.8

    So by those figures, what we've actually seen is that since GMA 4500MHD and HD 4200, Intel has improved its integrated graphics performance by 280% and AMD has improved by around 90%. So my initial estimates were off (badly, apparently). If we bring Trinity into the equation and it gets 50% more performance, then yes, AMD is still ahead: Intel 3.8, AMD 5.7. That works out to a 280% improvement for Intel over three years and roughly 185% for AMD over the same stretch.
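
    If you want to check that math yourself, here's the whole chain in a few lines of Python (the index values are my rough estimates from the reviews linked above, not precise measurements):

    # Rough IGP performance indices from the history above (estimates, not measurements)
    intel = {"GMA 4500": 1.0, "Arrandale HD": 1.9, "HD 3000": 2.5, "HD 4000": 3.8}
    amd = {"HD 4200": 2.0, "Llano A8": 3.8, "Trinity (if +50%)": 5.7}

    def improvement(old, new):
        # Percent improvement of the new score over the old one
        return (new - old) / old * 100

    print(improvement(intel["GMA 4500"], intel["HD 4000"]))       # ~280
    print(improvement(amd["HD 4200"], amd["Llano A8"]))           # ~90
    print(improvement(amd["HD 4200"], amd["Trinity (if +50%)"]))  # ~185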

    Of course, if we look at the CPU side, Intel CPU multithreaded performance (just looking at Cinebench 10 SMP score) has gone up 340% from the Core 2 P8600 to the i7-3720QM. AMD's performance in the same test has gone up 80%. For single-threaded performance, Intel has gone up 115% and AMD has improved about 5-10%. So for all the talk of Intel IGP being bad, at least in terms of relative performance Intel has kept pace or even surpassed AMD. For CPU performance on the other hand, AMD has only improved marginally since the days of Athlon X2.

    Your discussion of Intel's market segmentation is apparently missing the whole point of running a business. You do it to make a profit. Core i3 exists because not everyone is willing to pay Core i5 prices, and Core i5 exists because even fewer people are willing to pay Core i7 prices. The people that buy Core i3 and are willing to compromise on performance are happy, the people that buy i5 are happy, and the people that buy i7 are happy...and they all give money to Intel.

    If you look at the mobile side of the equation, your arguments become even less meaningful. Intel put HD 3000 into all of the Core i3/i5/i7 mobile parts precisely because that's where IGP performance matters most. People who care about graphics performance on desktops are already going to buy a dGPU, but you can't just add a dGPU to a notebook if you want more performance.

    And finally, "AMD doesn't play IGP segmentation" is just completely false. Take off your blinders. A8 APUs have 400 cores clocked at 444MHz. A6 APUs have 320 cores clocked at 400MHz, and A4 APUs have 240 cores clocked at 444MHz. AMD is every bit as bad as Intel when it comes to market segmentation by IGP performance!
  • fic2 - Monday, April 23, 2012 - link

    I guess you are correct about AMD - I haven't really paid much attention to them since, as you said, they can't keep up on the cpu side.

    But, TH lists the 6410 (the A4's igp) as being 3 levels above the HD3000 in their Graphics Hierarchy Chart. They also have the HD2000 2 levels below the HD3000. So, Intel's mainstream igp is 5 levels below AMD's lowest igp.

    That is why game developers treat Intel's igp as a lower class citizen.

    The quote that I was addressing (as stated in my first post) is:
    "One problem Intel does currently struggle with is game developers specifically targeting Intel graphics and treating the GPU as a lower class citizen."

    The article acts like it is a total mystery why game developers don't give the Intel igp any respect. As I have repeatedly said in my comments, until Intel starts putting the HD3000/HD4000 into its mainstream parts and not just the 'K' series, game developers will know that the Intel igp is a lower class citizen. And, yes, I know that you can get an xxx5 variant w/HD3000 if you look around enough, but I doubt any OEM is using them, and they didn't appear until 6+ months after launch. It is just easier to slap a 5-6 year old discrete video card into a computer.
    Game developers can't target the HD3000/HD4000, since those are the minority of SB/IB chips; they would have to target the HD2000/HD2500. Since they don't, the conclusion is that it isn't worth putting resources into such a low-end graphics solution.
