The times, they are changing. In fact, the times have already changed; we're just waiting for the results. I remember the first time Intel brought me into a hotel room to show me its answer to AMD's Athlon 64 FX: the Pentium 4 Extreme Edition. Back then the desktop race was hotly contested, and pushing the absolute limits of what could be done without concern for power consumption was the name of the game. In the mid-2000s, the notebook started to take over. Just as Apple famously announced that it was no longer a manufacturer of personal computers but a manufacturer of mobile devices, Intel came to a similar realization years earlier, when these slides were first shown at IDF in 2005:


[Slide: IDF 2005]

[Slide: IDF 2005]

Intel is preparing for another major transition, similar to the one it brought to light seven years ago. The move will once again be motivated by mobility, and the transition will be away from the giant CPUs that currently power high-end desktops and notebooks to lower power, more integrated SoCs that find their way into tablets and smartphones. Intel won't leave the high-end market behind, but the trend towards mobility didn't stop with notebooks.

The fact of the matter is that everything Charlie has said on the big H is correct. Haswell will be a significant step forward in graphics performance over Ivy Bridge, and will likely mark Intel's biggest generational leap in GPU technology of all time. Internally Haswell is viewed as the solution to the ARM problem. Build a chip that can deliver extremely low idle power, to the point where you can't tell the difference between an ARM tablet running in standby and one with a Haswell inside. At the same time, give it the performance we've come to expect from Intel. Haswell is the future, and this is the bridge to take us there.

In our Ivy Bridge preview I applauded Intel for executing so well over the past few years. By limiting major architectural shifts to known process technologies, and keeping designs simple when transitioning to a new manufacturing process, Intel took what was once a five-year design cycle for microprocessor architectures and condensed it into two. Sure, the changes every two years were simpler than what we used to see every five, but as with most things in life, smaller but more frequent progress often works better than putting off big changes for a long time.

It's Intel's tick-tock philosophy that kept it from having a Bulldozer, and the lack of such structure that left AMD in the situation it is in today (on the CPU side at least). Ironically, what we saw happen between AMD and Intel over the past ten years is really the same mistake being made by both companies, just at different times. Intel's complacency and lack of an aggressive execution model allowed AMD to outshine it in the late K7/K8 days; AMD's similar lack of an execution model, and its own executive complacency, allowed the tides to turn once more.

Ivy Bridge is a tick+, as we've already established. Intel took a design risk and went for greater performance while also making its most significant process transition yet: to 22nm and tri-gate transistors. The end result is a reasonable increase in CPU performance (for a tick), a big step up in GPU performance, and a decrease in power consumption.

Today is the day that Ivy Bridge gets official. Its name truly embodies its purpose. While Sandy Bridge was a bridge to a new architecture, Ivy connects a different set of things. It's a bridge to 22nm, warming the seat before Haswell arrives. It's a bridge to a new world of notebooks that are significantly thinner and more power efficient than what we have today. It's a means to the next chapter in the evolution of the PC.

Let's get to it.

Additional Reading

Intel's Ivy Bridge Architecture Exposed
Mobile Ivy Bridge Review
Undervolting & Overclocking on Ivy Bridge
Intel's Ivy Bridge: An HTPC Perspective

The Lineup: Quad-Core Only for Now
173 Comments

  • hechacker1 - Monday, April 23, 2012

    VT-d is interesting if you run ESXi or a Linux-based hypervisor, since they can use VT-d to assign hardware directly to virtual machines. I think you can even share hardware with it.

    In Linux, for example, you could host Windows, assign it a real GPU, and get full performance from it.

    A while ago I built a machine with that idea in mind, but the software bits weren't in place just yet.

    I too wish for an overclockable VT-d part.
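The passthrough setup described above depends on two things being visible to the hypervisor: the VT-x instruction-set flag (`vmx`) on the CPU and an active IOMMU (VT-d) in the firmware and kernel. As a minimal sketch (assuming a Linux host with the usual `/proc/cpuinfo` and sysfs layout), one might probe for both like this:

```shell
#!/bin/sh
# Probe for the two prerequisites of VT-d device assignment on a
# Linux host: the VT-x CPU flag and an active IOMMU.

has_vmx() {
  # "vmx" in the cpuinfo flags line means the CPU supports VT-x
  if grep -q -w vmx /proc/cpuinfo 2>/dev/null; then echo yes; else echo no; fi
}

has_iommu() {
  # the kernel populates /sys/class/iommu only when VT-d (or AMD-Vi)
  # is enabled in firmware and the IOMMU driver is active
  if [ -n "$(ls -A /sys/class/iommu 2>/dev/null)" ]; then
    echo yes
  else
    echo no
  fi
}

echo "VT-x: $(has_vmx)  IOMMU: $(has_iommu)"
```

Only when both report yes can a GPU be detached from its host driver and handed to a guest (via `vfio-pci` on modern kernels, or `pci-stub` in kernels of this era).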
  • terragb - Monday, April 23, 2012

    Just to add to this, all of the processors do support VT-x, which is the potentially performance-enhancing spec for virtualization.
  • JimmiG - Monday, April 23, 2012

    Really annoying how Intel decides seemingly at random which parts get VT-d and which don't.
    Why do you get it with the $174 i5 3450, but not with the "one CPU to rule them all", everything-but-the-kitchen-sink, $313 i7 3770K?
    It's also a stupid way to segment your product line, since 99% of the people buying systems with these CPUs won't even know what it does.

    This means AMD also gets some of my money when I upgrade - I'll just build a cheap Bulldozer system for my virtualization needs. I can't really use my Phenom II X4 for that after upgrading - it uses too much power and it depends on DDR2 RAM, which is hard to find and expensive.
  • dcollins - Monday, April 23, 2012

    VT-d is required to support Intel's Trusted Execution Technology (TXT), which is used by many OEMs to provide business management tools. That's why the low-end CPUs have support and the enthusiast SKUs do not. VT-d provides no benefit to desktop users right now because desktop virtualization packages do not support it.

    I agree that it is frustrating having to sacrifice future-proofing for overclocking, but Intel's logic kind of makes sense. Remember, any features that can be disabled will increase yields which means lower prices (or higher margins).
  • JimmiG - Tuesday, April 24, 2012

    VirtualBox, which is one of the most popular desktop virtualization packages, does support VT-d. In fact, hardware virtualization is required for 64-bit guests and for guests with more than one virtual CPU.

    Does VT-d really use so many transistors that disabling it increases yields? AMD keeps its hardware virtualization features enabled even in its lowest-end CPUs (even those where entire cores have been disabled to improve yields).
  • dgingeri - Monday, April 23, 2012

    "I took the last Harry Potter Blu-ray, stripped it of its DRM and used Media Espresso to make it playable on an iPad 2 (1024 x 768 preset)."

    I wouldn't admit that in print if I were you. The DMCA goblins will come and get you.
  • p05esto - Monday, April 23, 2012

    They can say they're just kidding and used it as an example, because they would "never" actually do that. I think pirate cops would need more than talk to go to court. Imagine how bad this site would rip into them if they said anything, lol.
  • XJDHDR - Monday, April 23, 2012

    Why? No one loses money from transcode benchmarks. Besides, piracy is the real problem. If it didn't exist, there would be no DRM to strip away.
  • dgingeri - Monday, April 23, 2012

    Sure, nobody loses any money, but the entertainment industry pushed DMCA through, and they will use it if they think they could get any profit out of it. It's one law, out of many, that isn't there to protect anyone. It's there so the MPAA and RIAA can screw people over.
  • copyrightforreal - Monday, April 23, 2012

    Don't pretend you know shit about copyright law when you don't.

    Ripping a DVD you own is NOT illegal under the DMCA or the Copyright Act.

    Wikipedia article that even you will be able to comprehend:
    http://en.wikipedia.org/wiki/Ripping#Circumvention...
