The Test

It turns out that our initial preview numbers were quite good: the shipping 3770K performs identically to what we tested last month. To keep the review length manageable, we're presenting a subset of our results here. For all benchmark results and even more comparisons, be sure to use our performance comparison tool: Bench.

Motherboard: ASUS P8Z68-V Pro (Intel Z68), ASUS Crosshair V Formula (AMD 990FX), Intel DX79SI (Intel X79), Intel DZ77GA-70K (Intel Z77)
Hard Disk: Intel X25-M SSD (80GB), Crucial RealSSD C300, OCZ Agility 3 (240GB)
Memory: 4 x 4GB G.Skill Ripjaws X DDR3-1600 9-9-9-20
Video Card: ATI Radeon HD 5870 (Windows 7), AMD Processor Graphics, Intel Processor Graphics
Video Drivers: AMD Catalyst 12.3
Desktop Resolution: 1920 x 1200
OS: Windows 7 x64

General Performance

SYSMark 2007 & 2012

Although not the best indication of overall system performance, the SYSMark suites do give us a good look at lighter workloads than we're used to testing. SYSMark 2007 leans more toward low thread count performance, and 2012 isn't tremendously more thread-heavy either.

As the SYSMark suites aren't particularly thread heavy, there's little advantage to the 6-core Sandy Bridge E CPUs. The 3770K, however, manages to slot in above all of the other Sandy Bridge parts, coming in anywhere from 5% to 20% faster than the 2600K. The biggest advantages show up in either the lightly threaded tests or in the FP-heavy benchmarks. Given what we know about Ivy's enhancements, this is exactly what we'd expect.

SYSMark 2012—Overall

SYSMark 2012—Office Productivity

SYSMark 2012—Media Creation

SYSMark 2012—Web Development

SYSMark 2012—Data/Financial Analysis

SYSMark 2012—3D Modeling

SYSMark 2012—System Management

SYSMark 2007—Overall

SYSMark 2007—Productivity

SYSMark 2007—E-Learning

SYSMark 2007—Video Creation

SYSMark 2007—3D

Content Creation Performance

Adobe Photoshop CS4

To measure performance under Photoshop CS4 we turn to the Retouch Artists' Speed Test. The test does basic photo editing; there are a couple of color space conversions, many layer creations, color curve adjustments, image and canvas size adjustments, an unsharp mask, and finally a Gaussian blur performed on the entire image.

The whole process is timed, and thanks to the use of Intel's X25-M SSD as our testbed drive, performance is far more predictable than when we used to test on mechanical disks.

Time is reported in seconds, and lower numbers mean better performance. The test is multithreaded and can hit all four cores in a quad-core machine.
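
As a rough illustration of the methodology (this is not our actual harness; the wrapper script name below is a hypothetical placeholder), here's a minimal sketch of wall-clock timing a scripted run and reporting the median of several passes:

```python
import statistics
import subprocess
import time

# Hypothetical wrapper that opens Photoshop and plays the Retouch Artists action;
# any scripted workload can be timed the same way.
CMD = ["run_retouch_artists_test.bat"]

def run_once():
    start = time.perf_counter()
    subprocess.run(CMD, check=True)        # blocks until the action completes
    return time.perf_counter() - start     # elapsed wall-clock seconds

# The median of a few runs smooths out caching and turbo variability.
times = [run_once() for _ in range(3)]
print(f"median: {statistics.median(times):.1f} s (lower is better)")
```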

Adobe Photoshop CS4—Retouch Artists Speed Test

Our Photoshop test is well threaded, but it doesn't peg all cores constantly; instead you get burstier behavior. With SNB-E's core count advantage neutralized, the 3770K steps up as the fastest CPU we've tested here. The performance advantage over the 2600K is around 9%.

3dsmax 9

Today's desktop processors are more than fast enough to do professional-level 3D rendering at home. To look at performance under 3dsmax we ran the SPECapc 3dsmax 8 benchmark (only the CPU rendering tests) under 3dsmax 9 SP1. The results reported are the rendering composite scores.

3dsmax r9—SPECapc 3dsmax 8 CPU Test

In another FP heavy workload we see a pretty reasonable gain for Ivy Bridge: 8.5% over a 2600K. This isn't enough to make you want to abandon your Sandy Bridge, but it's a good step forward for a tick.

Cinebench 11.5

Created by the makers of Cinema 4D, Cinebench is a popular 3D rendering benchmark that gives us both single and multi-threaded rendering results.

Cinebench 11.5—Single Threaded

The single threaded Cinebench test shows a 9% performance advantage for the 3770K over the 2600K. The gap increases slightly to 11% as we look at the multithreaded results:

Cinebench 11.5—Multi-Threaded

If you're running a workload that can really stress multiple cores, the 6-core Sandy Bridge E parts will remain unstoppable, but in the quad-core world Ivy Bridge leads the pack.

Video Transcoding Performance

x264 HD 3.03 Benchmark

Graysky's x264 HD test uses x264 to encode a 4Mbps 720p MPEG-2 source. The focus here is on quality rather than speed, so the benchmark uses a 2-pass encode and reports the average frame rate in each pass.

x264 HD Benchmark—1st pass—v3.03

x264 HD Benchmark—2nd pass—v3.03

In the second pass of our x264 test we see a nearly 14% increase over the 2600K. Once again, there's no replacement for more cores in these types of workloads, but delivering better performance at a lower TDP than last year's quad-core is great for more thermally conscious desktops.
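
For reference, here's a minimal sketch of what a scripted two-pass x264 run looks like. The source file name and target bitrate are placeholders rather than the benchmark's exact settings, and it assumes an x264 binary on the PATH that can read the source directly:

```python
import subprocess
import time

SOURCE = "source_720p.mpg"   # stand-in for the benchmark's 4Mbps 720p MPEG-2 source
BITRATE = "4000"             # hypothetical target bitrate in kbps

def run_pass(pass_no):
    cmd = ["x264", "--pass", str(pass_no), "--bitrate", BITRATE,
           "--stats", "x264_stats.log", "--output", "out.264", SOURCE]
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

# Pass 1 analyzes the source, pass 2 uses those statistics for the final encode;
# the x264 HD benchmark reports the average frame rate achieved in each pass.
for p in (1, 2):
    print(f"pass {p}: {run_pass(p):.1f} s")
```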

Software Development Performance

Compile Chromium Test

You guys asked for it, and I finally have something I feel is a good software build test. Using Visual Studio 2008 I'm compiling Chromium. It's a pretty huge project that takes over forty minutes to compile from the command line on a Core i3 2100. But the results are repeatable, and the compile process will stress all 12 threads at 100% for almost the entire time on a 980X, so it works for me.

Build Chromium Project—Visual Studio 2008

Ivy Bridge shows more traditional gains in our VS2008 benchmark: performance moves forward here by a few percent, but nothing significant. We're seeing a fairly compressed dynamic range in this particular compile workload; it's quite possible that other bottlenecks are beginning to creep in as microarchitectures get even faster.
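
For the curious, the build itself can be timed with a trivial wrapper. This is only a sketch; the solution path below is a hypothetical placeholder, and it assumes devenv.exe from Visual Studio 2008 is on the PATH:

```python
import subprocess
import time

# Hypothetical solution path; the point is simply timing a full command-line build.
CMD = ["devenv.exe", "chrome.sln", "/Build", "Release"]

start = time.perf_counter()
subprocess.run(CMD, check=True)
elapsed = time.perf_counter() - start
print(f"build time: {elapsed / 60:.1f} minutes (lower is better)")
```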

Compression & Encryption Performance

7-Zip Benchmark

By working with a small dataset, the 7-zip benchmark gives us an indication of multithreaded integer performance without being IO limited:

7-zip Benchmark

Although real-world compression/decompression tests can be heavily influenced by disk IO, the CPU does play a significant role. Here we're showing a 15% increase in performance over the 2600K. In the real world you'd see something much smaller, as workloads aren't always so well threaded. The results here do have implications for other heavily compute-bound integer workloads, however.
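
To illustrate the in-memory, IO-free idea (this is not 7-zip's benchmark code, just an analogous sketch; the buffer size and worker count are arbitrary), here's a minimal example that measures LZMA compression throughput on a buffer that never touches disk, spread across worker processes:

```python
import lzma
import os
import time
from concurrent.futures import ProcessPoolExecutor

# 8MB in-memory buffer built from a repeated random block so LZMA has patterns to find.
CHUNK = os.urandom(64 * 1024) * 128
WORKERS = os.cpu_count() or 4

def compress_chunk(_):
    # Pure CPU work: compress the buffer and return the compressed size.
    return len(lzma.compress(CHUNK, preset=1))

if __name__ == "__main__":
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=WORKERS) as pool:
        list(pool.map(compress_chunk, range(WORKERS)))
    elapsed = time.perf_counter() - start
    mb = WORKERS * len(CHUNK) / (1024 * 1024)
    print(f"{mb / elapsed:.1f} MB/s across {WORKERS} workers")
```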

TrueCrypt Benchmark

TrueCrypt is a very popular encryption package that offers full AES-NI support. The application also features a built-in encryption benchmark that we can use to measure CPU performance:

AES-128 Performance—TrueCrypt 7.1 Benchmark

Our TrueCrypt test scales fairly well with clock speed; I suspect what we're seeing here might be due in part to Ivy's ability to maintain higher multi-core turbo frequencies despite having max turbo frequencies similar to Sandy Bridge's.
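
As a rough idea of what an in-memory AES throughput measurement looks like (this is not TrueCrypt's built-in benchmark; it's an analogous sketch using the third-party cryptography package, which picks up AES-NI when the CPU and underlying OpenSSL build support it):

```python
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BUFFER = os.urandom(64 * 1024 * 1024)          # 64MB in-memory buffer, no disk IO
key, nonce = os.urandom(16), os.urandom(16)    # AES-128 key and CTR nonce

encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
start = time.perf_counter()
ciphertext = encryptor.update(BUFFER) + encryptor.finalize()
elapsed = time.perf_counter() - start

# Single-buffer, single-threaded; TrueCrypt's benchmark exercises its own implementation instead.
print(f"AES-128-CTR: {len(BUFFER) / (1024 ** 2) / elapsed:.0f} MB/s")
```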

Comments

  • Alexo - Wednesday, April 25, 2012

    It will be in Canada once Bill C-11 passes in a couple of months.
  • p05esto - Monday, April 23, 2012

    It would be neat to see older CPUs in these benchmarks. It's always a pet peeve of mine that these reviews only compare new CPUs against the previous generation and not 2-3 generations back.

    Most people do NOT upgrade with every single CPU release, most people upgrade their rigs every 2-3 years I'm guessing. For example, I'm running a Core i7 930 and it's very fast already, I want to upgrade to Ivy and will either way, but I'd love to see how much faster I can expect the Ivy to compare to the ol 930/920 which tons of people have.

    In my opinion going back 2-3 generations is the ideal thing that people want to compare to. No one will upgrade from Sandy Bridge (unless rich and a little stupid), but a lot of people will upgrade from the original 920 era which is a few years old now.

    Just food for thought.
  • Tchamber - Monday, April 23, 2012

    I agree, I have an X58 CPU too, and there was no SB CPU worth upgrading to.
  • Anand Lal Shimpi - Monday, April 23, 2012

    I agree with you and typically try to do just that, time was an issue this round - I was on the road for much of the past month and had to cut out a number of things I wanted to do for this launch.

    Thankfully, we have Bench - with the 3770K included: www.anandtech.com/bench. Feel free to compare away :)

    Take care,
    Anand
  • AmdInside - Monday, April 23, 2012

    Wish you guys would have included BF3 numbers for discrete GPU benchmarks. It is a game that is CPU heavy in multiplayer maps with large amounts of people.
  • fic2 - Monday, April 23, 2012

    "One problem Intel does currently struggle with is game developers specifically targeting Intel graphics and treating the GPU as a lower class citizen."

    Well, as long as Intel treats their igp as the bastard red-headed step child then I am sure that developers will too.

    If they would actually put the HD3000/4000 into the main stream parts developers might pay attention to it. If I was a game developer why would I pay attention to the HD2000/2500 which isn't really capable of playing crap and is the mainstream Intel IGP? If I was a game developer I would know that anyone buying a 'K' series part is also going to be buying a discrete video card also.
  • JarredWalton - Monday, April 23, 2012

    Intel's IGP performance has improved by about 500% since the days of GMA 4500. Is that not enough of an improvement for you? By comparison, Llano is only about 300% faster than the HD 4200 IGP. What's more, Haswell is set to go from 16 EUs in IVB GT2 to 40 EUs in GT3. Along with other architectural tweaks, I expect Haswell's GT3 IGP to be about three times as fast as Ivy Bridge. You'll notice that in the gaming tests, 3X HD 4000 is going to put discrete GPUs in a tough situation.
  • fic2 - Monday, April 23, 2012

    Yes, but the majority of users will not have an HD3000/4000 since they will have an OEM built computer. Conversely, gamers will more than likely have an HD3000/4000 included with the 'K' series. BUT, these same gamers will more than likely also have a discrete video card and never use the HD3000/4000.

    Again, if I was a game developer why would I put resources into optimizing for an igp that gamers aren't going to use?

    I give props to Intel for the huge jump in improvement in the 'K' series igp - it went from really crappy to just sort of crappy.
    If Intel would stop doing the stupid igp segmentation and include the HD3000/HD4000 in ALL of their *Bridge cpus then game developers might see there is a big market there to optimize for. Until Intel stops shooting themselves in the marketing foot then game developers won't pay any attention to their igp. But, based on IB it looks like Haswell will probably do the same brain damaged thing and include the "good" graphics into cpus that less than 10% of the people buy and less than 10% of that 10% don't use a discrete graphics card.

    Oh, and your 500%/300% improvement is pretty crappy since HD 4200 was way faster than GMA 4500 to begin with so in absolute terms the 4200->Llano made a bigger jump than 4500->3000:
    i.e.
    4500 starts out at 2. 500% improvement would put it to 10 for an absolute improvement of 8.
    4200 starts out at 6. 300% improvement would put it at 18 for an absolute improvement of 12.
    So, AMD is still pulling away from Intel on the igp front. And AMD doesn't play igp segmentation game so their whole market has pretty good igp.
  • JarredWalton - Monday, April 23, 2012

    It's an estimate, and it's pretty clear that AMD did not make the bigger jump. They were much faster than GMA 4500, but not the 3x improvement you suggest. In fact, I tested this several years back: http://www.anandtech.com/show/2818/8

    Even if we count the "failed to run" games as a 0 on Intel, AMD's HD 4200 was only 2.4x faster, and if we only look at games where the drivers didn't fail to work, they were more like 2X faster. So here's the detailed history that you're forgetting:

    1) HD 4200 was much faster than GMA 4500 -- call it twice as fast. Intel = 1, AMD = 2.

    2) Arrandale's HD Graphics really closed the gap with HD 4200 (which AMD continued to ship for far too long). Arrandale's "pathetic" HD Graphics were actually just 10% behind HD 4200, give or take. Intel = 1.9, AMD = 2 (http://www.anandtech.com/show/3773)

    3) Sandy Bridge more than doubled IGP performance on average compared to Arrandale, 130% faster by my tests (http://www.anandtech.com/show/4084/5). Meanwhile, AMD finally came out with a new IGP to replace the horribly outdated HD 4200 with Llano (http://www.anandtech.com/show/4444/11). The A8 GPU ended up being on average 50% faster than HD 3000. Intel = 2.5, AMD = 3.8.

    4) Ivy Bridge comes out and improves by 50% on average over HD 3000 (http://www.anandtech.com/show/5772/6). Intel = 3.8, AMD = 3.8

    So by those figures, what we've actually seen is that since GMA 4500MHD and HD 4200, Intel has improved their integrated graphics performance 280% and AMD has improved their performance by around 90%. So my initial estimates were off (badly, apparently). If we bring Trinity into the equation and it gets 50% more performance, then yes AMD is still ahead: Intel 3.8, AMD 5.7. That will give Intel a 280% improvement over three years and AMD a similar 280% improvement.

    Of course, if we look at the CPU side, Intel CPU multithreaded performance (just looking at Cinebench 10 SMP score) has gone up 340% from the Core 2 P8600 to the i7-3720QM. AMD's performance in the same test has gone up 80%. For single-threaded performance, Intel has gone up 115% and AMD has improved about 5-10%. So for all the talk of Intel IGP being bad, at least in terms of relative performance Intel has kept pace or even surpassed AMD. For CPU performance on the other hand, AMD has only improved marginally since the days of Athlon X2.

    Your discussion of Intel's market segmentation is apparently missing the whole point of running a business. You do it to make a profit. Core i3 exists because not everyone is willing to pay Core i5 prices, and Core i5 exists because even fewer people are willing to pay Core i7 prices. The people that buy Core i3 and are willing to compromise on performance are happy, the people that buy i5 are happy, and the people that buy i7 are happy...and they all give money to Intel.

    If you look at the mobile side of the equation, your arguments become even less meaningful. Intel put HD 3000 into all of the Core i3/i5/i7 mobile parts because that's where IGP performance is the most important. They're doing the exact same thing on the mobile side. People who care about graphics performance on desktops are already going to buy a dGPU, but you can't just add a dGPU to a notebook if you want more performance.

    And finally, "AMD doesn't play IGP segmentation" is just completely false. Take off your blinders. A8 APUs have 400 cores clocked at 444MHz. A6 APUs have 320 cores clocked at 400MHz, and A4 APUs have 240 cores clocked at 444MHz. AMD is every bit as bad as Intel when it comes to market segmentation by IGP performance!
  • fic2 - Monday, April 23, 2012

    I guess you are correct about AMD - I haven't really paid much attention to them since, as you said, they can't keep up on the cpu side.

    But, TH lists the 6410 (A4 igp) as being 3 levels above the HD3000 in their Graphics Hierarchy Chart. They also have the HD2000 2 levels below the HD3000. So, Intel's mainstream igp is 5 levels below AMDs lowest igp.

    That is why game developers treat Intel's igp as a lower class citizen.

    The quote that I was addressing (as stated in my first post) is:
    "One problem Intel does currently struggle with is game developers specifically targeting Intel graphics and treating the GPU as a lower class citizen."

    The article acts like it is a total mystery why game developers don't give the Intel igp any respect. As I have repeatedly said in my comments - until Intel starts putting the HD3000/HD4000 into their mainstream parts and not just the 'K' series game developers know that Intel igp is a lower class citizen. And, yes, I know that you can get a xxx5 variant w/HD3000 if you look around enough, but I doubt any OEM is using them and they didn't appear until 6+ months after the launch. It is just easier to slap a 5-6 year old discrete video card into a computer.
    Game developers can't target the HD3000/HD4000 since those are the minority for SB/IB chips. They would have to target the HD2000/HD2500. Since they don't the conclusion is that it isn't worth putting the resources into such a low end graphics solution.
