Adobe Photoshop CS4 Performance

To measure performance under Photoshop CS4 we turn to the Retouch Artists’ Speed Test. The test does basic photo editing: a couple of color space conversions, many layer creations, color curve adjustments, image and canvas size adjustments, an unsharp mask, and finally a Gaussian blur applied to the entire image.

The whole process is timed, and thanks to the use of Intel's X25-M SSD as our testbed drive, performance is far more predictable than it was back when we tested on mechanical disks.

Time is reported in seconds; lower numbers mean better performance. The test is multithreaded and can keep all four cores of a quad-core machine busy.
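
To give a rough sense of what a scripted, timed editing pass looks like, here's a minimal Python sketch using Pillow as a stand-in for Photoshop. The file name and filter settings are placeholders and only loosely mirror the Retouch Artists action set; it illustrates the idea of the benchmark rather than reproducing it.

    # Minimal sketch of a timed, scripted photo-editing pipeline.
    # Pillow stands in for Photoshop; the steps only loosely mirror the
    # Retouch Artists test. File name and parameters are placeholders.
    import time
    from PIL import Image, ImageFilter

    start = time.perf_counter()

    img = Image.open("test_photo.jpg")
    img = img.convert("CMYK").convert("RGB")            # round-trip color space conversion
    img = img.resize((img.width * 2, img.height * 2))   # image size adjustment
    img = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))
    img = img.filter(ImageFilter.GaussianBlur(radius=4))
    img.save("test_photo_out.tif")

    print(f"Elapsed: {time.perf_counter() - start:.1f} s (lower is better)")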

Right off the bat, Sandy Bridge is killer. In our Photoshop test it’s faster than its closest quad-core price competitor, faster than the identically clocked Lynnfield, and faster than AMD’s fastest; it loses out only to Intel’s $999 Core i7 980X, and even then it takes only about 9% longer to complete our benchmark.
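
As a reminder of how these lower-is-better comparisons are derived, the gap is simply the difference in completion times divided by the faster chip's time (the times below are made up for illustration, not our measured results):

    percent longer = (t_SandyBridge - t_980X) / t_980X × 100
    e.g. (16.4 s - 15.0 s) / 15.0 s × 100 ≈ 9% longer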

DivX 6.5.3 with Xmpeg 5.0.3

Our DivX test is the same DivX / XMpeg 5.0.3 test we've run for the past few years: the 1080p source file is encoded using the unconstrained DivX profile, quality/performance is set to balanced at 5, and enhanced multithreading is enabled.
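
XMpeg is driven through its GUI, so there's no exact command line to reproduce here, but as a rough, hedged analogue the sketch below times a multithreaded MPEG-4 (DivX-compatible) encode of a 1080p source using ffmpeg. The input file name and the -q:v value are placeholders and only approximate the DivX "balanced at 5" setting.

    # Rough analogue of the DivX encode: time a multithreaded MPEG-4 ASP
    # (DivX-compatible) encode of a 1080p source with ffmpeg. The file
    # names and quality value are placeholders, not the article's exact
    # DivX 6.5.3 / XMpeg settings.
    import subprocess
    import time

    cmd = [
        "ffmpeg", "-y",
        "-i", "source_1080p.mpg",
        "-c:v", "mpeg4",        # libavcodec's MPEG-4 ASP encoder
        "-vtag", "DX50",        # tag the stream as DivX-compatible
        "-q:v", "5",            # fixed quantizer, loosely "quality 5"
        "-threads", "0",        # let the encoder choose its thread count
        "encoded_divx.avi",
    ]

    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    print(f"Encode time: {time.perf_counter() - start:.1f} s")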

While not the most stressful encoding test, it’s still a valid measure of performance, and once again Sandy Bridge does very well: we’re roughly 16% faster than the Core i5 760 and just behind the Core i7 880. Clock for clock there's not a huge improvement in performance here (Hyper-Threading doesn't seem to do much); it's simply a better value than the 760, assuming prices remain the same.

x264 HD Video Encoding Performance

Graysky's x264 HD test uses the publicly available x264 encoder to transcode a 4Mbps 720p MPEG-2 source. The focus here is on quality rather than speed; the benchmark therefore uses a 2-pass encode and reports the average frame rate for each pass.
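
For reference, a two-pass x264 encode of the sort the benchmark drives looks roughly like the sketch below. The file names and bitrate target are illustrative, and reading an MPEG-2 source assumes an x264 build with lavf input support; the real benchmark is distributed as a ready-made script that handles the runs and fps reporting itself.

    # Sketch of a 2-pass x264 encode like the one the Graysky benchmark
    # drives. File names and the 4000 kbps target are placeholders.
    import subprocess

    src = "720p_source.mpg"   # placeholder 720p MPEG-2 source (needs lavf input support)
    common = ["x264", "--bitrate", "4000", "--stats", "x264.stats"]

    # Pass 1: the lighter analysis pass; writes the stats file, output discarded.
    subprocess.run(common + ["--pass", "1", "-o", "/dev/null", src], check=True)

    # Pass 2: the heavier encoding pass; reads the stats file and produces the
    # final output. x264 reports the average fps when each pass finishes.
    subprocess.run(common + ["--pass", "2", "-o", "out_720p.mkv", src], check=True)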

The first pass is only lightly threaded, and performance there is much improved: the 2400 is 14.6% faster than the Core i7 880.

The actual encoding pass favors more threads, so we see a big improvement over the 760 (19%), but Sandy Bridge falls short of the Core i7 880. Turn Hyper-Threading on and we get a 12.6% improvement over an identically clocked/configured Lynnfield.

Note that CPU-based video encoding performance may matter less if Intel has implemented a good hardware transcode engine in Sandy Bridge.

Windows Media Encoder 9 x64 Advanced Profile

To stay codec agnostic, we also run a Windows Media Encoder benchmark that looks at the same sort of workload as the DivX and x264 tests, just using WME instead.

WME performance rarely scales with core count anymore. Our benchmark doesn’t scale well beyond four cores, so the only hope for more performance is an increase in clock speed or IPC. Sandy Bridge delivers the latter.

A 20% increase in performance vs. the similarly clocked 880 in a test that doesn’t scale with anything but IPC tells you a lot. Compared to the Core i5 760, Sandy Bridge is 26% faster.

Comments

  • overzealot - Saturday, August 28, 2010 - link

    Now, that's a name I've not heard in a long time. A long time.
  • mapesdhs - Saturday, August 28, 2010 - link

    Seems to me Intel is slowly locking up the overclocking scene because it has no
    competition. If so, and Intel continues in that direction, then it would be a great
    chance for AMD to win back overclocking fans with something that just isn't
    locked out in the same way.

    Looking at the performance numbers, I see nothing which suggests a product that
    would beat my current 4GHz i7 860, except for the expensive top-end unlocked
    option which I wouldn't consider anyway given the price.

    Oh well, perhaps my next system will be a 6-core AMD.

    Ian.
  • LuckyKnight - Saturday, August 28, 2010 - link

    Do we have something more precise about the release date? Q1 is what - Jan/Feb/March/April?

    Looking to upgrade a Core 2 Duo at the moment - not sure whether to wait.
  • mino - Saturday, August 28, 2010 - link

    Q1 (in this case) means trickle amounts in Jan/Feb, mainstream availability in Mar/April, and worth-buying mature mobos in the May/June timeframe.
  • tatertot - Saturday, August 28, 2010 - link

    Intel has already announced that shipments for revenue will occur in Q4 of this year. So, January launch.

    They've also commented that Sandy Bridge OEM demand is very strong, and they are adjusting the 32nm ramp up to increase supply. So January should be a decent launch.

    Not surprising-- these parts have been in silicon since LAST summer.
  • chrsjav - Saturday, August 28, 2010 - link

    Do modern clock generators use a quartz resonator? How would that be put on-die?
  • iwodo - Saturday, August 28, 2010 - link

    Since you didn't get this chip directly from Intel, I suspect there were no review guidelines for you to follow, like which tests to run and which not to run, etc.

    Therefore those game benchmarks were not the result of special optimization in drivers. Which is great, because drivers matter much more than hardware for GPUs. If these are only an early indication of what Intel's new GPU can do, I expect there is more to extract from the drivers.

    You mention a 2-core GPU (12 EU) versus a 1-core GPU (6 EU). Any guess as to what the "E" stands for? It also seems like an SLI-like tech rather than actually having more EUs in one chip, the difference being that SLI or CrossFire doesn't get any advantage unless drivers and games work together, which greatly reduces the chances of it running at full performance.

    It also seems everyone fails to realize that one of the biggest performance gains will come from AVX. AVX will be like MMX back when we had the Pentium. I can't think of any other SSE extension as important to performance as AVX. Once software is specifically optimized for AVX we should get another major lift in performance.

    I have also heard rumors that 64-bit will work much better on Sandy Bridge, but I don't know if there is anything we could use to test this.

    The OpenCL situation sounds like an Intel management decision rather than a technical one. Maybe Intel will provide, or work with Apple to provide, OpenCL on these GPUs?

    You also mention that Intel somehow supports PCI Express 2.0 with 1.0 performance. I don't get that bit. Could you elaborate? 2.5GT/s for the G45 chipset??

    If Intel ever decides to finally work on their drivers, then their GPUs will be great for entry level.

    Is dual-channel DDR3-1333 enough for a quad-core CPU + GPU, or even a dual-core CPU?
    Is the GPU memory bandwidth limited?

    Any update on the hardware decoder? And what about the transcoding part?

    Would there be a way to lock the GPU at its turbo clock all the time? Or does the GPU get higher priority for turbo, etc.?

    How big is the die?

    P.S. - (Any news on the Intel G3 SSD? I'm getting worried that the next-gen SandForce is too good for Intel.)
  • ssj4Gogeta - Saturday, August 28, 2010 - link

    I believe EU means execution units.
  • DanNeely - Sunday, August 29, 2010 - link

    "You also mention that Intel somehow support PCI -Express 2.0 with 1.0 performance. I dont get that bit there. Could you elaborate? 2.5GT/s for G45 Chipset??"

    PCIe 2.0 included other low-level protocol improvements in addition to the doubled clock speed. Intel only implemented the former, probably because the latter would have strangled the DMI bus.

    "Are Dual Channel DDR3 1333 enough for Quad Core CPU + GPU? or even Dual core CPU."

    Probably. The performance gain vs. the previous generation isn't that large, and it was enough for anything except pathological test cases (e.g. memory benchmarks). If it weren't, there'd be nothing stopping Intel from officially supporting DDR3-1600 in their locked chipsets to give a bit of extra bandwidth.
  • chizow - Saturday, August 28, 2010 - link

    @Anand

    Could you please clarify and expand on this comment? Is this true for all Intel chipsets that claim support for PCIe 2.0?

    [q]The other major (and welcome) change is the move to PCIe 2.0 lanes running at 5GT/s. Currently, Intel chipsets support PCIe 2.0 but they only run at 2.5GT/s, which limits them to a maximum of 250MB/s per direction per lane. This is a problem with high bandwidth USB 3.0 and 6Gbps SATA interfaces connected over PCIe x1 slots. With the move to 5GT/s, Intel is at feature parity with AMD’s chipsets and more importantly the bandwidth limits are a lot higher. A single PCIe x1 slot on a P67 motherboard can support up to 500MB/s of bandwidth in each direction (1GB/s bidirectional bandwidth).[/q]

    If this is true, current Intel chipsets don't really support PCIe 2.0, as 2.5GT/s and 250MB/s is the same effective bandwidth as PCIe 1.1. How did you come across this information? I was looking for ways to measure PCIe bandwidth but only found obscure proprietary tools that aren't publicly available.

    If Intel chipsets are only running at PCIe 1.1 regardless of what they're claiming externally, that would explain some of the complaints/concerns about bandwidth on older Intel chipsets.
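
For reference, the bandwidth figures being discussed above follow directly from the link rate and PCIe's 8b/10b encoding (these are standard PCIe numbers, not measurements from this article):

    2.5 GT/s × (8/10 encoding) = 2.0 Gb/s = 250 MB/s per lane, per direction  (PCIe 1.x rate)
    5.0 GT/s × (8/10 encoding) = 4.0 Gb/s = 500 MB/s per lane, per direction  (PCIe 2.0 rate)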
