Compute

As always we'll start with our DirectCompute game example, Civilization V, which uses DirectCompute to decompress textures on the fly. Civ V includes a sub-benchmark that exclusively tests the speed of its texture decompression algorithm by repeatedly decompressing the textures required for one of the game's leader scenes. While DirectCompute is used in many games, this is one of the few games with a benchmark that can isolate the use of DirectCompute and its resulting performance.
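Civ V's actual shader isn't public, but the general shape of GPU texture decompression is easy to illustrate. Below is a minimal CPU-side Python sketch of decoding a single BC1 (DXT1) block, the standard compressed texture format; the function names are our own, and a DirectCompute version would simply decode thousands of these 4x4 blocks in parallel, one per thread.

```python
import struct

def rgb565_to_rgb888(c):
    # Expand the packed 5/6/5-bit channels back to 8 bits each
    r = (c >> 11) & 0x1F
    g = (c >> 5) & 0x3F
    b = c & 0x1F
    return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))

def decode_bc1_block(block):
    """Decode one 8-byte BC1 block into 16 RGB texels (row-major 4x4)."""
    c0, c1, bits = struct.unpack("<HHI", block)
    p0, p1 = rgb565_to_rgb888(c0), rgb565_to_rgb888(c1)
    if c0 > c1:  # 4-colour mode: two interpolated colours between the endpoints
        p2 = tuple((2 * a + b) // 3 for a, b in zip(p0, p1))
        p3 = tuple((a + 2 * b) // 3 for a, b in zip(p0, p1))
    else:        # 3-colour mode: midpoint plus transparent black
        p2 = tuple((a + b) // 2 for a, b in zip(p0, p1))
        p3 = (0, 0, 0)
    palette = (p0, p1, p2, p3)
    # Each texel is a 2-bit index into the palette, packed LSB-first
    return [palette[(bits >> (2 * i)) & 0x3] for i in range(16)]
```

Because every block is independent and the per-block work is branch-light integer math, this workload maps almost perfectly onto a GPU's wide SIMD execution.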

Compute: Civilization V

AMD does extremely well in our sole DirectCompute test, outperforming Intel's latest desktop graphics solution by a huge margin.

Our next benchmark is LuxMark 2.0, the official benchmark of SmallLuxGPU 2.0. SmallLuxGPU is an OpenCL accelerated ray tracer that is part of the larger LuxRender suite. Ray tracing has become a stronghold for GPUs in recent years, as the workload maps well to GPU pipelines, allowing artists to render scenes much more quickly than with CPUs alone.
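Why ray tracing maps so well to GPUs comes down to independence: every pixel's ray can be tested against the scene without coordinating with its neighbours. As a rough illustration (not LuxRender's actual code), here is a vectorized ray-sphere test that shades an entire screen in one data-parallel step, mirroring how an OpenCL tracer assigns one work-item per pixel:

```python
import numpy as np

def trace_sphere(width, height, center, radius):
    """Shade each pixel by ray-sphere intersection, one 'thread' per pixel.

    Rays start at the origin and pass through an image plane at z=1; the
    whole screen is tested in a single vectorized step.
    """
    # Build one unit direction vector per pixel
    xs = np.linspace(-1, 1, width)
    ys = np.linspace(-1, 1, height)
    dx, dy = np.meshgrid(xs, ys)
    dirs = np.stack([dx, dy, np.ones_like(dx)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

    # Solve |t*d - c|^2 = r^2, i.e. t^2 - 2t(d.c) + |c|^2 - r^2 = 0
    b = dirs @ np.asarray(center, dtype=float)
    c = np.dot(center, center) - radius * radius
    disc = b * b - c
    hit = disc >= 0
    t = np.where(hit, b - np.sqrt(np.maximum(disc, 0.0)), np.inf)
    # Toy shading: brightness falls off with hit distance
    return np.where(hit, 1.0 / t, 0.0)
```

A real tracer adds bounces, materials, and acceleration structures, but the per-pixel independence shown here is exactly what lets a GPU keep thousands of rays in flight at once.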

Compute: LuxMark 2.0

Haswell GT2's OpenCL performance can be very good, and that's exactly what we see here: the HD 4600 ends up almost 60% faster than the Radeon HD 8670D.

Our third benchmark set comes from CLBenchmark 1.1. CLBenchmark contains a number of subtests; we're focusing on the most practical of them, the computer vision test and the fluid simulation test. The former is a useful proxy for computer imaging tasks where systems must parse images and identify features (e.g. humans), while fluid simulations are common in professional graphics work and games alike.
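CLBenchmark's kernels aren't something we can reproduce here, but the data-parallel core of a computer vision workload is easy to sketch: most feature pipelines start with per-pixel neighbourhood operations like gradient filtering, where each output depends only on a small window of inputs. Below is a vectorized Sobel edge detector as a stand-in example (our own code, not the benchmark's):

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map.

    Each output pixel depends only on its 3x3 neighbourhood, so every
    pixel can be computed independently, which is what makes this class
    of workload such a natural fit for OpenCL.
    """
    img = np.asarray(img, dtype=float)
    p = np.pad(img, 1, mode="edge")
    # Horizontal and vertical Sobel responses via shifted array views
    gx = (p[1:-1, 2:] - p[1:-1, :-2]) * 2 \
       + (p[:-2, 2:] - p[:-2, :-2]) \
       + (p[2:, 2:] - p[2:, :-2])
    gy = (p[2:, 1:-1] - p[:-2, 1:-1]) * 2 \
       + (p[2:, :-2] - p[:-2, :-2]) \
       + (p[2:, 2:] - p[:-2, 2:])
    return np.hypot(gx, gy)
```

Grid-based fluid simulation has the same character: each cell updates from its neighbours every timestep, so both subtests stress the GPU in broadly similar, memory-bandwidth-heavy ways.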

Compute: CLBenchmark 1.1 Computer Vision

Compute: CLBenchmark 1.1 Fluid Simulation

AMD and Intel trade places once again with CLBenchmark. Here, Richland does extremely well.

Our final compute benchmark is Sony Vegas Pro 12, an OpenGL and OpenCL video editing and authoring package. Vegas can use GPUs in a few different ways, primarily to accelerate the video effects and compositing process itself, and to assist in the video encoding step. With video encoding increasingly offloaded to dedicated DSPs these days, we're focusing on the editing and compositing process, rendering to a low CPU overhead format (XDCAM EX). This specific test comes from Sony, and measures how long it takes to render a video.

Compute: Sony Vegas Pro 12 Video Render

The last compute test goes to Intel, although the two put up a good fight across the entire suite.

Comments
  • FriendlyUser - Thursday, June 6, 2013 - link

    Indeed, there is a $468 part. You can still fit a decent dGPU and a decent CPU on that budget for, once again, vastly superior performance. And you don't need Crossfire, but you do lose on power consumption, which is the only point in Iris's favor.
  • iwod - Thursday, June 6, 2013 - link

    I wonder how much of a discount OEMs generally get from Intel. 30% off the $440 tray price would be ~$308/chip? If the CPU used to cost them $200 and $100 for the GPU, I guess the space saving of a 2-in-1 solution and lower power usage, while giving similar performance, is going to be attractive enough.
  • testbug00 - Friday, June 7, 2013 - link

    My desktop cost less than that... Mine is probably only a little slower, even with a 1.1GHz GPU and 4.4GHz CPU (my A10-5800K w/ 1866 memory OCed to 2133)
  • Sabresiberian - Friday, June 7, 2013 - link

    Yah, for me, the only consideration for a system with on-die CPU graphics is if I buy a low-end notebook that I want to do a little gaming on, and the chips with Iris price themselves out of that market. I've recommended AMD for that kind of product to my friends before, and I don't see any reason to change that.
  • Sabresiberian - Friday, June 7, 2013 - link

    What does Crossfire have to do with it? Using on-die graphics with an added discrete card doesn't have anything to do with Crossfire.
  • max1001 - Friday, June 7, 2013 - link

    Because AMD likes to call the APU+GPU combo Hybrid Crossfire.
  • Spunjji - Friday, June 7, 2013 - link

    Who said anything about Crossfire?!
  • MrSpadge - Thursday, June 6, 2013 - link

    No, Crystalwell also makes sense on any high-performance part. Be it the topmost desktop K-series or the Xeons. That cache can add ~10% performance in quite a few applications, which equals 300 - 500 MHz more CPU clock. And at $300 there'd easily be enough margin left for Intel. But no need to push such chips...
  • Gigaplex - Thursday, June 6, 2013 - link

    There isn't a single K-series part with Crystalwell.
  • mdular - Thursday, June 6, 2013 - link

    As others have already pointed out it's not the "most important information" at all. Crystalwell isn't available on a regular desktop socket.

    Most importantly though, that is also for a good reason: Who would buy it? At the price point of the Crystalwell equipped CPUs you would get hugely better gaming performance with an i3/i5/FX and a dedicated GPU. You can build an entire system from scratch for the same amount and game away with decent quality settings, often high - in full HD.

    There is a point to make for HTPCs and gaming laptops/laplets, but I would assume that they don't sell a lot of them at the Crystalwell performance target.

    Since the article is about desktops, however, and considering all of the above, Crystalwell is pretty irrelevant in this comparison. If you seek the info on Crystalwell performance, I guess you will know where to find it.
