Compute

As always we'll start with our DirectCompute game example, Civilization V, which uses DirectCompute to decompress textures on the fly. Civ V includes a sub-benchmark that exclusively tests the speed of its texture decompression algorithm by repeatedly decompressing the textures required for one of the game's leader scenes. While DirectCompute is used in many games, this is one of the few games with a benchmark that can isolate the use of DirectCompute and its resulting performance.
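Civ V's actual shader isn't public, but the shape of the workload is easy to illustrate: BC-family texture formats encode each 4×4 pixel block in a small fixed-size chunk, and every block decodes independently of its neighbors, which is exactly what a compute shader wants. As a rough sketch (plain Python standing in for the GPU kernel, and assuming the common BC1/DXT1 layout rather than whatever format Civ V actually streams), here is what decoding one block involves:

```python
import struct

def expand565(v):
    """Expand a packed RGB565 value to an (r, g, b) tuple of 8-bit channels."""
    r = (v >> 11) & 0x1F
    g = (v >> 5) & 0x3F
    b = v & 0x1F
    # Replicate the high bits into the low bits to cover the full 0-255 range.
    return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))

def decode_bc1_block(block):
    """Decode one 8-byte BC1/DXT1 block into 16 (r, g, b) pixels."""
    c0, c1, bits = struct.unpack("<HHI", block)
    p0, p1 = expand565(c0), expand565(c1)
    if c0 > c1:  # four-color mode: endpoints plus two interpolated colors
        palette = [p0, p1,
                   tuple((2 * a + b) // 3 for a, b in zip(p0, p1)),
                   tuple((a + 2 * b) // 3 for a, b in zip(p0, p1))]
    else:        # three-color mode: endpoints, midpoint, and (transparent) black
        palette = [p0, p1,
                   tuple((a + b) // 2 for a, b in zip(p0, p1)),
                   (0, 0, 0)]
    # Each pixel is a 2-bit index into the palette, 16 pixels per block.
    return [palette[(bits >> (2 * i)) & 0x3] for i in range(16)]
```

On a GPU, one thread (or thread group) handles one block, so thousands of blocks decode in parallel; the serial Python loop over 16 pixels is what a DirectCompute dispatch spreads across the whole texture at once.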

Compute: Civilization V

AMD does extremely well in our sole DirectCompute test, outperforming Intel's latest desktop graphics solution by a huge margin.

Our next benchmark is LuxMark 2.0, the official benchmark of SmallLuxGPU 2.0. SmallLuxGPU is an OpenCL accelerated ray tracer that is part of the larger LuxRender suite. Ray tracing has become a stronghold for GPUs in recent years as it maps well to GPU pipelines, allowing artists to render scenes much more quickly than with CPUs alone.
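The reason ray tracing maps so well is that every ray is independent: a renderer like SmallLuxGPU can hand one ray per pixel to one work-item, with no communication between pixels. As a toy illustration (not SmallLuxGPU's code; a single hard-coded sphere and a pinhole camera of our own invention), here's the per-pixel work:

```python
import math

def ray_sphere_t(origin, direction, center, radius):
    """Return the nearest positive hit distance along a normalized ray,
    or None if the ray misses the sphere."""
    ox = origin[0] - center[0]
    oy = origin[1] - center[1]
    oz = origin[2] - center[2]
    b = ox * direction[0] + oy * direction[1] + oz * direction[2]
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - c          # discriminant of the quadratic in t
    if disc < 0:
        return None           # ray misses the sphere entirely
    t = -b - math.sqrt(disc)  # nearer of the two intersections
    return t if t > 0 else None

def render(width, height):
    """Shade each pixel independently -- the property GPUs exploit."""
    img = []
    for y in range(height):
        for x in range(width):
            # Map the pixel to a ray direction through a simple pinhole camera.
            dx = (x + 0.5) / width - 0.5
            dy = (y + 0.5) / height - 0.5
            norm = math.sqrt(dx * dx + dy * dy + 1.0)
            d = (dx / norm, dy / norm, 1.0 / norm)
            t = ray_sphere_t((0, 0, 0), d, (0, 0, 5), 1.0)
            img.append(0 if t is None else 255)  # hit = white, miss = black
    return img
```

Every iteration of the double loop reads nothing written by any other iteration, so an OpenCL implementation simply launches width × height work-items and lets the hardware schedule them.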

Compute: LuxMark 2.0

Haswell GT2's OpenCL performance can be very good, which is what we're seeing here. HD 4600 ends up being almost 60% faster than the Radeon HD 8670D.

Our third benchmark set comes from CLBenchmark 1.1. CLBenchmark contains a number of subtests; we're focusing on the most practical of them, the computer vision test and the fluid simulation test. The former is a useful proxy for computer imaging tasks where systems are required to parse images and identify features (e.g. humans), while fluid simulations are common in professional graphics work and games alike.
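CLBenchmark doesn't document its exact kernels, but feature-extraction pipelines almost always begin with neighborhood filters such as the Sobel gradient, and those are textbook GPU material: each output pixel reads a fixed 3×3 window and writes one value, with no dependency on any other output. A small Python sketch of the idea (illustrative only, not CLBenchmark's kernel):

```python
def sobel_magnitude(img, w, h):
    """Approximate gradient magnitude at each interior pixel of a row-major
    grayscale image. Each output depends only on a 3x3 input window, so
    every pixel can be computed by its own GPU work-item in parallel."""
    out = [0] * (w * h)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            p = lambda dx, dy: img[(y + dy) * w + (x + dx)]
            # Horizontal gradient: right column minus left column.
            gx = (p(1, -1) + 2 * p(1, 0) + p(1, 1)
                  - p(-1, -1) - 2 * p(-1, 0) - p(-1, 1))
            # Vertical gradient: bottom row minus top row.
            gy = (p(-1, 1) + 2 * p(0, 1) + p(1, 1)
                  - p(-1, -1) - 2 * p(0, -1) - p(1, -1))
            out[y * w + x] = abs(gx) + abs(gy)
    return out
```

Fluid simulation has the same structure at each solver step (e.g. a Jacobi pressure iteration reads a fixed stencil of neighbors and writes one cell), which is why both subtests reward wide, bandwidth-rich GPUs.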

Compute: CLBenchmark 1.1 Computer Vision

Compute: CLBenchmark 1.1 Fluid Simulation

AMD and Intel trade places once again with CLBenchmark. Here, Richland does extremely well.

Our final compute benchmark is Sony Vegas Pro 12, an OpenGL and OpenCL video editing and authoring package. Vegas can use GPUs in a few different ways, the primary uses being to accelerate the video effects and compositing process itself, and in the video encoding step. With video encoding being increasingly offloaded to dedicated DSPs these days, we're focusing on the editing and compositing process, rendering to a low CPU overhead format (XDCAM EX). This specific test comes from Sony, and measures how long it takes to render a video.
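The compositing work Vegas offloads boils down to enormous numbers of independent per-pixel blends. The standard building block is the Porter-Duff "over" operator; a minimal sketch of it (straight-alpha floats in [0, 1], illustrative rather than Vegas's actual pipeline):

```python
def over(src, dst):
    """Porter-Duff 'over': composite a straight-alpha RGBA source pixel
    on top of a destination pixel. All channels are floats in [0, 1]."""
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    out_a = sa + da * (1.0 - sa)          # combined coverage
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)       # fully transparent result
    # Weight each color by its layer's contribution, then un-premultiply.
    blend = lambda s, d: (s * sa + d * da * (1.0 - sa)) / out_a
    return (blend(sr, dr), blend(sg, dg), blend(sb, db), out_a)
```

Applied across every pixel of every layer of every frame, this is the kind of arithmetic-dense, dependency-free work where an OpenCL device can shoulder most of the render.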

Compute: Sony Vegas Pro 12 Video Render

The last compute test goes to Intel, although the two put up a good fight across the entire suite.

Comments

  • Death666Angel - Sunday, June 9, 2013 - link

    "Who would buy it?" If it was just the added cost of the eDRAM put on top of the -K SKU (so 50€ or something on top of the i5-4670K and i7-4770K) I'd buy it in a heartbeat. First of all, it offers better QS functionality and second of all, the 128MB L4 cache is not exclusive to the iGPU, but can be used for the CPU as well, which offers some serious performance gains in programs that can make use of it.
  • shinkueagle - Sunday, June 9, 2013 - link

    Because it's stupid to make such a comparison... And even more stupid of you to bring up such NONSENSE....
  • gfluet - Monday, June 10, 2013 - link

    Mostly because there are no desktop Crystalwells yet, and the comparison is between socketed CPUs.

    But yeah, I look forward to when AnandTech gets a review model of the i7-4770R. I want to put one of those in a supercompact system.
  • Ewram - Thursday, June 6, 2013 - link

    Excuse me, but what is the MSRP of the A10-6800k versus the i7-4770k? Also, wouldn't benchmarks also be affected by CPU performance to at least some extent?
  • 3DoubleD - Thursday, June 6, 2013 - link

    Given how GPU and memory bandwidth limited these systems are, I'm sure the difference in CPU performance plays only a small, if not negligible, role in the final score.

    Even if we were talking a single 7970, the difference between AMD and Intel was pretty insignificant http://anandtech.com/show/6985/choosing-a-gaming-c...
  • CannedTurkey - Thursday, June 6, 2013 - link

    The i7-4770 is roughly double the price of the A10-6800.
  • BSMonitor - Thursday, June 6, 2013 - link

    MSRP really isn't a valid comparison here as they are entirely different price points/target audiences. The point is to test the iGPU capability.

    AMD and Intel have very different approaches to iGPU and processor SKUs today. AMD and its Fusion parts specifically target the low price points where AMD believes the value of an iGPU is most attractive. The CPU cores are similar to its FX line, but it's an entirely different die from its flagship desktop parts, which have NO iGPU whatsoever.

    Intel on the desktop for the most part has a single die for all its mainstream Core i7s down to budget Core i3s and Pentiums. The Core i7's iGPU isn't really focused on giving a budget gaming experience. And this is where Anand's criticism is aimed: they could make an amazing APU with a very balanced iGPU and CPU on the high-end desktop parts but have chosen not to. It would seem the powers that be have decided there is no market for Iris Pro in its high-end desktop parts.

    MSRP would be a valid comparison in the Mobile Core i7 with Iris Pro vs the Richland Mobile parts.
  • silverblue - Thursday, June 6, 2013 - link

    Perhaps. In that case, the price of the CPU would be partially obscured by the total BOM. If Iris Pro is that good, and you got double performance for twice the price, it wouldn't be too bad.
  • Concillian - Thursday, June 6, 2013 - link

    "Intel on the desktop for the most part has a single die for all its mainstream Core i7s down to budget Core i3s and Pentiums."

    Not true. Dual core and quad core have had different silicon since they started the i3 / i5 / i7 naming convention. I'm no mobile expert, but I know that on the desktop i3 has never had the same die as i7.
  • eanazag - Thursday, June 6, 2013 - link

    The A10-6800K is sitting around $150 on Newegg, while the 4770K is pushing $349. The comparison is still sensible and useful. Spend less money on an Intel CPU and the clocks go down. So in an iGPU setting for gaming AMD makes more sense, but if you throw a discrete card into the mix you'll have to rethink what your goals are. After staring at those prices, for a gaming-only rig I might rather spend the price difference on a discrete card and call it a day if the monitor resolution is 1080p or less.
