Compute

As always we'll start with our DirectCompute game example, Civilization V, which uses DirectCompute to decompress textures on the fly. Civ V includes a sub-benchmark that exclusively tests the speed of its texture decompression algorithm by repeatedly decompressing the textures required for one of the game's leader scenes. While DirectCompute is used in many games, this is one of the few games with a benchmark that can isolate the use of DirectCompute and its resulting performance.
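
Civ V's actual decompression shader isn't public, but the shape of the workload is block-compressed texture decoding. As a rough CPU-side sketch (assuming a standard BC1/DXT1 block layout, not the game's real format or code), the Python below decodes a single 8-byte block; a DirectCompute implementation runs this same per-block logic with one thread per block, which is what makes the workload such a clean GPU test.

```python
import numpy as np

def rgb565_to_rgb8(c):
    """Expand a 16-bit RGB565 endpoint to three 8-bit channels."""
    r, g, b = (c >> 11) & 0x1F, (c >> 5) & 0x3F, c & 0x1F
    # Replicate the high bits into the low bits to reach the full 0-255 range.
    return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))

def decode_bc1_block(block):
    """Decode one 8-byte BC1/DXT1 block into a 4x4 array of RGB texels."""
    c0 = int.from_bytes(block[0:2], "little")
    c1 = int.from_bytes(block[2:4], "little")
    p0, p1 = rgb565_to_rgb8(c0), rgb565_to_rgb8(c1)
    if c0 > c1:   # four-color mode: two interpolated colors
        palette = [p0, p1,
                   tuple((2 * a + b) // 3 for a, b in zip(p0, p1)),
                   tuple((a + 2 * b) // 3 for a, b in zip(p0, p1))]
    else:         # three-color mode plus transparent black
        palette = [p0, p1,
                   tuple((a + b) // 2 for a, b in zip(p0, p1)),
                   (0, 0, 0)]
    indices = int.from_bytes(block[4:8], "little")  # 2 bits per texel
    out = np.zeros((4, 4, 3), dtype=np.uint8)
    for i in range(16):
        out[i // 4, i % 4] = palette[(indices >> (2 * i)) & 0x3]
    return out
```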

Compute: Civilization V

AMD does extremely well in our sole DirectCompute test, outperforming Intel's latest desktop graphics solution by a huge margin.

Our next benchmark is LuxMark 2.0, the official benchmark of SmallLuxGPU 2.0. SmallLuxGPU is an OpenCL-accelerated ray tracer that is part of the larger LuxRender suite. Ray tracing has become a stronghold for GPUs in recent years, as ray tracing maps well to GPU pipelines, allowing artists to render scenes much more quickly than with CPUs alone.
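
To see why ray tracing maps so well to GPU pipelines, note that each ray is an independent evaluation of the same intersection math over shared scene data. The NumPy sketch below is illustrative only (not SmallLuxGPU's code): batched ray-sphere intersection, the innermost operation of a ray tracer, evaluated for one primary ray per pixel. An OpenCL port would simply hand each ray to its own work-item.

```python
import numpy as np

def ray_sphere_hits(origins, dirs, center, radius):
    """Batched ray-sphere intersection: solve |o + t*d - c|^2 = r^2 per ray.
    origins, dirs: (N, 3) arrays with dirs normalized. Returns the nearest
    positive hit distance t for each ray, or inf where the ray misses."""
    oc = origins - center
    b = np.einsum("ij,ij->i", oc, dirs)          # per-ray dot(oc, d)
    c = np.einsum("ij,ij->i", oc, oc) - radius * radius
    disc = b * b - c                             # discriminant (a == 1)
    t = -b - np.sqrt(np.maximum(disc, 0.0))      # nearer quadratic root
    return np.where((disc >= 0.0) & (t > 0.0), t, np.inf)

# One primary ray per pixel of a 640x480 frame -- the same independent
# arithmetic a GPU runs with one work-item per pixel.
h, w = 480, 640
xs, ys = np.meshgrid(np.linspace(-1, 1, w), np.linspace(-0.75, 0.75, h))
dirs = np.dstack([xs, ys, np.ones_like(xs)]).reshape(-1, 3)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
origins = np.zeros_like(dirs)
t = ray_sphere_hits(origins, dirs, np.array([0.0, 0.0, 3.0]), 1.0)
depth = t.reshape(h, w)   # a shading pass would consume these distances
```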

Compute: LuxMark 2.0

Haswell GT2's OpenCL performance can be very good, and that's what we're seeing here: the HD 4600 ends up almost 60% faster than the Radeon HD 8670D.

Our third benchmark set comes from CLBenchmark 1.1. CLBenchmark contains a number of subtests; we're focusing on the most practical of them: the computer vision test and the fluid simulation test. The former is a useful proxy for computer imaging tasks where systems are required to parse images and identify features (e.g. humans), while fluid simulations are common in professional graphics work and games alike.
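
CLBenchmark's own kernels aren't published, so as an assumed stand-in for the kind of work the computer vision subtest stresses, the sketch below implements a Sobel edge-magnitude pass, the sort of per-pixel neighborhood filter that feature-detection pipelines typically start with.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel filters over a grayscale image.
    Each output pixel depends only on its 3x3 neighborhood, so a GPU
    version assigns one work-item per pixel with no synchronization."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    ky = kx.T                          # vertical-gradient kernel
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.float32)
    for y in range(h - 2):             # a GPU replaces these two loops
        for x in range(w - 2):         # with the work-item grid itself
            win = img[y:y + 3, x:x + 3]
            gx = float((win * kx).sum())
            gy = float((win * ky).sum())
            out[y, x] = (gx * gx + gy * gy) ** 0.5
    return out

# Example: edges of a synthetic image with a bright square in the middle.
img = np.zeros((64, 64), dtype=np.float32)
img[16:48, 16:48] = 1.0
edges = sobel_magnitude(img)   # strong response along the square's border
```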

Compute: CLBenchmark 1.1 Computer Vision

Compute: CLBenchmark 1.1 Fluid Simulation

AMD and Intel trade places once again with CLBenchmark. Here, Richland does extremely well.

Our final compute benchmark is Sony Vegas Pro 12, an OpenGL and OpenCL video editing and authoring package. Vegas can use GPUs in a few different ways, the primary ones being to accelerate the video effects and compositing process itself, and the video encoding step. With video encoding being increasingly offloaded to dedicated DSPs these days, we're focusing on the editing and compositing process, rendering to a low-CPU-overhead format (XDCAM EX). This specific test comes from Sony, and measures how long it takes to render a video.
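
Vegas's OpenCL internals are proprietary, but the heart of any compositing pass is per-pixel layer blending. As a hedged illustration of that kind of kernel, here is a Porter-Duff "over" composite in Python/NumPy; every pixel is independent, which is exactly the shape of work OpenCL devices chew through when stacking effect and title layers.

```python
import numpy as np

def alpha_over(fg, fg_alpha, bg):
    """Porter-Duff 'over': blend a foreground layer onto a background.
    fg, bg: (H, W, 3) float arrays in [0, 1]; fg_alpha: (H, W) coverage.
    Each output pixel is computed independently, so a GPU assigns one
    work-item per pixel."""
    a = fg_alpha[..., None]            # broadcast alpha across RGB
    return fg * a + bg * (1.0 - a)

# Example: fade a solid red title card over a mid-gray 1080p frame.
frame = np.full((1080, 1920, 3), 0.5, dtype=np.float32)
title = np.zeros_like(frame)
title[..., 0] = 1.0                    # pure red layer
alpha = np.full(frame.shape[:2], 0.35, dtype=np.float32)
composited = alpha_over(title, alpha, frame)
```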

Compute: Sony Vegas Pro 12 Video Render

The last compute test goes to Intel, although the two put up a good fight across the entire suite.

Comments

  • Will Robinson - Sunday, June 9, 2013

    LOL...NeelyCam must be crying his eyes out over those results.
    Good work AMD!
  • Wurmer - Sunday, June 9, 2013

    I still find this article interesting even if IGPs are certainly not the main focus for gamers. I don't consider myself a hardcore gamer, but I don't game on an IGP; I'm currently using a GTX 560, which gives me decent performance in pretty much any situation. On the other hand, the article gives an idea of the progress IGPs have made. I would certainly enjoy more performance from the one I use at work, a GMA 4500 paired with an E8400. There are markets for a good IGP, but gaming is not one of them. As I see it, IGPs are best suited to being paired with low- to mid-range CPUs, which would make for very decent all-around machines.
  • lordmetroid - Monday, June 10, 2013

    Using high-end games that will never be played on an integrated graphics processor is totally pointless. Why not use something like ETQW?
  • skgiven - Monday, June 10, 2013

    Looks like you used a 65W GT 640, released just over a year ago.
    You could have used the slightly newer and faster 49W or 50W models, or a 65W GTX 640 (37% faster than the 65W GT 640).
    Better still, a GeForce GT 630 Rev. 2 (25W) with the same performance as a 65W GT 640!
    (I'm sure you don't have every GPU that's ever been released lying around, so I'm just saying what's out there.)

    An i7-4770K, or one of its many siblings, costs ~$350.
    For most light gaming and GPU apps, a Celeron G1610T (35W) along with a 49W GT 640 would outperform the i7-4770K.
    The combined wattage is exactly the same (84W), but the price is only $140!
    Obviously the 25W GK208 GeForce GT 630 Rev. 2 would save you another $20 and give you a combined TDP of 60W; the i7-4770K's 84W is 40% more.
    It's likely that there will be a few more GT 600 Rev. 2 models, and the GK700 range has yet to fill out. Existing mid-range GPUs offer more than 5 times the GPU performance of the i7-4770K.
    The reasons for buying an i7 still have little or nothing to do with its GPU!
  • skgiven - Monday, June 10, 2013

    - meant GTX 645 (not GTX 640)
  • NoKidding - Monday, June 24, 2013

    I shudder to think what an A10 Kaveri can bring to the table, considering it'll be equipped with AMD's GCN architecture and additional IPC improvements. Low price + 4 cores + (possibly) Hybrid CrossFire with a 7xxx-series Radeon? A great starting point for a decent gaming rig. Not to mention that the minimum baseline for PC gaming will rise from decent to respectable.
  • Silma - Friday, June 28, 2013

    Sometimes I really don't understand your comparisons, and even less the conclusions.
    Why compare a Richland to a Haswell when they will obviously be used for totally different purposes? Who will purchase a desktop Haswell without a graphics card for gaming? Why use super-expensive 2133 memory with a super-bad processor?

    There are really three conclusions to be had:
    - CPU-wise, Richland sucks aplenty.
    - GPU-wise there is next to no progress compared to Trinity, the difference being fully explained by a small frequency increase.
    - If you want cheap desktop gaming, you will be much better served by a Pentium G2020 + Radeon HD 6670 or HD 7750 for the same price as a crappy A10-6800K or A10-6700.
  • XmenMR - Monday, September 2, 2013

    You make me laugh. I normally don't post comments on these things, since I read them just to get a laugh, but I do have to point out how wrong you are. I have a G1620, G2020, i3-3240, A8, A10 and more, and have run benchmarks with a 6450, 6570, 6670, 7730, 7750 and 7770 for budget gaming builds for customers.
    Your G2020-with-6670 build was, in my testing, beaten hands down by the A10-6800K in Hybrid CrossFire with a 7750 (yes, I said it, Hybrid CrossFire with a 7750; it can be done, though it isn't widely supported by AMD). A G2020 with a 6670 will run you about $130, and an A10 with a 7750 about $230. To match the A10 + HCF 7750 ($230) with Intel, I had to use a 7750/7770 or higher with the Pentiums; an i3 + 7750 ($210) did quite well but still lost in quite a few graphics-related tests.
    The point is that a discrete GPU changes the whole picture. An i3 + 7750 is very close to an A10 + HCF 7750 in more ways than just performance, but that's not the point of this topic, which was AMD's 8670D vs. Intel's HD 4600. I know lots of people who buy an Intel i5 or i7 and live off the iGPU, thinking that one day they'll have the money for a nice GPU; 60% of the time that never happens, newer and better tech comes out, and they end up wanting a whole new system. The APU, on the other hand, would have been cheaper and performed better for what they needed had they just gone that road, and I'm not the only one who has come to that conclusion. AMD has done a great job with the APU, and after testing many myself I've become a believer. A stock i5 computer for $700 got smashed by a stock A10 at $400 in CS6, sitting side by side; I could not believe it. I don't have to argue how well the APU is doing, because Microsoft and Sony have already done it. So I leave with a question: if the APU weren't a fantastic alternative that delivers a higher standard of graphics performance, why would it be going into the Xbox One and PS4?
  • ezjohny - Tuesday, September 10, 2013

    When are we going to get an APU where you can go in game and adjust the graphics settings to very high without a bottleneck?
  • nanomech - Sunday, December 8, 2013

    This is a slanted review. The i7 with a separate Nvidia card skews the results, perhaps erroneously, toward Intel. How about the A10 with the same separate Nvidia card, and/or a comparable separate AMD video card? The performance difference can be quite drastic.

    IMHO, one should compare apples to apples as much as possible. Doing so yields a much more complete comparison. I realize that these APUs tout their built-in graphics abilities, and Intel is trying to do the same, but it's the only way to give the CPU part of the APU a fair shake. That, or leave the i7/Nvidia results out completely.
