AMD’s Integrated HD 4290 vs. Intel Integrated HD Graphics

As I mentioned at the start of this article, AMD’s 890GX graphics are identical in every way (other than model number) to the 790GX integrated graphics. Earlier this year, Intel introduced its biggest leap to date in integrated graphics performance with its Intel HD Graphics.

Found only on the physical CPU package of Core i5, i3 and Pentium G-series CPUs, Intel HD Graphics is the first step towards on-die graphics integration. Today, Intel uses graphics performance as a differentiator between various CPU price points. Depending on what CPU you buy, you either get Intel’s HD Graphics clocked at 533MHz, 733MHz or 900MHz. In order to provide the clearest picture, I’ve included results from all Clarkdale CPUs.
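To make that segmentation concrete, the clock-per-model breakdown can be sketched as a simple lookup. The mapping below is assembled from Intel's published Clarkdale specs rather than spelled out in this article, so treat it as illustrative:

```python
# IGP clock per Clarkdale CPU model (MHz). Only the Core i5 661 gets the
# 900MHz graphics clock; the entry-level Pentium drops to 533MHz, and the
# rest of the lineup sits at 733MHz.
igp_clock_mhz = {
    "Pentium G6950": 533,
    "Core i3 530": 733,
    "Core i3 540": 733,
    "Core i5 650": 733,
    "Core i5 660": 733,
    "Core i5 661": 900,
    "Core i5 670": 733,
}

print(igp_clock_mhz["Core i5 661"])  # 900
```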

AMD’s graphics core still resides on the motherboard, so all 890GX boards run their Radeon HD 4290 at the same 700MHz clock. To show the influence of the CPU on integrated graphics performance I’ve included results from both a Phenom II X4 965 and an Athlon II X2 255. As I’ve already shown that there’s no difference in performance between the 790GX and 890GX, I’m only using 890GX numbers in these charts.

Many of you have asked us to compare integrated graphics to low-end discrete graphics. For this specific purpose I’ve included results with a Radeon HD 5450.

Batman: Arkham Asylum

Intel technically matches AMD’s performance here, but it takes a $210 CPU to get there. The more mainstream Core i5s and i3s use the slower 733MHz graphics clock and aren’t as fast as the 890GX’s HD 4290.

Dragon Age Origins

We see a similar story under Dragon Age. If you have a Core i5 661 then you get clearly better performance than AMD, but pick any of the other i3s or i5s and you’re about equal. Pair the 890GX with a more reasonable AMD CPU and Intel takes a small lead.

Dawn of War II

The Core i5s all post roughly the same numbers as the 890GX. If you opt for a more reasonable AMD processor, the gap widens. Now compared to the Core i3s, it boils down to what CPU you have. If you have a quad-core AMD you’re probably faster on the 890GX. If you have a dual-core AMD, you’re probably faster with Intel.

The fact that picking the winner is this conditional means that Intel has come a long way in integrated graphics performance.

Modern Warfare 2

AMD has the clear victory here. The 890GX’s graphics core is a good 35% faster than Intel’s fastest.

BioShock 2

BioShock 2 is a pain to play on either platform, even at the lowest settings. Upgrading to a $50 graphics card makes a huge difference in playability. Intel’s Core i5 661 is technically the fastest, followed by the 890GX with a Phenom II X4 965, then the rest of the i3/i5 lineup, and finally the 890GX with an Athlon II.

World of Warcraft

I had to drop the quality settings in my WoW test from Good to Fair in order to get something playable on these IGPs. That doesn’t change the results, though: Intel loses out on another key benchmark here. WoW is noticeably faster on the 890GX, regardless of CPU.

HAWX

AMD scores another win in HAWX.

Overall Intel has significantly improved its integrated graphics, but it loses on a couple of key benchmarks (Modern Warfare 2 and World of Warcraft). I’d call Intel HD Graphics in the same league as AMD’s 890GX (something AMD could have avoided by actually increasing performance this generation), but when AMD wins it tends to be by a larger margin.

The 890GX also benefits from the much wider compatibility testing and optimization done for AMD’s other GPUs, which makes it the safer bet if your goal is being able to run every game.

I do have to say though, the fact that a $50 video card can offer roughly twice the performance of either integrated graphics solution is telling. Integrated graphics can be useful for playing a small subset of games, but you still need a discrete solution to have fun.

Power Consumption

While we're presenting power consumption data here, it's not a very balanced comparison. Our H55 test motherboard is a micro-ATX design without any fancy features like USB 3.0 and uses scaled-down power delivery. Our 890GX board, however, is a fully equipped ATX board from Gigabyte and can easily draw another 10W+ over a reduced micro-ATX design.

With nearly identical motherboards, we would expect x264 playback power consumption to be relatively similar between Intel and AMD integrated platforms. It's also not a far stretch to imagine an AMD system with similar idle power given how close the Athlon II X2 255 is to the H55 setup. The only situation where AMD doesn't stand a chance is under 3D gaming load.

Dual Graphics Support

AMD’s 890GX chipset does support two PCIe x8 slots for running cards in CrossFire mode, but you do have one more option for flexibility. If you have a Radeon HD 5450 and happen to be running one of the following games: Battleforge, BioShock, Company of Heroes, HAWX, World in Conflict or 3DMark Vantage then the 890GX’s IGP will work in tandem with the 5450.

Unfortunately I could not get Dual Graphics mode working on my testbed. AMD is calling this a preview at this point (hence the limited hardware and software support), so we may see a more extensive rollout in the future.

Generally, after a certain point it doesn’t make sense to use the integrated graphics core to boost performance. With 40 SPs the integrated Radeon HD 4290 can add 50% to the core count of a 5450. But go up to a 5670 and the 890GX can offer only 10%, and even less if you take into account clock frequencies.
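As a back-of-the-envelope check, the contribution math above can be sketched like this. The SP counts and clocks are the published specs for these parts, and the linear shader-throughput scaling is a simplifying assumption for illustration:

```python
# Rough estimate of what the IGP adds in AMD's Dual Graphics mode,
# assuming performance scales linearly with SP count x clock.

def igp_boost(igp_sps: int, igp_mhz: int, card_sps: int, card_mhz: int) -> float:
    """Return the IGP's shader throughput as a fraction of the discrete card's."""
    return (igp_sps * igp_mhz) / (card_sps * card_mhz)

# Radeon HD 4290 IGP: 40 SPs @ 700MHz
# Radeon HD 5450: 80 SPs @ 650MHz
# Radeon HD 5670: 400 SPs @ 775MHz
print(f"vs. 5450: {igp_boost(40, 700, 80, 650):.0%}")   # vs. 5450: 54%
print(f"vs. 5670: {igp_boost(40, 700, 400, 775):.0%}")  # vs. 5670: 9%
```

Once clocks are factored in, the ~50% core-count boost over a 5450 works out to roughly half again the card's throughput, while against a 5670 the IGP's contribution drops below 10%.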

The 890GX + 5450 (or similar class hardware) would seem to be the only combination that makes sense.

65 Comments

  • GullLars - Tuesday, March 2, 2010 - link

    Since the SB850 also supports RAID, it would be nice to see what the bandwidth and IOPS ceiling is when RAIDing SSDs. The pictures say 2 GB/s of bandwidth, but I'm skeptical. That would put it on par with the LSI 9211-4i, only with more ports, so it would be possible to reach higher bandwidth with cheap SATA 3Gbps RAID.
    Depending on scaling, you could get about 1200MB/s with X25-Vs or 1500+ MB/s with X25-Ms, but would the IOPS scale?

    I also have a request for a re-run of your 4KB random read numbers for the SB750/850 vs. ICH10R, as testing at low queue depth on a Vertex LE makes no sense when comparing available performance (only access-time overhead, which would be better measured at QD 1). If you redo the test at QD 32, you will see numbers of roughly 150MB/s, and then maybe there will be more of a difference than at ~40MB/s, especially when you factor in the additional management overhead of NCQ at higher queue depths. Even Indilinx Barefoot drives can do 60MB/s of 4KB random reads at QD 5 (which, according to your traces, is within the average QD in some scenarios).
  • sbrown23 - Tuesday, March 2, 2010 - link

    I had hoped to see the Cheese Slices thing make another appearance here. Is video playback performance improved at all, or just the same as 790GX?

    On a side note, does anyone know how the Core i3 HD graphics does with Cheese Slices? Does it look like crap? Wondering if an i3 is suitable for a Media Center/streaming media PC.
  • Tek92010 - Tuesday, March 2, 2010 - link

    Hi, ever since the ATI purchase I've been hearing less and less about new nVidia chipsets for the AMD platform. I can understand the issues they are having competing on the Intel side of the fence; however, it really irks me to see them give up on something they have been doing so well for so long. In my opinion nVidia still has the best solution on the AMD side, even more so if you're running nVidia graphics.

    Their USB 2.0 performance, disk performance and AHCI support is up there with Intel's. Their inter-chipset communications speed is up to mark and their IGP performance is acceptable. Not to mention the general stability and maturity of their chipsets and drivers, so why then have they been so quiet on the desktop chipset side for so long?

    Why haven't they updated their AMD IGPs to be on par with the excellent GeForce 9300/9400 solutions they produced for the Intel platform? They announced upcoming ACC support shortly after AMD started showing off what new tricks it could do with its then-new SB750. How long do we have to wait for that to be implemented?

    Can someone who knows please tell me the truth about what nVidia's future chipset plans are on both the AMD and Intel fronts. I am feeling as if I will be forced to make a platform change sooner or later and I really don't want to go with AMD's current solutions until they address their relatively sub-par disk and AHCI performance.
  • Tek92010 - Wednesday, March 3, 2010 - link

    After doing some reading last night I learned that nVidia does have an answer to AMD's ACC called NCC (nVidia Core Calibration). Apparently it allows for the same core unlocking and better overclocking on Phenom as AMD's ACC.

    http://blog.the-odyssey.co.cc/?p=179
  • Paladin1211 - Wednesday, March 3, 2010 - link

    Quoted:

    "until they address their relatively sub-par disk and AHCI performance."

    It's not sub-par, mate. USB 3.0 performance is as good as Intel's, and with a midrange SSD you'd see the same speed as with Intel's AHCI, too. Only when you slap in a high-end SSD does it start to differ.

    Given that the vast majority of people don't even use a single low-end SSD, let alone a high-end one, the SB850 performs more than well enough. Enthusiasts will go for at least an i7 920 or 860; they won't be dealing with a Phenom II 955 or SB850 either.

    Sub-par disk and AHCI performance? You must be joking!
  • Tek92010 - Wednesday, March 3, 2010 - link

    Quoted:

    "sub·par (sb-pär)
    adj.
    1. Not measuring up to traditional standards of performance, value, or production.
    2. Below par in a hole, round, or game of golf."

    The American Heritage® Dictionary of the English Language, Fourth Edition copyright ©2000 by Houghton Mifflin Company. Updated in 2009. Published by Houghton Mifflin Company. All rights reserved.

    It's sub-par to me compared to nVidia and Intel chipsets, which won't limit my or my customers' high-end SSD performance. I was never talking about USB 3.0 performance. What qualifies you to make such a broad, generalizing statement about the processor choices of enthusiasts? Why wouldn't an enthusiast build a system with an AMD Phenom II 955 instead of opting for at least an i7 920 or 860, just because you said so? The advent of SSDs has shown many people a better way to spend their system budget. Some may now opt for a cheaper, lower-performing, "just good enough" processor in order to afford a good SSD, and might end up with a better overall experience than someone with an unbalanced, mechanical-hard-drive-bottlenecked system.

    It is quite possible to build an excellent gaming system, for example, based on the AMD Phenom II 955 and any of the midrange to high-end GPU options available today. At high image quality settings the Phenom II 955 will surely provide an acceptable and playable frame rate compared to its i7 920 or 860 counterparts.

    Don't forget the whole point of my post. It was really about what is going on at the nVidia camp. If nVidia were to stop making chipsets for AMD then we would all be stuck with something that is not on the same level as Intel in terms of AHCI and general disk performance. If that were to happen then it might push many system builders towards opting for Intel. Not that anything's wrong with that though.
  • strikeback03 - Wednesday, March 3, 2010 - link

    Remember though, AMD isn't looking for just budget shoppers but ideally would like to market to the small number of big spenders too. And those big spenders won't be spending with AMD if the expensive SSD they have bought doesn't work as well.
  • Ryun - Tuesday, March 2, 2010 - link

    nVidia isn't making an LGA 1156/1366 chipset because they don't have a QPI license from Intel. Intel is legally blocking them from making chipsets for the new Core i processors (and the new Atom as well), but last I heard there was pending litigation.

    I don't have a concrete reason for nVidia not making AMD boards anymore, but my guess would be: a) AMD doesn't have a very good product lineup in the mobile segment, and that's the one seeing the most growth at the moment, and I don't think nVidia wants to invest in a platform that doesn't bring in the dough; and b) nVidia is still upset about AMD buying ATI and creating its own in-house chipsets.

    That said, I agree. If I had to choose between ATI and nVidia chipsets I would choose nVidia.
  • Tek92010 - Tuesday, March 2, 2010 - link

    Can you find out nVidia's official position on and reasoning for their lack of AMD chipset innovation and competition Anand?

    There was a time when the chipset market was very exciting. Are we ever going to return to the glory days of VIA vs SiS vs AMD vs nVidia vs Intel?

    Also, why haven't you used nVidia chipsets in your testing of AMD platforms for so long? nVidia platforms might actually make them look a little better in the benchmarks than their own in-house designs, or are the AMD chipsets better than nVidia's now in your opinion?
