AMD Compatibility with Recent Titles

Along with performance, we also need to discuss compatibility for the recent releases. In something of a surprise, considering the AMD driver date and the release dates of the various games, we actually managed to run all of the games without major issues on both laptops. However, application-based switching didn’t always work for AMD, requiring us to use manual switching in a few instances. There was also at least one instance where manual switching had problems, requiring us to use dynamic switching. We’ll have a video and additional discussion of our concerns with the AMD UI and switchable graphics implementation on the next page, but here’s how the games stack up in terms of compatibility for both AMD and NVIDIA.

In the “works as expected” category, Duke Nukem Forever, Portal 2, and The Witcher 2 all ran without any noteworthy issues. Deus Ex: Human Revolution also ran fine, but there was no way to use application-based switching and have it run on the Intel IGP (no loss, really). The same problem occurred with DiRT 3, but with a few extra glitches. First, there was a black border on the right side of the screen—approximately 80 pixels wide—that shouldn’t be there; it was present regardless of resolution and even in windowed mode. Second, in manual switching mode DiRT 3 did not render properly when running full screen, but it worked in a window. Running in a window is not a good solution, so this is a pretty serious glitch. Super Street Fighter IV: Arcade Edition, on the other hand, ran fine on the IGP in dynamic mode, but on the dGPU the models wouldn’t animate properly and in general the game was unplayable. The workaround is to use manual switching (which may or may not be supported on all laptops—Sony’s VAIO C supports it, and HP added the fixed-function switching option in an updated BIOS for the dv6/dv7 laptops), after which the game runs properly. Also worth noting is that a few titles appear to run somewhat faster in manual switching mode, SSF4 and SC2 being two examples.

Besides the above six “new” titles, it’s important to note that all OpenGL titles are currently unsupported by dynamic switching (e.g. Enemy Territory: Quake Wars, Minecraft, presumably Rage when it launches, and as far as we know all other OpenGL apps/games). The workaround is to use fixed-function (manual) switching, similar to what we had to do for Street Fighter IV—which means you’ll want to make sure your laptop supports manual switching in some form. AMD informs us that they have a working solution for OpenGL dynamic switching, but it isn’t fully tested yet. It should come out in an updated driver, hopefully before the end of this year (*cough* Rage *cough*). Then we’ll need Sony and HP (and anyone else using AMD switchable graphics) to release their own updated drivers, and that feels like more of a question of “if” than “when”.
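If you do resort to fixed-function switching for an OpenGL title, it’s worth confirming which GPU the game actually ended up on. Here’s a minimal sketch of that check (not part of our test procedure): it creates a throwaway context and prints the vendor/renderer strings, and it assumes GLFW is installed purely to keep the context-creation boilerplate short.

```cpp
// which_gpu.cpp - ask OpenGL which GPU owns the current context. Under
// switchable graphics this is a quick way to confirm whether an OpenGL app
// landed on the Intel IGP or the discrete GPU. GLFW is used here only to
// avoid WGL boilerplate; any context-creation path reports the same strings.
// Example build: g++ which_gpu.cpp -lglfw3 -lopengl32
#include <GLFW/glfw3.h>
#include <cstdio>

int main() {
    if (!glfwInit())
        return 1;
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);   // no need to show a window
    GLFWwindow* win = glfwCreateWindow(64, 64, "probe", nullptr, nullptr);
    if (!win) {
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(win);

    // e.g. "Intel" / "Intel(R) HD Graphics 3000" vs. an AMD/ATI vendor string
    std::printf("Vendor:   %s\n", (const char*)glGetString(GL_VENDOR));
    std::printf("Renderer: %s\n", (const char*)glGetString(GL_RENDERER));
    std::printf("Version:  %s\n", (const char*)glGetString(GL_VERSION));

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}
```

If the renderer string still names the Intel IGP after you’ve switched over, the manual switch (or the profile) didn’t actually take effect.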

NVIDIA Compatibility and Thoughts

All of our discussions so far have centered on AMD’s Dynamic Switchable Graphics implementation and any problems we encountered. What about NVIDIA’s Optimus Technology? First, we should note that NVIDIA is at an advantage here, since the 280.26 WHQL drivers we used are only a month old (and there’s a new 285.27 beta driver from last week available now). While our testing is by no means fully comprehensive, so far the only issue we encountered out of the 16+ tested games is in Total War: Shogun 2. The game runs fine, but we are unable to select the Very High preset. Our best guess is that the game is querying the Intel IGP/drivers and limiting a few settings based on the detected capabilities. (We saw a similar issue with the older Empire: Total War in the past, though the last time we checked it was limited to the Medium preset.) For someone with a high-end laptop (e.g. GTX 580M), the Very High settings might be desirable, but for 99% of laptops you’ll need to run at High or even Medium settings to get acceptable performance from Shogun 2. Overall, NVIDIA’s Optimus Technology is clearly the more mature and easier-to-use dynamic switching technology right now.
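That guess is easy enough to illustrate: on an Optimus system the application usually enumerates only the Intel IGP as the display adapter, so any settings screen keyed off the detected adapter will cap itself accordingly. Here’s a rough sketch of that kind of startup probe using DXGI (illustrative only, not Shogun 2’s actual code):

```cpp
// adapters.cpp - the kind of capability probe a game might run at startup.
// Under Optimus the app typically sees only the Intel IGP here, which is
// likely why preset choices get limited. Example build: cl adapters.cpp dxgi.lib
#include <dxgi.h>
#include <cstdio>

int main() {
    IDXGIFactory1* factory = nullptr;
    if (FAILED(CreateDXGIFactory1(__uuidof(IDXGIFactory1), (void**)&factory)))
        return 1;

    IDXGIAdapter1* adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        // VendorId: 0x8086 = Intel, 0x10DE = NVIDIA, 0x1002 = AMD
        std::wprintf(L"Adapter %u: %ls (vendor 0x%04X, %llu MB dedicated VRAM)\n",
                     i, desc.Description, desc.VendorId,
                     (unsigned long long)(desc.DedicatedVideoMemory / (1024 * 1024)));
        adapter->Release();
    }
    factory->Release();
    return 0;
}
```

Whether Shogun 2 does exactly this we can’t confirm, but it would explain why the preset cap follows the detected adapter rather than the GPU that actually ends up rendering.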

The only area I can come up with where Optimus isn’t desirable is if you want to run Linux, which isn’t high on NVIDIA’s list of priorities right now—in fact, they’ve said they’re not even going to bother trying to make Optimus work with Linux. This doesn’t make AMD’s switchable graphics solution superior in Linux, unless something has changed and the AMD drivers (or the open-source initiative for AMD GPUs) have improved since the last time we looked. I also have no idea whether AMD’s Dynamic Switchable Graphics works under Linux; it appears that AMD is doing some extra work in their drivers to make things run under Windows, so they might have the same issue as Optimus under Linux. I can’t say it really matters to me either way, as I don’t run Linux, but if you do, feel free to add your comments on which GPU vendor is better, along with any information on how the switchable graphics solutions fare. My hunch is that a discrete-only NVIDIA GPU is still the way to go, and if you’re really into Linux, the old-style manual switchable graphics with muxes is the better solution.

Summary of Compatibility

Our list of tested titles is obviously limited—I’m only one person, and even with a month of testing there’s only so much I can do—but so far we have yet to find a title that absolutely would not work on either the Sony or Acer laptops. For Acer (NVIDIA Optimus), nearly all games/applications worked without any extra fiddling, though you may need to manually add newer titles (or wait for NVIDIA to release a profile update). On Optimus, the only choice is to run in dynamic switching mode, but that’s generally fine because it works so well. On AMD, depending on the game you might need to select either dynamic switching or manual switching, and if you’re playing multiple games (or using some other GPU-enabled application) you will very likely have to go back and forth during the course of a day of gaming. That may sound reasonable, but a lot of users want something that just works without a bunch of extra fussing around, and AMD is coming up short in that area. So, let’s go through the changes and annoyances we’ve experienced in testing the Sony VAIO C, specifically as it relates to AMD’s switchable graphics.


91 Comments


  • JarredWalton - Tuesday, September 20, 2011 - link

    I can't see any drivers for that laptop other than the original Sept. 2010 driver and a new Sept. 2011 driver. Got a link to the previous drivers to confirm? Anyway, AMD says they make a driver build available for switchable graphics on a monthly basis, so it sounds like Acer is actually using that. Kudos to them if that's the case.
  • overseer - Tuesday, September 20, 2011 - link

    Perhaps Acer just omitted older versions of drivers they deemed unnecessary. My 4745G was manufactured in Jan. 2010 and the initial CCC was dated late 2009. I can recall taking two updates: one in summer 2010 (Catalyst 10.3?) and one last week (the latest one). So it's safe to say there have been at least four traceable versions of AMD GPU drivers for my model.

    While I can't really convince you that it's a bi-monthly or quarterly update cycle from Acer with the limited evidence, this OEM nonetheless has been keeping an eye on new graphics drivers - something I'd never expected in the first place as an early adopter of AMD switchable.
  • MJEvans - Tuesday, September 20, 2011 - link

    You toyed with several ideas for power-throttling graphics chips. The obvious ones, like turning off cores and moving to a different point on the voltage/frequency curve to slash power, are solid.
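    To put a rough number on the voltage/frequency point: dynamic power scales roughly as P ≈ a·C·V²·f, and because the achievable clock tracks voltage, backing both off together cuts power much faster than it cuts performance. A back-of-the-envelope sketch (all values are made-up illustrative numbers, not measurements of any real GPU):

```cpp
// power_scaling.cpp - back-of-the-envelope look at why moving down the
// voltage/frequency curve "slashes" dynamic power. P = a * C * V^2 * f
// (activity factor * switched capacitance * voltage squared * frequency).
// All numbers are illustrative only.
#include <cstdio>

double dynamic_power(double activity, double cap_nF, double volts, double mhz) {
    // nF * V^2 * MHz conveniently works out to milliwatts; only the ratio
    // between operating points matters here anyway.
    return activity * cap_nF * volts * volts * mhz;
}

int main() {
    const double activity = 0.5, cap = 200.0;                   // arbitrary
    double p_full = dynamic_power(activity, cap, 1.00, 600.0);  // "full speed"
    double p_low  = dynamic_power(activity, cap, 0.85, 300.0);  // half clock, lower voltage

    std::printf("full: %.0f mW, low: %.0f mW, ratio: %.2f\n",
                p_full, p_low, p_low / p_full);
    // Half the clock lands at roughly a third of the dynamic power (0.36x here),
    // before counting any cores that get shut off entirely.
    return 0;
}
```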

    Where things turn silly is suggesting the use of only 64 of 256 bits of the memory interface. This simply won't work, for a few reasons. Let's assume that, for price and performance, four 64-bit chips had been selected. The application assets would probably be littered across the four slabs of memory, either for wider page accesses to the same content (faster transfer) or for parallel access to unrelated content (say, for two isolated tasks). In any event, the cost of consolidating them, both in time and in energy expended for the moves, would only be worthwhile if it were for a long duration.

    Instead, a better approach would be to follow a similar voltage/frequency curve for medium-power modes. For low-power modes, the obvious solution is to keep core assets on one bank of memory and use any other enabled banks as disposable framebuffers. This would allow you to operate at lower clock speeds without impacting performance. Further, if the display output were disabled you would then be able to de-activate all but the asset bank of memory.

    A patent thicket of some kind probably exists for this stuff, but I really consider these to be things that must be obvious to someone skilled in the art, or even to someone armed with just logic and a basic knowledge of the physics driving current transistors; after all, my college degree is getting stale and I've never been employed in this field.
  • Old_Fogie_Late_Bloomer - Tuesday, September 20, 2011 - link

    This might not be feasible, but perhaps AMD and/or Nvidia could do something like what Nvidia is doing with Kal-El: have low-power circuitry that can do the bare minimum of what's needed for Windows Aero, etc. (basically what the integrated graphics chip does in Optimus) and switch it out for the more powerful circuitry when needed. As with Kal-El, this switch could be more or less invisible to Windows, or it could be handled at the driver level.

    Of course, that basically wastes the cost of the unused integrated graphics. Perhaps AMD's APUs could take better advantage of this idea: basically, put a CPU and two GPUs on one die, flipping between the slow, power-sipping graphics and the fast, powerful graphics.
  • MJEvans - Tuesday, September 20, 2011 - link

    Actually, the Kal-El article explained the key point rather well. The two common options are high speed with a high cost just for being 'on', or a lower speed ceiling but more efficiency per operation at the same clock. Given the highly parallel nature of a graphics solution, it makes much more sense to keep the parts that are operating running at a faster speed and switch off more of the ones that aren't needed at all. The main barrier to doing that effectively would be the granularity of the blocks of units used; however, if development is occurring with this in mind it might be economically viable. That's a question that would require actual industry experience and knowledge of current design trade-offs to answer.
  • Old_Fogie_Late_Bloomer - Wednesday, September 21, 2011 - link

    Well, the really interesting thing I got from the Kal-El article, which I think COULD be relevant to mobile GPU applications, is that using this other form of transistor--low-leakage, but unable to run faster than a certain speed (half a GHz or so)--is apparently sufficiently more efficient in terms of power usage that Nvidia's engineers felt the additional cost was worth it, both in silicon area and in the increased manufacturing cost per part, which of course trickles down to the price of the tablet or whatever it's used in. That sounds to me like they feel pretty confident about the idea.

    That being said, I have not the slightest clue what kind of transistors are used in current mobile chips. It might be that GPU manufacturers are already reaping the benefits of low-leakage transistors, in which case there might not be anywhere to go. If they are not, however, why not have just enough low-power cores to run Windows 8's flashy graphical effects, and then switch over to the more powerful, higher-clocked circuitry for gaming or GPGPU applications? I don't know how much it would cost the consumer, but I'm betting someone would pay $25-$50 more for something that "just works."
  • MJEvans - Friday, September 23, 2011 - link

    It seems that either your comment missed my primary point or I didn't state it clearly enough.

    Likely the engineering trade-off favors powering just a single graphics core (out of the hundreds even mainstream systems now have, versus merely 4 vastly more complex CPU cores on 'mid-high' end systems) rather than increasing hardware and software complexity by adding an entirely different manufacturing technology and tying up valuable die area, which could be used for other things, on a custom low-power version of the same block.

    I find it much more likely that normal use states favor these scenarios (a rough sketch of the display-on cases follows below):
    a) Display is off:
       - entire GPU is off (RAM might be trickle refreshing).
    b) Display is frozen:
       - entire GPU is off (RAM might be trickle refreshing).
    c) Display is on:
       c.a) GPU has one core active at medium or higher speed. (This would not be /as/ efficient as an ultra-low-power core or two, but I seriously have to wonder whether those would even be sufficient, or what fraction of a single core is required for 'basic' features these days.)
       c.b) Mid power: some cores are active, possibly dynamically scaled based on load (similar to CPU frequency governing, but a bit more basic here).
       c.c) Full power: everything is on and cycles are not even wasted on profiling (this would be a special state requested by intensive games/applications).
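    For the display-on cases, that kind of load-based scaling could be as simple as the toy governor below; the thresholds, the 64-core figure, and the state names are all made-up illustrative values, not anything AMD or NVIDIA actually expose:

```cpp
// gpu_governor.cpp - toy sketch of mapping the scenarios above onto a simple
// core-count governor. Thresholds and core counts are invented for illustration.
#include <algorithm>
#include <cstdio>

enum class GpuState { DisplayOff, Idle2D, MidPower, FullPower };

struct Sample {
    bool display_on;
    bool app_requested_full;  // (c.c): an intensive game/app asked for everything on
    double load;              // 0.0 .. 1.0 smoothed GPU utilization
};

GpuState pick_state(const Sample& s) {
    if (!s.display_on)        return GpuState::DisplayOff; // (a)/(b): GPU off, RAM trickle-refreshing
    if (s.app_requested_full) return GpuState::FullPower;  // (c.c): no cycles wasted on profiling
    if (s.load < 0.10)        return GpuState::Idle2D;     // (c.a): one core at medium clock
    return GpuState::MidPower;                             // (c.b): scale active cores with load
}

int main() {
    const char* names[] = { "DisplayOff", "Idle2D", "MidPower", "FullPower" };
    Sample samples[] = {
        { false, false, 0.00 }, { true, false, 0.05 },
        { true,  false, 0.60 }, { true, true,  0.90 },
    };
    for (const Sample& s : samples) {
        GpuState state = pick_state(s);
        int cores = (state == GpuState::FullPower) ? 64
                  : (state == GpuState::MidPower)  ? std::max(2, (int)(s.load * 64))
                  : (state == GpuState::Idle2D)    ? 1 : 0;
        std::printf("load %.2f -> %-10s (%2d of 64 cores active)\n",
                    s.load, names[(int)state], cores);
    }
    return 0;
}
```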
  • danjw - Tuesday, September 20, 2011 - link

    When I worked for a small game developer, getting ATI to give you the time of day was pretty much impossible, whereas Nvidia was more than happy to help us out with some free graphics cards and support. If AMD is continuing on this path, they will never be real competition for Nvidia.
  • fynamo - Tuesday, September 20, 2011 - link

    The WHOLE POINT of having switchable graphics is to reduce power consumption and thereby extend battery life, and at the same time provide any necessary 2D acceleration capabilities for the OS and Web browsing.

    I'm disappointed that this review totally misses the mark.

    I've been testing numerous Optimus configurations myself lately and have discovered some SERIOUS issues with [at least the Optimus version of] switchable graphics technology: Web browsing.

    Web browsers today increasingly accelerate CSS3, SVG, and Flash; however, GPUs have yet to catch up to this trend. As a result, rendering performance is abysmal on a Dell XPS 15 with an Intel 2720QM CPU + NVIDIA GeForce 540, and not just with web acceleration. Changing window sizes and basic desktop/Aero UI stuff is like a slideshow. I upgraded from a Dell XPS 16 with the Radeon 3670, and the overall experience has been reduced from a liquid-smooth Windows environment to a slideshow.

    Granted, gaming performance is not bad but that's not the issue.

    Running the latest drivers for everything.

    I was hoping to see this topic researched better in this article.
  • JarredWalton - Tuesday, September 20, 2011 - link

    There's really not much to say in regards to power and battery life, assuming the switchable graphics works right. When the dGPU isn't needed, both AMD and NVIDIA let the IGP do all the work, so then we're looking at Sony vs. Acer using Intel i5-2410M. BIOS and power optimizations come into play, but the two are close enough that it doesn't make much of a difference. (I posted the battery life results above if you're interested, and I still plan on doing a full review of the individual laptops.)

    I'm curious what sites/tests/content you're using that create problems with CSS3/SVG/Flash. My experience on an Intel IGP alone has been fine for everything I've done outside of running games, but admittedly I might not be pushing things too hard in my regular use. Even so, the latest NVIDIA drivers should allow you to run your browser on the dGPU if you need to -- have you tried that? Maybe your global setting has the laptop set to default to the IGP everywhere, which might cause some issues. But like I said, specific sites and interactions that cause the sluggishness would be useful, since otherwise I can't rule out other software interactions as the culprit.
