Let’s Talk About Drivers

So what does our discussion of gaming compatibility and performance really mean? First, let’s start by looking at the AMD driver version. The CCC version used by Sony appears to date from February 4, 2011 (software version 8.811.1.5-110204a-115635C-Sony), so best-case we’re looking at bits and pieces from the Catalyst 11.2 era, but mostly 11.1. Ouch. There are also oddities with the Sony driver release where not all CCC options are always available (e.g. the Information tab doesn't always show up). CCC also crashed a few times when switching between manual and dynamic modes (or between IGP and GPU modes when in manual switching). Given how old the Sony drivers are, many of the lower-than-expected performance results (particularly in the recent titles list) should improve with something like the Catalyst 11.8 drivers. That right there is reason enough to consider bypassing any laptop where you can’t get up-to-date drivers from the GPU manufacturer, and Sony completely misses the boat with their AMD-equipped laptops.

For any consumer notebook with a discrete GPU, if the GPU is actually supposed to be useful over the long haul, owners need the ability to get regular driver updates. For games in particular, driver updates can sometimes mean the difference between running properly or not at all; in other instances, a new driver might increase performance by 20% or more, especially in new releases. And lest anyone get the idea that I’m just picking on AMD’s mobile drivers, let me refer back to something I wrote in 2008 (and substitute “gaming” for “SLI”): “Honestly, what we really need is the ability to run reference drivers on gaming laptops - even more so than regular laptops, although that would also be great. As far as we're concerned, a gaming notebook needs to be as seamless as a desktop when it comes to updating drivers and running games. Until that happens, we would think very carefully before spending a lot of money on a gaming laptop.”

I took NVIDIA to task for their lack of generic driver updates over three years ago, and over the next two years we saw the situation improve with their Verde program, Optimus Technology, and then the two combined so that virtually all NVIDIA-equipped laptops (a few business-centric laptops may not qualify) can get new drivers the same day that desktop drivers launch. Today it's time to give AMD and Sony the same treatment. For Sony, I really see no point in shipping a laptop with a discrete GPU and then doing nothing with the drivers; long-term, you might as well just sell the laptop with an Intel IGP rather than going after the checkbox feature.

AMD has a similar program to NVIDIA’s Verde, but notably absent from the program are the following: laptops with switchable graphics and Intel chipsets, Toshiba notebooks, Sony notebooks, and Panasonic notebooks. Thus, Sony gets a double fail on the driver situation with the VAIO C: once for not participating in AMD’s mobile driver program at all (for non-switchable laptops), and a second that they share with AMD (for switchable graphics). Simply put, NVIDIA has the better approach: provide reference drivers for nearly all laptops that use NVIDIA GPUs, and include support for laptops with Optimus graphics switching technology. (Incidentally, if you look at the latest NVIDIA driver release notes, Hybrid Power laptops aren’t supported, since they use manual graphics switching technology similar to AMD’s switchable graphics, and only a couple of Sony laptops are supported. Fujitsu laptops are also on the unsupported list.)

UI Concerns

I brought up my concerns with AMD regarding their mobile drivers prior to writing this article. Their general stance is that they want to make things “easy” by not overwhelming users with too much information; advanced users (like myself and many of our readers) can fall back to fixed-function switchable graphics (i.e. manual mode). My major gripe is that making things easy apparently means not providing a global list of applications/games with profiles—something we’ve been asking them to add since CrossFire first hit the scene! Just to be clear, I strongly disagree with their “easy” suggestion, because using the VAIO C made it patently clear that this is not easier than using an Optimus-based laptop. Here’s why.

It’s true that you can switch between dynamic (application-based) switching and manual switching, but as I’ve discussed earlier, there are instances on both sides where games don’t work quite right. The solution thus becomes opening the CCC, selecting the appropriate mode (there’s no quick way to tell which mode you’re in outside of the CCC), and then loading the game. First, I have to say that opening AMD’s Catalyst Control Center and switching between dynamic and manual modes is hardly ideal—in fact, it can often be a bit of a pain. On this Sony laptop in particular, I’ve noticed that the CCC frequently takes far too long to load—I’ve seen it take upwards of 30 seconds on a regular basis! A slow hard drive with a bloated .NET interface appears to be the issue, but whatever the cause (an old driver as well?), it’s annoying.

There’s also the question of which games are supposed to work properly, and there’s no global source you can consult for this information (at least, not that I could find). AMD can call it overwhelming if they want, but NVIDIA’s drivers make it really easy to see whether a game at least should work with Optimus (though even unlisted games have worked in my experience so far). AMD’s application-based switching looks for anything using DirectX calls as far as I can tell, which results in things like dialogs asking whether Explorer.exe should run on the Power Saving GPU or the High Performance GPU. Perhaps that’s a better definition of overwhelming: popping up dialogs that aren’t necessary? Early in testing, I turned off the “prompt every time a new graphics application runs” option; if you want it back, however, you’ll need to restore the default settings, as there’s no checkbox to simply turn it back on. Either way, the first time you launch a graphics-enabled application it’s given a “Not Assigned” classification, which appears to be about 50-50 in terms of running on the IGP or running on the GPU.
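
To illustrate what this looks like from the outside, here's a rough sketch in Python. It is purely hypothetical (this is not AMD's driver logic, and the names, data structure, and fallback heuristic are all invented), but it captures the flow I observed: DirectX applications get checked against a stored profile list, new executables start out as "Not Assigned", and the driver either prompts or picks a GPU on its own.

```python
# Purely illustrative sketch of how AMD's dynamic (application-based)
# switching appears to behave from the outside. Names, structure, and the
# fallback heuristic are invented; this is not AMD's actual driver code.

POWER_SAVING = "IGP"            # Intel HD 3000 in the VAIO C
HIGH_PERFORMANCE = "dGPU"       # the discrete Radeon GPU

# Per-application assignments the driver remembers (hypothetical format).
profiles = {
    "explorer.exe": POWER_SAVING,        # even Explorer triggers a DirectX check
    "dirt3_game.exe": HIGH_PERFORMANCE,
}

prompt_on_new_app = True  # the "prompt every time a new graphics application runs" option

def ask_user(exe_name: str) -> str:
    """Stand-in for the CCC dialog asking Power Saving vs. High Performance."""
    return HIGH_PERFORMANCE

def guess_default(exe_name: str) -> str:
    """Stand-in for whatever the driver does when prompts are disabled;
    in practice the outcome looked roughly 50-50."""
    return POWER_SAVING

def pick_gpu(exe_name: str) -> str:
    """Decide which GPU a newly launched DirectX application runs on."""
    if exe_name in profiles:
        return profiles[exe_name]
    # First launch: the application shows up in the CCC list as "Not Assigned".
    choice = ask_user(exe_name) if prompt_on_new_app else guess_default(exe_name)
    profiles[exe_name] = choice
    return choice

if __name__ == "__main__":
    for exe in ("explorer.exe", "starcraft2.exe", "dirt3_game.exe"):
        print(exe, "->", pick_gpu(exe))
```

The point of the sketch is simply that, without a published global profile list, you have no way of knowing in advance which bucket a given game will land in until you launch it and check.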

In general, I found the manual switching mode to be the better AMD solution right now—only DiRT 3 had a major issue there. Contrast that with application-based switching, where Super Street Fighter IV: Arcade Edition and StarCraft II have minor to major annoyances and OpenGL titles don’t work at all, and there’s really no point in dynamic switching. But then, we’re also looking at drivers from over seven months back, apparently, so perhaps things are better—we couldn’t check, as we don’t have another laptop with AMD’s switchable graphics and a more recent driver available. Even if you skip dynamic switching, manual switching isn’t a great experience either, since the screen will flash and go blank for 5-10 seconds every time you switch between the IGP and GPU. Granted, in most cases the only time you’d actually need to do that would be when going from AC power to battery power—get the switching to happen faster and get me regular driver updates and I’d be fine with manual control.

Other Requests

When I first laid hands on a laptop with switchable graphics, I predicted that such a design would become the future for mobile GPUs. Really, if everything works properly on a low-end or midrange GPU, there’s no reason it shouldn’t work on high-end GPUs as well. Once that happens, there’s a good incentive for notebook manufacturers to use graphics switching on every mobile product that has a dedicated GPU.

This is precisely what we’ve seen with NVIDIA’s mobile lineup. With the 200M series, only a couple of laptops had Optimus or some other form of switchable graphics. Optimus really gained traction with the 300M series on virtually all the mainstream laptops, and with the 400M series we started to see higher performance mobile GPUs with Optimus support. Now, with the 500M series, a few vendors (e.g. Alienware with the M17x) even use Optimus with the top-end GTX 580M.

For AMD, right now it looks like the only laptops with dynamic switchable graphics use either the HD 6400M or the HD 6600M/6700M (the same chip at different clocks). The HD 6470M is almost superfluous these days, considering Intel’s HD 3000 is within striking distance (and Ivy Bridge will apparently close the gap). Meanwhile, the 6300M, 6500M, and 6800M are just renamed 5400M/5600M/5800M chips, so they apparently lack the necessary hardware changes for dynamic switching. It's not clear whether the 6900M can support dynamic switching, and high-end GPUs could certainly benefit (assuming the bugs and other issues are worked out), but no one is doing it. Perhaps it’s just a case of the chicken and the egg: if AMD gets dynamic switching to work properly on all their mobile GPUs, vendors would be a lot more likely to use the technology on high-end laptops.

With all my talk of switchable graphics, though, let’s make one thing clear: switchable graphics is not necessarily the Holy Grail of mobile GPUs. The true ideal in my opinion is mobile GPUs that can run fast when needed (i.e. when playing games), while also being able to power off large portions of the chip and RAM and get down to IGP levels of power consumption. The GTX 580M, for instance, has 384 CUDA cores divided into eight Streaming Multiprocessors (SMs), with a 256-bit memory controller divided into four 64-bit interfaces. When playing games or doing other intensive graphics/computational work, the GTX 580M can use up to 100W of power. At idle, we estimate power consumption to be around 16W, which obviously takes a toll on battery life.

Imagine if the GTX 580M could fully shut down seven of the eight SMs and three of the four memory interfaces, along with some voltage and clock speed modifications. Do we need more than 48 CUDA cores and a 64-bit memory interface for the Windows desktop? Most likely not. It’s possible that, with the right design, we could get a dedicated GPU that would idle at less than 1W—similar to current IGPs. If we can get that, then there’s no actual need for graphics switching technology; you’d get the best of both worlds. But until and unless we reach that point, technologies like Optimus and dynamic switchable graphics are the next best thing—at least when everything works properly. Of course, with AMD’s APUs and Intel putting faster IGPs into their CPU packages, focusing on switchable graphics makes a lot of sense. Going forward, nearly every consumer CPU is going to have some form of on-die graphics, so why not put it to good use?
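
As a back-of-the-envelope illustration, the sketch below runs the numbers from the preceding paragraphs under a deliberately naive assumption that idle power scales linearly with the fraction of hardware left powered. Treat it as a thought experiment rather than a real power model; actual idle power depends heavily on leakage, memory refresh, display logic, and the voltage/clock changes mentioned above.

```python
# Thought experiment: how far could GTX 580M idle power drop if seven of
# eight SMs and three of four 64-bit memory interfaces were power gated?
# The linear-scaling assumption below is deliberately naive.

cuda_cores = 384
sms = 8
cores_per_sm = cuda_cores // sms                         # 48 CUDA cores per SM
mem_width_bits = 256
mem_interfaces = 4
bits_per_interface = mem_width_bits // mem_interfaces    # 64-bit slices

idle_power_w = 16.0   # rough idle estimate from the article

# Keep 1 of 8 SMs and 1 of 4 memory interfaces; assume cores and memory
# each account for roughly half of the idle draw (a made-up split).
active_fraction = 0.5 * (1 / sms) + 0.5 * (1 / mem_interfaces)

print(f"Remaining hardware: {cores_per_sm} CUDA cores, {bits_per_interface}-bit memory interface")
print(f"Naive gated idle estimate: ~{idle_power_w * active_fraction:.1f}W (vs. ~{idle_power_w:.0f}W today)")
```

Even with this optimistic scaling the estimate only lands around 3W, which is why reaching the sub-1W range of current IGPs would also require the voltage and clock speed reductions mentioned above, not just switching blocks off.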

Comments

  • JarredWalton - Tuesday, September 20, 2011 - link

    I can't see any drivers for that laptop other than the original Sept. 2010 driver and a new Sept. 2011 driver. Got a link to the other previous drivers to confirm? Anyway, AMD says they make a driver build available for switchable graphics on a monthly basis, so it sounds like Acer is actually using that. Kudos to them if that's the case.
  • overseer - Tuesday, September 20, 2011 - link

    Perhaps Acer just omitted older versions of drivers they deemed unnecessary. My 4745G was manufactured in Jan. 2010 and the initial CCC was dated late 2009. I can recall taking two updates: one in summer 2010 (Catalyst 10.3?) and one last week (the latest one). So it's safe to say there have been at least four traceable versions of AMD GPU drivers for my model.

    While I can't really convince you that it's a bi-monthly or quarterly update cycle from Acer with the limited evidence, this OEM nonetheless has been keeping an eye on new graphics drivers - something I'd never expected in the first place as an early adopter of AMD switchable graphics.
  • MJEvans - Tuesday, September 20, 2011 - link

    You toyed with several ideas for power throttling graphics chips. The obvious ones, like turning off cores and working at a different point of the voltage/frequency curve to slash power use, are solid.

    Where things turn silly is suggesting the use of only 64 of 256 bits of memory interface. This simply won't work, for a few reasons. However, let's assume that for price and performance reasons four 64-bit chips had been selected. Probably, the application assets would be littered across the four slabs of memory, either for wider page accesses to the same content (faster transfers) or for parallel access to unrelated content (say, for two isolated tasks). In any event, the cost of consolidating them, both in time and energy expended for the moves, would only be worthwhile if it were for a long duration.

    Instead, a better approach would be to follow a similar voltage/frequency curve for medium power modes. For low power modes, the obvious solution is to have core assets on one bank of memory and use any other enabled banks as disposable framebuffers. This would allow you to operate at lower clock speeds without impacting performance. Further, if the display output were disabled you would then be able to de-activate all but the asset bank of memory.

    Probably a patent thicket of some kind exists for this stuff, but really I consider these to be things that must be obvious to someone skilled in the art, or even to anyone with just logic and a basic knowledge of the physics driving current transistors, since my college degree is getting stale and I've never been employed in this field.
  • Old_Fogie_Late_Bloomer - Tuesday, September 20, 2011 - link

    This might not be feasible, but perhaps AMD and/or Nvidia could do something like what Nvidia is doing with Kal-El: have low-power circuitry that can do the bare minimum of what's needed for Windows Aero, etc. (basically what the integrated graphics chip does in Optimus) and switch it out for the more powerful circuitry when needed. As with Kal-El, this switch could be more or less invisible to Windows, or it could be handled at the driver level.

    Of course, that basically wastes the cost of the unused integrated graphics. Perhaps AMD's APUs could take better advantage of this idea: basically, put a CPU and two GPUs on one die, flipping between the slow, power-sipping graphics and the fast and powerful graphics.
  • MJEvans - Tuesday, September 20, 2011 - link

    Actually, the Kal-El article explained the key point rather well. The two common options are high speed with a high cost of simply being 'on', or lower speed with more efficiency per operation. Given the highly parallel nature of a graphics solution, it makes much more sense to keep the parts that are operating running at the faster speed and switch off more of the ones that aren't needed at all. The main barrier to doing that effectively enough would be the blocks of units used; however, if development is occurring with this in mind it might be economically viable. That's a question that would require actual industry experience and knowledge of current design trade-offs to answer.
  • Old_Fogie_Late_Bloomer - Wednesday, September 21, 2011 - link

    Well, the thing I got from the Kal-El article that was really interesting to me, and which I think COULD be relevant to mobile GPU applications, is that using this other form of transistor--low-leakage, but unable to run faster than a certain speed (half a GHz or so)--is apparently sufficiently more efficient in terms of power usage that Nvidia's engineers felt the additional cost was worth it, both in silicon area and in the increased manufacturing cost per part, which of course trickles down to the price of the tablet or whatever it's used in. That sounds to me like they feel pretty confident about the idea.

    That being said, I haven't the slightest clue what kind of transistors are used in current mobile chips. It might be that GPU manufacturers are already reaping the benefits of low-leakage transistors, in which case there might not be anywhere to go. If they aren't, however, why not have just enough low-power cores to run Windows 8's flashy graphical effects, and then switch over to the more powerful, higher-clocked circuitry for gaming or GPGPU applications? I don't know how much it would cost the consumer, but I'm betting someone would pay $25-$50 more for something that "just works."
  • MJEvans - Friday, September 23, 2011 - link

    It seems that either your comment missed my primary point or I didn't state it clearly enough.

    Likely the engineering trade-off favors powering just a single graphics core (out of the hundreds even mainstream systems now have, versus merely 4 vastly more complex CPU cores on 'mid-high' end systems) rather than increasing hardware and software complexity by adding an entirely different manufacturing technology and tying up valuable area, which could be used for other things, with a custom low-power block.

    I find it much more likely that normal use states favor these scenarios:
    a) Display is off,
    a.a) entire GPU is off (RAM might be trickle refreshing).
    b) Display is frozen,
    b.a) entire GPU is off (RAM might be trickle refreshing).
    c) Display is on,
    c.a) GPU has one core active at medium or higher speed
    (This would not be /as/ efficient as an ultra-low power core or two, but I seriously have to wonder whether those would even be sufficient, or what fraction of a single core is required for 'basic' features these days).
    c.b) mid-power; some cores are active, possibly dynamically scaled based on load (similar to CPU frequency governing but a bit more basic here; a rough sketch follows after this list)
    c.c) full power; everything is on and cycles are not even wasted on profiling (this would be a special state requested by intensive games/applications).
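
    To make the mid-power case (c.b) concrete, here's a minimal, purely illustrative sketch of a load-based core governor in Python; the thresholds, core counts, and polling loop are all invented and don't reflect any real driver's logic.

    ```python
    # Illustrative sketch of scenario (c.b): scale the number of active GPU
    # cores with load, in the spirit of a CPU frequency governor. All numbers
    # and thresholds here are invented; this is not any real driver's logic.

    TOTAL_CORES = 384      # e.g. a GTX 580M-class part
    MIN_ACTIVE = 48        # keep at least one block powered while the display is on

    def cores_to_enable(gpu_load: float, currently_active: int) -> int:
        """Pick how many cores to power for the observed load (0.0 to 1.0)."""
        if gpu_load > 0.90:
            return TOTAL_CORES                              # (c.c) full power
        if gpu_load < 0.30:
            return max(MIN_ACTIVE, currently_active // 2)   # shed blocks toward (c.a)
        return currently_active                             # hold steady in between

    # Toy governor loop; a real driver would hook into hardware load counters.
    active = MIN_ACTIVE
    for load in (0.05, 0.20, 0.95, 0.60, 0.10):
        active = cores_to_enable(load, active)
        print(f"load={load:.2f} -> {active} cores powered")
    ```

    The block granularity question above is exactly what decides whether something like this is worth doing in silicon.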
  • danjw - Tuesday, September 20, 2011 - link

    When I worked for a small game developer, getting ATI to give you the time of day was pretty much impossible, whereas Nvidia was more than happy to help us out with some free graphics cards and support. If AMD continues on this path, they will never be real competition for Nvidia.
  • fynamo - Tuesday, September 20, 2011 - link

    The WHOLE POINT of having switchable graphics is to reduce power consumption and thereby extend battery life, and at the same time provide any necessary 2D acceleration capabilities for the OS and Web browsing.

    I'm disappointed that this review totally misses the mark.

    I've been testing numerous Optimus configurations myself lately and have discovered some SERIOUS issues with [at least the Optimus version of] switchable graphics technology: Web browsing.

    Web browsers today increasingly accelerate CSS3, SVG, and Flash; however, GPUs have yet to catch up to this trend. As a result, rendering performance is abysmal on a Dell XPS 15 with an Intel 2720QM CPU + NVIDIA GeForce 540. Not just with web acceleration: changing window sizes and basic desktop/Aero UI stuff is like a slideshow. I upgraded from a Dell XPS 16 with the Radeon 3670, and the overall experience has been reduced from a liquid-smooth Windows environment to a slideshow.

    Granted, gaming performance is not bad, but that's not the issue.

    Running the latest drivers for everything.

    I was hoping to see this topic researched better in this article.
  • JarredWalton - Tuesday, September 20, 2011 - link

    There's really not much to say with regard to power and battery life, assuming the switchable graphics works right. When the dGPU isn't needed, both AMD and NVIDIA let the IGP do all the work, so then we're looking at Sony vs. Acer using the same Intel i5-2410M. BIOS and power optimizations come into play, but the two are close enough that it doesn't make much of a difference. (I posted the battery life results above if you're interested, and I still plan on doing a full review of the individual laptops.)

    I'm curious what sites/tests/content you're using that create problems with CSS3/SVG/Flash. My experience with just the Intel IGP has been fine for everything I've done outside of running games, but admittedly I might not be pushing things too hard in my regular use. Even so, the latest NVIDIA drivers should allow you to run your browser on the dGPU if you need to -- have you tried that? Maybe your global setting has the laptop set to default to the IGP everywhere, which might cause some issues. But like I said, specific sites and interactions that cause the sluggishness would be useful, since I can't rule out other software interactions as the culprit otherwise.
