NVIDIA Optimus Demonstration

So how well does Optimus actually work in practice? Outside of a few edge cases, which we will mention in a moment, the experience was awesome. Some technophiles might still prefer manual control, but the vast majority of users will be extremely happy with the Optimus solution. You no longer need to worry about which video mode you're currently using, as the Optimus driver switches dynamically. Even when running several applications that can benefit from discrete graphics, we didn't encounter any anomalies. Load up a Flash video and the GPU turns on; load a CUDA application and the GPU stays on, even if you then close the Flash video. It's seamless, and it takes the guesswork out of GPU power management.
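To make the switching behavior concrete, here's a minimal sketch of the idea in Python. This is not NVIDIA's driver code; the profile categories and application names are purely illustrative. The point is simply that the dGPU stays powered while at least one running application is classified as benefiting from it, and powers off once none remain.

    # Toy model of the routing idea described above: the dGPU stays powered
    # while at least one running application is classified (via its profile)
    # as benefiting from discrete graphics, and powers off otherwise.
    # Profile categories and app names are hypothetical.

    GPU_CLASSES = {"cuda", "3d", "video"}

    class OptimusModel:
        def __init__(self):
            self.active = {}          # app name -> profile class
            self.gpu_powered = False

        def launch(self, app, profile):
            self.active[app] = profile
            self._update()

        def close(self, app):
            self.active.pop(app, None)
            self._update()

        def _update(self):
            # Power the dGPU if and only if some active app's profile needs it.
            want_gpu = any(p in GPU_CLASSES for p in self.active.values())
            if want_gpu != self.gpu_powered:
                self.gpu_powered = want_gpu
                print("dGPU", "ON" if want_gpu else "OFF")

    m = OptimusModel()
    m.launch("flash_video", "video")   # dGPU ON
    m.launch("cuda_app", "cuda")       # stays on
    m.close("flash_video")             # still on: the CUDA app remains
    m.close("cuda_app")                # dGPU OFF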


NVIDIA provided a demonstration video showing second-generation switchable graphics compared to Optimus. We've uploaded the video to our server for your enjoyment. (Please note that QuickTime is required, and the sample video uses the H.264 codec so you'll need a reasonable CPU and/or GPU to view it properly.) At the Optimus Deep Dive, NVIDIA provided two other demonstrations of engineering platforms to show how well Optimus works. Sadly, we couldn't take pictures or record videos, but we can talk about the demonstrations.

The first demonstration was an open testbed notebook motherboard using engineering sample hardware. This definitely wasn't the sort of system you'd run at home: a notebook LCD connected to the motherboard via a standard notebook power/video cable, exposed hardware, etc. The main purpose was to demonstrate how quickly the GPU turns on/off, as well as the fact that the GPU is really OFF. NVIDIA started by booting up Win7, at which point the mobile GPU was off. A small LED on the GPU board would light up when the GPU was on, and the fans would also spin. After Windows finished loading, NVIDIA fired up a simple app on the IGP and nothing changed. Next they started a 3D app and the GPU LED/fan powered up as the application launched; when they shut down the application, the LED/fan powered back off. At one point, with the GPU powered off, NVIDIA removed the GPU module from the system and disconnected its fan; they again loaded a simple application to demonstrate that the system was still fully functional and running off the IGP. (Had they chosen to launch a 3D application at this point, the system would obviously have crashed.) So yes, the GPU in an Optimus laptop is really powered down completely when it's not needed. Very cool!

The second demonstration wasn't quite as impressive, since no one removed a GPU from a running system. This time, Lenovo provided a technology demonstration for NVIDIA showing power draw while running various tasks. The test system was an engineering sample 17" notebook, and we weren't given any details other than the fact that it had an Arrandale CPU and some form of Optimus GPU. The Lenovo notebook had a custom application showing laptop power draw, updating roughly once per second. After loading Windows 7, idle power was shown at 17W. NVIDIA launched a 3D app on the IGP and power draw increased to 32W, but rendering performance was quite slow. Then they launched the same 3D app on the dGPU and power use hit 39W, but with much better 3D performance. After closing the application, power draw dropped right back to 17W in a matter of seconds. At present there's no word on if/when this Arrandale-based laptop will ship, but if Lenovo is already providing engineering sample hardware, it's a safe bet that Optimus laptops from the company will follow.
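To put those readings in perspective, here's some quick battery life arithmetic. The 48Wh battery capacity is our own assumption for illustration; Lenovo didn't disclose the engineering sample's battery specs.

    # Back-of-the-envelope battery life from the demo's power readings.
    # The 48Wh capacity is an assumption for illustration only.

    battery_wh = 48.0
    readings_w = {"idle (dGPU off)": 17, "3D app on IGP": 32, "3D app on dGPU": 39}

    for scenario, watts in readings_w.items():
        print(f"{scenario}: {watts} W -> ~{battery_wh / watts:.1f} hours")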

The final "demonstration" is going to be more in line with what we like to see. Not only did and NVIDIA show us several running Optimus notebooks/laptops, but they also provided each of the attendees with an ASUS UL50Vf sample for review. The UL50Vf should be available for purchase today, and it sounds like the only reason NVIDIA delayed the Optimus launch until now was so that they could have hardware available for end-user purchase. The final part of our Optimus overview will be a review of the ASUS UL50Vf.

Comments

  • Hrel - Tuesday, February 9, 2010 - link

    Now that I've calmed down a little I should add that I'm not buying ANY gpu that doesn't support DX11 EVER again. We've moved past that; DX11 is necessary; no exceptions.
  • JarredWalton - Tuesday, February 9, 2010 - link

    I'm hoping NVIDIA calls me in for a sooper seekrit meeting some time in the next month or two, but right now they're not talking. They're definitely due for a new architecture, but the real question is going to be what they put together. Will the next gen be DX11? (It really has to be at this point.) Will it be a tweaked version of Fermi (i.e. cut Fermi down to a reasonable number of SPs), or will they tack DX11 functionality onto current designs?

    On a different note, I still wish we could get upgradeable notebook graphics, but that's probably a pipe dream. Consider: NVIDIA makes a new mGPU that they can sell to an OEM for $150 or something. OEM can turn that into an MXM module, do some testing and validation on "old laptops", and sell it to a customer for $300 (maybe even more--I swear the markup on mobile GPUs is HUGE!). Or, the OEM could just tell the customer, "Time for an upgrade" and sell them a new $1500 gaming laptop. Do we even need to guess which route they choose? Grrr....
  • Hrel - Tuesday, February 9, 2010 - link

    It STILL doesn't have a screen with a resolution of AT LEAST 1600x900!!! Seriously!? What do I need to do? Get up on roof tops and scream from the top of my lungs? Cause I'm almost to that point. GIVE ME USEABLE SCREENS!!!!!!!
  • MrSpadge - Wednesday, February 10, 2010 - link

    Not everyone's eyes are as good as yours. When I asked some 40+ people if I got the location right and showed it to them via Google Maps on my HTC Touch Diamond, they refused to even think about it without their glasses.
  • strikeback03 - Thursday, February 11, 2010 - link

    I've never had people complain about using Google Maps on my Diamond. Reading text messages and such yes, and for a lot of people forget about using the internet since they have to zoom the browser so far in, but the maps work fine.
  • GoodRevrnd - Tuesday, February 9, 2010 - link

    Any chance you could add the Macbook / Pro to the LCD quality graphs when you do these comparisons?
  • JarredWalton - Tuesday, February 9, 2010 - link

    Tell Anand to send me a MacBook for testing. :-) (I think he may have the necessary tools now to run the tests, but so far I haven't seen any results from his end.)
  • MrSpadge - Tuesday, February 9, 2010 - link

    Consider this: Fermi and following high end chips are going to be beasts, but they might accelerate scientific / engineering apps tremendously. But if I put one into my workstation it's going to suck power even when not in use. It's generating noise, it's heating the room and making the air stuffy. This could easily be avoided with Optimus! It's just that someone has to ditch the old concept of "desktops don't need power saving" once and for all. 20 W for an idle GPU is not OK.

    And there's more: if I run GP-GPU the screen refresh often becomes sluggish (see BOINC etc.) or the app doesn't run at full potential. With Optimus I could have a high performance card crunch along, either at work or BOINC or whatever, and still get a responsive desktop from an IGP!
  • Drizzt321 - Tuesday, February 9, 2010 - link

    Is there a way to set this to specifically only use IGP? So turn off the discrete graphics entirely? Like if I'm willing to suffer lower performance but need the extra battery life. I imagine if I could, the UL50Vf could equal the UL80Vt pretty easily in terms of battery life. I'm definitely all for the default being Optimus turned on...but let's say the IGP is more efficient at decoding that 720p or 1080p, yet NVidia's profile says gotta fire up the discrete GPU. There goes quite a bit of battery life!
  • kpxgq - Wednesday, February 10, 2010 - link

    depending on the scenario... the discrete gpu may use less power than the igp... ie say a discrete gpu working at 10% vs an igp working at 90%...

    kind of like using a lower gear at inclines uses less fuel than a higher gear going the same speed, since it works less hard... the software should automatically do the math
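To put rough numbers on kpxgq's point: for a fixed batch of work, what matters is total energy (power multiplied by time), not instantaneous draw. The sketch below uses invented figures purely for illustration.

    # Hypothetical figures only: for a fixed batch of work, total energy is
    # power x time, so a faster dGPU at low utilization can come out ahead
    # of a maxed-out IGP even though its instantaneous draw is higher.

    scenarios = {
        "IGP at ~90% load":  {"watts": 14, "seconds": 100},  # assumed
        "dGPU at ~10% load": {"watts": 18, "seconds": 60},   # assumed
    }

    for name, s in scenarios.items():
        print(f"{name}: {s['watts']} W x {s['seconds']} s = {s['watts'] * s['seconds']} J")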
