Original Link: https://www.anandtech.com/show/2934/nvidia-optimus-truly-seamless-switchable-graphics-and-asus-ul50vf



Back in December, NVIDIA contacted me to let me know that something "really big" was coming out in the near future. It's January 24 as I write this, and tomorrow is the "Optimus Deep Dive" event, an exclusive event with only 14 or so of the top technology websites and magazines in attendance. When Sean Pelletier from NVIDIA contacted me, he was extra excited this time and said something to the effect of, "This new technology is pretty much targeted at you, Jarred… when I saw it, I said, 'We should just call this the Jarred Edition of our mobile platform.' We can't go into details yet, but basically it's going to do all of the stuff that you've been talking about for the past couple of years." With a statement like that, you can understand that it got the gears in my head to start churning. What exactly have I been pining for in terms of mobile GPUs of late? So in advance of the unveiling of their latest technologies and products, I thought I'd put down what I really want to see and then we'll find out how well NVIDIA has matched my expectations.

I've put together my thoughts before getting any actual details from NVIDIA; I'll start with those, but of course NDAs mean that you won't get to read any of this until after the parts are officially announced. Page two will begin the coverage of NVIDIA's Optimus announcement, but my hopes and expectations will serve as a nice springboard into the meat of this article. They set my expectations pretty high back in December, which might come back to haunt them….

First off, if we're talking about a mobile product, we need to consider battery life. Sure, there are some users that want the fastest notebook money can buy—battery life be damned! I'm not that type of user. The way I figure it, the technology has now existed for at least 18 months to offer a laptop that can provide good performance when you need it, but at the same time it should be able to power down unnecessary devices and provide upwards of six hours of battery life (eight would be better). Take one of the big, beefy gaming laptops with an 85Wh (or larger) battery, and if you shut down the discrete GPU and limit the CPU to moderate performance levels, you ought to be able to get a good mobile solution as well as something that can power through tasks when necessary. Why should a 9 pound notebook be limited to just 2 hours (often less) of battery life?

What's more, not all IGPs are created equal, and it would be nice if only certain features of a discrete GPU could power up when needed. Take video decoding as an example. The Intel Atom N270/280/450 processors are all extremely low power CPUs, but they can't provide enough performance to decode a 1080p H.264 video. Pine Trail disappointed us in that respect, but we have Broadcom Crystal HD chips that are supposed to provide the missing functionality. Well, why can't we get something similar from NVIDIA (and ATI for that matter)? We really expect any Core i3/i5 laptop shipped with a discrete GPU to properly support hybrid graphics, and the faster a system can switch between the two ("instantly" being the holy grail), the better. What we'd really like to see is a discrete GPU that can power up just the video processing engine while leaving the rest of the GPU off (i.e. via power gate transistors or something similar). If the video engine on a GPU can do a better job than the IGP while using only a couple of watts, that would be much better than software decoding on the CPU. Then again, Intel's latest HD Graphics may make this a moot point, provided it can handle 1080p H.264 content properly (including Flash video).

Obviously, the GPU is only part of the equation, and quad-core CPUs aren't an ideal solution for such a product, unless you can fully shut down several of the cores and prevent the OS from waking them up all the time. Core i3/i5/i7 CPUs have power gate transistors that can at least partially accomplish this, but the OS side of things certainly appears to be lagging behind right now. If I unplug and I know all I'm going to be doing for the next couple of hours is typing in Word, why not let me configure the OS to temporarily disable all but one CPU core? What we'd really like to see is a Core i7 type processor that can reach idle power figures similar to Core 2 Duo ULV parts. Incidentally, I'm in a plane writing this in Word on a CULV laptop right now; my estimated battery life remaining is a whopping 9 hours on a 55Wh battery and I have yet to feel the laptop is "too slow" for this task. We haven't reached this state of technology yet and NVIDIA isn't going to announce anything that would affect this aspect of laptops, but since they said this announcement was tailored to meet my wish list I thought I'd mention it.

Another area totally unrelated to power use but equally important for mobile GPUs is the ability to get regular driver updates. NVIDIA first discussed plans for their Verde Notebook Driver Program back at the 8800M launch in late 2007. We first discussed this in early 2008 but it wasn't until December 2008 that we received the first official Verde driver. At that time, the reference driver was only for certain products and operating systems, and it was several releases behind the desktop drivers. By the time Windows 7 launched last fall, NVIDIA managed to release updated mobile drivers for all Windows OSes with support for their 8000M series and newer hardware, and this was done at the same time and with the same version as the desktop driver release. That pattern hasn't held in the months following the Win7 launch, but our wish list for mobile GPUs would definitely include drivers released at the same time as the desktop drivers. With NVIDIA's push on PhysX, CUDA, and other GPGPU technologies, linking the driver releases for both mobile and desktop solutions would be ideal. We can't discuss AMD's plans for their updated ATI Catalyst Mobility just yet, but suffice it to say ATI is well aware of the need for regular mobile driver updates and they're looking to dramatically improve product support in this area. We'll have more to say about this next week.

Finally, the last thing we'd like to see from NVIDIA is less of a gap between mobile and desktop performance. We understand that the power constraints on laptops inherently limit what you can do, and we're certainly not suggesting anyone try to put a 300W (or even 150W) GPU into a laptop. However, right now the gap between desktop and mobile products has grown incredibly wide—not so much for ATI, but certainly for NVIDIA. The current top-performing mobile solution is the GTX 280M, but despite the name this part has nothing to do with the desktop GTX 280. Where the desktop GTX 285 is now up to 240 shader cores (SPs) clocked at 1476MHz, the mobile part is essentially a tweaked version of the old 8800 GTS 512 part. We have a current maximum of 128 SPs running at 1500MHz (1463MHz for the GTX 280M), which is a bit more than half of the theoretical performance of the desktop part with the same name. The bandwidth side of things isn't any better, with around 159GB/s for the desktop and only 61GB/s for notebooks.
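
To put some rough numbers on that gap, here's a quick back-of-the-envelope check using only the figures quoted above. The shader throughput estimate (SP count times shader clock) is a deliberate simplification that ignores architectural differences, so treat it as illustrative rather than definitive.

```python
# Rough sanity check of the desktop vs. mobile gap using the figures above.
def shader_throughput(sp_count, shader_clock_mhz):
    """Crude proxy for shader performance: SP count x shader clock."""
    return sp_count * shader_clock_mhz

desktop_gtx285 = shader_throughput(240, 1476)   # desktop GTX 285
mobile_gtx280m = shader_throughput(128, 1463)   # mobile GTX 280M

print(f"GTX 280M vs. GTX 285 shader throughput: {mobile_gtx280m / desktop_gtx285:.0%}")
# -> roughly 53%, i.e. "a bit more than half"

print(f"Memory bandwidth: {61 / 159:.0%} of the desktop part")
# -> roughly 38% (61GB/s vs. 159GB/s)
```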

As we discussed recently, NVIDIA is all set to release Fermi/GF100 for desktop platforms in the next month or two. Obviously it's time for a new mobile architecture, but what we really want is a mobile version of GF100 rather than a mobile version of GT200. One of the key differences is the support for DirectX 11 on GF100, and with ATI's Mobility Radeon 5000 series already starting to show up in retail products, NVIDIA is behind the 8-ball in this area. We don't have a ton of released or upcoming DX11 games just yet, but all things being equal we'd rather have DX11 support than not. Considering Fermi looks to be a beast in terms of power consumption, we're obviously going to need to make some performance sacrifices in order to keep power in check. GF100 looks to have several variants with varying SP counts, so it may be as simple as cutting the number of SPs in half and toning down the clock rates. Another option is that NVIDIA could take a hybrid approach and tack DX11 features onto the G9x or GT200 architecture rather than reworking GF100 into a mobile product. Whatever route they take, NVIDIA really needs to maintain feature parity with ATI's mobile products, and right now that means DX11 support.

So, that's my wish list right now. I don't ask for much, really: give me mobile performance that has feature parity with desktop parts, with a moderate performance hit in order to keep maximum power requirements in check, and do all that with a chip that's able to switch between 0W power draw and normal power requirements in a fraction of a second as needed. Simple! Now it's time to begin coverage of the actual presentation and find out exactly what NVIDIA is announcing. So turn the page and let's delve into the latest and greatest mobile news from NVIDIA.



A Brief History of Switchable Graphics

So what is Optimus exactly? You could call it NVIDIA Switchable Graphics III, but that makes it sound like only a minor change. After our introduction, in some ways the actual Optimus technology is a bit of a letdown. This is not to say that Optimus is bad—far from it—but the technology only addresses a subset of our wish list. The area Optimus does address—specifically switching times—is dealt with in what can only be described as an ideal fashion. The switch between IGP and discrete graphics is essentially instantaneous and transparent to the end-user, and it's so slick that it becomes virtually a must-have feature for any laptop with a discrete GPU. We'll talk about how Optimus works in a minute, but first let's discuss how we got to our present state of affairs.

It turns out that the Optimus hardware has been ready for a while, but NVIDIA has been working hard on the software side of things. On the hardware front, all of the current G200M and G300M 40nm GPUs have the necessary internal hardware to support Optimus. Earlier laptop designs using those GPUs don't actually support the technology, but the potential was at least there. The software wasn't quite ready, as it appears to be quite complex—NVIDIA says that the GeForce driver base now has more lines of code than Windows NT, for example. NVIDIA was also keen to point out that they have more software engineers (over 1000) than hardware engineers, and some of those software engineers are housed in partner offices (e.g. there are NVIDIA employees working over at Adobe, helping with the Flash 10.1 software). Anyway, the Optimus hardware and software are both ready for public release, and you should be able to find the first Optimus-enabled laptops for sale starting today.


The original switchable graphics designs used a hardware switch to allow users to select either an IGP or discrete GPU. The first such system that we tested was the ASUS N10JC, but the very first implementation came from Sony in the form of the VAIO SZ-110B launched way back in April 2006. (The Alienware m15x was also gen1 hardware.) Generation one required a system reboot in order to switch between graphics adapters, with hardware multiplexers routing power to the appropriate GPU and more multiplexers to route the video signal from either GPU to the various display outputs. On the surface, the idea is pretty straightforward, but the actual implementation is much more involved. A typical laptop will have three separate video devices: the laptop LCD, a VGA port, and a DVI/HDMI port. Adding the necessary hardware requires six (possibly more) multiplexer ICs at a cost of around $1 each, plus more layers on the motherboard to route all of the signals. In short, it was expensive, and what's worse the required system reboot was highly disruptive.

Many users of the original switchable graphics laptops seldom switched, opting instead to use either the IGP or GPU all the time. I still liked the hardware and practically begged manufacturers to include switchable graphics in all future laptop designs. My wish wasn't granted, although given the cost it's not difficult to see why. As for the rebooting, my personal take is that it was pretty easy to simply switch your laptop into IGP mode right before spending a day on the road. If you were going to spend most of the day seated at your desk, you'd switch to discrete mode. The problem is that, as one of the more technically inclined folks on the planet, I'm not a good representation of a typical user. Try explaining to a Best Buy shopper exactly what switchable graphics is and how it works, and you're likely to cause more confusion than anything. Our readers grasp the concept, but the added cost for a feature many wouldn't use meant there was limited uptake.


Generation two involved a lot more work on the software side, as the hardware switch became a software-controlled switch. NVIDIA also managed to eliminate the required system reboot (although certain laptop vendors continue to require a reboot or at least a logout, e.g. Apple). Again, that makes it sound relatively simple, but there are many hurdles to overcome. Now the operating system has to be able to manage two different sets of drivers, but Windows Vista in particular doesn't allow multiple display drivers to be active. The solution was to create a "Display Driver Interposer" that had knowledge of both driver sets. Gen2 launched in 2008, but the first laptop we reviewed with gen2 hardware and software was actually the ASUS UL80Vt, which took everything we loved about gen1 and made it a lot more useful. Now there was no need to reboot the system; you could switch between IGP and dGPU in about 5 to 10 seconds, theoretically giving you the best of both worlds. We really liked the UL80Vt and gave it our Silver Editors' Choice award, but there was still room for improvement.

First, the use of a driver interposer meant that generic Verde drivers would not work with switchable graphics gen2. The interposer conforms to the standard graphics APIs, but then there's a custom API to talk to the IGP drivers. The result is that the display driver package contains both NVIDIA and Intel drivers (assuming it's a laptop with an Intel IGP), so driver updates are far more limited. If either NVIDIA or Intel releases a new driver, there's an extra ~10 days of validation and testing that takes place; if all goes well, the new driver is released, but any bugs reset the clock. Of course, that's only in a best-case situation where NVIDIA and Intel driver releases happen at the same time, which they rarely do. In practice, the only time you're likely to get a new driver is if there's a showstopper bug of some form and the laptop OEM asks NVIDIA for a new driver drop. This pretty much takes you back to the old way of doing mobile graphics drivers, which is not something we're fond of.

Another problem with gen2 is that there are still instances where switching from IGP to dGPU or vice versa will "block". Blocking occurs when an application that uses the graphics system is resident in memory. If blocking were limited to 3D games it wouldn't be a critical problem, but the fact is blocking can occur with many applications—including Minesweeper and Solitaire, web browsers where you're watching (or have watched) Flash videos, etc. If a blocking application is active, you need to close it in order to switch. The switch also results in a black screen as the hardware shifts from one graphics device to the other, which looks like potentially flaky hardware if you're not expecting the behavior. (This appears to be why Apple MacBook Pro systems require a reboot/logout to switch, even though they're technically gen2 hardware.) Finally, it's important to note that gen2 costs just as much as gen1 in terms of muxes and board layers, so it can still increase BOM and R&D costs substantially.

Incidentally, AMD switchable graphics is essentially equivalent to NVIDIA's generation two implementation. The HP Envy 13 is an example of ATI switchable graphics, with no reboot required and about 5-10 seconds required to switch between IGP and discrete graphics (an HD 4330 in this case—and why is it they always seem to use the slowest GPUs; why no HD 4670?).

For technically inclined users, gen2 was a big step forward and the above problems aren't a big deal; for your typical Best Buy shopper, though, it's a different story. NVIDIA showed a quote from Roger Kay, President of Endpoint Technology Associates, that highlights the problem for such users.

"Switchable graphics is a great idea in theory, but in practice people rarely switch. The process is just too cumbersome and confusing. Some buyers wonder why their performance is so poor when they think the discrete GPU is active, but, unknown to them, it isn't."

The research from NVIDIA indicates that only 1% of users ever switched between IGP and dGPU, which frankly seems far too low. Personally, if a laptop is plugged in then there's no real reason to switch off the discrete graphics, and if you're running on battery power there's little reason to enable the discrete graphics most of the time. It could be that only 1% of users actually recognize that there's a switch taking place when they unplug their laptop; it could also be that MacBook Pro users with switchable graphics represented a large percentage of the surveyed users. Personally, I am quite happy with gen2 and my only complaint is that not enough companies use the technology in their laptops.



NVIDIA Optimus Unveiled

Optimus is switchable graphics on steroids, but how does it all work and what makes it so much better than gen2? If you refer back to the last page where we discussed the problems with generation two switchable graphics, Optimus solves virtually every one of the complaints. Manual switching? It's no longer required. Blocking applications? That doesn't happen anymore. The 5 to 10 second delay is gone, with the actual switch taking around 200 ms—and that time is hidden in the application launch process, so you won't notice it. Finally, there's no flicker or screen blanking when you switch between IGP and dGPU. The only remaining concern is the frequency of driver updates. NVIDIA has committed to rolling Optimus into their Verde driver program, which means you should get at least quarterly driver updates, but we're still looking forward to the day when notebook and desktop drivers all come out at the same time.

As we mentioned, most of the work that went into Optimus is on the software side of things. Where the previous switchable graphics implementations used hardware muxes, all of that is now done in software. The trick is that NVIDIA's Optimus driver is able to look at each running application and decide whether it should use discrete graphics or the IGP. If an application can benefit from discrete graphics, the GPU is "instantly" powered up (most of the 200 ms delay is spent waiting for voltages to stabilize), the GPU does the necessary work, and the final result is then copied from the GPU frame buffer into the IGP frame buffer over the PCI Express bus. This is how NVIDIA is able to avoid screen flicker, and they have apparently done all of the background work using standard API calls so that there's no need to worry about updating drivers for both graphics chips simultaneously. They're a bit tight-lipped about the precise details of the software implementation (with a patent pending for what they've done), but we can at least go over the high-level view and block diagram as we discuss how things work.
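
As a purely illustrative sketch (NVIDIA hasn't published the driver internals, so every name below is hypothetical), the per-application flow they describe boils down to something like this:

```python
# Hypothetical sketch of the Optimus flow described above -- not NVIDIA's code.
class FakeGPU:
    def __init__(self, name):
        self.name = name
        self.powered = False

    def power_up(self):
        # On real hardware this is the ~200ms step, mostly waiting for voltages.
        self.powered = True

    def render(self, app):
        return f"frame of {app} rendered by {self.name}"

def present_frame(app, use_dgpu, dgpu, igp):
    if use_dgpu:                       # decision comes from the application profiles
        if not dgpu.powered:
            dgpu.power_up()
        frame = dgpu.render(app)
        igp_framebuffer = frame        # copied over PCI-E into the IGP frame buffer
    else:
        igp_framebuffer = igp.render(app)
    return igp_framebuffer             # the IGP always drives the actual display

dgpu, igp = FakeGPU("dGPU"), FakeGPU("IGP")
print(present_frame("game.exe", True, dgpu, igp))      # powers up and uses the dGPU
print(present_frame("notepad.exe", False, dgpu, igp))  # stays on the IGP
```

The key point the sketch tries to capture is that the display is always connected to the IGP; the dGPU only ever produces frames that get copied into the IGP's buffer.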


NVIDIA states that their goal was to create a solution that was similar to hybrid cars. In a hybrid car, the driver doesn't worry about whether they're currently using the battery or running off the regular engine. The car knows what's best and dynamically switches between the two as necessary. It's seamless, it's easy, and it just works. You can get great performance, great battery life, and you don't need to worry about the small details. (Well, almost, but we'll discuss that in a bit.) The demo laptop for Optimus is the ASUS UL50Vf, which looks identical to the UL50Vt from the outside, but there are some internal changes.


Previously, switchable graphics required several hardware multiplexers and a hardware or software switch. With Optimus, all of the video connections come through the IGP, so there's no extra hardware on the motherboard. Let me repeat that, because this is important: Optimus requires no extra motherboard hardware (beyond the GPU, naturally). It is now possible for a laptop manufacturer to have a single motherboard design with an optional GPU. They don't need to have extra layers for the additional video traces and multiplexers, R&D times are cut down, you don't need to worry about signal integrity issues or other quality concerns, and there's no extra board real estate required for multiplexers. In short, if a laptop has an NVIDIA GPU and a CPU/chipset with an IGP, going forward there is no reason it shouldn't have Optimus. That takes care of one of the biggest barriers to adoption, and NVIDIA says we should see more than 50 Optimus enabled notebooks and laptops this summer. These will be in everything from next-generation ION netbooks to CULV designs, multimedia laptops, and high-performance gaming monsters.


We stated that most of the work was on the software side, but there is one new hardware feature required for Optimus, which NVIDIA calls the Optimus Copy Engine. In theory you could do everything Optimus does without the copy engine, but in that case the 3D Engine would be responsible for getting the data over the PCI-E bus and into the IGP frame buffer. The problem with that approach is that the 3D engine would have to delay work on graphics rendering while it copied the frame buffer, resulting in reduced performance (it would take hundreds of GPU cycles to copy a frame). To eliminate this problem, NVIDIA added a copy engine that works asynchronously from frame rendering, and to do that they had to separate the rendering effort from the rest of the graphics engine. With those hardware changes complete, the rest is relatively straightforward. Graphics rendering is already buffered, so the Copy Engine simply transfers a finished frame over the PCI-E bus while the 3D Engine continues working on the next frame.
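
Here's a toy model of that pipeline, with Python threads standing in for the hardware engines. The timings are simply the figures NVIDIA quoted (a 60Hz frame time and roughly 3ms per copy), and the point is just that rendering never stalls waiting for the copy:

```python
# Toy model of the asynchronous Copy Engine: rendering and copying overlap.
import queue
import threading
import time

finished_frames = queue.Queue()

def three_d_engine(num_frames):
    for n in range(num_frames):
        time.sleep(0.016)              # pretend rendering takes one 60Hz frame time
        finished_frames.put(n)         # hand the buffered frame to the Copy Engine
    finished_frames.put(None)          # sentinel: no more frames

def copy_engine():
    while True:
        frame = finished_frames.get()
        if frame is None:
            break
        time.sleep(0.003)              # ~3ms to copy a frame over PCI-E
        print(f"frame {frame} copied into the IGP frame buffer")

copier = threading.Thread(target=copy_engine)
copier.start()
three_d_engine(5)                      # the 3D engine never waits on the copies
copier.join()
```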

If you're worried about bandwidth, consider this: in a worst-case situation where 2560x1600 32-bit frames are sent at 60FPS (the typical LCD refresh rate), the copying only requires 983MB/s. An x16 PCI-E 2.0 link is capable of transferring 8GB/s, so there's still plenty of bandwidth left. A more realistic resolution of 1920x1080 (1080p) reduces the bandwidth requirement to 498MB/s. Remember that PCI-E is bidirectional as well, so there's still 8GB/s of bandwidth from the system to the GPU; the bandwidth from GPU to system isn't used nearly as much. There may be a slight performance hit relative to native rendering, but it should be less than 5%, and the cost and routing benefits far outweigh such concerns. NVIDIA states that copying a frame takes roughly 20% of the frame display time, adding around 3ms of latency.
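
The arithmetic is easy enough to reproduce; a frame is just width times height times four bytes, sent once per refresh:

```python
# Reproducing the copy bandwidth numbers above (32-bit color, 60FPS).
def copy_bandwidth_mb_per_s(width, height, bytes_per_pixel=4, fps=60):
    return width * height * bytes_per_pixel * fps / 1e6

print(copy_bandwidth_mb_per_s(2560, 1600))  # ~983 MB/s worst case
print(copy_bandwidth_mb_per_s(1920, 1080))  # ~498 MB/s at 1080p
# Either way, a small slice of the ~8GB/s an x16 PCI-E 2.0 link offers per direction.
```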

That covers the basics of the hardware and software, but Optimus does work beyond simply rendering an image and transferring it to the IGP buffer. It needs to know which applications require the GPU, and that brings us to a discussion of the next major software enhancement NVIDIA delivers with Optimus.



Optimus: Recognizing Applications

Beyond addressing the problems with switching between IGP and dGPU, the Optimus driver has also been re-architected to provide an extensible framework that allows NVIDIA to support new applications with minimal effort. We've seen application profiling in graphics drivers for a while now, but Optimus adds a new type of profiling. Whereas gaming profiles are generally designed to get optimal performance out of the graphics hardware, Optimus is intended to provide an ideal platform for a variety of tasks. If an application can benefit from running on a discrete GPU, the Optimus driver will route the necessary calls to the dGPU. Likewise, if an application doesn't need any extra performance/features, the calls get routed to the IGP for rendering. The idea is that Optimus will use the GPU if there's a performance, quality, and/or power saving benefit.


At present, Optimus recognizes applications based on the executable file name. In some cases, the recognition goes a little deeper. For example, surfing the Internet generally won't benefit from the dGPU; however, if you happen to be viewing a Flash video (and you have the Flash 10.1 beta installed for your browser), Optimus will power up the GPU and begin routing calls through the video processing engine. Close the Flash video website and the GPU can turn off again. Similarly, if you load up a media player application, the GPU won't be necessary if you're dealing with SD content but it would be enabled for HD content (and this can be changed depending on the hardware if necessary). Optimus should activate the dGPU any time an application requires DXVA, DirectX (or OpenGL), or CUDA features.
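
Conceptually, the recognition step amounts to a lookup table plus a few API-usage triggers. The sketch below is a hypothetical illustration only; the executable names and the profile format are made up, since NVIDIA hasn't published theirs:

```python
# Hypothetical illustration of Optimus-style application recognition.
OPTIMUS_PROFILES = {
    "somegame.exe":    "dGPU",   # 3D title that benefits from discrete graphics
    "browser.exe":     "IGP",    # plain web surfing stays on the IGP
    "mediaplayer.exe": "IGP",    # SD playback is fine on the IGP
}

def pick_renderer(exe_name, uses_dxva=False, uses_3d_api=False, uses_cuda=False):
    # DXVA, DirectX/OpenGL, or CUDA usage can trigger the dGPU regardless of profile
    if uses_dxva or uses_3d_api or uses_cuda:
        return "dGPU"
    return OPTIMUS_PROFILES.get(exe_name, "IGP")    # unknown apps default to the IGP

print(pick_renderer("somegame.exe"))                 # dGPU
print(pick_renderer("browser.exe"))                  # IGP
print(pick_renderer("browser.exe", uses_dxva=True))  # dGPU once Flash 10.1 HD video starts
```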

The big change in application profiling is that the profiles are now separate from the main graphics driver. NVIDIA has created a robust infrastructure to deal with automatically downloading and updating the profiles, with user customizable options directing how frequently this should occur. This means that unlike SLI support, where a fully functional profile might require one or two releases before it's integrated into the standard NVIDIA drivers, NVIDIA can add applications that can benefit from a GPU to the Optimus profile list within days or perhaps even hours.

What's more, it's possible to add an application yourself if necessary. As an example, our Steam version of Batman: Arkham Asylum wasn't enabling the dGPU initially; we added a profile pointing at the Steam Batman EXE and the problem was solved. Ideally, we shouldn't have had to do that, and if "only 1%" of users ever manually switched between IGP and dGPU before, we suspect far fewer than 1% would be willing to manually add an application to the Optimus profile list. Hopefully NVIDIA will be able to push out regular profile updates for such omissions quickly.

The automatic updating of Optimus profiles also raises the possibility of using automatic updates for other areas. The big one is going to be SLI profile support, and while it isn't part of the current program it sounds as though NVIDIA intends to add that feature down the road. Once the infrastructure is in place and the drivers support a separate profile download, it should be relatively easy to get SLI profiles in a similar manner. It would also be interesting to see NVIDIA allow users to "suggest" applications for Optimus support through the drivers—i.e., anything that a user has manually added could be uploaded to the server, and if an application name gets enough hits NVIDIA would be more likely to enable support. Naturally, there would be some privacy concerns with such a scheme and some users wouldn't want to participate in such a program, but it might be useful.

As an aside, we've wanted AMD/ATI to enable manual user profiling of games for CrossFire for some time. They still haven't done that, and now NVIDIA has taken things a step further and separated the profiles from the main drivers. This is definitely an improvement over previous profiling schemes and it's something we hope to see more of in the future—from both AMD as well as NVIDIA.



NVIDIA Optimus Demonstration

So how well does Optimus actually work in practice? Outside of a few edge cases which we will mention in a moment, the experience was awesome. Some technophiles might still prefer manual control, but the vast majority of users will be extremely happy with the Optimus solution. You no longer need to worry about what video mode you're currently using, as the Optimus driver can switch dynamically. Even when you run several applications that can benefit from discrete graphics, we didn't encounter any anomalies. Load up a Flash video and the GPU turns on; load a CUDA application and the GPU stays on, even if you then close the Flash video. It's seamless and it takes the guesswork out of GPU power management.


NVIDIA provided a demonstration video showing second-generation switchable graphics compared to Optimus. We've uploaded the video to our server for your enjoyment. (Please note that QuickTime is required, and the sample video uses the H.264 codec so you'll need a reasonable CPU and/or GPU to view it properly.) At the Optimus Deep Dive, NVIDIA provided two other demonstrations of engineering platforms to show how well Optimus works. Sadly, we couldn't take pictures or record videos, but we can talk about the demonstrations.

The first demonstration was an open testbed notebook motherboard using engineering sample hardware. This definitely wasn't the sort of system you'd run at home, since there was a notebook LCD connected via a standard notebook power/video cable to the motherboard, exposed hardware, etc. The main purpose was to demonstrate how quickly the GPU turns on/off, as well as the fact that the GPU is really OFF. NVIDIA started by booting up Win7, at which point the mobile GPU is off. A small LED on the GPU board would light up when the GPU was on, and the fans would also spin. After Windows finished loading, NVIDIA fired up a simple app on the IGP and nothing changed. Next they started a 3D app and the GPU LED/fan powered up as the application launched; when they shut down the application, the LED/fan powered back off. At one point, with the GPU powered off, NVIDIA removed the GPU module from the system and disconnected its fan; they again loaded a simple application to demonstrate that the system was still fully functional and running off the IGP. (Had they chosen to launch a 3D application at this point, the system would obviously have crashed.) So yes, the GPU in an Optimus laptop is really powered down completely when it's not needed. Very cool!

The second demonstration wasn't quite as impressive, since no one removed a GPU from a running system. This time, Lenovo provided a technology demonstration for NVIDIA showing power draw while running various tasks. The test system was an engineering sample 17" notebook, and we weren't given any details other than the fact that it had an Arrandale CPU and some form of Optimus GPU. The Lenovo notebook had a custom application showing laptop power draw, updating roughly once per second. After loading Windows 7, the idle power was shown at 17W. NVIDIA launched a 3D app on the IGP and power draw increased to 32W, but rendering performance was quite slow. Then they launched the same 3D app on the dGPU and power use hit 39W, but with much better 3D performance. After closing the application, power draw dropped right back to 17W in a matter of seconds. At present there's no word on if/when this Arrandale-based laptop will ship, but if Lenovo can provide engineering sample hardware, it's a safe bet they'll ship Optimus laptops in the future.

The final "demonstration" is going to be more in line with what we like to see. Not only did and NVIDIA show us several running Optimus notebooks/laptops, but they also provided each of the attendees with an ASUS UL50Vf sample for review. The UL50Vf should be available for purchase today, and it sounds like the only reason NVIDIA delayed the Optimus launch until now was so that they could have hardware available for end-user purchase. The final part of our Optimus overview will be a review of the ASUS UL50Vf.



ASUS UL50Vf Overview

The ASUS UL50Vf is essentially the Optimus version of the UL50Vt, and the UL50Vt is the 15.6" version of the UL80Vt we liked so much. To be honest, we are a lot more interested in the ASUS UL30Jc—a 13.3" Optimus CULV laptop with an optical drive (some models will even ship with Blu-ray support in the near future). Here are the specifications for the UL50Vf.

ASUS UL50Vf Specifications
Processor: Intel Core 2 Duo SU7300 (2x1.3GHz, 45nm, 3MB L2, 800FSB, 10W); overclockable to 1.73GHz/1066FSB (Turbo33)
Chipset: Intel GS45 + ICH9M
Memory: 2x2GB DDR3-1066 (max 2x4GB)
Graphics: NVIDIA GeForce G210M 512MB (16 SPs, 606/1468/1580 core/shader/RAM clocks) + Intel GMA 4500MHD IGP, switchable via NVIDIA Optimus
Display: 15.6" LED glossy 16:9 768p (1366x768)
Hard Drive(s): 320GB 5400RPM HDD
Optical Drive: 8x DVDR SuperMulti
Networking: Gigabit Ethernet, Atheros AR9285 802.11n
Audio: HD Audio (2 stereo speakers with two audio jacks)
Battery: 8-cell, 15V, 5600mAh, 84Wh ("up to 12 hours")
Front Side: None
Left Side: Headphone/microphone jacks, 2 x USB, HDMI, flash reader (MMC/MS/MS Pro/SD), cooling exhaust, AC power connection
Right Side: 1 x USB 2.0, optical drive (DVDRW), Gigabit Ethernet, VGA, Kensington lock
Back Side: None
Operating System: Windows 7 Home Premium 64-bit
Dimensions: 15.4" x 10.4" x 1.05" (WxDxH)
Weight: 5.2 lbs (with 8-cell battery)
Extras: Webcam, 103-key keyboard with 10-key, flash reader (MMC/MS/MS Pro/SD), multi-touch touchpad, brushed aluminum cover, ExpressGate OS (8-second boot)
Warranty: 2-year global warranty, 1-year battery pack warranty, 1-year accidental damage, 30-day zero bright dot LCD
Pricing: $800 MSRP

Obviously, there were some changes to the motherboard in order to work with Optimus. Specifically, ASUS was able to remove any multiplexers and extra signal routing from the UL50Vt design. However, those changes are on the inside and you can't see any difference looking at the exterior. Specifications remain the same as the UL50Vt/UL80Vt, and performance is virtually the same as the UL80Vt we tested. (There will be some minor differences due to the change in LCD size and the use of different drivers, but that's about it.)

Pretty much everything we had to say about the UL80Vt applies to the UL50Vf. The features are great, and Optimus makes it even better. You can overclock the CPU by 33% in order to improve performance, or you can run the CULV processor at the stock speed and improve battery life. Unlike Optimus, changing the CPU speed doesn't happen on-the-fly (unfortunately), but it is a little easier than what we experienced with the UL80Vt. This time, instead of requiring a full system reboot, enabling/disabling Turbo33 only requires the system to enter suspend mode. In that sense, Turbo33 is sort of like switchable graphics gen2: it requires manual user intervention and takes 10 to 15 seconds to shift modes. Ideally, we would like to be able to switch the overclock without suspending, and even better would be the option to enable overclocking on AC power and disable it on DC power.
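
For reference, the Turbo33 numbers in the spec table work out as you'd expect if, as the 1066FSB figure there suggests, the 33% comes from raising the 800MHz FSB to 1066MHz while leaving the CPU multiplier alone:

```python
# Checking the Turbo33 figures from the spec table: FSB goes from 800 to 1066MHz.
stock_fsb_mhz, turbo_fsb_mhz = 800, 1066
stock_clock_ghz = 1.3

print(f"FSB increase: {turbo_fsb_mhz / stock_fsb_mhz - 1:.0%}")                      # ~33%
print(f"Overclocked CPU: {stock_clock_ghz * turbo_fsb_mhz / stock_fsb_mhz:.2f} GHz")  # ~1.73 GHz
```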

The UL50Vf carries over the aluminum cover on the LCD lid along with the glossy interior plastic and LCD. It also uses the same 1366x768 LCD resolution. Considering the larger chassis, we feel ASUS would have been better off increasing the LCD resolution slightly (1440x900 or 1600x900 would have been good), and we would have also appreciated a faster dGPU. With Optimus allowing the GPU to switch on/off as needed and a 15.6" chassis, we feel ASUS should have been able to get something like the GT 335/325M into the UL50Vf. After all, Alienware is managing to cram similar hardware into an 11.6" chassis with the M11x!

Before we get to the tests, we did encounter a few minor glitches during testing. First, we couldn't get x264 decode acceleration to work with the dGPU using Media Player Classic - Home Cinema. We could set the application to load on the discrete graphics, but MPC-HC apparently didn't know how to talk to the Optimus GPU and ended up running off the IGP. Since the GMA 4500MHD was more than capable of handling our 720p and 1080p x264 files, we're not too concerned with this issue. Another glitch is that CPU-Z refused to work; it would hang at the graphics detection stage. This isn't so much a problem with Optimus as a need for changes to CPU-Z—and very likely some other low-level tools that talk directly to the graphics hardware. (We didn't try any overclocking or tweaking of the GPU on the UL50Vf, but we suspect it might be a bit trickier than normal.)

Finally, when using the dGPU and playing games, we periodically noticed a slight glitch where the screen would flicker black for a frame. We couldn't come up with any repeatable test, but it seems like the problem may be related to the Copy Engine transferring incorrect data. This was not limited to any one title, but it occurred most frequently during our Empire: Total War testing (usually at least once every 60 seconds). It would hardly be surprising to find that there are a few bugs in the NVIDIA drivers, and most likely this is one of them. We didn't find the occasional "flicker" to be a serious issue and at present we really don't have enough information to say more about what might be causing the glitch we experienced. We'll do some additional testing to see if we can determine if this is more of a problem with specific games or if it happens on all games.

We've run an abbreviated set of tests with the UL50Vf. As mentioned, performance is virtually identical to the UL80Vt, the primary difference being the ability to immediately switch between discrete and integrated graphics as necessary. We will highlight both the old UL80Vt and the UL50Vf in our charts for comparison; you can see additional performance results for the UL80Vt in our previous review. All tests were conducted with the default graphics settings, so the discrete GPU is used when Optimus deems it beneficial and the IGP is used in all other cases. The gaming and general performance tests are run with Turbo33 engaged (33% CPU overclock) while battery testing was conducted at stock CPU speed.



ASUS UL50Vf General Performance

Futuremark PCMark Vantage

Futuremark PCMark05

Internet Performance

3D Rendering - CINEBENCH R10

3D Rendering - CINEBENCH R10

Video Encoding - x264

Video Encoding - x264

General performance is what you would expect from an overclocked CULV with G210M. It's not going to outperform higher end CPU/GPU configurations, but for a large number of users it will do very well. As we've mentioned before, CULV is generally more than three times as fast as netbook Atom (N280/N450) and yet it doesn't cost three times as much. Add on the 33% overclock with the UL50Vf and the lead only increases. We do see some minor performance variations when comparing the UL50Vf with the UL80Vt, but the scores are close enough that slight differences in drivers and components can account for the change.



ASUS UL50Vf Graphics Performance

Batman: Arkham Asylum

Crysis

Empire: Total War

Far Cry 2

GRID

Futuremark 3DMark Vantage

Futuremark 3DMark06

Futuremark 3DMark05

Futuremark 3DMark03

This is the big selling point for including a discrete GPU. 3DMark shows a slightly larger difference between the UL80Vt and the UL50Vf, but all of the scores are close enough that driver differences can account for the changes. The G210M still isn't a particularly fast GPU, as it only has 16 SPs; as we mentioned earlier, we would have liked to see something like the GT220M that doubles the SP count. Even better would be a GT 325M (3x the SP count) or the GT335M (72 SPs). Any of these would also improve memory bandwidth by moving to a 128-bit bus. The G210M is capable of running pretty much any current game, but there will be times where you'll need to run at very low detail settings in order to get above 30 FPS.



ASUS UL50Vf Battery Life and Power

Battery Life - Idle

Battery Life - Internet

Battery Life - DivX 720p

Battery Life - x264 720p

Relative Battery Life

As with the UL80Vt (and all CULV laptops), battery life is a very strong selling point. Putting CULV into a 15.6" chassis and adding a discrete GPU wouldn't be the first choice of most users, however, and here we see the UL50Vf falling behind the UL80Vt. As far as we can tell, the major difference comes down to the LCD, and the result is that the UL80Vt is able to deliver anywhere from 15 (x264) to 215 (idle) minutes more battery life. The Internet test is probably the best overall indication of battery life in common usage scenarios, and even there the 14" UL80Vt delivers 11% more battery life.

This is not to say that the UL80Vt is the better laptop, of course; if the choice is UL50Vf with Optimus or UL80Vt with the second generation switchable graphics, we'd definitely recommend the UL50Vf. However, it does raise the question of why NVIDIA/ASUS would launch the 15.6" model first. Smaller models should follow soon, along with faster, more powerful laptops like the ASUS N61.

NVIDIA's presentation suggests that Optimus allows you to get the best of both worlds: performance as well as battery life. As the test results so far have shown, all of that is possible. However, do keep in mind that you still can't get performance at the same time as long battery life. If you fire up a game on virtually any laptop, even with IGP, battery drain increases substantially. We performed a test of exactly that sort of scenario and the UL50Vf delivered 178 minutes of run time—impressive compared to some larger, faster offerings, sure, but not something you're going to be able to use on a long plane ride or at a LAN party without plugging in.



ASUS UL50Vf LCD Quality

Wrapping things up, we have the usual disappointing LCD. We understand the need for balancing price and features, but the display is something you will look at every minute you're using a laptop. As such, a better LCD panel would be very much appreciated.

Laptop LCD Quality - Contrast

Laptop LCD Quality - White

Laptop LCD Quality - Black

Laptop LCD Quality - Color Accuracy

Laptop LCD Quality - Color Gamut



This is your typical inexpensive LCD panel: 15.6 inches of low contrast, low color gamut, low resolution "goodness". Considering the competition, it's difficult to get something with a substantially better LCD without compromising on other features. The question is whether you want to compromise on battery life, LCD quality, price, or features. It would be nice not to have to compromise at all, but that's difficult to do.



Optimus, Prime?

"The Autobots called… they want their leader back!" Transformers jokes aside, it's amazing how much the computing landscape can change in such a short amount of time. A few months ago, switchable graphics seemed like a great technology that not enough companies were interested in using. Now, switchable graphics tastes like leftovers from last week. All things being equal, who would want to spend the same amount of money for something that doesn't work nearly as well? Certainly I wouldn't, so once more NVIDIA has put their competition in a tough situation. They did this with the Verde notebook driver program, and now just when AMD looks like they're catching up—DirectX 11 and switchable graphics are starting to appear in a few laptops, and AMD is set to announce their own updated mobile driver program—NVIDIA goes and raises the bar.


As far as the ability to switch quickly between discrete and integrated graphics is concerned, it's difficult to imagine anything that still needs improvement. If you're in the market for a new laptop and you want a discrete GPU, then you definitely want an Optimus laptop. Optimus works well and is generally transparent to the end user; during our testing we didn't encounter any serious problems, and certainly nothing that would make us hesitant to recommend purchasing an Optimus laptop. That doesn't mean there aren't other areas that could be improved, however.

One item that we would still like to see NVIDIA address is their driver release schedule. Specifically, we would love to see desktop and mobile drivers released at the same time. The far bigger concern we have right now is that NVIDIA's mobile GPUs are currently "last gen" technology; ATI has already started to ship Mobility Radeon 5000 GPUs, and you can find laptops using them online quite easily. DX11 is out, NVIDIA is ready to begin shipping DX11 desktop GPUs, and we are still using mobile GPUs that are based on the old G9x architecture. We expect NVIDIA to release their next mobile architecture in the not-too-distant future and it will certainly include Optimus, but we don't know when exactly this will happen and we don't know how fast the next-generation parts are going to be. If the new parts are DX11 and they launch sooner rather than later, the initial Optimus laptops are going to have a very short shelf life.

NVIDIA is pitching Optimus as a solution that will be available top to bottom. Whether you want a netbook, a laptop, or a high-end gaming notebook, there will be some form of Optimus available. We remain skeptical about the market for netbooks with discrete graphics; ION made sense when it was IGP, but a next generation ION as a discrete GPU with Intel IGP is going to be a tough sell. After all, Atom is already a serious bottleneck for a 9400M, so anything faster goes to waste. Since netbooks tend to be focused primarily at low cost markets, increasing costs by $50 or even $25 on the manufacturing side is going to be difficult. The best way to sell such a product would be to make sure all the other features are up to snuff; give us great build quality, a high contrast LCD panel, and a keyboard and touchpad that don't suck and we'd at least be willing to pay more than $300.

Another area where Optimus may not be necessary is video decoding. Intel's IGPs have been the whipping boy of graphics hardware for some time, but while they certainly aren't a great gaming solution the GMA 4500MHD generally works well for video decoding and the new HD Graphics improves the situation quite a bit. If Intel can get Flash 10.1 decode acceleration to work, the only people that will need discrete GPUs are gamers and anyone using CUDA apps (and DirectCompute when it starts to gain traction). Do you spend $600 for a decent CULV laptop or $800 for an Optimus laptop where the major advantage is gaming performance? There are going to be plenty of users that don't want to spend the extra money.

Update: There were a few questions raised by readers, and I have asked NVIDIA for comment. I'm putting this here as both are technically negatives, although the likelihood of either being a major concern is quite small—they will only affect a small subset of users. First, if you want to drive an external LCD at 120Hz you will need a laptop with dual-link DVI (or equivalent), and that's not likely to be present on GS45-based implementations like the UL50Vf. 3D Vision isn't likely to get support initially either, though you would want something faster than the G210M for that regardless. Some of you asked about Linux support for Optimus, and if your usage plans fall into that category we have bad news: at present, NVIDIA is focusing on Win7 for Optimus support. We could see Linux support (and perhaps OS X as well) down the road, but right now it doesn't exist. Moreover, we're not even sure what would happen if you try to install Linux on an Optimus laptop... so we're going to quickly check and post the results. It will either run off the IGP exclusively, or it might simply fail. Given that Optimus involves a complex software element, the dGPU will not work under Linux right now.

Update #2: Sorry this took so long to get back, but I did actually test what happens with Optimus laptops and Linux. Even using an older 9.04 installation of Ubuntu, everything works as expected... which is to say, you can run Linux but you won't get the Optimus GPU. Since there's no direct connection of any display to the GPU, Optimus is required to move rendered content from the G210M to the GMA 4500MHD display buffer. As noted above, NVIDIA is currently focusing their efforts on making Optimus work on Windows 7; it's possible they will move to other OSes down the road, but they are not committed to doing so. If you want to run Linux and you want to use a discrete GPU, Optimus Technology won't work. Perhaps some skilled Linux community folks can figure out a way to do Optimus-like technology on their own, but given the level of detail required to interface with the GPU and IGP we see that as unlikely.

As long as you're in the market for a laptop with a discrete GPU, however, Optimus is definitely where it's at. When we first tried switchable graphics a couple years ago, we said it was an awesome idea and recommended all future laptops try to include the technology. Apparently, manufacturers weren't interested. We liked the improved implementations even more last year, and yet plenty of Core i3/i5 laptops are shipping without switchable graphics. Ugh. With Optimus, there's no excuse left for skipping out on switchable graphics. It's seamless, it doesn't add to the cost or R&D efforts (other than a small fee to NVIDIA that can easily be absorbed into the GPU cost), and even technophobes can benefit—whether they know their laptop is switching between GPU modes or not. So we say again: no NVIDIA-based laptop going forward should ship without Optimus. In fact, if a manufacturer has a design in the works that's almost ready for market and it uses an NVIDIA discrete GPU without Optimus, please do us a favor and rework the product.

If you're interested in the full NVIDIA presentation, here's a complete gallery of the slides.

What about the ASUS UL50Vf?

As far as laptops go, the UL50Vf is a decent design but it's definitely not our favorite product ever. ASUS deserves some praise for being first, and the best part of the laptop is the Optimus technology we just finished praising. However, Optimus is set to launch in numerous other laptops in the near future and we expect some of the upcoming models will surpass the UL50Vf in several areas. Arrandale models will offer improved performance, and we should see 13.3" laptops with the same components as the UL50Vf. We should also see 15.6" laptops with a faster GPU, and Alienware's M11x has a GT335M in an 11.6" chassis—though it appears the M11x uses the older gen2 switchable graphics.

As for the UL50Vf, the keyboard is easy to type on and the construction is decent as well. We just wish ASUS would stop with the glossy black plastic shells, and we also wish the LCD was better. Battery life appears to have dropped somewhat compared to the 14" UL80Vt, but if you want Optimus right now and you're okay with a 15.6" chassis, the ASUS UL50Vf is a good laptop that should serve you well. We're not sure on the MSRP, but we expect it to be the same as the previous generation UL50Vt—after all, Optimus is supposed to make the design and component costs lower, right? If you're looking for something else, here are a few more ASUS Optimus laptops that might suit you better.

Gallery: ASUS UL30Jc

Right now, the ASUS UL30Jc is the laptop we're really looking forward to testing. It's not a ULV design, but it does use Core i3-350M (potentially i5/i7 in some models), it ditches the glossy black plastic, and it has a 13.3" chassis and LCD. The GPU is the G310M, but performance is about the same as G210M (1530MHz SP clock instead of 1500MHz, and still 16 SPs). Some models will even have Blu-ray support—though the price is likely to be $1000+ in that case. Anyway, feast your eyes on the above pictures and tell me that doesn't look a ton better than the UL50Vf. Brushed aluminum on the palm rest as well as the LCD? I think I've reached Nerdvana. It's too bad they still appear to have a glossy black bezel around the display, but it might not be too late to provide a second model with a matte bezel. How about it, ASUS?

Besides the UL30Jc and the UL50Vf, we know ASUS has at least two other Optimus laptops in the works. The N61JV is a 16" chassis available in brown or white, with an Arrandale (Core i3/i5) processor. We're still waiting for full details on which GPU it uses, but hopefully it's at least a G220M or faster (GT335M would be great!). The N61 should arrive shortly and we'll review it as soon as we can. The N82JV is the other Optimus laptop we know ASUS has coming, and it appears to be a 14" model that presumably builds off the UL80Vt design. It's brown instead of glossy black, but we don't know any details of the remaining hardware. We'd love for it to be an Arrandale CULV design, but we'll have to wait a bit longer to find out.
