LucidLogix Virtu MVP Technology and HyperFormance

While not specifically a feature of the chipset, Z77 will be one of the first chipsets to ship with this remarkable new technology. LucidLogix was the brains behind the Hydra chip, a hardware/software combination designed to allow GPUs from different manufacturers to work together (we reviewed its last iteration on the ECS P67H2-A). Lucid was also behind the original Virtu software, designed to let a discrete GPU remain idle until needed while the integrated GPU handles the video output (as we reviewed with the ASUS P8Z68-V Pro). This time, we get to see Virtu MVP, a new technology designed to increase gaming performance.

To explain how Virtu MVP works, I am going to borrow liberally from, and condense, what is said in the Lucid whitepaper on the technology; however, everyone is free to read the whitepaper itself, which is a rather interesting ten pages.

The basic concept behind Virtu MVP is the relationship between how many frames per second the discrete GPU can calculate and what is actually shown on the screen to the user, in an effort to increase the 'immersive experience'.

Each screen/monitor the user has comes with a refresh rate, typically 60 Hz, 75 Hz, or 120 Hz for 3D monitors (Hz = Hertz, or times per second). This means that 60 times per second the system will pull out what is in the frame buffer (the piece of memory that holds what the GPU has computed) and display it on the screen.

With standard V-Sync, the system will only pull out what is in the buffer at certain intervals, namely at divisors of the base frequency (e.g. 60, 30, 20, 15, 12, 10, 6, 5, 4, 3, 2, or 1 fps for a 60 Hz display), depending on the monitor being used. The issue is what happens when the GPU is much faster (or slower) than the refresh rate.
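As a rough illustration of that divisor behaviour (a minimal sketch in Python, not anything Lucid ships), here are the frame rates V-Sync can actually deliver at 60 Hz, and what happens when the GPU cannot quite keep up with the refresh rate:

```python
import math

# With V-Sync on, a frame can only be presented on a refresh boundary, so the
# achievable frame rates are the divisors of the refresh rate: a frame that
# misses one refresh simply waits for the next.

REFRESH_HZ = 60
vsync_rates = [REFRESH_HZ // n for n in range(1, REFRESH_HZ + 1) if REFRESH_HZ % n == 0]
print(vsync_rates)   # [60, 30, 20, 15, 12, 10, 6, 5, 4, 3, 2, 1]

# If the GPU needs 20 ms per frame (50 fps), each frame misses one 16.7 ms
# refresh and waits for the next, so V-Sync effectively drops it to 30 fps.
refresh_ms = 1000.0 / REFRESH_HZ
gpu_frame_ms = 20.0
refreshes_per_frame = math.ceil(gpu_frame_ms / refresh_ms)
print(REFRESH_HZ / refreshes_per_frame)   # 30.0
```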

The key tenet of Lucid’s new technology is responsiveness, a wide-ranging term which could mean many things. Lucid distils it into two key questions:

a) How many frames per second can the human eye see?
b) How many frames per second can the human hand respond to?

To clarify, these are NOT the same questions as:

i) How many frames per second do I need to make the motion look fluid?
ii) How many frames per second makes a movie stop flickering?
iii) What is the fastest frame (shortest time) a human eye would notice?

If the display refreshes at 60 Hz and the game runs at 50 fps, would this need to be synchronized? Would a divisor of 60 Hz be better? Alternatively, if you were at 100 fps, would 60 fps be better? The other part of responsiveness is hand-to-eye coordination, and whether the human mind can correctly interpolate between a screen's refresh rate and the output of the GPU. While a ~25 Hz rate may be adequate for the human eye, the human hand can be sensitive to as much as 1000 Hz, so the correlation between hand movement and what the eye sees is all-important for 'immersive' gaming.

Take the following scenarios:

Scenario 1: GPU is faster than the Refresh Rate, V-Sync Off

Refresh rate: 60 Hz
GPU: 87 fps
Mouse/Keyboard responsiveness is 1-2 frames, or ~11.5 to 23 milliseconds
Effective responsiveness makes the game feel like it is between 42 and 85 FPS

In this case, the GPU is 45% faster than the screen. This means that as the GPU fills the frame buffer, the display will continually catch it between frames when it dumps the buffer contents on screen, so that parts of both the old frame and the new frame are in the buffer at once.

This is a phenomenon known as tearing, which many of you are likely familiar with. Depending on the scenario you are in, tearing may be something you ignore, notice occasionally, or find rather annoying.
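To get a feel for the timing involved (illustrative arithmetic only, using the 87 fps and 60 Hz figures from the scenario above): the GPU rewrites the buffer roughly every 11.5 ms while the display reads it every 16.7 ms, so most scanouts catch the buffer part-way through a new frame.

```python
# Rough simulation of tearing with V-Sync off: the display reads the frame
# buffer every 16.7 ms (60 Hz) while the GPU overwrites it roughly every
# 11.5 ms (87 fps), so a scanout usually catches a partially written frame.

REFRESH_MS = 1000.0 / 60.0     # display reads the buffer at this interval
GPU_FRAME_MS = 1000.0 / 87.0   # GPU finishes a frame at this interval

for refresh in range(1, 6):
    t = refresh * REFRESH_MS              # moment the display reads the buffer
    frames_done = t / GPU_FRAME_MS        # how far the GPU has got by then
    current = int(frames_done)            # frame currently being written
    fraction = frames_done - current      # portion of it already in the buffer
    print(f"refresh {refresh}: {fraction:.0%} of frame {current + 1} "
          f"on top of frame {current}")
```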

So the question becomes, was it worth computing that small amount of frame N+1 or N+3?

Scenario 2: GPU is slower than the Refresh Rate, V-Sync Off

Refresh rate: 60 Hz
GPU: 47 fps
Mouse/Keyboard responsiveness is 1-2 frames, or ~21.3 to 43 milliseconds
Effective responsiveness makes the game feel like it is between 25 and 47 FPS

In this case, the GPU is delivering roughly 22% fewer frames than the screen requests. Because the GPU fills the frame buffer more slowly than the display reads it, the display will again continually catch the buffer between frames when it dumps the contents on screen, with parts of both the old frame and the new frame in the buffer at once.

So does this mean that for a better experience, computing frame N+1 was not needed, and N+2 should have been the focus of computation?
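The arithmetic behind the responsiveness figures in both scenarios is straightforward; here is a quick sanity check of the numbers quoted above (a minimal sketch, nothing more):

```python
# Input-to-display latency of 1-2 frames, expressed in milliseconds, for the
# two V-Sync-off scenarios above (87 fps and 47 fps against a 60 Hz display).

REFRESH_HZ = 60

def input_lag_window_ms(fps, frames_of_lag=(1, 2)):
    """Time that a 1-2 frame input delay corresponds to at a given frame rate."""
    frame_ms = 1000.0 / fps
    return tuple(round(n * frame_ms, 1) for n in frames_of_lag)

for fps in (87, 47):
    low, high = input_lag_window_ms(fps)
    mismatch = (fps - REFRESH_HZ) / REFRESH_HZ * 100
    print(f"{fps} fps: 1-2 frames = {low} to {high} ms, "
          f"{mismatch:+.0f}% vs the 60 Hz display")
```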

Scenario 3: GPU can handle the refresh rate, V-Sync On

This setting allows the GPU to synchronize to every frame. Now all elements of the system are synchronized to 60 Hz: the CPU, application, GPU and display will all aim for 60 Hz, falling back to the lower divisors (30, 20, etc.) as required.

While this produces the best visual experience with clean images, input response is effectively limited to the V-Sync rate as well. So while the GPU could deliver more performance, this artificial setting caps both input and output.

Result:

If the GPU is slower than the display or faster than the display, there is no guarantee that the frame buffer drawn on the display contains a complete frame. A GPU has multiple frames in its pipeline, but only a few of them ever reach the display when the GPU is fast, and the display catches frames mid-computation when the GPU is slow. When a software limit is imposed on the system, responsiveness decreases. Is there a way to take advantage of the increased power of modern systems while working with a limited refresh rate? Is there a way to skip these redundant tasks and provide a more 'immersive' experience?

LucidLogix apparently has the answer…

The answer from Lucid is Virtu MVP. Back in September 2011, Ryan gave his analysis of the principles behind the solution. We are still restricted to the same high-level explanation (due to patents) that Ryan was back then. Nevertheless, it all boils down to two decision points, (A) and (B):

Situation (A) determines whether a rendering task/frame should be processed by the GPU at all, and situation (B) decides which frames should go to the display. (B) helps with tearing, while (A) makes better use of the GPU. Even so, the GPU is doing multiple tasks: snooping to determine which frames are required, rendering the desired frames, and outputting to a display. Lucid uses hybrid systems (those with both an integrated GPU and a discrete GPU) to overcome this.

Situation (B) is what Lucid calls its Virtual V-Sync, an adaptive V-Sync technology currently in Virtu. Situation (A) is an extension of this, called HyperFormance, designed to reduce input lag by only sending required work to the GPU rather than redundant tasks.

Within the hybrid system, the integrated GPU takes over two of these tasks: snooping for required frames, and display output. This requires the system to run in i-Mode, where the display is connected to the integrated GPU. Users of Virtu on Z68 may remember that back then this caused a roughly 10% decrease in output FPS; this generation of drivers and tools should alleviate some of that decrease.
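Lucid has not published the actual decision logic (it is covered by patents), but the division of labour described above can be sketched in a few lines of Python. Everything below is a hypothetical illustration of the idea, not Lucid's algorithm: one routine stands in for the HyperFormance decision of whether a render call is worth sending to the discrete GPU, the other for the Virtual V-Sync decision of which completed frame to scan out.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    issued_ms: float        # when the game issued the render call
    complete: bool = False  # has the discrete GPU finished rendering it?

REFRESH_MS = 1000.0 / 60.0  # one 60 Hz display interval

def should_render(dgpu_frame_ms, ms_to_next_refresh, newer_call_queued):
    """HyperFormance idea (hypothetical heuristic): skip a render call if it
    cannot finish before the next refresh and a newer call is already queued,
    because it could never be the frame the display ends up showing."""
    return not (dgpu_frame_ms > ms_to_next_refresh and newer_call_queued)

def pick_for_scanout(frames):
    """Virtual V-Sync idea: at each refresh, hand the display the newest frame
    that is fully complete, so it never shows a torn, half-written buffer."""
    done = [f for f in frames if f.complete]
    return max(done, key=lambda f: f.issued_ms) if done else None

# e.g. a call that would take 14 ms with only 5 ms left before the next
# refresh, while a newer call is already waiting, is not worth rendering
print(should_render(dgpu_frame_ms=14.0, ms_to_next_refresh=5.0,
                    newer_call_queued=True))   # False
```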

What this means for Joe Public

Lucid’s goal is to improve the 'immersive experience' by removing redundant rendering tasks, synchronizing the GPU with the refresh rate of the connected display, and reducing input lag.

By introducing a layer of middleware that intercepts rendering calls, Virtual V-Sync and HyperFormance both decide whether a frame should be rendered and then delivered to the display. However, the FPS counter within a title counts frame calls, not completed frames. So when the software intercepts a call, the frame rate counter is incremented whether the frame is rendered or not. This can lead to many unrendered frames and an artificially high FPS number, when in reality the software is merely optimizing the sequence of rendering tasks rather than increasing FPS.
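As a simple illustration of why the counter inflates (a hypothetical interception layer, not Lucid's code): the game's counter ticks on every frame call it issues, while only a subset of those calls results in real rendering work.

```python
# Hypothetical interception layer: the game counts every frame call it makes,
# but only the calls that are actually passed through cost real GPU work.

calls_counted_by_game = 0     # what an in-game FPS counter is based on
frames_actually_rendered = 0  # what actually reaches the GPU and display

def intercepted_present(render_this_one: bool):
    global calls_counted_by_game, frames_actually_rendered
    calls_counted_by_game += 1          # the game sees a "frame" either way
    if render_this_one:
        frames_actually_rendered += 1   # only these are sent for rendering

# Over one second the game might issue 150 calls while only 60 are rendered,
# so the on-screen counter reads 150 fps against 60 real frames.
for i in range(150):
    intercepted_present(render_this_one=(i % 5 < 2))

print(calls_counted_by_game, frames_actually_rendered)   # 150 60
```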

If it helps the 'immersion factor' of a game (less tearing, more responsiveness), then it could be highly beneficial to gamers. Currently, Lucid have validated around 100 titles to work as intended. We spoke to Lucid (see next page), and they say that the technology should work with most, if not all, titles; users will simply have to add a program manually if it is not on the list. The reason only around 100 titles have been validated is that each game has to be tested across many settings and on many different hardware configurations, making the validation matrix huge (for example, 100 games x 12 different settings x 48 different system hardware configurations is 57,600 test runs, which means time and lots of it).

Virtu MVP also causes issues when it comes to benchmarking and comparing systems. The method of telling systems apart has typically been the FPS value; with this new technology, the FPS value is almost meaningless, as it counts frames that are never rendered. This has consequences for benchmarking companies like Futuremark and for overclockers who like to compare systems (Futuremark have released a statement about this). Technically, all you would need to do to increase your score/FPS (if we understand the software correctly) would be to reduce the refresh rate of your monitor.

Since this article was started, we have had the opportunity to speak to Lucid regarding these technologies, and they have pointed out several usage scenarios that have perhaps been neglected in earlier reviews of this technology. On the next page, we will discuss what Lucid considers ‘normal’ usage.

Comments

  • Springf - Sunday, April 8, 2012 - link

    Quote: Native USB 3.0

    The other long awaited addition found on Panther Point is the native implementation of USB 3.0 that comes directly from the chipset. The chipset will only provide two USB 3.0 ports,

    ------- end quote

    I think Z77 natively support 4 USB 3.0 ports

    http://www.intel.com/content/dam/www/public/us/en/...
  • C'DaleRider - Sunday, April 8, 2012 - link

    When you write sentences like this:

    "ASUS have a lot to live up to with its Ivy Bridge Pro board."

    You do realize that you're mixing a plural verb and singular pronoun for the same damn thing...Asus in this case. First, you use a plural verb talking about Asus and then use a singular pronoun for Asus in the same sentence. You cannot do both; well, I guess you can, but you show you have no clue about English grammar and look like you dropped out of third grade.

    Get a copy editor! How can anyone take this site a professional when the writing borders on illiterate?
  • sausagestrike - Sunday, April 8, 2012 - link

    You should higher a sand removal specialist to take a look at you're twat.
  • Arbie - Monday, April 9, 2012 - link

    @C'DaleRider -

    You do realize that... you're the illiterate one, don't you?

    "ASUS have" is perfectly legitimate English, and is in fact what you will hear in England itself. "ASUS" is a company of people and can be taken as singular or plural.

    For me, the AT editors just made major points right in this set of comments by correcting another ignoramus, who was misusing "begs the question".

    Now, can we get back to fan headers?
  • Iketh - Tuesday, April 10, 2012 - link

    no, there are errors STILL all over the place in this article... it's horrid... when your site is 99% words, please make them as easy as possible to comprehend...

    PLEASE LEARN TO WRITE LIKE ANAND, THX!

    Anand, for the love of god, pay a little more to hire a little more education (SEE WHAT I DID THAR??)
  • nz_nails - Monday, April 9, 2012 - link

    "Biostar have unfortunately put much effort in here, with only three to play with..."

    Should be a "not" in there I suppose.
  • s1lencerman - Monday, April 9, 2012 - link

    I do not understand why non express PCI slots are still on boards. The only one to see the light is MSI, and if they had a bit better performance I would switch from ASUS for my next mobo in a heartbeat. Also, why do these boards have a VGA connector (D-sub)? Intel HD graphics can only support 2 displays max, and if you have more than you should get a dedicated graphics card anyway, and probably already have. I don't see the point.

    Another thing, when will OEMs start putting the USB hub at the bottom of the board facing down and not away from the board. If you have multiple cards on the board then you can get really cramped really fast when you are trying to use those.

    I'm sorely tempted to just wait another year or so till there is a board with these features and over 50% SATA 6G/s, but we'll see if that even comes out in that short of time.
  • DanNeely - Monday, April 9, 2012 - link

    1) Some customers are asking for them. Customer demand was why a few boards started sporting floppy controllers again last summer. Legacy PCI demand is almost certainly much higher.

    2) Intel doesn't have enough PCIe lanes on the southbridge for well featured ATX boards.

    2.1) This means a bridge chip of some sort.

    2.2) PCIe devices are used to being able to count on the full 250/500MBps bandwidth.

    2.3) Legacy PCI devices are used to sharing their bandwidth (133MBps).

    2.4) 2.2 and 2.3 combined mean there's less risk of compatibility problems in filling out a few slots with legacy PCI slots.

    This is probably going to remain an issue until either:

    A) Intel increases the number of lanes they offer on their boards by a half dozen or so (bridges are also used for on board devices).

    B) Intel integrates a lot more stuff into the southbridge so it doesn't need PCIe lanes: More USB3, Sata6GB, audio, ethernet.

    C) A new version of PCIe allows sizes other than powers of two. Fitting everything on would be much easier if a board maker could fall back on just offering 13/1x or 7/7x on the main gfx slots.
  • jfelano - Monday, April 9, 2012 - link

    GReat to see Asrock finally stepping up with the warranty, great products.
  • James5mith - Monday, April 9, 2012 - link

    Something I realized by reading this roundup:

    Almost all of the current motherboards are using PCI connected Firewire chips. Even the ones that have PCIe connected firewire use TI chips, which in turn are still PCI firewire, with an internal PCI to PCIe translator.

    After some research the only native PCIe firewire controller I've found is from LSI. Does anyone else know of another solution? This is an interesting "dirty secret" that I never really paid any attention to.
