Last week Ryan and I had a meeting with Offir Remez, Co-Founder, President and Vice President of Business Development at Lucid. It went on a while, as Offir described Lucid and we peppered him with questions. There are several points worth mentioning about how Virtu MVP functions, which Offir was happy to share with us so that we could pass them on to you.

The first, to reiterate, is that Virtu requires a system with an integrated GPU (iGPU, such as a mainstream Intel Sandy/Ivy Bridge processor or an AMD APU) as well as discrete graphics (dGPU, AMD or NVIDIA). Without this setup, Virtu will not do anything. This also means that the drivers for both the iGPU and the dGPU have to be installed at the same time. Lucid has validated several previous generations of discrete cards (AMD 5xxx and above, NVIDIA 4xx and above), but is keen to stress that the technology should work with any card, including the 8800 and 2400 series; the point is simply that they do not have the time to validate every configuration.

In terms of how Virtu functions, it is important to understand the concept mentioned on the previous page: being able to manipulate what the GPU does and does not do. The underlying technology of Virtu is virtualization of the graphics environment. This means that instead of the GPUs working directly on top of the operating system, Virtu adds a layer between the operating system and the GPU. This way, Virtu can manipulate everything the operating system wants to say to the GPU, and vice versa, without either of them knowing that a middleware layer is there.
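
Conceptually this is an interposer: something that sits between the caller and the real device and can inspect or alter traffic in both directions. Below is a minimal sketch of that pattern; it is purely our own illustration (the class and method names are invented for the example), not Lucid's actual driver code.

# Minimal sketch of the interposer pattern (our illustration, not Lucid's driver):
# the layer forwards calls between the OS and the GPU, but may inspect or modify
# them without either side being aware that it exists.

class GPUInterposer:
    def __init__(self, real_gpu):
        self._gpu = real_gpu          # the device the OS thinks it is talking to

    def submit(self, render_call):
        # Here the layer could drop, reorder or redirect work before forwarding it.
        return self._gpu.submit(render_call)

    def present(self, frame):
        # ...and decide which completed frame actually reaches the display.
        return self._gpu.present(frame)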

The second point is perhaps the most important, and one I will come back to later: Virtual V-Sync and HyperFormance will only make a difference in the following circumstances:

a) You suffer from visual tearing in your games, or you actively use V-Sync
b) Your system (at your screen resolution and graphics settings) can render faster than the refresh rate of your monitor (essentially 60 FPS for most people). If your frame rate is lower than this, you will probably not see any benefit.

I should mention some terminology. Virtu works in two modes, i-Mode and d-Mode, depending on whether the display is connected to the integrated graphics or the discrete graphics.

Let us go over the technologies again quickly.

Virtual V-Sync: Making sure the last fully rendered frame is shown, without tying the CPU, display and input down to 60 Hz (or the refresh rate of your display), thereby increasing responsiveness.

Virtual V-Sync works by having the integrated GPU probe the rendering buffer on the discrete card. In i-Mode, the last frame the dGPU has completed is transferred to the iGPU, and when the display requires a refresh, that complete frame is shown. This results in no tearing, with the latest frame always being displayed. It also means that Virtual V-Sync only helps when your frame rate is greater than the refresh rate; if it is lower, Virtual V-Sync will show stuttering in the periods where the next frame is not complete in time, as an old frame is shown instead.

To reiterate, Virtual V-Sync will only work if your frame rate is greater than your refresh rate, usually 60 FPS. It does not increase frame rates, but removes tearing while keeping the responsiveness of the input system (mouse) better than normal V-Sync.
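
To make that behaviour concrete, here is a minimal sketch of the frame-selection logic described above. This is purely our own illustration of the idea (the function and its parameters are our invention), not Lucid's code.

# Minimal sketch (our illustration, not Lucid's code) of the Virtual V-Sync idea:
# the dGPU renders continuously, and at every display refresh the iGPU simply
# shows the newest frame that has fully completed.

def frame_to_show(completed_frames, last_shown):
    """completed_frames: IDs of frames the dGPU has finished, in order.
    last_shown: the frame ID displayed at the previous refresh."""
    if completed_frames and completed_frames[-1] != last_shown:
        return completed_frames[-1]   # newest complete frame: no tearing, fresh input
    return last_shown                 # nothing new finished in time: repeat it (stutter)

# At 100 FPS of rendering on a 60 Hz panel a newer frame is almost always ready;
# below 60 FPS some refreshes have to repeat an old frame, which is the stuttering
# described above.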

HyperFormance: Predicting which frames (or rendering tasks) will never be shown and taking them out of the pipeline so the GPU can work on what is needed (which also increases input responsiveness).

HyperFormance is a tool that uses the iGPU to examine rendering tasks. It applies a prediction algorithm to estimate how long the next few frames will take to render. If the refresh rate of the monitor means that the next two frames have no chance of being shown, then HyperFormance removes all applicable rendering tasks for those frames. Some rendering tasks still need to be processed (certain features like smoke or iterative algorithms), but many can be dropped. It then runs the prediction and probes again; when it finds a frame that needs to be rendered in time for the refresh, it makes sure all rendering tasks for that frame are done on the GPU.
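
As a rough picture of that prediction step, here is a sketch of the per-frame decision. Again, this is our own illustration and naming, not Lucid's actual algorithm, and it assumes predicted completion times come from some estimator based on recent frame times.

# Rough sketch (our illustration, not Lucid's algorithm) of the HyperFormance idea:
# only render a frame fully if it is predicted to be the newest completed frame at
# the next display refresh; otherwise skip the work nothing later depends on.

def should_render_fully(predicted_finish, predicted_next_finish, next_refresh):
    """predicted_finish: estimated completion time of this frame.
    predicted_next_finish: estimated completion time of the frame after it.
    next_refresh: time of the upcoming display refresh."""
    if predicted_finish >= next_refresh:
        return True                                 # finishes at/after the refresh
    return predicted_next_finish >= next_refresh    # last frame to finish before it

# Frames that return False still issue the calls later frames depend on (e.g.
# iterative effects such as smoke), but the rest of their work is dropped.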

Because the FPS counter is incremented at the beginning of a frame, it looks as though the frame rate has shot up once rendering tasks are removed. However, this is a poor indication of what the software actually does; what it does mean is that the responsiveness of the input peripherals (e.g. the mouse) goes up in line with this 'FPS' number.

In other words, HyperFormance increases responsiveness and renders the raw FPS number meaningless. As with Virtual V-Sync, it will only make a difference if your normal frame rate is greater than your refresh rate, usually 60 FPS.

Essentially, the FPS number you get from HyperFormance becomes a measure of system responsiveness, not GPU output.
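
To picture where the inflated number comes from, here is a trivial sketch (our illustration, not how any particular benchmarking tool is implemented): the counter ticks once per frame the game submits, while the display can only show one frame per refresh.

# Our illustration of why the reported FPS inflates under HyperFormance:
# counters increment per frame *submitted*, while the display shows at most
# one frame per refresh, however many frames were submitted in between.

frames_submitted = 0   # what the on-screen FPS counter measures
frames_displayed = 0   # what actually reaches the monitor (capped by refresh rate)

def submit_frame():
    global frames_submitted
    frames_submitted += 1      # counted even if most of its rendering was skipped

def display_refresh():
    global frames_displayed
    frames_displayed += 1      # at most 60 per second on a 60 Hz panel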

If the prediction algorithm for HyperFormance is wrong and a frame takes longer to render than predicted, the system will show the last fully rendered frame, causing stuttering. Lucid has asked us to make clear that this should be reported to them as a bug, stating the system being used, settings, and version of Lucid MVP.

What you will see on a system

As the technologies are all based around a virtualization layer, there is a little processing overhead. Frame buffer data has to be passed from the iGPU to the dGPU, and vice versa, over the PCIe bus. At high screen resolutions (~2 MP for 1920x1080, ~4.1 MP for 2560x1600) this data transfer could in principle be limited by bandwidth, but at around 5 GB/s of PCIe bandwidth it should not be. The virtualization layer does, however, add a small amount of latency, both per frame and per call.
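
A quick back-of-the-envelope check (our own numbers, assuming a 32-bit frame buffer copied once per 60 Hz refresh) shows why the copy traffic sits comfortably inside that bandwidth:

# Back-of-the-envelope check (our assumptions: 32-bit colour, one copy per 60 Hz
# refresh) of frame-copy traffic versus the ~5 GB/s of PCIe bandwidth quoted above.

BYTES_PER_PIXEL = 4   # 32-bit RGBA frame buffer (assumption)

for width, height in [(1920, 1080), (2560, 1600)]:
    frame_bytes = width * height * BYTES_PER_PIXEL
    traffic_per_second = frame_bytes * 60
    print(f"{width}x{height}: {frame_bytes / 1e6:.1f} MB per frame, "
          f"{traffic_per_second / 1e9:.2f} GB/s of a ~5 GB/s link")

# Prints roughly 8.3 MB/frame (0.50 GB/s) at 1920x1080 and 16.4 MB/frame
# (0.98 GB/s) at 2560x1600, so bandwidth itself is not the bottleneck.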

For Virtual V-Sync in i-Mode, frames need to be transferred from the dGPU to the iGPU and then displayed through the iGPU. The overhead for this, or so we are told, should be around 3% at high frame rates (greater than 100 FPS) compared to d-Mode. d-Mode carries less overhead, as frames do not need to be copied for display, but commands still have to pass between the iGPU and dGPU.

For HyperFormance in i-Mode, frames still have to be transferred for output on the integrated graphics. However, as the iGPU is also handling rendering calls for parts of the frame, this becomes the main transfer between the two graphics units. In d-Mode, the situation is similar to Virtual V-Sync.

The upshot of how all this works comes in the form of several numbers and a feature, depending on which settings you choose. The table below shows a theoretical result of what you can expect from these technologies in d-Mode. Here we are going with Lucid's standard predictions: Virtual V-Sync and HyperFormance causing a 3% reduction in frame rates above 60 FPS, Virtual V-Sync removing tearing and utilizing more of the GPU, and HyperFormance removing 30% of rendering tasks.

Mode                             | FPS | Refresh Rate | Tearing | On-Screen FPS | Responsiveness
Normal                           | 100 | 60           | Yes     | 60            | 100
In Game V-Sync                   | 100 | 60           | No      | 60            | 60
Virtual V-Sync                   | 100 | 60           | No      | 60            | 100
HyperFormance                    | 100 | 60           | Yes     | 60            | 130
Virtual V-Sync + HyperFormance   | 100 | 60           | No      | 60            | 130

As you can see, Virtual V-Sync and HyperFormance are designed to work together, but are available in the software as separate options. This is, as we found out, because there may be the odd setup that does not like a particular game or setting, or where the technology changes the feel of a game too much from what the user is used to. Thus, Lucid allows users to chop and change to whatever feels right to them.

The ultimate realization is this: when using Virtu MVP, the FPS counter shown in games no longer represents the true output frame rate of your GPU; it is now a measure of responsiveness. So please be wary when reading reviews that use this technology, and of the analysis of their results; the refresh rate of the monitor is also a vital component of those results.

Caveats of the Technology

There are a few bumps in the road, as expected with anything new, and we put these to Lucid for answers. The first is the possibility of stuttering, mentioned above, when the prediction algorithm cannot cope with a frame that is massively different from the previous ones; Lucid has asked us to reiterate that this should be reported as a bug.

In terms of multiple GPU users, Lucid says they are not specifically designing the software for them. However, they have said that multiple GPU users are more than welcome to assign Virtu MVP to their games to see if it works—Lucid just will not be validating multi-GPU scenarios due to the additional QA required.

At the time of writing there are three main titles that Lucid is still fine-tuning for HyperFormance: Crysis 2, Lost Planet 2 and, most importantly, Battlefield 3. These games are currently not validated as they exhibit stuttering, which Lucid hopes to fix. This is more than likely related to the way the software interprets the rendering calls, and to tuning the adaptive prediction algorithm for these specific titles.

There is also the issue of licensing. Lucid MVP is licensed software, requiring motherboard manufacturers to write specific BIOS code in order for it to work, which also costs the motherboard manufacturers money. Some will be licensing it for their back catalogue of hardware (H61, H67), whereas others will offer it only from Z77/H77 onwards. There is a possibility that we will see SKUs that do not have it, either by design or as a sales decision. Despite this, Lucid expects Virtu MVP to be enabled on 100 different Z77/H77 motherboards, and on up to 10 million motherboards shipped in the next twelve months.

In addition, as graphics cards become more powerful, the numbers HyperFormance produces will only increase, outstripping the responsiveness of the rest of the input system. Thus running a 7970 with an old DX9 game at 1024x768 may make your FPS counter hit 1500+, but your responsiveness will be limited by the rest of the system.

Future for Virtu MVP

On paper, Virtu MVP sounds like a great technology to have if you are a gamer demanding a more responsive system for 'immersive' gameplay, as long as Lucid can make it live up to its lofty goals (and not be dogged by new game releases). Arguably, the best place to see this technology would be in upcoming consoles. If it were integrated into a console, and part of the validation of future games was that they have to run with Lucid, it could mean much cleaner and more responsive gameplay.

So much for the new technologies; what about the motherboards themselves? In this preview (we will come back at a later date with Ivy Bridge performance numbers), we aim to cover as many boards as possible to give you an idea of what is available on the market. Up first is the ASRock Z77 Extreme4.

Comments

  • mechjman - Monday, April 9, 2012 - link

    I don't remember seeing PCIe 3.0 support straight from P6x series chipsets.
    http://www.intel.com/content/www/us/en/chipsets/ma...

    If this is regarding in use with a PLX chip, it might be good to state so.
  • extide - Tuesday, April 10, 2012 - link

    It's actually when the boards DON'T use a PLX chip, or if they use 3.0-capable ones. It's only the boards that use 2.0 chips that are limited to 2.0.
  • GameLifter - Tuesday, April 10, 2012 - link

    I am very curious to see how this technology will affect the overall performance of the RAM. If it works well, I may have to get the P8Z77-V Pro.
  • jbuiltman - Tuesday, April 10, 2012 - link

    I am leaving my AMD FX-60, 3 GB DDR, Asus 939 Deluxe, Win XP, Raptor 150 HDD for Ivy Bridge pastures!!!

    I am all for ASUS 16+4 power, multiple USB 2.0 and 3.0 ports on the back panel. I also like the multiple 4-pin fan plugs, MemOK, LED problem indicator, switches, 4 SATA 6 Gbps connectors and heat pipes connecting the aluminum fins.

    What I want to see is 16x/16x, not 8x/8x, on dual video cards on a Z77 board. ASUS, don't skimp for a measly $30! I hate cheap companies, and don't make me think you are just being cheap!!!
  • jbuiltman - Tuesday, April 10, 2012 - link

    Hey all you MoBo companies. Don't get cheap with the Z77 boards and not include 16x/16x on the pci-e 3.0!!!! Come on, add what you need to and pass the $30 on to me!!!!
  • ratbert1 - Wednesday, April 11, 2012 - link

    "ASUS as a direct standard are now placing Intel NICs on all their channel motherboards. This is a result of a significant number of their user base requesting them over the Realtek solutions."
    Um... ASUS P8Z77-V LX has Realtek!
    and...ASUS P8H77-M PRO has Realtek!
    There are more.
  • ratbert1 - Wednesday, April 11, 2012 - link

    I meant P8Z77-M PRO, but the H77 has it as well.
  • lbeyak - Sunday, April 15, 2012 - link

    I would love a detailed review of the Gigabyte G1.Sniper 3 Z77 board when it becomes available.

    Keep up the good work!
  • csrikant - Sunday, April 22, 2012 - link

    Dual E5-2690
    So far the best I have got; burned a lot of $$$ to get this right.
    My last build was with an i7 990X. Got itchy in Oct 2011 with some minor issue and decided to change my PC; got my i7 2700K, which did not meet my expectations.
    Built an i7 3960X; it still failed many of my requirements. Regret my PC change from the 990X.
    Finally, with all my pain and wasted $$, got my new build that so far performs better than my 990X build.
    My advice: do not get carried away by fancy new i7 releases, they are just little benefit over the P4, just wasting time. I was shocked that they released a P4 with the 1155 socket; it was having the same performance as the 2700K, not much change, in fact it was cheaper too.

    I am not an expert, just an average system builder, but my advice from the bottom of my heart is to just go for an E5 build if you are really looking for performance and some benefits. You may spend some extra $$ on the MB, CPU, casing etc.; it is worth it in the long run and works out cheaper than any fancy high-end gaming rig, water cooling etc., all just shit tech advice. Never get Ferrari performance from a modded Toyota.
  • mudy - Monday, April 23, 2012 - link

    With the third PCIe lane on the Z77 boards, I have come across almost all manufacturers saying "1xPCI Express 2.0 x16 (x4 Mode) & only available if a Gen 3 CPU are used". Does this mean that the lane is PCIe 2.0 at x16 but works in PCIe 3.0 x4 mode if an IVB processor is connected and the other two PCIe 3.0 lanes are populated, giving x8/x4/x4 speed with PCIe 3.0 compliant cards? Also, what will happen if I put PCIe 2.0 GPUs in the first two PCIe 3.0 x16 slots and a PCIe 2.0 compliant RAID card (rr2720SGL) in the third PCIe lane? Will it give me an effective PCIe 2.0 bandwidth of x16/x8/x8 or not? Damn, these are so confusing!! I wish AnandTech would do an extensive review on just the PCIe lanes covering all sorts of scenarios, and I think NOW would be the best time to do this as the transition from PCIe 2.0 to PCIe 3.0 will happen slowly (maybe years), so the majority of end users will still be keeping their PCIe 2.0 compliant devices!!

    Thanks
