Last week Ryan and I had a meeting with Offir Remez, Co-Founder and President of Lucid. It went on for a while, as Offir described Lucid and we peppered him with questions. There are several points worth mentioning about the functionality of Virtu MVP, which Offir was happy to share with us so that we could share them with you.

The first, to reiterate, is that Virtu requires a system with an integrated GPU (iGPU, such as a mainstream Intel Sandy/Ivy Bridge processor or an AMD APU) as well as discrete graphics (dGPU, AMD or NVIDIA). Without this setup, Virtu will not do anything. This also means that the drivers for both the iGPU and the dGPU have to be installed at the same time. Lucid has validated several previous generations of discrete cards (AMD 5xxx and above, NVIDIA 4xx and above), but is keen to stress that the technology should work with any card, including the 8800 and 2400 series; the point is simply that they do not have the time to validate every configuration.

In terms of how Virtu functions, it is important to understand the concept mentioned on the previous page: being able to manipulate what the GPU does and what it does not do. The underlying technology of Virtu is that the environment is virtualized. This means that instead of the GPUs sitting directly on top of the operating system, Virtu adds a middle layer between the operating system and the GPU. This way, Virtu can manipulate everything the operating system wants to say to the GPU, and vice versa, without either of them knowing that there is a middleware layer.
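The idea can be sketched schematically as a transparent proxy that sits between caller and driver. Everything below (class names, call strings) is invented for illustration and is not Lucid's actual implementation:

```python
# Schematic sketch of a virtualization (interposition) layer.
# The names here are illustrative; Lucid's real layer sits between
# the OS graphics stack and the vendor driver.

class GPUDriver:
    """Stands in for the real vendor driver."""
    def submit(self, call):
        return f"executed {call}"

class VirtuLayer:
    """Middleware: sees every call and may pass it through, modify it, or drop it."""
    def __init__(self, driver):
        self._driver = driver
        self.intercepted = []

    def submit(self, call):
        self.intercepted.append(call)          # the layer observes everything
        if call.startswith("skip:"):           # ...and can decide not to forward a call
            return "dropped"
        return self._driver.submit(call)       # otherwise pass through unchanged

layer = VirtuLayer(GPUDriver())
print(layer.submit("draw_triangle"))   # behaves exactly like the real driver
print(layer.submit("skip:redundant"))  # silently filtered out
```

The key property is that the caller uses the same `submit` interface either way, so neither side needs to know the middleware exists.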

The second point is perhaps the most important, and I will go into it in more detail later. Virtual V-Sync and HyperFormance will only make a difference in the following circumstances:

a) You suffer from visual tearing in your games, or you actively use V-Sync.
b) Your setup (screen resolution and graphics settings) performs better than the refresh rate of your monitor (essentially 60 FPS for most people). If you get less than this, you will probably not see any benefit.

I should mention some terminology. Virtu works in two modes, i-Mode and d-Mode, depending on whether the display is connected to the integrated graphics (i-Mode) or the discrete graphics (d-Mode).

Let us go over the technologies again quickly.

Virtual V-Sync: Making sure the last fully rendered frame is shown, without tying the CPU, display and input down to 60 Hz (or the refresh rate of your display), thereby increasing responsiveness.

Virtual V-Sync works by having the integrated GPU probe the rendering buffer on the discrete card. In i-Mode, it transfers the last completed frame the dGPU has processed into the iGPU, and when the display requires a refresh, the iGPU can display that entire frame. This results in no tearing, with the latest frame being shown. It also means that Virtual V-Sync only works well when your frame rate is greater than the refresh rate; if it is lower, Virtual V-Sync will show stuttering in the periods where the next frame is not complete in time, because it has to show an old frame.

To reiterate, Virtual V-Sync will only help if your frame rate is greater than your refresh rate, usually 60 FPS. It does not increase frame rates, but removes tearing while keeping the responsiveness of the input system (mouse) better than normal V-Sync does.
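A minimal sketch of the frame-selection idea (with invented timings and data structures; the real implementation works on GPU buffers, not Python lists) might look like this:

```python
# Minimal sketch of the Virtual V-Sync idea: at each display refresh,
# show the most recently *completed* frame, never a partial one.
# Timings and structures below are invented for illustration.

def frame_at_refresh(completed_frames, refresh_time):
    """completed_frames: list of (frame_id, completion_time_ms), sorted by time.
    Returns the newest frame that finished before the refresh, or None."""
    shown = None
    for frame_id, done_at in completed_frames:
        if done_at <= refresh_time:
            shown = frame_id        # newest frame that finished in time
        else:
            break
    return shown

# GPU rendering at ~100 FPS (a frame every 10 ms), display refreshing at 60 Hz
frames = [(i, i * 10.0) for i in range(1, 20)]   # frame i completes at i*10 ms
print(frame_at_refresh(frames, 16.7))   # first refresh: frame 1 is the newest done
print(frame_at_refresh(frames, 33.3))   # second refresh: frame 3
```

Note how the stuttering case falls out of the same logic: if no new frame has completed since the last refresh, the function returns the same (old) frame again.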

HyperFormance: Predicting which frames (or rendering tasks) will never be shown and taking them out of the pipeline so the GPU can work on what is needed (which also increases input responsiveness).

HyperFormance is a tool that uses the iGPU to examine rendering tasks. It applies a prediction algorithm to calculate how long the next few frames will take to render. If the refresh rate of the monitor means that the next two frames have no chance of being shown, HyperFormance removes all applicable rendering tasks for those frames. Some rendering tasks still need to be processed (certain features like smoke, or iterative algorithms), but a lot can be nullified. It then runs the prediction algorithm and probes again; when it finds a frame which needs to be rendered for the refresh, it makes sure all rendering tasks for that frame are done on the GPU.
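As a hypothetical sketch of the skip/render decision (Lucid's real adaptive predictor is not public; the fixed per-frame render time used here is an assumption):

```python
# Hypothetical sketch of the HyperFormance decision: predict each frame's
# completion time and mark frames that cannot land on a display refresh
# as candidates for having most of their rendering tasks removed.

def plan_frames(render_time_ms, refresh_ms, n_frames):
    """Return per-frame decisions: 'render' if the frame lands on a refresh,
    'skip' if no refresh occurs before the next frame would be ready."""
    decisions = []
    t = 0.0
    next_refresh = refresh_ms
    for _ in range(n_frames):
        t += render_time_ms              # predicted completion time of this frame
        if t >= next_refresh:
            decisions.append("render")   # this frame will actually be displayed
            next_refresh += refresh_ms
        else:
            decisions.append("skip")     # no refresh due yet: cull its tasks
    return decisions

# 100 FPS render rate (10 ms/frame) against a 60 Hz display (~16.7 ms/refresh):
print(plan_frames(10.0, 16.7, 6))   # roughly every other frame is worth rendering
```

With these assumed numbers, roughly half the frames are culled, which is where the extra responsiveness comes from: the GPU spends its time on frames that will actually be seen.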

Because the FPS counter is incremented at the beginning of each frame, removing rendering tasks makes it look as though the FPS has shot up. This number, however, is a poor indication of what the software does; what it does indicate is that the responsiveness of the input peripherals (e.g. mouse) rises in line with this 'FPS' figure.

This means HyperFormance increases responsiveness and makes the FPS counter meaningless. As with Virtual V-Sync, HyperFormance will only make a difference if your normal frame rate is greater than your refresh rate, usually 60 FPS.

Essentially, the FPS number you get from HyperFormance becomes a measure of system responsiveness, not GPU output.
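A toy calculation, with assumed per-frame costs (the 70% task reduction for a culled frame is an invented figure, not Lucid's), shows why the counter inflates when tasks are removed:

```python
# Toy illustration of why the reported FPS inflates: the counter ticks at
# the *start* of each frame, so cheap, culled frames still count as frames.

full_frame_ms = 10.0             # cost of a fully rendered frame (100 FPS baseline)
culled_frame_ms = 10.0 * 0.3     # assumed cost with ~70% of its tasks removed

def reported_fps(skip_ratio):
    """Average 'FPS' when skip_ratio of all frames have their tasks culled."""
    avg_ms = skip_ratio * culled_frame_ms + (1 - skip_ratio) * full_frame_ms
    return 1000.0 / avg_ms

print(round(reported_fps(0.0)))   # no culling: the honest 100 FPS
print(round(reported_fps(0.5)))   # half the frames culled: counter jumps to ~154
```

The display still only shows 60 frames per second; the inflated number tracks how often the game loop (and thus input sampling) turns over.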

If the prediction algorithm for HyperFormance is wrong and a frame takes longer to render than predicted, the system will show the last fully rendered frame, causing stuttering. Lucid has asked us to make clear that this should be reported to them as a bug, stating the system being used, the settings, and the version of Lucid MVP.

What you will see on a system

As the technologies are all based around a virtualization layer, there is a little processing overhead. Information from the frame buffer has to be passed between the iGPU and the dGPU over the PCIe bus. If your screen resolution is high (~2 MP for 1920x1080, ~4.1 MP for 2560x1600), data transfer could in principle be limited by bandwidth, but it should not be at 5 GB/s. The virtualization layer also adds latency, both per frame and per call.
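A quick back-of-the-envelope check bears this out, assuming an uncompressed 4 bytes per pixel and 60 copies per second (the actual transfer format may differ):

```python
# Back-of-the-envelope frame-copy bandwidth: width x height pixels,
# an assumed 4 bytes per pixel, copied once per 60 Hz refresh.

def copy_bandwidth_gbs(width, height, hz=60, bytes_per_pixel=4):
    return width * height * bytes_per_pixel * hz / 1e9

print(round(copy_bandwidth_gbs(1920, 1080), 2))   # ~0.5 GB/s at 1080p60
print(round(copy_bandwidth_gbs(2560, 1600), 2))   # ~0.98 GB/s at 2560x1600
```

Even the 4.1 MP case needs under 1 GB/s, comfortably inside a 5 GB/s PCIe budget, so latency rather than raw bandwidth is the more relevant cost.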

For Virtual V-Sync in i-Mode, frames need to be transferred from the dGPU to the iGPU, and then displayed through the iGPU. The overhead for this, or so we are told, should be around 3% at high frame rates (greater than 100 FPS) compared to d-Mode. d-Mode requires less overhead, as frames do not need to be copied for display, but commands still have to pass between the iGPU and dGPU.

For HyperFormance in i-Mode, frames still have to be transferred for output on the integrated graphics, and as the iGPU is also dealing with rendering calls for parts of the frame, this forms the main traffic between the two graphics units. In d-Mode, the situation is similar to Virtual V-Sync.

The upshot of all this comes in the form of several numbers and a feature, depending on which settings you choose. The table below shows a theoretical result of what you can expect to get with these technologies in d-Mode. Here we are going with Lucid's standard predictions: Virtual V-Sync and HyperFormance causing a 3% reduction in frame rates above 60 FPS, Virtual V-Sync removing tearing and utilizing more of the GPU, and HyperFormance removing 30% of rendering tasks.

                                 FPS   Refresh Rate   Tearing   On-Screen FPS   Responsiveness
Normal                           100   60             Yes       60              100
In-Game V-Sync                   100   60             No        60              60
Virtual V-Sync                   100   60             No        60              100
HyperFormance                    100   60             Yes       60              130
Virtual V-Sync + HyperFormance   100   60             No        60              130

As you can see, Virtual V-Sync and HyperFormance are designed to work together, but are available in the software as separate options. This is, as we found out, because there may be the odd setup that does not like a particular game/setting combination, or the technology may change the feel of a game too much compared to what the user is used to. Thus Lucid allows users to chop and change to whatever feels good to them.

The ultimate realization is this: the FPS counter shown in games when using Virtu MVP no longer represents the true output frame rate of your GPU. The FPS counter is now a measure of responsiveness. So please be wary when reading reviews using this technology and the analysis of their results; the refresh rate of the monitor is also a vital component of the results.

Caveats of the Technology

There are a few bumps in the road, as expected with anything new, and we put these to Lucid for answers. The first is the possibility of stuttering, as mentioned above, when the prediction algorithm cannot cope with a frame massively different from the previous ones; Lucid has asked us to reiterate that this should be reported as a bug.

In terms of multi-GPU users, Lucid says it is not specifically designing the software for them. However, multi-GPU users are more than welcome to assign Virtu MVP to their games to see if it works; Lucid just will not be validating multi-GPU scenarios due to the additional QA required.

There are currently three main titles that Lucid is still fine-tuning for HyperFormance: Crysis 2, Lost Planet 2, and most importantly, Battlefield 3. These games are not yet validated as they are experiencing stuttering, which Lucid hopes to fix. This is more than likely related to the way the software interprets the rendering calls, and to tuning the adaptive prediction algorithm specifically for these titles.

There is also the issue of licensing. Lucid MVP is a licensed piece of software, requiring motherboard manufacturers to write specific BIOS code in order for it to work. This obviously costs the motherboard manufacturers money as well. Some motherboard manufacturers will be licensing it for their back catalogue of hardware (H61, H67), whereas some will focus only on Z77/H77 onwards. There is a possibility that we will see SKUs without it, either through design or as a sales decision. Despite this, Lucid expects Virtu MVP to be enabled on 100 different Z77/H77 motherboards, and on up to 10 million motherboards shipped in the next twelve months.

In addition, as graphics cards become more powerful, the numbers HyperFormance reports will obviously increase, surpassing what the input system can take advantage of. Thus running a 7970 with an old DX9 game at 1024x768 may push your 'FPS' to 1500+, but your responsiveness will be limited by the rest of the system.

Future for Virtu MVP

On paper, Virtu MVP sounds like a great technology to have if you are a gamer demanding a more responsive system for 'immersive' gameplay. If Lucid can make Virtu MVP live up to the software's lofty goals (and not be dogged by new game releases), it sounds good. Arguably, the best place to see this technology would be in upcoming consoles. If it were integrated into a console, and part of the validation of future games was that they have to run with Lucid's software, it could mean much cleaner and more responsive gameplay.

So much for the new technologies; what about the motherboards themselves? In this preview (we will come back at a later date with Ivy Bridge performance numbers), we aim to cover as many boards as possible to give you an idea of what is available on the market. Up first is the ASRock Z77 Extreme4.

Comments (145)

  • DanNeely - Monday, April 9, 2012 - link

    This is similar to what happened with the USB1->2 transition. The newer controller is significantly bigger (read: more expensive) and very few people have more than one or two devices using it per computer. I suspect the 8x (Haswell) chipset will be mixed as well, simply because the total number of ports on the chipset is so much higher than it was a decade ago (vs. older boards, where all but the lowest end models added more USB from 3rd party controllers).
  • ASUSTechMKT - Monday, April 9, 2012 - link

    mSATA currently has very little penetration in the market, and cost wise a larger cache SSD can be had for the same or lower price. We would prefer to focus on bringing implementations that offer immediate value to users.

    As for the Intel NICs, all our launch boards for ATX (standard size and above) feature Intel LAN; we have been leading in this regard for a couple of generations.

    In regards to USB 3 we offer more than the standard on many boards but keep in mind many users only have 1 USB3 device.
  • jimnicoloff - Sunday, April 8, 2012 - link

    Maybe I missed something from an earlier post, but could someone please tell me why these don't have light peak? Are they waiting to go optical and it is not ready yet? Having my USB3 controlled by Intel instead of another chip is not enough to make me want to upgrade my Z68 board...
  • repoman27 - Sunday, April 8, 2012 - link

    Thunderbolt controllers are relatively expensive ($20-30) and their value is fairly limited on a system using a full size ATX motherboard that has multiple PCIe slots. Including two digital display outputs, an x4 and a couple x1 PCIe slots on a motherboard provides essentially all the same functionality as Thunderbolt but at a way lower cost.
  • ASUSTechMKT - Monday, April 9, 2012 - link

    Almost all of our boards feature a special TB header which allows you to easily equip them with a Thunderbolt add-on card, which we will release at the end of the month. Expect an approximate cost of $40; the card connects to the TB header and installs in an x4 slot, providing you with Thunderbolt should you want it. A great option for those who want it, and those who do not, do not pay for it.
  • DanNeely - Tuesday, April 10, 2012 - link

    Sounds like a reasonable choice for something that's still rather expensive and a very niche product.

    Am I correct in thinking that the mobo header is to bring in the DisplayPort out channel without impacting bandwidth available for devices?
  • jimwatkins - Sunday, April 8, 2012 - link

    I've made it this far on my venerable OC Q6600, but I can't wait any longer. I do wish they weren't so stingy on the 6 core as I could use it, but I just can't justify the price differential (w 3 kids that is.)
  • androticus - Sunday, April 8, 2012 - link

    USB 3.0 descriptions and depictions are contradictory. The platform summary table says there are 4. The Intel diagram shows up to 4 on front and back (and the diagram is itself very confusing, because there are 4 USB 3.0 ports indicated on the chipset, and then they show 2 going to hubs, and 2 going directly to the jacks.) The text of the article says there can only be 2 USB 3.0 ports.

    What is the correct answer?
  • mariush - Sunday, April 8, 2012 - link

    I think there are 2 real ports (full bandwidth ports) and the Intel solution uses 2 additional chips that act like "hubs", splitting each real port into separate ports.

    Basically the bandwidth of each real port gets split if there are several devices connected to the same hub.

    A hub, as far as I know, sends whatever it receives to all of its ports (and the devices at the end of each port ignore the data if it's not for them).
    This would be different from a switch, which has the brains to send the data packages only to the proper port.
  • plamengv - Sunday, April 8, 2012 - link

    DZ77GA-70K makes DX79SI look like a bad joke (which it really is).

    LGA 2011 turns into an epic fail and DZ77GA-70K is the proof. I have 1366 system and I have zero will to get LGA 2011 system thanks to the crappy tech decisions somebody made there. Six cores is the top? Again? An old 32nm process? Really? Chipset with nothing new inside but troubles? Since 1366 something strange is going on and Intel fails to see it. The end user can get better manufacturing tech for the video card than for the CPU. First it was 45nm CPU with 40nm GPU and now 28nm GPU and 32nm CPU and Intel call that high end? Really?

    Everything that DX79SI should have been you can find inside DZ77GA-70K.

    1. DZ77GA-70K has a high quality TI 1394 FireWire controller, while DX79SI has a cheap VIA one that no audio pro would ever want to deal with.
    2. DZ77GA-70K has the next best SATA controller after Intel's, from Marvell, adding 2 more SATA 6 Gbps ports and eSATA, vs zero extra SATA and, hard to believe, no eSATA at all on DX79SI.
    3. Intel USB 3.0 vs the crappy Renesas.

    DZ77GA-70K has everything to impress, including the two Intel LANs vs the Realtek that everyone else is using.

    DZ77GA-70K fails in only one thing - it had to be LGA 2011, not 1155 that will be just 4 cores like forever and has zero future.

    Wake up INTEL!
