The problem with Sandy Bridge was simple: if you wanted to use Intel's integrated graphics, you had to buy a motherboard based on an H-series chipset. Unfortunately, Intel's H-series chipsets don't let you overclock the CPU or memory—only the integrated GPU. If you want to overclock the CPU and/or memory, you need a P-series chipset—which doesn't support Sandy Bridge's on-die GPU. Intel effectively forced overclockers to buy discrete GPUs from AMD or NVIDIA, even if they didn't need the added GPU power.

The situation got more complicated from there. Quick Sync was one of the best features of the platform, but it was only available when you used the CPU's on-die GPU, which once again meant you needed an H-series chipset with no support for overclocking. You could have either Quick Sync or overclocking, but not both (at least initially).

Finally, Intel did very little to actually move chipsets forward with its 6-series Sandy Bridge platform. Native USB 3.0 support was out and wouldn't arrive until Ivy Bridge; we got a pair of 6Gbps SATA ports and PCIe 2.0 lanes, but not much else. I can't help but feel that Intel was purposely very conservative with its chipset design. Despite all of that, the seemingly conservative chipset design was plagued by the single largest bug Intel has ever faced publicly.

As strong as the Sandy Bridge launch was, the 6-series chipset did little to help it.

Addressing the Problems: Z68

In our Sandy Bridge review I mentioned a chipset that would come out in Q2 that would solve most of Sandy Bridge's platform issues. A quick look at the calendar reveals that it's indeed the second quarter of the year, and a quick look at the photo below reveals the first motherboard to hit our labs based on Intel's new Z68 chipset:


ASUS' P8Z68-V Pro

Architecturally, Intel's Z68 chipset is no different from the H67. It supports video output from any Sandy Bridge CPU and has the same number of USB, SATA and PCIe lanes. What the Z68 chipset adds, however, is full overclocking support for the CPU, memory and integrated graphics, giving you the freedom to do pretty much anything you'd want.

Pricing should be similar to P67, with motherboards selling at a $5–$10 premium. Not all Z68 motherboards will come with video out; those that do may carry an additional $5 premium on top of that to cover the licensing fees for Lucid's Virtu software, which will likely be bundled with most if not all Z68 boards that have iGPU output. Lucid's software aside, any price premium is a little ridiculous here given that the functionality offered by Z68 should've been there from the start. I'm hoping Intel will come to its senses over time, but for now Z68 will be sold at a slight premium over P67.

Overclocking: It Works

Ian will have more on overclocking in his article on ASUS' first Z68 motherboard, but in short, it works as expected. You can use Sandy Bridge's integrated graphics and still overclock your CPU. Of course the usual Sandy Bridge overclocking limits still apply—if you don't have a CPU that supports Turbo (e.g. the Core i3-2100), your chip is entirely clock locked.

Ian found that overclocking behavior on Z68 was pretty similar to P67. You can obviously also overclock the on-die GPU on Z68 boards with video out.

The Quick Sync Problem

Back in February we previewed Lucid's Virtu software, which allows you to have a discrete GPU but still use Sandy Bridge's on-die GPU for Quick Sync, video decoding and basic 2D/3D acceleration.

Virtu works by intercepting the command stream directed at your GPU. Depending on the source of the commands, they are directed at either your discrete GPU (dGPU) or on-die GPU (iGPU).

There are two physical approaches to setting up Virtu: you can connect your display to either the iGPU or the dGPU. If you do the former (i-mode), the iGPU handles all display duties, and anything rendered on the dGPU has to be copied over to the iGPU's frame buffer before being output to your display. Note that you can run one application in a window that requires the dGPU while running another that uses the iGPU (e.g. Quick Sync).

As you can guess, there is some amount of overhead in the process, which we've measured to varying degrees. When it works well the overhead is typically limited to around 10%, however we've seen situations where a native dGPU setup is over 40% faster.

Lucid Virtu i-mode Performance Comparison (1920 x 1200, Highest Quality Settings)

                                Metro 2033   Mafia II   World of Warcraft   StarCraft 2   DiRT 2
AMD Radeon HD 6970              35.2 fps     61.5 fps   81.3 fps            115.6 fps     137.7 fps
AMD Radeon HD 6970 (Virtu)      24.3 fps     58.7 fps   74.8 fps            116.6 fps     117.9 fps
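To put the table's numbers in terms of overhead, here's a quick sketch that computes how much faster the native dGPU path is than the Virtu i-mode path for each game (the fps values are the ones quoted above):

```python
# Native-vs-Virtu i-mode comparison, using the fps results from the table.
# "Native advantage" is how much faster the direct dGPU setup is than
# routing through Virtu's frame-buffer copy.
results = {
    "Metro 2033":        (35.2, 24.3),
    "Mafia II":          (61.5, 58.7),
    "World of Warcraft": (81.3, 74.8),
    "StarCraft 2":       (115.6, 116.6),
    "DiRT 2":            (137.7, 117.9),
}

for game, (native, virtu) in results.items():
    advantage = (native / virtu - 1) * 100
    print(f"{game}: native is {advantage:+.1f}% faster")
```

Run the numbers and you'll see both ends of the range described above: World of Warcraft lands near the typical ~10% figure, while Metro 2033 is the "over 40% faster" worst case; StarCraft 2 actually comes out a hair faster under Virtu.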

The dGPU doesn't completely turn off when it's not in use in this situation, but it will drop to its lowest possible idle state.

The second approach (d-mode) requires that you connect your display directly to the dGPU. This is the preferred route for the absolute best 3D performance, since there's no copying of frame buffers. The downside is likely higher idle power, as Sandy Bridge's on-die GPU is probably more power efficient under non-3D workloads than any high-end discrete GPU.

With a display connected to the dGPU and with Virtu running you can still access Quick Sync. CrossFire and SLI are both supported in d-mode only.

As I mentioned before, Lucid determines where to send commands based on their source. In i-mode all commands go to the iGPU by default, and in d-mode everything goes to the dGPU. The only exceptions come from application profiles defined within the Virtu software. In i-mode that means a list of games/apps that should run on the dGPU; in d-mode it's a smaller list of apps that use Quick Sync (as everything else should run on the dGPU).
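The routing logic above boils down to a default GPU per mode plus a per-mode exception list. Here's a minimal sketch of that decision, with hypothetical app names and exception lists of our own invention (this is not Lucid's actual API, just an illustration of the rule):

```python
# Hypothetical sketch of Virtu-style command routing. The executable names
# and exception lists below are illustrative assumptions, not Lucid's real
# profile database.
I_MODE_DGPU_APPS = {"metro2033.exe", "mafia2.exe"}   # i-mode: games sent to the dGPU
D_MODE_IGPU_APPS = {"mediaespresso.exe"}             # d-mode: Quick Sync apps sent to the iGPU

def route(app: str, mode: str) -> str:
    """Return the GPU that should receive this app's command stream."""
    if mode == "i-mode":
        # Default to the iGPU; only profiled 3D apps go to the dGPU.
        return "dGPU" if app in I_MODE_DGPU_APPS else "iGPU"
    else:  # d-mode
        # Default to the dGPU; only profiled Quick Sync apps go to the iGPU.
        return "iGPU" if app in D_MODE_IGPU_APPS else "dGPU"
```

So in i-mode a profiled game like `metro2033.exe` gets pushed to the dGPU (and its frames copied back), while in d-mode only the handful of transcoding apps ever touch the iGPU.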

Virtu works, although there are still some random issues when running in i-mode. Your best bet for keeping Quick Sync functionality while maintaining the best overall 3D performance is to hook your display up to the dGPU and use Sandy Bridge's GPU only for transcoding. Ultimately I'd like to see Intel enable this functionality without the need for third-party software utilities.

SSD Caching

106 Comments


  • cbass64 - Wednesday, May 11, 2011 - link

    RAID0 can't even compare. With PCMark Vantage a RAID0 with MD's gives you roughly a 10% increase in performance in the HDD suite. A high end SSD is 300-400% faster in the Vantage HDD suite scores. Even if you only achieve 50% of the SSD performance increase with SRT you'd still be seeing a 150-200% increase, and this article seems to claim that SRT is much closer to a pure SSD than 50%.

    Obviously benchmarks like Vantage HDD suite don't always reflect real world performance but I think there's still an obvious difference between 10% and a couple hundred %...
    Reply
  • Hrel - Thursday, May 12, 2011 - link

    all I know is since I switched to RAID 0 my games load in 2/3 the time they used to. 10% is crazy. RAID 0 should get you a 50% performance improvement across the board; you did something wrong. Reply
  • DanNeely - Thursday, May 12, 2011 - link

    Raid only helps with sequential transfers. If Vantage has a lot of random IO with small files it won't do any good. Reply
  • don_k - Wednesday, May 11, 2011 - link

    Or the fact that it is an entirely software based solution. Intel's software does not, as far as I and google know, run on linux, nor would I be inclined to install such software on linux even if it were. So this is a non-starter for me. For steam and games I say get a 60-120gb consumer level ssd and call it a day. No software glitches, no stuff like that.

    This kind of caching needs to be implemented at the filesystem level if you ask me, which is what I hope some linux filesystems will bring 'soon'. On windows the outlook is bleak.
    Reply
  • jzodda - Wednesday, May 11, 2011 - link

    Are there any plans in the future of this technology being made available to P67 boards?

    Before I read this I thought it was a chipset feature. I had no idea this was being implemented in software at a driver level.

    I am hoping that after a reasonable amount of time passes they make this available for P67 users. I understand that for now they want to add some value to this new launch but after some time passes why not?
    Reply
  • michael2k - Wednesday, May 11, 2011 - link

    Given that the drive has built in 4gb of flash, it would be very interesting to compare this to the aforementioned SRT. Architecturally similar, though it requires two drives instead of one. Heck, what would happen if you used SRT with a Seagate Momentus? Reply
  • kenthaman - Wednesday, May 11, 2011 - link

    1. You mention that:

    "Even gamers may find use in SSD caching as they could dedicate a portion of their SSD to acting as a cache for a dedicated games HDD, thereby speeding up launch and level load times for the games that reside on that drive."

    Does Intel make any mention of possible future software versions allowing user customization to specifically select applications to take precedence over others to remain in cache. For example say that you regularly run 10 - 12 applications (assuming that this work load is sufficient to begin the eviction process), rather than having the algorithm just select the least utilized files have it so that you can point to an exe and then it could track the associated files to keep them in cache above the priority of the standard cleaning algorithm.

    2. Would it even make sense to use this in a system that has a 40/64/80 gig OS SSD and then link this to a HDD/array or would the system SSD already be handling the caching? Just trying to see if this would help offload some of the work/storage to the larger HDDs since space is already limited on these smaller.
    Reply
  • Midwayman - Wednesday, May 11, 2011 - link

    What is the long term use degradation like? I know without TRIM SSD's tend to lose performance over time. Is there something like trim happening here since this all seems to be below the OS level? Reply
  • jiffylube1024 - Wednesday, May 11, 2011 - link

    Great review, as always on Anandtech!

    This technology looks to be a boon for so many users. Whereas technophiles who live on the bleeding edge (like me) probably won't settle for anything less than an SSD for their main boot drive, this SSD cache + HDD combo looks to be an amazing alternative for the vast majority of users out there.

    There's several reasons why I really like this technology:

    1. Many users are not smart and savvy at organizing their files, so a 500GB+ C drive is necessary. That is not feasible with today's SSD prices.

    2. This allows gamers to have a large HDD as their boot drive and an SSD to speed up game loads. A 64GB SSD would be fantastic for this as the cache!

    3. This makes the ultimate drop-in upgrade. You can build a PC now with an HDD and pop in an SSD later for a wicked speed bump!

    I'm strongly considering swapping my P67 for a Z68 at some point, moving my 160GB SSD to my laptop (where I don't need tons of space but the boot speed is appreciated), and using a 30-60GB SSD as a cache on my desktop for a Seagate 7200.12 500GB, my favourite cheap boot HDD.
    Reply
  • samsp99 - Wednesday, May 11, 2011 - link

    Is the intel 311 the best choice for the $$, or would other SSDs of a similar cost perform better. For example the egg has OCZ Vertex 2 and other sandforce based drives in the 60GB range for approx $130. That is a better cache size than the 20GB of the intel drive.

    Sandforce relies on compression to get some of its high data rates, would that still work well in this kind of a cache scenario?
    Reply
