Well, today is a day everyone was expecting, though not the most exciting day of the month. Today, Intel officially releases its spate of 7-series chipsets for 3rd Generation Core microarchitecture processors (read: Ivy Bridge), despite the actual release of the processors being another three weeks away. This means that today we can look at, test, and purchase the range of motherboards that natively support Ivy Bridge. However, we cannot officially publish benchmarks of these products using Ivy Bridge until that date three weeks from now (to be honest, we are still testing!). So today we are going to take a good in-depth look at the Z77 chipset itself and the review products we have received, and let you know what we think looks good. All of these boards will be fully reviewed, warts and all, with Ivy Bridge, as close to the official release as possible.

The boards today are from ASRock, ASUS, Gigabyte, MSI, ECS and Biostar. All these boards will natively support Sandy Bridge processors, and be fully upgradeable to Ivy Bridge silicon when the processors are released.

The Z77 Chipset

The important things to note about the Z77 chipset are its differences from all things Sandy Bridge/Cougar Point, typically in relation to the Z68 platform. Rather than having a split between a chipset with video outputs and a chipset without, all motherboards using Z77 (and H77) will natively support video outputs (if the motherboard manufacturer wishes to use them). There should also be no repeat of the chipset issue we saw this time last year with Cougar Point (the degrading SATA 3 Gbps ports), so none of the B2/B3 stepping nonsense that confused consumers.

In comparison to the previous chipsets, the differences are easiest to see in a table:

Chipset Comparison

| | Z77 | Z75 | H77 | Z68 | P67 | H67 |
|---|---|---|---|---|---|---|
| CPU Support | SNB/IVB, LGA-1155 | SNB/IVB, LGA-1155 | SNB/IVB, LGA-1155 | SNB/IVB, LGA-1155 | SNB/IVB, LGA-1155 | SNB/IVB, LGA-1155 |
| CPU Overclocking | Yes | Yes | No | Yes | Yes | No |
| CPU PCIe Config | 1 x16, 2 x8, or 1 x8 + 2 x4 (PCIe 3.0) | 1 x16 or 2 x8 (PCIe 3.0) | 1 x16 (PCIe 3.0) | 1 x16, 2 x8, or 1 x8 + 2 x4 (PCIe 3.0) | 1 x16 or 2 x8 (PCIe 3.0) | 1 x16 (PCIe 3.0) |
| Processor Graphics Support | Yes | Yes | Yes | Yes | No | Yes |
| Intel SRT (SSD Caching) | Yes | No | Yes | Yes | No | No |
| RAID Support | Yes | Yes | Yes | Yes | Yes | Yes |
| USB 2.0 Ports (USB 3.0 Ports) | 14 (4) | 14 (4) | 14 (4) | 14 | 14 | 14 |
| SATA Total (Max 6 Gbps Ports) | 6 (2) | 6 (2) | 6 (2) | 6 (2) | 6 (2) | 6 (2) |
| Chipset PCIe Lanes | 8 (5 GT/s) | 8 (5 GT/s) | 8 (5 GT/s) | 8 (5 GT/s) | 8 (5 GT/s) | 8 (5 GT/s) |

Segregation is quite apparent for the consumer—a Z means you can overclock an appropriate CPU, and a ‘77’ means that you can take advantage of Intel’s SSD smart caching.

The chipset diagram above shows the layout of the Intel DZ77GA-70K motherboard, showing sixteen PCIe lanes from the processor. The astute will note that only one PCIe lane is coming from the PCH to the PCI Express Switch, which powers the PCIe to PCI bridge (and thus PCI and IEEE1394a) and the PCIe 2.0 x1 ports. This would mean that users of the DZ77GA-70K may experience bandwidth limitations when PCI and PCIe x1 devices are used.

PCIe 3.0

We still have 16 PCIe lanes from the CPU, but with the new chipset these are fully and officially rated at PCIe 3.0 speeds, if you have the GPUs to take advantage of the feature. This becomes more apparent in multi-GPU setups, where the 16 lanes are split between the slots. Typically this will be x16 for a single card, x8/x8 for dual GPU, and x8/x4/x4 for tri-GPU. An x4 PCIe 3.0 link offers bandwidth equivalent to x8 PCIe 2.0, but only if you have a card that can take advantage of PCIe 3.0.
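
As a back-of-the-envelope check on that equivalence, per-lane bandwidth can be computed from the published line rates and encoding overheads (a sketch, not measured numbers):

```python
# Per-lane bandwidth: PCIe 2.0 runs at 5 GT/s with 8b/10b encoding,
# while PCIe 3.0 runs at 8 GT/s with 128b/130b encoding.
def lane_gb_per_s(rate_gt, payload_bits, line_bits):
    # GT/s * encoding efficiency / 8 bits per byte -> GB/s per direction
    return rate_gt * payload_bits / line_bits / 8

gen2 = lane_gb_per_s(5.0, 8, 10)      # 0.500 GB/s per lane
gen3 = lane_gb_per_s(8.0, 128, 130)   # ~0.985 GB/s per lane

print(f"x8 PCIe 2.0: {8 * gen2:.2f} GB/s")  # 4.00 GB/s
print(f"x4 PCIe 3.0: {4 * gen3:.2f} GB/s")  # 3.94 GB/s
```

So an x4 Gen3 slot is within a couple of percent of an x8 Gen2 slot, which is why the x8/x4/x4 tri-GPU split is less painful than it sounds, provided the cards are PCIe 3.0 capable.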

Because the PCIe 3.0 lanes come from the CPU, motherboards will have to be designed to PCIe 3.0 compliance in their signaling and lane layout. A novel addition we will see on many of the high-end boards is a series of onboard switch chips to increase the number of available lanes. These are PCIe 3.0 PLX PEX chips, which increase the number of lanes on the motherboard from 16 to 32, all rated PCIe 3.0. This also raises the cost of the motherboard, perhaps by as much as $10-$15, so it is worth thinking long and hard about whether you really need that tri-GPU setup.

Native USB 3.0

The other long awaited addition found on Panther Point is native USB 3.0 support coming directly from the chipset. The chipset provides four USB 3.0 ports in total, which can be routed to the rear panel or to onboard headers (at the discretion of the motherboard manufacturer). For many low-end boards, this eliminates the cost of a third-party controller from the product. For high-end SKUs that typically offer 4-6+ USB 3.0 ports, the native ports reduce the number of controllers required by two, but some PCB real estate will still be given over to additional USB 3.0 controllers.

Memory Improvements

The Panther Point chipset comes with a few additional features that may be of benefit to some users. Partially derived from the processors themselves, the chipset now supports DDR3L thanks to its power gated DDR3 interface. This is a primary benefit for the lower end processor and chipset products (that is, all things mobile), but we will see if this moves its way into the mainstream or HTPC versions of Panther Point.

For the high-end power user, the chipset is now geared to handle faster memory. Again thanks to improvements in the new processors, these combined systems should be able to handle DDR3-2800 memory with ease. The memory dividers work similarly to the gear ratios in Sandy Bridge-E, with users able to call on a larger array of memory dividers than before, in 200 MHz increments. This is of great benefit to the integrated graphics on the new generation of processors, which should gain from a higher-end memory kit providing extra bandwidth.
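
To illustrate what those finer increments look like in practice (a sketch of the idea only; the exact divider table is Intel's and not reproduced here):

```python
BCLK = 100  # MHz base clock on LGA-1155

# With 200 MHz steps, the effective DDR3 speeds from DDR3-1600 up to
# the DDR3-2800 ceiling mentioned above would run as follows.
speeds = [1600 + 200 * i for i in range(7)]
for ddr in speeds:
    # DDR is double-pumped, so the memory clock is half the DDR rating
    print(f"DDR3-{ddr}: {ddr // 2} MHz memory clock, {ddr / (2 * BCLK):.1f}x BCLK")
```

Finer steps mean an overclocker chasing a memory kit's limit has more landing spots between, say, DDR3-2400 and DDR3-2800 than the coarser Sandy Bridge ratios allowed.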

We are still going to be playing with dual channel memory for the time being on consumer platforms. As this processor change is only a tick (a process node shrink), we will have to wait a tock or two for the memory subsystem to move to a larger number of channels. As seen on X58, tri-channel memory with six slots was fairly cumbersome on motherboards; Sandy Bridge-E, however, shows that quad channel memory (should Intel want to move the mainstream in this direction) can still be implemented with four slots on the board for an increase in memory bandwidth. For now, though, dual channel it is.

If you happen to purchase an ASUS board for Ivy Bridge, there is also a little treat in store, as ASUS has reworked the memory sub-system. The new method reportedly stunned Intel engineers, and should provide distinct memory speed advantages: simply put, instead of memory banks being read consecutively, the memory is read in parallel. We are awaiting more detail on how this feature works.

Power Consumption and Power Delivery

It has been well documented over the past few months that the top-end model of the new processors should have a TDP of 77 W. This is a little over 50 W less than the 130 W top models of Sandy Bridge-E, but more importantly, down from the 95 W of the 2600K/2700K Sandy Bridge processors. This prompts a small change in the power delivery on Panther Point/Z77 products.

With a lower TDP to cater for, motherboard manufacturers can spend less on the power delivery. There is a chance that the days of '24 phase power' are long gone, and we will be dealing with fewer phases. Fewer motherboards will have dual 8-pin 12V connectors, and unless you go high end, a single 4-pin 12V connector may be all that is required to get the job done. Obviously if you apply an overclock, having access to more current (and thus more power) is of major benefit, but for the majority of system builders who will run systems at stock, this could represent savings in construction being passed on to the consumer.

However, this does raise a potential issue when these boards are used with Sandy Bridge processors: a power delivery system less robust than that of Z68 could perhaps hinder Sandy Bridge overclocks.

Backwards Compatibility with Sandy Bridge Processors

To confirm what has been said in news articles around the world, the Intel 7-series chipsets will be backwards compatible with Sandy Bridge (Intel 2nd Generation Core microarchitecture) processors from the point of sale; all motherboard vendors should have this functionality built into their BIOSes. The beauty lies not just in backwards compatibility, but in releasing chipsets that take advantage of new features on an existing socket, something AMD has done well in the past.

In addition, a significant number of Intel 6-series chipsets (P67, Z68) should be forwards compatible with the Ivy Bridge processors, pending a BIOS update. Thus, current Sandy Bridge processor owners wishing to upgrade should make sure they have an appropriate BIOS before installing a new processor in their 6-series motherboard.

LucidLogix Virtu MVP Technology

145 Comments


  • Iketh - Sunday, April 08, 2012 - link

    "handling input in a game engine" means nothing here. What matters is when your input is reflected in a rendered image and displayed on your monitor. That involves the entire package. Lucid basically prevents GPUs from rendering an image that won't get displayed in its entirety, allowing the GPU to begin work on the next image, effectively narrowing the gap from your input to the screen.
  • extide - Tuesday, April 10, 2012 - link

    I am sure he knows that. He was just giving a bit of detail as to his exact experience, of which I would bet is far more than most people on here. You have to be very aware of things such as latency and delay when you are handling input in a game engine. I agree with the OP and am skeptical also. The bit that makes me most curious is the transfer of the fully rendered screens from one framebuffer to the other, that has to add some latency, and probably enough to make the entire process worthless. It's not like Lucid has a good track record on stuff like this, I mean we all know how their cross platform SLI/CF took off and worked so well....
  • Iketh - Wednesday, April 11, 2012 - link

    Why would you need to physically copy framebuffers?? I'm sure pointers are used...

    I have no idea if this has tangible benefits, but theoretically it does. None of us know until we can test it. I'm more inclined to discredit the people already discrediting Lucid, despite Lucid's track record. That's what you call hating.
  • Iketh - Wednesday, April 11, 2012 - link

    excuse me, you're right... it has to copy the frame from gpu to igpu... what kind of crap tech is this???
  • ssj3gohan - Sunday, April 08, 2012 - link

    Personally, I'm absolutely uninterested in anything 'high-performance', especially fancy gaming stuff. Not to say that I don't think that's a valid market niche, but I see other possibilities.

    I'm really looking forward to new thin ITX boards with a built-in DC-DC converter (i.e. running directly off a 19V brick), and I am especially wondering whether Intel (or possibly Zotac) is going to build a golden board this time around. Last time they made the DH61AG, which was a nice board, but it lacked an mSATA port (kind of a must for a truly thin computer) and 'only' had an H61 chipset.

    With H77, I expect it will be possible to make a thin ITX board with USB 3.0 and a fast on-board SSD option, combining this with an HD 4000 equipped processor would enable users to build a truly thin (sub-4 inch thick) computer that fits on the back of their monitor but still provides ample computing power.
  • Senti - Sunday, April 08, 2012 - link

    It sounds to me that Lucid Virtual V-Sync is just glorified triple buffering with a lot of marketing and a bit of overhead for transferring frames and powering two video cards instead of one. I'm very skeptical on the HyperFormance too.
  • Cavalcade - Sunday, April 08, 2012 - link

    It seems a bit more involved than triple buffering, more like having 2 buffers where the back buffer is not flipped until it is fully rendered. Seems like this would lead to more stuttering, and given the number of times they asked Mr. Cutress to reiterate that this would be a bug, it may be something they are seriously concerned with.

    Thinking about it a little more, I'm not sure what advantages this system would have over a system with separated input and rendering modules. The academic side of me is extremely interested and hopeful, but the practical developer side of me is going to require a lot more to be brought on board.
  • Iketh - Sunday, April 08, 2012 - link

    Separate input and rendering modules, as I stated in an earlier post, means nothing. They allow for a responsive mouse cursor, for instance. But, when you actually provide input that alters the RENDERED WORLD, you have to wait for that input to reflect on screen. It doesn't matter how perfectly the software solution is architected, you still have to wait for the rendering of the image after your input.

    Lucid simply prevents renders that never get displayed in their entirety, allowing the GPU to work on the NEXT image, shortening the time from your input to the screen.
  • Cavalcade - Monday, April 09, 2012 - link

    The comment was to indicate that while I have experience writing input systems, rendering is still relatively new to me; simply a qualifier of my impression and opinion.

    The way I am understanding Lucid, it is attempting to preempt displaying a frame that is not fully rendered in time for the next screen refresh. By presenting a virtual interface to both the GPU and the application, the application believes the frame has been rendered (displaying user input at that time) and proceeds to render the next frame. Thinking more about it, would this reduce the time interval between input reflected in frame one (which was preempted) and frame two (which will be displayed) so that rather than having input sampled at a fixed rate (say 60Hz) and displayed at a variable rate, input would be more closely tied to the frame for which it is intended.

    My interest is rising, but it still seems like a rather complex solution to a problem that I either haven't experienced, or which doesn't really bother me.
  • Iketh - Tuesday, April 10, 2012 - link

    it's not preemptively doing anything, except determining if a frame added to the queue will finish rendering in time... if not, it >>>>DOESN'T LET THE GPU RENDER IT<<<< and places the previously rendered image in its place, allowing the GPU to immediately begin work on the FOLLOWING frame... that's it... it cuts unneeded frames from queues

    as for your input sampling rate question, that's entirely based on how the application is coded to handle input, lucid has nothing to do with this...
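
The queue-trimming decision described in this exchange might be sketched roughly as follows. This is purely speculative pseudologic: the function names and the timing heuristic are hypothetical, as Lucid has not published its implementation.

```python
def submit_frame(render_queue, est_render_ms, ms_to_next_refresh, prev_frame):
    """Drop a frame that cannot complete before the next refresh,
    reusing the previous frame so the GPU can start on the next one."""
    if est_render_ms > ms_to_next_refresh:
        # Frame would miss the refresh: display the old frame again and
        # free the GPU to begin the following frame immediately.
        render_queue.append(prev_frame)
        return "skipped"
    render_queue.append("render new frame")
    return "rendered"

frames = []
# 60 Hz refresh -> roughly a 16.7 ms budget per frame
print(submit_frame(frames, est_render_ms=22.0, ms_to_next_refresh=16.7, prev_frame="frame N-1"))  # skipped
print(submit_frame(frames, est_render_ms=9.0, ms_to_next_refresh=16.7, prev_frame="frame N-1"))   # rendered
```

Whether the estimate can be made accurately enough, and whether reusing a stale frame is visually acceptable, are exactly the open questions the commenters above are debating.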
