Note: There is a significant issue with the shipping (F2) BIOS on the board I received. If any memory settings were chosen manually (including XMP) rather than left at Auto, the board would not engage more than the 1x turbo multiplier, even under single-threaded loads. This issue has since been addressed, and Gigabyte suggest that users upgrade to the latest BIOS, available from their website.

The Touch BIOS – no Graphical UEFI

Gigabyte have yet to adopt a full graphical UEFI BIOS. Instead, they have a “hybrid” BIOS which merely adds support for 2.2TB+ hard drives. In order for 2.2TB+ drives to be detected, you have to ‘activate’ them within Windows by running a program that initializes them. Beyond that, the layout is exactly like a five year old Gigabyte board I have lying around from the LGA775 era.

Unfortunately, this is about as interesting as it gets. It’s a shame that Gigabyte have yet to adopt a full UEFI interface, but one may come in the not too distant future. On the plus side, however, Gigabyte have placed all of the features in their respective submenus, which makes them easy to find. For example, all of the system clocks and overclocking features are located in the MB Intelligent Tweaker (M.I.T.) menu, with related settings grouped into submenus there too. The same organization applies to all of the menus within this BIOS.

There are no options to change any of the system fan speeds within the BIOS. However, scrolling down in the PC Health Status section reveals a few options for CPU fan control. Enabling CPU Smart FAN Control lets you alter the CPU Smart FAN Mode: the board can adjust the fan voltage automatically, or you can assign it manually. With a 3-pin fan you alter the voltage, which corresponds to the speed of the fan; with a 4-pin fan you can set PWM mode and the BIOS will take care of it for you. In PWM mode there are four options to choose from: Normal, Silent, Manual and Disabled.
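For 3-pin fans, the board is effectively doing a simple voltage-versus-temperature interpolation. The sketch below is my own illustration of the idea, not Gigabyte's actual firmware logic; the temperature and voltage endpoints are assumed values:

```python
def smart_fan_voltage(temp_c, v_min=5.0, v_max=12.0, t_low=40.0, t_high=70.0):
    """Linearly interpolate the fan supply voltage between v_min and
    v_max as the CPU temperature moves from t_low to t_high."""
    if temp_c <= t_low:
        return v_min
    if temp_c >= t_high:
        return v_max
    frac = (temp_c - t_low) / (t_high - t_low)
    return v_min + frac * (v_max - v_min)

# Halfway between the endpoints gives the midpoint voltage.
print(smart_fan_voltage(55))  # 8.5
```

A 4-pin PWM fan works the same way in principle, except the controller varies duty cycle at a constant 12V instead of varying the supply voltage.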

The TOUCH BIOS from within Windows

Above is the software you can use to change absolutely everything in the BIOS from within Windows.


The only obvious issue I found with it is in the M.I.T. Status section of this software. It registers only one DIMM no matter which slot it is in, yet it does report the correct total amount of RAM, which is slightly odd.

Overclocking

Just like other manufacturers, Gigabyte have their own software which allows you to overclock your system automatically from within the OS. However, these tools vary, and some do better than others. We tested Gigabyte’s offering before we fiddled and tweaked within the BIOS.

When the EasyTune6 software loads, it opens the Tuner menu by default. There are three “Quick Boost” options, consisting of 3.6, 3.8 and 4.1GHz core speeds, labeled 1, 2 and 3 respectively. All three options were tested and proven stable, but I will focus on the level 3 setting. There are a few things to note here. The voltage set is far too high for the clock speed: a core voltage of 1.4V is applied, while this particular CPU only requires 1.34V for 4.3GHz. This means there will be excess heat which could be avoided by using less voltage. No Load Line Calibration (LLC) is set during this overclocking procedure, which means the voltage droops to 1.33V when the CPU is heavily stressed.

Thankfully, on the RAM side of things it does a little better. The RAM is set to 1866MHz with X.M.P. timings and voltages. Our RAM is rated at 2000MHz, which isn’t available for selection in the BIOS; the two nearest options are 2133MHz and 1866MHz. The safer choice of 1866MHz was made to try to eliminate any possible RAM stability issues.

Unfortunately, the CPU core voltage isn’t displayed correctly by CPU-Z, which means you have to use the EasyTune6 software instead; it seems to do a better job, although I wouldn’t rely on it entirely. The vDroop on this board is something which needs to be addressed. In the BIOS there are ten levels of LLC, one being the weakest and ten the strongest, with no description of how each affects the voltage. With no LLC at all, a 1.4V setting can droop right down to 1.33V under load, which is a significant change in voltage. However, with level 5, the midpoint between min and max, the board overshoots to 1.45V, which is far too much in the other direction. Gigabyte really do need to sort this out.
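vDroop itself is predictable behavior: Intel's VRM guidelines define a load line, where the delivered voltage falls linearly with current draw, and LLC works by flattening that line. A rough model of the effect (the 70A load and 1 milliohm load-line resistance are illustrative assumptions, not measured values for this board):

```python
def vcore_under_load(v_set, i_load_a, r_loadline_ohm):
    """Load-line model: the voltage delivered to the CPU drops
    linearly with load current, Vout = Vset - I * R_LL."""
    return v_set - i_load_a * r_loadline_ohm

# A ~70A load on a 1 milliohm load line reproduces the observed
# droop from a 1.40V setting down to ~1.33V under stress.
print(round(vcore_under_load(1.40, 70, 0.001), 2))  # 1.33
```

In this model, each LLC level effectively reduces R_LL; an aggressive level can push the effective resistance negative, which is how a board ends up delivering more than the set voltage under load.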

Given that the EasyTune6 software only offers a 4.1GHz overclock which uses too much voltage, I was eager to see how well the motherboard overclocks when you set the speeds and voltages yourself.

First of all, I tested whether the CPU would be stable at 4.3GHz with 1.34V, as it is in our other motherboard testing. It was only semi-stable; LLC had to be set to level 3 in order to obtain complete stability. The voltage peaked at 1.36V, which isn’t too serious.

After deciding it was stable enough, I pushed on and attempted to find the highest clock speed which could be used to run our benchmarks. The maximum overclock achieved was 102.1x47 (a 102.1MHz BCLK with a 47x multiplier, roughly 4.8GHz), which isn’t too shabby at all. The BCLK headroom isn’t as impressive as on other motherboards, but as you have an unlocked multiplier, you might as well take advantage of it instead of messing around with the BCLK.

When you push the overclock a touch too far, the board sometimes fails to recover from a bad OC without manually simulating boot cycles; by this, I mean turning the PC on and off at the PSU and letting the board power itself back on. It only takes a few attempts. The motherboard is designed to copy data from the secondary BIOS chip to the primary one when an overclock fails, in order to recover itself. All this does is restore the BIOS defaults, which allows you to either try again or try something different.


70 Comments


  • Taft12 - Monday, July 11, 2011 - link

    Though your specific case leads to a problem, I still think there are more PCI cards in use in the world's PCs than PCIe x1 cards. Also, why did you get a 3-slot GPU when you had 3 other expansion cards you wanted to use??

    Gigabyte DOES do the right thing by placing an x1 slot ABOVE the highest x16 slot. This should be standard practice on all motherboards and sadly it is not.
  • cyberguyz - Tuesday, July 12, 2011 - link

    The point I was trying to make is that PCI cards are on the way out. They are obsolete. In order to implement them on the latest chipsets, the motherboard manufacturer actually has to include extra components to bridge the PCI slots to PCIe. Yes, there are still people using PCI cards - I agree, but that number is decreasing every year in favor of PCIe cards.

    As to why I am using a video card that uses 3 slots... I wanted a video card with very high performance that runs nice and cool. Simple. Very high performance video cards that run cool use coolers that take up 3 slots.

    Yes, having that one slot above the PCIe x16 slot helps - as long as the PCIe card is small, like a cheap network controller or a 2-port SATA software RAID controller. Some mobo makers put a BIG northbridge heatsink just inboard of that x1 slot which blocks all but the shortest cards. I would not advise putting a Creative Labs Fatal1ty Titanium Pro soundcard there though. The metal shield gets awfully close to the backside of the video card (granted, that would be a problem with any PC with loaded slots) and blocks airflow (video cards need airflow on the backside as well as the cooler side).

    There are some motherboards that do away with the PCI slots altogether and supplement the PCIe slots with a secondary PCIe controller/bridge (e.g. the Maximus IV Extreme-Z), but you are going to pay dearly for that.
  • enterco - Monday, July 18, 2011 - link

    @On-topic: I don't see any reference to the limitations of Intel RST when used with RAID configurations. If anyone succeeds in enabling SSD caching/acceleration of an array with two existing volumes, please let me know. I know that Intel RST 10.5 combined with mixed configurations, such as a two-drive array with one striped volume and a secondary mirrored volume, isn't too popular, but this kind of configuration can't be accelerated with the Z68's SSD caching combined with iRST ver. 10.5.

    @Cyberguyz: I see that you are trying to suggest that PCI devices are obsolete.
    Let me share some of my experience.
    Chapter one: I used a PCIe sound card, an SB X-Fi Titanium Fatal1ty Pro, until one of the SMDs burnt out. After that, the sound card could no longer use the analog inputs/outputs and the driver provided by the vendor no longer installed properly. I believe that the burnt SMD was responsible for power delivery to the analog I/O part of the sound card.

    Chapter two: I thought that 'using PCI Express' would 'bring me to today', so I bought an Asus Xonar D2X instead of an Asus Xonar D2/PM. I think I made a bad decision, and I'll explain why. The D2X differs from the D2/PM in two ways: it includes a PCI Express-to-PCI bridge [a supplemental component], and it needs to be powered by a molex/floppy connector.
    So... why would anyone want to add latency to the audio path and deal with an extra power cable? It may be useful ONLY IF the PCI variant, the D2/PM, can't fit into the computer.
    When I bought my Asus sound card I wanted something 'new', not the obsolete PCI version, but today I would choose the PCI version if I had to buy a new sound card.

    Chapter three: Recently, I was looking for a PCI Express USB3 controller. I wasn't able to find one without either a) a molex/floppy power connector or b) a PCI Express x4 connector. My point is that the power provided by a PCI Express x1 slot is not always enough for some devices.

    I think that PCI slots are mature and powerful enough, even by today's standards, to be taken into consideration. A PCI Express x1 slot provides less power than a PCI slot, which may imply the need for an additional power connector on the card.

    In the Intel 486/Pentium days there were sound cards like the AWE32 and AWE64. Neither of them was PCI-based, even though PCI was available at the time. The bandwidth provided by the ISA slots was enough for these cards.

    Let's do some math. A sound card using 4 channels at 16 bits/sample and a 44.1kHz sampling rate requires approx. 2.8 Mbit/s of bandwidth. Multiplying the number of channels, the sampling rate and the bits per sample up to today's standards, a sound card does not need more than 70 Mbit/s of bandwidth. A 32-bit, 33MHz PCI bus can provide about 1 Gbit/s, meaning that a sound card at today's standards would use only about a fifteenth of the bandwidth of the PCI/32-bit/33MHz bus.
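    These figures are easy to sanity-check. The short calculation below works out raw PCM audio bandwidth and peak PCI bus bandwidth; the 8-channel/24-bit/192kHz case is my own assumption for what a "today's standards" card would push:

```python
def pcm_bandwidth_mbps(channels, bits_per_sample, sample_rate_hz):
    """Raw (uncompressed) PCM audio bandwidth in Mbit/s."""
    return channels * bits_per_sample * sample_rate_hz / 1e6

def pci_bandwidth_mbps(bus_width_bits=32, clock_mhz=33):
    """Peak theoretical bandwidth of a conventional PCI bus in Mbit/s."""
    return bus_width_bits * clock_mhz

cd_quality = pcm_bandwidth_mbps(4, 16, 44100)   # 4ch/16-bit/44.1kHz
hd_audio   = pcm_bandwidth_mbps(8, 24, 192000)  # assumed modern worst case
pci        = pci_bandwidth_mbps()               # 32-bit/33MHz PCI

print(round(cd_quality, 1))  # 2.8 Mbit/s
print(round(hd_audio, 1))    # 36.9 Mbit/s
print(pci)                   # 1056 Mbit/s (~1 Gbit/s)
```

    Even the assumed worst case sits comfortably under the 70 Mbit/s ceiling, and well under what plain PCI can deliver.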
  • just4U - Tuesday, July 12, 2011 - link

    For me, finding a PCIe sound card (locally) is a real pain. At least one I'd actually buy. PCI variants are around in abundance, so there's still a call for it.
  • DanNeely - Tuesday, July 12, 2011 - link

    There's no single layout that will suit all GPU configurations, so it behooves you to pick a board with a good layout for what you want. With 3-slot GPUs being much rarer than 2-slot SLI configurations, you can expect to remain on the short end of the stick for the number of suitable layouts.

    As for legacy PCI remaining on boards, I don't expect to see it go away on full size ATX boards any time soon. With only 8 x1 lanes available on the southbridge, there isn't enough connectivity to fill out all 5 slots and attach all the misc devices. FireWire, audio, non-Intel SATA, non-Intel NIC (x2?), and USB3 port pairs (x2? x3?) all eat away at the SB's limited connectivity. If Thunderbolt starts showing up in future systems it'll only make the connectivity situation worse.

    A one-lane PCIe to 4-slot PCI bridge chip brings the situation back to roughly what it was in the prior generation; FireWire and audio fit easily on PCI. It's not quite as good as it was on last generation chipsets though, since you only have 11 connections vs 13 (8 + 5); I suspect this may be behind the drop in the number of boards with an x4 slot. While they could use a 1:4 PCIe bridge instead, older PCI parts are probably cheaper, and they don't want to lose out on sales to the minority with at least one PCI device they still need. Exposing those ports to end users could have issues as well, since they could bottleneck at much lower system load levels than DMI would.

    Unless Intel bumps the SB PCIe lane count to 12 (or goes PCIe 3.0 on the SB), goes all SATA 3.0, and adds at least a half dozen USB3 ports (if not all of them) to the next generation chipset, I expect we'll see legacy PCI lanes on most of next year's crop of boards as well.
  • PR3ACH3R - Monday, July 18, 2011 - link

    What could have been an amusing comment,
    if it wasn't for the fact that no one in his right mind would put a GTX 570 in any machine where he cares about a high-end PCIe sound card.

    Not to mention the total disregard of the simple (& obvious) fact that for most professionals/users, if they're using an expensive pro card at all, 80% of the time it's PCI.

    But hey you have a point, they should have consulted you, not the market analysts.
  • poohbear - Monday, July 11, 2011 - link

    It totally blows that this BIOS lacks a graphical layout. The Asus boards are so convenient for having the newer BIOSes, and I feel like I'm in 2011. These old BIOS screens that everyone still uses belong in the 90's.. :p
  • Mr Perfect - Monday, July 11, 2011 - link

    I would agree. I'd pass up this board on that fact alone. Everyone else has their act together when it comes to a UEFI interface, so what's the major malfunction here? Not acceptable from a tier one board manufacturer.
  • Taft12 - Monday, July 11, 2011 - link

    I dunno guys, if you can't handle a keyboard-only interface, you should hand in your geek-cred cards now. This is Anandtech after all!
  • cyberguyz - Tuesday, July 12, 2011 - link

    Y'know - the UEFI on those others does have an advanced mode that does all the same things as the old character-mode BIOS. I kinda like the boot-up UEFI and really don't want to dual-boot Windows on my Linux box just to fart around with a graphical BIOS (and yes, I have been working with character-mode BIOSes for as long as they have existed - my earliest adventures with overclocking were with jumper caps on a 486 board ;D ).

    But to each their own.
