
  • mapesdhs - Thursday, March 06, 2014 - link

    Having onboard 10GigE is a great feature, but oh no, not more Marvell controllers...
    when will vendors stop using these awful chips??

    I can't understand why Intel doesn't make available whatever technology it uses for its
    own SATA3 connections as a separate simple addon chip which vendors can use to
    provide extra SATA3 ports that _actually work properly_, because Marvell ports simply
    don't. Never entrust a RAID0 setup to a Marvell controller. If there were such a thing as
    a x4 or x8 PCIe Intel SATA3 card which offered 4 to 8 SATA3 ports, I'd buy some for sure,
    eg. for my ASUS P9X79 WS, Asrock X58 Extreme6, etc., and numerous older boards
    which don't have native SATA3.

    Btw, strange thing, although Gigabyte's spec summary says this board supports the
    consumer i7 range of S2011 CPUs, their CPU support list table doesn't show any of
    them. Do you know if i7 support is present at launch? Or coming later?

  • tyleeds - Thursday, March 06, 2014 - link

    Would be more interesting to know if the SFP+ port supports converged fabric (FC + 10GbE or FCoE + 10GbE). Also: the internal USB drive is usually used for booting a bare-metal hypervisor like ESXi.
  • Ian Cutress - Saturday, March 08, 2014 - link

    I have the 6PXSV3 motherboard in to review, which has the same issue: the page says it supports the consumer CPUs, but the QVL doesn't. I spoke with GIGABYTE Server, and they said they can only put CPUs into the QVL that Intel also officially supports, but they do unofficially support the consumer i7 series. I currently have the i7-4960X in the 6PXSV3; I just had to remember to use non-ECC memory with this CPU.
  • SuperSpy00bob - Thursday, March 06, 2014 - link

    Assuming the large white connector about 1/4 down the right side of the board is the EATX 12V connector, does it scare anybody else that the wires are exposed?
  • pattycake0147 - Thursday, March 06, 2014 - link

    Looks like a heatsink to me.
  • DanNeely - Thursday, March 06, 2014 - link

    It's a power connector. See item #2 on the diagram in page 19 of the manual.
  • ZeDestructor - Thursday, March 06, 2014 - link

    Not even slightly.
  • colonelclaw - Thursday, March 06, 2014 - link

    What's the reason behind using the SFP+ interface rather than 10GBase-T?
  • Kevin G - Thursday, March 06, 2014 - link

    Fiber interfaces for 10 Gbit Ethernet have existed for years, whereas 10GBase-T switches are still relatively new. This also lets users select the transceiver that matches their network.
  • colonelclaw - Thursday, March 06, 2014 - link

    Ah right, thanks. So if you were starting from scratch, the logical route would be 10GBase-T, for compatibility with common Cat6/RJ45 setups?
  • ZeDestructor - Thursday, March 06, 2014 - link

    For us, yes, but unless I'm wrong, 10GbE over copper on 8P8C connectors is more expensive and less flexible than SFP+, which, aside from fibre, can also be used with direct-attach copper or an SFP-to-8P8C converter (although those are really rare).

    I like where this is going though... on-board 10GbE... with any luck, we'll see classic copper-only 8P8C 10GbE implementations on higher-end consumer boards, and switches will start getting cheap!
  • gsvelto - Friday, March 07, 2014 - link

    SFP+ offers significantly lower latency than 10GBase-T (up to 1 µs per crossed port) and lower power too. This is mostly due to the block encoding required by the 10GBase-T transmission protocol and the additional DSP processing needed to clean up the signal. SFP+ maximum cable reach is also higher, and switches/adapters are generally cheaper than their 10GBase-T equivalents.

    All in all, unless you're relying on existing legacy infrastructure and cannot make major changes to your cabling setup, going for SFP+ cables and connectors is a win on pretty much all fronts.
  • sor - Wednesday, March 12, 2014 - link

    Agreed. 10GBase-T is pretty horrible, and I hate the fact that the majority of server motherboards with on-board 10G have it. It's power hungry, and switches for it are hard to find. I can only assume it's so they don't have to put 1G and 10G on different connectors. I'll go with direct-attach copper SFP+ any day.
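
    To put that per-port penalty in perspective, here's a rough back-of-the-envelope sketch using the ~1 µs/port upper bound cited above (an assumed worst case; real 10GBase-T PHY latency varies by vendor and generation):

    ```python
    # Extra one-way latency 10GBase-T adds over SFP+ (fiber or DAC)
    # across a multi-hop switched path. The 1 µs/port figure is the
    # worst-case estimate quoted in this thread, not a measured value.

    PENALTY_PER_PORT_US = 1.0  # assumed worst-case 10GBase-T PHY penalty, in µs

    def extra_latency_us(switch_hops: int) -> float:
        """Each switch hop crosses two 10GBase-T ports (ingress + egress)."""
        return 2 * switch_hops * PENALTY_PER_PORT_US

    for hops in (1, 2, 3):
        print(f"{hops} hop(s): +{extra_latency_us(hops):.1f} µs vs SFP+")
    ```

    A few microseconds is irrelevant for bulk file transfers, but it matters for latency-sensitive workloads (storage fabrics, HPC interconnects), which is one reason those deployments favor SFP+.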
  • electroball09 - Thursday, March 06, 2014 - link

    "many $$$$"
    much internet

    very speed

  • BMNify - Thursday, March 06, 2014 - link

    If GIGABYTE’s regional offices don't want you to know their prices without logging in just to weigh that simple cost vs. utility, then we shall all just wait on AMD to prove out those new 2x 10Gb/s ARM motherboards. Provide what people want to buy, with a clear and publicly stated price, or no sale.
  • kmi187 - Thursday, March 06, 2014 - link

    Completely agree with you here. I hate it when you have to jump through hoops to get an idea of how much a certain solution is going to cost you. If they don't bother mentioning it clearly, I won't bother wasting money on their product either.
  • Ian Cutress - Friday, March 07, 2014 - link

    That's just that retailer's policy. I recently learned that GIGABYTE Server also sells through ServersDirect and Superbiiz, which do not have that login requirement. Post updated to reflect these links :)
  • extide - Thursday, March 06, 2014 - link

    This would be the perfect board to upgrade my ZFS server into one day!
  • Libertysyclone - Tuesday, March 11, 2014 - link

    Is IPMI too much to ask? Even if it were an add-in card, I would be happy.
