GIGABYTE Server MD60-SC0 In The Box

Server motherboards are typically quite spartan, with only a couple of SATA cables in the box to facilitate non-standard use. Due to the orientation of the motherboard and the layout of the dedicated SATA ports, GIGABYTE Server goes a step further here by providing two mini-SAS to SATA cables for the two headers on board. This allows 8 of the 10 SATA ports provided by the chipset to be used without purchasing extra hardware. Also in the box are a rear IO shield, manuals, and driver disks.

Many thanks to...

We must thank the following companies for kindly providing hardware for our test bed:

Thank you to OCZ for providing us with PSUs and SSDs.
Thank you to G.Skill for providing us with memory.
Thank you to Corsair for providing us with an AX1200i PSU and a Corsair H80i CLC.
Thank you to MSI for providing us with the NVIDIA GTX 770 Lightning GPUs.
Thank you to Rosewill for providing us with PSUs and RK-9100 keyboards.
Thank you to ASRock for providing us with some IO testing kit.
Thank you to Cooler Master for providing us with Nepton 140XL CLCs.

Test Setup

Processor: Intel Xeon E5-2697 v3 x2 (2 x 145W) - 28C/56T, 2.6 GHz (3.6 GHz Turbo)
           Intel Xeon E5-2650L v3 x2 (2 x 65W) - 24C/48T, 1.8 GHz (2.5 GHz Turbo)
           Intel Xeon E5-2687W v3 x2 (2 x 160W) - 20C/40T, 3.1 GHz (3.5 GHz Turbo)
Motherboard: GIGABYTE MD60-SC0
Cooling: Dynatron R14
Power Supply: OCZ ZX Series 1250W Gold
              Corsair AX1200i Platinum
Memory: ADATA XPG Z1 DDR4-2400 C16 8x8GB 1.2V
Memory Settings: JEDEC @ 2133
Video Card: AMD R7 240
Video Drivers: AMD Catalyst 13.12
Hard Drive: OCZ Vertex 3 256GB
Optical Drive: LG GH22NS50
Case: Open Test Bed
Operating System: Windows 7 64-bit SP1
USB 2/3 Testing: OCZ Vertex 3 240GB with SATA-to-USB adaptor
Comments

  • macwhiz - Wednesday, December 3, 2014

    I'm not surprised that there's no temperature data in the BIOS. Server admins don't look at the BIOS after they complete initial setup (or a major overhaul). It's accessible from the BMC, where it's useful in a server environment. When a server overheats, the admin is usually not in the same room—and often not in the same building, or even the same state. The important question is how the BMC firmware does at exposing that data for out-of-band management via IPMI, SNMP, or another standard solution. Does it play well with an Avocent UMG management device, for instance? As a server admin, I couldn't care less about seeing the temperature in the BIOS. What I care about is that my chosen monitoring solution can see if the temperature is going up—or any hardware fault is detected—and page me, even if the operating system isn't running. That's what BMCs are for!

    Don't apologize for using 240VAC power. Chances are very good that, even in a U.S. data center, it'll be on 240VAC power. Given the current needs of most servers, it's impractical to use 120VAC power in server racks—you'll run out of available amperage on your 120VAC power-distribution unit (power strip) long before you use all the outlets. Keep going down that road and you waste rack space powering PDUs with two or three cords plugged into them. It's much easier and more efficient all the way around to use 240VAC PDUs and power in the data center. Comparing a 20-amp 120V circuit to a 20-amp 240V circuit, you can plug at least twice as many of a given server model into the 240V circuit. Because the U.S. National Electrical Code restricts you to using no more than 80% of the rated circuit capacity for a constant load, you can plug in 16A of load on that 20A circuit. If the servers draw 6A at 120V or 3A at 240V, you can plug in two servers to the 120V power strip, or five servers into the 240V strip, before you overload it. So, once you get beyond a handful of computers, 240V is the way to go in the datacenter (if you're using AC power).
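To put numbers behind the branch-circuit arithmetic in the comment above, here is a minimal Python sketch. The 20 A breaker, 80% continuous-load derating, and 6 A/3 A per-server draws are the figures quoted in the comment, not measurements from this review.

    # Sketch of the servers-per-circuit arithmetic from the comment above.
    # The per-server draws (6 A at 120 V, 3 A at 240 V) are the commenter's
    # example figures, not measured values for this board.

    def servers_per_circuit(breaker_amps, server_amps, continuous_factor=0.80):
        """How many servers fit on one branch circuit.

        The NEC treats servers as a continuous load, so only 80% of the
        breaker rating is usable.
        """
        usable_amps = breaker_amps * continuous_factor
        return int(usable_amps // server_amps)

    print(servers_per_circuit(20, 6))  # 20 A circuit at 120 V -> 2 servers
    print(servers_per_circuit(20, 3))  # 20 A circuit at 240 V -> 5 servers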
  • leexgx - Wednesday, December 3, 2014

    Mass server racks are pure DC in some cases, or 240V. (I would have thought there would be some very basic temperature monitoring in the BIOS, but I guess most of this is exposed elsewhere.)

    So I agree with this post.
  • jhh - Thursday, December 4, 2014

    208V 3-phase is probably more popular than 240V, as most electricity is generated as 3-phase, and using all 3 phases is important for efficiently using the power without being charged for a poor power factor.
  • mapesdhs - Thursday, December 4, 2014

    Ian, you're still using the wrong source link for the C-ray test. The Blinkenlights site is a mirror over which I have no control; I keep the main c-ray page on my SGI site. Google for "sgidepot 'c-ray'" and the first hit will be the correct URL.

    Apart from that, thanks for the review!

    One question: will you ever be able to review any quad-socket systems or higher? I'd love to know how well some of the other tests scale, especially CB R15.

    Ian.
  • fackamato - Friday, December 5, 2014

    No 40Gb benchmarks?
  • sor - Monday, December 8, 2014

    I was excited to see the QSFP, but it seems like it's not put to use. I've been loving our Mellanox switches: they have QSFP, and each port can run 40GbE or 4 x 10GbE with a breakout cable. That provides absolutely ridiculous port density at a great price. You can find SX1012s (12-port QSFP) for under $5k and have 48 10G ports in 1/2U at about $100/port. No funny business with extra costs to license ports. The twinax cable is much cheaper than buying 10G optics, too, but you have to stay close. Usually you only need fibre on the uplinks anyway.
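As a rough illustration of the port-density and cost figures in the comment above, the short Python sketch below multiplies out the breakout ports and the cost per port; the $5,000 switch price and 12-port SX1012 layout are taken from the comment and should be treated as assumptions rather than current pricing.

    # Port-count and cost-per-port arithmetic from the comment above.
    # The switch price is the commenter's figure, not a quoted list price.

    qsfp_ports = 12          # Mellanox SX1012: 12 QSFP ports in half a 1U slot
    lanes_per_qsfp = 4       # each QSFP can break out into 4 x 10GbE
    switch_price_usd = 5000  # assumed street price from the comment

    ten_gig_ports = qsfp_ports * lanes_per_qsfp       # 48 ports
    cost_per_port = switch_price_usd / ten_gig_ports  # ~$104 per 10GbE port

    print(f"{ten_gig_ports} x 10GbE ports at about ${cost_per_port:.0f} per port")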
  • dasco - Saturday, March 9, 2019

    Does it support UDIMMs? The documentation says it supports only RDIMMs or LRDIMMs.
    Is the G.Skill RAM used in this test UDIMM or RDIMM ECC RAM?
