Board Features

The ASUS ROG Strix X299-XE Gaming, while a mouthful to actually say, offers a comprehensive set of options for a mid-range gaming motherboard.

ASUS ROG Strix X299-XE Gaming
Warranty Period 3 Years
Product Page Link
Price $369.99 Amazon US
Size ATX
CPU Interface LGA2066
Chipset Intel X299
Memory Slots (DDR4) Eight DDR4
Supporting 128GB
Quad Channel
Up to DDR4 4133 (quad and dual channel)
Network Connectivity 1 x Intel I219V GbE
Onboard Audio Realtek ALC S1220A
PCIe Slots for Graphics (from CPU)  3 x PCIe 3.0
- 44 Lane CPU: x16/x16/x8 
- 28 Lane CPU: x16/x8/x1 
- 16 Lane CPU: x8/x8/x1 
PCIe Slots for Other (from PCH) 2 x PCIe 3.0 x4
Onboard SATA 8 x RAID 0/1/5/10
Onboard SATA Express None
Onboard M.2 1 x PCIe 3.0 x4 and SATA mode
1 x PCIe 3.0 x4 mode only
Onboard U.2 None
USB 3.1 ASMedia ASM3142 
1 x Type-A
1 x Type-C
1 x Onboard Header
USB 3.0 Chipset
4 x Back Panel
4 x Onboard Headers
USB 2.0 Chipset
2 x Back Panel
2 x Onboard Headers
Power Connectors 1 x 24-pin ATX
1 x 8-pin CPU
1 x 4-pin CPU (optional)
Fan Headers 1 x 4-pin CPU
1 x 4-pin CPU OPT
1 x AIO PUMP
1 x W PUMP+
2 x Chassis
IO Panel 1 x LAN (RJ45) port
2 x USB 3.1 10 Gbps, Type-A and Type-C
4 x USB 3.0
2 x USB 2.0
1 x SPDIF out
5 x Audio Jacks
1 x USB BIOS Flashback Button
1 x ASUS Wi-Fi GO Module

The PCIe lane arrangement looks very odd at first glance, with ASUS seemingly emphasising single-GPU bandwidth for Skylake-X CPUs and expecting users to populate the other slots with add-in cards. Aside from the large heatsinks, nothing immediately stands out on this board: sure, it has an Intel NIC, 802.11ac Wi-Fi, and an ASUS-specific S1220A audio codec, but unlike other boards in this price range, it doesn't have that extra 'knock-out' feature to separate it from other products.
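The slot allocations in the table above can be expressed as a simple lookup. This is purely an illustrative sketch built from the specification table; the names and structure are not from any ASUS tool:

```python
# Illustrative sketch: PCIe slot widths for each Skylake-X/Kaby Lake-X
# CPU lane count on this board, per the specification table above.
SLOT_CONFIG = {
    44: (16, 16, 8),  # 44-lane CPUs: x16/x16/x8
    28: (16, 8, 1),   # 28-lane CPUs: x16/x8/x1
    16: (8, 8, 1),    # 16-lane CPUs: x8/x8/x1
}

def slot_layout(cpu_lanes: int) -> tuple:
    """Return the (slot1, slot2, slot3) electrical widths for a CPU lane count."""
    return SLOT_CONFIG[cpu_lanes]

for lanes, slots in SLOT_CONFIG.items():
    print(f"{lanes}-lane CPU: " + "/".join(f"x{w}" for w in slots))
```

Note how the 44-lane configuration devotes two full x16 links to the top slots, which is the single-GPU (or dual-GPU) emphasis described above.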

Test Bed

As per our testing policy, we take a high-end CPU suitable for the motherboard that was released during the socket's initial launch and equip the system with a suitable amount of memory running at the processor's maximum supported frequency. This is also typically run at JEDEC subtimings where possible. It is noted that some users are not keen on this policy, stating that sometimes the maximum supported frequency is quite low, that faster memory is available at a similar price, or that JEDEC speeds can be prohibitive for performance. While these comments make sense, ultimately very few users apply memory profiles (XMP or otherwise), as they require interaction with the BIOS, and most users will fall back on JEDEC-supported speeds - this includes home users as well as industry buyers who might want to shave off a cent or two from the cost or stay within the margins set by the manufacturer. Where possible, we will extend our testing to include faster memory modules, either at the same time as the review or at a later date.
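The bandwidth this policy trades away can be bounded with simple arithmetic. A quick sketch (theoretical peak figures only, not measured results):

```python
def peak_bandwidth_gbs(mt_per_s: int, channels: int, bus_bytes: int = 8) -> float:
    """Theoretical peak DDR4 bandwidth: transfers/s x 8 bytes per 64-bit channel."""
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

# Quad-channel Skylake-X at the JEDEC-rated DDR4-2666 vs a typical XMP DDR4-3200 kit
jedec = peak_bandwidth_gbs(2666, channels=4)   # ~85.3 GB/s
xmp   = peak_bandwidth_gbs(3200, channels=4)   # ~102.4 GB/s
print(f"DDR4-2666 quad channel: {jedec:.1f} GB/s")
print(f"DDR4-3200 quad channel: {xmp:.1f} GB/s")
```

Even at JEDEC speeds, quad-channel DDR4-2666 offers roughly twice the peak bandwidth of a dual-channel mainstream platform at the same data rate.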

Readers of our motherboard review section will have noted the trend in modern motherboards to implement a form of MultiCore Enhancement / Acceleration / Turbo (read our report here). This does several things, including giving better benchmark results at stock settings (not strictly needed if overclocking is an end-user goal) at the expense of power and temperature. It also provides, in essence, an automatic overclock, which may be against what the user wants. Our testing methodology is 'out-of-the-box', with the latest public BIOS installed and XMP enabled, and thus subject to the whims of this feature. It is ultimately up to the motherboard manufacturer to take this risk - and manufacturers take risks in the setup on every product (think C-state settings, USB priority, DPC latency/monitoring priority, overriding memory subtimings at JEDEC). Processor speed change is part of that risk, and ultimately, even if no overclocking is planned, some motherboards will affect how fast that shiny new processor goes, which can be an important factor in the system build.
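The effect of MultiCore Enhancement can be sketched as replacing the per-core turbo table with the best single-core bin at all loads. The frequencies below are hypothetical illustration values, not Intel's published turbo tables for any specific CPU:

```python
# Illustrative sketch of MultiCore Enhancement (MCE).
# Frequencies are hypothetical turbo bins, not official Intel figures.
def stock_turbo_ghz(active_cores: int) -> float:
    """Stock behavior: fewer active cores allows a higher turbo bin."""
    if active_cores <= 2:
        return 4.5   # light-load turbo on favored cores
    elif active_cores <= 4:
        return 4.3
    else:
        return 4.0   # all-core turbo

def mce_turbo_ghz(active_cores: int) -> float:
    """MCE: the best turbo bin applied regardless of load - a de facto overclock."""
    return stock_turbo_ghz(1)

for n in (1, 4, 10):
    print(f"{n} cores active: stock {stock_turbo_ghz(n):.1f} GHz, "
          f"MCE {mce_turbo_ghz(n):.1f} GHz")
```

The gap between the two curves at high core counts is exactly the 'free' performance (and the extra power and heat) the feature delivers out of the box.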

Test Setup
Processor Intel Core i9-7900X (10C/20T, 3.3 GHz, 140W)
Motherboard ASUS ROG Strix X299-XE Gaming (BIOS version 0802)
Cooling Corsair H115i
Power Supply Corsair HX750
Memory Corsair Vengeance LPX 4x8GB DDR4 2666 CL16
Corsair Vengeance 4x4GB DDR4 3200 CL16
Memory Settings DDR4 2666 CL16-18-18-35 2T
Video Cards ASUS Strix GTX 980
Hard Drive Crucial MX300 1TB
Optical Drive TSST TS-H653G
Case Open Test Bed
Operating System Windows 10 Pro 64-bit


Many thanks to...

We must thank the following companies for kindly providing hardware for our multiple test beds. Some of this hardware is not in this testbed specifically but is used in other testing.

Thank you to ASUS for providing us with GTX 980 Strix GPUs. At the time of release, the STRIX brand from ASUS was aimed at silent running, or to use the marketing term: '0dB Silent Gaming'. This enables the card to disable the fans when the GPU is dealing with low loads well within temperature specifications. These cards equip the GTX 980 silicon with ASUS' Direct CU II cooler and 10-phase digital VRMs, aimed at high-efficiency conversion. Along with the card, ASUS bundles GPU Tweak software for overclocking and streaming assistance.

The GTX 980 uses NVIDIA's GM204 silicon die, built upon their Maxwell architecture. The die packs 5.2 billion transistors into 298 mm2, built on TSMC's 28nm process. A GTX 980 uses the full GM204 core, with 2048 CUDA cores and 64 ROPs on a 256-bit memory bus to GDDR5. The official power rating for the GTX 980 is 165W.

The ASUS GTX 980 Strix 4GB (or the full name of STRIX-GTX980-DC2OC-4GD5) runs a reasonable overclock over a reference GTX 980 card, with frequencies in the range of 1178-1279 MHz. The memory runs at stock, in this case, 7010 MHz. Video outputs include three DisplayPort connectors, one HDMI 2.0 connector, and a DVI-I.
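From the specifications quoted above, the card's theoretical peaks follow directly. A back-of-the-envelope sketch (real workloads will not hit these figures):

```python
def fp32_tflops(cuda_cores: int, clock_mhz: int) -> float:
    """Peak FP32 throughput: cores x 2 ops per cycle (FMA) x clock."""
    return cuda_cores * 2 * clock_mhz * 1e6 / 1e12

def mem_bandwidth_gbs(effective_mhz: int, bus_bits: int) -> float:
    """Peak GDDR5 bandwidth: effective data rate x bus width in bytes."""
    return effective_mhz * 1e6 * bus_bits / 8 / 1e9

# GTX 980 Strix: 2048 CUDA cores at up to 1279 MHz, 7010 MHz GDDR5, 256-bit bus
print(f"Peak FP32: {fp32_tflops(2048, 1279):.2f} TFLOPS")            # ~5.24 TFLOPS
print(f"Memory bandwidth: {mem_bandwidth_gbs(7010, 256):.1f} GB/s")  # ~224.3 GB/s
```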

Further Reading: AnandTech's NVIDIA GTX 980 Review


Thank you to Crucial for providing us with MX300 SSDs. Crucial stepped up to the plate as our benchmark list grows larger with newer benchmarks and titles, and the 1TB MX300 units are strong performers. Based on Marvell's 88SS1074 controller and using Micron's 384Gbit 32-layer 3D TLC NAND, these are 7mm high, 2.5-inch drives rated for 92K random read IOPS and 530/510 MB/s sequential read and write speeds.

The 1TB models we are using here support TCG Opal 2.0 and IEEE-1667 (eDrive) encryption and have a 360TB rated endurance with a three-year warranty.
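That endurance rating translates into drive writes per day (DWPD) with a quick calculation, using the 1TB model's figures from above:

```python
def drive_writes_per_day(endurance_tbw: float, capacity_tb: float,
                         warranty_years: float) -> float:
    """DWPD: full-drive writes per day sustainable over the warranty period."""
    return endurance_tbw / (capacity_tb * warranty_years * 365)

# Crucial MX300 1TB: 360 TB written, three-year warranty
dwpd = drive_writes_per_day(360, 1.0, 3)
print(f"{dwpd:.2f} drive writes per day")  # ~0.33 DWPD
```

Around a third of a full drive write per day is typical for a consumer TLC drive of this era, and is more than ample for a benchmark test bed.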

Further Reading: AnandTech's Crucial MX300 (750 GB) Review


Thank you to Corsair for providing us with Vengeance LPX DDR4 Memory, HX750 Power Supply, and H115i CPU Cooler

Corsair kindly sent a 4x8GB DDR4-2666 set of their Vengeance LPX low-profile, high-performance memory for our stock testing. The heatsink is made of pure aluminum to help remove heat from the sticks, which use an eight-layer PCB. The heatsink is a low-profile design to help fit in spaces where there may not be room for a tall heat spreader: think an SFF case, or a large CPU heatsink overhanging the memory slots. Timings on this specific set come in at 16-18-18-35. The Vengeance LPX line supports XMP 2.0 profiles for easily setting the speed and timings. It also comes with a limited lifetime warranty.

Powering the test system is Corsair's HX750 power supply. The HX750 is a dual-mode unit able to switch from a single 12V rail (62.5A/750W) to five 12V rails (40A max each), and is also fully modular. It has a typical selection of connectors, including dual EPS 4+4-pin connectors, four PCIe connectors, a whopping 16 SATA power leads, and four 4-pin Molex connectors.
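The rail ratings are easy to sanity-check with P = V x I, using the figures above:

```python
def rail_watts(volts: float, amps: float) -> float:
    """Power available on a rail: P = V x I."""
    return volts * amps

# Single-rail mode: one 12V rail at 62.5A covers the unit's full 750W rating
print(rail_watts(12, 62.5))  # 750.0
# Multi-rail mode: each of the five 12V rails is limited to 40A (480W),
# but the combined output is still capped at 750W by the unit.
print(rail_watts(12, 40))    # 480.0
```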

The 135mm fluid dynamic bearing fan remains off until the unit is around 40% loaded, offering complete silence in light workloads. The HX750 comes with a ten-year warranty.

In order to cool these high-TDP HEDT CPUs, Corsair sent over its latest and largest AIO, the H115i. This closed-loop system uses a 280mm radiator with two 140mm SP140L PWM-controlled fans. The pump/block combination mounts to all modern CPU sockets. Users are also able to integrate this cooler into the Corsair Link software via USB for more control and options.

27 Comments

View All Comments

  • PeachNCream - Monday, December 11, 2017 - link

    Yup, but saying "never" speaks in absolute terms and that's not accurate.
  • HStewart - Monday, December 11, 2017 - link

    Multi-CPU systems have always been the market for servers and high-end workstations. I purchased my dual Xeon 5160 Supermicro for Lightwave 3D creations. These types of systems have applications that use multiple threads, especially on servers.

    When I researched dual Xeon systems, the advantage of multi-CPU Xeon (not sure if it applies to AMD) was increased IO abilities. Plus at the time the 5160 was only dual-core - so it gave me 4 cores.

    Today's interest in increasing core counts, especially in non-server environments, is kind of strange - I guess instead of delivering faster performance, they throw cores at it. But the AMD vs Intel core wars remind me of the old frequency wars - it's silly to just say you have more cores in a non-server environment where most of the user interface and logic is single threaded. Yes, in time multiple threads will come about - but it's more difficult for software developers to do that for the user interface.

    Of course we can't say never on this - because with multitasking, the more threads/cores the better it is. Especially in development environments with VMs and compilers that can use multiple threads.
  • SanX - Wednesday, December 13, 2017 - link

    "Inflate the cost", "complex socket" and "more expensive motherboards" sounds like words from Intel press releases. The tech is known for decades, costs nothing to implement, is working on xeons and everyone else including all graphics processors no matter what price.

    Times have changed. Adding more cores is already reaching its thermal design limit - 200-300W and the game is over - so performance scaling with core count on the die becomes deeply sublinear for most tasks, for example linear algebra. The only practical option left is increasing the number of sockets on the board.
  • HStewart - Monday, December 11, 2017 - link

    I used to have a Pentium Pro motherboard - but with a single CPU - it was a whopping $3500 back then.

    Now there is a big difference between Xeon and non-Xeon systems besides the running CPU - Xeons have much greater IO performance than non-Xeon CPUs. I also have a dual 5160 3GHz Xeon system that kept up in performance until some of the later i7's. It's over ten years old and still runs today - but I rarely run it now - just too much trouble ever since I got into laptops.
  • HStewart - Monday, December 11, 2017 - link

    Just for clarification, the Pentium Pro motherboard supported dual CPUs - I just never purchased the extra CPU.
  • sonny73n - Monday, December 11, 2017 - link

    They just don't like the idea of us upgrading our systems with just another of the same old CPU, instead of upgrading the whole system.
  • HStewart - Monday, December 11, 2017 - link

    I have always upgraded both the CPU and Motherboard

    The only exception would be if I could find newer Xeon cores for my Supermicro - especially if the cost has gone down - but I do expect trouble. When I was building machines, it did not matter much - my older workstation system became a render node.
  • svan1971 - Thursday, December 14, 2017 - link

    dude they make 22-core and 32-core CPUs, apparently less is more
  • SanX - Monday, December 11, 2017 - link

    All mobos differing from the others by a mere 10% via some minuscule feature are inflated in price by a factor of 10. How much does it cost manufacturers to build these mobos in China? 20-25 bucks. If you doubt that, wait for the next financial crisis to see their real price.
  • Ro_Ja - Monday, December 11, 2017 - link

    My old-ass P35 motherboard has more USB ports compared to this one.

    I'm not saying it should, but it's probably because of the PCI-e lanes?
