Overclocking

Experience with the ASUS ROG Strix X299-XE

Automatic overclocking with the ASUS ROG Strix was as simple as selecting a preset in EZ System Tuning within the BIOS, or through the AI Suite's Dual Intelligent Processors 5 (DIP5) software.

  1. The first setting we attempted was Extreme Tuning. When using this option, the board gives a warning about the cooling required. It yielded a 39% overclock (a splash screen during POST shows this information), which came out to 4.4 GHz on all cores at ~1.30 V. We know from our past manual testing of this chip that those voltages are a bit high for this CPU: the resulting heat overwhelms our AIO cooler, so it will not pass our stability tests.
  2. The next step down is Fast Tuning. Applying this option, the board sets all cores to 4.3 GHz, with the voltage sitting at 1.26 V under load. We were able to pass our stability tests without issue here, though temperatures did break 90ºC. Again, be careful with the Extreme Tuning setting and make sure the right cooling is there for the job.

Manual overclocking was also easy through the ASUS BIOS. The major overclocking options are under one section: the CPU multiplier, BCLK, and voltages for multiple domains are all found in the same menu. The ROG Strix had no issues with either set of memory when using the XMP profile, nor with setting the memory to 3600 MHz, so we were set there.

Just as with the TUF, about the only thing to note with this board is some of the sensor readings. CPU-Z does not appear to read vCore correctly: whether the overclock is set automatically or manually, the value remains around 0.9 V, close to what is seen in the picture below. OCCT was also unable to read the proper voltage. I am not entirely sure why this is happening, but my theory is that CPU-Z and OCCT are not reading the proper registers for that value, as the ROG Strix uses a few unique ICs for its temperature monitoring. The good news is that the Dual Intelligent Processors 5 software picked it up accurately, as did my go-to temperature monitoring software, Core Temp.

As expected, we did not run into any thermal issues on the VRM with the larger heatsink. The included fan cuts down temperatures an additional couple of degrees C in our testing without making a lot of noise. 

Overclocking Methodology

Our standard overclocking methodology is as follows. We select the automatic overclock options and test for stability with POV-Ray and OCCT to simulate high-end workloads. These stability tests aim to catch any immediate causes for memory or CPU errors.

For manual overclocks, based on the information gathered from previous testing, we start at a nominal voltage and CPU multiplier, and the multiplier is increased until the stability tests fail. The CPU voltage is then increased gradually until the stability tests pass, and the process is repeated until the motherboard reduces the multiplier automatically (due to safety protocols) or the CPU temperature reaches an unreasonably high level (90ºC+). Our test bed is not in a case, which should push overclocks higher with fresher (cooler) air.
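As a rough illustration, the multiplier/voltage search described above can be sketched as a simple loop. This is a hedged sketch, not our actual tooling: `search_overclock` and its `is_stable` callback are hypothetical names, and `is_stable` stands in for a real POV-Ray/OCCT stability run. Voltages are in integer millivolts to avoid float-comparison surprises; all numbers are placeholders, not measured values.

```python
def search_overclock(is_stable, mult=40, vcore_mv=1000,
                     mult_max=50, vcore_max_mv=1230, step_mv=20):
    """Raise the CPU multiplier until the stability test fails, then
    bump vCore until it passes again; stop and keep the last stable
    point once the voltage (in practice, thermal) ceiling is hit."""
    best = (mult, vcore_mv)
    while mult < mult_max:
        mult += 1                        # try the next multiplier step
        while not is_stable(mult, vcore_mv):
            vcore_mv += step_mv          # feed it a little more voltage
            if vcore_mv > vcore_max_mv:
                return best              # ceiling reached: back off
        best = (mult, vcore_mv)          # record the new stable point
    return best

# Toy stand-in: each extra multiplier step needs ~40 mV more vCore.
stable = lambda m, mv: mv >= 1000 + 40 * (m - 40)
print(search_overclock(stable))  # prints (45, 1200): 4.5 GHz at 1.20 V
```

With a 100 MHz BCLK, a multiplier of 45 corresponds to the 4.5 GHz ceiling discussed in the results below; a real run simply substitutes an actual stress test for the toy callback.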

Overclocking Results

For the automatic overclock settings, like most other boards, we saw an overshoot on vCore. In this case, the clocks and voltage were simply too aggressive for our cooling (dual-radiator liquid cooling): the Extreme setting failed where Fast Tuning passed. I'd reserve Extreme Tuning for custom loops with 3x120mm of radiator and/or a delidded and re-TIM'd CPU.

In our manual overclocks, the ROG Strix X299-XE Gaming topped out at 4.5 GHz along with the other boards tested, as expected. The voltages required to reach that clock speed have been within a small variance across all boards tested so far, so nothing out of the ordinary there.

With LLC set to auto again, we did not see any vdroop, and voltages stayed remarkably stable. At the top overclock of 4.5 GHz and 1.23V, the system pulled 321W from the wall. The larger heatsink was warm to the touch (I could easily leave my fingers on it) throughout our testing without using the included fan. The VRM idled at 42ºC while overclocked and topped out at 79ºC after 30 minutes of OCCT. With the fan on it, temperatures dropped around 4ºC, which is helpful. The fan was only audible when it really ramped up, taking on a familiar high-pitched whine at around 4,500 RPM.

27 Comments


  • PeachNCream - Monday, December 11, 2017 - link

    Yup, but saying "never" speaks in absolute terms and that's not accurate.
  • HStewart - Monday, December 11, 2017 - link

    Multi-CPU systems have always been the market for servers and high-end workstations. I purchased my dual Xeon 5160 Supermicro for Lightwave 3D creation. These types of systems have applications that use multiple threads, especially on servers.

    When I researched dual Xeon systems, the advantage of multi-CPU Xeons (not sure if this applies to AMD) was increased IO capability. Plus, at the time the 5160 was only dual-core, so it gave me 4 cores.

    Today's interest in increasing core counts, especially in non-server environments, is kind of strange - I guess instead of delivering faster per-core performance, they throw cores at it. The AMD vs Intel core wars remind me of the old frequency wars - it is silly to boast about having more cores in a non-server environment where most of the user interface and logic is single-threaded. Yes, in time multithreading will come about, but it is more difficult for software developers to do that for the user interface.

    Of course, we can't say never on this - with multitasking, the more threads/cores the better, especially in development environments with VMs and compilers that can use multiple threads.
  • SanX - Wednesday, December 13, 2017 - link

    "Inflate the cost", "complex socket" and "more expensive motherboards" sound like words from Intel press releases. The tech has been known for decades, costs nothing to implement, and works on Xeons and everything else, including all graphics processors, no matter the price.

    Times have changed. Adding more cores is already reaching its thermal design limit - at 200-300W the game is over - so performance scaling with core count on the die becomes deeply sublinear for most tasks, for example linear algebra. The only way practically left is to increase the number of sockets on the board.
  • HStewart - Monday, December 11, 2017 - link

    I used to have a Pentium Pro motherboard - but with a single CPU - it was a whopping $3500 back then.

    Now there is a big difference between Xeon and non-Xeon systems besides the CPU itself - Xeons have much greater IO performance than non-Xeon CPUs. I also have a dual 5160 3 GHz Xeon system that kept up, performance-wise, until some of the later i7's. It is over ten years old and still runs today - but I rarely run it now, just too much trouble ever since I got into laptops.
  • HStewart - Monday, December 11, 2017 - link

    Just for clarification, the Pentium Pro motherboard supported dual CPUs - I just never purchased the extra CPU.
  • sonny73n - Monday, December 11, 2017 - link

    They just don't like the idea of us upgrading our systems with just another of the same old CPU instead of upgrading the whole system.
  • HStewart - Monday, December 11, 2017 - link

    I have always upgraded both the CPU and Motherboard

    The only exception would be if I could find newer Xeon cores for my Supermicro - especially if the cost has gone down - but I do expect trouble. Back when I was building machines, it did not matter much - my older workstation became a render node.
  • svan1971 - Thursday, December 14, 2017 - link

    Dude, they make 22-core and 32-core CPUs - apparently less is more.
  • SanX - Monday, December 11, 2017 - link

    All mobos differing from others by a mere 10% in some minuscule feature are inflated in price by a factor of 10. How much does it cost manufacturers to build these mobos in China? 20-25 bucks. If you doubt that, wait for the next financial crisis to see their real price.
  • Ro_Ja - Monday, December 11, 2017 - link

    My old-ass P35 motherboard has more USB ports compared to this one.

    I'm not saying it should, but it's probably because of the PCIe lanes?
