ASRock X99 WS-E/10G Software

The software package from ASRock has gone through a slow evolution since Z77, coalescing into a single interface for all of ASRock’s functionality called A-Tuning. With the overclocking and gaming models the interface is adjusted slightly, but the green tint follows the majority of ASRock’s motherboard range. The newest element in ASRock’s line, however, is the APP Shop. This is essentially ASRock’s curated version of the Play Store/Microsoft Store, carrying only software that ASRock feels is suited to its motherboard range.

Currently the selection is fairly limited: Chrome, ASRock’s own software programs and a few Asian free-to-play titles. While offering this method of obtaining software is interesting, it does open up a lot of questions. Will there be paid titles? What happens if one element of the store is filled with malware?

The APP Shop also offers a BIOS and Drivers section for updating the system, but as yet we have not had it working properly in our testing.

One suggestion has been that this tool will only update the packages it downloads itself; there is a separate update tool in A-Tuning.

A-Tuning

The initial interface revolves around overclocking, giving the following three modes for automatic frequency adjustments:

Power Saving puts the CPU into a low power mode (1.2 GHz on the 5960X) and requires the system to be under full CPU load before slowly ramping up the speed over the next 6-10 seconds. This keeps power consumption down, but at the expense of responsiveness, because the system forgoes that initial burst of high single-core frequency. Standard mode is the default setting, and Performance mode puts the CPU into high frequency mode under any loading. Performance Mode also opens up the Advanced menu:

Here we have a list of Optimized CPU OC Settings similar to those in the BIOS, along with an auto tuning section. There is no list of auto tuning options similar to ASUS’ software, such as adjusting the stress tests or the target CPU temperature, although I would imagine that all the manufacturers will move in that direction at some point in the future.
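
The slow ramp of Power Saving mode is easy to observe for yourself. Below is a minimal sketch (not ASRock’s tool) that loads one core and samples the reported frequency; it assumes psutil reports live core frequencies on your platform.

```python
import threading
import time

import psutil

stop = False

def busy():
    # Spin to keep one core fully loaded while we sample.
    while not stop:
        pass

worker = threading.Thread(target=busy)
worker.start()

try:
    start = time.time()
    for _ in range(24):  # sample roughly every half second for ~12 s
        load = psutil.cpu_percent(interval=0.5)
        freq = psutil.cpu_freq()
        print(f"t={time.time() - start:5.1f}s  {freq.current:6.0f} MHz  {load:5.1f}% load")
finally:
    stop = True
    worker.join()
```

In Power Saving mode the MHz column should sit at the low idle value for several seconds before climbing; in Performance mode it should jump immediately.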

The Tools menu has a lot of space for ASRock to add future features, but currently houses the ones they do have. XFast RAM allows the system to partition some of the RAM into a RAMDisk while also providing some RAMCache options.
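
XFast RAM itself is vendor software, but the RAMCache idea is simple to illustrate: keep recently read files resident in memory so repeat reads never touch the disk. A toy sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=64)  # hold up to 64 files' contents in RAM
def cached_read(path: str) -> bytes:
    # First call for a path hits the disk; repeats are served from memory.
    with open(path, "rb") as f:
        return f.read()
```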

XFast LAN is a customized interface for cFos, allowing users to prioritize certain applications over others:

Personally I find this interface extremely cumbersome, especially when there are a lot of applications to deal with. ASRock could design something with less white space and a more efficient use of the A-Tuning window to make the process a lot easier. There is direct access to cFos via the Advance Setting button:

The software works with all the network ports on board, including the 10GBase-T ones.
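
cFos does its shaping down at the driver level, so the following is only a toy sketch of the underlying idea: pending transfers are queued, and packets from higher-priority applications are always sent first. The application names and priority values are made up for illustration.

```python
import heapq
from itertools import count

# Lower number = higher priority; unknown applications default to 1.
PRIORITY = {"game.exe": 0, "voip.exe": 0, "browser.exe": 1, "updater.exe": 2}

_order = count()  # tie-breaker preserves FIFO order within a priority level
queue = []

def enqueue(app: str, packet: bytes) -> None:
    heapq.heappush(queue, (PRIORITY.get(app, 1), next(_order), packet))

def dequeue() -> bytes:
    # The game's packets leave before the updater's, regardless of arrival order.
    return heapq.heappop(queue)[2]
```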

Fast Boot enables options relating to UEFI quick booting by disabling certain options until the OS is loaded:

The Online Management Guard (OMG [!]) has been around for several generations of ASRock motherboards now, and offers the user the ability to disable the networking ports during certain times of the day.
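
ASRock does not document how OMG is implemented, but the effect is easy to approximate on Windows with the standard netsh command. A minimal sketch; the interface name "Ethernet" and the blocked hours are assumptions, and it needs administrator rights.

```python
import datetime
import subprocess

BLOCKED_HOURS = range(22, 24)  # hypothetical curfew: 22:00 to midnight

def set_adapter(enabled: bool) -> None:
    # netsh toggles the adapter's administrative state.
    state = "admin=enabled" if enabled else "admin=disabled"
    subprocess.run(
        ["netsh", "interface", "set", "interface", "Ethernet", state],
        check=True,
    )

now = datetime.datetime.now()
set_adapter(now.hour not in BLOCKED_HOURS)
```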

ASRock’s fan software now mirrors the options in the BIOS, giving the user a better sense of what to adjust:

The FAN Test will detect the RPM for a given fan power, although the graph on the left is mislabeled: what ASRock calls ‘FAN Speed (%)’ actually means fan power, and the user has to do the conversion in their head based on the table in the middle. If ASRock were on the ball, they would do the conversion in software, relabel the graph ‘Fan Speed (RPM)’, and scale the axis from the lowest to the highest measured fan speed. Note that the high fan speeds above are actually the speeds from my liquid cooling pump.
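
That conversion is trivial once the FAN Test has produced its table: linearly interpolate between the measured points. A sketch with made-up sample values:

```python
# (fan power %, measured RPM) pairs as a FAN Test might report them (illustrative).
MEASURED = [(20, 850), (40, 1300), (60, 1750), (80, 2150), (100, 2500)]

def power_to_rpm(power: float) -> float:
    points = sorted(MEASURED)
    if power <= points[0][0]:
        return points[0][1]
    for (p0, r0), (p1, r1) in zip(points, points[1:]):
        if power <= p1:
            # Linear interpolation between the two neighbouring samples.
            return r0 + (r1 - r0) * (power - p0) / (p1 - p0)
    return points[-1][1]

print(power_to_rpm(50))  # -> 1525.0, so the 50% point could be labelled 1525 RPM
```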

The Dehumidifier tool in the software is identical to that in the BIOS, allowing the system to enable the fans after the system has been shut off in order to equalize the air temperature inside and outside the case. This has benefits in humid environments where the moisture in the air may condense inside the case during a cool night after a hot day.

The USB Key tool allows users to assign a particular USB drive with login data for particular Windows users. This means a user need not remember a long password to log in, and only specified users are allowed to log in. Lose the USB drive, however, and you lose the ability to log in.
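
ASRock has not published how the tool works, so the following is only a toy illustration of the general concept: treat the presence of a token file on a removable drive as the unlock condition. The token file name is hypothetical, and the removable-drive check relies on psutil’s Windows partition flags.

```python
import os

import psutil

TOKEN_NAME = "login.key"  # hypothetical token file

def usb_token_present() -> bool:
    for part in psutil.disk_partitions():
        # On Windows psutil includes "removable" in opts for USB drives.
        token = os.path.join(part.mountpoint, TOKEN_NAME)
        if "removable" in part.opts and os.path.exists(token):
            return True
    return False

print("unlock" if usb_token_present() else "stay locked")
```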

One of the newer tools in ASRock’s list is the DISK Health Report. This gives the critical information on the drives attached to the system, allowing SSD users to check the remaining life of their drive. This particular drive has been at the heart of my motherboard test beds for almost three years now and is still going strong.
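
The same underlying SMART attributes can be read outside ASRock’s tool, for instance with smartmontools. A sketch assuming smartctl 7.0+ (for JSON output) is installed and that the drive of interest is /dev/sda:

```python
import json
import subprocess

out = subprocess.run(
    ["smartctl", "-A", "--json", "/dev/sda"],
    capture_output=True, text=True, check=True,
).stdout

report = json.loads(out)
for attr in report.get("ata_smart_attributes", {}).get("table", []):
    # Wear-related attributes (e.g. media wearout indicators) hint at SSD life.
    print(f'{attr["id"]:3d}  {attr["name"]:<28} raw={attr["raw"]["value"]}')
```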

The next tab along the top is the OC Tweaker, featuring the more critical options from the BIOS for manual overclocking along with some options to save overclock profiles. The way this is shown in ASRock’s software is quite user-unfriendly, and I would suggest that the next iteration of the software gives an experienced user an easier way to adjust frequencies and voltages without dealing with sliding bars and scrolling.

The System Info tab gives the hardware monitor page by default, showing the frequencies, fan speeds and voltages in the system. Most other manufacturers have a way of recording this data, or plotting it on a graph while running a stress test, but ASRock is behind on this front at this time.
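
Recording of this sort is not hard to bolt on yourself. A minimal sketch that logs frequency and load to a CSV once a second, suitable for graphing after a stress run (temperature sensors via psutil are platform-dependent, so they are omitted here):

```python
import csv
import time

import psutil

with open("monitor_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["elapsed_s", "cpu_mhz", "cpu_load_pct"])
    start = time.time()
    for _ in range(60):  # one sample per second for a minute
        load = psutil.cpu_percent(interval=1)  # also paces the loop
        writer.writerow([round(time.time() - start, 1),
                         round(psutil.cpu_freq().current),
                         load])
```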

The Hardware Monitor section of System Info is identical to that in the BIOS, showing where hardware is installed with a mouse over giving basic details. This is handy for investigating which memory stick, USB or PCIe device is not being detected.

The Live Update tab is limited by comparison to MSI’s. Although I knew there were updates for this platform when I ran the software, it failed to find the updated drivers. It also does not say how big each download is: for a user on a limited or slow bandwidth package, having to download 300MB of audio or graphics drivers can be detrimental.
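
Reporting the size up front is cheap to implement: most download servers answer an HTTP HEAD request with a Content-Length header before any bytes are transferred. A sketch with a placeholder URL, not a real ASRock endpoint:

```python
import requests

url = "https://example.com/drivers/audio_driver.exe"  # placeholder URL

resp = requests.head(url, allow_redirects=True, timeout=10)
size = resp.headers.get("Content-Length")
if size:
    print(f"Download size: {int(size) / 1e6:.1f} MB")
else:
    print("Server did not report a size")
```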

While ASRock’s software package is presented in a good way, and there are a number of helpful tools, there are various aspects here that miss the mark in terms of user experience.

Comments

  • gsvelto - Tuesday, December 16, 2014 - link

    Where I worked we had extensive 10G SFP+ deployments with ping latency measured in single-digit µs. The latency numbers you gave are for pure throughput-oriented, low CPU overhead transfers and are obviously unacceptable if your applications are latency sensitive. Obtaining those numbers usually requires tweaking your power-scaling/idle governors as well as kernel offloads. The benefits you get are very significant on a number of loads (e.g. lots of small files over NFS) and 10GBase-T can be a lot slower on those workloads. But as I mentioned in my previous post, 10GBase-T is not only slower, it's also more expensive, more power hungry and has a minimum physical transfer size of 400 bytes. So if your load is composed of small packets and you don't have the luxury of aggregating them (because latency matters) then your maximum achievable bandwidth is greatly diminished.
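
A quick back-of-the-envelope check of that small-packet ceiling, taking the 400-byte minimum physical transfer size quoted above at face value:

```python
LINE_RATE = 10e9      # bits per second
MIN_TRANSFER = 400    # bytes on the wire per packet (figure from the comment)
PACKET = 84           # bytes in a small IP packet, e.g. a ping

max_pps = LINE_RATE / (MIN_TRANSFER * 8)
useful_bps = max_pps * PACKET * 8
print(f"{max_pps / 1e6:.2f} Mpps, {useful_bps / 1e9:.2f} Gb/s of useful data")
# -> 3.12 Mpps and ~2.1 Gb/s: small packets see roughly a fifth of the line rate
```
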
  • shodanshok - Wednesday, December 17, 2014 - link

    Sure, packet size plays a far bigger role for 10GBase-T than for optical (or even copper) SFP+ links.

    Anyway, the pings tried before were for relatively small IP packets (physical size = 84 bytes), which are way smaller than typical packet sizes.

    For message-passing workloads SFP+ is surely a better fit, but for MPI it is generally better to use more latency-oriented protocol stacks (if I'm not mistaken, InfiniBand uses a lightweight protocol stack for this very reason).

    Regards.
  • T2k - Monday, December 15, 2014 - link

    Nonsense. CAT6a or even CAT6 would work just fine.
  • Daniel Egger - Monday, December 15, 2014 - link

    You're missing the point. Sure, Cat.6a would be sufficient (it's hard to find Cat.7 sockets anyway, but the cabling used nowadays is mostly Cat.7 specced, not Cat.6a), but the problem is ending up with a properly balanced wiring installation that is capable of establishing such a link. Also, copper cabling deteriorates over time, so the measurement protocol might not be worth much by the time you try to establish a 10GBase-T connection...

    Cat.6 is only usable with special qualification (TIA-155-A) over short distances.
  • DCide - Tuesday, December 16, 2014 - link

    I don't think T2k's missing the point at all. Those cables will work fine - especially for the target market for this board.

    You also had a number of other objections a few weeks ago, when this board was announced. Thankfully most of those have already been answered in the excellent posts here. It's indeed quite possible (and practical) to use the full 10GBase-T bandwidth right now, whether making a single transfer between two machines or serving multiple clients. At the time you said this was *very* difficult, implying no one would be able to take advantage of it. Fortunately, ASRock's engineers understood the (very attainable) potential better than this. Hopefully now the market will embrace it, and we'll see more boards like this. Then we'll once again see network speeds that can keep up with everyday storage media (at least for a while).
  • shodanshok - Tuesday, December 16, 2014 - link

    You are right, but the familiar RJ45 connectors and cables can be a strong motivation to go with 10GBase-T in some cases. For a quick example: one of our customers bought two Dell R720xd servers to use as virtualization boxes. The first R720xd is the active one, while the second is a hot standby kept constantly synchronized using DRBD. The two boxes are directly connected with a simple Cat 6e cable.

    As the final customer was in charge of both the physical installation and the normal hardware maintenance, familiar networking equipment such as RJ45 ports and cables was strongly favored.

    Moreover, it is expected that within two die shrinks 10GBase-T controllers will become cheap and low-power enough to be integrated pervasively, similar to how 1GBase-T replaced the old 100 Mb standard.

    Regards.
  • DigitalFreak - Monday, December 15, 2014 - link

    Don't know why they went with 8 PCI-E lanes for the 10Gig controller. 4 would have been plenty.

    1 PCI-E 3.0 lane is 1GB per second (x4 = 4GB/s). 10Gig max is 1.25 GB per second, dual port = 2.5 GB per second. Even with overhead you'd still never saturate an x4 link. They could have used the extra x4 for something else.
  • The Melon - Monday, December 15, 2014 - link

    I personally think it would be a perfect board if they replaced the Intel X540 controller with a Mellanox ConnectX-3 dual QSFP solution so we could choose between FDR IB and 40/10/1Gb Ethernet per port.

    Either that or simply a version with the same slot layout and drop the Intel X540 chip.

    Bottom line though is no matter how they lay it out we will find something to complain about.
  • Ian Cutress - Tuesday, November 1, 2016 - link

    The controller is PCIe 2.0, not PCIe 3.0. You need to use a PCIe 3.0 controller to get PCIe 3.0 speeds.
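
Running the same arithmetic with that correction applied shows why the x8 link makes sense:

```python
# Approximate usable bandwidth per lane after encoding overhead.
GB_S_PER_LANE = {"PCIe 2.0": 0.5, "PCIe 3.0": 1.0}
NEED = 2 * 10 / 8  # dual 10 Gb/s ports -> 2.5 GB/s of traffic

for gen, per_lane in GB_S_PER_LANE.items():
    for lanes in (4, 8):
        supply = lanes * per_lane
        verdict = "enough" if supply >= NEED else "too slow"
        print(f"{gen} x{lanes}: {supply:.1f} GB/s vs {NEED:.1f} GB/s needed -> {verdict}")
# PCIe 2.0 x4 tops out at 2.0 GB/s, under the 2.5 GB/s a dual-port X540 can push,
# so the PCIe 2.0 device needs all eight lanes.
```
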
  • eanazag - Monday, December 15, 2014 - link

    I am assuming we are talking about the free ESXi Hypervisor in the test setup.

    SR-IOV (IOMMU) is not an enabled feature on ESXi with the free license. What this means is that networking will tax the CPU more heavily. Citrix XenServer does support SR-IOV in the free product, and it is all free now - you just pay for support. This is a consideration when interpreting the results of the testing methodology used here.

    Another good way to test 10GbE is using iSCSI, where the server side is a NAS and the client is where the disk is attached. The iSCSI LUN (hard drive) doesn't need to have anything special going on with an SSD; it can just be 3 spindle HDDs in RAID 5. You can use disk test software to drive the benchmarking. If you opt to use XenServer with Windows as the iSCSI client, have the VM connect directly to the NAS instead of going through XenServer to the iSCSI LUN, because you will hit a performance cap from VM to host with the typical added disk within Xen. This applies to the older 6.2 version; Creedence is not fully out of beta yet, I have done no testing on it, and the changes it contains are significant for performance.

    About two years ago I was working on coming up with the best iSCSI setup for VMs using HDDs in RAID and SSDs as caches. I was using Intel X540-T2s without a switch, working with NexentaStor and Sun/Oracle Solaris as iSCSI target servers run on physical hardware, Xen, and VMware. I encountered some interesting behavior in all cases. VMware's sub-storage yielded better hard drive performance. I kept running into an artificial performance limit because of the Windows client and how Xen handles the disks it provides; the recommendation was to add the iSCSI disk directly to the VM, as the limit wouldn't show up there. VMware still imposed a performance hit (>10%) on my setup. Physical hardware had the best performance for the NAS side.
