Miscellaneous Aspects & Final Words

The power consumption at the wall was measured with the display being driven through the HDMI port. In the graphs below, we compare the idle and load power of the VisionX 420D against other low-power PCs evaluated earlier. For load power consumption, we ran FurMark 1.12.0 and Prime95 v27.9 together (a scripted sketch of this combined load is provided after the graphs). The VisionX 420D is not the most power-efficient PC around, but its target market (gamers) is unlikely to care much. The numbers are reasonable for the combination of hardware components in the VisionX 420D.

Idle Power Consumption

Load Power Consumption (Prime95 + FurMark)
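
For readers who want to recreate this kind of sustained load, a minimal sketch is shown below. It is illustrative only: the install paths and the 30-minute duration are assumptions rather than our actual test harness, and apart from Prime95's "-t" torture-test switch, any tool-specific options are left to each program's own UI.

    import subprocess, time

    # Hypothetical install paths -- adjust for the machine under test.
    PRIME95 = r"C:\tools\prime95\prime95.exe"
    FURMARK = r"C:\tools\FurMark\FurMark.exe"

    # Start Prime95's torture test ("-t" is Prime95's torture-test switch)
    # alongside FurMark, hold the combined load while the wall power is read
    # off the meter, then shut both tools down.
    procs = [
        subprocess.Popen([PRIME95, "-t"]),
        subprocess.Popen([FURMARK]),   # stress preset chosen in FurMark's own UI
    ]
    try:
        time.sleep(30 * 60)            # sustain the load until power readings settle
    finally:
        for p in procs:
            p.terminate()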

Thermal Performance

Given the active nature of the thermal solution and the size of the chassis, it is fair to expect the unit to handle full loading of both the CPU and GPU without issues. The VisionX 420D acquits itself very well in our tests. The following two graphs show the various clocks in the system, as well as the temperatures, with the unit subjected to more than an hour of continuous CPU and GPU loading.

The excellent thermal solution manages to keep the CPU temperature well below the junction temperature. The clocks also indicate that there is no throttling at play.
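
The clock and temperature graphs above come from periodically polling the hardware sensors while the stress load runs. A minimal sketch of that kind of logger, assuming a Linux system with the psutil package and a 'coretemp' sensor (not the actual instrumentation behind our graphs), looks like this:

    import csv, time

    import psutil  # third-party; sensors_temperatures() is only available on Linux

    # Sample the CPU frequency and package temperature once per second for an
    # hour of continuous loading, writing the samples to a CSV file that can be
    # plotted as a loading-vs-temperature chart.
    with open("thermal_log.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["elapsed_s", "cpu_freq_mhz", "cpu_temp_c"])
        start = time.time()
        while time.time() - start < 3600:
            freq = psutil.cpu_freq().current            # average core clock in MHz
            temps = psutil.sensors_temperatures()       # {'coretemp': [shwtemp(...), ...]}
            cores = temps.get("coretemp", [])
            temp = max((t.current for t in cores), default=float("nan"))
            writer.writerow([round(time.time() - start, 1), freq, temp])
            time.sleep(1)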

Concluding Remarks

ASRock continues to impress us with the capabilities it crams into a small chassis. The SD card reader, multiple audio outputs and Intel NIC are nice-to-have features. The AMD GPU (Radeon R9 270MX) is also a top-notch choice for gaming in this form factor. The MHL port (along with the supplied MHL cable) is a unique feature of the unit; it allows users to mirror the display of a supported smartphone while also charging it. The WLAN component (Broadcom-based 2x2 802.11ac) is the best among all the mini-PCs that we have evaluated so far.

On the other hand, ASRock should consider supplying an SSD, or an mSATA drive coupled with a smaller hard drive, for the storage subsystem. We are also a bit surprised by the absence of a Blu-ray option for this configuration (either go with no ODD, or include one befitting a premium mini-PC). The choice of GPU, while perfect for gaming, is not that great for videos, given its lack of 4K decode capabilities. As a final note, it is definitely time for ASRock to reconsider the bundled MCE remote; a mini-keyboard / trackpad combo would be a better option in its place. Apart from these quibbles, there is not much to say against this unit. If you are looking for a non-DIY gaming mini-PC that doesn't skimp on features, it is hard to go wrong with the VisionX 420D.

Comments

  • blackmagnum - Monday, September 1, 2014 - link

    Post-Anand... I see that the quality of the article still continues to impress. Thanks.
  • lurker22 - Monday, September 1, 2014 - link

    Yeah, a whole 2 days after he "officially" resigned lol. Wait a year before you evaluate ;)
  • pectorvector - Monday, September 1, 2014 - link

    The table at the bottom of the first page (look at the GPU row, Habey BIS-6922) has "Graphisc" written instead of Graphics.
  • TheinsanegamerN - Monday, September 1, 2014 - link

    Any word on temperatures? I know that Tom's Hardware recorded temps in the 90C range with their model when it was reviewed. Did you guys observe anything similar? Always wondered what would happen if you were to mill out the top and mount a nice fan there, blowing down on the components.
  • ganeshts - Monday, September 1, 2014 - link

    On the graph in the final section 'System Loading vs. Temperature Characteristics', you can see the CPU temperature rise to 90 C, but only with both Prime 95 and Furmark running simultaneously. This is hardly a valid practical use-case.

    I don't believe thermals are a cause for concern with this PC for normal workloads in home / office scenarios.
  • monstercameron - Monday, September 1, 2014 - link

    come on oems put a kaveri apu in one of em!
  • Nickname++ - Monday, September 1, 2014 - link

    FYI, I have the 420D running under Debian Linux and it can idle at ~12 W. The trick is to force PCIe ASPM (power management) with a kernel option; it is disabled in the ACPI configuration, but well supported since these are all laptop components. I guess disabling it reduced the testing effort. Then enabling "laptop mode" gets you there.

    So as usual with Linux it's not plug 'n' play, but it's reasonably easy to lower the power for an always-on HTPC + server combo.

    Another note: the Intel integrated graphics are disabled, and the AMD card is always on. With a hybrid laptop architecture, I guess the idle power could get lower, like an Intel-only NUC. But again, it's a simpler configuration for ASRock with a fixed set-up.
  • tuxRoller - Monday, September 1, 2014 - link

    Linux, and open source in general, doesn't exist at this site.
    You might as well say beos:)
  • yannigr2 - Monday, September 1, 2014 - link

    As long as there is no detailed info about the CPU/GPU in the charts, the charts are still a red bar between gray bars that most people will never really spend the time to understand. And right now there are only 8 mini-PCs; if those become 12-15 or more in the future, it will be a total hell of strange model numbers.
  • ganeshts - Monday, September 1, 2014 - link

    As a reader myself, I would first take a look at the table at the bottom of the first page and note down the two or three PCs that I want to see the unit under review compared against. The full details of each system are provided in that table with the drop-down selection.

    In addition, I do have data for 12-15 PCs even right now, but I choose the 6 - 7 appropriate PCs to compare against and only include those in the graphs.

    It is a trade-off between having cluttered graphs (presenting all the info for the reader in one view) vs. splitting the info into two (a table on one page, and cleaner graphs on other pages - but expecting the reader to do a bit of 'work' before viewing the graphs). I went with the latter for more readability. The benchmark numbers depend heavily on the DRAM being used, the storage subsystem configuration etc., and not just the CPU / GPU. Under these circumstances, I believe the 'split into two' approach is the better one.

    If you have any other suggestions on how to tackle this problem, I am all ears.
