Performance Metrics - II

In this section, we mainly look at benchmark modes in programs used on a day-to-day basis, i.e., application performance rather than synthetic workloads.

x264 Benchmark

First off, we have some video encoding benchmarks courtesy of x264 HD Benchmark v5.0.

Video Encoding - x264 5.0


7-Zip

7-Zip is a very effective and efficient compression program, often beating out OpenCL-accelerated commercial programs in benchmarks even while using just the CPU. 7-Zip includes a benchmark mode that provides plenty of detail about the underlying CPU's efficiency. In this subsection, we are interested in the compression and decompression MIPS ratings when all the available threads are utilized.
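The throughput idea behind 7-Zip's benchmark can be approximated with a quick timing sketch. The snippet below is a hypothetical, single-threaded analogue using Python's standard-library `lzma` module (the same LZMA family 7-Zip uses); the function name `lzma_throughput` is our own, and 7-Zip's actual MIPS ratings additionally normalize for dictionary size and thread count, so treat this only as an illustration of measuring compression vs. decompression rates.

```python
import lzma
import os
import time

def lzma_throughput(data: bytes) -> tuple[float, float]:
    """Return (compress, decompress) rates in MB/s for one LZMA pass.

    A rough, single-threaded analogue of what 7-Zip's benchmark
    measures; the real 7z ratings also normalize for dictionary
    size and thread count.
    """
    mb = len(data) / 1e6

    start = time.perf_counter()
    packed = lzma.compress(data, preset=5)
    c_rate = mb / (time.perf_counter() - start)

    start = time.perf_counter()
    unpacked = lzma.decompress(packed)
    d_rate = mb / (time.perf_counter() - start)

    assert unpacked == data  # sanity check on the round trip
    return c_rate, d_rate

# Mixed-entropy payload: half random bytes, half zeros (512 KB total)
payload = os.urandom(256 * 1024) + bytes(256 * 1024)
c, d = lzma_throughput(payload)
print(f"compress: {c:.1f} MB/s, decompress: {d:.1f} MB/s")
```

Decompression typically runs several times faster than compression, which is why 7-Zip reports the two MIPS ratings separately.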

7-Zip LZMA Compression Benchmark

7-Zip LZMA Decompression Benchmark

TrueCrypt

As businesses (and even home consumers) become more security conscious, the importance of encryption can't be overstated. CPUs supporting the AES-NI instructions for accelerating encryption and decryption have, until recently, been restricted to the higher-end SKUs. However, with Bay Trail, even the lowly Atom series has gained AES-NI support. The Core i5-4200M in the VisionX 420D does support AES-NI. TrueCrypt, a popular open-source disk encryption program, can take advantage of these capabilities. The TrueCrypt internal benchmark provides some interesting cryptography-related numbers to ponder. The graph below gives an idea of how fast a TrueCrypt volume would behave in the VisionX 420D and how it compares against other select PCs. This test depends purely on CPU features and clock speed.
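Whether a CPU exposes AES-NI can be checked from the `aes` feature flag that the Linux kernel reports in `/proc/cpuinfo`. The helper below is a small illustrative sketch (the function name is our own invention); TrueCrypt itself detects the capability via CPUID rather than by reading this file.

```python
import os

def has_aes_ni(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line lists the x86 'aes' feature
    flag, which is how the Linux kernel reports AES-NI support."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            _, _, flags = line.partition(":")
            if "aes" in flags.split():
                return True
    return False

# On Linux, query the real CPU; elsewhere this block is skipped.
if os.path.exists("/proc/cpuinfo"):
    with open("/proc/cpuinfo") as f:
        print("AES-NI supported:", has_aes_ni(f.read()))
```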

TrueCrypt Benchmark

Agisoft Photoscan

Agisoft PhotoScan is a commercial program that converts 2D images into 3D point maps, meshes and textures. The program designers sent us a command line version in order to evaluate the efficiency of the various systems that come under our review scanner. The command line version has two benchmark modes, one using only the CPU and the other using both the CPU and GPU (via OpenCL). The benchmark takes around 50 photographs and runs four stages of computation:

  • Stage 1: Align Photographs
  • Stage 2: Build Point Cloud (capable of OpenCL acceleration)
  • Stage 3: Build Mesh
  • Stage 4: Build Textures

We record the time taken for each stage. Since some elements of the software are single-threaded, others multi-threaded, and some stages can use the GPU, it is interesting to record the effects of CPU generation, clock speed, core count, DRAM parameters and the GPU using this software.
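Recording per-stage wall-clock times, as we do for the four PhotoScan stages, can be sketched like this. Note this is a hypothetical harness of our own (`time_stages` and the placeholder stage bodies are illustrations, not PhotoScan's API):

```python
import time
from typing import Callable, Dict

def time_stages(stages: Dict[str, Callable[[], object]]) -> Dict[str, float]:
    """Run each named stage in order and record its wall-clock
    duration in seconds, one entry per stage."""
    timings = {}
    for name, run in stages.items():
        start = time.perf_counter()
        run()
        timings[name] = time.perf_counter() - start
    return timings

# Placeholder stage bodies standing in for the real PhotoScan work.
results = time_stages({
    "Align Photographs": lambda: sum(i * i for i in range(100_000)),
    "Build Point Cloud": lambda: sorted(range(100_000), reverse=True),
    "Build Mesh": lambda: [i ** 0.5 for i in range(100_000)],
    "Build Textures": lambda: bytes(100_000),
})
for stage, seconds in results.items():
    print(f"{stage}: {seconds:.3f} s")
```

Timing each stage separately (rather than the whole run) is what lets us attribute differences to single-threaded speed, core count, or the GPU, since each stage stresses a different part of the system.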

Agisoft PhotoScan Benchmark - Stage 1

Agisoft PhotoScan Benchmark - Stage 2

Agisoft PhotoScan Benchmark - Stage 3

Agisoft PhotoScan Benchmark - Stage 4

Dolphin Emulator

Wrapping up our application benchmark numbers are the results from the Dolphin Emulator's benchmark mode.

Dolphin Emulator Benchmark

30 Comments


  • blackmagnum - Monday, September 1, 2014 - link

    Post-Anand... I see that the quality of the article still continues to impress. Thanks.
  • lurker22 - Monday, September 1, 2014 - link

    Yeah, a whole 2 days after he "officially" resigned lol. Wait a year before you evaluate ;)
  • pectorvector - Monday, September 1, 2014 - link

    The table at the bottom of the first page (look at the GPU row, Habey BIS-6922) has "Graphisc" written instead of Graphics.
  • TheinsanegamerN - Monday, September 1, 2014 - link

    Any word on temperatures? I know that toms hardware recorded temps in the 90c range with their model when it was reviewed. Did you guys observe anything similar? always wondered what would happen if you were to mill out the top and mount a nice fan there, blowing down on the components.
  • ganeshts - Monday, September 1, 2014 - link

    On the graph in the final section 'System Loading vs. Temperature Characteristics', you can see the CPU temperature rise to 90 C, but only with both Prime 95 and Furmark running simultaneously. This is hardly a valid practical use-case.

    I don't believe thermals are a cause for concern with this PC for normal workloads in home / office scenarios.
  • monstercameron - Monday, September 1, 2014 - link

    come on oems put a kaveri apu in one of em!
  • Nickname++ - Monday, September 1, 2014 - link

    FYI, I have the 420D running under Debian Linux and it can idle at ~12 W. The trick is to force PCIe ASPM (power management) using a kernel option, which is disabled in the ACPI configuration but well supported as it's all laptop components. I guess disabling it reduced the testing effort. Then enabling "laptop mode" gets you there.

    So as usual with Linux it's not plug n' play, but it's reasonably easy to lower the power for an always-on HTPC+server combo.

    Another info: the Intel integrated graphics are disabled, and the AMD card is always on. With a hybrid laptop architecture I guess the idle power could get lower, like an Intel only NUC. But again, it's a simpler configuration for ASRock with a fixed set-up.
  • tuxRoller - Monday, September 1, 2014 - link

    Linux, and open source in general, doesn't exist at this site.
    You might as well say beos:)
  • yannigr2 - Monday, September 1, 2014 - link

    As long as there is no detailed info about the cpu/gpu in the charts, charts are still a red bar between gray bars that most people will never really spend time to understand what they represent. And now they are only 8 mini-PCs. If those become 12-15 or more in the future it will be a total hell of strange model numbers.
  • ganeshts - Monday, September 1, 2014 - link

    As a reader myself, I would first take a look at the table at the bottom of the first page and note down the two or three PCs that I hope to see how the PC under review fares against. The full details of each system are provided in that table with the drop-down selection.

    In addition, I do have data for 12-15 PCs even right now, but I choose the 6 - 7 appropriate PCs to compare against and only include those in the graphs.

    It is a trade-off between having cluttered graphs (presenting all the info for the reader in one view) vs. splitting the info into two (a table on one page, and cleaner graphs on other pages - but expecting the reader to do a bit of 'work' before viewing the graphs). I went with the latter for more readability. The benchmark numbers depend heavily on the DRAM being used, the storage subsystem configuration etc., and not just the CPU / GPU. Under these circumstances, I believe the 'split into two' approach is the better one.

    If you have any other suggestions on how to tackle this problem, I am all ears.
