CPU Tests on Windows: Office

WinRAR 5.0.1: link

Our WinRAR test from 2013 is updated to the latest version of WinRAR at the start of 2014. We compress a set of 2867 files across 320 folders totaling 1.52 GB: 95% of the files by count are small, typical website files, while the remainder, accounting for 90% of the total size, consists of short 30-second 720p videos.

WinRAR 5.01, 2867 files, 1.52 GB

3D Particle Movement

3DPM is a self-penned benchmark that takes the basic 3D movement algorithms used in Brownian-motion simulations and tests them for speed. The single-threaded version rewards high floating-point performance, frequency, and IPC, whereas the multithreaded version also has to manage its threads and benefits from more cores.
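3DPM itself is not publicly distributed, but the flavor of the workload can be sketched: a Brownian-motion kernel displaces particles in random 3D directions and is timed. Below is a minimal single-threaded Python sketch under that assumption; the real benchmark is compiled and far more FP-intensive, and all names here are illustrative.

```python
import math
import random
import time

def brownian_step(positions, step=1.0):
    """Move each particle by a fixed-length step in a random 3D direction."""
    for p in positions:
        # Pick a direction uniformly distributed on the unit sphere.
        theta = random.uniform(0.0, 2.0 * math.pi)
        z = random.uniform(-1.0, 1.0)
        r = math.sqrt(1.0 - z * z)
        p[0] += step * r * math.cos(theta)
        p[1] += step * r * math.sin(theta)
        p[2] += step * z

def run_benchmark(n_particles=10_000, n_steps=100):
    """Time the movement kernel and return a throughput score (higher is better)."""
    positions = [[0.0, 0.0, 0.0] for _ in range(n_particles)]
    start = time.perf_counter()
    for _ in range(n_steps):
        brownian_step(positions)
    elapsed = time.perf_counter() - start
    return n_particles * n_steps / elapsed  # particle-steps per second

print(f"{run_benchmark():.0f} particle-steps/s")
```

A kernel like this is dominated by transcendental math and per-particle floating-point work, which is why the single-threaded score tracks frequency and IPC so closely.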

3D Particle Movement: Single Threaded

3D Particle Movement: MultiThreaded

FastStone Image Viewer 4.9

FastStone is the program I use to perform quick or bulk actions on images, such as resizing, adjusting for color and cropping. In our test we take a series of 170 images in various sizes and formats and convert them all into 640x480 .gif files, maintaining the aspect ratio. FastStone does not use multithreading for this test, and results are given in seconds.
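The "maintaining the aspect ratio" fit used in this conversion reduces to a single scale computation: shrink the image by whichever axis hits the 640x480 box first. A minimal Python sketch (function name is ours, not FastStone's):

```python
def fit_within(width, height, box_w=640, box_h=480):
    """Scale (width, height) to fit inside box_w x box_h, preserving aspect ratio."""
    scale = min(box_w / width, box_h / height)
    return max(1, round(width * scale)), max(1, round(height * scale))

# A 1920x1080 photo is width-limited; a portrait 1000x2000 image is height-limited.
print(fit_within(1920, 1080))  # (640, 360)
print(fit_within(1000, 2000))  # (240, 480)
```

Because the math is trivial, the test time is dominated by decode, resample, and GIF encode, which in FastStone all run on a single thread.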

FastStone Image Viewer 4.9

Synthetic – 7-Zip 9.2: link

As an open-source compression tool, 7-Zip is a popular choice for making sets of files easier to handle and transfer. The software offers its own built-in benchmark, and we report its result.

7-zip Benchmark

Conclusions on Office Benchmarks

Similar to the professional tests, the gains here are in line with what we would expect from +200 MHz overclocks.

Comments

  • Zoeff - Friday, August 28, 2015 - link

    As an owner of a 6700K that's running at 4.8GHz, this is a very interesting article for me. :)

    I've currently entered 1.470 V in the UEFI and I can see up to 1.5 V in CPU-Z. Anything lower and it becomes unstable. So I guess I'm probably on the high side voltage-wise...
  • zepi - Friday, August 28, 2015 - link

    Sounds like a scorching voltage for 24/7 operation considering it's a 14nm process... But obviously, we don't really know if this is detrimental in the longer term.
  • 0razor1 - Friday, August 28, 2015 - link

    I believe it is. Ion migration. High voltage = breakdown at some level. Enough damage and things go amiss.
    When one considers 1.35+ V high for 22nm, I wonder why we're doing this (1.35+ V) at 14nm.

    If it's OK, then can someone explain why one should not go over, say, 1.6 V on the DRAM at 22nm, and why stick to 1.35 V at 14nm? Might as well use standard previous-generation voltages and call it a day?

    Further, where are the AVX-stable loads? Sorry, but no P95 small in-place FFTs with AVX = NOT stable enough for me. It's not the temps (I have an H100i), for sure. For example, my 4670K takes 1.22 VCore for 4.6 GHz, but 1.27 VCore when I stress with AVX loads (P95 being one of them).

    It's *not* OK to say "hey, that synthetic is too much of a stress," etc. I've used nothing but P95 since K10 and haven't found a better error catcher.
  • 0razor1 - Friday, August 28, 2015 - link

    To add to the above, downclocking the core on GPUs and running memtest in OCCT is *it* for my VRAM stability tests when I OC my graphics cards. I wonder how people just 'look' for corruption in benchmarks like Fire Strike and call their OCs stable. It doesn't work.

    Run a game and leave it idle for ~10 hours and come back. You will find glitches all over the place on your 'stable' OC.

    Just sayin' - OC stability testing has fallen to new lows in the recent past, be it graphics cards or processors.
  • Zoeff - Friday, August 28, 2015 - link

    I tend to do quick tests such as Cinebench R15 and HandBrake, and if those pass I just run it for a week with regular usage such as gaming and streaming. If it blue-screens or I get any other oddities, I raise the voltage by 0.01 V. I had to do that twice in the space of one week (started at 1.45 V, 4.8 GHz).
  • Oxford Guy - Saturday, August 29, 2015 - link

    That's a great way to corrupt your OS and programs.
  • Impulses - Saturday, August 29, 2015 - link

    Yeah, I do all my strenuous testing first; if I have to simulate real-world conditions by leaving two tests running simultaneously, I do that too... Like running an encode with Prime95 in the background, or stressing the CPU, GPU, AND I/O simultaneously.

    AFTER I've done all that THEN I'll restore a pre-tinkering OS image, unless I had already restored one after my last BSOD or crash... Which I'll do sometimes mid-testing if I think I've pushed the OC far enough that anything might be hinky.

    It's so trivial to work with backups like that, it should be SOP.
  • Oxford Guy - Sunday, August 30, 2015 - link

    If a person is using an unstable overclock for daily work it may be hard to know if stealth corruption is happening.
  • kuttan - Sunday, August 30, 2015 - link

    haha that is funny.
  • kmmatney - Saturday, September 19, 2015 - link

    I do the same as the OP (but use Prime95 and HandBrake). If it passes a short test there (say, one movie in HandBrake), I just start using the machine. I've had blue screens, but never any corruption issues. I guess corruption could happen, but the odds are pretty low. My computer gets backed up every night to a WHS server, so I can be fearless.
