Generational Tests on the i7-6700K: Windows Professional Performance

Agisoft PhotoScan – 2D to 3D Image Manipulation: link

Agisoft PhotoScan creates 3D models from 2D images, a process which is very computationally expensive. The reconstruction is split into four distinct phases, and different phases require either fast memory, high IPC, more cores, or even OpenCL compute devices on hand. Agisoft supplied us with a special version of the software to script the process, in which we take 50 images of a stately home and convert them into a medium-quality model. This benchmark typically takes around 15-20 minutes on a high-end PC on the CPU alone, with GPUs reducing the time.

Agisoft PhotoScan Benchmark - Total Time
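
The scripted run above lends itself to a rough reproduction. Below is a minimal sketch, assuming the PhotoScan Pro Python scripting interface; the method names come from the public 1.x API rather than the special build Agisoft supplied, the folder path is illustrative, and exact parameters differ between versions.

import glob
import time

import PhotoScan  # only available inside PhotoScan's bundled Python environment

doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(glob.glob("stately_home/*.jpg"))  # the 50 source images

# The reconstruction phases, timed individually.
phases = [
    ("Match photos", chunk.matchPhotos),
    ("Align cameras", chunk.alignCameras),
    ("Build dense cloud", chunk.buildDenseCloud),
    ("Build model", chunk.buildModel),
]

total = 0.0
for name, step in phases:
    start = time.time()
    step()  # run with PhotoScan's default (medium) settings
    elapsed = time.time() - start
    total += elapsed
    print("%s: %.1f s" % (name, elapsed))

print("Total: %.1f s" % total)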

Cinebench R15

Cinebench is a benchmark based around Cinema 4D, and is fairly well known among enthusiasts for stressing the CPU with a fixed rendering workload. Results are given as a score, where higher is better.

Cinebench R15 - Single Threaded

FastStone Image Viewer 4.9

HandBrake v0.9.9: link

For HandBrake, we take two videos (a 2h20m 640x266 DVD rip and a 10-minute double-UHD 3840x4320 animated short) and convert them to the x264 format in an MP4 container. Results are given in frames per second processed, and HandBrake uses as many threads as possible.

HandBrake v0.9.9 LQ Film

HandBrake v0.9.9 2x4K
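
The same encodes can be scripted outside the GUI with HandBrakeCLI. The sketch below is only an assumption of how such a run might be driven: the file names are illustrative and flag spellings can differ between HandBrake versions.

import subprocess
import time

def encode(src, dst):
    """Run a HandBrake x264 encode into an MP4 container and return wall time."""
    cmd = [
        "HandBrakeCLI",
        "-i", src,      # source video (the DVD rip or the 4320p short)
        "-o", dst,      # output file
        "-e", "x264",   # x264 video encoder
        "-f", "mp4",    # MP4 container
    ]
    start = time.time()
    subprocess.run(cmd, check=True)
    return time.time() - start

elapsed = encode("lq_film.mkv", "lq_film_x264.mp4")
# HandBrake's own log reports the average encoding speed in fps, which is what
# the charts above plot; source frame count divided by elapsed gives the same figure.
print("Encode took %.1f s" % elapsed)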

Hybrid x265

Hybrid is a new benchmark in which we take a 4K, 1500-frame video and convert it into the x265 format without audio. Results are given in frames per second.

Hybrid x265, 4K Video
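
Hybrid itself is a GUI front-end, so as a rough stand-in the sketch below drives an x265 encode through ffmpeg's libx265 and computes frames per second from wall time; the file name and the ffmpeg route are assumptions, not the exact pipeline behind the chart above.

import subprocess
import time

FRAMES = 1500  # length of the benchmark clip

cmd = [
    "ffmpeg", "-y",
    "-i", "clip_4k.mp4",        # illustrative 4K source
    "-frames:v", str(FRAMES),   # encode the first 1500 frames
    "-an",                      # drop audio, as in the benchmark
    "-c:v", "libx265",
    "clip_4k_x265.mp4",
]

start = time.time()
subprocess.run(cmd, check=True)
elapsed = time.time() - start
print("%.2f fps" % (FRAMES / elapsed))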

477 Comments

  • jwcalla - Wednesday, August 5, 2015 - link

    I kind of agree. I think I'm done with paying for a GPU I'm never going to use.
  • jardows2 - Wednesday, August 5, 2015 - link

    If you don't overclock, buy a Xeon E3. i7 performance at i5 price, without integrated GPU.
  • freeskier93 - Wednesday, August 5, 2015 - link

    Except the GPU is still there; it's just disabled. So yes, the E3 is a great CPU for the price (I have one), but you're still paying for the GPU because the silicon is still there; you're just not paying as much.
  • MrSpadge - Wednesday, August 5, 2015 - link

    Dude, an Intel CPU does not get cheaper if it's cheaper to produce. Their prices are only weakly linked to the production costs.
  • AnnonymousCoward - Saturday, August 8, 2015 - link

    That is such a good point. The iGPU might cost Intel something like $1.
  • Vlad_Da_Great - Wednesday, August 5, 2015 - link

    Haha, nobody cares about you @jjj. Integrating the GPU with the CPU saves money, not to mention space and energy. Instead of paying $200 for the CPU and buying a dGPU for another $200-300, you get them both on the same die. OEMs love that. If you don't want to use it, just disable the GPU and buy 200W from AMD/NVDA. And it appears the system memory will now come on the CPU silicon as well. INTC wants to exterminate everything, even the cockroaches in your crib.
  • Flunk - Wednesday, August 5, 2015 - link

    Your generational tests look like they could have come from different chips in the same series. Intel isn't giving us much reason to want to upgrade. They could have at least put out an 8-core consumer chip. It wouldn't even take that much more die space to do so.
  • BrokenCrayons - Wednesday, August 5, 2015 - link

    With Skylake's Camera Pipeline, I should be able to apply a sepia filter to my selfies faster than ever before while saving precious electricity that will let me purchase a little more black eyeliner and those skull print leg warmers I've always wanted. Of course, if it doesn't, I'm going to be really upset with them and refuse to run anything more modern than a 1Giga-Pro VIA C3 at 650 MHz because it's the only CPU on the market that is gothic enough pending the lack of much needed sepia support in Skylake.
  • name99 - Wednesday, August 5, 2015 - link

    And BrokenCrayons wins the Daredevil award for the most substantial lack of vision regarding how computers can be used in the future.

    For augmented reality to become a thing we need to, you know, actually be able to AUGMENT the image coming in through the camera...
    Today that's on the desktop (where it can be used to prototype algorithms, and for Surface-type devices). Tomorrow it's in Atom, where (Intel hopes) it gives them some sort of edge over ARM (though good luck with that --- I expect that by the time this actually hits Atom, every major ARM vendor will have something comparable or superior).

    Beyond things like AR, Apple TODAY uses CoreImage in a variety of places to handle their UI (e.g. the Blur and Vibrancy effects in Yosemite). I expect they will be very happy to use new GPU extensions that do this at lower power, and that same lower power will extend to all users of the CI APIs.

    Without knowing EXACTLY what Camera Pipeline is providing, we're in no position to judge.
  • BrokenCrayons - Friday, August 7, 2015 - link

    I was joking.
