Alien: Isolation

If first-person survival mixed with horror is your sort of thing, then Alien: Isolation, based on the Alien franchise, should be an interesting title. Developed by The Creative Assembly and released in October 2014, Alien: Isolation has won numerous awards, from Game of the Year to several top-10/top-25 lists and Best Horror titles, racking up over a million sales by February 2015. Alien: Isolation uses a custom-built engine which includes dynamic sound effects and should be fully multi-core enabled.

[Benchmark graphs: Alien Isolation on ASUS GTX 980 Strix 4GB ($560), MSI R9 290X Gaming LE 4GB ($380), MSI GTX 770 Lightning 2GB ($245), MSI R9 285 Gaming 2GB ($240), and Integrated Graphics]

Aside from a small dip by the Core i7-2600K when using the R9 285, the i3-7350K matches the other CPUs in Alien Isolation.

Comments

  • Michael Bay - Saturday, February 4, 2017 - link

    >competition
    >AMD
  • Ranger1065 - Sunday, February 5, 2017 - link

    You are such a twat.
  • Meteor2 - Sunday, February 5, 2017 - link

    Ignore him. Don't feed trolls.
  • jeremynsl - Friday, February 3, 2017 - link

    Please consider abandoning the extreme focus on average framerates. It's old-school and doesn't really reflect the performance differences between CPUs anymore. Frame-time variance and minimum framerates are what is needed for these CPU reviews.
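
    A minimal sketch of what such metrics could look like, computed from a plain list of per-frame times (the file name and one-number-per-line format are placeholder assumptions, not any particular capture tool's output):

      # Sketch: average FPS, "1% low" FPS and frame-time variance from
      # per-frame times in milliseconds, one value per line.
      import statistics

      def frame_metrics(frame_times_ms):
          avg_fps = 1000.0 / statistics.mean(frame_times_ms)
          # "1% low" FPS: average over the slowest 1% of frames.
          slowest = sorted(frame_times_ms, reverse=True)
          n = max(1, len(slowest) // 100)
          low_fps = 1000.0 / statistics.mean(slowest[:n])
          variance = statistics.pvariance(frame_times_ms)  # in ms^2
          return avg_fps, low_fps, variance

      times = [float(line) for line in open("frametimes.txt") if line.strip()]
      avg_fps, low_fps, variance = frame_metrics(times)
      print(f"avg {avg_fps:.1f} FPS, 1% low {low_fps:.1f} FPS, "
            f"frame-time variance {variance:.2f} ms^2")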
  • Danvelopment - Friday, February 3, 2017 - link

    Would be a good choice for a new build if the user needs the latest tech, but I upgraded my 2500K to a 3770 for <$100 USD.

    I run an 850 for boot, a 950 for high-speed storage on an adapter (thought it was a good idea at the time, but it's not noticeable vs the 850) and an RX 480.

    I don't feel like I'm missing anything.
  • Barilla - Friday, February 3, 2017 - link

    "if we have GPUs at 250-300W, why not CPUs?"

    I'm very eager to read a full piece discussing this.
  • fanofanand - Sunday, February 5, 2017 - link

    Those CPUs exist but don't make sense for home usage. Have you noticed how hard it is to cool 150 watts? Imagine double that. There are some extremely high-powered server chips, but what would you do with 32 cores?
  • abufrejoval - Friday, February 3, 2017 - link

    I read the part wasn't going to be available until later, did a search to confirm, and found two offers: one slightly more expensive with "shipping date unknown", another slightly cheaper marked "ready to ship", so that's what I got mid-January, together with a Z170-based board offering DDR3 slots, because it was to replace an A10-7850K APU based system and I wanted to recycle 32GB of DDR3 RAM.

    Of course it wouldn't boot, because 3 out of 3 mainboards didn't have Kaby Lake support in the BIOS. Got myself a Skylake Pentium part to update the BIOS and returned it: an inexcusable hassle for me, for the dealer, and hopefully for the manufacturers, which had advertised "Kaby Lake" compatibility for months but shipped outdated BIOS versions.

    After that, this chip runs at 4.2 GHz out of the box and overclocks to 4.5 GHz without playing with voltage. Stays cool and draws modest wattage (never reaching 50 W according to the onboard sensors, which you can't really trust, I gather).

    Use case is a 24/7 home-lab server running quite a mix of physical and virtual workloads on Windows Server 2008 R2 and VMware Workstation, mostly idle but with some serious remote desktop power, Plex video transcoding oomph if required, and even a game now and then at 1080p.

    I want it to rev high on sprints, because I tend to be impatient, but there is a 12-core/24-thread Xeon E5 at 3 GHz and a 4-core/8-thread Xeon E3 at 4 GHz sitting next to it for when I need heavy lifting and torque: those beasts are suspended when not in use.

    Sure enough, it is noticeably snappier than the big Xeon 12-core on desktop things and still much quieter than the quad, while of course any synthetic multi-core benchmark or server load leaves this chip in the dust.

    I run it with an Nvidia GTX 1050 Ti, which ensures a seamless experience with the Windows 7-generation Server 2008 R2 and on all operating systems, including CentOS 7 (virtual or physical), which is starting to grey a little at the temples, yet adds close to zero power at idle.

    At 4.2 GHz the Intel i3-7350K, an HT dual-core, is about twice as fast as the A10-7850K integer quad at the same clock speed (it typically turbos to 4.2 GHz without any BIOS OC pressure) for all synthetic workloads I could throw at it, which I consider rather sad (I've been running AMD and Intel side by side for decades).

    I overclocked mine easily to 4.8 GHz and even to 5 GHz with about 1.4 V, leaving the uncore at 3.8 GHz. It was Prime95-stable, but my simple, slow and quiet Noctua NH-L9x65 couldn't keep temperatures at safe levels, so I stopped a little early and went back to an easy and cool 4.6 GHz at 1.24 V for "production".

    I'm most impressed running x265 video recodes on terabytes of video material at 800-1200 FPS on this i3-7350K/GTX 1050 Ti combo, which seems to leave both CPU and GPU oddly bored and able to run desktop and even gaming workloads in parallel with very little heat and noise.

    The Xeon monsters with their respective GTX 1070 and GTX 980 Ti GPUs would actually do that same job slower while burning more heat, and yet video transcoding has been such a big sales argument for the big Intel chips.

    Actually, Handbrake x265 software encodes struggle to reach double-digit FPS on 24 threads on the "big" machine: you simply can't beat ASIC power with general-purpose compute.
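
    For concreteness, a software x265 encode of that sort can be launched as in this minimal sketch (assumes HandBrakeCLI is installed and on the PATH; file names and the quality value are placeholders):

      # Sketch: a CPU-bound HEVC encode via HandBrakeCLI, the kind of job
      # described above. Paths and the RF quality value are placeholders.
      import subprocess

      subprocess.run([
          "HandBrakeCLI",
          "-i", "input.mkv",    # source file (placeholder)
          "-o", "output.mkv",   # destination (placeholder)
          "-e", "x265",         # software HEVC encoder, runs on the CPU cores
          "-q", "24",           # constant-quality RF value
      ], check=True)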

    I guess the Pentium HT variants are better value, but so is a 500cc scooter vs. a Turbo Hayabusa. And here the difference is less than a set of home-delivered pizzas for the family, while this chip will last me a couple of years and the pizza is gone in minutes.
  • Meteor2 - Sunday, February 5, 2017 - link

    Interesting that x265 doesn't scale well with cores. The developers claim to be experts in that area!
  • abufrejoval - Sunday, February 12, 2017 - link

    Sure, the Handbrake x265 code will scale with CPU cores, but the video processing unit (VPU) within the GTX 10-series provides several orders of magnitude better performance at much lower energy budgets. You'd probably need downright silly numbers of CPU cores (hundreds) with Handbrake to draw even in performance, and by then you'd be using several orders of magnitude more energy to get it done.
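
    As a concrete illustration of offloading to that VPU, a transcode can be handed to the fixed-function encoder; this is a minimal sketch assuming an ffmpeg build with NVENC support, with placeholder file names:

      # Sketch: let the GPU's fixed-function encoder (NVENC) do the HEVC
      # encode instead of the CPU cores. Requires an ffmpeg build with
      # NVENC enabled; file names are placeholders.
      import subprocess

      subprocess.run([
          "ffmpeg",
          "-i", "input_h264.mkv",   # H.264 source (placeholder)
          "-c:v", "hevc_nvenc",     # Pascal's video engine encodes HEVC
          "-c:a", "copy",           # pass the audio through untouched
          "output_hevc.mkv",
      ], check=True)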

    AFAIK the VPU is the same on all (consumer?) Pascal GPUs and not related to the GPU core count, so a 1080 or even a Titan X may not be any faster than a 1050.

    When I play around with benchmarks I typically have HWinfo running on a separate monitor, and it reports the utilization and power budget of all the distinct function blocks in today's CPUs and GPUs.
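
    A rough scripted equivalent of that kind of monitoring, as a sketch (assumes the NVIDIA driver's nvidia-smi tool is on the PATH; the encoder-utilization field depends on driver version):

      # Sketch: poll GPU utilization, video-encoder utilization and power
      # draw the way a monitoring tool does, via nvidia-smi's CSV query mode.
      import subprocess
      import time

      FIELDS = "utilization.gpu,utilization.encoder,power.draw"

      for _ in range(10):  # ten one-second samples
          out = subprocess.check_output(
              ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader"],
              text=True,
          )
          print(out.strip())
          time.sleep(1.0)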

    Not only does the GTX 1050 Ti on this system deliver 800-1200 FPS when transcoding 1080p material from x264 to x265, but it also leaves the CPU and GPU cores rather idle, so I actually felt it had relatively little impact on my ability to game or do production work while it was transcoding at this incredible speed.

    Intel CPUs at least since Sandy Bridge have also sported VPUs, and I have tried to use them similarly for MPEG-to-x264 transitions, but there, in my experience, compression factor, compression quality and speed have fallen short of Handbrake, so I didn't use them. AFAIK x265 encoding support is still missing on Kaby Lake.

    It just highlights the "identity crisis" of general-purpose compute, where even the beefiest CPUs suck at any specific job compared to a fully optimized hardware solution.

    Any specific compute problem shared by a sufficiently high number of users tends to be moved into hardware. That's how GPUs and DSPs came to be, and that's how VPUs are now making CPU- and GPU-based video transcoding obsolete via dedicated function blocks.

    And that explains why my smallest system really feels fastest with just 2 cores.

    The only type of workload where I can still see a significant benefit for the big Xeon cores is things like a full Linux kernel compile. But if the software ecosystem there weren't as bad as it is, incremental compiles would do the job, and any CPU since my first 1 MHz 8-bit Z80 has been able to compile faster than I was able to write code (especially with Turbo Pascal).
