I have a confession to make. For the past year I’ve been using a 27-inch iMac as my primary workstation. I always said that if I had a less mobile lifestyle, the iMac would probably be the machine I’d end up with (that was prior to the announcement of the new Mac Pro, of course). This past year has been my most insane in terms of travel, so it wasn’t a lack of mobility that kept me on the iMac but rather a desire to test Apple’s new Fusion Drive over the long haul.

It’s entirely possible to mask the overwhelmingly bad experience of a hard drive in a high performance machine by only sampling at the beginning of the journey. When the OS is a clean install, the drive is mostly empty and thus operating at its peak performance. Obviously Apple’s Fusion Drive is designed to mitigate the inevitable performance degradation, and my initial take on it after about a month of use was very good - but would it last?

I’m happy to report that it actually did. So today’s confession is really a two-parter: I’ve been using an iMac for the past year, and I’ve been using a hard drive as a part of my primary storage for the past year. Yeesh, I never thought I’d do either of those things.

Apple 2013 iMac
| Configuration | 21.5-inch iMac | 21.5-inch Upgraded iMac | 27-inch iMac | 27-inch Upgraded iMac |
|---|---|---|---|---|
| Display | 21.5-inch 1920 x 1080 | 21.5-inch 1920 x 1080 | 27-inch 2560 x 1440 | 27-inch 2560 x 1440 |
| CPU (Base/Turbo) | Intel Core i5-4570R (2.7GHz/3.2GHz) | Intel Core i5-4570S (2.9GHz/3.6GHz) | Intel Core i5-4570 (3.2GHz/3.6GHz) | Intel Core i5-4670 (3.4GHz/3.8GHz) |
| GPU | Intel Iris Pro 5200 | NVIDIA GeForce GT 750M (1GB GDDR5) | NVIDIA GeForce GT 755M (1GB GDDR5) | NVIDIA GeForce GTX 775M (2GB GDDR5) |
| RAM | 8GB DDR3-1600 | 8GB DDR3-1600 | 8GB DDR3-1600 | 8GB DDR3-1600 |
| Storage | 1TB 5400RPM | 1TB 5400RPM | 1TB 7200RPM | 1TB 7200RPM |
| WiFi | 802.11ac | 802.11ac | 802.11ac | 802.11ac |
| I/O | 4 x USB 3.0, 2 x Thunderbolt, 1 x GigE, SDXC reader, headphone jack (all models) | | | |
| Starting Price | $1299 | $1499 | $1799 | $1999 |

This year the iMacs get incrementally better. Displays and resolutions are the same, but silicon options are a bit quicker, 802.11ac is on deck and the SSDs all move to PCIe (including Fusion Drive). As tempted as I was to begin my first look at the 2013 iMac evaluating the impact of going to faster storage, it was the entry-level model that grabbed my attention first because of a little piece of silicon we’ve come to know as Crystalwell.

The CPU: Haswell with an Optional Crystalwell

The entry level 21.5-inch iMac is one of the most affordable options in Apple’s lineup. At $1299 Apple will typically sell you a dual-core notebook of some sort, but here you get no less than a quad-core, 65W Core i5-4570R. That’s four cores running at 2.7GHz and capable of hitting up to 3.2GHz. In practice I pretty much always saw the cores running at 3.0GHz regardless of workload. I’d see some excursions up to 3.1GHz, but for the most part you’re effectively buying a 3GHz Haswell system.

The R at the end of the SKU denotes something very special. Not only do you get Intel’s fastest GPU configuration (40 EUs running at up to 1.15GHz), but you also get 128MB of on-package eDRAM. The combination of the two gives you a new brand: Intel’s Iris Pro 5200.

The Iris Pro 5200 is a GPU configuration option I expect to see on the 15-inch MacBook Pro with Retina Display, and its presence on the iMac tells us how it’ll be done. In last year’s iMacs, Apple picked from a selection of NVIDIA discrete GPUs. This year, the entry level 21.5-inch model gets Iris Pro 5200 while the rest feature updated NVIDIA Kepler discrete GPUs. It’s the same bifurcation that I expect to find on the 15-inch MacBook Pro with Retina Display. As we found in our preview of Intel’s Iris Pro 5200, in its fastest implementation the GPU isn’t enough to outperform NVIDIA’s GeForce GT 650M (found in the 2012 15-inch rMBP). Apple’s engineers aren’t particularly fond of performance regressions, so the NVIDIA GPUs stick around for those who need them, and for the first time we get a truly decent integrated option from Intel.

Most PC OEMs appear to have gone the opposite route - choosing NVIDIA’s low-end discrete graphics over Intel’s Iris Pro. The two end up being fairly similar in cost (with Intel getting the slight edge it seems). With NVIDIA you can get better performance, while Intel should deliver somewhat lower power consumption and an obvious reduction in board area. I suspect Iris Pro probably came in a bit slower than even Apple expected, but given that Apple asked Intel to build the thing it probably felt a bit compelled to use it somewhere. Plus there’s the whole believing in the strategy aspect of all of this. If Apple could shift most of its designs exclusively to processor graphics, it would actually be able to realize board and power savings which would have an impact on industrial design. We’re just not there yet. Whether we ever get there depends on just how aggressive Intel is on the graphics front.

I already went through what the 128MB of eDRAM (codename Crystalwell) gets you, but in short that massive on-package memory acts as an L4 cache for both the CPU and GPU. You get 50GB/s of bandwidth in each direction (read and write), and an access latency that falls somewhere between the L3 cache and main memory.
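
If you want to see where that extra level sits, a dependent-load (pointer-chase) test makes the hierarchy visible. The sketch below is my own illustration, not the methodology behind any numbers in this review: it links cache lines into a random cycle so every load depends on the previous one, then sweeps the working set from inside the 4MB L3, through the 128MB eDRAM, and out into DRAM. Buffer sizes, iteration counts, the file name, and the reliance on clock_gettime (Linux or macOS 10.12+) are all my own assumptions.

```c
/* chase.c - minimal, hedged pointer-chase latency sketch (illustrative only).
 * One pointer slot per 64-byte cache line, slots linked into a single random
 * cycle so the prefetcher can't guess the next address. As the working set
 * grows past the 4MB L3 and then past the 128MB eDRAM, ns/load should step
 * up in plateaus. Assumes clock_gettime(CLOCK_MONOTONIC) is available. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define LINE 64  /* bytes between pointer slots, one slot per cache line */

static double ns_per_load(size_t bytes, size_t loads)
{
    size_t n = bytes / LINE;
    char *buf = malloc(n * LINE);
    size_t *order = malloc(n * sizeof(size_t));
    if (!buf || !order) { free(buf); free(order); return -1.0; }

    /* Fisher-Yates shuffle of line indices, then link them into one cycle. */
    for (size_t i = 0; i < n; i++) order[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    for (size_t i = 0; i < n; i++)
        *(void **)(buf + order[i] * LINE) = buf + order[(i + 1) % n] * LINE;

    /* Chase the chain: each load's address depends on the previous load. */
    void *p = *(void **)(buf + order[0] * LINE);
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < loads; i++)
        p = *(void **)p;
    clock_gettime(CLOCK_MONOTONIC, &t1);

    if (p == buf) putchar(' ');  /* keep the chase loop from being optimized away */

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    free(order);
    free(buf);
    return ns / (double)loads;
}

int main(void)
{
    /* Sweep from well inside the L3 to well past the 128MB eDRAM. */
    for (size_t mb = 2; mb <= 512; mb *= 2)
        printf("%4zu MB working set: %6.1f ns/load\n",
               mb, ns_per_load(mb << 20, 20u * 1000 * 1000));
    return 0;
}
```

Compiled with something like cc -O2 -o chase chase.c, the ns/load figure should climb in steps: one plateau while the working set fits in the 4MB L3, a second once it spills into the eDRAM, and DRAM-class latency beyond roughly 128MB. On a part without Crystalwell, the middle plateau simply isn't there.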

OS X doesn’t seem to acknowledge Crystalwell’s presence, but it’s definitely there and operational (you can tell by looking at the GPU performance results). Some very specific workloads can benefit handsomely from the large eDRAM. I suspect key parts of the OS end up cached on-package at boot as well, something that’ll have big implications for power usage in mobile. Unfortunately my review sample came with a hard drive, which hampered the user experience, and these new iMacs aren’t super easy to break into (not to mention that Apple frowns upon that sort of behavior with its review samples). OS X continues to do a good job of keeping things cached in memory, and the iMac’s 8GB default configuration helps there tremendously. Whenever I was working with data and apps already in memory, the system felt quite snappy. I’ll get to the benchmarks in a moment.

The non-gaming experience with Iris Pro under OS X seemed fine. I noticed a graphical glitch under Safari in 10.8.5 (tearing while scrolling down a long list of iCloud tabs), but otherwise everything looked good.

iMac (Late 2013) CPU Options
| | 21.5-inch Base | 21.5-inch Upgraded | 21.5-inch Optional | 27-inch Base | 27-inch Upgraded | 27-inch Optional |
|---|---|---|---|---|---|---|
| Intel CPU | i5-4570R | i5-4570S | i7-4770S | i5-4570 | i5-4670 | i7-4771 |
| Cores / Threads | 4 / 4 | 4 / 4 | 4 / 8 | 4 / 4 | 4 / 4 | 4 / 8 |
| Base Clock | 2.7GHz | 2.9GHz | 3.1GHz | 3.2GHz | 3.4GHz | 3.5GHz |
| Max Turbo | 3.2GHz | 3.6GHz | 3.9GHz | 3.6GHz | 3.8GHz | 3.9GHz |
| L3 Cache | 4MB | 6MB | 8MB | 6MB | 6MB | 8MB |
| TDP | 65W | 65W | 65W | 84W | 84W | 84W |
| VT-x / VT-d | Y / Y | Y / Y | Y / Y | Y / Y | Y / Y | Y / Y |
| TSX-NI | N | Y | Y | Y | Y | Y |

In typical Intel fashion, you get nothing for free. The 128MB of eDRAM comes at the expense of a smaller L3 cache, in this case 4MB shared by all four cores (and the GPU). That works out to just 1MB of L3 per core, the least of any modern Intel Core series processor; the other CPUs in this lineup get 1.5MB to 2MB per core. Note that this tradeoff also exists on the higher end Core i7 R-series SKU, but 6MB of L3 is somehow less bothersome than 4MB. The 128MB eDRAM likely more than makes up for the reduction, and I do wonder if this isn’t a sign of things to come from Intel. A shift towards smaller, even lower latency L3 caches might make sense if you’ve got a massive eDRAM array backing it all up.

Comments

  • g1011999 - Monday, October 7, 2013 - link

    Finally. I've been checking AnandTech several times recently for the Iris Pro based 21" iMac review.
  • malcolmcraft - Thursday, October 9, 2014 - link

    It's nice, I agree. But for a full-size work station I'd not recommend Mac. /Malcolm from http://www.consumertop.com/best-desktop-guide/
  • Shivansps - Monday, October 7, 2013 - link

    I suspect the big loss in performance at high detail settings compared to the 750M is more about the L4 eDRAM running short than a driver issue. As for AA, Intel has never had good performance with filters; do they even support hardware 2x AA yet?
  • tipoo - Monday, October 7, 2013 - link

    Yeah, doesn't AA hammer bandwidth? The eDRAM helps performance, but it's still quite low compared to what the other cards are paired with, even in best case scenarios.
  • IntelUser2000 - Tuesday, November 12, 2013 - link

    I don't think it's just that. Compared to the competition like Trinity's iGPU and the GT 650M, the texture fill rate is rather low. That impacts performance not only in texture-bound scenarios with settings cranked up, but with anti-aliasing as well. The fill rate of the top of the line Iris Pro 5200 is about equal to Trinity's, while the version in the iMac would fall short. The GT 650M is 40% better than the top of the line Iris Pro and over 55% better than the iMac version.

    There's also something to be desired about Intel's AA implementation. Hopefully Broadwell improves on this.
  • IanCutress - Monday, October 7, 2013 - link

    Interestingly we see Crystalwell not have any effect on CPU benchmarks, although we can probe latency as seen before.
  • willis936 - Monday, October 7, 2013 - link

    This seems counter intuitive. It's acting as a CPU+GPU shared cache correct? Intel architectures are relatively cache bandwidth starved and you'd think that 128MB of L4 would help keep the lower levels filled.
  • Flunk - Monday, October 7, 2013 - link

    Perhaps it means that the assumption that Intel architectures are relatively cache bandwidth starved is faulty.
  • name99 - Monday, October 7, 2013 - link

    Or that the working set of most benchmarks (if not most apps) is captured with a 4 or 6MB cache?
    Caching's basically irrelevant for data that is streamed through.
  • tipoo - Thursday, October 10, 2013 - link

    The L4 is pretty low bandwidth for a cache though.
