Putting Mac Pro Performance in Perspective: Professional Apps

With the legacy Mac Pro comparison out of the way, I wanted to put the Mac Pro's performance in perspective relative to other high-end, modern Macs. I grabbed the 2012 15-inch rMBP, the entry-level and upgraded 2013 15-inch rMBPs, as well as the new 27-inch Haswell iMac for comparison. I also included my 2009 8-core Mac Pro. I received a reader request to try running the Cycles benchmark in Blender (an open source 3D renderer), so I'll start with that:

Blender 2.69 Cycles Benchmark

Under OS X, this is still a CPU test - the GPUs remain idle and out of the picture here. If you're wondering why the 27-inch iMac doesn't do so hot here, it's because I'm testing the 4-core/4-thread Core i5 version, while the rMBPs all have Hyper-Threading enabled and are thus 4-core/8-thread configs. The extra logical core per physical core definitely increases utilization/efficiency in this well-threaded test.

Here we have an example where Haswell's IPC improvements don't do a whole lot, which is music to the ears of those considering buying a Mac Pro now vs. waiting for Haswell-EP to arrive in about a year. There's a clear benefit to opting for even the compact new Mac Pro over any of Apple's other Macs. The improvement in performance, particularly with the 12-core configuration, is astounding: the Mac Pro completes the rendering work in less than half the time of the rMBP.
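
If you want to run a similar CPU-bound Cycles test on your own machine, you don't need anything fancy. A minimal sketch along these lines (the Blender path and .blend scene here are placeholders for whatever you have installed) renders a single frame from the command line and reports wall-clock time:

    import subprocess
    import time

    # Both of these are assumptions - point them at your own Blender binary
    # (on OS X it lives inside Blender.app) and any Cycles scene you like.
    BLENDER = "/Applications/Blender.app/Contents/MacOS/blender"
    SCENE = "cycles_benchmark.blend"

    start = time.time()
    # -b renders in the background (no UI), -f 1 renders frame 1 and exits
    subprocess.check_call([BLENDER, "-b", SCENE, "-f", "1"])
    print("Cycles render took {:.1f} seconds".format(time.time() - start))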

For my next two tests I'm going to be looking at Final Cut Pro 10.1 performance. Given how much Apple is focused on 4K video editing as a usage model for the new Mac Pro, I went out and filmed a bunch of samples in 4K and created a couple of benchmarks to stress the new Mac Pro. As I mentioned earlier, most effects rendering in FCP 10.1 seems to take place on the GPU(s), while everything else seems more CPU bound. For my first test I wanted a project that was lighter on the GPU, so I used a minimal number of effects and no transitions. While the GPU still plays a role (as you'll see below), this should be a good CPU test. I confirmed that portions of the test had no problems keeping all 12 cores/24 hardware threads busy, while others stayed in the 5 - 8 core range.

You'll notice that I have two configurations of the early 2009 Mac Pro listed. One features the machine's stock NVIDIA GeForce GT 120, while the other has been upgraded to Sapphire's Radeon HD 7950 Mac Edition. The only other point I should make is that the 2.0GHz 2013 15-inch rMBP configuration has no discrete GPU; it relies solely on Intel's Iris Pro graphics. The 2.3GHz model does feature an NVIDIA GeForce GT 750M GPU.

Final Cut Pro 10.1 - 4K Benchmark, CPU Workload

The first thing I noticed while running this test is how much the workload can impact CPU core utilization. Even though I was dealing with a substantial 4K project, only portions of it could spawn enough work to keep all 12 cores/24 threads busy. I suspect that for video work the optimal cost/performance combination may be the 8-core/3.0GHz part. That being said, it is nice to have a handful of cores free to keep the system responsive even while rendering a big video project.
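
For what it's worth, the core utilization observations above came from simply watching per-core load while the render ran. If you want to log the same thing on your own project, a rough sketch using the third-party psutil module (an assumption on my part; any per-core CPU monitor will do) looks like this:

    import psutil  # third-party: pip install psutil

    # Sample per-core utilization once a second while a render is running
    # and count how many logical cores are doing meaningful work.
    busy_counts = []
    for _ in range(60):  # sample for a minute; extend to cover your render
        per_core = psutil.cpu_percent(interval=1.0, percpu=True)
        busy = sum(1 for load in per_core if load > 50)
        busy_counts.append(busy)
        print("{} of {} logical cores over 50% busy".format(busy, len(per_core)))

    print("Average busy cores: {:.1f}".format(sum(busy_counts) / float(len(busy_counts))))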

I made a point to talk about the GPU configurations before presenting the chart for a good reason: Final Cut Pro 10.1 appears to be incredibly dependent on GPU performance, especially when there's any sort of effects rendering going on. Note that simply moving to a lower-clocked Haswell and ditching the discrete GPU causes the 15-inch rMBP to take more than 70% longer to complete this benchmark. I'm not sure how much of this has to do with Intel's graphics drivers simply not being optimized for FCP's OpenCL workload, but if you're planning on doing any real work in the latest Final Cut Pro you're going to want a discrete GPU.
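
If you're not sure which GPU the OpenCL runtime is actually exposing on your system, a quick sanity check is to enumerate the devices. The sketch below uses the third-party pyopencl module (again, just one way to do it); the compute unit and clock numbers it prints give you at least a rough idea of the hardware OpenCL sees:

    import pyopencl as cl  # third-party: pip install pyopencl

    # List every OpenCL platform/device pair the system exposes, along with
    # a rough indicator of GPU horsepower (compute units and max clock).
    for platform in cl.get_platforms():
        for device in platform.get_devices():
            print("{} / {}: {} compute units @ {} MHz".format(
                platform.name, device.name,
                device.max_compute_units, device.max_clock_frequency))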

The new Mac Pro completed my first FCP 10.1 render test in around half the time of the older 2009 8-core Mac Pro with the upgraded GPU. Without the upgraded GPU, despite this being a very CPU-bound test, the older Mac Pro is absolutely destroyed by the new config. The new Mac Pro manages to complete my 4K test in less than 1/6 of the time of the 2009 model without any upgrades.

The next FCP 10.1 benchmark adds in a significant number of effects and transitions to drive up GPU usage. I'm presenting two charts, one without the Iris Pro rMBP and GT 120 Mac Pro and one with them included:

Final Cut Pro 10.1 - 4K Benchmark, CPU+GPU Workload (without the Iris Pro rMBP and GT 120 Mac Pro)

Final Cut Pro 10.1 - 4K Benchmark, CPU+GPU Workload (with the Iris Pro rMBP and GT 120 Mac Pro included)

The difference in performance between Intel's Iris Pro graphics and NVIDIA's GeForce GT 750M is staggering. The Iris Pro rMBP15 configuration takes nearly an hour to complete my test, while the dGPU configuration does it in a little over 21 minutes. Here the 27-inch iMac's beefy GPU helps make it faster than the rMBP notebooks. The new Mac Pro pulls ahead of the upgraded 2009 model, though not by as much as I would've expected; it seems the second GPU isn't being used as much as it could be. Once again, a standard 2009 model wouldn't fare nearly as well here. Even with a Radeon HD 4870 I bet we'd be seeing significantly lower performance.

The default GT 120 GPU gives us some indication of what a slower GPU does to FCP performance here. What took the new Mac Pro with its dual FirePro D700s under 15 minutes to do took an hour and 45 minutes on the 2009 model with its entry-level GPU. The same system with a Sapphire Radeon HD 7950 dropped its render time to 18 minutes.

It is really surprising just how big an impact GPU performance can have on Final Cut Pro 10.1. It makes total sense that Apple went with dual GPUs top to bottom in the new Mac Pro. It seems the latest version of Final Cut Pro was designed with the new Mac Pro in mind, which is unfortunate for anyone who was hoping to get by with an older Mac Pro and a far less capable GPU. It's very clear to me that the Mac Pro is really designed to be an upgrade on all fronts (CPU, GPU, SSD and external IO). The question is how many of those parts existing Mac Pro users have already upgraded on their own. The answer to that will ultimately determine how big of a step forward the new Mac Pro really is.

Comments

  • uhuznaa - Wednesday, January 1, 2014 - link

    For whatever it's worth: I'm supporting a video pro and what I can see in that crowd is that NOBODY cares for internal storage. Really. Internal storage is used for the software and of course the OS and scratch files and nothing else. They all use piles of external drives which are much closer to actual "media" you can carry around and work with in projects with others and archive.

    In fact I tried for a while to convince him of the advantages of big internal HDDs and he wouldn't have any of it. He found the flood of cheap USB drives you can even pick up at a gas station in the middle of the night the best thing to happen, and USB3 a gift from heaven. They're all wired this way. Compact external disks that you can slap paper labels on with the name of the project and the version of that particular edit, and that you can carry around, are the best thing since sliced bread for them. And after a short while I had to agree that they're perfectly right about that for what they do.

    Apple is doing this quite right. Lots of bays are good for servers, but this is not a server. It's a workstation and work here means mostly work with lots of data that wants to be kept in nice little packages you can plug in and safely out and take with you or archive in well-labeled shelves somewhere until you find a use for it later on.

    (And on a mostly unrelated note: Premiere Pro may be the "industry standard" but god does this piece of software suck gas giants through nanotubes. It's a nightmarish UI thinly covering a bunch of code held together by chewing gum and duct tape. Apple may have the chance of a snowflake in hell against that with FCP but they absolutely deserve kudos for trying. I don't know if I love Final Cut, but I know I totally hate Premiere.)
  • lwatcdr - Wednesday, January 1, 2014 - link

    "My one hope is that Apple won’t treat the new Mac Pro the same way it did its predecessor. The previous family of systems was updated on a very irregular (for Apple) cadence. "

    This is the real problem. Haswell-EP will ship this year and it uses a new socket. The proprietary GPU physical interface means those will probably not get updated quickly, and they will be expensive. Today the Pro is a very good system but next year it will be falling behind.
  • boli - Wednesday, January 1, 2014 - link

    Hi Anand, cheers for the enjoyable and informative review.

    Regarding your HiDPI issue, I'm wondering if this might be an MST issue? Did you try in SST mode too?

    Just wondering because I was able to add 1920x1080 HiDPI to my 2560x1440 display no problem, by adding a 3840x2160 custom resolution in SwitchResX, which automatically added 1920x1080 HiDPI to the available resolutions (in SwitchResX).
  • mauler1973 - Wednesday, January 1, 2014 - link

    Great review! Now I am wondering if I can replicate this kind of performance in a hackintosh.
  • Technology Never Sleeps - Wednesday, January 1, 2014 - link

    Good article, but I would suggest that your editor or proofreader review your article before it's posted. So many grammatical errors take away from the professional nature of the article and website.
  • Barklikeadog - Wednesday, January 1, 2014 - link

    Once again, a standard 2009 model wouldn't fair nearly as well here. Even with a Radeon HD 4870 I bet we'd be seeing significantly lower performance.

    Great review Anand, but I think you meant fare in that sentence.
  • name99 - Wednesday, January 1, 2014 - link

    " Instead what you see at the core level is a handful of conservatively selected improvements. Intel requires that any new microarchitectural feature introduced has to increase performance by 2% for every 1% increase in power consumption."

    What you say is true, but not the whole story. It implies that these sorts of small improvements are the only possibility for the future and that's not quite correct.
    In particular branch prediction has become good enough that radically different architectures (like CFP --- Continuous Flow Processing --- become possible). The standard current OoO architecture used by everyone (including IBM for both POWER and z, and the ARM world) grew from a model based on no speculation to some, but imperfect, speculation. So what it does is collect speculated results (via the ROB and RAT) and dribble those out in small doses as it becomes clear that the speculation was valid. This model never goes drastically off the rails, but is very much limited in how many OoO instructions it can process, both at the complete end (size of the ROB, now approaching 200 fused µ-instructions in Haswell) and at the scheduler end (trying to find instructions that can be processed because their inputs are valid, now approaching I think about 60 instructions in Haswell).
    These figures give us a system that can handle most latencies (FP instructions, divisions, reasonably long chains of dependent instructions, L1 latency, L2 latency, maybe even on a good day L3 latency) but NOT memory latency.

    And so we have reached a point where the primary thing slowing us down is data memory latency. This has been a problem for 20+ years, but now it's really the only problem. If you use best-of-class engineering for your other bits, really the only thing that slows you down is waiting on (data) memory. (Even waiting on instructions should not ever be a problem. It probably still is, but work done in 2012 showed that the main reason instruction prefetching failed was that the prefetcher was polluted by mispredicted branches and interrupts. It's fairly easy to filter both of these once you appreciate the issue, at which point your I-prefetcher is basically about 99.5% accurate across a wide variety of code. This seems like such an obvious and easy win that I expect it to move into all the main CPUs within 5 yrs or so.)

    OK, so waiting on memory is a problem. How do we fix it?
    The most conservative answer (i.e. the one requiring the fewest major changes) is data prefetchers, and we've had these growing in sophistication over time. They can now detect array accesses with strides across multiple cache lines, including backwards strides, and we have many (at least 16 on Intel) running at the same time. Each year they become smarter about starting earlier, ending earlier, and not polluting the cache with unneeded data. But they only speed up regular array accesses.
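
    (If it helps to see the stride idea in code, here is a toy software model of the basic mechanism: remember the last address and stride per load PC, and once the stride repeats, predict the next address. Real hardware is of course nothing this simple.)

        # Toy stride prefetcher: per load PC, track the last address and stride;
        # if the same non-zero stride repeats, prefetch the next address.
        class StridePrefetcher(object):
            def __init__(self):
                self.table = {}  # pc -> (last_addr, last_stride, confidence)

            def access(self, pc, addr):
                last_addr, last_stride, conf = self.table.get(pc, (addr, 0, 0))
                stride = addr - last_addr
                conf = conf + 1 if stride == last_stride and stride != 0 else 0
                self.table[pc] = (addr, stride, conf)
                return addr + stride if conf >= 1 else None  # prefetch target, or nothing

        pf = StridePrefetcher()
        for addr in (0x100, 0x140, 0x180, 0x1c0):  # 64-byte strided loads from one PC
            target = pf.access(0x400, addr)
            print(hex(addr), "prefetch:", hex(target) if target else "none")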

    Next we have a variety of experimental prefetchers that look for correlations in the OFFSETs of memory accesses; the idea being that you have things like structs or B-tree nodes that are scattered all over memory (linked by linked lists or trees or god knows what), but there is a common pattern of access once you know the base address of the struct. Some of these seem to work OK, with realistic area and power requirements. If a vendor wanted to continue down the conservative path, this is where they would go.

    Next we have a different idea, runahead execution. Here the idea is that when the “real” execution hits a miss to main memory, we switch to a new execution mode where no results will be stored permanently (in memory or in registers); we just run ahead in a kind of fake world, ignoring instructions that depend on the load that has missed. The idea is that, during this period we’ll trigger new loads to main memory (and I-cache misses). When the original miss to memory returns its result, we flush everything and restart at the original load, but now, hopefully, the runahead code started some useful memory accesses so that data is available to us earlier.
    There are many ways to slice this. You can implement it fairly easily using SMT infrastructure if you don’t have a second thread running on the core. You can do crazy things that try to actually preserve some of the results you generate during the runahead phase. Doing this naively you burn a lot of power, but there are some fairly trivial things you can do to substantially reduce the power.
    In the academic world, the claim is that for a Nehalem type of CPU this gives you about a 20% boost at the cost of about 5% increased power.
    In the real world it was implemented (but in a lousy, cheap-ass fashion) on the POWER6, where it was underwhelming (it gave you maybe a 2% boost over the existing prefetchers); but their implementation sucked because it only ran 64 instructions during the runahead periods. The simulations show that you generate about one useful miss to main memory per 300 instructions executed, so maybe two or three during a 400 to 500 cycle load miss to main memory, but 64 is just too short.
    It was also supposed to be implemented in Sun's Rock processor, which was cancelled when Oracle bought Sun. Rock tried to be way more ambitious in its version of this scheme AND suffered from a crazy instruction fetch system that had a single fetch unit trying to feed eight threads via round robin (so each thread gets new instructions every eight cycles).
    Both these failures don’t, I think, tell us if this would work well if implemented on, say, an ARM core rather than adding SMT.
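
    (Again purely to illustrate the concept, here is a toy model of a runahead window: while a load miss is outstanding you keep walking the instruction stream, poison anything that depends on the missed value, and collect the addresses of the independent loads you pass so they can be prefetched. Everything is then thrown away when the real miss returns.)

        # Toy runahead model: instructions are (op, dest, source) tuples; a load's
        # "source" is its address. The load at index 0 has missed in cache.
        program = [
            ("load", "r1", 0x1000),  # the miss we're waiting on
            ("add",  "r2", "r1"),    # depends on the miss: poisoned, skipped
            ("load", "r3", 0x2000),  # independent: runahead discovers this address
            ("add",  "r4", "r3"),
            ("load", "r5", 0x3000),  # independent: also discovered
        ]

        def runahead(program, miss_index):
            poisoned = {program[miss_index][1]}  # register written by the missed load
            prefetches = []
            for op, dest, source in program[miss_index + 1:]:
                if op == "load":
                    prefetches.append(source)    # address literal in this toy
                elif source in poisoned:
                    poisoned.add(dest)           # result depends on the miss
            return prefetches

        print("prefetch while the miss is outstanding:",
              [hex(a) for a in runahead(program, 0)])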

    Which gets us to SMT. Seems like a good idea, but in practice it's been very disappointing, apparently because now you have multiple threads fighting over the same cache. Intel, after trying really hard, can't get it to give more than about a 25% boost. IBM added 4 SMT threads to POWER7, but while they put a brave face on it, the best the 4 threads give you is about 2x single-threaded performance. Which, hey, is better than 1x single-threaded performance, but it's not much better than what they get from their 2-threaded performance (which can do a lot better than Intel given truly massive L3 caches to share between threads).

    But everything so far is just add-ons. CFP looks at the problem completely differently.
    The problem we have is that the ROB is small, so on a load miss it soon fills up completely. You’d want the ROB to be about 2000 entries in size and that’s completely impractical. So why do we need the ROB? To ensure that we write out updated state properly (in small dribs and drabs every cycle) as we learn that our branch prediction was successful.
    But branch prediction these days is crazy accurate, so how about a different idea. Rather than updating successful state in small increments every cycle, we take a large-scale checkpoint every so often, generally just before a branch that's difficult to predict. In between these difficult branches, we run out of order with no concern for how we write back state, and on the rare occasions that we do screw up, we just roll back to the checkpoint. In between difficult branches, we just run on ahead even across misses to memory, kinda like runahead execution, but now really doing the work, and just skipping over instructions that depend on the load, which will get their chance to run (eventually) when the load completes.
    Of course it’s not quite that simple. We need to have a plan for being able to unwind stores. We need a plan for precise interrupts (most obviously for VM). But the basic idea is we trade today’s horrible complexity (ROB and scheduler window) for a new ball of horrible complexity that is not any simpler BUT which handles the biggest current problem, that the system grinds to a halt at misses to memory, far better than the current scheme.

    The problem, of course, is that this is a hell of a risk. It’s not just the sort of minor modification to your existing core where you know the worst that can go wrong; this is a leap into the wild blue yonder on the assumption that your simulations are accurate and that you haven’t forgotten some show-stopping issue.
    I can’t see Intel or IBM being the first to try this. It’s the sort of thing that Apple MIGHT be ambitious enough to try right now, in their current state of so much money and not having been burned by a similar project earlier in their history. What I’d like to see is a university (like a Berkeley/Stanford collaboration) try to implement it and see what the real world issues are. If they can get it to work, I don’t think there’s a realistic chance of a new SPARC or MIPS coming out of it, but they will generate a lot of valuable patents, and their students who worked on the project will be snapped up pretty eagerly by Intel et al.
  • stingerman - Wednesday, January 1, 2014 - link

    I think Intel has another two years left on the Mac. Apple will start phasing it out on the MacBook Air, Mac Mini and iMac, then the MacBook rPros and finally the Mac Pro. Discrete x86 architecture is dead-ending. Apple is going to move their Macs to SoCs that they design. They will contain most of the necessary components and significantly reduce the costs of the desktops and notebooks. The Mac Pro will get it last, giving time for the Pro Apps to be ported to Apple's new mobile and desktop 64-bit processors.
  • tahoey - Wednesday, January 1, 2014 - link

    Remarkable work as always. Thank you.
  • DukeN - Thursday, January 2, 2014 - link

    Biased much, Anand?

    Here's the Lenovo S30 I bought a couple of weeks back, and no, it wasn't $4000+ like you seem to suggest.

    http://www.cdw.com/shop/products/Lenovo-ThinkStati...

    You picked probably the most overpriced SKU in the bunch just so you can prop up the ripoff that is your typical Apple product.

    Shame.
