The Mac Pro Review (Late 2013)

by Anand Lal Shimpi on 12/31/2013 3:18 PM EST

249 Comments


  • sully213 - Tuesday, December 31, 2013 - link

    Hi TWiT Live viewers! Happy New Year's! Reply
  • Bone Doc - Wednesday, January 01, 2014 - link

    Finally, the authoritative review is here. Happy New Year Anand. Excellent work as always. Reply
  • kasthuri - Tuesday, December 31, 2013 - link

    Great review! Hoping the new Haswell MBP is next? Happy New Year to all! Reply
  • SignalPST - Tuesday, December 31, 2013 - link

    yea, when's the Haswell Macbook Pro review coming out? Reply
  • japtor - Tuesday, December 31, 2013 - link

    Little correction for the opening, the MacBook name didn't exist until 2006 I think, they were iBooks and PowerBooks back in 2004. Reply
  • Ryan Smith - Tuesday, December 31, 2013 - link

    Whoops. Good catch. Thanks. Reply
  • madwolfa - Tuesday, December 31, 2013 - link

    Happy New Year! Reply
  • mwildtech - Tuesday, December 31, 2013 - link

    Tahiti's roasting on an open fire... Whew!! Reply
  • mwildtech - Tuesday, December 31, 2013 - link

    To be fair, this was running FurMark, which is not a realistic load on the GPUs. I would be interested in seeing the CPU and GPU temps while gaming in something like BF4. Any way you guys could test it? Great review as always! Reply
  • wildpalms - Friday, January 03, 2014 - link

    Gaming is not possible on the new Mac Pro, at least not with any suitable level of performance. The GPUs are workstation class....and will crunch through rendering and other video-type operations. Gaming will be lousy on these GPUs, as these are NOT the typical gaming GPUs you may be used to. Reply
  • Haravikk - Monday, January 13, 2014 - link

    That's not completely fair; the D700s are what, 7970 (R9 280?) equivalents, and they will work with CrossFireX under Windows, so they should run pretty well. Granted, you're absolutely right that they're not gaming GPUs, so you shouldn't expect them to beat a decent gaming rig, but they'll do in a pinch. Besides, mwildtech was asking what kind of temperature the Mac Pro would reach while running games, not whether it'll be any good at doing so. Reply
  • eutectic - Tuesday, December 31, 2013 - link

    Can I volunteer a Lightroom license for testing? I think export is much, much better threaded in v5; it'd be nice to see that benchmarked. Reply
  • knweiss - Thursday, January 02, 2014 - link

    +1 Reply
  • piroroadkill - Tuesday, December 31, 2013 - link

    463W at the wall with a 450W DC power supply...

    Throttling to 2GHz, almost boiling GPU temps. Yeah, I think this machine could have done with being a bit larger to extend the mass of that heatsink, and include a PSU that won't be pushed to an unhealthy percentage of its maximum all the time.
    Reply
  • mwildtech - Tuesday, December 31, 2013 - link

    To be fair, that was with FurMark and Prime95 running at the same time - not a realistic load. Tahitis running FurMark in a desktop in CFX can see similar temps with an AMD reference model. Also, 463W at the wall at 85% efficiency is only about 393W being used by the workstation, which seems within safe limits. Reply
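An editorial aside on the arithmetic above: the 85% efficiency figure is the commenter's assumption, not a published spec for the Mac Pro's power supply, but the calculation itself is easy to check.

```python
# Sketch of the PSU arithmetic in the comment above. The 85% efficiency
# figure is the commenter's assumption, not a measured spec.
def dc_load(wall_watts: float, efficiency: float) -> float:
    """DC power delivered to components for a given wall draw."""
    return wall_watts * efficiency

load = dc_load(463, 0.85)
print(f"{load:.1f} W")  # roughly 394 W of DC load, inside the 450 W rating
```

The margin is thinner than most desktop builds would aim for, but it does sit under the supply's 450 W rating.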
  • akdj - Wednesday, January 01, 2014 - link

    This is what you gleaned from such an insightful review....of a revolutionized desktop computer from Apple? Seriously? He was running a GPU and CPU 'poison' in order to find the ceiling. NOTHING in his real-world testing, including editing, rendering and transcoding 4K video, increased core temps dangerously, nor did it spin the fans up audibly (a quiet room is typically 40-45dB). Wow. Amazing comprehension. Guess it fits with your 'name' Reply
  • Morawka - Wednesday, January 01, 2014 - link

    The very next paragraph says he reached the same power use and the same thermal throttling using a "normal 4K workload"; he just didn't go back and correct his first paragraph about it not being able to hit that ceiling on normal workloads. Reply
  • damianrobertjones - Thursday, January 02, 2014 - link

    "of a revolutionized desktop "

    Oh please stop. It's just a desktop with x or y and nothing amazingly special.
    Reply
  • akdj - Friday, January 03, 2014 - link

    "It's just a desktop with x or y and nothing amazingly special."....lol. Are you 16? I'm 43....and THIS is a revolution in desktop technology, power, size, speed, aesthetics, storage, expandability and power efficiency. 'X' and 'Y' are pretty F'ing 'significant' IMHO. Oh....yeah....it's Rev A. A baby. As a user of ridiculously large boxes, servers and heavy monitors over the years----to call it anything BUT revolutionary is ignorant. Revolutionary doesn't always have to equal success immediately---but with the decline in desktop sales...but still the 'need' to have desktop power, it's pretty cool someone thought outside of the 'box' Reply
  • Morawka - Wednesday, January 01, 2014 - link

    Hell, I don't think the cooler surface area is a problem; I think the black chassis, black cooler, and black PCB are all what's causing the high temps. They should have left the thermal core pure copper and not used any anodizing. Black keeps heat in! Reply
  • name99 - Wednesday, January 01, 2014 - link

    I guess you're unfamiliar with the concept of black body radiation... Reply
  • jasonelmore - Thursday, January 02, 2014 - link

    I am, but reading the wiki on "black body radiation" I fail to see how it applies to this Mac Pro. Reply
  • wallysb01 - Friday, January 03, 2014 - link

    Things colored black radiate (and absorb) heat faster than other colors. Black body radiation has nothing to do with things actually being black. Reply
  • Ppietra - Friday, January 03, 2014 - link

    Black body radiation refers to the kind of radiation that a body emits due to its temperature.
    Most thermal radiation (at these kinds of temperatures) is infrared, so it doesn't matter what the visible color of the objects surrounding the "hot" object is.
    But even if the "hot" body emitted significant visible light, the black color of the surrounding objects would actually help absorb that energy, which would then be dissipated as infrared radiation or by heat transfer to the air or other objects.
    Reply
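Ppietra's point that thermal radiation at these temperatures is infrared can be checked with Wien's displacement law; a minimal sketch, where the ~90 °C surface temperature is an illustrative assumption rather than a figure from the review:

```python
# Wien's displacement law: peak emission wavelength of a black body.
# The 90 C surface temperature below is an illustrative assumption.
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_um(temp_celsius: float) -> float:
    """Peak thermal-emission wavelength in micrometres."""
    temp_kelvin = temp_celsius + 273.15
    return WIEN_B / temp_kelvin * 1e6

print(peak_wavelength_um(90))   # ~8 um, deep infrared
print(peak_wavelength_um(20))   # room temperature: ~10 um
```

Visible light spans roughly 0.38-0.75 µm, so an object would need to reach thousands of kelvin before its thermal emission became visible; at GPU temperatures the anodized color is irrelevant to radiative cooling.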
  • Ppietra - Friday, January 03, 2014 - link

    An object being black only implies that it absorbs visible light. Thermal radiation is mostly infrared, not visible light, so being black has no consequence, since there is nothing emitting visible radiation internally. Externally the surface is very reflective, so no problem there either - not that there would be one if it weren't reflective. Reply
  • cosmotic - Tuesday, December 31, 2013 - link

    It would be nice to see storage performance of the Mac Pro SSD against RAID on mechanical disks and SSD disks from a previous Mac Pro model. Reply
  • cosmotic - Tuesday, December 31, 2013 - link

    Including IOPS Reply
  • acrown - Tuesday, December 31, 2013 - link

    The early 2008 Mac Pro does not support hyperthreadimg as your charts indicate. Of course I could just be doing something wrong with mine... Reply
  • acrown - Tuesday, December 31, 2013 - link

    Stupid onscreen keyboard. I meant hyperthreading of course. Reply
  • Anand Lal Shimpi - Tuesday, December 31, 2013 - link

    Whoops, you're right! Fixed :) Reply
  • acrown - Tuesday, December 31, 2013 - link

    Great article by the way. I'm so on the fence about whether to get one to replace my current Mac Pro.

    The read is tempting me more and more though...
    Reply
  • ananduser - Tuesday, December 31, 2013 - link

    It's actually simple: it's the best OS X workstation, though seemingly only Apple software actually makes full use of the GPU setup.

    If your workflow revolves exclusively around FCPX, it is the only workstation you'll need. If you're an average consumer wanting a powerful OS X machine, you'd better get a consumer-oriented iMac.

    PS: No need for me to mention that if you'll need CUDA and Windows then it's a bad buy.
    Reply
  • akdj - Wednesday, January 01, 2014 - link

    I'd expect OpenCL to become more and more and MORE ubiquitous as time marches on and Moore's law in relation to CPUs slows...and more computing can be taken care of via screaming-fast GPUs. Again: early adopters. But CUDA/Windows options are aplenty. Just more expensive and without twin GPUs. Without PCIe storage. And.....oh yeah, they're Windows boxes. At least with the MP you can run Windows...and perhaps, as we saw Adobe so quickly do with HiDPI support post-rMBP release (along with hundreds of other apps and software companies)---hopefully Windows 8.2/9.x realizes the more significant 'all around' gains of utilizing OpenCL (nVidia too?) than the very, VERY select software titles that take advantage of CUDA....and when they do, it's primitive in comparison to what OpenCL opens the doors for. Literally. Everything Reply
  • moppop - Wednesday, January 01, 2014 - link

    Considering CUDA is a GPGPU API, there's no door that OpenCL opens that CUDA can't...in fact, you could say that CUDA opens more doors on an Nvidia GPU. Nvidia also supports OpenCL, since it was among the parties that helped expand the API spec, but make no mistake, their flagship is CUDA.

    Aside from shunning Nvidia in market segments whose software will be at least CUDA-accelerated (if there's GPU acceleration at all), my main beef with the Mac Pro is that there are artificial limits placed by the design. Namely one CPU socket and only 4 DIMM slots.

    For VFX/3DCG pros, the reality is that GPU rendering simply isn't there yet. Your PRMan, Mental Ray, V-Ray (not real-time V-Ray), Arnold, and Mantra renderers are still very much in the CPU world. When professionals buy a machine they need it to work now, not 5 years from now. While the Mac Pro certainly appeals to portions of the pro market segment, it was simply foolish to castrate the Mac Pro this way.
    Reply
  • frelledstl - Tuesday, December 31, 2013 - link

    "I have to admit that I've been petting it regularly ever since. It's really awesomely smooth. It's actually the first desktop in a very long time that I want very close to me."

    You lost me here dude. Scary...
    Reply
  • lilo777 - Tuesday, December 31, 2013 - link

    It's just AT delivering the main Apple talking points. After all, small size is the only [questionable] benefit of the MP. How else can they justify skimping on GPU power, no expandability, etc.? Reply
  • darkcrayon - Wednesday, January 01, 2014 - link

    Ahh yes, a willfully ignorant troll on any forum. "No expandability"... Reply
  • akdj - Wednesday, January 01, 2014 - link

    Lol...you're baaaack! To spew more bullshit? Expandability skimping? Developing thunderbolt hand in hand with Intel, decreasing the weight from 70 to 10 pounds yet blowing the doors off its predecessor with its 'skimpy' GPU offerings....hmmm, I'll take two. Sorry you've no need for the machine. Many that do will easily save time.....which in turn allows the computer to make more money....which allows it to pay for itself.
    Engineering art AND function AND the balls to pull it off
    Are you still missing your Soundblaster? Your serial and parallel connections?
    I'd like to think LILO has a life....but it's pretty much a given, ANY Apple story, review, even objective measurements Anand and team provide, LILO will be here....fast as he can. Quarterbacking from mom's basement with his Pentium 4 and Voodoo3DFX....feed the spider dude. Get out. Get some air. Then learn about computers. It wouldn't waste as much 'comment' space. You're obviously in need of an xbox...not a workstation with MORE Expandability than any other computer on the market and weighing a bit more than Dell and HP's 'workstation' laptops. Wow. Just. Wow. Hopefully some day anonymity will be taken away in these comment sections. Would make it nice to know some folks would just find a different place to troll
    Reply
  • bji - Thursday, January 02, 2014 - link

    That's a lot of hate for something so insignificant. Reply
  • Dennis Travis - Thursday, January 02, 2014 - link

    He probably uses an old AdLib card and not even a SoundBlaster! :D Grin, it's been so long I actually had to google to remember the name AdLib! Reply
  • KVFinn - Tuesday, December 31, 2013 - link

    People have been avoiding CrossFire AMD setups for a while because of frame pacing issues (high frame rates, but with stutters and frames out of sequence, so it looks worse overall). AMD recently fixed this, but only in the 290 model. Do the D700s suffer from this issue in Windows gaming? Reply
  • Ryan Smith - Tuesday, December 31, 2013 - link

    Yes. D700s are Tahiti based and as such have all the same limitations as the 7970/280x parts, where it has yet to be fixed for Eyefinity configurations (including tiled monitors). Reply
  • mesahusa - Thursday, January 09, 2014 - link

    Why in hell would you even ask such a stupid question? It's pretty obvious that this is for video editors and movie makers, not gamers -_- Reply
  • solipsism - Tuesday, December 31, 2013 - link

    Where is the Single Threaded Performance for the first graph on page 2? Reply
  • Anand Lal Shimpi - Tuesday, December 31, 2013 - link

    Behind the multithreaded curve, the two are almost identical :) Reply
  • Calista - Tuesday, December 31, 2013 - link

    It seems to follow the multithreaded graph perfectly; a tiny bit of the blue graph can be seen in the upper right corner. So it's actually hidden by the second graph. Reply
  • japtor - Tuesday, December 31, 2013 - link

    One thing I keep forgetting to ask since it hasn't been mentioned anywhere, does AirPlay display spanning/mirroring work? I figured it used QSV on the other Macs which this machine doesn't have, so just curious if they just left it out completely. Reply
  • skiboysteve - Tuesday, December 31, 2013 - link

    I wonder if they changed it to AMD hardware encode... Reply
  • Calista - Tuesday, December 31, 2013 - link

    So not really a proper upgrade for anyone owning a Mac Pro from the last few years unless Thunderbolt, faster GPU or a small form factor is needed.

    Anyway, it's an impressive package and it's clear Apple have brought with them a lot of the knowledge they have gained over the years building laptops.
    Reply
  • Lonyo - Tuesday, December 31, 2013 - link

    Indeed. As was the question with the "super thin" iMac... what's the point?

    It's all very well having a super small computer blah blah, but in this instance, for this type of machine and end user, what's the real benefit?
    The cost has gone up for a base model, performance per dollar has gone down compared to the previous one, there's no ability to upgrade GPUs.
    As soon as you start plugging in Thunderbolt devices, there goes your "sleek looks" etc. Plus it's more expensive to get a Thunderbolt HDD/etc than just stick one inside the case, further increasing costs.

    Yes, it looks nice, and from an engineering standpoint it's very well done, but... is it really the right product for the market?
    Reply
  • Calista - Wednesday, January 01, 2014 - link

    For anyone not planning on bringing the computer with them from time to time it's certainly not a very practical design. Desk space is often more highly valued than floor space, and the lack of upgrade paths is obviously a con.

    But for those with a need to bring a powerful computer with them on a set or similar it's a much more practical solution as compared to the previous design. I think Apple was quite aware what they were doing. A complete field setup with a 27" monitor, the Mac Pro, cables, keyboard and mouse is less than 20 kg. Much more than a laptop for sure, but still a fairly acceptable weight.
    Reply
  • Lonyo - Wednesday, January 01, 2014 - link

    Wouldn't it make more sense to have TWO designs then? A Mac Pro for people who need portability, and a Mac Pro for standard single location users...?

    I mean, I know Apple tends to be all about deciding what the consumer wants for them and removing choice as much as possible, but sometimes that's not the best way.
    Reply
  • akdj - Wednesday, January 01, 2014 - link

    Why doesn't this model fit that mold? For the stay at home/office/studio...one can easily AND reasonably tie thunderbolt storage together in a very acceptable and aesthetic way. Whether it be a drive enclosure...set of enclosures, TB docks that are now available adding more USB 3/HDMI/audio/et al I/O....who needs a huge box for slow internal 3.5" HDDs anymore? These PCIe SSDs tear the 2.5" models apart. Inside the 'old' style MP, a 'new' GPU on X16 takes up two slots! Sure doesn't leave much room for your MIDI, PCIe SSD or external pro sound card!
    I'm amazed at how few 'get it' here anymore. Especially after such an exhaustive review. I'm a bit biased, as I've made my mortgage for 22 years doing audio and video production. From hauling reel-to-reels, vinyl, film and racks of sound and lighting gear to rMBPs, iPads (now with full 64-channel WiFi front-of-house control with Mackie) and this new Pro....I've shaved thousands of pounds from load-ins and outs. Same in the camera realm. Working the last seven years with Discovery and its subsidiaries in Alaska, I can't put into words what this machine means to us. And its ability to pay itself off many times over just in the course of a year. Exciting times. Hopefully an evolution Win OEMs will consider as well. Shouldn't be any moving parts any longer. Weight and space are ALWAYS an issue. As is the price of power....the advantage of speed, and software developers following suit to offload computational crunching to the GPU
    Reply
  • nunomoreira10 - Tuesday, December 31, 2013 - link

    Now there just needs to be a dual-CPU, single-GPU option to please everybody. Reply
  • zephonic - Tuesday, December 31, 2013 - link

    Thanks for the first thorough review of the MacPro, and on the last day of 2013!

    Happy New Year!
    Reply
  • solipsism - Tuesday, December 31, 2013 - link

    I'm surprised that you can't have 3x4K displays all off the TB ports since the one HDMI port is connected to TB Bus 0. Reply
  • lilo777 - Tuesday, December 31, 2013 - link

    The review is very disappointing. Normally a workstation review would contain performance comparisons with other workstations, not with all-in-one consumer computers equipped with mobile parts. How about comparing the MP with real workstations? Perhaps that would put its size shrink into proper perspective. Reply
  • darkcrayon - Wednesday, January 01, 2014 - link

    From the review, there's no reason to believe the Windows performance would be much different from other similarly configured workstations (which we know are of similar cost), with similar CPUs and GPUs. And of course if you need to work in Final Cut Pro, there wouldn't be an exact comparison available anyway. Reply
  • hoboville - Thursday, January 02, 2014 - link

    "No reason to believe performance would be different".

    Interesting to hear you say that, as these GPUs are underclocked to fit the thermal headroom. For raw performance metrics, the gaming benchmarks show how the CFX D700s compare to their consumer twin, the 7970 GHz / R9 280X: they're slower. More RAM, sure, but it's not ECC, which is what real workstations use. And if you're not using more than 3 GB... you're wasting money.

    This is a Final Cut Pro computer or a computer for those who only use Mac software. Too bad for them, as they have to pay more for less power.
    Reply
  • akdj - Wednesday, January 01, 2014 - link

    Did you read page one? There really isn't anything to 'compare' it TO! No one else is offering these chipsets in their workstations. PCIe SSDs are rare and Thunderbolt is all but non-existent so far in Windows land Reply
  • wiz329 - Tuesday, December 31, 2013 - link

    @Ananad, does the fact that there are only 8 PCIe lanes available to the IO mean that we could see some bottlenecks if there are a large number of external devices attached/in use? Reply
  • wiz329 - Tuesday, December 31, 2013 - link

    *Anand Reply
  • tipoo - Tuesday, December 31, 2013 - link

    So you can't get the cards to be FirePros under Windows? I suspected something like that would be the case given the cost of actual FirePros; since Apple writes much of the graphics driver, there's less of a difference in OS X, while they seem like bog-standard Radeons with somewhat odd configs in Windows. That may take some value away for pros who work with high-end apps in both. Reply
  • tipoo - Wednesday, January 01, 2014 - link

    There were also reports of 7900 series Radeons showing up as D*** series FirePros in OSX. It appears Apple is just eliminating the distinction between them, just calling standard Radeons FirePros. Reply
  • Ryan Smith - Wednesday, January 01, 2014 - link

    The device ID for the D700 is identical to the 7970 (1002-6798), so it will show up as a Radeon under Windows. Conversely, a 7970 may very well show up as a D700 under Mac OS X if the only thing being checked is the device ID. Reply
  • 666sheep - Thursday, January 02, 2014 - link

    It used to be device ID only, but that has changed in the 10.9.2 drivers. Now the 7970 (and 7870 XT) are recognized as "themselves", like in 10.8. So there's another check, most likely vendor and/or subsystem ID.
    BTW, D500 and D700 are based on the same core - Tahiti XT2 and share one EFI ROM (but not the PC part, these are different for each card). I sent Anand an email with more detailed info.
    Reply
  • tipoo - Thursday, January 02, 2014 - link

    But can you install FirePro drivers for it under Windows? Reply
  • 666sheep - Friday, January 03, 2014 - link

    I doubt that at this moment drivers other than Apple Bootcamp ones will detect these cards as FirePro. One should test this empirically, though. Reply
  • wheelhot - Saturday, January 04, 2014 - link

    Yes, I'm one of them, as I'm using Solid Edge in Windows and wondering whether the D300/500/700 come with actual FirePro drivers in Windows or not. Anyone care to clarify?

    I've been searching for anyone to run a SPECviewperf test, as that would make it easier to tell whether they truly come with FirePro GPU drivers in Windows; sadly, none yet :(
    Reply
  • melgross - Tuesday, December 31, 2013 - link

    I'd like to see comparison pricing between the Mac Pro and a DIY machine using the same processors, not the i7. Whatever the difference, it's not the same type of machine if the CPUs are entirely different. That price difference, and possibly that of the mobo, will be drastic, I would imagine, and would make for a fairer comparison. Reply
  • acrown - Tuesday, December 31, 2013 - link

    Lucky for you that someone did that already then:

    http://www.futurelooks.com/new-apple-mac-pro-can-b...

    Spoiler - the Mac Pro is cheaper when you attempt to keep the same design constraints.
    Reply
  • lilo777 - Tuesday, December 31, 2013 - link

    Don't be ridiculous. The guy has no clue. He used two AMD W9000 cards as if that is what the D700 is. Reply
  • itpromike - Tuesday, December 31, 2013 - link

    Apparently you have no clue... The W9000 is spec for spec the exact same card as the D700. Every single spec and number is identical. Aside from Apple's stupid and pointless naming convention, this card is the same. Reply
  • KVFinn - Tuesday, December 31, 2013 - link

    >The W9000 is spec for spec the exact same card as the D700. Every single spec and number are identical.

    The W9000 is different, it has ECC ram for example. It's spec for spec the same as the 7970 but clocked lower.

    Though most of the differences in Pro grade GPU are confined to drivers.
    Reply
  • JlHADJOE - Wednesday, January 01, 2014 - link

    AFAIK the D700 is also a FirePro, and also has ECC on its VRAM. Reply
  • tipoo - Wednesday, January 01, 2014 - link

    Wrong, it has no ECC. On OSX Apple writes much of the graphics driver anyways, so they can get away with calling Radeons FirePros as ECC isn't a necessity to call them that. Reply
  • Kevin G - Wednesday, January 01, 2014 - link

    ECC on the FirePros doesn't actually add additional RAM like it does on traditional server DIMMs. Instead, RAID 5-like parity is performed across the GPU memory channels to be able to detect a memory error. Thus the 6 GB card will only have 5.25 GB available to use with ECC enabled. Since all the memory channels have to be used for a memory access, performance in some workloads takes a significant hit. I believe ECC is disabled by default for performance and memory capacity reasons.

    There is also one other difference between the D700 and the W9000: clock speeds and voltages. The D700 runs at a lower clock speed by default, and presumably a lower voltage, to cut power consumption.
    Reply
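Kevin G's capacity figure implies parity consumes one eighth of VRAM; a sketch where the one-in-eight stripe width is inferred from his 6 GB to 5.25 GB numbers, not taken from AMD documentation:

```python
# Reproduce the 6 GB -> 5.25 GB figure from the comment above. The
# one-parity-channel-per-eight ratio is inferred from those numbers,
# not taken from AMD documentation.
def usable_vram_gb(total_gb: float, stripe_width: int = 8) -> float:
    """Capacity left after RAID-5-style parity across memory channels."""
    return total_gb * (stripe_width - 1) / stripe_width

print(usable_vram_gb(6))  # 5.25
```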
  • DaveGirard - Wednesday, January 01, 2014 - link

    the D700 is clocked lower than the W9000. It's at 850MHz instead of 950. Reply
  • lilo777 - Wednesday, January 01, 2014 - link

    Except it does not have ECC memory or the Pro drivers, which are the only things that differentiate Pro from consumer grade cards. As such they are consumer grade cards (and a two-year-old generation at that) which cost around $700 at most, not $3,500 pro cards. Reply
  • japtor - Wednesday, January 01, 2014 - link

    There's never been a pro driver distinction in OS X, Radeons are validated for pro apps in OS X like FirePros in Windows. Granted there hasn't been the pro branding until now, but Apple does the drivers iirc so I don't see them bothering with splitting the driver base like AMD does. Reply
  • melgross - Wednesday, January 01, 2014 - link

    You know nothing about Apple's drivers. I would bet that at the very least they are based on the pro driver configurations, as Apple has little interest in gaming and a lot of interest in pro users. If you look at the performance of this machine in a pro app you can see that performance is pretty good. Mac Pros are used at NASA, in drug company research labs, CAD shops, video, photography labs and studios, and publishing. Game drivers are of no interest to them.
    Reply
  • solipsism - Tuesday, December 31, 2013 - link

    Note that he speculates that the CPU would be soldered (something no Mac Pro has ever had) and the thermal cap removed (something I believe Apple had only done once).

    Also note he doesn't have any more PCIe available for the SSD so he ends up going with the much slower SATA version but to make up some of the speed he gets 2x512GB in a RAID 0 configuration.

    I like the case they used and I expect to see *more* of these smaller cases hit the market for DIYers and from OEMs now that Apple has stepped in.
    Reply
  • Lonyo - Wednesday, January 01, 2014 - link

    There are lots of small cases on the market and there have been for a while now... sure there could be more, but they are already widely available with a hell of a lot of variety of designs... not sure exactly how you think Apple will have any real impact on this market.

    If anything is going to have an impact it would be Steam boxes because OEMs might start pulling their fingers out and designing more gaming oriented small boxes, although they also are already rather common, but not always available for end users, such as the Alienware system which has a horizontal GPU mount with a riser.
    Reply
  • solipsism - Wednesday, January 01, 2014 - link

    Where are all these OEM PCs with very small cases but high performance like the new Mac Pro? Reply
  • ananduser - Wednesday, January 01, 2014 - link

    There are pro laptops that take care of the size compromise.

    There is also the brilliantly designed HP Z1. The first AIO workstation, both compact and powerful *and* designed for(not against) user accessibility.

    Apple has neither.
    Reply
  • pr1mal0ne - Tuesday, December 31, 2013 - link

    Any details on the PSU? Those seem to be missing; all I can do is scrape for clues in context. Where is the PSU located? How does it handle pushing 400W for an extended period of time (temperature-wise)? How much more load does it pull when you are pushing lots of data through the Thunderbolt and WiFi channels? Reply
  • japtor - Tuesday, December 31, 2013 - link

    For location at least, if you check out iFixit's teardown it's located between the Xeon's board and the I/O board on the back. Reply
  • mdopp - Tuesday, December 31, 2013 - link

    Intel's SRP for the E5-1680 V2 is $1723
    see: http://ark.intel.com/de/products/77912/Intel-Xeon-...
    Reply
  • Goff - Tuesday, December 31, 2013 - link

    I'm curious if one of these Mac Pros could be recommended for programming. Specifically iOS, OS X and Unity 3D programming. I've spent all of my Apple and mobile programming years on either Mac Minis or MacBook Pros.
    Would a 4 or 6 core Mac Pro be of any benefit above and beyond an i7 iMac or a 15" MBPr? It seems a much clearer choice for the video, rendering, photo pros, than for the developer set.
    Any developers out there see a benefit to running Xcode on a Mac Pro?
    Reply
  • madmilk - Tuesday, December 31, 2013 - link

    Seems pretty pointless if you ask me. I guess compilation will be quicker on the 8- and 12-core configs, but on the 4/6 cores it won't be a big difference. As for the GPU, the FirePros are not a whole lot faster than the GPUs in the iMacs. I guess if you like lots of monitors, the Mac Pro has ports for six 2560x1600 monitors, but the rMBP allows three 2560x1600 displays, which is already a vast amount of space. Reply
  • MichalT - Wednesday, January 01, 2014 - link

    You can get Xcode to use the extra cores by typing in something like this:

    defaults write com.apple.dt.Xcode IDEBuildOperationMaxNumberOfConcurrentCompileTasks 8

    It speeds Unity builds a bit for me, but it seems that between Unity and Xcode they are not parallelizing enough tasks.

    GCC, however, uses the extra cores nicely and compilation speed increases nearly linearly with the number of cores; linking is still single-threaded. I build using make and type in something like make debug -j9 (for my 8-core system this provided the best compilation time).
    Reply
  • whyso - Tuesday, December 31, 2013 - link

    Are the D-series GPUs actually FirePros? Or are they simply consumer-level GPUs that Apple has paid to give a FirePro name? What I mean is: under 3D rendering apps (Maya, 3ds Max, SolidWorks, etc.) do they perform like a FirePro W-series GPU or an underclocked 7970? Reply
  • Kevin G - Wednesday, January 01, 2014 - link

    Well, considering that FirePros on the PC side are the same consumer-level chips with different drivers and features enabled, the difference is likely academic. On the OS X side, the consumer GPUs in Apple's Mac Pro have used the same OS X driver as their workstation counterparts. (Though this historically has applied only to nVidia; this is the first time a FirePro has gotten an official OS X release.) Reply
  • Gigaplex - Wednesday, January 01, 2014 - link

    The FirePros usually have ECC RAM which these cards don't. Reply
  • tipoo - Thursday, January 02, 2014 - link

    But can you use them as Firepros when dual booting Windows? Reply
  • hoboville - Thursday, January 02, 2014 - link

    Nvidia GPUs run those applications faster, the Mac Pro GPUs, while having more RAM, are underclocked to meet temps because of the small form factor. If you don't need ECC, and aren't using more than 3 GB of RAM, build a PC with R9 280Xs. If you want a serious workstation, buy Nvidia. Reply
  • HydraMac - Tuesday, December 31, 2013 - link

    @Anand - Hey, interesting results with that power virus and throttling, but what would happen doing the same thing to the older, more conventional MP running 2x GPU cards as well? I'd be curious how the old-school machine handled the same type of thrashing. It would give a frame of reference, as opposed to the results being shown in a vacuum, i.e. unified core vs. conventional machine cooling. Reply
  • justizin - Tuesday, December 31, 2013 - link

    "All of that being said, I don’t expect there to be a lot of cross shopping between DIY builders and those looking for a Mac Pro."

    Actually, everyone I've ever known who worked at Apple was a hackintosh enthusiast and had a home-built machine faster than a Mac Pro at a far lower price. I assume that since Apple has curtailed its employee discounts in recent years, this trend will only continue to grow.
    Reply
  • Kevin G - Wednesday, January 01, 2014 - link

    For consumer systems, yes.

    For professional-level systems, there indeed will be little overlap. The professional-level DIY market is quite small, as companies prefer to order from an OEM like Dell or HP to get centralized warranty, support and service. There is a price premium from the OEMs, but they do tend to follow through on their support contracts, which saves time compared to going through multiple vendors for support and RMAs. Professional-level equipment on the PC side (Xeons, ECC memory, and graphics) doesn't offer the same mass-market price benefits as consumer parts.
    Reply
  • darkcrayon - Wednesday, January 01, 2014 - link

    Exactly. If the DIY market were so large for Pro systems, there's no way HP or Lenovo could justify having them in their product line- and a DIY Windows machine doesn't even need a hacked OS. Reply
  • wkw - Tuesday, December 31, 2013 - link

    10 USB 2 ports on the Lenovo. Sweeeeeeet Reply
  • El Aura - Tuesday, December 31, 2013 - link

    Is the preferred order of TB port usage really 1, 2, 5 and not 1, 3, 5? Reply
  • Ryan Smith - Tuesday, December 31, 2013 - link

    Yes. 1 and 3 are on the same TB controller. Reply
  • JlHADJOE - Tuesday, December 31, 2013 - link

    You, sir, are a very hardworking man.
    Thanks very much for the review, and a happy new year to you.
    Reply
  • scribblemonger - Tuesday, December 31, 2013 - link

    "The part I haven’t quite figured out yet is how Apple handles DisplayPort functionality. All six Thunderbolt 2 ports are capable of outputting to a display, which means that there’s either a path from the FirePro to each Thunderbolt 2 controller or the PEX 8723 switch also handles DisplayPort switching. It doesn’t really matter from an end user perspective as you can plug a monitor into any port and have it work, it’s more of me wanting to know how it all works."

    The former is correct.
    Reply
  • funwithstuff - Tuesday, December 31, 2013 - link

    Could you please share more details of your FCP X benchmarks? I'd like to analyse where the pain points are in for different Macs. Reply
  • funwithstuff - Tuesday, December 31, 2013 - link

    And also, a quick typo fix. In the article you say you're testing the iMac 2013 i5-3.4GHz, but the charts all say i7-3.4GHz. Reply
  • macgeeky - Tuesday, June 03, 2014 - link

    Anand, which is it: are you testing the iMac Late 2013 i5 or i7?

    Thanks!
    Reply
  • AnTech - Tuesday, December 31, 2013 - link

    Apple should bring these to match the Mac Pro:

    - Thunderbolt 2 matte display (24-inch) 4K and 3D with USB 3 and SD card reader.
    - Wired extended keyboard with USB 3 hub built-in.
    Reply
  • miahshodan - Tuesday, December 31, 2013 - link

    Doesn't having external storage kind of negate the entire clean and small design? I would rather have a larger case with my extra drives in it and no messy cables all over my desk. Reply
  • Gigaplex - Wednesday, January 01, 2014 - link

    Yes. Yes it does. Reply
  • nedjinski - Tuesday, December 31, 2013 - link

    I guess you're using FCP as the reference for benchmarks because it is mac exclusive. Since most really serious video editor pros have migrated to Premiere it would be interesting to see if the numbers were different or better using that as your reference. Reply
  • funwithstuff - Wednesday, January 01, 2014 - link

    FCP X has been optimised for the Mac Pro and other NLEs haven't; Premiere doesn't make use of twin GPUs yet. Still, to say that "most serious video editor pros have migrated to Premiere" without any numbers or evidence would be a mistake; noise on forums doesn't necessarily translate to real numbers (e.g. http://www.fcp.co/final-cut-pro/news/1294-pbs-surv...)

    Even though I prefer FCP X myself, most features and large-scale TV shows are still cut on Avid, not either of the others.
    Reply
  • CalaverasGrande - Wednesday, January 01, 2014 - link

    indeed, at our network we are all on Avid (or Dalet).
    Most Pros I know are on Avid or FCP.
    Reply
  • Bill Thompson - Wednesday, January 01, 2014 - link

    Final Cut Pro 7 numbers are irrelevant. Reply
  • nedjinski - Wednesday, January 01, 2014 - link

    These serious editors love FCP -

    https://www.youtube.com/watch?v=LxKYuF9pENQ
    Reply
  • Bill Thompson - Thursday, January 02, 2014 - link

    They liked Premiere Pro 1.0 too. Reply
  • Bill Thompson - Wednesday, January 01, 2014 - link

    The biggest issue is CUDA. There are many pro apps that see a huge speed increase with a CUDA compatible GPU (nVidia).

    Check out Octane, which extends CUDA to lots of apps including Cinema 4D (which makes the cinebench numbers look silly).

    If you are using apps that utilize CUDA, a windows PC with nVidia or an iMac would be much faster than the new Mac Pro.
    Reply
  • Dug - Monday, January 13, 2014 - link

    I would really like to see this. A compilation of new workstations including the Mac Pro, with popular CUDA enhanced apps, non enhanced apps, all benchmarked. Reply
  • Meaker10 - Tuesday, December 31, 2013 - link

    High-end notebooks (like the Alienware 17/18) can upgrade the graphics card quite happily. Reply
  • akdj - Wednesday, January 01, 2014 - link

    Did you take a wrong turn? Reply
  • FunBunny2 - Tuesday, December 31, 2013 - link

    Has everybody forgotten? This is just a Cube with one round corner. I suppose Tim will claim that's been patented too. Reply
  • Y0ssar1an22 - Tuesday, December 31, 2013 - link

    Off the Mac Pro topic but how come the 2013 13" rMBP scores significantly lower than the 2012 and various MBAs in the Cinebench 11.5? I'm personally interested as I have one on order :-) It scores better in later tests (so presumably not a typo?) Cinebench caught my eye as the first cross-benchmark in the review.

    Thanks for this review, and looking forward to the rMBPs in depth!

    Reply
  • iwod - Tuesday, December 31, 2013 - link

    1. What are the chances of a Mac that takes a desktop-class graphics card over 2 x8 PCIe links and uses desktop Haswell instead? Unless I'm missing something, surely this is a simple production-line change.
    2. SSD speed is slow for a peak rate of 2GB/s; it seems the Apple firmware or the Samsung controller isn't capable of filling up the peak bandwidth? Which is the likely cause?
    3. GPU ECC RAM: how much of a problem is its absence for the professional market? And why did Apple decide to ditch it, since the price difference would be minor at the Mac Pro's price?
    Reply
  • dwade123 - Tuesday, December 31, 2013 - link

    Who the **** put a trashcan here!? Reply
  • e375ued - Wednesday, January 01, 2014 - link

    Is there some convenient reason Anand let the Mac Pro off easy by using Prime95 instead of Intel Burn Test or linpack? Reply
  • Ryan Smith - Wednesday, January 01, 2014 - link

    It was my suggestion to try maxing out the Mac Pro, just to see if it would throttle (and if so, by how much). I picked Prime95 because it's good enough; not that there's anything wrong with IBT or Linpack, but all 3 of those are close enough that it shouldn't matter (and P95 is easy to use). Reply
  • jrs77 - Wednesday, January 01, 2014 - link

    Good test that shows that the thermal core design works like a charm, even when applying very heavy and rather unrealistic loads to the system.

    Most people will run these new Mac Pros with only a scene rendering or a video filter applied, etc., and in that case the system is basically dead silent; street noise totally drowns out the fan anyway.

    Just a tad too expensive for me tho.
    Reply
  • Kevin G - Wednesday, January 01, 2014 - link

    The ‘mid-range’ config is a far better value on the 2012 model, since there it is a 12-core machine. The $200 savings can be put toward a better GPU.

    With regard to Cinebench, does it use AVX under OS X? I suspect that it does and that is where the majority of the single threaded CPU performance increase comes from. I strongly suspect that the single threaded performance advantage is far narrower in legacy code that doesn’t take advantage of AVX.

    I’m glad the 2012 model was tested with a Radeon 7950. The ability to upgrade GPU’s matters and it’ll keep the 2012 model competitive for awhile. The system will support future video cards that come in from the PC side of things. With UEFI on video cards now, there is little difference between a Mac and PC version. For what it is worth, I have stuck an EVGA GTX 770 into a 2012 Mac Pro without issue and no modification on the video card or OS X drivers. It just works.

    A bit of a random note is that the GPU connector used in the Mac Pro isn’t new to Apple; they used it for the G4-class daughter cards from 15 years back.

    The PLX chip doesn’t have to do any port switching as a single GPU can drive up to 6 surfaces. That would imply the six DP signals from one GPU are routed in pairs to each of the Falcon Ridge controllers for encapsulation.

    One shocking thing is that wall power draw exceeds that of the PSU’s DC rating. That is worrying as the system itself has only a 450W rated power supply. Due to the AC to DC conversion, there is an efficiency factor but the system has to be running close to its DC limit. Performing several file transfers over powered Thunderbolt devices could put the power draw beyond the rated DC limit. I wonder if Apple has implemented throttling based upon raw power consumption of the system as a whole in addition to temperature and power consumption of individual parts. Perhaps testing the system on a 240V AC circuit would alter things here as it is more efficient power delivery?
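    The efficiency point can be sketched with made-up numbers (the 500 W and 85% figures below are assumptions for illustration, not measured values):

```python
# Illustrative only: an (assumed) 85%-efficient supply turns 500 W at the
# wall into 425 W of DC power for the components, so the wall figure can
# exceed the 450 W DC rating while the DC side stays just under it.
wall_watts = 500.0             # hypothetical draw measured at the outlet
efficiency = 0.85              # assumed AC-to-DC conversion efficiency
dc_watts = wall_watts * efficiency
assert dc_watts < 450          # 425 W: close to, but under, the DC limit
```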

    One aspect not accounted for is memory expansion. The 2009/2010/2012 Mac Pros will work with registered ECC memory, which brings their maximum capacity up to 128 GB. Memory bandwidth too is superior in the dual-socket 2010/2012 models: six channels of 1333 MHz memory have more bandwidth than four channels at 1866 MHz. Going multi-socket does carry some overhead, but it is still a bit of a disappointment that the theoretical number didn’t improve.
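    That bandwidth comparison is easy to check: each DDR3 channel is 64 bits (8 bytes) wide, so peak bandwidth is channels x transfer rate x 8 bytes.

```python
def peak_bandwidth_gbs(channels, mega_transfers_per_s):
    """Peak DDR3 bandwidth in GB/s: each 64-bit channel moves 8 bytes per transfer."""
    return channels * mega_transfers_per_s * 8 / 1000

dual_socket_2012 = peak_bandwidth_gbs(6, 1333)  # six channels of 1333
mac_pro_2013 = peak_bandwidth_gbs(4, 1866)      # four channels of 1866
print(round(dual_socket_2012, 1), round(mac_pro_2013, 1))  # 64.0 59.7
assert dual_socket_2012 > mac_pro_2013
```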
    Reply
  • Bill Thompson - Wednesday, January 01, 2014 - link

    My guess is the nVidia-based iMac is faster with After Effects and Premiere because of CUDA.

    Davinci Resolve has been updated for OpenCL, but I don't think Octane or Adobe apps have.

    BTW, FCP X 10.1 displays multiple 4K streams in real time without rendering. It's a serious app.
    Reply
  • DaveGirard - Wednesday, January 01, 2014 - link

    no, the iMac is faster because those portions of AE and Premiere are likely terribly multithreaded so the single 3.5GHz turbo'd core of the 12-core doesn't beat the higher-clocked iMac. Reply
  • DaveGirard - Wednesday, January 01, 2014 - link

    PS - if you want a video compositor that is well multithreaded, you'll need to use Nuke. Reply
  • Bill Thompson - Wednesday, January 01, 2014 - link

    Nuke is also a CUDA app, not OpenCL. Reply
  • Bill Thompson - Wednesday, January 01, 2014 - link

    Benchmarks show CUDA being several times faster on the same system, the same CPU. Reply
  • akdj - Wednesday, January 01, 2014 - link

    Different software though....Adobe is and has been utilizing CUDA for some time now. Though OpenCL is definitely a better, more open standard that will affect everyday computing (not just video manipulation, encoding, decoding, 3D and CAD). Expect Adobe to adopt OpenCL by 6.5/7 (CC) Reply
  • Bill Thompson - Thursday, January 02, 2014 - link

    Lots of apps use CUDA and they all benefit greatly by it. Davinci Resolve has been updated, though I haven't seen benchmarks yet (I haven't looked) to compare CUDA versus OpenCL with that app.

    Bottom line - right now, if you use Adobe's apps you would be better off getting an iMac (CUDA compatible) or a Windows PC. If Adobe updates their apps so the Mercury playback engine is HW accelerated via OpenCL, then the Mac Pro is an option.

    I think it was a mistake to ignore CUDA and put AMD cards in the new Mac Pro. There are many high-end pro apps that use CUDA.
    Reply
  • Bill Thompson - Thursday, January 02, 2014 - link

    Correction - Premiere CC 7.1 has OpenCL support. It will be interesting to see benchmarks.

    Thanks for your feedback.
    Reply
  • mrbofus - Wednesday, January 01, 2014 - link

    "A single fan at the top of the Mac Pro’s cylindrical chassis pulls in cool air from the bottom of the machine and exhausts it, quietly, out the top."

    I'm confused; if the fan is at the top of the chassis, wouldn't it be exhausting the hot air out? Or if it is pulling in cool air from the bottom, it would have to bypass the hot air in the core, so then it's pulling cool air up the sides?
    Reply
  • Gigaplex - Wednesday, January 01, 2014 - link

    It pulls cold air in the bottom and exhausts hot air out the top. Reply
  • FunBunny2 - Wednesday, January 01, 2014 - link

    You could also mount a cup holder over the hot top exhaust vent to keep your latte warm. There'll be an app for that. Reply
  • zepi - Wednesday, January 01, 2014 - link

    How about virtualization, and for example VT-d support, with multiple GPUs and Thunderbolt ports etc.?

    I.e. running Windows in a virtual machine with half a dozen cores + another GPU while using the rest for OS X simultaneously?

    I'd assume some people would benefit from having both OS X and Windows content-creation applications and development environments available to them at the same time. Not to mention gaming in a virtual machine with a dedicated GPU instead of virtual machine overhead / incompatibility etc.
    Reply
  • japtor - Wednesday, January 01, 2014 - link

    This is something I've wondered about too, for a while now really. I'm kinda iffy on this stuff, but last I checked (admittedly quite a while back) OS X wouldn't work as the hypervisor and/or didn't have whatever necessary VT-d support. I've heard of people using some other OS as the hypervisor with OS X and Windows VMs, but then I think you'd be stuck with hard resource allocation in that case (without restarting at least). Fine if you're using both all the time but a waste of resources if you predominantly use one vs the other. Reply
  • horuss - Thursday, January 02, 2014 - link

    Anyway, I still would like to see some virtualization benchmarks. In my case, it would pretty much make an ideal home server with external storage, while taking advantage of the incredible horsepower to run multiple VMs for my tests, for development, gaming and everything else! Reply
  • iwod - Wednesday, January 01, 2014 - link

    I have been wondering how likely it is that we get a Mac (non-Pro) spec.
    Nvidia has realized that the extra die space spent on GPGPU wasn't worth it. After all, their main targets are gamers and gaming benchmarks. So for Kepler they decided to have two lines, one for GPGPU and one for the mainstream. Unless they change course again, I think Maxwell will very likely follow the same route. AMD is a little different, since they are betting on OpenCL Fusion with their APUs, so GPGPU is critical for them.
    That could mean Apple diverges its product line, with Nvidia on the non-professional Macs like the iMac and MacBook Pro (urg...) while continuing to use AMD FirePro on the Mac Pro line.

    Last time, it was rumoured Intel wasn't so interested in getting Broadwell, the 14nm die shrink of Haswell, out for the desktop. Mostly because mobile/notebook CPUs have overtaken the desktop and will continue to do so; it is much more important to cater to the biggest market. Not to mention die shrinks nowadays are much more about power savings than performance improvements. So Intel could milk the desktop and server markets while continuing to lead in mobile and trying to catch up with 14nm Atom SoCs.

    If that is true, the rumour of a Haswell Refresh on the desktop could mean Intel is no longer delaying server products by a single cycle. They will be doing the same for the desktop as well.

    That means there could be a Mac Pro with Haswell-EP along with a Mac with a Haswell Refresh.
    And by using Nvidia graphics instead of AMD, Apple doesn't need to worry about the Mac eating into the Mac Pro market. And there could be less cost involved in not using a pro graphics card, only supporting 3 Thunderbolt displays, etc.
    Reply
  • words of peace - Wednesday, January 01, 2014 - link

    I keep thinking that if the MP is a good seller, maybe Apple could enlarge the unit so it contains a four-sided heatsink; this could allow for dual CPUs. Reply
  • Olivier_G - Wednesday, January 01, 2014 - link

    Hi,

    I don't understand the comment about the lack of HiDPI modes here.

    I would think it's simply the last one in the list, shown as 1920x1080 HiDPI; it makes the screen be perceived as such by apps, yet photos and text render at 4x resolution, which is what we're looking for, I believe?

    I tried such a mode on my iMac out of curiosity, and while 1280x720 is a bit ridiculously small, it allowed me to confirm it has worked since OS X Mavericks. So I would expect the same behaviour to let me use my 4K monitor correctly with a Mac Pro?

    Am I wrong?
    Reply
  • Gigaplex - Wednesday, January 01, 2014 - link

    The article clearly states that it worked at 1920 HiDPI but the lack of higher resolutions in HiDPI mode is the problem. Reply
  • Olivier_G - Wednesday, January 01, 2014 - link

    Well, no, it does not state that at all. I read it again, and he did not mention trying the last option in the selector. Reply
  • LumaForge - Wednesday, January 01, 2014 - link

    Anand,

    Firstly, thank you very much for such a well researched and well thought out piece of analysis - extremely insightful. I've been testing a 6 core and 12 core nMP all week using real-life post-production workflows and your scientific analysis helps explain why I've gotten good and OK results in some situations and not always seen the kinds of real-life improvements I was expecting in others.

    Three follow up questions if I may:

    1) DaVinci Resolve 10.1 ... have you done any benchmarking on Resolve with 4K files? ... like FCP X 10.1, BMD have optimized Resolve 10.1 to take full advantage of split CPU and GPU architecture but I'm not seeing the same performance gains as with FCP x 10.1 .... wondering if you have any ideas on system optimization or the sweet spot? I'm still waiting for my 8 core to arrive and that may be the machine that really takes advantage of the processor speed versus cores trade-off you identify.

    2) Thunderbolt 2 storage options? ... external storage I/O also plays a significant role in overall sustained processing performance, especially with 4K workflows ... I posted a short article in the Creative Cow SAN section detailing some of my findings (nowhere near as detailed or scientific as your approach, I'm afraid) ... I'd be interested to know your recommendations on Tbolt2 storage.

    http://forums.creativecow.net/readpost/197/859961

    3) IP over Tbolt2 as peer-to-peer networking topology? ... as well as running the nMPs in DAS, NAS and SAN modes I've also been testing IP over Tbolt2 .... only been getting around 500 MB/s sustained throughput between two nMPs ... if you look at the AJA diskwhack tests I posted on Creative Cow you'll see that the READ speeds are very choppy ... looks like a read-ahead caching issue somewhere in the pipeline or lack of 'Jumbo Frames' across the network ... have you played with TCP/IP over Thunderbolt2 yet and come to any conclusions on how to optimize throughput?

    Keep up the good work and all the best for 2014.

    Cheers,
    Neil
    Reply
  • modeleste - Wednesday, January 01, 2014 - link

    I noticed that the Toshiba 65" 4k TV is about the same price as the Sharp 32" The reviews seem nice.

    Does anyone have any idea what the issues would be with using this display?
    Reply
  • stevesup - Wednesday, January 01, 2014 - link

    Great review, per usual. Even Leo Laporte couldn't dig out a negative nugget to bash Apple with. Reply
  • milkod2001 - Wednesday, January 01, 2014 - link

    In a few months the market will probably be flooded with similar cases. Something like a semitransparent case in this shape with decent LED lighting could actually look quite nice.

    Back in Windows Vista times I was working with a Mac Pro and an iMac as a graphic designer. They were a pleasure to work with compared to a crappy, slow Vista-based PC.

    Now, with W7, I can't think of a single reason I'd want to spend almost twice as much for a Mac Pro compared to a W7 PC (obviously nobody is forcing me to).

    Good job with the review, Anald.
    Reply
  • milkod2001 - Wednesday, January 01, 2014 - link

    sorry about misspelled name, can't find edit option for post Reply
  • Mat9912 - Wednesday, January 01, 2014 - link

    Can someone comment on the power consumption of the new Mac Pro when in standby/sleep mode? Reply
  • knweiss - Thursday, January 02, 2014 - link

    You'll find the info in the Mac Pro Environmental Report:
    http://images.apple.com/environment/reports/docs/M...
    Reply
  • nomorespam - Wednesday, January 01, 2014 - link

    Any idea why the three networking ports couldn't have been combined into a single PCIe 2.0 lane with a switch/bridge?

    By my math this comes to at most 3.3Gbps (412MB/s) unidirectional with all three ports saturated.

    Is this not a possibility or are there other considerations that make this impractical or undesirable?

    I'm thinking it would be really nice to get another 2 (ideally 3 for no bottleneck) of the PCIe 2.0 lanes to the USB 3.0 ports and this seems like a valid way to triple available USB 3.0 bandwidth.

    The only other place I can see to steal an additional PCIe 2.0 lane back, to get to 4-port bottleneck-free USB 3.0 I/O, is to take one lane away from the SSD controller?

    Surely doing so reduces the bus width and the resulting net performance, even though the theoretical 1.5GB/s that three PCIe 2.0 lanes provide is still faster than the Mac Pro's shipping SSDs?

    Like Neil above, I would also like to know more about IP over thunderbolt 1 and 2 and how it works in the real world today - I would suspect the network stack is not in any way optimized for it at this point.
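    If the 3.3Gbps figure is the two gigabit Ethernet jacks plus 1.3Gb/s 802.11ac (my assumption; the comment doesn't spell out which three ports it means), the arithmetic works out like this:

```python
# Assumption: "three networking ports" = 2x gigabit Ethernet + 1.3 Gb/s 802.11ac.
aggregate_gbps = 2 * 1.0 + 1.3              # 3.3 Gb/s with all three saturated
aggregate_mb_s = aggregate_gbps * 1000 / 8  # 412.5 MB/s unidirectional
pcie2_lane_mb_s = 500                       # effective throughput of one PCIe 2.0 lane
assert aggregate_mb_s < pcie2_lane_mb_s     # fits within a single lane
```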
    Reply
  • OreoCookie - Wednesday, January 01, 2014 - link

    @Anand
    Any reason why you haven't posted results on your SSD consistency tests for the Mac Pro?
    Reply
  • wozwoz - Wednesday, January 01, 2014 - link

    Nice review - though rather too long. Sometimes, less is more :)

    Unfortunately, what is not answered, and remains 'unknown' is:

    * Is a D500 graphics card actually any faster than a D300 in real-world tests? [Note that the D500 has a lower clock speed]

    * Since the D700 apparently consumes vastly more power than the D300, how does the graphics card affect noise levels and the thermal performance of the entire machine?

    I liked your chart of CPU turbo boost, as the number of cores in play changes ... first decent explanation of that issue.
    Reply
  • bizarrefish - Wednesday, January 01, 2014 - link

    Excellent review Anand, very comprehensive and just what I needed to help seal the decision to buy. I believe some of your benchmarks and stress testing were under Boot Camp Windows, correct? I too want to use the system as my main workstation/PC at home, but for occasional gaming also. The system will spend most of its time in OS X, but is the driver support good enough to perform well in Boot Camp for modern gaming needs? Thanks. Reply
  • jackbobevolved - Saturday, April 19, 2014 - link

    I got the 12 core D700 model and it works great for gaming. The latest Catalyst drivers installed without issue and performance has been amazing. What really blew me away though was comparing the render and export speed on this machine against my old 3,1 8 core with a Radeon 5870. Several hour exports from FCPX were cut to just minutes. Reply
  • uhuznaa - Wednesday, January 01, 2014 - link

    For whatever it's worth: I'm supporting a video pro and what I can see in that crowd is that NOBODY cares for internal storage. Really. Internal storage is used for the software and of course the OS and scratch files and nothing else. They all use piles of external drives which are much closer to actual "media" you can carry around and work with in projects with others and archive.

    In fact I tried for a while to convince him of the advantages of big internal HDDs and he wouldn't have any of it. He found the flood of cheap USB drives you can pick up even at a gas station in the middle of the night the best thing to happen, and USB3 a gift from heaven. They're all wired this way. Compact external disks that you can slap paper labels on, with the name of the project and the version of that particular edit, and that you can carry around, are the best thing since sliced bread for them. And after a short while I had to agree that they're perfectly right about that for what they do.

    Apple is doing this quite right. Lots of bays are good for servers, but this is not a server. It's a workstation and work here means mostly work with lots of data that wants to be kept in nice little packages you can plug in and safely out and take with you or archive in well-labeled shelves somewhere until you find a use for it later on.

    (And on a mostly unrelated note: Premiere Pro may be the "industry standard" but god does this piece of software suck gas giants through nanotubes. It's a nightmarish UI thinly covering a bunch of code held together by chewing gum and duct tape. Apple may have the chance of a snowflake in hell against that with FCP but they absolutely deserve kudos for trying. I don't know if I love Final Cut, but I know I totally hate Premiere.)
    Reply
  • lwatcdr - Wednesday, January 01, 2014 - link

    "My one hope is that Apple won’t treat the new Mac Pro the same way it did its predecessor. The previous family of systems was updated on a very irregular (for Apple) cadence. "

    This is the real problem. Haswell-EP will ship this year and it uses a new socket. The proprietary GPU physical interface means GPU updates will probably not come quickly, and they will be expensive. Today the Pro is a very good system, but next year it will be falling behind.
    Reply
  • boli - Wednesday, January 01, 2014 - link

    Hi Anand, cheers for the enjoyable and informative review.

    Regarding your HiDPI issue, I'm wondering if this might be an MST issue? Did you try in SST mode too?

    Just wondering because I was able to add 1920x1080 HiDPI to my 2560x1440 display no problem, by adding a 3840x2160 custom resolution to Switch Res X, which automatically added 1920x1080 HiDPI to the available resolutions (in Switch Res X).
    Reply
  • mauler1973 - Wednesday, January 01, 2014 - link

    Great review! Now I am wondering if I can replicate this kind of performance in a hackintosh. Reply
  • Technology Never Sleeps - Wednesday, January 01, 2014 - link

    Good article, but I would suggest that your editor or proofreader review your articles before they're posted. So many grammatical errors take away from the professional nature of the article and the website. Reply
  • Barklikeadog - Wednesday, January 01, 2014 - link

    Once again, a standard 2009 model wouldn't fair nearly as well here. Even with a Radeon HD 4870 I bet we'd be seeing significantly lower performance.

    Great review Anand, but I think you meant fare in that sentence.
    Reply
  • name99 - Wednesday, January 01, 2014 - link

    " Instead what you see at the core level is a handful of conservatively selected improvements. Intel requires that any new microarchitectural feature introduced has to increase performance by 2% for every 1% increase in power consumption."

    What you say is true, but not the whole story. It implies that these sorts of small improvements are the only possibility for the future and that's not quite correct.
    In particular branch prediction has become good enough that radically different architectures (like CFP --- Continuous Flow Processing --- become possible). The standard current OoO architecture used by everyone (including IBM for both POWER and z, and the ARM world) grew from a model based on no speculation to some, but imperfect, speculation. So what it does is collect speculated results (via the ROB and RAT) and dribble those out in small doses as it becomes clear that the speculation was valid. This model never goes drastically off the rails, but is very much limited in how many OoO instructions it can process, both at the complete end (size of the ROB, now approaching 200 fused µ-instructions in Haswell) and at the scheduler end (trying to find instructions that can be processed because their inputs are valid, now approaching I think about 60 instructions in Haswell).
    These figures give us a system that can handle most latencies (FP instructions, divisions, reasonably long chains of dependent instructions, L1 latency, L2 latency, maybe even on a good day L3 latency) but NOT memory latency.

    And so we have reached a point where the primary thing slowing us down is data memory latency. This has been a problem for 20+ years, but now it's really the only problem. If you use best-of-class engineering for your other bits, really the only thing that slows you down is waiting on (data) memory. (Even waiting on instructions should not ever be a problem. It probably still is, but work done in 2012 showed that the main reason instruction prefetching failed was that the prefetcher was polluted by mispredicted branches and interrupts. It's fairly easy to filter both of these once you appreciate the issue, at which point your I-prefetcher is basically about 99.5% accurate across a wide variety of code. This seems like such an obvious and easy win that I expect it to move into all the main CPUs within 5 yrs or so.)

    OK, so waiting on memory is a problem. How do we fix it?
    The most conservative answer (i.e. the one requiring the fewest major changes) is data prefetchers, and we've had these growing in sophistication over time. They can now detect array accesses with strides across multiple cache lines, including backwards ones, and we have many (at least 16 on Intel) running at the same time. Each year they become smarter about starting earlier, ending earlier, and not polluting the cache with unneeded data. But they only speed up regular array accesses.

    Next we have a variety of experimental prefetchers that look for correlations in the OFFSETs of memory accesses; the idea being that you have things like structs or B-tree nodes that are scattered all over memory (linked by linked lists or trees or god knows what), but there is a common pattern of access once you know the base address of the struct. Some of these seem to work OK, with realistic area and power requirements. If a vendor wanted to continue down the conservative path, this is where they would go.

    Next we have a different idea, runahead execution. Here the idea is that when the “real” execution hits a miss to main memory, we switch to a new execution mode where no results will be stored permanently (in memory or in registers); we just run ahead in a kind of fake world, ignoring instructions that depend on the load that has missed. The idea is that, during this period we’ll trigger new loads to main memory (and I-cache misses). When the original miss to memory returns its result, we flush everything and restart at the original load, but now, hopefully, the runahead code started some useful memory accesses so that data is available to us earlier.
    There are many ways to slice this. You can implement it fairly easily using SMT infrastructure if you don’t have a second thread running on the core. You can do crazy things that try to actually preserve some of the results you generate during the runahead phase. Doing this naively you burn a lot of power, but there are some fairly trivial things you can do to substantially reduce the power.
    In the academic world, the claim is that for a Nehalem type of CPU this gives you about a 20% boost at the cost of about 5% increased power.
    In the real world it was implemented (but in a lousy cheap-ass fashion) on the POWER6, where it was underwhelming (it gave you maybe a 2% boost over the existing prefetchers); but their implementation sucked because it only ran 64 instructions during the runahead periods. The simulations show that you generate about one useful miss to main memory per 300 instructions executed, so maybe two or three during a 400-to-500-cycle load miss to main memory, but 64 is just too short.
    It was also supposed to be implemented in the Sun Rock processor, which was cancelled when Oracle bought Sun. Rock tried to be way more ambitious in its version of this scheme AND suffered from a crazy instruction fetch system that had a single fetch unit trying to feed eight threads via round robin (so each thread gets new instructions every eight cycles).
    Both these failures don’t, I think, tell us if this would work well if implemented on, say, an ARM core rather than adding SMT.
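    If you want to see the core mechanic, here's a toy Python model of one runahead period (everything here, the register names, the instruction tuples, and the miss set, is invented for illustration):

```python
# Toy model of runahead execution. On a miss to main memory we keep
# "executing" with the load's destination register poisoned (INV).
# Instructions that depend on a poisoned register are skipped;
# independent loads still compute their addresses, and any that would
# miss become early prefetches. Everything written in this mode is
# discarded when real execution restarts at the original load.
INV = object()

def runahead(instrs, regs, misses):
    """instrs: ('load', dst, base_reg, offset) or ('add', dst, s1, s2).
    `misses` is the set of addresses that would miss the cache.
    Returns the addresses prefetched during the runahead period."""
    regs = dict(regs)            # speculative copy, thrown away after
    prefetched = []
    for op in instrs:
        if op[0] == 'load':
            _, dst, base, off = op
            if regs.get(base) is INV:
                regs[dst] = INV           # address unknown: poison dest
            elif (addr := regs[base] + off) in misses:
                prefetched.append(addr)   # start this fetch early
                regs[dst] = INV           # its data isn't back yet either
            else:
                regs[dst] = 0             # cache hit: dummy value
        else:                             # 'add'
            _, dst, s1, s2 = op
            a, b = regs[s1], regs[s2]
            regs[dst] = INV if (a is INV or b is INV) else a + b
    return prefetched

prog = [('load', 'r1', 'rbase', 0),  # the original miss that triggered us
        ('add',  'r2', 'r1', 'r1'),  # depends on the miss: skipped
        ('load', 'r3', 'rptr', 8),   # independent: becomes a prefetch
        ('load', 'r4', 'r1', 0)]     # address depends on miss: skipped
print(runahead(prog, {'rbase': 0x1000, 'rptr': 0x2000}, {0x1000, 0x2008}))
# prints [4096, 8200]: both memory misses are now in flight in parallel
```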

    Which gets us to SMT. Seems like a good idea, but in practice it’s been very disappointing, apparently because now you have multiple threads fighting over the same cache. Intel, after trying really hard, can’t get it to give more than about a 25% boost. IBM added 4 SMT threads to POWER7, but while they put a brave face on it, the best the 4 threads give you is about 2x single threaded performance. Which, hey, is better than 1x single threaded performance, but it’s not much better than what they get from their 2 threaded performance (which can do a lot better than Intel given truly massive L3 caches to share between threads).

    But everything so far is just add-ons. CFP (continual flow pipelines) looks at the problem completely differently.
    The problem we have is that the ROB is small, so on a load miss it soon fills up completely. You’d want the ROB to be about 2000 entries in size and that’s completely impractical. So why do we need the ROB? To ensure that we write out updated state properly (in small dribs and drabs every cycle) as we learn that our branch prediction was successful.
    But branch prediction these days is crazy accurate, so how about a different idea. Rather than small-scale updates of successful state every cycle, we do a large-scale checkpoint every so often, generally just before a branch that’s difficult to predict. In between these difficult branches, we run out of order with no concern for how we write back state — and on the rare occasions that we do screw up, we just roll back to the checkpoint. In between difficult branches, we just run on ahead even across misses to memory — kinda like runahead execution, but now really doing the work, and just skipping over instructions that depend on the load, which will get their chance to run (eventually) when the load completes.
    Of course it’s not quite that simple. We need to have a plan for being able to unwind stores. We need a plan for precise interrupts (most obviously for VM). But the basic idea is we trade today’s horrible complexity (ROB and scheduler window) for a new ball of horrible complexity that is not any simpler BUT which handles the biggest current problem, that the system grinds to a halt at misses to memory, far better than the current scheme.
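    The basic "checkpoint instead of per-cycle retirement" mechanic can be sketched as a toy (the store buffer below is a stand-in for the real machinery needed to unwind stores, and all names are invented for illustration):

```python
# Toy checkpoint-based recovery: instead of retiring state through a
# ROB one instruction at a time, we snapshot the registers and buffer
# stores only at hard-to-predict branches, run freely in between, and
# roll the whole region back on a mispredict.
class CheckpointMachine:
    def __init__(self):
        self.regs = {}
        self.mem = {}
        self.store_buffer = []   # stores held back until the region commits
        self.checkpoint = None

    def take_checkpoint(self):   # taken just before a low-confidence branch
        self.checkpoint = dict(self.regs)

    def write_reg(self, r, v):
        self.regs[r] = v

    def store(self, addr, v):    # buffered so the region can be unwound
        self.store_buffer.append((addr, v))

    def commit(self):            # branch resolved the way we predicted
        for addr, v in self.store_buffer:
            self.mem[addr] = v
        self.store_buffer.clear()
        self.checkpoint = None

    def rollback(self):          # mispredict: discard the whole region
        self.regs = self.checkpoint
        self.store_buffer.clear()
        self.checkpoint = None

m = CheckpointMachine()
m.write_reg('r1', 5)
m.take_checkpoint()              # hard branch ahead
m.write_reg('r1', 99)            # speculative work past the branch...
m.store(0x40, 99)
m.rollback()                     # ...which turned out to be the wrong path
print(m.regs['r1'], m.mem)       # 5 {}  (nothing speculative escaped)
```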

    The problem, of course, is that this is a hell of a risk. It’s not just the sort of minor modification to your existing core where you know the worst that can go wrong; this is a leap into the wild blue yonder on the assumption that your simulations are accurate and that you haven’t forgotten some show-stopping issue.
    I can’t see Intel or IBM being the first to try this. It’s the sort of thing that Apple MIGHT be ambitious enough to try right now, in their current state of so much money and not having been burned by a similar project earlier in their history. What I’d like to see is a university (like a Berkeley/Stanford collaboration) try to implement it and see what the real world issues are. If they can get it to work, I don’t think there’s a realistic chance of a new SPARC or MIPS coming out of it, but they will generate a lot of valuable patents, and their students who worked on the project will be snapped up pretty eagerly by Intel et al.
    Reply
  • stingerman - Wednesday, January 01, 2014 - link

    I think Intel has another two years left on the Mac. Apple will start phasing it out on the MacBook Air, Mac Mini and iMac, then the MacBook rPros and finally the Mac Pro. Discrete x86 architecture is dead-ending. Apple's going to move their Macs to SoCs that they design. They will contain most of the necessary components and significantly reduce the costs of the desktops and notebooks. The Mac Pro will get it last, giving time for the Pro Apps to be ported to Apple's new mobile and desktop 64-bit processors. Reply
  • tahoey - Wednesday, January 01, 2014 - link

    Remarkable work as always. Thank you. Reply
  • DukeN - Thursday, January 02, 2014 - link

    Biased much, Anand?

    Here's the Lenovo S30 I bought a couple of weeks back, and no it wasn't $4000 + like you seem to suggest.

    http://www.cdw.com/shop/products/Lenovo-ThinkStati...

    You picked probably the most overpriced SKU in the bunch just so you can prop up the ripoff that is your typical Apple product.

    Shame.
    Reply
  • OreoCookie - Thursday, January 02, 2014 - link

    You link to a $2000 workstation without graphics cards and just 8 GB RAM. How does that invalidate Anand's claim? Reply
  • DukeN - Thursday, January 02, 2014 - link

    I paid $144 for the extra RAM and re-used my old workstation cards.

    Hardly double the price.
    Reply
  • zsquared - Thursday, January 02, 2014 - link

    So you are not counting the original cost of the workstation cards, but you think you proved your point.

    And you're telling Anand "Shame"?
    Reply
  • Ppietra - Friday, January 03, 2014 - link

    memory: 1600 MHz vs 1866 MHz
    SSD: SATA600 vs PCIe
    no thunderbolt
    no graphics
    clearly the same specs
    Reply
  • DukeN - Friday, January 03, 2014 - link

    Even with a decent card it's something like $1500 less. This is a fact that isn't acknowledged by Anand, of course, because not cozying up to Apple is potentially bad for business.

    Oh well I'll never be able to convince the Apple fanboys writing or reading here.
    Reply
  • Ppietra - Friday, January 03, 2014 - link

    the comparison wasn’t about what you can get with the same processor, but how much it costs to have the same specs, and your comparison doesn’t do that; you certainly don’t get a machine with the same performance, since the memory and storage are slower.
    How do you get it for $1500 less when your machine’s base price is only $1000 less? If you add 2 decent graphics cards (with comparable performance) and the extra memory to match, you quickly get to the Mac Pro price and would still probably get worse performance!
    Reply
  • Liquidmark - Friday, January 03, 2014 - link

    Where's your graphics card?
    8GB of ram?
    SATA SSD?

    Here's what I got so far from lenovo's site...

    Intel Xeon E5-1620 v2 Processor (10MB Cache, 3.70GHz)
    Windows 8 Pro 64 Downgrade Windows 7 Professional 64 English
    Tower 5x6 Mechanical Shell with low inrush current power supply
    Windows 8 Pro 64 RDVD English
    16GB ECC 2Rx4 PC3 1600MHz RDIMM
    NVIDIA® Quadro K2000D (2GB Dual link DVI+DVI, Mini DP) - (For Windows 7 ME8.0)
    NVIDIA® Quadro K2000D (2GB Dual link DVI+DVI, Mini DP) - (For Windows 7 ME8.0)
    DVI to VGA Dongle
    Integrated Audio
    Intel SATA HDD support (1-3 HDDs)
    Internal RAID - Not Enabled
    2.5" 256GB SATA SolidState Drive
    29-in-1 Media Card Reader - (For Windows 7 ME8.0)
    DVD Burner/CD-RW Rambo Drive (SATA) - Win7 ME8.0
    Integrated Ethernet
    USB Preferred Pro FullSize Win8 US Euro English 103P
    Lenovo USB Optical Wheel Mouse
    LineCord - US
    Publication - English

    $4,128.00

    Mac Pro with 16 gb instead of 12 for ram...

    $3,099.00

    Come on man...
    Reply
  • Liquidmark - Friday, January 03, 2014 - link

    Oop, I forgot the Applecare!

    Ok, Mac Pro= $3,348.00
    Lenovo Lenovo S30= $4,128.00
    Reply
  • RollingCamel - Thursday, January 02, 2014 - link

    No engineering software performance review? Reply
  • wheelhot - Saturday, January 04, 2014 - link

    I wonder why as well, I don't see anything wrong to do some windows engineering software tests Reply
  • damianrobertjones - Thursday, January 02, 2014 - link

    "I like the new Mac Pro’s chassis a lot. It’s a risk, but one that absolutely must be taken if the desktop is to continue to exist and thrive."

    Absolute rubbish... Sorry. We simply DO NOT have to change the case. Sure, of course, the option of having a case like this is fantastic but simply changing the case DOES NOT enable this to 'thrive'.
    Reply
  • AnnonymousCoward - Sunday, January 05, 2014 - link

    Agreed Reply
  • platinumjsi - Thursday, January 02, 2014 - link

    What are you using to monitor the GPU usage? I have been looking for a app for OSX for a while without any joy? Reply
  • hoboville - Thursday, January 02, 2014 - link

    Sigh, lots of fanboyism in the comments, without recognition that this is just a slower, more expensive PC; the only difference is that it can run OSX-only programs. Here are some hardware facts:

    This machine is basically a dual-GPU Xeon workstation with 2x 7970 in Crossfire (D700). Nothing special. Ok, so each 7970 has 6 GB of RAM. Well, each 7970 is also underclocked...and the RAM isn't ECC, so if you want one of these workstations for serious GPU compute, you're going to be eating bit errors, and your data is going to be suspect. Real GPGPUs use ECC RAM, period. If ECC doesn't matter, then dual/triple/quad AMD GPUs of any stripe will do you fine. Even better now that R9 290(X) are out, and they have 4 GB of RAM.
    What if I need more local storage than 200 GB? Most raw video is bigger than that. So your files are stored on a NAS, but this machine only has gigabit NICs. If you want to take advantage of RAID throughput for massive files, you'll need 10 Gbit. But this machine can't use 10 Gbit NICs, as there's no place to put them.

    This workstation, then, isn't for serious compute, isn't for those who have big files, and isn't for those who want to use the most powerful GPUs for rendering / modeling. That crown belongs to Nvidia; there are plenty of benchmarks out there attesting to that fact. You can't get Nvidia on this workstation, so what then? I guess you buy this machine for Mac-specific applications.

    And that's what this machine is for--Mac OS. If you want more power, UNIX/Linux/Windows boxes are where you go (not Apple-restricted Unix either). Are they bigger? Yes. Hotter? Yes. In fashion because small = sexy? Nope. And that's what this comes down to: looks, style, sleekness, and other metrics not relevant to performance. Sure, there's a niche for those who use Mac-only software, but what if you want to do more? Apple has convinced people that style and a walled garden of software are worth more than function. Stop wasting your money and drop OSX!
    Reply
  • pmhparis - Thursday, January 02, 2014 - link

    Snort, the ignorant NVidia hobo fanboy complains of Mac fanboys...

    Professionals don't store video projects on internal storage, they use DAS devices like Thunderbolt or USB3 disk enclosures.
    Reply
  • Houston1 - Thursday, January 02, 2014 - link

    Incorrect. Reply
  • Chirpie - Friday, January 03, 2014 - link

    No, it's pretty spot on. Every video environment I've worked in does not keep the project files and assets on the machine. It's a very normal/typical way of doing business with many terabytes worth of files. I'll go one step further though and say that it's not just USB and Thunderbolt but even duplexed gigabit ethernet or optical, or a number of other flavors as well. Reply
  • FunBunny2 - Saturday, January 04, 2014 - link

    Steve was the best snake oil salesman since Barnum. How Apple can contend that it spends billions and billions of dollars on R&D is baffling. It can't have cost that much to devise a square-cornered rectangle, or a single-cornered Cube. The parts, 99.44% of them, are off the shelf from suppliers. Reply
  • DotFab - Thursday, January 02, 2014 - link

    Many thanks for this impressive review of the MacPro 2013!!
    You treat every point and more I had in mind!
    A huge and fine work, I really feel like I know what's the MP 2013 now.

    Happy new year to AnandTech and to everyone !
    Reply
  • HisDivineOrder - Thursday, January 02, 2014 - link

    I love that Anand is discussing his well-known Apple addiction and the subsequent fanboyism he engages in. It is good. Admitting he has a problem means he can perhaps one day overcome it.

    One day. Today is not the day.

    How can anyone in their right mind suggest buying such a limited-expandability computer for anything NOT a low-power HTPC? If you pay this much money, you really ought to be able to easily change out the GPU(s).

    When you're so hooked on a company's products you're rubbing them like Gollum rubbing the Ring of Power, I think you've got to stop and take stock.
    Reply
  • Chirpie - Thursday, January 02, 2014 - link

    Uh, remind me again which low-power HTPC can run 16 4K video streams at once? Beyond that, why on earth would you buy this computer as a HTPC? The graphic cards would be a waste. If you're gonna bash, I demand some effort. Reply
  • Liquidmark - Friday, January 03, 2014 - link

    I don't see you admitting you have a problem with haterism. Reply
  • Wolfpup - Thursday, January 02, 2014 - link

    The problem with saying Apple's pricing isn't out of touch with reality is that you can't only compare this to high end workstations from other companies...this is Apple's only desktop-ish device. These things 10 years ago used to start in the mid $1000-2000 range, and with inflation that would be cheaper still. They were STILL expensive, but at least not absurdly so.

    Yeah, Xeons, etc. cost a lot, but Apple doesn't provide options for people who want a high end notebook or desktop for normal use...this is the closest they get, and it's at least 2x as expensive as it should be for its base unit (even with the Apple tax).
    Reply
  • OreoCookie - Thursday, January 02, 2014 - link

    I don't understand this comment: Apple does cover this *price range* with the Mac mini and the iMac. The 27" iMac sports up to 32 GB RAM, a decent graphics card and 4 fast cores. And since these machines come with Thunderbolt, you can expand them with the same ultrafast peripherals that also attach to a Mac Pro. The only thing that Apple does not offer to you is the product that you want for the price that you want (the xMac, a traditional tower system). Reply
  • lilo777 - Thursday, January 02, 2014 - link

    iMac is not a classic desktop. It's an all-in-one computer with its inherent disadvantages (i.e. the CPUs, GPUs etc. usually get obsolete much faster than the monitor) Reply
  • OreoCookie - Thursday, January 02, 2014 - link

    I understand what the iMac is. It is nevertheless a desktop computer that covers the price segment between $1200 and $3500 in Apple's line-up. Compared to 10, 15 years ago, the demographics have changed: people have migrated to mobile computers for the most part, and the demographic who still use desktops are often quite happy using iMacs (e. g. have a look at The Verge's review of the Mac Pro where the video editors admit to using iMacs and Mac Pros, for instance).

    Certainly, if you want or need a traditional headless computer, Apple simply does not serve your needs. But looking ahead, Broadwell CPUs will be soldered to the mainboards. Most people will rely on the integrated graphics (which become increasingly powerful).
    Reply
  • Regular Reader - Friday, January 03, 2014 - link

    How often do you replace a CPU or graphics card? If you're a serious gamer, then Macs have never been the right machine for you and never will be. For people like me, the 27" iMac is perfect because we don't need a classic desktop. There's little reason to need a true desktop machine these days. AIO is the way to go. So much easier, you can get most of the power, they're quieter...the advantages far outweigh the negatives. Reply
  • wallysb01 - Friday, January 03, 2014 - link

    The iMac is not quieter than decent desktop PC. Maybe you’ve just been around absurdly loud computers? For the $1500-$2000 you pay for an iMac, you should be able to buy a pretty much silent PC with as much or more power than the iMac. Oh, and you get your choice of monitor or you can keep your old one that you still like just fine. Reply
  • Chirpie - Friday, January 03, 2014 - link

    I dunno man, usually the graphic card alone is enough to make it louder than an iMac. At least, until you're willing to start mucking with the RPMs through various mods/software hacks. Reply
  • Regular Reader - Friday, January 03, 2014 - link

    27" iMac. There's no need for much more. You can upgrade everything but the CPU.

    I used to be the DIY PC build type. I got sick of wires everywhere, intermittent cooling issues, and just generally having a desktop full of crap. I've had a 4-core i7 27" iMac for nearly 4 years now, haven't looked back, and it is more than enough to run OSX and Windows in parallel, even only having 8 GB of PC1333 RAM. And with a firmware update, my old 27" can support up to 32 GB. I have Thunderbolt even, along with FW800. If you need external SATA, OWC makes a component to do that (though you have to send your machine to them to get it installed). I'd happily buy another if I needed to do even more serious work than I do.
    Reply
  • wallysb01 - Friday, January 03, 2014 - link

    Actually, you can upgrade the CPU. What you can’t do is replace the monitor. Reply
  • Liquidmark - Friday, January 03, 2014 - link

    You can attach external monitors to the iMac. Reply
  • Liquidmark - Friday, January 03, 2014 - link

    "The problem with saying Apple's pricing isn't out of touch with reality is that you can't only compare this to high end workstation's from other companies..."

    This is a workstation. It has workstation components and is formally classified as such, so you kinda have to compare it to *gasp* OTHER workstations and match their spec as closely as possible to see if the price of the Mac Pro is reasonable or not. Anand is absolutely correct in comparing this to a HP Z420 which is HP's mid-range workstation right now.

    "this is Apple's only desktop-ish device."

    Ever heard of the Mac Mini?

    "Apple doesn't provide options for people who want a high end notebook or desktop for normal use.."

    Ever heard of the Macbook Pro or iMac?

    You can't discount the fact that the Mac Pro has Xeons under the hood just because you don't like the other options Apple offers. If the Mac Pro has Xeons under the hood, then you have to factor that into the price of the device. You don't get to ignore the engine in a Bentley to claim that a Bentley is overpriced compared to a Toyota Corolla. You don't get to say that it should have an engine from a Toyota Corolla, or that it shouldn't have the luxury features and hand-crafted attention to detail that come with a Bentley. If you want a Toyota Corolla, go buy a Toyota Corolla. If you want a Dodge Viper, go buy that. Don't tell Bentley to make a Toyota Corolla or a Dodge Viper, and don't expect to buy a Bentley at the cost of a Toyota Corolla or a Dodge Viper either, because that dismisses the fact that there are differences between the three.

    "it's at least 2x as expensive as it should be for it's base unit"

    Not according to actual price comparisons it isn't.
    Reply
  • Bobs_Your_Uncle - Thursday, January 02, 2014 - link

    I'm still wondering how that Nokia Lumia 1020 review is coming along !? Reply
  • p51mustang6 - Thursday, January 02, 2014 - link

    You should really do research, just a little, prior to writing a review like this. You make a bold statement saying how the Mac Pro is so great and so cheap, yet you compared it to two companies far from known for making anything professional. Try comparing the Mac Pro to the Origin Genesis Pro-X2 from Origin PC. It starts out with considerably higher specs at a LOWER price tag. They also offer up to dual Intel Xeon E5-2697 dodeca-core processors (that's 12 cores per CPU for those of you who couldn't handle that) for a total of 24 cores (or twice that of the Mac Pro), up to dual 12GB NVIDIA Quadro K6000s (Apple doesn't even offer anything closely comparable lol), up to 256GB of RAM (Apple offers up to 64GB), up to 4TB of SSD storage (compared to Apple's 1TB, granted PCIe), standard liquid cooling (which Apple does not offer), and up to an additional 12GB NVIDIA Tesla K40 (once again, Apple offers nothing of the sort). Origin comes with a one year warranty upgradable to 3 years, but also comes standard with LIFETIME support with 24/7 United States based support (I wonder where Apple's support, of which you get 90 days, is based...lol). The starting price of the Origin is $3,712 compared to $3,999 for the Mac Pro, which does not come with dual processors. The trash can is a complete rip off which requires you to go out and use its Thunderbolt ports in order to do any real upgrading, so you will have random things sitting on your desk; the Origin is perhaps bigger, but at least all the goods will always be inside of it. Instead of spending all their time trying to make a computer a cylinder, maybe Apple should have tried to compete with the real heavy hitters such as Origin PC. Reply
  • Louiek - Thursday, January 02, 2014 - link

    Hi, I am currently myself trying to compare a maxed out mac pro ~ $10k CAD with other OEM workstations of similar spec. I looked into origin but I can't seem to build a similar spec'd (i.e. single Xeon E5-2697v2 etc) that will cost under $11k CAD. Is there something I am missing as your comment leads me to believe that I can build a cheaper PC with origins with similar specs. Reply
  • Liquidmark - Friday, January 03, 2014 - link

    You can't; the Origin machine only offers extreeeeme options that are ideal for gaming with neon lights. Its solution to everything is to throw more cores and more RAM at it, even though the RAM is slower... Reply
  • stingerman - Thursday, January 02, 2014 - link

    Sorry dude, tried to configure a comparable system and it costs more than the Mac Pro... Reply
  • Liquidmark - Friday, January 03, 2014 - link

    Ok, I'll bite...

    Mac Pro:

    2.7GHz 12-core with 30MB of L3 cache
    32GB (4x8GB) of 1866MHz DDR3 ECC
    256GB PCIe-based flash storage
    Dual AMD FirePro D700 GPUs with 6GB of GDDR5 VRAM each
    User's Guide (English)

    $8000 and weighs roughly 11 pounds

    GENESIS Pro-X2

    ASUS Z9PE-D8 WS
    Dual ORIGIN FROSTBYTE 120 Sealed Liquid Cooling Systems
    Dual Intel XEON E5-2630 v2 Hex-Core 2.6GHz (3.1GHz Turbo) 15MB Cache (That's 12 cores at a lower clock than the Mac Pro build)
    1000 Watt Corsair RM1000
    Dual 6GB NVIDIA Quadro 6000 (Non-SLI)
    32GB Kingston ECC 1600MHz (4x8GB)
    Genuine MS Windows 7 Professional 64-Bit Edition
    250GB Samsung 840 Evo Series
    ASUS 24X CD/DVD Burner
    On Board Audio
    Onboard Network Port
    ORIGIN Wooden Crate Armor
    1 Year Part Replacement and 45 Day Free Shipping Warranty with Lifetime Labor/24-7 Support
    ORIGIN Recovery USB3.0 Flash Drive
    ORIGIN PC G8 T-Shirt XL
    Microsoft Internet Explorer

    $11,017 and weighs over 70 pounds.

    Now, before anyone says anything, the tee shirt was free and the water cooling was the only cooling option offered, plus they give a free games offer that I didn't take. Though I probably should have, since apparently workstations are all about pro gaming, neon lights and being extreeeeeme.
    Reply
  • Liquidmark - Friday, January 03, 2014 - link

    Also, if anyone wants to argue that you can get dual 12-core on the origin machine, I'll simply point out that, at spec, I'd almost be able to buy two 12 core Mac Pros. Just saying. Reply
  • LorneKwe - Friday, January 03, 2014 - link

    How about we replace those Quadro 6000's with Radeon 7970s which are what the Mac Pro has inside. You can make any price comparison look like shit when you erroneously drop a few thousand dollars worth of GPUs into the build.

    Dual 6GB 7970s cost $1,400.

    Using that, we can build out a 24C Dual Xeon 2697v2 workstation, with 64GB of RAM, dual 512GB 840 Pro SSDs in RAID0, H100i cooling on each CPU, and it would come to around $10,000. Same price as the Mac Pro's kitted out, 12C config, double the CPU power, and a good chunk more GPU power as they aren't underclocked.

    You also have the option of going with 4x 7970s if you choose to, and if you forgo the ASUS Z9PE board for a proper workstation board, you can go up to 512GB of RAM instead of being capped at 64GB. If you want dual Quadro K5000s instead of 7970s, raise the PC's price by around $1600, but understand that these will outperform Apple's D700 pair by an enormous margin.

    If you're looking for portability, or absolutely require OS X for what you do; choose the Mac Pro. If you're looking for computer power that won't throttle down when faced with tough workloads; have a full-fledged workstation built for you and get much more bang for your buck.
    Reply
  • stingerman - Sunday, January 05, 2014 - link

    Sorry Dude, this is a Pro Workstation; that means workstation class GPUs. The 7970 is a great GPU, but there is a reason it doesn't go into workstations... Reply
  • madwolfa - Sunday, January 05, 2014 - link

    And what is it? Drivers? D700s in Mac Pro don't even have ECC memory enabled in them. Reply
  • wheelhot - Monday, January 06, 2014 - link

    yes, the drivers provided (and I believe you can only test this in Windows?) will determine whether these are actual workstation class GPUs or just Radeons with the FirePro name slapped on. I'm seriously hoping it ships with an actual workstation GPU driver, as that would greatly benefit the software I use. Reply
  • LorneKwe - Tuesday, January 07, 2014 - link

    I'd agree with you if we didn't have solid evidence that Apple's D700 is a repurposed 7970; lack of ECC memory being the best clue, and pricing being a great clue as well. Reply
  • scarhead - Saturday, January 04, 2014 - link

    One Xeon E5-2697 processor costs $2,614. Two cost $5,228. I doubt your $3,712 figure for a complete system is accurate. Reply
  • p51mustang6 - Thursday, January 02, 2014 - link

    Apparently any comment disagreeing with the author gets deleted. Reply
  • chaos215bar2 - Friday, January 03, 2014 - link

    Clearly not. Perhaps your comment was rude, trolling, and / or being argumentative without actually adding anything to the discussion? Reply
  • Johan Niklasson - Friday, January 03, 2014 - link

    Great review - thanks. I just wanted to add that the new Mac Pro's design and finish matter more than most people realize. Those who work in graphics and video applications are usually artists with some sense of beauty and aesthetics. I am sure that the much more pleasant looking Mac Pro is a welcome addition to such people's workspaces.

    Also the smaller form factor and the silence will make it hard to resist!
    Reply
  • james-bond - Friday, January 03, 2014 - link

    Thanks for the great review. I wonder if some of the USB ports should have been on the other side of the chassis. Seems like the power cord and monitors are things that are rarely unplugged and are usually on the back side of current case designs. Having to reach around to plug and unplug a flash drive seems inconvenient.
    Please keep an eye out for an upgrade to Logic Pro (Apple's pro audio app). I would love for you to benchmark it the same way you did Final Cut. I doubt Apple will be able to harness the second GPU for compute in audio applications due to the latency constraints of real time audio monitoring. Audio seems to have taken a back seat to video apps in this version of the Mac Pro.
    Reply
  • Ppietra - Friday, January 03, 2014 - link

    The machine is round so there isn’t a true back or front! You can position it in any orientation you like without a fuss. Ports to the front, back or side... Reply
  • scarhead - Monday, January 06, 2014 - link

    I think OP meant ports on two opposite sides. Monitor, keyboard, drive cables on one side. Empty USB & TB ports on the other. Like he said, to make plugging in USB easy. Reply
  • Ppietra - Monday, January 06, 2014 - link

    I don’t see why you would need that. The geometry and size of the machine makes that irrelevant... that would be something you would find useful with rectangular towers, where the back is inaccessible due to size and geometry and it is extremely complicated to use in different orientations. With this machine you can put it in whatever orientation. Several orientations make it easy to access all ports... why would you make it easy to access just some ports when you can do it for all!? Reply
  • wheelhot - Saturday, January 04, 2014 - link

    Anyone has any idea how the GPUs in the nMP performs in Windows? Does it perform with proper workstation GPU driver support (meaning softwares like SolidWorks, SolidEdge, NX, PTC Creo takes advantage of) or it uses the regular Radeon GPU driver?

    I'm curious why till now there's no one who tests the nMP with SpecViewPerf
    Reply
  • Comed1an - Saturday, January 04, 2014 - link

    Comparing hardware Y to hardware Z is pointless if there are no benchmarks to compare.

    What would be interesting is firing up some real life tests on software that is available for both OS X and Windows and then seeing what kind of DIY PC hardware is needed to match the Mac Pro.

    Thanks for the review.
    Reply
  • prashyboby - Saturday, January 04, 2014 - link

    Hi, in the PCIe layout section you seem to have mentioned that the PLX switch gives 15 GB/s of throughput to the CPU. How is that possible? The third CPU interface, to which the switch is connected, has only 8 PCIe 3.0 lanes, so it should max out at about 8 GB/s. Correct me if my understanding is wrong! Reply
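    For context on the numbers in the question above, here is the raw PCIe 3.0 arithmetic (a quick sketch: PCIe 3.0 signals at 8 GT/s per lane with 128b/130b line encoding):

```python
# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding, 8 bits per byte.
GT_PER_S = 8e9
ENCODING = 128 / 130

def pcie3_bandwidth_gbytes(lanes):
    """Theoretical one-direction bandwidth in GB/s of a PCIe 3.0 link."""
    return lanes * GT_PER_S * ENCODING / 8 / 1e9

print(round(pcie3_bandwidth_gbytes(8), 2))   # x8 link: 7.88
print(round(pcie3_bandwidth_gbytes(16), 2))  # x16 link: 15.75
```

So an x8 link tops out near 8 GB/s per direction, while a figure around 15 GB/s corresponds to an x16 link; that difference is presumably the source of the discrepancy being asked about.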
  • Booster - Saturday, January 04, 2014 - link

    Looks like a trash can to me. I bet people will be confusing new Mac Pros with trash cans all the time, dumping in them all sorts of garbage, throwing in cigarette butts... Reply
  • stingerman - Sunday, January 05, 2014 - link

    You're just hilarious. What do you think a traditional PC case looks like? A box, some like a dumpster? The Mac Pro is a beautiful machine, with this fit and finish even in the interior. Reply
  • HeavyClocker - Sunday, January 05, 2014 - link

    Buying this machine for gaming is just a BAD IDEA! Reply
  • stingerman - Sunday, January 05, 2014 - link

    Hmmm, I got an iPad Air for gaming, but editing movie-length 4K will be great on this Mac Pro. Reply
  • estern53 - Monday, January 06, 2014 - link

    Now all we need Apple to do is make a prosumer version of the Mac Pro for under 2k for the rest of us. Reply
  • wheelhot - Monday, January 06, 2014 - link

    They already did, it's called the iMac, or MacMini Reply
  • tipoo - Monday, January 06, 2014 - link

    A Mini redesigned as a mini version of this might be cool, especially with a discrete GPU. Reply
  • affinityseattle - Wednesday, January 08, 2014 - link

    The LR test is a bit off. Lightroom is not great at exporting. As a pro, I've found the trick is to stack export processes: the more cores you have, the more exports it can handle, and the better it utilizes the CPU. So the iMac i5 might be faster on a single export, but the Mac Pro should spank it if you start dividing the export up. Also, on a 1,000-image export the iMac and MBP will overheat and throttle the CPU (TDP limits). For a pro machine, these types of usage are relevant. Reply
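The "stacked exports" idea above can be sketched in code. This is purely illustrative: `export_image` is a hypothetical stand-in for whatever actually renders one file (Lightroom itself is driven from its UI, not from a script), but it shows why splitting one large export into parallel batches scales with core count:

```python
from concurrent.futures import ThreadPoolExecutor

def export_image(path):
    # Placeholder: a real export would decode and render the raw file;
    # here we just tag the name so the data flow is visible.
    return f"{path}.jpg"

def export_in_batches(paths, workers=4):
    """Run the export of `paths` across `workers` parallel workers,
    keeping more cores busy than a single serial export would."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(export_image, paths))

results = export_in_batches([f"img{i:04d}.cr2" for i in range(8)], workers=4)
print(results[0])  # img0000.cr2.jpg
```

With one worker per physical core, an export-bound workload like this is where a 12-core machine can pull ahead of a faster-clocked quad-core.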
  • GRAFiZ - Wednesday, January 08, 2014 - link

    It's an impressive product... but, as with most Apple designs, form comes first and function second. The fact that it has to scale CPU speed to reduce heat says all I need to know. Obviously the Apple fanbois will argue "THAT'S A THEORETICAL SITUATION!!!", but who cares? The fact is I'm buying brand new parts at the highest premium on the market; thermal throttling should NEVER EVER be necessary.

    Bottom line: like all Apple products, it's impressive... but you can do better for less money elsewhere.
    Reply
  • DotFab - Wednesday, January 08, 2014 - link

    You've read it all wrong!

    The workload that slowed the MP down was purely artificial.
    It's not anything actual programs run.
    The conclusion to draw is that the overall thermal dissipation is great!

    You missed the point of the test.
    Reply
  • lukarak - Thursday, January 09, 2014 - link

    You clearly don't care, but the people who will actually use it will care, and for them it won't throttle under the loads they actually run.
    Only haters obsess over a situation that will never take place, so that, in their frustration and insignificance, they can gloat about something.
    Reply
  • GRAFiZ - Thursday, January 09, 2014 - link

    No hater here... I just think it's poor design that a brand new product costing as much as TEN THOUSAND DOLLARS cannot handle a theoretical max load without overheating.

    I can build a dual octo-core Xeon E5 v2 system for far less that could process the same simulated workload without any thermal or processor-speed restrictions at all.

    But I guess if you want the smallest little desktop made, it's really your only choice. I just find it funny that when you spend as much on a desktop computer as a brand new car might cost, such limitations need to be accepted at all.
    Reply
  • wordsofpeace - Friday, January 10, 2014 - link

    If Apple had made it 10mm wider and maybe 20mm taller, the extra thermal capacity could have allowed more headroom. But no, it had to be 9.9" x 6.6" and 11lbs. It's almost as if the marketing dept. decided on the specs with the most wow factor and poor old engineering had to come up with a solution.
    Don't get me wrong, I'd love one on my desk, but I too don't understand Apple's addiction to form over function.
    Reply
  • tsk2 - Thursday, February 20, 2014 - link

    I share your view. I enjoy my Mac Pro 2008 (packed with all the stuff I need) and a nice Cinema Display. Sure, both are big, but they look nice and I don't feel limited. I have tried small, nice-looking boxes in the past, and my experience has always been that it takes a lot of effort to expand them, add cables, and still keep that uncluttered feel. A bigger box I can live with, but this solution, albeit "initially" good-looking, is too short-term. I wish Apple would notice that there are users who fall into our category. Reply
  • Dandu - Friday, January 10, 2014 - link

    Hi,

    It's possible to use a 2560 x 1440 HiDPI resolution with an NVIDIA card, a 4K display and the (next) version of SwitchResX.

    I have tested it: http://www.journaldulapin.com/2014/01/10/ultra-144...
    Reply
  • Haravikk - Sunday, January 12, 2014 - link

    The news about the USB3 ports is a bit strange; doesn't that mean a maximum throughput of 4Gbps? I know most USB3 storage devices will struggle to push past 500MB/sec, but that seems pretty badly constrained. Granted, Thunderbolt is the interface that any storage *should* be using, but the choices are still pretty poor for the prices you're paying, and no one offers Thunderbolt-to-USB3 cables (only insanely priced hubs with external power).

    Otherwise the review is great, though it'd be nice to see more on the actual capabilities of Apple's FirePro cards. Specifically, how many of the FirePro-specific features do they have, such as 30-bit colour output, EDC, ECC cache memory, order-independent transparency (under OpenGL) and so on? I'm assuming they do, given that they're using the FirePro name, but we really need someone to cover it in depth to finally put to rest claims that consumer cards would be better ;)
    Reply
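On the USB3 point above: the ~500MB/sec ceiling that comment mentions falls straight out of USB 3.0's standard link parameters (a back-of-the-envelope sketch, assuming 5Gb/s SuperSpeed signalling and 8b/10b encoding, before protocol overhead):

```python
# Standard USB 3.0 SuperSpeed figures (assumed, not from the review):
# 5 Gb/s raw signalling with 8b/10b line encoding.
SIGNAL_GBPS = 5.0
ENCODING = 8 / 10
BITS_PER_BYTE = 8

# Per-link payload ceiling in MB/s, before protocol overhead.
per_link_mb = SIGNAL_GBPS * 1000 * ENCODING / BITS_PER_BYTE
print(f"per-link ceiling: {per_link_mb:.0f} MB/s")  # 500 MB/s
```

Real devices land lower still once protocol overhead is counted, which is why even good USB3 SSDs hover in the 400-450MB/sec range; the question of how much of that ceiling the Mac Pro's shared controller leaves per port is exactly what the comment is asking.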
  • eodeot - Monday, February 24, 2014 - link

    I'd love a realistic comparison with an i7 4770K and, say, a 780 Ti.

    You also compare the 12-core version to older 12-core versions that hide behind (fairly) anonymous Xeon labeling, which obscures the chips' age (Sandy/Ivy Bridge/Haswell...). I'd like to see in how many real-world applications a 12-core chip actually performs faster. Excluding 3D work and select video rendering, I doubt there is much need for the extra cores. You note how it's nice to have a buffer of free cores for everyday use during heavy rendering, but I never noticed a single hiccup or slowdown during 3D rendering on my i7 4770K with all 8 logical cores taxed to their max. How much better than the "butter smooth" performance already provided by a much cheaper CPU can you get?

    Also, you compare non-Apple computers with the same ridiculous CPU/GPU combinations. Who in their right mind would choose a 4-core Xeon chip over a Haswell i7? The same goes for a silly "workstation" GPU over, say, a Titan. Excluding dated OpenGL 3D apps, no truly modern workstation benefits from a "workstation" GPU, unless we count select CUDA-based 3D renderers like iray and V-Ray RT that can benefit from 12GB of RAM. The GPUs included with the Apple Mac Pro have 2GB... Not a single valid reason a sane person would buy such a card. Not one.

    Also, you point out how gaming makes the most sense under Windows, but make no such recommendation for 3D work. Like games, 3D programs perform significantly better under DirectX, which leaves Windows as the sole option for any serious 3D work...

    I found this review interesting for the design Apple chose, but everything else reads as one-sided praise...
    Reply
  • pls.edu.yourself - Wednesday, February 26, 2014 - link

    QUOTE: "The shared heatsink makes a lot of sense once you consider how Apple handles dividing compute/display workloads among all three processors (more on this later)."

    Can anyone help point me to this? I think one of my GPUs is not being used.
    Reply
  • PiMatrix - Saturday, March 08, 2014 - link

    Apple fixed the HiDPI issue on the Sharp K321 in OS X 10.9.3. Works great. Supported resolutions are the native 3840x2160, plus HiDPI: 3200x1800, 2560x1440, 1920x1080, and 1280x720. You can also define more resolutions with QuickResX, but the above seem to be enough. 3200x1800 looks fantastic on this 4K display. Great job Apple! Reply
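The HiDPI modes listed above follow the usual OS X pattern: a WxH HiDPI mode is rendered into a 2Wx2H backing store and then scaled to the native panel, which is why 1920x1080 HiDPI maps 1:1 onto a 3840x2160 display while 3200x1800 makes the GPU render and downscale an even larger image. A small sketch of that assumed behaviour (not from the review):

```python
PANEL = (3840, 2160)  # native resolution of the 4K panel in question

def backing_store(w, h):
    """Pixels actually rendered for a WxH HiDPI mode (2x in each axis)."""
    return (2 * w, 2 * h)

for mode in [(3200, 1800), (2560, 1440), (1920, 1080), (1280, 720)]:
    rendered = backing_store(*mode)
    note = "1:1 panel match" if rendered == PANEL else "scaled to panel"
    print(f"{mode[0]}x{mode[1]} HiDPI -> rendered at "
          f"{rendered[0]}x{rendered[1]} ({note})")
```

So the "sharpest" modes are not free: 3200x1800 HiDPI means pushing a 6400x3600 backing store every frame.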
  • le_jean - Monday, March 10, 2014 - link

    Any information on updated 60Hz compatibility for Dell's UP2414Q in 10.9.3?
    I would be very interested in feedback on these combinations:
    nMP & Dell UP2414Q
    rMBP & Dell UP2414Q

    I remember that in AnandTech's review of the late-2013 nMP there were issues with that specific display, while the Sharp and ASUS performed just fine.
    Reply
  • philipus - Monday, April 14, 2014 - link

    As a happy photo amateur, I have to say the previous Mac Pro is good enough for me. I have the early 2008 version, which I like because of its expandability. Over the years I have added drives, RAM and, most recently, a Sonnet Tempo Pro with two Intel 520s in order to get a faster system. As cool and powerful as the new Mac Pro is, it would cost me quite a lot to add Thunderbolt boxes for the drives I currently use, so it is not worth it for me.

    I do agree that it is about time a manufacturer of desktop computers pushed the platform envelope. It's been tediously samey for a very long time. I'm not surprised it was Apple that made the move - it's in Apple's DNA to be unexpected design-wise. But as much as it is nice to see a radical re-design of the concept of the desktop computer, I think a future version of the Mac Pro needs to be a bit more flexible and allow more user-based changes to the hardware. Even if I could afford the new Mac Pro - and I would also place it on my desktop because it's really pretty - I wouldn't want to have several Thunderbolt boxes milling around with cables variously criss-crossing and dangling from my desk.
    Reply
  • walter555999 - Saturday, June 07, 2014 - link

    Dear Anand, could you post how to connect a UP2414Q to a MacBook Pro Retina (2013)? I have tried a Mini DisplayPort-to-HDMI cable, but there is no image on the Dell monitor. Thank you very much. Walter Reply
  • Fasarinen - Saturday, August 09, 2014 - link

    Thanks for an excellent review. (And hello, everybody; this is my first post on this site.)

    I noticed, in the "GPU choices" section, what seems to be a very useful utility for monitoring the GPU. The title at the top of the screen is "OpenCL Driver Monitor"; the individual windows (which display graphs of GPU utilisation) seem to be titled "AMDRadeonXL4000OpenCLDriver".

    I'm probably just being dim, but a bit of googling doesn't shed much light. If anybody could point me to where this utility can be obtained, I'd be most grateful.

    Thanks ....
    Reply
