Thunderbolt 2

The new Mac Pro integrates three Intel Falcon Ridge Thunderbolt 2 controllers. These are fully configured controllers, each driving two of the Thunderbolt 2 connectors on the back of the machine, for a total of six ports.

Pairing Thunderbolt 2 with Ivy Bridge EP is a bit tricky, as Apple uses Thunderbolt 2 for display output as well as data. Typically you’d route all display output through processor graphics, but IVB-EP has no integrated graphics core. On a DIY PC you enable display output over Thunderbolt 2 by running an extra cable out of the discrete GPU and into a separate input that muxes the signal with PCIe and ships it out via another port as Thunderbolt. Here’s where Apple’s custom PCB work comes in handy, as all of this is done internally in the Mac Pro. The FirePro’s display outputs are available via any two of the six Thunderbolt 2 ports, as well as the lone HDMI port on the back of the Mac Pro.

How does Thunderbolt 2 differ from the original? For starters, it really would’ve been more accurate to call it Thunderbolt 4K. The interface is fully backwards compatible with Thunderbolt 1.0. You can use all previous Thunderbolt peripherals with the Mac Pro. What’s new in TB2 is its support for channel bonding. The original Thunderbolt spec called for 4 independent 10Gbps channels (2 send/2 receive). That meant no individual device could get access to more than 10Gbps of bandwidth, which isn’t enough to send 4K video.

Thunderbolt 2 bonds these channels together to enable 20Gbps in each direction. The total bi-directional bandwidth remains at 40Gbps, but a single device can now use the full 20Gbps. Storage performance should go up if you have enough drives/SSDs to saturate the interface, but more importantly you can now send 4K video over Thunderbolt. Given how big of a focus 4K support is for Apple this round, Thunderbolt 2 mates up nicely with the new Mac Pro.
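As a sanity check on those numbers, here's the raw arithmetic for an uncompressed 4K stream (figures are mine, using the common 3840 x 2160 UHD mode at 24bpp and 60Hz):

```python
# Raw (pre-blanking) bandwidth of a 3840 x 2160, 24bpp, 60Hz stream.
width, height, bpp, refresh = 3840, 2160, 24, 60
raw_gbps = width * height * bpp * refresh / 1e9
print(f"{raw_gbps:.2f} Gbps")  # ~11.94 Gbps: too much for a single 10Gbps
                               # channel, comfortable on a bonded 20Gbps one
```

Real display links need blanking intervals on top of this raw pixel rate, which is why the article's figure for the same mode lands above 14Gbps.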

So far I’ve been able to sustain 1.38GB/s (11Gbps) of transfers over Thunderbolt 2 on the Mac Pro. Due to protocol overhead and the PCIe 2.0 x4 limit (16Gbps) behind each controller, you won’t be able to get much closer to Thunderbolt 2’s peak rate.
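The ceiling is easy to see with some quick unit conversion (my arithmetic; each Falcon Ridge controller hangs off a PCIe 2.0 x4 link):

```python
# Why ~16Gbps is the practical ceiling: each TB2 controller sits on a
# PCIe 2.0 x4 link, and PCIe 2.0 uses 8b/10b encoding (20% line overhead).
lanes = 4
raw_gt_per_lane = 5.0        # PCIe 2.0: 5 GT/s per lane
encoding = 8 / 10            # 8b/10b: 8 data bits per 10 line bits
pcie_gbps = lanes * raw_gt_per_lane * encoding   # 16.0 Gbps of payload

measured_gbps = 1.38 * 8     # 1.38 GB/s sustained -> 11.04 Gbps
print(pcie_gbps, measured_gbps)
```

So the measured 11Gbps sits well under the 20Gbps link rate, but the gap to the 16Gbps PCIe back end is much smaller once protocol overhead is accounted for.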


The impact of chaining a 4K display on Thunderbolt 2 downstream bandwidth

Here’s where the six Thunderbolt 2 ports and three TB2 controllers come into play. Although you can daisy chain a 4K display onto the back of a Thunderbolt 2 storage device, doing so will severely impact available write bandwidth to that device. Remember that there’s only 20Gbps available in each direction, and running a 3840 x 2160 24bpp display at 60Hz already uses over 14Gbps of bandwidth for display alone. I measured less than 4Gbps of bandwidth (~480MB/s) available for writes to a Thunderbolt 2 device downstream from the Mac Pro when a 4K display was plugged into it. Read performance remained untouched since display data only flows from host to display, leaving a full 20Gbps available for reads. If you’re going to connect Thunderbolt 2 devices to the Mac Pro as well as a 4K display, you’ll want to make sure they aren’t on the same chain.
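A rough per-direction budget makes the asymmetry clear (numbers are my estimates; the ~14Gbps display figure includes blanking overhead on top of the raw pixel rate):

```python
link_gbps = 20.0      # one direction of a bonded Thunderbolt 2 channel
display_gbps = 14.0   # 3840x2160 @ 60Hz, 24bpp, incl. blanking overhead

# The host -> device direction carries both display data and writes.
write_budget_gbps = link_gbps - display_gbps       # ~6 Gbps theoretical
write_budget_mb_s = write_budget_gbps / 8 * 1000   # ~750 MB/s ceiling

# The measured ~480 MB/s (<4 Gbps) is lower still, suggesting protocol
# overhead eats into the remainder. Reads are unaffected: the
# device -> host direction carries no display traffic, so the full
# 20 Gbps stays available.
print(write_budget_gbps, write_budget_mb_s)
```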

If you number the 2 x 3 array of Thunderbolt ports starting in the top left corner and going left to right down the stack, you'll want to populate ports 1, 2 and 5 first before filling in the rest. The diagram below should help:
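One way to reason about placement is to model which ports share a controller. The pairing below is my inference from the population order above (ports 1, 2 and 5 landing on three different controllers); it is a hypothetical mapping, not an official Apple one:

```python
# Hypothetical port -> controller assignment for the 2 x 3 array,
# numbered left to right, top to bottom. Inferred from the suggested
# population order (1, 2, 5), not confirmed by Apple.
CONTROLLER_OF = {1: "A", 3: "A", 2: "B", 4: "B", 5: "C", 6: "C"}

def share_bandwidth(port_a: int, port_b: int) -> bool:
    """True if two ports sit behind the same TB2 controller."""
    return CONTROLLER_OF[port_a] == CONTROLLER_OF[port_b]

# Under this model, a 4K display on port 1 and a storage array on
# port 2 never contend for the same 20Gbps; the same pair on ports
# 1 and 3 would.
```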

267 Comments

  • zepi - Wednesday, January 1, 2014 - link

    How about virtualization, and for example VT-d support, with multiple GPUs and Thunderbolt, etc.?

    I.e. running Windows in a virtual machine with half a dozen cores + another GPU while using the rest for OS X simultaneously?

    I'd assume some people would benefit from having both OS X and Windows content creation applications and development environments available to them at the same time. Not to mention gaming in a virtual machine with a dedicated GPU instead of virtual machine overhead / incompatibility, etc.
  • japtor - Wednesday, January 1, 2014 - link

    This is something I've wondered about too, for a while now really. I'm kinda iffy on this stuff, but last I checked (admittedly quite a while back) OS X wouldn't work as the hypervisor and/or didn't have whatever necessary VT-d support. I've heard of people using some other OS as the hypervisor with OS X and Windows VMs, but then I think you'd be stuck with hard resource allocation in that case (without restarting at least). Fine if you're using both all the time but a waste of resources if you predominantly use one vs the other.
  • horuss - Thursday, January 2, 2014 - link

    Anyway, I would still like to see some virtualization benchmarks. In my case, I can pretty much make it an ideal home server with external storage, while taking advantage of the incredible horsepower to run multiple VMs for my tests, development, gaming and everything else!
  • iwod - Wednesday, January 1, 2014 - link

    I have been wondering how likely it is we get a Mac (non Pro) spec.
    Nvidia has realized that the extra die space spent on GPGPU wasn't worth it. After all, their main targets are gamers and gaming benchmarks. So with Kepler they decided to have two lines, one for GPGPU and one for the mainstream. Unless they change course again, I think Maxwell will very likely follow the same route. AMD is a little different since they are betting on OpenCL and Fusion with their APUs, so GPGPU is critical for them.
    That could mean Apple diverges its product line, with Nvidia in the non-professional Macs like the iMac and MacBook Pro (urg..) while continuing to use AMD FirePro in the Mac Pro line.

    Last time, it was rumoured Intel wasn't so interested in getting Broadwell, the 14nm die shrink of Haswell, out for the desktop, mostly because mobile/notebook CPUs have overtaken desktop and will continue to do so. It is much more important to cater to the biggest market. Not to mention, die shrinks nowadays are much more about power savings than performance improvements. So Intel could milk the desktop and server markets while continuing to lead in mobile and trying to catch up with 14nm Atom SoCs.

    If that is true, the rumor of a Haswell refresh on the desktop could mean Intel is no longer delaying just server products by a single cycle; it will be doing the same for the desktop as well.

    That means there could be a Mac Pro with Haswell-EP along with a Mac with a Haswell refresh.
    And by using Nvidia graphics instead of AMD, Apple wouldn't need to worry about the Mac eating into the Mac Pro market. There could also be less cost involved in not using a pro graphics card, supporting only three TB displays, etc.
  • words of peace - Wednesday, January 1, 2014 - link

    I keep thinking that if the MP is a good seller, maybe Apple could enlarge the unit so it contains a four-sided heatsink; this could allow for dual CPUs.
  • Olivier_G - Wednesday, January 1, 2014 - link

    Hi,

    I don't understand the comment about the lack of HiDPI mode here?

    I would think it's simply the last one down the list, listed as 1920x1080 HiDPI. It makes the screen appear as such to apps, yet photos and text render at 4x resolution, which is what we're looking for, I believe?

    I tried such a mode on my iMac out of curiosity, and while 1280x720 is a bit ridiculously small, it allowed me to confirm it does work since OS X Mavericks. So I'd expect the same behaviour to let me use my 4K monitor correctly with the Mac Pro?

    Am I wrong?
  • Gigaplex - Wednesday, January 1, 2014 - link

    The article clearly states that it worked at 1920 HiDPI but the lack of higher resolutions in HiDPI mode is the problem.
  • Olivier_G - Wednesday, January 1, 2014 - link

    Well, no, it does not state that at all. I read it again and he did not mention trying the last option in the selector.
  • LumaForge - Wednesday, January 1, 2014 - link

    Anand,

    Firstly, thank you very much for such a well researched and well thought out piece of analysis - extremely insightful. I've been testing a 6 core and a 12 core nMP all week using real-life post-production workflows, and your scientific analysis helps explain why I've gotten good or merely OK results in some situations and not always seen the kinds of real-life improvements I was expecting in others.

    Three follow up questions if I may:

    1) DaVinci Resolve 10.1 ... have you done any benchmarking on Resolve with 4K files? ... like FCP X 10.1, BMD have optimized Resolve 10.1 to take full advantage of split CPU and GPU architecture but I'm not seeing the same performance gains as with FCP x 10.1 .... wondering if you have any ideas on system optimization or the sweet spot? I'm still waiting for my 8 core to arrive and that may be the machine that really takes advantage of the processor speed versus cores trade-off you identify.

    2) Thunderbolt 2 storage options? ... external storage I/O also plays a significant role in overall sustained processing performance, especially with 4K workflows ... I posted a short article in the Creative Cow SAN section detailing some of my findings (nowhere near as detailed or scientific as your approach, I'm afraid) ... I'd be interested to know your recommendations on Tbolt2 storage.

    http://forums.creativecow.net/readpost/197/859961

    3) IP over Tbolt2 as peer-to-peer networking topology? ... as well as running the nMPs in DAS, NAS and SAN modes I've also been testing IP over Tbolt2 .... only been getting around 500 MB/s sustained throughput between two nMPs ... if you look at the AJA diskwhack tests I posted on Creative Cow you'll see that the READ speeds are very choppy ... looks like a read-ahead caching issue somewhere in the pipeline or lack of 'Jumbo Frames' across the network ... have you played with TCP/IP over Thunderbolt2 yet and come to any conclusions on how to optimize throughput?

    Keep up the good work and all the best for 2014.

    Cheers,
    Neil
  • modeleste - Wednesday, January 1, 2014 - link

    I noticed that the Toshiba 65" 4K TV is about the same price as the Sharp 32". The reviews seem nice.

    Does anyone have any idea what the issues would be with using this display?
