For those who might not be too familiar with the standard, Thunderbolt is Intel’s high-bandwidth, do-everything connector, designed as a potential future path for all things external to a system—displays, USB devices, external storage, PCI Express, and even graphics cards. Thunderbolt supports up to 10Gb/s of bandwidth (uni-directional) per port, double what USB 3.0 offers, but the cost to implement Thunderbolt tends to be quite a bit higher than USB. For that reason, not to mention the ubiquity and backwards compatibility of USB 3.0 ports, we haven’t seen all that many Thunderbolt-equipped Windows laptops; on the desktop side, the ports are mostly found on higher-end motherboards.

For those who need high-bandwidth access to external devices, however, even 10Gb/s may not be enough; 4K/60 video in particular can require around 15Gb/s. As we’ve previously discussed, Thunderbolt 2 doubles the available bandwidth to 20Gb/s per port (bi-directional) by combining the four 10Gb/s channels into two 20Gb/s channels, thus enabling 4K/60 support. The ASUS Z87-Deluxe/Quad is the first motherboard to support the standard, and as expected you get two 20Gb/s ports courtesy of a single Falcon Ridge controller. Combined with the HDMI port, that gives the board the potential to drive three 4K displays at once. And if Thunderbolt 2 support isn’t enough for your enthusiast heart, ASUS is also including their NFC Express accessory for Near-Field Communication.
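As a quick sanity check on the ~15Gb/s figure, here’s the back-of-the-envelope math in Python, assuming the standard CTA 4K timing (594 MHz pixel clock, i.e. 4400 x 2250 total including blanking) and 24-bit color:

```python
# Rough bandwidth requirement for uncompressed 4K/60 video,
# assuming the standard 4400x2250 total timing (594 MHz pixel clock).
H_TOTAL, V_TOTAL = 4400, 2250   # active 3840x2160 plus blanking intervals
REFRESH_HZ = 60
BITS_PER_PIXEL = 24             # 8 bits per channel, RGB

pixel_clock_hz = H_TOTAL * V_TOTAL * REFRESH_HZ       # 594,000,000
raw_gbps = pixel_clock_hz * BITS_PER_PIXEL / 1e9      # ~14.3 Gb/s

print(f"4K/60 link bandwidth: {raw_gbps:.2f} Gb/s")   # ~14.26 Gb/s
```

At roughly 14.3Gb/s before any link-layer overhead, a single 10Gb/s Thunderbolt channel clearly falls short, while a bonded 20Gb/s channel has headroom to spare.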

Here’s the short specifications summary for the Z87-Deluxe/Quad; we’re awaiting further details on expected availability and pricing, but given that the Z87-Deluxe/Dual runs $350, we’d expect the new board to come in above that price point.

  • 2 x Intel Thunderbolt 2 ports
  • 1 x HDMI port
  • 4 x DIMM slots
  • 3 x PCIe 3.0/2.0 x16 slots
  • 10 x SATA 6Gbit/s ports
  • 8 x USB 3.0 ports with USB 3.0 Boost
  • 8 x USB 2.0 ports
  • ATX form factor

Source: ASUS Press Release


  • r3loaded - Tuesday, August 20, 2013 - link

    That's great and all but apart from some incredibly expensive enterprise-grade storage systems and laptop docks, what would I use Thunderbolt for? Not trolling, I seriously want to know what stuff is available that I can actually use.
  • Kevin G - Tuesday, August 20, 2013 - link

    One thing I've seen that doesn't fit into storage, laptop docking or displays has been video capture boxes. They're outside of consumer prices, but they're out there.

    The other devices I've seen are Ethernet NICs and FireWire adapters. Both of these could arguably be put into the laptop docking category.
  • repoman27 - Tuesday, August 20, 2013 - link

    https://thunderbolttechnology.net/products gives a pretty decent overview of what's available. As for what you can actually use, that's sort of dependent on your particular situation.
  • Zalansho - Tuesday, August 20, 2013 - link

    I for one am very interested in the NFC functionality, though after a little digging on the Asus site it doesn't seem to support some of the neater things like Android Beam. Any chance of a review of this or the Dual, or the NFC unit by itself?
  • Kevin G - Tuesday, August 20, 2013 - link

    I'm just irked that Intel hasn't gotten channel bonding to work right until now. A four-channel link at 10 Gbit/channel has enough bandwidth to drive an 8K display @ 30 Hz. That would be a truly significant jump for professionals, who were generally limited to 2560 x 1600 from 2004 to 2012. Even now 4K is barely in the marketplace, and the new IO standards are being only slightly modified to support it (i.e. no 120 Hz 2D, or 3D @ 60 Hz per eye). Demand is there in the marketplace for higher-resolution displays, and Intel could have provided *the* de facto standard to get there if they hadn't botched TB bonding initially.
  • DarkXale - Tuesday, August 20, 2013 - link

    Careful not to confuse channels with lanes. Thunderbolt has 4 lanes, each of which is capable of transferring in only one direction; when paired, they produce 2 channels.
    (10up + 10down + 10up + 10down = 20up + 20down)

    The lanes would need to become half-duplex capable in order to drive 8K @ 30 Hz.

    For another example: PCI-E x16 has 32 lanes, each of which transfers in only a single direction.
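A quick sketch of the lane/channel arithmetic from the comment above (lane counts and rates as stated in the thread; the 8K figure counts active pixels only, no blanking):

```python
# Thunderbolt lane/channel arithmetic.
LANES = 4
GBPS_PER_LANE = 10          # each lane is unidirectional

# Thunderbolt 1 pairs the lanes into two 10 Gb/s full-duplex channels;
# Thunderbolt 2 bonds the two same-direction lanes into one 20 Gb/s channel.
up = down = (LANES // 2) * GBPS_PER_LANE              # 20 Gb/s each way

# Uncompressed 8K @ 30 Hz, 24-bit color (active pixels only):
eight_k_30_gbps = 7680 * 4320 * 30 * 24 / 1e9         # ~23.9 Gb/s

print(up, down)                       # 20 20
print(round(eight_k_30_gbps, 1))      # 23.9 -- exceeds 20 Gb/s one-way
```

Since the lanes are fixed-direction, only 20 Gb/s is available in any single direction, which is why 8K @ 30 Hz doesn't fit even though the aggregate cable bandwidth is 40 Gb/s.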
  • repoman27 - Tuesday, August 20, 2013 - link

    Is there any evidence that Intel ever intended to implement channel bonding on the original Light Peak / Thunderbolt silicon? There is precious little to indicate that they even intended to support dual channel links, let alone channel bonding.

    The high end display market generally moves far slower than those responsible for penning the interconnect or content delivery standards. DisplayPort 1.2 has been around for almost 4 years, and DP 1.2 capable GPUs for nearly three, yet the first DP 1.2 HBR2 and MST capable displays only hit the market less than a year ago. Furthermore, I don't think there are any native eDP 1.2+ panels with support for HBR2 out there yet.

    20 Gbit/s for Thunderbolt 2 vs. 17.28 Gbit/s for DP 1.2 is not a significant difference, and Thunderbolt is just a meta-protocol used to transport DP and PCIe packets anyway.
  • Kevin G - Tuesday, August 20, 2013 - link

    The bonding functionality was to be similar to how multiple PCI-e lanes work together to form a wider channel. Thus all 40 Gbit of bandwidth going over a single copper cable was supposed to be usable by a single device. The catch is that DP didn't play well in this mode.
  • repoman27 - Tuesday, August 20, 2013 - link

    I understand the theory, but as far as I can tell there is zero evidence to suggest that channel bonding was ever on the table prior to Falcon Ridge. When did anyone from Intel ever allude to this? Why would Intel possibly implement DP 1.2 in a Thunderbolt controller before doing so in their own IGPs? Do you have any links to back up the notion that they even attempted channel bonding with the early silicon? (I'm genuinely curious, btw, not just trying to be argumentative.)
  • Kevin G - Wednesday, August 21, 2013 - link

    The context of bonding has often been mentioned in reference to TB networking (note that Intel's optical-based interconnects for rack-based servers seem eerily familiar). Most of this was when Thunderbolt was known as Light Peak. Intel's presentation on the matter:

    And there is a bit of research done at MS with regards to Thunderbolt/Light Peak too (PDF):

    http://research.microsoft.com/pubs/144715/4208a109...

    Of note from the above MS paper:

    "We built a prototype network interface card using Light Peak technology (shown in Figure 1). The prototype card is a Gen2 x4 PCI-Express add-in card and contains one host interface, an integrated crossbar switch and transceiver pair with four 10 Gbps optical ports with modified USB cable connectors. The integrated non-blocking crossbar switch is capable of delivering an aggregate bandwidth of 80 Gbps (40 Gbps receive and 40 Gbps transmit) through the optical ports and 10 Gbps to/from the host system. Traffic from one optical port to another optical port can be transmitted directly without any interaction with the host CPU. Each transceiver module supports two interfaces and provides electrical to optical conversion."

    There are a couple of oddball things in that paragraph. First is the obvious asymmetrical bandwidth of the IO card: four lanes at PCI-E 2.0 speeds is 20 Gbit in each direction, while the networking side was capable of 40 Gbit aggregate. As discussed in the paper, this was for multipathing and failover, which most enterprise networking has to support.
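The asymmetry called out above can be laid out with standard PCIe Gen2 rates (the 20 Gbit figure is the raw signaling rate; after 8b/10b encoding the actual payload per direction is lower):

```python
# Host-side vs. optical-side bandwidth of the Light Peak prototype NIC.
GT_PER_LANE = 5.0           # PCIe Gen2: 5 GT/s per lane
LANES = 4                   # Gen2 x4 add-in card
ENCODING_EFFICIENCY = 0.8   # 8b/10b encoding on Gen2

raw_per_dir = GT_PER_LANE * LANES                    # 20 Gb/s raw, each direction
payload_per_dir = raw_per_dir * ENCODING_EFFICIENCY  # 16 Gb/s of payload
optical_aggregate = 4 * 10                           # four 10 Gb/s ports = 40 Gb/s

print(raw_per_dir, payload_per_dir, optical_aggregate)
```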
