Meet the GeForce GTX 1060 Founders Edition

We’ll start off our look at GeForce GTX 1060 cards with NVIDIA’s Founders Edition card. The fact that there’s a retail reference card for the GTX 1060 at all is a bit of a surprise: NVIDIA has not consistently offered retail reference cards for the GTX x60 family over the years. These cards sell at lower margins, and with their lower TDPs and overall simpler designs, there’s simply not the same need or market for a retail reference design as with the higher-end cards.

As a result, while the Founders Edition cards are a bit harder to swallow with the higher-end GeForces, here the concept makes a bit more sense. It’s essentially an NVIDIA-exclusive design separate from the partner cards, albeit at a higher price. But perhaps more importantly, it’s one of only two GTX 1060 blower designs being sold, which makes it a unique offering amidst the many open air designs on the market. The catch is that this uniqueness comes at $299, $50 over the GTX 1060’s base MSRP.

GeForce GTX 1060 Cards
              NVIDIA GTX 1060 Founders Ed.   ASUS STRIX GTX 1060 OC
Base Clock    1506MHz                        1620MHz
Boost Clock   1709MHz                        1848MHz
Memory Clock  8Gbps GDDR5                    8.2Gbps GDDR5
VRAM          6GB                            6GB
Length        9.75"                          11.75"
Width         Double Slot                    Double Slot
Cooler Type   Blower                         Open Air
Price         $299                           $314

In terms of styling, the GTX 1060 Founders Edition is designed to look like the higher-end Founders Edition cards. This keeps a consistent look across the Founders Edition lineup; however, a consistent look is also where the similarities end. Absent is the metal shroud of the high-end cards, replaced with a simpler plastic shroud, which is more along the lines of what you’d expect for a cheaper, lower-TDP card. That said, even just handling the card makes it clear that NVIDIA took care to maintain a high build quality; the shroud is solid with no flimsiness or weak seams. Of all of the plastic NVIDIA blowers I’ve looked at over the years, this is likely the best, easily ahead of the reference GTX 680 and other efforts.

A lot of this has to do with the fact that while the shroud is plastic, NVIDIA has otherwise built the card like one of their higher-end cards, with plenty of fasteners and some metal – aluminum, I suspect – used at key points. Going under the hood also continues this analogy, as we find a metal baseplate running the length of the card, with an aluminum heatsink for the GPU covering much of the board.

The amusing bit is that the blower itself is larger than the card’s PCB due to the amount of space a blower needs for the fan and heatsink assemblies. The GTX 1060 PCB measures just 6.75” long – not too much longer than the PCIe x16 slot connector – while the shroud adds another 3” to that, bringing the total length to 9.75”. NVIDIA keeps the PCB and shroud flush with each other through some careful engineering of the overhanging part of the shroud, but it’s nonetheless an interesting sight when the cooler is larger than the board it needs to cool.

Looking at the PCB itself, what we find is a pretty typical design for a Gx106 card. NVIDIA has shifted the 3+1 phase power delivery circuitry to the front of the board, allowing the GPU and its associated GDDR5 RAM to be placed near the rear. Curiously, there are two empty GDDR5 pads here, despite the fact that GP106 and its 192-bit memory bus can only connect to six of them. From what I hear, this PCB is designed to accommodate GP104 as well (a potential third-tier GP104 card, perhaps?), leading to the additional memory pads. In any case this is a simple but functional design, like previous NVIDIA reference PCBs before it.
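As an aside, the memory configuration falls straight out of the bus width: GDDR5 chips are 32-bit devices, so a 192-bit bus needs six of them, and at 8Gbps per pin that works out to 192GB/sec of peak memory bandwidth. The snippet below is only a back-of-the-envelope sketch of that arithmetic using the GTX 1060’s published specifications; nothing in it is measured from the card.

```cpp
#include <cstdio>

// Back-of-the-envelope GDDR5 math for the GTX 1060, using its published
// specs (192-bit bus, 8Gbps per pin, 6GB of VRAM). Illustrative only.
int main() {
    const int    bus_width_bits  = 192;  // GP106 memory bus width
    const int    chip_width_bits = 32;   // each GDDR5 device is a 32-bit part
    const double data_rate_gbps  = 8.0;  // per-pin data rate
    const int    vram_gb         = 6;

    const int    chips         = bus_width_bits / chip_width_bits;       // 6 chips populated
    const int    gb_per_chip   = vram_gb / chips;                        // 1GB (8Gb) parts
    const double bandwidth_gbs = bus_width_bits * data_rate_gbps / 8.0;  // bits -> bytes

    std::printf("GDDR5 chips: %d (%dGB each)\n", chips, gb_per_chip);
    std::printf("Peak memory bandwidth: %.0f GB/sec\n", bandwidth_gbs);
    return 0;
}
```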

Moving on, towards the top of the card we find the requisite power connector. NVIDIA’s reference PCB doesn’t actually place the 6-pin power connector on the PCB itself; rather it’s built into the shroud, with an internal wire carrying power the rest of the way to the board. This is an unusual design choice, as we haven’t seen this done before for cards with short PCBs. As far as I know there’s no technical reason NVIDIA needed to do this, but it does allow for the 6-pin connector to be placed at the far end of the card, which I imagine will please system integrators and others who are accustomed to short, minimally visible cable runs.

Meanwhile, looking at NVIDIA’s display I/O configuration, it’s physically unchanged from the other Pascal reference boards. This means we’re looking at 3x DisplayPort 1.4, 1x HDMI 2.0b, and 1x DL-DVI-D.

SLI Gets Removed

The one connector you won’t find on the GTX 1060 is the SLI connector. NVIDIA has been rethinking their SLI strategy for the Pascal generation, which, as we’ve seen with the GTX 1080/1070, has resulted in NVIDIA deprecating 3-way and 4-way SLI support. The GTX 1060 has not escaped this retooling either, with NVIDIA removing (or rather, not building in) most of their traditional SLI support.

Given the struggling state of multi-GPU scaling that I discussed in the GTX 1080 review, I had initially suspected that removing SLI support from the GTX 1060 was a further consequence of those scaling difficulties. However in discussing the matter with NVIDIA, they informed me that the rationale for this decision is based on economics more than technical matters. As it turns out, SLI is not used very often with GTX x60 class cards. Due to the aforementioned scaling issues there’s little reason to buy a pair of x60 cards right off the bat as opposed to a single x70/x80 card, and meanwhile by the time most customers were ready to upgrade again it was 12+ months down the line, at which point buying a next-generation card would make more sense than doubling up on a now-older card.

The end result is that NVIDIA has opted to remove SLI support for the GTX 1060 series, making the feature exclusive to the GTX 1070 and above. For customers looking for more performance this essentially locks them into following what has long been our own recommended path: buy the fastest card first (scale up), and then go SLI after that if you need more performance (scale out). As I noted before this is an economic decision – this was a feature that few people were using in this class of card – but at the same time it is a removed feature, so it’s also not really a positive thing. That said, in keeping with our traditional advice on multi-GPU I don’t consider SLI the best way to go with a mainstream card to begin with, so it’s not a feature I’m going to miss.

In any case, while NVIDIA has removed formal support for SLI from this class of product, it’s important to note that they haven’t removed multi-GPU support entirely. DirectX 12’s built-in mGPU support supersedes SLI in some cases, so game developers can still offer multi-GPU support if they’d like. NVIDIA pulling SLI support only impacts scenarios where NVIDIA’s drivers were responsible for mGPU: games using DX11 or lower, and games using DX12’s implicit Linked Display Adapter (LDA) mode. Games that use explicit multi-adapter (e.g. Ashes of the Singularity) or explicit LDA can still set up mGPU. So multi-GPU on the GTX 1060 isn’t truly dead, though I also don’t think we’re going to see too many game vendors bother to formally qualify a GTX 1060 mGPU setup regardless of what the API allows, since it’s going to be so uncommon.
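To make the implicit/explicit distinction a bit more concrete: under DirectX 12’s explicit multi-adapter mode the game itself enumerates every GPU through DXGI and creates an independent D3D12 device on each one, with no driver SLI profile (or SLI bridge) involved. The following is only a minimal sketch of that enumeration step, with error handling trimmed; it isn’t taken from any particular game’s implementation.

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#include <vector>
// Link against d3d12.lib and dxgi.lib.

using Microsoft::WRL::ComPtr;

// Create a separate D3D12 device on every hardware adapter in the system.
// Under explicit multi-adapter the application owns this step, and is also
// responsible for splitting rendering work between the resulting devices.
std::vector<ComPtr<ID3D12Device>> CreateDevicesForAllGPUs() {
    std::vector<ComPtr<ID3D12Device>> devices;

    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return devices;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;  // skip the WARP software adapter

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            std::wprintf(L"Created device on: %ls\n", desc.Description);
            devices.push_back(device);
        }
    }
    return devices;  // e.g. a pair of GTX 1060s yields two independent devices
}
```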

Comments

  • anandreader106 - Friday, August 5, 2016

    First thought: Still no Doom benchmarks being factored in?

    Ryan,

    You are my favorite GPU reviewer. Period. However I do think I need clarity on your Final Words.

    It's my opinion that DirectX 11 performance is "good enough" from Nvidia and AMD thus far in this new generation. So I'm left wondering, why aren't you going more in-depth with DirectX 12 and Vulkan titles/performance? Wouldn't that give us the best indication of what to expect going forward?
  • cknobman - Friday, August 5, 2016

    The best indication you will get is that when reviewing Nvidia cards none of these things will be addressed?

    Why, because Nvidia is not doing so hot at them and it would not make their cards look better than AMD's.

    Look at the other 1060 benchmarks and comparisons and you will see that:
    A. Nvidia is behind on dx12 and the 480 => 1060
    B. @1080p the 1060 is overkill and a $200 480 4gb (or even a $180 470) is all you need
    C. Because of Nvidia's "founders edition" price gouge model most 3rd parties are trying to get away with charging more than $250. Reality is most 1060's are >= $270 which makes the AMD 480 the better buy.
  • StrangerGuy - Friday, August 5, 2016

    It's funny the AMD fanboys always harp about the evil $300 1060 and never mention how their favorite $200 480 is essentially vaporware and its 8GB versions price gouged to death.
  • Ryan Smith - Friday, August 5, 2016 - link

    "It's my opinion that DirectX 11 performance is "good enough" from Nvidia and AMD thus far in this new generation. So I'm left wondering, why aren't you going more in-depth with DirectX 12 and Vulcan titles/performance? Wouldn't that give us the best indication of what to expect going forward?"

    The benchmark suite only gets updated periodically. It's a lot of effort to design and validate a testing sequence, and then run (and possibly re-run) 30 some-odd cards through it. So adding games has the net effect of slowing things down even further.

    At this point we're updating the testbed to Broadwell next month, at which point we'll refresh the games list as necessary.

    Though I will note that there's a reason we run so many (9) games: one game is too small of a sample size. Right now Doom is the only Vulkan game on the market,* so while it's a very interesting first look at Vulkan, it's not something that's going to be representative of Vulkan as a whole.

    * We'll ignore DOTA 2 since it's not meaningfully GPU limited on these fast cards
  • CHADBOGA - Friday, August 5, 2016

    Doom is one of those few games out there that will inspire people to go one way or the other and should be included in your benchmark suite.
  • Scali - Saturday, August 6, 2016

    Aside from that, the Vulkan implementation in DOOM is not yet complete.
    As you can read in the DOOM FAQ, they use AMD shader intrinsics extensions, but no equivalent for nVidia. Likewise, async compute is enabled on AMD hardware, but not yet on nVidia. The FAQ says they're still working on optimizing the code with nVidia.

    While it may be interesting to benchmark DOOM's Vulkan implementation to get an idea of where we currently stand, I don't think it is mature enough at this point to say anything about performance in Vulkan games in general, or how AMD and nVidia stack up, since you're comparing apples to oranges at this point.
  • rhysiam - Friday, August 5, 2016

    I too am curious as to why the whole DX11 vs 12 comparison wasn't even raised. DX12 does not appear once in the conclusion page. The 1060 is the better DX11 card, no question. It's early days for DX12, but what we're seeing so far is enough to suggest things may well be quite different. The three DX12 titles in the review (Hitman, RoTR & AoS) are the three strongest games for the 480 by far. Add Doom via Vulkan into the mix and you have 4 NextGen API titles that put the 480 at or above 1060 performance. Of course we can't make hard and fast recommendations based on a few titles like this, but surely it's worth mentioning at least, if not exploring in detail?

    This might be a minor point except for the fact that you dismiss the 4GB 480 based on speculation/extrapolation that its VRAM won't be enough to keep it competitive in future demanding titles. Surely those demanding titles will increasingly be (or at least offer) DX12 though? So if you're advocating a 1060 over the 480 4GB based on longevity and future performance, the DX12 question has to be raised, doesn't it?
  • rj030485 - Friday, August 5, 2016

    Think Ryan needs to work on his math. He says the 1060 is 17% faster than the 480 in GTA V when the difference is more like 30%.
  • Ryan Smith - Friday, August 5, 2016

    Oh geeze. This is what happens when you read the wrong column in a spreadsheet. Thanks!
  • onemoar@gmail.com - Friday, August 5, 2016

    I don't know why AnandTech's Witcher 3 scores are so low.
    I am pushing 80FPS in places with everything turned up to ultra and post effects on, with no HairWorks.
