Meet ASUS’s GTX 560 DirectCU II Top

As we mentioned previously, the card we’re sampling today is ASUS’s GTX 560 DirectCU II Top, the faster of ASUS’s two GTX 560s, and the 2nd fastest card on NVIDIA’s launch list. It’s clocked at 925MHz for the core and 4200MHz (data rate) for the memory, giving it a sizable 115MHz (14%) core clock and 196MHz (5%) memory clock advantage over the GTX 560’s reference clocks. As we’ll see when we get to benchmarking, this is fast enough to start challenging reference clocked GTX 560 Tis.
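For reference, the arithmetic behind those deltas is trivial; here's a quick sketch in Python. Note that the reference clocks of 810MHz core and 4004MHz memory are back-calculated from the deltas quoted above, not taken from a spec sheet:

```python
def overclock_delta(factory_mhz, reference_mhz):
    """Return the absolute (MHz) and relative (%) factory overclock."""
    delta = factory_mhz - reference_mhz
    return delta, 100.0 * delta / reference_mhz

# GTX 560 DirectCU II Top vs. back-calculated reference GTX 560 clocks:
# 925 - 115 = 810MHz core, 4200 - 196 = 4004MHz memory (data rate)
core_delta, core_pct = overclock_delta(925, 810)    # 115MHz, ~14%
mem_delta, mem_pct = overclock_delta(4200, 4004)    # 196MHz, ~5%
```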

We’ve reviewed a number of ASUS DirectCU cards in the past, but this is the first card we’ve reviewed from them that uses the DirectCU II cooler. What’s the difference? The original DirectCU was an open air cooler with a single fan mounted at the center of the card; the DirectCU II uses two 80mm fans. In practice it’s similar to some previous coolers we’ve seen, such as MSI’s Twin Frozr, though ASUS has managed to throw in their own unique adjustments.

Notably, the DirectCU II uses a different fan configuration than what we normally see with these dual-fan cards. Normally these cards are symmetrical – the left and right fans are the same. In the case of the DirectCU II, ASUS is using different fans, with the left side using a taller 9-blade fan while the right side uses a more typical shorter 11-blade fan. ASUS hasn’t specifically mentioned why they do this – we’d assume they found it more efficient – but it’s an interesting deviation from what we normally see with this style of cooler.

Removing the shroud and fans, we’re down to the actual HSF the cooler uses. ASUS has ditched the two large aluminum heatpipes running across the top of the card in favor of a trio of smaller copper heatpipes running directly from the base of the HSF to the secondary heatsink over the rear of the card. Dual aluminum heatsink configurations like this are typical for dual-fan cards. Flipping the HSF over, we can see the base of the unit; ASUS has the heatpipes make direct contact with the GPU rather than using a baseplate to spread the heat out some. Both the heatpipes and the primary heatsink end up making contact with the GPU as a result.

Cooling for the VRMs is provided by small spring-loaded heatsinks attached directly to the MOSFETs. The airflow from the fans provides cooling for these heatsinks, along with some airflow over the GDDR5 memory modules to keep the card’s RAM cool.

Overall the build quality of the GTX 560 DirectCU II is up to the same solid standards we see out of an ASUS DirectCU card. ASUS continues to be the only manufacturer we see regularly using stiffening brackets with an open air cooler, making the card extremely rigid and almost impossible to flex. Even the shroud is surprisingly durable, having apparently been coated in some kind of metal or metal-like rigid plastic.

Moving on, with the factory overclock power consumption is undoubtedly higher than NVIDIA’s reference value of 150W. ASUS and other video card manufacturers don’t normally provide a TDP for the card, but we’d guess it’s around 170W, similar to that of the reference GTX 560 Ti. Providing this power is a pair of PCIe power sockets, which ASUS has helpfully oriented at the top of the card rather than the rear as we’ve seen on most other GTX 560/460 cards.

As is normally the case with ASUS DirectCU cards, the tradeoff in their design is that their cards are a bit bigger on average. Like the original DirectCU, ASUS’s shroud is longer than the PCB itself, so while the PCB is only around 9.15” the total length of the card is just over 9.8”, versus 9” for the reference GTX 560 Ti. Meanwhile for display outputs, ASUS is following the typical NVIDIA design: 2x DVI ports and a single mini-HDMI port.

Rounding out the package, ASUS includes their usual collection of utilities: GamerOSD and SmartDoctor. SmartDoctor hasn’t changed since the last time we looked at it; it still provides a passable but ultimately dated GUI for monitoring and overclocking the card. ASUS is offering voltage tweaking on this card for overclockers looking to push it higher than 925MHz, so overvolting through SmartDoctor is a necessity for getting much farther. A good GF114 GPU should be able to hit near 1GHz with some extra voltage, though in our case we were only able to push our card to 950MHz, even at 1.075v.

The rest of the collection is your usual pack-in fare: a multi-language quick setup guide, a mini-HDMI to HDMI dongle, 2 molex-to-PCIe power adapters, and a DVI-to-VGA dongle.

Finally, as the GTX 560 DirectCU II Top is ASUS’s top-tier GTX 560, it also has a top-tier price: ASUS is charging an additional $20 over the GTX 560 OC, for an MSRP of $219.

66 Comments

  • DanNeely - Wednesday, May 18, 2011 - link

    Where exactly are you finding a $300ish 2560x monitor? IIRC even the best sales Dell had on refurb 3007s only dropped as low as $800ish, and with the potential inventory of off-lease 3007s mostly gone by now and the 3008 and 3011 being significantly more expensive, deals that good aren't likely to repeat themselves in the future.
  • L. - Thursday, May 19, 2011 - link

    My mistake you two ;)

    I was thinking about an LCD panel from I-don't-know-who that had a 2xyz * something resolution and that was dirt cheap... obviously 2560*1440 panels aren't common at all and are overpriced.

    On the other hand, you could make the argument for a dual-monitor setup below 400 bucks that spans more pixels and thus makes more use of the GPU.
  • Stas - Wednesday, May 18, 2011 - link

    You fail once again. You plan on keeping this card until 20" screens hit 2560x1600/1440? It will probably only be... oh, idk... 10 years?
    And $330 for a decent 2560 screen? Links plz.
  • L. - Thursday, May 19, 2011 - link

    Yes sir, I would like to be aggressive to you too?

    On the other hand ... 10 years ??

    My AMOLED screen on my N900 has a dot pitch small enough to cram more than 4x full HD on a 20" panel, something that will happen soon enough as the OLED processes mature.

    Again, my mistake on the monitor price, memory error.
  • L. - Thursday, May 19, 2011 - link

    Who would buy a 200-buck card to play on a single 150-buck monitor when the whole config costs 700+ bucks?

    200 bucks is A_DECENT_AMOUNT_OF_MONEY for a GFX card; it means you're a gamer (maybe a poor one though) and it means you might be interested in dual screens (meh, you spent 700 bucks on the tower, why not 2*150 for dual 22" 1080p monitors?).
  • L. - Tuesday, May 17, 2011 - link

    I'm seeing quite a trend with AMD stuff getting better scores (relatively) on more recent and demanding games, and I'm wondering if it's time to weight games differently for a better comparison.

    For example here, on the important/demanding/modern games (let's take Metro 2033 and Crysis to have indisputable arguments here), the 560 never comes close to a 6950, and only the best version can beat the 6870 by almost nothing.

    If somebody buys a GFX card today, it might be to use it for at least another year, and in that sense the weight of less important games should be diminished a lot, including the HAWX 120fps-fest, other 100+ fps titles, and the clearly nVidia-favoring Civ5.

    What is important here is that NO ONE has any interest in buying a GTX 560 today, because of the following extremely important points:

    -> AMD offerings do better in more demanding games, and will thus do better in future games
    -> AMD offerings (6950 for example) have more memory, which WILL be used in next-gen games for sure, as for every gen
    -> No one cares if they have 110 or 120 fps in HAWX, which is a console game anyway

    I believe the use of any PC component for gamers can be summarized like this in the end:

    -> can it play this game? 30+
    -> can it play it well? 60+

    Because any of those components will for most people still be used 2 years from now, the fact that older / less-demanding games get 110 fps is completely irrelevant; might as well show 557 fps in Quake 3 as a benchmark...

    As a summary, could you AnandTech guys please tweak your test list / weighting in order to better inform the less-informed readers of your website?

    It is utter nonsense to state today that a 560Ti "trades blows" with a 6950 or that a factory OC'd 560 "comes close" to a 6950 either.

    The one real, true fact is that the 6950 gets a huge win in all demanding titles, has 1GB more vRAM, and can even unlock+OC very easily to levels none of the 560 versions can reach.

    nVidia has done some great stuff in the past, but one has to admit that outside of quad-SLI GTX 580s there is no use in buying anything nVidia this round, as AMD offers better performance + performance/watt at every price point this time around.

    There is one argument for nVidia, and that argument (no, not the drivers, because you do NOT play on Linux) is the nVidia goodies like 3D gaming and other minor stuff.
  • crimson117 - Tuesday, May 17, 2011 - link

    I half agree with you... some of your commentary is good (HAWX lol) but one particular conclusion is not tenable:

    "AMD offerings do better in more demanding games, and will thus do better in future games"

    When Mass Effect 3 comes out, I expect that like Mass Effect 2 it will strongly favor nVidia GPUs - unless they rewrote the entire engine.

    New games cannot be classified into demanding vs non-demanding - each game engine has its favorite factors, be it clock speed, memory bandwidth, stream processors, ROPs, etc, so I expect each game will have its favorite card.
  • formulav8 - Tuesday, May 17, 2011 - link

    The thing is, in games where people can actually use the horsepower, the Radeon is the best card.

    If you care about getting 500fps in Quake 3 instead of 450fps, then the GTX is the better card.
  • lowlymarine - Tuesday, May 17, 2011 - link

    The problem is that if they DON'T completely rewrite the entire engine, Mass Effect 3 will continue to be a festival of even mid-range cards breaking 60 FPS. While there's nothing wrong with that per se - ME2 is one of the better-looking games out there despite being not particularly intensive, after all - it still means that nVidia's slight advantage over AMD in that game is meaningless. Compare that to Crysis, where even the 6970 falls short of 60 FPS at WUXGA, and the sizable lead AMD carries over the competition there has real, noticeable impact on the game.
  • Stas - Wednesday, May 18, 2011 - link

    Correction:
    New games cannot be classified into demanding vs non-demanding - each game engine has its favorite chip developer. Money and politics will decide performance in certain games.
