
  • djc208 - Saturday, January 14, 2012 - link

The thing I hate about this kind of mobile strategy is that, without doing their homework, the average person isn't going to know that a one-number difference in their laptop could make a big difference in graphics performance and longevity.
  • GenSozo - Saturday, January 14, 2012 - link

In addition to that, I'm a mainstream buyer who does my homework, and I'm not going to pay the big bucks (or medium bucks, as the case may be) for last gen, rehashed hardware, no matter how they spitshine it. On principle, at the very least.
  • Roland00Address - Saturday, January 14, 2012 - link

So is AMD going to release any VLIW4 discrete graphics to CrossFire with the Trinity GPU? Or is a VLIW4 going to CrossFire with a VLIW5? Or is AMD just going to drop APU+dGPU CrossFire with Trinity?
  • JarredWalton - Sunday, January 15, 2012 - link

Asymmetrical CrossFire is exactly that: different GPUs that still manage to (sort of, in theory) work together. That's actually the big hurdle to overcome, and I'd imagine it's why my initial testing of Llano's CrossFire with 6630M didn't pan out so well. Llano is 400 cores and 6630M is 480 cores, so I imagine ACF with VLIW4 and VLIW5 wouldn't be all that different. Here's to hoping ACF works (a lot) better when Trinity launches.
  • Wolfpup - Wednesday, January 18, 2012 - link

Does that even WORK though to begin with? Last I paid attention to it, it sounded like CrossFire with the current 'A' CPUs and a separate GPU was basically non-functional.

Even if drivers have improved, it still annoys me... instead of using a GPU on the CPU plus a separate GPU, it would be SOOOOOO much better all around to dump the integrated GPU, leaving either a cheaper-to-build CPU or tons of transistors for more cores and cache, and then put all those extra transistors into the GPU instead: an 800- or 960-core part instead of 480, for example.

Granted, AMD's 'A' series doesn't make me furious the way Intel's worthless video does, but even so it doesn't actually make sense.
  • AlB80 - Sunday, January 15, 2012 - link

    1. VLIW5 32-bit is a very effective architecture.
2. I think Trinity will be VLIW4 32-bit (Cayman has 64-bit support). And AMD will find a solution to glue the VLIW4 and VLIW5 parts together in Xfire.
  • bennyg - Sunday, January 15, 2012 - link

Smart-bummed wordsmithery doesn't change the fact that deliberately associating a minor respin with the next gen of cards is flat-out dishonest.

And surely NVIDIA would get the lawyers onto them super quick if they ever said the word "green" in relation to graphics cards :-)
  • XZerg - Sunday, January 15, 2012 - link

I don't care what kinda shitty options they provide as long as the laptop manufacturers include a TB port and there are solutions out there to exploit the PCIe bus for an external switchable desktop GPU. Sure, it won't beat the internals in performance, but the other benefits more than make up for the slower performance.
  • bennyg - Sunday, January 15, 2012 - link

Yep, I agree completely. But both sides would be doing themselves out of expensive mobile GPU sales... I think they'd rather sell a GF114 as a 580M than as a 560 Ti, and a Barts as a 6970M rather than a 6870. The difference is many hundreds in RRP alone.
  • JarredWalton - Monday, January 16, 2012 - link

External GPUs via Thunderbolt may not have a fast enough interface bandwidth. Remember that a single x16 PCIe 2.0 connection can push 8GB/s (80Gbps with 8/10 encoding); PCIe 3.0 will be double that with ~16GB/s in each direction (128Gbps with 128/130 encoding). Thunderbolt is up to 10Gbps in each direction, which is only 1.25GB/s; even with two TB connections you're still only getting about a fourth of the bandwidth of an x16 PCIe 2.0 connection.
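    (The arithmetic above can be sketched out in a few lines; the link rates and encoding overheads are from the PCIe 2.0/3.0 and original Thunderbolt specs, and the helper name is just for illustration.)

    ```python
    def effective_gbs(lanes, raw_gbps_per_lane, enc_payload, enc_total):
        """Post-encoding bandwidth of a serial link in GB/s.

        raw Gbps/lane * lanes * encoding efficiency, divided by 8 bits/byte.
        """
        return lanes * raw_gbps_per_lane * enc_payload / enc_total / 8

    pcie2_x16 = effective_gbs(16, 5.0, 8, 10)     # 8b/10b encoding   -> 8.0 GB/s
    pcie3_x16 = effective_gbs(16, 8.0, 128, 130)  # 128b/130b         -> ~15.75 GB/s
    tb_link   = 10 / 8                            # 10 Gbps TB channel -> 1.25 GB/s

    print(pcie2_x16, round(pcie3_x16, 2), tb_link)  # two TB links: 2.5 GB/s total
    ```
    
    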
  • DanNeely - Monday, January 16, 2012 - link

A PCIe 2.0 x4-equivalent link is consistently fast enough for gaming. An x1 link is not (the ~30% average slowdown consists of some games taking no penalty and others suffering as much as a 75% drop in FPS). Unfortunately, TechPowerUp didn't test x2 bandwidth; I'm guessing they only did physical-slot testing instead of the taped-contacts method needed to get an effective x2 slot on a desktop.
  • JarredWalton - Monday, January 16, 2012 - link

Keep in mind that article is over two years old; many of those titles are a lot less demanding than current generation games, and I'm not sure if any of them support DX11 features. That said, HD 5870 on desktop is still faster than everything short of SLI/CF configurations on laptops. I'd have to see testing done with things like GTS 430 and HD 6770 to get a better feel for what the performance loss due to limited PCIe bandwidth will be on recent titles. Maybe it's something to investigate when I get time. :-)
  • DanNeely - Monday, January 16, 2012 - link

The results have remained fairly consistent since the first Tom's Hardware test I saw, using (IIRC) PCIe 1.0 and an 8800-series card; the 8800 in 1.0 x1/x4 slots took hits similar to the 5870 in 2.0 x1/x4 slots. Without taking an inordinate amount of time to bench a dozen-plus modern games I can't categorically say it won't have any effect, but I'd be shocked if it did.
  • tipoo - Sunday, January 15, 2012 - link

Strategy aside, I still wish they'd reserved new leading numbers for new generations of graphics cores. There was a time when card names made sense and refreshes using the same architecture would be ***50 parts. Now no one can tell what generation a card is without looking it up, and mobile cards are even worse.
  • eanazag - Monday, January 16, 2012 - link

I understand they may perform better than the originally released products due to maturity, but it's not really a new GPU. I think they should have just opted to throw an R2 on the end so people have a sensible way to tell the difference.
I also find it interesting that they used an Intel system in addition to the AMD one for the examples in the slides. I think throwing in a few more games, say 10 total, would have been better.
  • DanNeely - Monday, January 16, 2012 - link

The problem is OEM sales/marketing types. Once the first 7xxx-series parts are out, all 6xxx-series parts are obsolete by definition, and (they think) Joe Luser will decide any computer with one inside is outdated crap. A half dozen or a dozen years ago, when a GPU was simple enough that a top-to-bottom redesign was possible every generation (and even when it wasn't, a die shrink could add another increment of hardware video decode), this didn't matter.

Designing new chips isn't going to get easier as time passes, and mass-educating lusers is an even more impossible challenge, so the current state of affairs is something we geeks are just going to have to live with.
