MSI’s GeForce N470GTX & GTX 470 SLI

by Ryan Smith on 7/30/2010 1:28 PM EST

  • tech6 - Friday, July 30, 2010 - link

    Is it just me getting old or have desktop PCs become somewhat boring? There seems to be a lot more sizzle and innovation in mobile, server, and even home theater tech. Reply
  • lunarx3dfx - Friday, July 30, 2010 - link

    I have to agree. I really miss the days of overclocking with DIP switches and jumpers, when a 100 MHz OC actually meant performance you could feel and see. It's not as much fun as it used to be. The mobile market, smartphones especially, has gotten very interesting in recent years, especially if you've gotten into homebrew and seen what these devices are really capable of. Reply
  • araczynski - Saturday, July 31, 2010 - link

    i wouldn't say boring, just that each new iteration of cards is becoming less and less important.

    game developers aren't pushing hardware as hard or as fast as the manufacturers would want them to.

    i've had my 4850's in CF since they came out, and i'm still playing everything at 1080p to this day, granted, i don't use AA, but i never have, so don't care.

    dragon age, mass effect 2, starcraft 2, all smooth as butter. why am i going to waste time/money on a new video solution?

    this is still on my E8600(?) 3.16Ghz C2D. (win7).

    people are buying too much into the marketing, so kudos to their marketing departments, of which anandtech is one i suppose.
    Reply
  • 7Enigma - Monday, August 02, 2010 - link

    Hmm...have you checked to see if you are CPU limited in games? I have the same CPU and would guess that at 1080p resolution you could very well be CPU limited at stock E8600 speeds. I have the same proc and it is easy as pie to OC to close to 4GHz. I currently run at 3.65GHz at stock voltage and game at 3.85GHz (again stock voltage) with little more tweaking than upping the frequency (9.5X multi, 385MHz bus). And that's just with an OC'd 4870, dual 5850's surely could use the extra cpu power at such a (relatively) low resolution.
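    The clock math there is just multiplier times bus frequency; a quick illustrative sketch (the function name is my own):

```python
def core_clock_mhz(multiplier, bus_mhz):
    """Core 2 era effective core clock: CPU multiplier times FSB frequency."""
    return multiplier * bus_mhz

# A 9.5x multi at a 385 MHz bus works out to about 3.66 GHz
print(core_clock_mhz(9.5, 385))  # 3657.5
```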

    HTH
    Reply
  • quibbs - Monday, August 02, 2010 - link

    In my humble opinion, games drove the PC market into the mainstream. They spurred the development of most of the major components, including graphics cards. Especially graphics cards. But it seems that game development for the PC, while still major, isn't what it once was. This has to slow down the video card market as games for the PC become less numerous.
    Perhaps a major breakthrough in gaming (3D, holograms, etc.) will continue the card wars, but I think it will eventually head in a different direction. A reversal, if you will: energy-efficient cards in smaller form factors that are very potent at pushing graphics. Think about it: the cards are getting bigger and more power hungry as they grow in capabilities. At some point this will be unsustainable (in many ways).
    Some company will realize that less is more and will produce such a product (when technology permits), and will kick off the new wars.
    Just a thought....
    Reply
  • piroroadkill - Monday, August 02, 2010 - link

    We're getting more and more powerful hardware, but most games are being developed with consoles in mind, so the benefit of having a vastly more powerful machine is very small.

    I don't feel like I need to upgrade my 4890 and C2D @ 3.4 for gaming, at all. To be honest, my 2900XT didn't really need upgrading, but it ran really hot and started to balls up
    Reply
  • softdrinkviking - Monday, August 02, 2010 - link

    i don't know what a 2900xt is, but i upgraded from a radeon 3870 because i couldn't play butcher bay or a couple of other titles at my screen's native 1920x1200. that was without AA or MSAA or any kind of ambient occlusion or anything; it just couldn't hold the framerate at all.
    so i got a 5850, and it's been great for everything under the sun. sometimes i can even max out the AA and stuff and the games play fine. (not in crysis)

    anyway, the current gen of console games can usually take advantage of a regular LCD TV's 1080p, so if i can't play a game at that resolution without any extra visual enhancements selected, that sounds like there is room for improvement.

    As for the future of gaming, i personally believe that computers will supplant consoles eventually.
    maybe as soon as 15 years?
    it's just a guess, i can't back it up, but i personally can't see how sony will be able to afford losing so much money on a new generation console.
    if the current generation lasts long enough for sony to take advantage of the only recently profitable PS3, then home PCs will have a chance to get much faster and cheaper before a PS4 has time to come to fruition.

    it just seems like the consoles will eventually become financially untenable.
    damn, my dog just died, gotta go.
    Reply
  • afkrotch - Monday, August 02, 2010 - link

    All depends on hardware and games. I have to turn down my game settings in Metro 2033. I use a C2D @ 3.3 GHz and a GTS 250. I get way too many framerate drops at 1920x1200 with no AA/AF.

    Granted, I can still play any game out there, so long as I'm willing to lower the quality/resolutions.
    Reply
  • arnavvdesai - Friday, July 30, 2010 - link

    I just wanted to ask whether the author or the staff at AT feel that desktop graphics cards are a shrinking market. Does the continued investment by ATI and Nvidia in the development of newer cards seem justified? I own a 5870 and barring a technical fault I don't plan to upgrade for another 3 years, as I don't see myself upgrading my monitors.
    Is the market slowly but surely shrinking, or am I just wrong in assuming that? If so, where should these companies direct their research?
    Reply
  • nafhan - Friday, July 30, 2010 - link

    Well, the tech that goes into making a high-end card eventually makes it into mobile, low-end, integrated, cell phone, and console parts through a combination of cutting down the high end and successive silicon process improvements. So ATI and Nvidia don't expect to make all their R&D money back from the initial run of high-end GPUs. High-end GPUs are basically proofs of concept for new technology. They'd certainly like to sell a bunch of them, but mostly they want to make sure the technology works. Reply
  • afkrotch - Monday, August 02, 2010 - link

    Let's not forget that these same cards end up with different firmware and get called Quadros, FireGL, or whatever else. Reply
  • Solidstate89 - Friday, July 30, 2010 - link

    First time poster in the Anandtech comments, just want to first off say that this is a great site and I've been reading it for years. By far the most unbiased and intelligently written tech site.

    However, regarding this MSI 470: I assume you did a review on this because of its claims of using superior hardware, like the chokes you mentioned. But I was wondering why you didn't do it on the MSI GTX 470 Twin Frozr II, which uses MSI's special dual-fan cooling system. I've been looking at the card since it was first announced and I've yet to find one site that has done a review on it. The closest I found was a review of the 465 Twin Frozr II on Guru3D, but I wanted to see the temps for the 470. I have a feeling you might even be pleasantly surprised by the temps of a GF100 card if they do indeed fall in line with what the 465 was getting.

    Perhaps you could do it as part of a special Non-reference design article like you did with the 5870s not too long ago. I'm not even sure if you read these comments, but I just thought I'd give input for something that I'd like to see and I'm sure many others might find interesting as well, especially if you were to do a comparison piece.
    Reply
  • Patrick Wolf - Friday, July 30, 2010 - link

    That would be cool if they tested all cards with special coolers. But I don't think they chose the MSI card. They needed an extra 470 for the article and MSI was the one that accepted the request. Reply
  • Ryan Smith - Friday, July 30, 2010 - link

    That's pretty much the correct answer. We needed a reference-style GTX 470 and MSI obliged our request. We review a lot of custom cards, but you'll find that we are unable to review every last custom card due to the fact that there are a ton of them. Reply
  • edi_opteron - Friday, July 30, 2010 - link

    AMD sold 16 million 5000 series cards during these 6 months while nVidia was silent! I'm really interested in this amazing performance... really... It's very interesting for me to see a GTX 470 beat AMD's top-end HD 5870! But let's be honest: after six months nVidia had to release something like Fermi to at least compete with AMD's cards. And don't look too far ahead; AMD's Southern Islands will be in stores in a few months, and I don't think nVidia can answer AMD except with Fermi! I advise you guys to go for AMD's 6000 series if you can wait a little while! Reply
  • ggathagan - Saturday, July 31, 2010 - link

    ... and when AMD releases the 6000 series, someone else will say the same thing regarding Nvidia's next generation of GPU.
    Most people are better off buying the best they can get for the money they want to spend at the time they want to buy it.
    Otherwise, it turns into a never-ending waiting game.
    Reply
  • billdcat4 - Friday, July 30, 2010 - link

    Newegg has a promo code for the MSI GTX 470 cutting 10% off of its original price
    EMCYVNV39

    This brings the card to $269 before $20 MIR
    Reply
  • politbureau - Friday, July 30, 2010 - link

    Still somewhat disappointed not to see Vantage numbers in Anand GPU reviews. This would be an easily repeatable benchmark for home users that doesn't involve strictly game benchmarks. I surmise the mindset is that Vantage is purely a benchmark or WR tool, but then I'd question why there has been such a recent push on benchmarking hardware (ie "Four Flagship X58 Motherboards Reviewed").

    This is pretty much the only reason I've switched to reading GPU reviews on Guru3D, even though I much prefer the tone and clarity here at Anand.

    $0.02
    Reply
  • Ryan Smith - Friday, July 30, 2010 - link

    The short answer is that we use different tools for different types of articles.

    Synthetic benchmarks can be very handy for isolating specific aspects of a piece of hardware's performance. For example, if we want to do texturing tests then Vantage is the de facto way to go. However, when it comes to overall performance, synthetic benchmarks do not tell you how a game will play on the card, because they are in fact not a game - they're synthetic.

    More specifically, AMD and NVIDIA put a lot of effort into their drivers to optimize the performance of their products on games and benchmarks alike. However, they don't have the resources to work on every last game, and a large factor in deciding what to spend their limited time on is what review sites are using. Optimizing their drivers for commonly benchmarked games can directly impact their sales by improving the performance of their cards in the games that ultimately shape the recommendations of editors. Or to put this another way: it's to their benefit to optimize their drivers to make their cards look good in reviews.

    It goes without saying that we would prefer that every game be appropriately optimized, but this isn't realistic. So we have to pick our games based on what we think is going to be the most relevant to our readers, while taking into consideration that we're indirectly affecting which games get optimized.

    So what does this have to do with 3DMark? As we previously established, 3DMark is synthetic - it isn't a game. If GPU manufacturers focus on improving 3DMark performance, then they're doing so to the detriment of real games. For a GPU review we do not use 3DMark as part of our general benchmark suite because it would only tell you how well a card performs in 3DMark, and it would be a signal to GPU makers to optimize for synthetic benchmarks. People buy video cards to play games and run GPGPU applications, not to run 3DMark.

    As for other types of reviews, this is not an issue. In other articles the GPU is held constant, so we're using these tools purely as diagnostics rather than to evaluate a GPU. For GPU makers there's nothing to "win" in those reviews, and motherboard makers can't optimize for 3DMark. It's a problem that's distinctly GPU-only.
    Reply
  • Quidam67 - Saturday, July 31, 2010 - link

    Your rationale seems well thought out, but I'm not entirely buying it. Calling 3DMark "synthetic" is, in my opinion, semantics. It's no more synthetic than one game compared to another game using a different 3D engine. If anything, it's better rounded, as it is specifically designed to test 3D performance under a number of different metrics. Also, 3DMark is an evolving product. It's not like they only ever wrote one version; Vantage is just the latest, designed to keep the metrics contemporary with the latest hardware, drivers, and APIs. Also, you seem to contend that people who play games therefore aren't interested in running 3DMark. I've been a gamer since the Commodore Vic 20 and I always run 3DMark whenever I get a new card. Yes, I know it's not going to tell me everything I need to know about the performance of the card, but it is one piece of the jigsaw puzzle. Besides which, it is a fun and easy way to gauge impacts and stability, especially when overclocking your system (not just the GPU). I for one would appreciate seeing published results as part of a GPU review.

    Lastly, while it is almost certainly true that drivers are optimised with 3DMark in mind, it is hard to believe they are optimised in such a way as to offer zero benefit to any application other than 3DMark. I mean, 3DMark is after all using a common set of APIs. From what I've observed, game-specific driver optimisations (ATI and nVidia) typically offer gains only in the single-digit percent range. For example, it's not as if Crysis has been "mastered" by driver optimisations; a poorly programmed game can't be fixed by improving drivers (just the same as a good hair-cut doesn't guarantee you are going to get laid, at least it never worked for me).

    In my view, Anandtech should reconsider its position not to include this so-called synthetic bench-mark, as I sincerely think you guys are making way too much out of it.
    Reply
  • erple2 - Saturday, July 31, 2010 - link

    Normally I'd agree with you, but I think that Ryan hit the nail on the head. I can recall when NVidia and ATI were optimizing their internal shader compilers specifically for 3DMark. Tuning for one specific engine (which is what they had to do) is very time consuming and difficult, and takes time away from optimizing the drivers in general (or for a more "worthy" cause, such as another popular game).

    I seem to recall the optimizations were things like:

    if product executable matches 3dmark.exe
    then
    do special tweaks that work only for the 3dmark app
    else
    do nothing
    endif

    I think that's the situation we're trying to avoid nowadays: optimizing specifically for a single executable (ultimately where you end up when you want your numbers to look "best") hurts everything else but that one thing you're optimizing for.

    There's a difference between optimizing for the API vs. the specific executable...
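    In runnable form, that executable-matching logic looks something like this (Python, purely illustrative; the function name and the tweak table are made up):

```python
import os

# Hypothetical per-executable "optimization" table a driver might carry.
SPECIAL_TWEAKS = {
    "3dmark.exe": "tweaks that work only for the 3dmark app",
}

def pick_tweaks(exe_path):
    """Return app-specific tweaks if the executable name matches, else None."""
    exe_name = os.path.basename(exe_path).lower()
    return SPECIAL_TWEAKS.get(exe_name)

print(pick_tweaks("C:/Futuremark/3dmark.exe"))  # tweaks that work only for the 3dmark app
print(pick_tweaks("C:/Games/crysis.exe"))       # None
```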
    Reply
  • mapesdhs - Saturday, July 31, 2010 - link


    I've spent some time trying to find out why my 8800GT SLI setup was the
    same or faster than a friend's 4890 CF system. Using 3DMark06 did prove
    very useful in working out the reason (shader performance). See:

    http://www.sgidepot.co.uk/misc/pctests.html
    http://www.sgidepot.co.uk/misc/stalkercopbench.txt

    Thus, though I agree with Ryan about the dangers of GPU makers
    optimising for these metrics, they can be revealing sometimes, in my
    case coming to the conclusion that, if one is still playing older games,
    then buying a newer card might not give that much of a speedup since
    the newer design may focus on newer features.

    What about including the Vantage results as a point of interest, kind of
    a 'by the way' addition, but downplaying the importance of such data?
    Focus on the game results, but include Vantage as an 'appendix' in
    terms of presentation.

    Ian.
    Reply
  • Porksmuggler - Monday, August 02, 2010 - link

    Wait, you're trying to find out why your 8800GT SLI is faster than 4890CF? You need to look elsewhere than the GPUs, 8800GT SLI is not even as fast as 4850 CF, I have both on otherwise identical setups. Your links show testing of just 3DMark06, and you tested with two entirely different systems?

    Ryan is spot on, don't use synthetic benchmarks to compare similar generation GPUs. They are primarily used for testing when the GPU is the constant.

    It seems from the replies above there is confusion about how the coding/engine of a synthetic benchmark (so-called? really?) is very different from actual games. Reviewers do far better with a battery of games, as Anandtech uses for their articles.
    Reply
  • imaheadcase - Saturday, July 31, 2010 - link

    That's listed in THE TEST, but it's excluded from the graphs.

    Would like to compare that to it, considering upgrading to this card from that.
    Reply
  • anactoraaron - Saturday, July 31, 2010 - link

    Yeah, I noticed that also. More importantly I'd like to see 2 of those in SLI along with 2 5770s in CF. I think 2 5770s in CF has been the best bang for your buck for a while now, especially since 2 can be had for $270 AR. As I recall, the 5770 (single card setup) was ~5% slower than what a 260 core 216 would get you, but it was cheaper (about $30 less ATM at newegg). If you have the ability to do CF/SLI, having 2 lower priced cards makes the 460/470 & 5850 decision easy IMO. However, since SLI scales performance better, it would be interesting to see if the added $60 for 2 260's is justified against 2 5770s.

    Maybe for the next article?
    Reply
  • Kyanzes - Saturday, July 31, 2010 - link

    MORE minimum frame rate measurements in the articles please!!! That's the whole essence of it! Ofc, for sheer comparison, max FPS could also be included for sure, but the MIN FPS is the real interesting part.

    Keep it up pls!!!!

    One of the reasons I tend to check out HardOCP is that they include MIN FPS.

    Please, please do it often. Do it every time.
    Reply
  • Tunnah - Saturday, July 31, 2010 - link

    awesome review as usual, but how come the focus on the 470? i thought with the release of the 460 the 470 was kind of like... it's the core i7 940 to the 920 - sure it's faster, but the price difference doesn't warrant the minor speed bump, and it OCs like a dream

    also, from what i've read the 460 has amazing scaling with SLI..and the temperatures are better

    but this is just what i've read in 1 review so waiting to read it here before i start to believe it :D
    Reply
  • ggathagan - Saturday, July 31, 2010 - link

    RTFA

    3rd sentence:
    "As part of a comprehensive SLI & CrossFire guide we’re working on for next month we needed a second GTX 470 for testing GTX 470 SLI operation, and MSI answered our call with their N470GTX."
    Reply
  • Tunnah - Saturday, July 31, 2010 - link

    yeah i read it, was just saying it seemed a little redundant to do a full article on an overpriced card that was being overshadowed by a cheaper, newer revision Reply
  • Matrices - Saturday, July 31, 2010 - link

    The 460 numbers are still there, so what's to complain about?

    And the 470 can be had for $275 now rather than the $350 MSRP.

    If you actually read the benchmarks, you'd see that it's hardly 'overshadowed' by the 460 on performance. It's noticeably faster in more demanding titles.
    Reply
  • Tunnah - Sunday, August 01, 2010 - link

    seems my comments are being taken out of context, sorry guys not used to posting on a board i'm an IRC kind of guy

    my overshadowing comment was in reference to SLI only

    was asking why study 470 SLI at the moment when the 460 seems to be grabbing more headlines, especially with its scaling capabilities

    the 460 SLI numbers were what i was asking about, as from what i've read in other reviews the scaling is amazing and it brings it up against, and sometimes passes, the 5850 in XF

    even though i know a big SLI round up is coming it just seemed weird to focus on the 470, but as they say they've been waiting for a second one to do SLI testing for a while..
    Reply
  • mapesdhs - Saturday, July 31, 2010 - link


    I see the 8800GT in the test setup summary, but why no results for it (especially SLI) in
    the performance tables?

    Ian.
    Reply
  • Perisphetic - Sunday, August 01, 2010 - link

    A picture of twin jet engine exhausts on the sticker and software called Afterburner. Can this be used for this new type of hot air drilling, or just plain marshmallow roasting??? Reply
  • Perisphetic - Sunday, August 01, 2010 - link

    But jokes aside, where in the software is the setting for heat shrinking tube? Reply
  • nmctech - Monday, August 02, 2010 - link

    I noticed a few days back that they released the Quadro Fermi cards: the 4000, 5000, and 6000. I found a couple of gamer reviews, but a more thorough review of the cards for 3D use would be nice.

    Have you guys had a chance to check those out yet?
    Reply
  • mapesdhs - Wednesday, August 04, 2010 - link


    I expect they'll review them eventually, but more likely reviews for the new cards
    will appear on other sites first, eg. those aimed at users of Maya, ProE, CATIA, etc.

    Presumably they'll run Viewperf, Cinebench, etc. among other things. I have two
    Quadro FX 5500s to test (after which I'll put them up for sale), so I can gather
    some results, post the data on my site for comparison to whoever reviews the
    newer cards. If anyone here is interested, let me know (mapesdhs@yahoo.com)
    and I'll send out a URL when the tests are done.

    Btw, I was surprised to see NVIDIA's summary shows the 5500 is 3X faster than
    the 5800:

    http://www.nvidia.com/content/PDF/product-comparis...

    so it should be interesting to see how two 5500s SLI compare to the new 6000,
    sans any differences in CPU/RAM/mbd that might affect the results (my system
    is a 4GHz i7 860, so the two cards will be running 8X/8X for SLI).

    Ian.
    Reply
  • hsew - Tuesday, August 03, 2010 - link

    I wish SOMEBODY would do an article on multiple-GPU scaling, CFX and Tri-SLI, on AMD vs Intel.

    Something like:

    Core i7 980X, Core i7 9xx, Core i7 8xx, Core i5 7xx, Core i5 6xx, Core i3 5xx.

    Phenom II X6, X4, X3, X2, Athlon II X4, X3, X2.

    all systems 4GB ram each.

    Now, I know that such an article would likely take an astronomical amount of time to write, BUT, it would answer a seriously nagging question:

    Do you really need four or more cores in a Multi-GPU system? Do you even need an Intel CPU to effectively run a Multi-GPU system?
    Reply
  • Exelius - Wednesday, August 04, 2010 - link

    I think the reason this hardware is so boring is that the difference between low-end cards and high-end cards is so large. Low-end cards are far more popular, though, and game companies aim for the lowest common denominator. Thus there is no market for exciting cards, because there are no games that can use them.

    NVidia knows this, and is desperately trying to find a new market for its hardware. ATI knew this, which is why the merger with AMD happened. I'm guessing NVidia won't last long as an independent company; Fermi for HPC isn't catching on quickly, and I don't think NVidia is in a stable enough position to convince HPC users to begin the costly and time-consuming project of moving to Fermi. I think they need an Intel, IBM, or HP behind them for that to happen.

    But yes, PC graphics have become boring. Blame $400 PCs and smartphones for that.
    Reply
  • Heatlesssun - Saturday, August 07, 2010 - link

    Haven't played with a high-end system lately, have you? Graphics boring on high-end PCs? You've got to be kidding me! 3D Surround is just amazing stuff, for which that $400 PC and smartphone need not apply. Reply
  • Patrick Wolf - Monday, August 16, 2010 - link

    It'd be great if you explained under what conditions you record temps. Things like using a case or an open bench? Are there any additional fans blowing on the card(s)? Room temp? How long do you run Furmark and what settings are used? Reply
  • Harm Nano - Sunday, May 08, 2011 - link

    GTX 470 SLI runs hot no more! Installed two Zalman 3000N coolers; talk about day and night, well worth the money. Temps before the Zalmans: Kombustor 96C-100C in less than a minute, fans at 100%, loud! Crysis high 80s to low 90s C, fans at 100% as well. With the Zalmans installed: Kombustor 79C-83C after 5 min, steady as a rock; Crysis 67C-76C. The only thing loud now is the CPU cooler. At idle the CPU and the two Zalmans are at the same temps, 39C (AMD core 23C). The GPU fans are manually set in the Asus M4N98TD EVO motherboard BIOS with a low setting and a high setting: on a cool day the RPMs are a low 1400, on a hot day 2800 to 3200 RPM, as close to auto as you can get, cool and quiet. To fit SLI, two things are a must-have: the right MB, the Asus M4N98TD EVO, and the right case, the Tempest EVO. Reply
