Budget Battle: HyperMemory vs. TurboCache

by Derek Wilson on 5/12/2005 9:00 AM EST

33 Comments


  • A554SS1N - Tuesday, May 17, 2005 - link

    Hello Derek, are you planning to test the AGP 6200 version, which uses the NV44A core? It's virtually the same core as the PCI-E TurboCache cards, except that it is a native AGP solution. They are all 64-bit as far as NVIDIA have told me, and although all board makers only provide passive cooling, that's all it needs. I've only seen one brief review on another website so far (of the Inno3D one), and it didn't really have enough information. As reviews on AnandTech are very thorough, it would be good to test and compare this card. The core is clocked at 350MHz like the TurboCache cards, and the card has 128MB of 500MHz effective memory. Reply
  • Zoomer - Monday, May 16, 2005 - link

    http://www.anandtech.com/video/showdoc.aspx?i=2413

    Typo?

    " ...the bear minimum in graphics cards supports DX9 level graphics...
    "

    Shouldn't bear be bare?
    Reply
  • pxc - Friday, May 13, 2005 - link

    The x300 HM results wouldn't be so disappointing if ATI had not been saying that HM cards would be faster than TC cards.

    Derek: 128MB HM doesn't make much of a difference. I had an XPRESS 200M laptop with 128MB dedicated memory plus HyperMemory and performance was horrible, much much slower than a x300 SE. I wouldn't be surprised if turning off HM made the card faster.
    Reply
  • PrinceGaz - Friday, May 13, 2005 - link

    #23 Jarred- "Despite the fact that the system was high-end, the performance of the cards is the limiting factor. A 3.0 GHz or 3000+ CPU with the same amount of RAM would likely post similar scores. However, 512MB of RAM would have something of an impact on performance. Anyway, $100 gets you 1GB of RAM these days, so we're more or less through with testing anything with 512MB of RAM or less."

    Whilst I agree that 1GB is reasonable even for budget systems to have these days, $100 certainly won't buy you 1GB of OCZ PC3200 with 2-2-2 timings. Given that the performance of these cards is very dependent on system memory, and a budget system is likely to use budget memory, it is important to have benchmarks done with 2.5-3-3 timings or even 3-3-3.

    I support AT's use of a very fast CPU etc in all their usual graphics-cards reviews, and a very fast GPU for all their CPU reviews, so that performance is dependent on the component being tested rather than the rest of the system. However with the TurboCache and HyperMemory cards, the system memory effectively becomes an important part of the card, so it is important to test with the sort of budget memory these cards would be used with.

    By using premium memory, AT has probably reduced the importance of the quantity of onboard memory and skewed the relative results of the tested cards. The 64MB TC may have performed significantly better in comparison with the others, had the test system been outfitted with 1GB of budget memory at 3-3-3 timings.
    Reply
  • kmmatney - Thursday, May 12, 2005 - link

    #28, you can get decent integrated graphics with the ATI XPRESS 200 chipset - however the only motherboard at NewEgg is $102, which is a bit high. I once saw them for around $80.

    http://www.newegg.com/Product/Product.asp?Item=N82...
    Reply
  • bupkus - Thursday, May 12, 2005 - link

    "...perhaps upcoming integrated graphics solutions from ATI and NVIDIA will be as compelling as these parts show value products can be."

    I have a need for cheap but functional graphical PCs for smaller children, so I'm still waiting for a replacement for the NVIDIA nForce2 IGP. What's the holdup?
    Reply
  • RadeonGuy - Thursday, May 12, 2005 - link

    Moral of the story is they both suck Reply
  • DerekWilson - Thursday, May 12, 2005 - link

    Our comparison tests will continue to be done on higher end hardware. Our environments are designed to test the performance of graphics hardware not to determine the actual performance an end user can expect.

    We run these games on a fresh install of Windows XP SP2. Audio hardware is not installed, and the audio service is disabled in Windows. Nothing that isn't necessary to get the graphics to the screen is enabled.

    Most people have lots of background programs, services, and audio running. With the way Intel and AMD are approaching dual core, multitasking is poised to become even more pervasive. This will inevitably widen the gap between our tests and real-world performance when we are looking at graphics cards. Our CPU performance tests will continue to grow and include more and more multitasking as we are able to come up with the tests.

    Of course, #25 has a very valid point -- it may well be worth it to run one test in a price-targeted system running a more "typical use" environment for reference purposes. Budget systems cluttered with loads of apps running in the background are no place to build a hardware comparison, but that doesn't mean they are entirely useless.
    Reply
  • jediknight - Thursday, May 12, 2005 - link

    #17
    It would be useful, however, to include benchmarks with all budget hardware - to see if you really can get acceptable (30fps+) gaming performance with these cards in realistic settings.
    Reply
  • Wellsoul2 - Thursday, May 12, 2005 - link

    Thanks Derek.

    Seems crazy that there's no DVI, since CRTs are going away fast. Even my old budget 9200SE card had DVI.

    Just wondering... I would think with 17-inch LCDs around $175 and 19-inch around $300 (with DVI) that this is the future. What do you all think?
    Reply
  • JarredWalton - Thursday, May 12, 2005 - link

    Despite the fact that the system was high-end, the performance of the cards is the limiting factor. A 3.0 GHz or 3000+ CPU with the same amount of RAM would likely post similar scores. However, 512MB of RAM would have something of an impact on performance. Anyway, $100 gets you 1GB of RAM these days, so we're more or less through with testing anything with 512MB of RAM or less.

    Consider these tests as a "best case" scenario for budget graphics performance. It will work, but it won't be too impressive. Dropping detail levels will also help, of course. However, reduced detail levels don't really help out performance in some games as much as you might want. Doom 3, for instance, doesn't get a whole lot faster going from High to Medium to Low detail (at least in my experience).
    Reply
  • patrick0 - Thursday, May 12, 2005 - link

    A budget-card comparison, with 1024MB of system memory? I think it would have been better to use 512MB of system memory. Reply
  • DerekWilson - Thursday, May 12, 2005 - link

    Wellsoul2,

    The ATI solution does not have DVI -- just one HD15 analog port.

    These cards are fine for running native resolution desktops, but you are not going to want to push them over 1024x768 even without AA/AF ... as you can see the numbers dropped off at that resolution for most tests and 800x600 is really the sweet spot for gaming with these cards.

    If you really want to guess about the framerate, just remember that moving from 1024x768 to 1280x1024 increases the number of pixels per frame by a greater amount and percentage than moving from 800x600 to 1024x768. Even though we can't precisely extrapolate performance from our (fairly linear) graph, we can bet that we'd be getting < 30fps, which is considered unplayable by our standards. As HL2 posts relatively high framerates (compared to other games of this generation), we made the call to limit our tests to 1024x768 and lower.

    If LCD users want to game on a 1280x1024 panel with these cards, they'll need to do it at a non-native resolution.
    Reply
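[Editor's note: Derek's pixel-count argument can be verified with quick arithmetic. A minimal sketch (the `pixels` helper is illustrative, not from the article):]

```python
# Pixels per frame at each resolution Derek mentions
def pixels(width, height):
    return width * height

# Absolute and relative increase for each resolution step
step_low = pixels(1024, 768) - pixels(800, 600)      # 800x600 -> 1024x768
step_high = pixels(1280, 1024) - pixels(1024, 768)   # 1024x768 -> 1280x1024

print(step_low, step_low / pixels(800, 600))    # 306432 pixels, ~63.8% more
print(step_high, step_high / pixels(1024, 768)) # 524288 pixels, ~66.7% more
```

The step up to 1280x1024 is indeed larger both in absolute pixels and in percentage terms, which supports the sub-30fps estimate.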
  • Wellsoul2 - Thursday, May 12, 2005 - link

    Again... anyone with an LCD monitor would prefer native resolution numbers. No amount of AA/AF seems to make up for the crappy interpolated video at lower resolutions. At least give the numbers for Doom 3/HL2..
    Reply
  • Wellsoul2 - Thursday, May 12, 2005 - link

    I'd like video reviews with 1280x1024 resolution. This is the native resolution of my 19in LCD.

    It would seem more useful to have tests for the low-budget cards at high resolution for LCD, using no AA/AF.

    People I know are buying the new cheap cards because they have DVI output to use with LCD. If you buy a cheap/mid-priced computer, they often have built-in analog video only.

    I'd like to know if you can run Half-Life 2 at 1280x1024 with no AA/AF on these cards, and what the frame rate is.
    Reply
  • cryptonomicon - Thursday, May 12, 2005 - link

    WTF?

    What is this thing? These puny performance cards with only 32MB of memory can beat previous generations of retail mid-range cards??? How the heck...
    Reply
  • KristopherKubicki - Thursday, May 12, 2005 - link

    stevty2889 and others:

    The tests are run on the highest-end components to ensure that the bottleneck is the video card and not the CPU. Furthermore, since we test every other video card and motherboard on the same setup, it makes sense for us to use the same hardware this time around as well.

    Neither the X300 nor the 6200 will receive a magical advantage by using low end hardware instead of high end.

    Kristopher
    Reply
  • Hikari - Thursday, May 12, 2005 - link

    The X300 is more like a neutered 9600 than a 9200, I thought. Given that the one tested here won in HL2, that would seem to be the case, no? Reply
  • Marlin1975 - Thursday, May 12, 2005 - link

    ^

    Ignore, just re-read :)
    Reply
  • Marlin1975 - Thursday, May 12, 2005 - link

    DerekWilson, see my other post at #5. I then saw that the graphs are not wrong, BUT you said "The 16 and 32 MB TC cards round out the bottom and top of the pack respectively," which is not true. The 64MB and 16MB round out the bottom, while the 32MB is at the top. Reply
  • OrSin - Thursday, May 12, 2005 - link

    Why test $60 video cards on systems with high-end chips and memory? No one tests Goodyear tires on a Ferrari. I want to see these tests run on a 2800+ CPU with budget memory. As it stands, this is a waste to me.

    Also, why not test some onboard graphics against these? To see if it's even worth upgrading from the Intel, ATI, or NV solution to these.

    Reply
  • stevty2889 - Thursday, May 12, 2005 - link

    What the heck? The system it was tested on was:
    Microsoft Windows XP SP2
    ASUS A8N-SLI Deluxe
    AMD Athlon FX-53
    1GB OCZ PC3200 @ 2:2:2:9
    Seagate 7200.7 HD
    OCZ Powerstream 600W PS

    Nobody who buys these cards is going to be running a system like that.. they should have been tested with a Sempron, or a 2.8GHz P4 / A64 2800+ type setup instead..
    Reply
  • DerekWilson - Thursday, May 12, 2005 - link

    Sorry about leaving off the machine specs -- I've updated the article.

    Actually, that was quite an oversight as the system these cards are run in is very important to note when looking at the numbers.

    Marlin1975, InuYasha is correct -- the 32MB card outperformed the 64MB part in almost every test. It wasn't until we upped the resolution to unplayable levels that the 64MB part was able to make up the difference.
    Reply
  • bob661 - Thursday, May 12, 2005 - link

    Does anyone know what motherboard and how much ram was used in their test system? Reply
  • CrystalBay - Thursday, May 12, 2005 - link

    #3 I was wondering the same thing myself, hmmm. Reply
  • InuYasha - Thursday, May 12, 2005 - link

    #4, the 32MB and 64MB results are not backward.

    If I remember correctly from HOCP, the 32MB card has faster memory, and that makes a bigger difference than the amount of memory.
    Reply
  • Icehawk - Thursday, May 12, 2005 - link

    I don't see the machine specs anywhere either. I'm curious whether these were tested on the "standard" uber-machine or on the kind of PC someone buying these cards would actually have. Judging by the numbers generated, I think this was the uber-machine. While it's interesting to see ultimate performance, I think end users would also be served by showing more realistic performance numbers.

    A couple of minor typos ;)
    Reply
  • Marlin1975 - Thursday, May 12, 2005 - link

    OK, I think it is a mistype, not the graphs, that is wrong on page 7...

    "Unreal Tournament 2004 shows our 32MB HyperMemory performing on par with the 64MB TurboCache part in the middle of the pack. The 16 and 32 MB TC cards round out the bottom and top of the pack respectively."
    Reply
  • gimpsoft - Thursday, May 12, 2005 - link

    It'd be a lot better if it could work the other way around: taking video card memory and using it for Windows. Then I'd really pay for it, lol. Letting the video card borrow system memory is a bad idea for the future.

    Anyways, that's just me.
    Reply
  • Marlin1975 - Thursday, May 12, 2005 - link

    Do you have the 32mb and 64mb cards backwards? Reply
  • DAPUNISHER - Thursday, May 12, 2005 - link

    I'm on my first cup of coffee, so help me out here Derek,what was the test setup config please? Reply
  • erinlegault - Thursday, May 12, 2005 - link

    LOL. I beat the losers who care about having the 1st post. Reply
  • erinlegault - Thursday, May 12, 2005 - link

    Unfortunately, I don't think this was a completely fair test of HyperMemory, since the Radeon X300 (synonymous with the 9200) is complete crap, while the GeForce 6200 is probably adequate for most users (I think they're equivalent to or better than the Radeon 9600s). Reply
