General Application Performance

The performance of the board in our application benchmarks was good, and it felt slightly more responsive than the 680i, although the benchmark results were essentially a tossup. While it is not directly competitive with the Intel P965 in our Nero Recode test, the board performed consistently close to it across our full application benchmark suite, which includes audio/video benchmarks not listed in our preview. However, our 975X and P965 chipsets still offer the best overall performance.

In our Nero Recode test we consistently found the performance of the 680i/650i lacking due to disk access issues: the conversion process repeatedly slowed down while the disk was being accessed. The quality of the video conversion was not affected, but it appears that under heavy CPU usage disk performance currently suffers on both NVIDIA chipsets.

Synthetic Graphics Performance

The 3DMark series of benchmarks, developed and provided by Futuremark, is among the most widely used tools for benchmark reporting and comparisons. Although these benchmarks are very useful for apples-to-apples comparisons across a broad array of GPU and CPU configurations, they are not a substitute for actual application and gaming benchmarks. In this sense we consider the 3DMark benchmarks purely synthetic in nature, but still very valuable for providing consistent measurements of performance.

General Graphics Performance

In our 3DMark06 test, each platform's score is basically the same, although the DFI RD600 based motherboard leads slightly. We attribute this small difference to that board automatically overclocking the PCI Express graphics slot to 125MHz with our ATI X1950XT; a similar capability is part of the LinkBoost technology on the NVIDIA 680i motherboards, but only when an approved NVIDIA graphics card is installed.

In the more memory and CPU sensitive 3DMark01 benchmark, the Intel 975X board pulls away from the other boards due to its superior memory bandwidth at stock settings. Although this holds true to some degree for the ASUS P5B-E based on the P965 chipset, it scores slightly lower than our NVIDIA solutions due to the relaxed MCH timings that allow it to excel in overclocking, with FSB rates consistently hitting 520+. And although our Sandra memory bandwidth scores and Memtest86 testing consistently show the RD600 outperforming the 680i or 650i, that advantage does not show up in our 3DMark01 results. The NVIDIA based chipsets perform very well in graphics tests, and the scores from our Intel DB975XBX2 indicate a highly tuned system.

General System Performance

The PCMark05 benchmark, developed and provided by Futuremark, was designed to determine overall system performance for the typical home computing user. The tool provides both system and component level results using subsets of real world applications or programs. It is useful for providing comparative results across a broad array of graphics, CPU, hard disk, and memory configurations, along with multithreading results. In this sense we consider the PCMark benchmark both synthetic and real world in nature, while still providing consistency in our benchmark results.

The ASUS 650i is competitive in this benchmark, although we expected slightly better performance based upon our other results. The 650i and 680i scored very well in the single-task disk benchmarks, and both performed almost equally on the graphics subsystem tests, where they led the field. However, our 975X and P965 chipset boards won the multitasking tests, while the RD600 finished in the middle of the pack on most of the tests.

Comments

  • JarredWalton - Monday, December 25, 2006 - link

    The big problem with AGP is that it only allowed for one high-speed port. PCIe allows for many more (depending on chipset), plus you get high up and down bandwidth, whereas AGP had fast writes (CPU to card) but slow reads (card to CPU). X8 PCIe is still at least as fast as 8X AGP in terms of bandwidth, and in most instances we aren't stressing that level of bandwidth.
  • Lord Evermore - Monday, December 25, 2006 - link

    x8 PCIe can be as slow as AGP4X depending on the traffic pattern. 4 lanes of PCIe (or 8 half-lanes technically; the number of lanes in each direction in x8) is 1GBps, AGP4X is 1.066GBps. So if most of the data were being streamed in one direction, those two would be equivalent, theoretically. AGP8X would have 2.13GBps in which to stream that uni-directional data. If half the data were going in each direction, then x8 PCIe would be equivalent to AGP8X since they'd both have 1GBps available for each direction, or 2GBps half the time for AGP actually (though performance might be lower with AGP because of the non-independent half-duplex nature).

    But since AGP4X is probably still capable of handling the majority of applications, it doesn't really matter much.

    Too bad we can't manually control the number of lanes in use for a particular slot. It would be very interesting to compare performance with the same graphics card on the same mainboard at x1 (which, depending on the traffic pattern, could be roughly equal to a simple PCI card or AGP1X), x2, x4, x8 and x16 (since x16 can in some cases be comparable to AGP8X). That would help to say definitively whether all the increased bandwidth is actually making a difference, or whether other factors are involved; a rough sketch of the theoretical peak numbers follows below.
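
To put rough numbers on the AGP versus PCI Express comparison in the comments above, here is a minimal back-of-the-envelope sketch. The figures used (roughly 250 MB/s per first-generation PCIe lane per direction after encoding overhead, and the commonly quoted AGP peaks) are theoretical maximums assumed purely for illustration; sustained throughput is lower and, as the commenters note, depends heavily on the traffic pattern.

```python
# Back-of-the-envelope comparison of theoretical peak bandwidths for AGP and
# first-generation PCI Express links. All figures are commonly quoted
# theoretical maximums, used here purely for illustration.

# AGP is a shared, half-duplex bus: the quoted number covers reads and writes combined.
AGP_PEAK_MB_S = {
    "AGP 1X": 266,
    "AGP 2X": 533,
    "AGP 4X": 1066,
    "AGP 8X": 2133,
}

# PCIe 1.x carries roughly 250 MB/s per lane in each direction, and the two
# directions are independent (full duplex).
PCIE_LANE_MB_S = 250


def pcie_peak(lanes):
    """Return (per-direction, aggregate) theoretical bandwidth in MB/s for a PCIe 1.x link."""
    per_direction = lanes * PCIE_LANE_MB_S
    return per_direction, 2 * per_direction


if __name__ == "__main__":
    for name, peak in AGP_PEAK_MB_S.items():
        print(f"{name}: {peak} MB/s shared between both directions")
    for lanes in (1, 2, 4, 8, 16):
        one_way, total = pcie_peak(lanes)
        print(f"PCIe x{lanes}: {one_way} MB/s each way ({total} MB/s aggregate)")
```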
  • Lord Evermore - Monday, December 25, 2006 - link

    AGP 3.0 supports multiple slots depending on what the chipset is designed to support. According to Wikipedia, HP AlphaServer GS1280 has up to 16 AGP slots. Those basically all connect to a single interface on the chipset. It's likely that since it's a part of the AGP3 spec, every chipset could have supported multiple ports, but normal mainboard makers never used it. There were probably reasons that it wouldn't have worked well for an SLI type feature, possibly the read/write bandwidth issue.

    Any chipset designer also could have just put in multiple AGP interfaces, I bet, even if they only supported one card at a time. Don't know what effect that would have on bandwidth or contention for access to the CPU. The cards probably also would not have been able to work in any sort of SLI configuration where the data had to go over the chipset bus.
  • PrinceGaz - Friday, December 22, 2006 - link

    Your article starts with questions about this, and they remain unresolved at least up through the nForce4 chipsets to my knowledge (because I have one). Of course I'm not stupid enough to risk using nVidia's hardware firewall and associated drivers, but even their IDE drivers can cause a normal installation of Windows XP to have trouble starting, which means I cannot safely enable NCQ (I have a dual-core processor) or even benefit from any acceleration the nForce4 chipset might provide, because the nVidia drivers are unstable.

    I once used to trust nVidia, especially with drivers back in the early GeForce days, but the latest official GeForce drivers have been bug-ridden what with incorrect monitor refresh-rate detection (even after using the .inf file), and stupidity like doubling the reported memory clock speed of the card when it had always previously been correct.

    Their good graphics-card drivers were why I bought an nForce4 based board, and also on this site's recommendation, and I must admit I'm only so-so about it. It works and does everything it says it should on the box, but the computer doesn't feel as responsive as it should and I suspect that is partly because I had to revert to the default Microsoft disk drivers.

    All reviews of nVidia chipset motherboards should include a mention about their driver issues (bugs) until they are fixed. Just because you test a mobo for one day and it seems to work and overclock to a given level, does not mean it can be trusted day-in day-out. If you cannot install the IDE drivers, then NCQ and other hard-drive features are negated. If the hardware firewall drivers are so bad no one with any sense goes near them, then that hardware in the chipset is worthless and could best be described as a liability.

    I like this site, but it would be nice if you sometimes looked back on products you were given earlier in the year and reported on whether they actually lived up to expectations. Assuming you get to keep any of your stuff. If you don't, then the opinions of the writers become almost meaningless, because anything looks good for a day or two.
  • Tanclearas - Saturday, December 23, 2006 - link

    Gary Key should be sensitive to this issue more than anyone. Gary tried to facilitate contact between me and Nvidia to try to nail down the cause of the hardware firewall corruption issues. He contacted Nvidia several times for me, and I was contacted by an Nvidia rep twice. I provided the Nvidia rep with detailed steps that I had used to install Windows and the drivers. I conducted tests without any software installed, and continually experienced issues. I provided screen shots of errors to the rep as well. I offered to install Windows and drivers of any version they requested, using whatever steps they wanted.

    After providing them with all of the details and making that offer, Nvidia never contacted me again. Gary followed up with me, and contacted Nvidia again on my behalf to try to get them to get in touch with me. Ultimately, they just removed official support for the firewall. I am honestly surprised a class action suit never came of it. Nvidia used the hardware firewall as a selling feature, then made no attempt to solve the issues that were being experienced by many users, and finally just pulled the plug on it.

    Anyway, I too have little faith in Nvidia actually taking the issues seriously and finding a solution. I'm not going to say that I'll never buy a board with an Nvidia chipset again, but I can guarantee I won't be buying 680/650 when there are already known issues, and any future board based on an Nvidia chipset will have to go through months of retail availability and positive user feedback before I'd be willing to try again.
  • LoneWolf15 - Tuesday, December 26, 2006 - link

    Insightful post. I'm still using an nForce 4 Ultra chipset board (MSI 7125 K8N Neo4 Platinum), and it's been good for me, but I've never used their firewall software after hearing reports from others.

    The current 680i issues have led me to the same conclusion as you: I have no interest in buying an nVidia chipset mainboard next time around (so far, Intel's i975X seems to be the only one I'd be interested in). It seems nVidia has a history of sweeping troubles (i.e., this issue, first-generation PureVideo fiascos with the NV40/45 graphics chipsets that I'm surprised never caused a class-action, the nForce3 250Gb firewall that didn't provide the acceleration they first claimed it did) under the rug if they cannot resolve them through software fixes, and hope nobody raises enough of a ruckus (a method which seems to have worked well for them).

    I've just bought a new Geforce graphics card, but experiencing the PureVideo issues alone caused me to skip to ATI for two generations. It's also taught me to read forums with additional user experiences of a product for the first month after release, before I purchase. It seems review sites often miss driver issues/bugs in first-rev. hardware, due to limited time envelopes for review, or not being able to test with as wide a variety of hardware as the community (admittedly, not their fault). I'm not willing to pay the early-adopter/rev 0.9 price any more.
  • KeypoX - Saturday, December 23, 2006 - link

    anyone notice how low quality these articles have become? A couple years ago this site was a decent place to get some info but now ...

    Please go back to the old good quality, cause now you guys are not good at all ... I feel pretty sad every time I visit the site
  • Xcom1Cheetah - Friday, December 22, 2006 - link

    Was just wondering, aren't the power numbers at idle and full load a little too high for the stability of the system? I'm not sure, but I feel the higher power draw is going to reduce the stability of the overclock in the long run...
    Performance and feature wise it looks pretty ideal to me... if only its power numbers had been in line with the P965.

    Any chance of these power numbers coming down with a BIOS fix/update?
  • JarredWalton - Friday, December 22, 2006 - link

    I doubt the power req's will drop much at all over time. However, higher power draw doesn't necessarily mean less stable. It does mean you usually need more cooling, but a lot of it is simply a factor of the chipset design. I'm pretty sure 650i is a 90nm process technology, but for whatever reason NVIDIA has always made chips that run hot. The Pentium 4 wasn't less stable because it used more power, though, and neither is the nForce series.

    Perhaps part of the cause of the high power is that NVIDIA uses HyperTransport as well as the Intel FSB architecture. Then having two chips that run hot.... Added circuitry to go from one to the other? I don't know. Still, the ~40W power difference is pretty amazing (in a bad way).
  • Avalon - Friday, December 22, 2006 - link

    For $130, that's a pretty good looking board. I was expecting the 650SLI chipset based boards to be more around $150-$175. Now this makes me curious as to how 650Ultra will pan out.
