Test Setup

Standard Test Bed
Performance Test Configuration
Processor: Intel Core 2 Duo E6600 (2.4GHz, 4MB Unified Cache)
RAM: OCZ Flex XLC (2x1GB) at 2.30V (Micron Memory Chips)
Hard Drives: Western Digital 150GB 10,000RPM SATA (16MB Buffer); Seagate 750GB 7200.10 7200RPM SATA (16MB Buffer)
System Platform Drivers: Intel 8.1.1.1010; NVIDIA 9.35, 8.26; ATI 6.10
Video Cards: 1 x MSI X1950XTX; 2 x MSI 8800GTX for SLI testing
Video Drivers: ATI Catalyst 6.11; NVIDIA 97.44 for SLI testing
CPU Cooling: Scythe Infinity
Power Supply: OCZ GameXStream 700W
Optical Drives: Sony 18X AW-Q170A-B2; Plextor PX-B900A
Case: Cooler Master CM Stacker 830
Motherboards: ASUS Striker Extreme (NVIDIA 680i) - BIOS 0505
ASUS P5N-E SLI (NVIDIA 650i) - BIOS 0101
ASUS P5B-E (Intel P965) - BIOS 0601
DFI LANParty UT ICFX3200-T2R/G (AMD RD600) - BIOS 12/07
Intel D975XBX2 (Intel 975X) - BIOS 2333
Operating System: Windows XP Professional SP2

A 2GB memory configuration is now standard in the AnandTech test bed, as most enthusiasts are currently purchasing this amount of memory. Our high-end OCZ Flex XLC memory offered a very wide range of memory settings during our stock and overclocked test runs. We also utilized our Corsair XMS2 Dominator (Twin2x2048-9136C5D) memory on this board to verify DDR2-1066 compatibility with another memory type, and we are currently testing several other memory modules ranging from TwinMOS DDR2-800 down to A-DATA DDR2-533 for compatibility and performance benchmarks. Memory timings for each board are set by determining the best memory bandwidth via MemTest86 and our test application results; this includes optimizing the memory sub-timings to ensure each board performs at its absolute best.
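As a point of reference for this tuning, theoretical peak bandwidth scales directly with the memory speed grade, while the sub-timings determine how much of it is actually sustained. A minimal sketch of the arithmetic (assuming DDR2's standard 64-bit bus per channel; these are spec-sheet peaks, not MemTest86 results):

```python
# Theoretical DDR2 peak bandwidth: data rate (MT/s) x 8 bytes per channel.
# A sketch only; sustained bandwidth depends on the timings and sub-timings
# discussed above and always comes in below these numbers.

def ddr2_peak_gbps(data_rate_mt_s: float, channels: int = 2) -> float:
    """Peak bandwidth in GB/s for a 64-bit (8-byte) bus per channel."""
    return data_rate_mt_s * 8 * channels / 1000.0

for grade in (533, 667, 800, 1066):
    print(f"DDR2-{grade}: {ddr2_peak_gbps(grade):.1f} GB/s dual channel")
# DDR2-533 -> 8.5, DDR2-800 -> 12.8, DDR2-1066 -> 17.1 GB/s
```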

We are utilizing the MSI X1950XTX video card to ensure our 1280x1024 results are not completely GPU bound for our motherboard tests. We did find that applying a 4xAA/8xAF setting in most of today's latest games creates a situation where system performance starts becoming GPU limited. Our video tests are run at 1280x1024 resolution at High Quality settings for this article, and we also tested at 1600x1200 4xAA/8xAF for our NVIDIA SLI results on the two NVIDIA based boards. We completed the same SLI tests at 1920x1200 4xAA/8xAF as well, but did not report those scores as the performance delta between the boards was the same as at 1600x1200.

All of our tests are run in an enclosed case with a dual optical/hard drive setup to reflect a moderately loaded system platform. Windows XP Pro SP2 is fully updated and we load a clean drive image for each system to ensure driver conflicts are kept to a minimum.

Overclocking

ASUS P5N-E SLI
Overclocking Testbed
Processor: Intel Core 2 Duo E6600
Dual Core, 2.4GHz, 4MB Unified Cache
1066FSB, 7x Multiplier
CPU Voltage: 1.52500V (default 1.3250V)
Cooling: Scythe Infinity Air Cooling
Power Supply: OCZ GameXStream 700W
Memory: OCZ Flex XLC (2x1GB) (Micron Memory Chips)
Video Cards: 1 x MSI X1950XTX; 2 x MSI 8800GTX for SLI
Hard Drive: Western Digital 150GB 10,000RPM SATA 16MB Buffer
Seagate 750GB 7200.10 7200RPM SATA 16MB Buffer
Case: Cooler Master CM Stacker 830
Maximum CPU OC: 402x9 = 3622MHz (+51%), memory 4-4-4-12 1T at 804MHz, 2.34V; CPU 1.52500V
Maximum FSB OC: 502x7 = 3519MHz (+89% FSB), memory 4-4-4-12 2T at 804MHz, 2.34V; CPU 1.50000V


We were easily able to reach a final benchmark-stable setting of 402FSB at a 9x multiplier, resulting in a clock speed of 3622MHz. The board was capable of running at 9x409 but would consistently fail several of our game benchmarks. We attributed this limit to the lack of proper cooling for our MCP and SPP at the voltages we set. After adding fan cooling and proper heatsink paste to the SPP, along with a small passive heatsink to the MCP, we reached 9x417 before stability became an issue again; we believe this is near the limit of this CPU/chipset combination. Vdroop was terrible on this board during overclocking, with an average drop of 0.06~0.09V during load testing.
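The clock figures in the table above follow from simple FSB x multiplier arithmetic; a quick sketch (the fractional FSB values are assumptions chosen to match the reported clocks, since boards typically run the bus a fraction of a MHz high):

```python
# Core 2 clock arithmetic: core clock = FSB (MHz) x multiplier.
# E6600 stock: 266.7MHz FSB (1066 quad-pumped / 4) x 9 = 2400MHz.

STOCK_FSB = 266.7
STOCK_CLOCK = 2400.0

def core_clock(fsb_mhz: float, multiplier: int) -> float:
    return fsb_mhz * multiplier

def pct_gain(new: float, old: float) -> float:
    return (new / old - 1.0) * 100.0

cpu_oc = core_clock(402.4, 9)   # ~3622MHz; the "402x9" result above
fsb_oc = core_clock(502.7, 7)   # ~3519MHz; the "502x7" result above
print(f"CPU OC: {cpu_oc:.0f}MHz (+{pct_gain(cpu_oc, STOCK_CLOCK):.0f}%)")
# The table rounds the FSB gain to +89%:
print(f"FSB OC: {fsb_oc:.0f}MHz core, FSB +{pct_gain(502.7, STOCK_FSB):.1f}%")
```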



We operated our memory at 4-4-4-12 1T with all sub-timings set to Auto for a final speed of 804MHz. We could not get the board stable with CAS 3 and 1T settings at DDR2-800. We were finally able to reach DDR2-1066 in a stable manner at 5-6-6-24 2T, but the memory latencies were so relaxed that performance actually decreased in most benchmarks. The BIOS sets the majority of the memory sub-timings, along with the SPP timings, very tight; we found this to be the source of our trouble when trying to operate the memory at higher speeds or at lower latencies. On the other hand, those aggressive sub-timings are why this board performed extremely well at 4-4-4-12 1T. We were able to run 4-4-4-12 2T timings up to around DDR2-880 but had to switch to 5-5-5-15 2T up to DDR2-1000; anything over that required very loose timings across all memory settings for stability.
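The DDR2-1066 regression is easier to see once the timings are converted from clock cycles into nanoseconds; a small sketch of that conversion (DDR2's command clock runs at half the effective data rate):

```python
# Convert memory timings (in clocks) to absolute latency (in ns).
# DDR2-800 runs a 400MHz I/O clock, DDR2-1066 a 533MHz clock.

def timing_ns(cycles: int, data_rate_mt_s: float) -> float:
    io_clock_mhz = data_rate_mt_s / 2.0
    return cycles * 1000.0 / io_clock_mhz

settings = [
    ("DDR2-800 4-4-4-12 1T", 800, (4, 4, 4, 12)),
    ("DDR2-1066 5-6-6-24 2T", 1066, (5, 6, 6, 24)),
]
for label, rate, timings in settings:
    ns = [round(timing_ns(t, rate), 1) for t in timings]
    print(label, "->", ns, "ns")
# DDR2-800 4-4-4-12  -> [10.0, 10.0, 10.0, 30.0] ns
# DDR2-1066 5-6-6-24 -> [9.4, 11.3, 11.3, 45.0] ns
# Add the slower 2T command rate and the relaxed Auto sub-timings, and the
# extra bandwidth fails to offset the added latency in most benchmarks.
```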


We dropped the multiplier on our E6600 to seven and were able to reach 502FSB without an issue. We were able to enter XP at 7x509, and the board would POST at 7x519. However, at least with our sample, stability over 504FSB dropped off quickly, and the board appears to have been designed with 500FSB in mind but not much more. We have seen user results in the 480~506FSB range with and without additional cooling, so we feel confident that our review sample is not "special". We have a full retail kit arriving and will verify our results against it; we will also utilize the retail kit when we have other 650i SLI boards to review. Overall, the overclocking capability surprised us, as we initially expected this chipset to top out closer to a maximum bus speed of around 450FSB.

Comments

  • JarredWalton - Monday, December 25, 2006 - link

    The big problem with AGP is that it only allowed for one high-speed port. PCIe allows for many more (depending on the chipset), plus you get high bandwidth both up and down, whereas AGP had fast writes (CPU to card) but slow reads (card to CPU). x8 PCIe is still at least as fast as AGP8X in terms of bandwidth, and in most instances we aren't stressing that level of bandwidth anyway.
  • Lord Evermore - Monday, December 25, 2006 - link

    x8 PCIe can be as slow as AGP4X depending on the traffic pattern. 4 lanes of PCIe (or 8 half-lanes technically; the number of lanes in each direction in x8) is 1GBps, AGP4X is 1.066GBps. So if most of the data were being streamed in one direction, those two would be equivalent, theoretically. AGP8X would have 2.13GBps in which to stream that uni-directional data. If half the data were going in each direction, then x8 PCIe would be equivalent to AGP8X since they'd both have 1GBps available for each direction, or 2GBps half the time for AGP actually (though performance might be lower with AGP because of the non-independent half-duplex nature).

    But since AGP4X is probably still capable of handling the majority of applications, it doesn't really matter much.

    Too bad we can't manually control the number of lanes in use to a particular slot. It would be very interesting to compare performance using the same graphics card on the same mainboard using x1, which could depending on the pattern be about equal to a simple PCI card or AGP1X, to x2, x4, x8 and x16 (since x16 can in some cases be comparable to AGP8X). That would help to definitively say whether all the increased bandwidth is actually making a difference, or if other factors are involved.
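For readers following the arithmetic in the two comments above, the spec-sheet peaks are easy to tabulate. A rough sketch (assuming first-generation PCIe at 250MB/s per lane per direction; note that AGP's figure is a half-duplex total shared between directions):

```python
# Spec-sheet peak bandwidth for the AGP-vs-PCIe comparison debated above.
# Real throughput is lower and, as noted in the thread, the effective
# comparison depends heavily on the read/write traffic pattern.

AGP_1X_MBPS = 266.0        # AGP base rate; the bus is shared, half-duplex

def agp_peak_gbps(multiplier: int) -> float:
    return AGP_1X_MBPS * multiplier / 1000.0   # total, both directions

PCIE1_LANE_MBPS = 250.0    # PCIe 1.x, per lane, per direction (full-duplex)

def pcie_peak_gbps(lanes: int) -> float:
    return PCIE1_LANE_MBPS * lanes / 1000.0    # each direction

print(f"AGP4X: {agp_peak_gbps(4):.2f} GB/s shared")    # ~1.07
print(f"AGP8X: {agp_peak_gbps(8):.2f} GB/s shared")    # ~2.13
for n in (1, 4, 8, 16):
    print(f"PCIe x{n}: {pcie_peak_gbps(n):.2f} GB/s each direction")
```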
  • Lord Evermore - Monday, December 25, 2006 - link

    AGP 3.0 supports multiple slots depending on what the chipset is designed to support. According to Wikipedia, HP AlphaServer GS1280 has up to 16 AGP slots. Those basically all connect to a single interface on the chipset. It's likely that since it's a part of the AGP3 spec, every chipset could have supported multiple ports, but normal mainboard makers never used it. There were probably reasons that it wouldn't have worked well for an SLI type feature, possibly the read/write bandwidth issue.

    Any chipset designer also could have just put in multiple AGP interfaces, I bet, even if they only supported one card at a time. I don't know what effect that would have on bandwidth or contention for access to the CPU. The cards probably also would not have been able to work in any sort of SLI configuration where the data had to go over the chipset bus.
  • PrinceGaz - Friday, December 22, 2006 - link

    Your article starts with questions about this, and they remain unresolved at least up through the nForce4 chipsets to my knowledge (because I have one). Of course I'm not stupid enough to risk using nVidia's hardware firewall and associated drivers, but even their IDE drivers can cause a normal installation of Windows XP to have trouble starting, which means I cannot safely enable NCQ (I have a dual-core processor) or even benefit from any acceleration the nForce4 chipset might provide, because the nVidia drivers are unstable.

    I once used to trust nVidia, especially with drivers back in the early GeForce days, but the latest official GeForce drivers have been bug-ridden what with incorrect monitor refresh-rate detection (even after using the .inf file), and stupidity like doubling the reported memory clock speed of the card when it had always previously been correct.

    Their good graphics-card drivers were why I bought an nForce4 based board, and also on this site's recommendation, and I must admit I'm only so-so about it. It works and does everything it says it should on the box, but the computer doesn't feel as responsive as it should and I suspect that is partly because I had to revert to the default Microsoft disk drivers.

    All reviews of nVidia chipset motherboards should include a mention about their driver issues (bugs) until they are fixed. Just because you test a mobo for one day and it seems to work and overclock to a given level, does not mean it can be trusted day-in day-out. If you cannot install the IDE drivers, then NCQ and other hard-drive features are negated. If the hardware firewall drivers are so bad no one with any sense goes near them, then that hardware in the chipset is worthless and could best be described as a liability.

    I like this site, but it would be nice if you sometimes looked back on products you were given earlier in the year and reported on whether they actually lived up to expectations. Assuming you get to keep any of your stuff. If you don't, then the opinions of the writers become almost meaningless, because anything looks good for a day or two.
  • Tanclearas - Saturday, December 23, 2006 - link

    Gary Key should be sensitive to this issue more than anyone. Gary tried to facilitate contact between me and Nvidia to try to nail down the cause of the hardware firewall corruption issues. He contacted Nvidia several times for me, and I was contacted by an Nvidia rep twice. I provided the Nvidia rep with detailed steps that I had used to install Windows and the drivers. I conducted tests without any software installed, and continually experienced issues. I provided screen shots of errors to the rep as well. I offered to install Windows and drivers of any version they requested, using whatever steps they wanted.

    After providing them with all of the details and making that offer, Nvidia never contacted me again. Gary followed up with me, and contacted Nvidia again on my behalf to try to get them to get in touch with me. Ultimately, they just removed official support for the firewall. I am honestly surprised a class action suit never came of it. Nvidia used the hardware firewall as a selling feature, then made no attempt to solve the issues that were being experienced by many users, and finally just pulled the plug on it.

    Anyway, I too have little faith in Nvidia actually taking the issues seriously and finding a solution. I'm not going to say that I'll never buy a board with an Nvidia chipset again, but I can guarantee I won't be buying 680/650 when there are already known issues, and any future board based on an Nvidia chipset will have to go through months of retail availability and positive user feedback before I'd be willing to try again.
  • LoneWolf15 - Tuesday, December 26, 2006 - link

    Insightful post. I'm still using an nForce 4 Ultra chipset board (MSI 7125 K8N Neo4 Platinum), and it's been good for me, but I've never used their firewall software after hearing reports from others.

    The current 680i issues have led me to the same conclusion as you: I have no interest in buying an nVidia chipset mainboard next time around (so far, Intel's i975X seems to be the only one I'd be interested in). It seems nVidia has a history of sweeping troubles (e.g., this issue, the first-generation PureVideo fiasco with the NV40/45 graphics chips that I'm surprised never caused a class action, the nForce3 250Gb firewall that didn't provide the acceleration they first claimed it did) under the rug if they cannot resolve them through software fixes, hoping nobody raises enough of a ruckus (a method which seems to have worked well for them).

    I've just bought a new Geforce graphics card, but experiencing the PureVideo issues alone caused me to skip to ATI for two generations. It's also taught me to read forums with additional user experiences of a product for the first month after release, before I purchase. It seems review sites often miss driver issues/bugs in first-rev. hardware, due to limited time envelopes for review, or not being able to test with as wide a variety of hardware as the community (admittedly, not their fault). I'm not willing to pay the early-adopter/rev 0.9 price any more.
  • KeypoX - Saturday, December 23, 2006 - link

    anyone notice how low quality these articles have become? A couple years ago this site was a decent place to get some info but now ...

    Please go back to the old good quality, cause now you guys are not good at all ... i feel pretty sad every time i visit the site
  • Xcom1Cheetah - Friday, December 22, 2006 - link

    Was just wondering, aren't the idle and full load power numbers a little too high for the stability of the system? I'm not sure, but I feel the higher power draw is going to reduce the stability of the overclock in the long run...
    Performance and feature wise it looks pretty ideal to me... if only its power numbers were in line with the P965.

    Any chance of these power numbers coming down with a BIOS fix/update?
  • JarredWalton - Friday, December 22, 2006 - link

    I doubt the power req's will drop much at all over time. However, higher power draw doesn't necessarily mean less stable. It does mean you usually need more cooling, but a lot of it is simply a factor of the chipset design. I'm pretty sure 650i is a 90nm process technology, but for whatever reason NVIDIA has always made chips that run hot. The Pentium 4 wasn't less stable because it used more power, though, and neither is the nForce series.

    Perhaps part of the cause of the high power is that NVIDIA uses HyperTransport as well as the Intel FSB architecture. Then having two chips that run hot.... Added circuitry to go from one to the other? I don't know. Still, the ~40W power difference is pretty amazing (in a bad way).
  • Avalon - Friday, December 22, 2006 - link

    For $130, that's a pretty good looking board. I was expecting the 650SLI chipset based boards to be more around $150-$175. Now this makes me curious as to how 650Ultra will pan out.
