Original Link: http://www.anandtech.com/show/2521
NVIDIA 780a: Integrated Graphics and SLI in One
by Gary Key on May 6, 2008 12:00 AM EST
After being blitzed by the NVIDIA marketing machine at CES 2008 about upcoming chipsets, we were excited about the technological possibilities NVIDIA was planning to deliver a few weeks later. As it turns out, it was a few months later, but as of today NVIDIA is officially introducing the nForce 780a SLI chipset and its family companions, the 750a SLI and 730a chipsets.
At first look, it appears NVIDIA has mastered the marketing checklist with features ranging from HyperTransport 3.0 and PCI Express 2.0 to the environmentally friendly HybridPower and performance-enhancing Hybrid SLI capabilities. Of course, AMD has featured HT 3.0 and PCI Express 2.0 on its 790FX chipset since November, and the 780G has had Hybrid CrossFire operating since March. However, AMD does not offer HybridPower capabilities, nor does the flagship 790FX offer integrated graphics. We will have to wait a few more months for the AMD 790GX to arrive for those two features.
In the meantime, NVIDIA sits alone as it starts to roll out integrated graphics on all of its chipsets over the next few months. NVIDIA is calling this technology a motherboard GPU or mGPU for short. We think the inclusion of integrated graphics on all chipsets is a definite step in the right direction and one that we applaud if done correctly. Our first results indicate that NVIDIA is on the right path, although one that was a little bumpy for us.
The most important design element of the nForce 780a SLI and other chipsets in this product family is the mGPU. Based upon the 8400GS core, it offers decent casual gaming and application performance as a standalone unit. This capability is nothing new, as integrated graphics chipsets have been around for a long time. The IG performance is clearly a step above what NVIDIA has offered in the past, but a step below the current 780G from AMD. Besides offering extensive HD playback capabilities and additional monitor outputs, its primary purpose is serving as the gateway to NVIDIA's Hybrid SLI technology.
Hybrid SLI consists of two distinct technologies: GeForce Boost and HybridPower. GeForce Boost pairs the mGPU with a discrete graphics card (dGPU) in an SLI configuration to improve 3D performance. Since the mGPU is an 8400GS in disguise, the natural pairing for this technology is a discrete 8400GS card. NVIDIA also supports the 8500GT, as its performance closely matches that of the mGPU; anything higher would result in a performance mismatch and negate any benefit of adding an inexpensive dGPU.
The true technological gem is the HybridPower functionality, as it allows the mGPU to handle the display for most application tasks and high-definition playback duties while the discrete graphics card sits in standby, waiting to tackle demanding 3D tasks. We use the term standby, but the system actually turns off the dGPU to conserve power until it is required. In actual practice, we noticed a slight delay when switching from the mGPU to the dGPU, something we believe driver and BIOS tuning can resolve. However, the biggest drawback at this time is that only two discrete graphics solutions are supported: the 9800GTX and 9800GX2 cards.
So let’s take a detailed look at the chipset specifications and delve into the performance results of the 780a SLI chipset against its immediate competition from AMD.
One Chipset Fits All
NVIDIA is targeting the 780a as their top chipset for the AMD enthusiast and has gone to great lengths to ensure this is a better alternative than the AMD 790FX. NVIDIA designed this chipset to provide a total platform solution that includes a robust integrated graphics engine or the option to run NVIDIA’s SLI, Quad SLI, or 3-way SLI configuration.
The NVIDIA 780a SLI chipset is built on TSMC's 65nm process technology. NVIDIA is keeping the exact transistor count private along with several other details concerning the internal layout. That said, the 780a SLI product family includes the 780a SLI MCP and the often-utilized nForce 200 chip that provides an extra 32 lanes of PCI Express 2.0. NVIDIA splits these 32 lanes into a single x16 link and dual x8 links for three-way SLI.
The 780a SLI MCP provides 19 lanes in total: 16 are dedicated to the nForce 200 link and the other three are available for x1 slots. The NVIDIA 780a SLI MCP sports an integrated GigE MAC interface, 12 USB 2.0 ports, an HDA audio interface, five PCI slots, and support for six SATA drives and two PATA peripherals. RAID 0, 1, 0+1, and 5 are fully supported, along with native AHCI support.
NVIDIA uses a highly optimized unified memory architecture (UMA) design: all graphics memory is shared with system memory, with the ability to access up to 512MB of it. The core clock operates at 500MHz, but unlike the 780G, we have not discovered any BIOS options offering the ability to overclock the core. We did try NVIDIA's performance tool to overclock the core and shaders, but our test sample would lock up at any setting.
Hybrid SLI support is fully implemented on the 780a with the release of the 174.15 driver set this week. Hybrid SLI supports current 8400GS and 8500GT based cards. In early testing, we have seen increases of up to 30% in games with an 8400GS card. The additional card completely changes the gameplay dynamics of this chipset and allows several recent games to run at 1024x768 or 1280x1024 medium quality settings while keeping frame rates in the 30FPS to 50FPS range. However, smooth gameplay at those settings is not possible in Crysis unless you drop the quality setting to low.
NVIDIA integrates their PureVideo HD capabilities into the 780a. PureVideo HD offers hardware acceleration for decoding VC-1, H.264 (AVC), WMV, and MPEG-2 sources up to 1080p resolutions. Advanced de-interlacing is available when using a Phenom processor. We generally found CPU utilization rates and output quality to be near or equal to that of the 9600GT.
On the audio side, the HDMI interface offers support for 8-channel LPCM, provided you install the necessary NVIDIA driver set. Our driver support disks had this driver installation tucked away from the normal chipset installation, so be sure to load it if you want multi-channel LPCM. This feature matches Intel’s G35 chipset and is a far better alternative to the AMD 780G that sports 2-channel LPCM for the HTPC audience.
Rounding out the video capabilities of the 780a are analog output, DVI/HDMI interfaces, and an internal TMDS. The 780a features dual independent displays that allow resolution, refresh rates, and display data to be completely independent on the two display paths. NVIDIA provides HDCP support with on-chip key storage for the DVI or HDMI interfaces, but it is restricted to a single interface during playback operations. The biggest drawback we found was the 300MHz RAMDAC, which only supports resolutions up to 1920x1440 at 75Hz. DVI support is limited to a single-link TMDS rated at a 162MHz pixel clock, which translates to a 1920x1200 resolution limit.
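The arithmetic behind those display limits is straightforward. The sketch below uses our own rough blanking-overhead estimates (the exact figures depend on the timing standard the display uses), so treat the numbers as illustrative rather than exact:

```python
# Approximate pixel-clock requirements for the display modes discussed above.
# Blanking overhead is assumed: ~25% for traditional timings, ~8% for
# reduced-blanking timings common on digital panels.

def pixel_clock_mhz(width, height, refresh_hz, blanking_overhead=0.25):
    """Approximate pixel clock in MHz for a given display mode."""
    return width * height * refresh_hz * (1 + blanking_overhead) / 1e6

# 1920x1440 @ 75Hz pushes right up against the 300MHz RAMDAC:
print(round(pixel_clock_mhz(1920, 1440, 75)))        # ~259MHz, fits under 300MHz

# 1920x1200 @ 60Hz with reduced blanking fits the 162MHz single-link TMDS:
print(round(pixel_clock_mhz(1920, 1200, 60, 0.08)))  # ~149MHz

# A 30" panel's 2560x1600 @ 60Hz needs far more than one link can carry:
print(round(pixel_clock_mhz(2560, 1600, 60, 0.08)))  # ~265MHz, dual-link only
```

The last mode is why 30" displays are a problem for this platform, as we discuss later.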
HyperTransport 3.0 capability (5.2GT/s+ interface) is included and is important in getting the most out of the 780a graphics core. With a Phenom onboard, the 780a will perform post-processing on high-definition content and it makes a difference in image quality and fluidity during 1080p playback.
The Rest of the Family
Remove the nForce 200 chipset and what we have left is the nForce 750a SLI product. In our initial testing, we have not noticed any measurable differences in performance between the two chipsets.
NVIDIA is also providing the nForce 730a product for entry-level configurations. This MCP will not officially support DDR2-1066 memory or provide SLI capabilities, aside from Hybrid SLI. The general performance of this chipset in the right configuration will equal its big brothers.
We have received conflicting information from NVIDIA as to whether or not the 730a supports HybridPower. This diagram indicates that it doesn't; however, NVIDIA's Reviewer's Guide claims it does.
Beginning now, all new NVIDIA chipsets will ship with integrated graphics (which NVIDIA is now calling the mGPU), regardless of what market segment they are targeted at. It's a particularly bold move by NVIDIA but much appreciated given that the mGPU in all of its chipsets will receive PureVideo HD and thus can fully accelerate H.264/MPEG-2/VC1 decode streams.
While it's unlikely that many would purchase a high-end motherboard based on the NVIDIA nForce 780a SLI chipset and simply use its integrated graphics, the mGPU in the 780a is the same GPU used in the 750a, 730a, 720a and the GeForce 8200 based motherboards, so the discussion here is far reaching.
AMD 780G vs. NVIDIA 780a Graphics Architecture
AMD has built a superior integrated graphics part this time around, both from a technical standpoint and in terms of realized performance. It isn't that AMD really went much further than NVIDIA in terms of engineering something great: they simply selected a higher performance core to integrate into their chipset than NVIDIA did.
Neither AMD nor NVIDIA told us exactly how they built their interface to the system bus and system memory, but the lack of a local framebuffer does mean that communication with system memory must be as fast and low latency as possible. In both cases, the discrete GPU from which the integrated part is derived uses a 64-bit connection to local memory, while system memory offers a 128-bit wide interface; these parts make use of the wider bus to help compensate for the increased latency of system memory. Increasing local (on-die) cache would also help here, but since IGP solutions are built as cheaply as possible, it doesn't seem likely that there is much extra cache to play with.
We used 3DMark's single texture test to try to get an idea of memory bandwidth. The test largely removes computation overhead and ends up just pulling in as much data as possible as fast as possible and throwing it up on the screen. The result in MTexels/s shows that NVIDIA has a bit of an advantage here, but the gap isn't huge. This means that performance differences will likely come down to compute power rather than bandwidth.
| Benchmark | AMD 780G | NVIDIA nForce 780a |
|---|---|---|
| 3DMark '06 Single Texture Fillrate | 910.6 MTexels/s | 983.4 MTexels/s |
Past this point, the NVIDIA and AMD integrated hardware diverge. AMD's solution is based on the RV610 graphics core; in fact, it is an RV610 core shrunk to 55nm and integrated into the Northbridge. This means we get eight 5-wide blocks of shader processors (SPs, 40 total). In the very worst case, we get 8 shader ops per clock (which isn't likely to happen in any real situation). Compare this to NVIDIA's G86-based 8 SP offering with a maximum of 8 shader ops per clock and quite a difference emerges: AMD's IGP can handle 8 vector instructions per clock and then some, while similar code could run at 2 instructions per clock on NVIDIA hardware.
Of course, this difference isn't as devastating to NVIDIA as one might think at first blush. We must remember that NVIDIA cranks up its shader clock to ridiculous speeds while AMD's shaders all run at the core clock speed. With AMD and NVIDIA core clocks both coming in at 500MHz, NVIDIA's shader core runs at 1200MHz. In spite of the fact that AMD's part can do more operations per clock (probably averaging out to somewhere between 3x and 4x; it heavily depends on the application), NVIDIA is able to do 2.4x as many clocks per second, which closes the gap a bit.
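As a back-of-envelope illustration of the tradeoff above, we can count each SP as one op per clock, a simplification of our own that ignores co-issue, dual-issue, and real-world utilization:

```python
# Peak shader throughput estimate: SP count times shader clock.
# This assumes one op per SP per clock, deliberately oversimplified.

def peak_gops(sp_count, shader_clock_ghz):
    """Peak shader operations per second, in billions (Gops/s)."""
    return sp_count * shader_clock_ghz

amd_780g = peak_gops(40, 0.5)  # 40 SPs running at the 500MHz core clock
nv_780a  = peak_gops(8, 1.2)   # 8 SPs running at the 1200MHz shader clock

print(amd_780g, nv_780a)             # 20.0 vs 9.6 Gops/s
print(round(amd_780g / nv_780a, 2))  # ~2.08x paper advantage for AMD
```

The 5x per-clock advantage (40 SPs vs. 8) shrinks to roughly 2x once NVIDIA's 2.4x clock advantage is factored in, which is consistent with the performance gap we measure below.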
The only discrete part with 8 SPs is the GeForce 8300, which is OEM only. As of this writing, NVIDIA has not confirmed details with us other than core and shader speeds and the number of SPs in the part. They have stated that their integrated hardware is similar to the 8400/8500 in order to optimize the benefit of Hybrid SLI, so it's possible the number of texture and ROP units is 8 each. Of course, if half the number of SPs is "similar" to the 8400 and 8500 parts, we can't really be sure until NVIDIA confirms the details. We do know that AMD's hardware has 4 texture and 4 render output units since it is RV610. With so few SPs, and the competition sticking with 4/4 texture/render units, we suspect that this is what NVIDIA has done as well.
What is clear is that either way, AMD's hardware is more robust than NVIDIA's offering. Our performance tests reflect this, as we will soon show.
Integrated Graphics Performance & GeForce Boost
As expected, AMD's 780G manages to outperform NVIDIA's integrated graphics consistently across the board:
| Game (1024x768, FPS) | AMD 780G | NVIDIA 780a/GeForce 8200 | % Performance Advantage (AMD) |
|---|---|---|---|
| Half Life 2: Episode Two | 43.1 | 30.2 | 42.7% |
| Microsoft Flight Simulator X | 24.6 | 21.4 | 15.0% |
| Company of Heroes | 29.4 | 19.4 | 51.5% |
| Unreal Tournament 3 | 22.9 | 16.8 | 36.3% |
| Crysis (Low Quality) | 20.3 | 16.9 | 20.1% |
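For reference, the "% Performance Advantage" column is simply AMD's frame rate relative to NVIDIA's in each game:

```python
# Derives the AMD performance advantage percentages from the raw FPS
# numbers in the table above.

def pct_advantage(amd_fps, nvidia_fps):
    """AMD's percentage performance advantage over NVIDIA."""
    return round((amd_fps / nvidia_fps - 1) * 100, 1)

results = {
    "Half Life 2: Episode Two": (43.1, 30.2),
    "Microsoft Flight Simulator X": (24.6, 21.4),
    "Company of Heroes": (29.4, 19.4),
    "Unreal Tournament 3": (22.9, 16.8),
    "Crysis (Low Quality)": (20.3, 16.9),
}

for game, (amd, nv) in results.items():
    print(f"{game}: {pct_advantage(amd, nv)}%")
# Company of Heroes shows the largest gap at 51.5%
```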
Once again, although this comparison matters more for the nForce 730a and GeForce 8200 motherboards, NVIDIA's mGPU just doesn't compare to AMD's.
The performance advantage ranges from 15% to just over 50%, and the only surprising part here is that AMD doesn't do better given the theoretical advantage it holds over NVIDIA. As we mentioned before, it's doubtful that many will buy an nForce 780a board and use its integrated graphics to play games, but the AMD performance advantage holds true for 750a and GeForce 8200 platforms as well. For a company that has been criticizing Intel's integrated graphics performance as of late, we would expect nothing short of the best scores here.
If the mGPU performance of the nForce 780a (or any of the other new NVIDIA chipsets) isn't enough, you can simply toss in a low-end discrete GPU (dGPU) and NVIDIA's latest drivers will enable GeForce Boost. GeForce Boost is nothing more than SLI between a mGPU and a dGPU. Given how slow the mGPU is, GeForce Boost will only actually improve performance with a low-end dGPU, and thus NVIDIA only supports it with either a GeForce 8400GS or GeForce 8500 GT.
With GeForce Boost enabled, the display driver also comes up with a new name for the mGPU + dGPU combo. If you combine a nForce 780a with a GeForce 8400GS you get a GeForce 8500 and if you pair the 780a with an 8500 GT the driver will report the mix as a GeForce 8600.
| NVIDIA 780a + GeForce 8400 GS | Half Life 2: Episode Two | MS Flight Simulator X | Company of Heroes | Crysis | Unreal Tournament 3 |
|---|---|---|---|---|---|
| mGPU + dGPU (GeForce Boost), FPS | 50.3 | 39.6 | 45.5 | 30.1 | 22.2 |
| % Increase due to GF Boost | 22.0% | 0.0% | 17.6% | 48.3% | 3.7% |
With a GeForce 8400 GS we actually see decent scaling from a dGPU to the GeForce Boost mode. The added performance is large percentage-wise but in raw numbers it's nothing huge. You're basically getting a smoother gaming experience with GeForce Boost enabled, at least in those games where bridgeless SLI is supported.
| NVIDIA 780a + GeForce 8500 GT | Half Life 2: Episode Two | MS Flight Simulator X | Company of Heroes | Crysis | Unreal Tournament 3 |
|---|---|---|---|---|---|
| mGPU + dGPU (GeForce Boost), FPS | 47.7 | 37.8 | 49.3 | 26.3 | 27.3 |
| % Increase due to GF Boost | 0.0% | 7.0% | 1.4% | 6.0% | -17.5% |
GeForce Boost does next to nothing with an 8500 GT and in the case of Unreal Tournament 3, performance actually decreases. Of course it's a safe bet that future driver updates will improve scaling and performance from GeForce Boost.
AMD supports a similar technology with its 780G:
| AMD 780G + Radeon 3450 | Half Life 2: Episode Two | MS Flight Simulator X | Company of Heroes | Crysis | Unreal Tournament 3 |
|---|---|---|---|---|---|
| mGPU + dGPU (Hybrid CrossFire), FPS | 61.4 | 37.3 | 54.3 | 31.4 | 32.7 |
| % Increase due to Hybrid CF | 19.0% | 23.5% | 47.6% | 34.8% | 14.7% |
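The tables above list only the combined mGPU + dGPU result and the percentage gain; the single-card baselines they imply can be recovered by dividing out the gain:

```python
# Back-computes the dGPU-only baseline frame rate implied by a boosted
# result and its reported percentage increase.

def implied_baseline(boosted_fps, pct_increase):
    """Single-card FPS implied by the boosted FPS and % gain."""
    return round(boosted_fps / (1 + pct_increase / 100), 1)

# GeForce 8400 GS in Half Life 2: 50.3 FPS at +22.0% implies roughly:
print(implied_baseline(50.3, 22.0))  # ~41.2 FPS dGPU-only

# Radeon 3450 in Company of Heroes: 54.3 FPS at +47.6% implies roughly:
print(implied_baseline(54.3, 47.6))  # ~36.8 FPS dGPU-only
```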
High Definition Video Decode Acceleration
The beauty of ubiquitous integrated graphics is that if you build a system based on one of these new NVIDIA chipsets, you get full H.264/VC1/MPEG-2 decode acceleration. We ran three quick tests to make sure that the PureVideo HD features were working and comparable to AMD's 780G chipset:
As expected, the 780a's PureVideo HD is good to go and gives us enough leftover CPU power to actually use our systems while playing back HD video.
Low Power SLI: HybridPower
It's not just the CPU guys who are taking power consumption seriously these days; NVIDIA is too. With the nForce 780a, NVIDIA finally introduces a technology it has been talking about for several months: HybridPower.
With all of NVIDIA's 2008 chipsets featuring integrated graphics, HybridPower enables a discrete graphics card to shut off when not in use, relying on the motherboard's integrated graphics (mGPU) to handle display output.
The technology works like this: the discrete GPU (dGPU) is plugged into a standard PCIe slot, but the display is connected to the mGPU. With HybridPower running in Save Power mode, the mGPU handles all rendering and display output, while the dGPU remains turned off completely (not idling, but completely off; even the fan stops spinning). In Boost Performance mode, the dGPU is turned on and handles all 3D rendering, but its frame buffer is copied to system memory before being displayed by the mGPU. In other words, the dGPU renders all frames while the mGPU actually displays them. Since the dGPU's frame buffer must be copied to system memory and displayed by the mGPU, there is a small performance hit for enabling HybridPower; thankfully, it is negligible.
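A quick estimate, using our own assumed figures, of what that frame-buffer copy costs shows why the performance hit stays small:

```python
# Traffic generated by copying each rendered frame across PCIe into
# system memory for display by the mGPU. Assumes 32-bit color.

def copy_bandwidth_mb_s(width, height, fps, bytes_per_pixel=4):
    """Frame-copy traffic in MB/s for a given resolution and frame rate."""
    return width * height * bytes_per_pixel * fps / 1e6

# 1920x1200 at 60 frames per second:
print(round(copy_bandwidth_mb_s(1920, 1200, 60)))  # ~553 MB/s

# 1080p at 60 frames per second:
print(round(copy_bandwidth_mb_s(1920, 1080, 60)))  # ~498 MB/s
```

Even at 1920x1200/60, roughly half a gigabyte per second of copy traffic is a small fraction of what a PCI Express 2.0 x16 link can carry.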
BIOS and driver support for HybridPower is nothing short of outstanding; the install process is virtually seamless. Generally, when dealing with integrated graphics and switching between a mGPU and dGPU, you'll need a couple of reboots and maybe a BIOS reconfiguration before you can get display output. With the nForce 780a, we simply plugged in a supported NVIDIA GPU and everything else worked itself out.
One problem we encountered was related to the platform's behavior with its unsigned graphics driver. The issue is this: the nForce 780a's IGP uses the same graphics driver as the GeForce 9800 GX2 we attempted to install, but that driver won't be installed automatically because it is unsigned. If the user adds a graphics card after the fact, the NVIDIA installation utility must be re-run to get the driver installed properly. We assume that final drivers will be signed and this won't be a problem once the product is available for retail sale, but for now it can be confusing, since no errors are thrown and you need to look at Device Manager before you realize that the GX2 driver wasn't properly installed.
The real problems with HybridPower arise when attempting to switch between using the mGPU and dGPU. The public and reviewers alike were led to believe (by both AMD and NVIDIA) that the platform/driver would intelligently switch between the mGPU and dGPU; this isn't the real-world functionality of the platform.
Switching between the HybridPower modes must be done manually; while NVIDIA would like for the transition to be automated and seamless, this is the first incarnation of the technology and support for application-sensing technology just isn't there yet.
Luckily, NVIDIA developed a very simple tool that sits in your systray, allowing you to switch between HybridPower modes. Simply right-click the tool, select the appropriate operating mode and the driver enables or disables the appropriate GPU.
There are some limitations; first and foremost, only the GeForce 9800 GTX and GeForce 9800 GX2 are supported by HybridPower. On the chipset side, the nForce 720a, 730a, 750a, 780a and all of the GeForce 8x00 series motherboards support HybridPower. For most users, you'll need a new motherboard and a new GPU to take advantage of HybridPower.
Certain 3D applications won't let you change state while they are running, so you may have to quit applications like 3dsmax before you are able to switch power modes. NVIDIA's utility reminds you of this:
When switching HybridPower modes, the state of one GPU gets moved to the other, meaning that the process isn't instantaneous. The more windows you have open and the more GPUs you have in the system, the slower the process will be. On a single GeForce 9800 GTX it took between 4 and 7 seconds to switch modes, which honestly wasn't too bad.
When we outfitted the system with a GeForce 9800 GX2, featuring two GPUs, the process took up to 13 seconds. The amount of time it takes to switch modes depends entirely on the number of windows open, with 40 windows open the GeForce 9800 GTX took a maximum of around 6 seconds to switch modes, compared to 13 seconds for a GeForce 9800 GX2 thanks to its two GPUs. The transition time would be even higher on a 3 or 4 GPU system.
The type of windows open doesn't seem to have an impact on the transition time between HybridPower modes, simply the number of windows (and their associated memory footprint). The problem is that a dual-purpose machine (one used for work and gaming) can easily have a large number of windows open, and waiting more than 10 seconds for anything to complete easily makes a system feel slow/sluggish.
The power savings were absolutely worth it; see for yourselves:
| | Save Power Mode (dGPU Disabled) | Boost Performance Mode (dGPU Enabled) |
|---|---|---|
| Total System Power Consumption (Idle) | 115W | 165W |
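To put that 50W idle delta in perspective, here is what it adds up to for a hypothetical HTPC idling 8 hours a day; the usage hours and the $0.10/kWh rate are our own illustrative assumptions, not figures from NVIDIA:

```python
# Annual energy saved by powering the dGPU off at idle.

def annual_kwh(delta_watts, hours_per_day):
    """kWh saved per year for a given power delta and daily idle time."""
    return delta_watts * hours_per_day * 365 / 1000

delta = 165 - 115              # dGPU enabled vs. dGPU powered off, in watts
kwh = annual_kwh(delta, 8)     # assumed 8 idle hours per day
cost = kwh * 0.10              # assumed $0.10 per kWh

print(delta, kwh, round(cost, 2))  # 50W saved -> 146 kWh -> about $14.60/year
```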
Since the mGPU is just as capable of decoding HD video as the dGPU in this case, it is possible to build an actual gaming HTPC out of something like the nForce 780a. You no longer have to sacrifice performance in order to keep power consumption down; you can have a multi-GPU setup and still watch movies, thanks to HybridPower.
30" LCD Owners Need not Apply
The ASUS M3N-HT Deluxe only offers an analog VGA output as well as a digital, single-link HDMI output. The problem with this configuration is that while it is possible to convert HDMI to DVI, there is no way of outputting a dual-link DVI signal. In other words, the resolutions needed by 30" displays won't be reachable via the mGPU.
If this assumption is correct and there is no way to output a dual-link DVI signal from the mGPU (the reviewer's guide indicates only a single-channel integrated TMDS), then it almost entirely negates the point of HybridPower and 3-way SLI on this motherboard. Anyone investing a serious amount of money into graphics cards may also have reason to invest in a 30" display, which as it stands will be unsupported by this platform unless the display is driven directly off of the graphics cards themselves, in which case HybridPower won't work.
This is absolutely unacceptable and would prevent us from recommending the 780a as anything more than just another SLI motherboard. HybridPower is quite possibly the best feature for a high-end SLI user and if it won't work with 30" displays then its usefulness is severely degraded.
Unfortunately, there's no workaround here; NVIDIA simply made the wrong choice in omitting dual-link DVI support, and we won't see this problem fixed until a new revision of the mGPU makes its way into later chipsets.
We selected the ASUS M3N-HT Deluxe as our NVIDIA 780a SLI platform representative today. NVIDIA provided this board for the press kits, as it is one of the most feature-laden 780a SLI boards appearing in the market. ASUS has assured us that widespread availability should occur over the next couple of weeks, along with the introduction of the new CrossHair II ROG board. MSI also provided its K9N2 Diamond board for a future review that is in progress.
Our tests today will concentrate on the performance of the 780a SLI chipset compared to its immediate competition from AMD, the 790FX, which has been in the marketplace for a few months. Since the 780a SLI features integrated graphics functionality, we will also provide IGP results against the NVIDIA 750a/730a and AMD 780G chipsets.
We selected identical components for our five test beds, except for the motherboard choice obviously. Our choice of processors represents the top end of AMD’s Phenom lineup, the 9850BE. We also verified compatibility with a wide array of processors ranging from the entry level LE1600 to the 4850e, 6400+ X2, and the latest Phenom X3 8750.
General Chipset Performance - PCMark Vantage
Futuremark claims PCMark Vantage for Vista is the most complete total system-benchmarking suite to date and our experience has shown this to be true. Futuremark designed the benchmarks to test the performance of each major subsystem - CPU, memory, graphics, and disk performance based on real world applications. The final score presented at the conclusion of each test run takes into account the results of each carefully tailored test within the benchmark suite.
The performance delta between the five boards is minimal in the final score and even in the individual test suites. This does not surprise us due to the memory controller being on the processor and our setups running the same components and timings. The primary differentiator would be the disk controller design implemented by AMD and NVIDIA along with chipset driver support under Vista. In this case, those differences are not measurable in the test applications that PCMark Vantage utilizes.
General Chipset Performance w/ mGPU Enabled - PCMark Vantage
Unlike the discrete GPU tests, we expected to see some differentiation between the chipsets, but it just was not there. The difference between the top-scoring 780G and the 730a chipset is only 2%.
In the end, at least in these particular tests, any of the chipsets would be acceptable for general application work.
We will take a brief look at general media performance with Adobe Photoshop Elements 6.0. Our test is one recommended by Intel, but the test itself is fair and results are very repeatable. This test simply measures the amount of time required to fix and optimize 103 different photos weighing in at 63MB. We report results in seconds, with lower times indicating better performance.
There is only a 1% difference between the chipsets with the 780a slightly edging out the 790FX in this CPU/Memory intensive benchmark. We attribute the differences to BIOS tuning although the 780G scoring measurably lower than the 790FX surprised us.
Media Encoding Performance
We are utilizing Nero Recode 2, DivX 6.8, and Windows Media Encoder x64 for our video encoding tests. The scores listed include the full encoding process represented in seconds, with lower numbers indicating better performance.
Nero Recode 2
Our first series of tests is quite easy - we take our original Office Space DVD and use AnyDVD to rip the full DVD to the hard drive without compression, thus providing an almost exact duplicate of the DVD. We then fire up Nero Recode 2, select our Office Space copy on the hard drive, and perform a shrink operation to allow the entire movie along with extras to fit on a single 4.5GB DVD disc. We leave all options on their defaults except we uncheck the advanced analysis option.
DivX 6 Converter
Our second test has us using DivX Converter to encode a 23MB AVI file into a pleasing 1080p DivX format for playback.
Windows Media Encoder x64
Our final test consists of using Windows Media Encoder's advanced video profile to encode a 128MB WMV file into one suitable for progressive download across our Web server.
Our CPU-intensive encoding tests did not generate a clear-cut winner. Once again, the differences between all of the chipsets are minor at best. When purchasing a board based on these chipsets, we recommend that you choose it based on features and support, as the scores just do not justify any other basis for a decision.
Audio Encoding Performance
We will utilize iTunes 7.6 for our audio encoding test, as it is one of the most utilized audio applications available due to the immense popularity of the iPod. As in previous articles, we are using an INXS Greatest Hits CD for testing, which contains 16 tracks totaling 606MB of songs. We use iTunes to convert our WAV files into AAC format, utilizing the 256kbps and variable bit rate options for the test.
File Compression Performance
In order to save space on our hard drives and provide another CPU crunching utility, we utilize WinRAR 3.71 to perform compression tests. WinRAR fully supports multithreaded operations for users with dual core or multi-processor systems. Our test folder contains 444 files, 10 subfolders, and 602MB worth of data. We utilize default settings in WinRAR and defragment our hard drive before each test.
3D Rendering Performance
For 3D modeling and rendering, we are using POV-Ray. We used the 3.7 beta 24 that has SMP support and ran the built-in multithreaded benchmark.
We did not have a clear-cut winner in the audio or rendering tests; only in the file compression test was there a measurable difference, with the 790FX being about 3% faster than the 730a board. Even this difference is minor, and so far our boards are evenly matched.
As usual, we test gaming performance with a variety of current games. We run our benchmarks at 1280x1024 when utilizing the Zotac 9800GTX.
Company of Heroes
Despite having been around for a while, Company of Heroes is still one of the best strategy games available. The game is heavily GPU limited but also requires brute CPU processing power to keep things smooth during heavy action. The built-in benchmark is very responsive to GPU as well as CPU scaling, hence we think it provides good insight into overall system gaming power.
Crysis
Without doubt, Crysis is the sternest 3D test for any motherboard and GPU solution. The Crysis team reached new heights in developing an FPS engine that brings even the highest-specification systems to their knees. Users have begun to base purchasing decisions solely on how well a component handles this game. As we are using a single card or IG solution here, we have to stick with medium or low detail levels. We utilize a custom demo that captures gameplay on the Harbor level.
Unreal Tournament 3
UT3 landed on our PC's hard drive a while ago and has not failed to deliver the ultra fast FPS shooter we all expected. The 3D engine provides ultra fast-paced action requiring a system capable of sustaining frame rates during periods of intense online gaming action. We play a three minute round of action with 11 bots on the CTF Coret level and generate an average FPS score based on five benchmark sessions.
Microsoft Flight Simulator X
One of the longest running titles on the PC, Flight Simulator is a great game for people wanting to enjoy flight without all the hassles. Depending on how you configure FSX, you can have a CPU- or GPU-restricted setup. Our flight recording is based on a six-minute flight around Honolulu in a Cessna 172, and we generate our benchmark results with FRAPS.
Half-Life 2: Episode 2
Half-Life 2: Episode 2 is our last title and it represents a game engine that is very scalable and works well on an IG board or one equipped with SLI or Crossfire. We utilize a custom demo that has indoor and outdoor action along with several firefights to ensure we stress the GPU.
Take your pick and be happy knowing that, at least with a discrete GPU installed, any of the boards is a capable gaming platform. The overall winner is the 790FX, but the differences in scores between the platforms are minimal at best. Crysis and Flight Simulator X show a maximum difference of 6%, while most of the tested titles show less than a 3% difference.
In our hard drive transfer tests, we copy our two folders from our source drive to an identical target drive to test the transfer speed of the disk controller. Our 780a board has the fastest transfer rates although the SB600 on the 790FX and SB700 on the 780G products keep pace with the NVIDIA offerings.
We utilized a Maxtor One Touch external 300GB drive for our USB 2.0 transfer tests that feature the same file folders used in our hard drive transfer test. This particular setup has support for Firewire 400, Firewire 800, and USB 2.0. Our Firewire tests are dependent on the IEEE 1394 chipset utilized and will be covered in the motherboard review.
Historically, NVIDIA and Intel have had very good USB performance compared to AMD/ATI. AMD finally addressed USB performance concerns with the SB600 although performance still lags compared to other chipsets. The SB700 utilized on the 780G boards offered additional performance improvements and is about on par with Intel and NVIDIA now.
The SB600-equipped 790FX board trails the NVIDIA boards and the SB700-equipped 780G slightly in our transfer tests. Overall, the differences are fairly minor and will not be noticed in actual usage.
Memory and Overclocking Performance
One of the most disappointing aspects of testing the ASUS M3N-HT Deluxe board was the general lack of compatibility with most of the 2GB modules at our disposal. The first few BIOS releases would rarely run a 2x2GB configuration in a stable manner, and a 4x2GB setup was out of the question. The latest BIOS release allowed us to complete our benchmarking sessions with a standard 4GB setup, and 8GB was usable with one particular part from G.Skill. However, the board would lock up immediately if we manually changed the onboard video memory setting from auto to a manual selection. The good news is that ASUS has replicated several of our problems, and we expect a new BIOS release shortly for use in the motherboard review.
In the meantime, we are providing three test results with our memory at DDR2-800 and DDR2-1066 at stock processor settings along with a DDR2-1000 setting with an HTT overclock to 250 while retaining the same base CPU speed. DDR2-1066 operation was not always stable with 4GB or even 2GB for that matter, but we were able to complete our benchmark tests after relaxing several timings and increasing memory voltage from 2.0 to 2.1V. Overall, DDR2-1066 performance provided slightly improved latencies along with better read and copy throughput. Initial test results with several applications show a slight improvement of 1%~3% at stock CPU settings.
We wanted to show direct comparisons to the 790FX platform along with scaling and performance oriented timings, but the stability of the board with 4GB setups is just not to a point yet where we can guarantee 24/7 stability with performance oriented setups. We hope the BIOS updates from ASUS will address this quickly. Our initial testing with the MSI K9N2 Diamond board indicates this problem is strictly BIOS related as 4GB configurations are working well so far on this board.
Our most disappointing results with the board came during our overclocking tests. We constantly had to fight the 4GB memory problems and finally switched over to 2GB, without much success either. This particular setup would only operate stably around 2.8GHz, and it required 1.48V and relaxed memory timings to reach even that level.
Our particular CPU sample has done 3.3GHz at the same voltages on the 790FX platform. After discussions with NVIDIA and ASUS, they feel our current overclocking maladies are BIOS related at this time. We tend to believe them, as manually setting the CPU voltage resulted in the board applying an additional 0.10V to the CPU. We noticed several other settings that would overvolt at various times, but the CPU overvolting was consistent and bothersome.
After a few days of trying to clock this board to meet the standards set by the 790FX platform, we decided to call it quits until the next BIOS release. One item worth mentioning: the BIOS design and options opened up by ASUS are nothing short of spectacular, especially in the memory timing section. We will go into more detail in the motherboard review, but if the memory, voltage, and overclocking problems are properly addressed, this should be one special board for Phenom overclockers.
NVIDIA's chipsets have almost always been pretty decent (if not excellent, in the case of the nForce4 for AMD); their only issues were usually price and a lack of compelling features to justify the added cost. SLI was always the biggest selling point of NVIDIA's platforms, but with the nForce 780a (and its new lineup of chipsets in general), NVIDIA is attempting to bring more value to the table.
Honestly, the biggest attraction of the nForce 780a SLI platform is its support for HybridPower, which will finally allow gamers to build machines that are both high performance and power efficient. Thankfully, you aren't limited to the 780a for HybridPower support, as these motherboards won't come cheap. Our ASUS M3N-HT Deluxe board will carry an introductory price of $249, something we are not used to seeing in the current AMD market sector.
Whether or not this price tag is worth the premium over the nForce 750a SLI boards is up for debate. In our opinion, it's not, as we do not believe the current AMD processor series can provide the computational power needed to support 3-way SLI or Quad SLI configurations. This is not a knock against NVIDIA; AMD has the same problem with Quad CrossFire, and it simply reflects the current state of AMD's processor offerings.
HybridPower is clearly in its infancy; the lack of dual-link DVI support from the mGPU means that owners of 30" displays can't enjoy the benefits until the next generation of NVIDIA chipsets arrives. We would like to see eventual automated switching between HybridPower modes, not to mention a reduction in switch time for multi-GPU setups, but we'll take what we can get as a starting point. The list of GPUs that support HybridPower will hopefully continue to grow, as NVIDIA would be doing its customer base a disservice by reserving the feature for only its highest-end graphics cards.
Then there's the plain fact that what we're looking at here is an expensive Socket-AM2+ chipset, and while AMD can be competitive at lower price points, at the very high end of the market there's simply no reason to go with anything non-Intel right now. With Intel's G45 chipset due out later this summer, we would much rather see an Intel solution from NVIDIA shipped quickly as the combination of a mGPU with H.264 decode acceleration and HybridPower could be enough to actually make NVIDIA's platforms competitive in the Intel space.
Looking to the future, we wonder what will happen to NVIDIA's chipset business. Giving every chipset integrated graphics is a good move, but is it possible that it is too little, too late? Nehalem will begin shipping this year and next year we should start to see models with integrated graphics, leaving NVIDIA with SLI as the only thing it has to bring to the table once again. Losing on the integrated graphics performance front to AMD is also troublesome. Surpassing Intel's IGP performance is nothing to crow about for a GPU manufacturer; it's the competing GPU manufacturers that you have to beat, and here NVIDIA falls short. We want to see NVIDIA raising the bar for mGPU performance relative to AMD, not lowering it.