DFI CFX3200-DR: ATI RD580 Tweak Attack

by Wesley Fink on 5/8/2006 12:05 AM EST

  • Stele - Wednesday, May 10, 2006 - link

    IMHO the main disadvantage of using the Sil3114 instead of a newer SATA controller like the 3132 is not so much the 1.5Gbps transfer rate, but the fact that it's PCI-based and hence badly bottlenecked once you fill all four channels. 1.5Gbps is a theoretical maximum and HDDs today are nowhere near that limit - in fact we left ATA-133 without even breaking the ATA-100 limit. Since each SATA HDD gets a dedicated 1.5Gbps channel, arrays won't saturate the SATA interface either - rather, they would saturate the slow PCI interface as previously mentioned. Furthermore, the other significant feature of SATA II - NCQ - is of virtually zero relevance to most users, unless one is using the board in a corporate server; not impossible, but not likely either. Therefore, to harp on a figure that has generally been more a marketing tool (as was ATA-133) than a real necessity says little of the real issues at hand.
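    To put Stele's bottleneck point in rough numbers, here is a back-of-the-envelope sketch. All figures are illustrative assumptions (classic 32-bit/33MHz PCI at ~133 MB/s shared across the bus, ~150 MB/s of payload per SATA 1.5Gbps link after 8b/10b encoding, and a guessed sustained rate for a fast drive of the era), not measurements from the review:

```python
# Back-of-the-envelope PCI vs. SATA bandwidth check (all figures assumed).
PCI_BUS_MB_S = 133        # classic 32-bit/33MHz PCI, shared by every device on the bus
SATA_LINK_MB_S = 150      # per-port payload rate of a 1.5Gbps link (8b/10b encoding)
HDD_SUSTAINED_MB_S = 60   # assumed sustained rate for a fast 2006-era drive

drives = 4
aggregate_demand = drives * HDD_SUSTAINED_MB_S   # what a 4-drive array can stream
per_drive_share = PCI_BUS_MB_S / drives          # best case on a saturated PCI bus

print(f"array can stream {aggregate_demand} MB/s, but PCI caps the controller at {PCI_BUS_MB_S} MB/s")
print(f"each drive gets ~{per_drive_share:.1f} MB/s, far below its {SATA_LINK_MB_S} MB/s link")
```

    Even with modest assumed drive speeds, the shared bus - not the 1.5Gbps link rate - is the ceiling, which is exactly the point above.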

    In return, using the Sil3114 means using a tried and tested product whose characteristics are very well known by now. Board engineers would know how best to design around it - what special requirements (signal integrity, trace lengths, coupling etc), if any, need to be factored in, how the controller performs and behaves and so on. Furthermore, the 3114 provides 4 SATA ports for maximum expansion capability - there is no 4-port version of the 3132 as you alluded to in the review - not yet anyway. You may well be right about having a truckload of 3114s to get rid of, so that's likely a factor too. Perhaps when the 3132 gains a 4-port counterpart (and DFI finishes off 3114 inventory) then we may see newer stuff to come. :)

    On another side note, yes there would probably be many people who would appreciate the insane options in BIOS, but I do agree that they should make the UI more user-friendly, e.g. by having Automatic as a choice and/or by placing advanced options in sub-menus to distinguish them from the main options. That would satisfy enthusiasts of all levels, from the mad hatters down to the ones who are just starting out. :)

    Generally a review well done! :)
  • Stele - Wednesday, May 10, 2006 - link

    quote:

    ... there is no 4-port version of the 3132...


    Perhaps that statement should be clarified/qualified a little - there is no 4-port version of the 3132 (i.e. PCIe + SATA II + 3.0Gbps) in a single controller IC. The 3132 supports, and hence is expected to be used with, SATA port multipliers - primarily the SII 3726, which can support up to 5 drives. In future, DFI could use a 3132 with one SATA channel routed to an external connector (as in the Asus A8N32-SLI) while the other channel could be connected to a 3726 to provide an additional 4 or even 5 internal HDD channels.

    However, this would create two problems - the need for another IC (the board is already very cramped as it is!) and, as already discussed, the need to gain sufficient experience with the new ICs in the lab before they can be confidently implemented and designed around. Cost and time-to-market factors may also have played a role in DFI's choice.
  • proamerica - Monday, May 08, 2006 - link

    This is a poor quality review. The reviewer complains too much about variables that actually improve performance when handled by the right person. The overclocking potential of this board is beyond all other 939 boards I have owned, including the A8R32-MVP... People all over the place are reporting the highest overclocks ever achieved for memory and CPUs. You have to know what you are doing I'm afraid, and yes, it requires using all the settings in the BIOS. That is the caveat of buying this board: it's hard to use, and it takes time to figure it out, but once you do it's worth it. I stably OC'ed my X2 3800 to 2940MHz, and I currently run it 24/7 at 2700MHz. Is 2940MHz the highest OC I have ever gotten with this processor? Yes, stably, by far the highest. One of the greatest aspects of this board is that it will overclock really high but it doesn't take a lot of voltage to get things stable.

    Why does the review say: "but this DFI does make us wonder how many end users will actually devote the time to master 32 levels of drive strength, and DQS skew levels of +/- 0 to 255 in 511 levels." Let's see: an extremely expensive motherboard from a company known for making the most tweakable boards around... And you wonder if end users are going to bother? Yeah, they're going to bother. If they don't, they should have purchased something else.

    Bottom line, this board beats the A8R32-MVP hands down; it's just harder to use than the Asus.
  • Wesley Fink - Tuesday, May 09, 2006 - link

    The review pointed out what you clearly found. It's a difficult board to master, but the options and performance can be outstanding. Some want to take the time to master it; others would prefer a board that is easier to overclock. The real point is the DFI CFX3200 still needs work. The BIOS does NOT need to be so difficult to master, and it wouldn't be if more intelligent choices were made for auto settings.

    The CFX3200 is not a bad board, it is just a very difficult board to use and master - even for an experienced enthusiast.
  • Zoomer - Wednesday, May 10, 2006 - link

    As mentioned by someone else, the auto (default) settings are nicely chosen - for BH5 memory.

    Perhaps it would be wise to point that out somewhere, or provide an option of memory: BH5/Normal/Valueram/Manual
  • ozzimark - Monday, May 08, 2006 - link

    "Running four double-sided 512MB or 1GB DIMMs is much more demanding than running two DS DIMMs, and like almost every board we have tested the Command Rate needed to drop to 2T with 4 DS DIMMs."

    using a high DRAM drive strength should allow for stable operation at 1T with 4 double-rank sticks in.. :)
  • bigtoe36 - Monday, May 08, 2006 - link

    From the Tweak guide on the bleedinedge forum.

    "Max async latency - options 7 thru 10 are all you should need, 7 for aggressive tight timings, 10 for high fsb overclocks. This option HAS TO BE SET MANUALLY

    Read Preamble - 4.5 thru 6 is all you need worry about, 4.5 for BH5 etc. and 6 for high fsb overclocks. I usually use 5.5 and 6. Again HAS TO BE SET MANUALLY"

    See http://www.bleedinedge.com/forum/showthread.php?t=... for the full guide.

    Wesley, you have the options posted on page 4 the wrong way round; it's easy to do, as I often get them confused.

    Tony
  • ozzimark - Monday, May 08, 2006 - link

    With DFI boards, it's long been my experience that manually setting MAL/RP is a VERY BAD thing.
  • bigtoe36 - Monday, May 08, 2006 - link

    Normally that would be the case, but DFI were setting 4.5 and 5 as hidden defaults. Now if you are running BH5 you will have no problems, but most everything else would have issues.

    That's why I stated in my guide and on MANY forums that you have to set these manually to get the best from the board.
  • mbhame - Monday, May 08, 2006 - link

    Where's the USB/Firewire CPU Utilization and I/O? Where's IDE performance?
    Throughput is not indicative of real-world performance for any user I know.
  • poohbear - Monday, May 08, 2006 - link

    nice mobo and all, but is it really worth $240 USD?! i think that money would be better spent on a decent mobo, with the savings going to a better vid card. :/
  • cornfedone - Monday, May 08, 2006 - link

    WAY too expensive and no tangible performance increase over RD480 mobos.

    The mobo companies are out to pork consumers with sky high prices for commodity mobos. The RD480/RD580 chipsets are pretty low cost chipsets and the mobo designs less than stellar to say the least. For that Asus, DFI, Sapphire et al are asking outrageous prices for mobos with long lists of problems. None of these mobo companies has delivered a properly functioning mobo, they provide no tech support and they don't listen to their customers. All they do is use the hardware review sites as PIMPS to SHILL products that aren't ready for Prime Time.

    With no serial port, only one usable PCI slot, a $200+ price tag, Mickey Mouse board layout design, too many BIOS adjustments that have little or no benefit, lack of quality tech and customer support, etc. the DFI mobo can sit on the shelf until Hell freezes over as far as I am concerned. Anyone willing to pay $200 for a malfunctioning mobo deserves exactly what they get or don't get.

    PT Barnum is still alive and flourishing in the mobo industry.
  • Marlowe - Monday, May 08, 2006 - link

    I think the Sapphire PURE CrossFire A9RD580 suffers from the same problems as you mention. Just too many settings in the BIOS to master. I expect you don't have the time to test this motherboard as well? I've actively worked with it on and off for three weeks now.. without even getting the HTT over 290 or getting my RAM to work at 2.5-3-3 settings :P Also, in contrast to DFI, Sapphire has very poor BIOS and software support :)

    I might just be a n00b tho! But one should think almost a month of focus should be enough to get a computer working..
  • Peter - Monday, May 08, 2006 - link

    And yet again, we are seeing RAM performance attributed to the chipset - on an AMD64 chipset. Page 5 says:

    "Optimum tRAS
    In past reviews, memory bandwidth tests established that a tRAS setting of 11 or 12 was generally best for nForce2, a tRAS of 10 was optimal for the nForce3 chipset, a tRAS of 7 was optimal for the nForce4/ATI RD480/ULi M1697 chipsets, and a tRAS of 10 produced the best bandwidth on the ULi 1695. The ASUS A8R32-MVP review established that a tRAS setting of 8 produced the highest bandwidth on the RD580 chipset."

    Hello? As has been pointed out numerous times with those articles (every time, in fact), and as you certainly know, chipsets on AMD64 platforms do not even connect to the RAM. The CPU does that. Paragraphs like the above quoted are just plain nonsense.

    Dear reviewers, are we being thick or are we just stuck too deeply in cut&paste land? You've been dragging this silly mistake along for three years now.

    regards,
    Peter
  • JarredWalton - Monday, May 08, 2006 - link

    The CPU does indeed house the memory controller, but that doesn't mean the chipset doesn't have an impact on memory timings. The point is that tRAS was tested at varying levels to determine an optimal setting. While nF4, Rx480, and M1697 got the best results with tRAS set to 7, M1695 liked 10 and RD580 appears to do best with ~8. Realistically, the difference between tRAS 5 and tRAS 10 in actual applications (i.e. not memory benchmarks) is going to be less than 1 or 2%. However, it's good to be clear that we're using 2-2-2-8-1T timings because those appear to be better overall than 2-2-2-5-1T.
  • Calin - Monday, May 08, 2006 - link

    While the memory controller is on the processor (and has very little in common with the chipset), one must note that the chipset will access the memory for different purposes: DMA (Direct Memory Access) from hard drive controllers, for example, or integrated video chipsets that need a lot of memory bandwidth. In these transfers the processor core is "left outside", but the memory controller on the processor still does the copy job.
    I don't know why different chipsets favour different tRAS values, but the chipset needs to access the memory controller without intervention from the processor.
  • Visual - Monday, May 08, 2006 - link

    so this board has drive strength settings for everything and their mother... but is that needed? is it ever useful?
    if they all default to max anyway, what good is the ability to set it at 31 lower settings?
    and it's probably the same with many other options - if they're set to the right value already and have a warning "do not change or your system will puke" in the comments, why do we even have those options?
  • Calin - Monday, May 08, 2006 - link

    Maybe when set at the max value, they create "echo" in other nearby lines (disrupting other signals).
  • JarredWalton - Monday, May 08, 2006 - link

    Reaching maximum overclocks - just like fine-tuning a typical BIOS - requires a lot of tweaks. Getting top performance from every memory type available using "Auto" settings is not likely to happen. You can discover through trial and error where the "sweet spot" is for your particular RAM, and you might find that it gives you an extra 100-200 MHz.

    For example, memory skew is mostly (as I understand it) a way of increasing stability. You tweak the memory so that signals are read/sent slightly out of phase with "default", and that can be used to compensate for higher clock speeds. You would end up adjusting skew at various overclock levels to maximize stability. Drive strength is another option for tuning the system to work optimally with your RAM and CPU at various speeds; higher voltages and clock speeds would respond differently to varying drive strengths.

    The problem is, finding the optimal values for even one configuration is a trial and error process that can literally take weeks or even months. Do most people need that or even want that? Probably not. For the few that do, they'll probably love this board. That's why Wes says it would be nice to hide the less frequently used options and give them reasonable "Auto" settings. In the extreme, choosing even drive strength and DQS skew while leaving all other settings the same represents 16,744,448 potential settings (three separate drive strengths with 32 potential settings, and 511 skew settings).
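    For what it's worth, the combinatorics behind that figure check out; a short sketch reproduces it:

```python
# 3 independent drive-strength controls with 32 levels each,
# multiplied by the 511 DQS skew levels (+/- 0 to 255).
drive_strength_controls = 3
levels_per_control = 32
dqs_skew_levels = 511

combinations = levels_per_control ** drive_strength_controls * dqs_skew_levels
print(combinations)  # 16744448, matching the figure in the comment
```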

    The good news is that there are people out there with a better understanding of the low level details that are writing guides to help others optimize performance without testing every setting.
  • Clauzii - Monday, May 08, 2006 - link

    It looks like CrossFire is becoming a potent and competitive subject, despite what a lot of people said a year ago, and with this board from DFI, it looks like the future is indeed bright for people who want ATI CrossFire or thought they didn't.

    It also looks like DFI has indeed become a star in the motherboard market - especially once the outdated SATA chips get a trip to the eternal outer-space silicon fields and the board gets an SB600 injection.
    To me it also seems that these boards must be near rock solid, since I don't see any mention of strange behavior - nice.

    CrossFire software (CCC and the horror that belongs to it!) needs to be solved by ATI as soon as possible, as it looks to be the only thing holding back more people from getting it.

    Thanks for a nice and pretty well-written article :)
  • rqle - Monday, May 08, 2006 - link

    "...breathlessly waiting for DFI's AM2 and Conroe motherboards."
    Great board, but not sure where this new mainboard will fit in since AM2 is coming; many can opt for the nForce Expert if they need a board before AM2.

    Hoping an AM2 version is in the works and will be released soon as well.
  • electronox - Monday, May 08, 2006 - link

    *sigh*

    as far as gaming benchmarks go, what we really need to learn to do is to focus on the lowest framerates rather than the highest framerates (or even the average framerate). fink, anand, and co., you guys offer progressive tech journalism and no doubt have thought about what FPS performance really means.

    in its most important application, FPS performance means the ability to convey a smooth, fluid visual experience without noticeable dips or jerks in motion. sadly, with the way things are marketed now, the overall fluidity of gaming is sacrificed to reach those peak framerates we all obsess about in our benchmarking suites.

    as a long time gamer and enthusiast-sector consumer, i wish such high-profile websites as yours would pay more attention to the worst parts of FPS gaming - the parts of the game where the intensity of in-game content is notched up, but often our video settings must be turned down in order to prevent epileptic seizures. such media attention might, in turn, lead industry developers to optimize their drivers for this exceedingly common problem which, in my opinion, is just as easily quantifiable and every bit as important as average FPS performance.

    my thoughts, electronox.
  • Dfere - Monday, May 08, 2006 - link

    I have to agree. I make good money, but I no longer have the time to play with bleeding-edge components and do modding. I know this is an enthusiast site, but at least for me, and I think for a large number of readers, an analysis of the max you might get out of a bleeding-edge system is not all the value your site brings. A lot of posts by the readers show they have mid-range systems. Thus I can only agree that an analysis of the FPS "issues" described above with a mid-range system would help readers identify what would best go with their current system, not just a top-of-the-line upgrade. I know your testing tries to determine, for example, CPU limits or GPU limits... but it really only does so on bleeding-edge systems... and these comments were already mirrored in the latest AGP vid card releases... (why compare a new AGP card with a new processor when most AGP owners have 754 systems... etc)
  • JarredWalton - Monday, May 08, 2006 - link

    I think it all depends on what game you're talking about, and how the impact is felt in the fluidity of the FPS score. These days, the vast majority of first-person shooters have a pretty consistent FPS, at least in normal gaming. In benchmarks, you're often stressing the games in a somewhat unrealistic sense -- playing back a demo at three or four times the speed at which it was recorded. Why does that matter? Well, depending on the game engine, loading of data can occur in the background without actually slowing performance down much, if at all. In a time demo, you don't generally get that capability, since everything moves much faster.

    There are several other difficulties with providing minimum frame rates. Many games don't report instantaneous frames per second and only provide you with the average score. (Doom 3, Quake 4, Call of Duty 2, Half-Life 2, Day of Defeat: Source all generate scores automatically, but don't provide minimum and maximum frame rates.) If we notice inconsistent frame rates, we do generally comment on the fact. About the only game where I still notice inconsistent frame rates is Battlefield 2 with only 1GB of RAM -- at least on a system of this performance level. (I suppose I should throw in Oblivion as well.)

    Sure, we could use tools like FRAPS to gather more detailed information, but given that there's a limited amount of time to get reviews done, would you rather have fewer games with more detailed stats, or more games with average frame rates? Realistically, we can't do both on every single article. Our motherboard reviews try to stay consistent within motherboards, our processor reviews do the same within CPU articles, and the same goes for graphics cards and other areas. If we have an article where we look at results from one specific game, we will often use that to establish a baseline metric for performance, and readers who are interested in knowing more about the benchmark can refer back to that game article.

    Average frame rates are not the be-all, end-all of performance. However, neither are they useless or meaningless. We run into similar problems if we report minimum frame rates - did the minimum frame rate occur once, twice, frequently? As long as people understand that average frame rates are an abstraction representing several layers of performance, then they can glean meaning from the results. You almost never get higher average frame rates with lower minimum frame rates, or conversely lower average frame rates with higher minimum frame rates - not in a single game. In the vast majority of benchmarks, an increase in average frame rate of 10 FPS usually means that minimum frame rates have gone up as well - maybe not 10 FPS, but probably 7 or 8 FPS at least.
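    A toy sketch (with made-up frame times, not measured data) shows the "occurred once" ambiguity: a single hitch barely registers in the average even as it craters the minimum:

```python
# Compare a steady second of gameplay with the same second plus one 100 ms hitch.
def fps_stats(frame_times_ms):
    fps = [1000.0 / t for t in frame_times_ms]
    return sum(fps) / len(fps), min(fps)

smooth = [16.7] * 60            # ~60 FPS for a full second
hitchy = [16.7] * 59 + [100.0]  # identical, except one frame takes 100 ms

avg_smooth, min_smooth = fps_stats(smooth)
avg_hitchy, min_hitchy = fps_stats(hitchy)
print(f"smooth: avg {avg_smooth:.1f} FPS, min {min_smooth:.1f} FPS")
print(f"hitchy: avg {avg_hitchy:.1f} FPS, min {min_hitchy:.1f} FPS")
```

    The average drops by less than 1 FPS while the minimum falls to 10 FPS - the average alone cannot tell you whether that happened.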

    In the end, without turning every article into a treatise on statistics, not to mention drastically increasing the complexity of our graphs, it's generally better to stick with average frame rates. Individual articles may look at minimum and maximum frame rates as well, but doing that for every single article that uses a benchmark rapidly consumes all of our time. Are we being lazy, or merely efficient? I'd like to think it's the latter. :-)

    Regards,
    Jarred Walton
    Hardware Editor
    AnandTech.com
  • OvErHeAtInG - Monday, May 08, 2006 - link

    Good answer :) Also, I think that minimum framerates (while very important in gameplay) are much more impacted by the video card used. With a motherboard review, we're much more concerned with overall performance, which is exactly what you gave us with the avg. framerate numbers...
