Board Features and Layout

MSI's "Circupipe" Northbridge heatsink design makes yet another appearance, this time atop an X48 MCH. Our past experience with this particular cooling implementation left us thinking aesthetics must have been a bigger concern for MSI's thermal engineers than performance, and once again we cannot help but feel that form has been improperly placed ahead of function. We believe this may be the reason the board is somewhat resistant to maintaining stable operation with an MCH voltage above 1.4V. In addition, we would really like to see MSI take advantage of the through-hole mounting built into the X48 Platinum - the plastic pushpins and thin springs used to secure the Circupipe are nowhere near capable of providing the pressure needed for optimal heat transfer. In summary, we applaud the use of copper and high-quality heatpipes for drawing heat away from the MCH, but there are a few simple things MSI could do to improve overall thermal performance considerably. In any case, we almost always recommend replacing the stock thermal interface material (TIM) with your favorite thermal compound, and this time is no different.

The X48 Platinum uses a VRM 11.1 compliant pulse-width modulation (PWM) controller IC from Intersil and an 8-phase power delivery design. We had no problems supplying our QX9650 processor with the power needed for stable 4GHz operation. The solid capacitors and beefy inductors used on the board scream quality - MSI has done an excellent job selecting superior components for this board's power circuit. Given a choice, we always prefer engineering effort focused on design decisions that deliver credible improvements over marketing hype and the extravagant use of worthless feature sets. There is a big difference between producing a well-balanced platform and one that attempts to offer too much.
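
For a rough sense of what each of those eight phases handles, here is a minimal back-of-the-envelope sketch; the power draw, core voltage, and conversion efficiency figures are illustrative assumptions on our part, not measurements taken from the board:

```python
# Back-of-the-envelope estimate of per-phase load on an 8-phase VRM.
# Every figure below is an illustrative assumption, not a measurement
# from the X48 Platinum.

cpu_power_w = 150.0   # assumed package power for a QX9650 pushed to 4GHz
vcore_v = 1.30        # assumed loaded core voltage
efficiency = 0.85     # assumed VRM conversion efficiency
phases = 8

total_current_a = cpu_power_w / vcore_v      # current delivered to the CPU
per_phase_a = total_current_a / phases       # share handled by each phase
input_power_w = cpu_power_w / efficiency     # power drawn from the 12V rail

print(f"Total output current: {total_current_a:.0f} A")
print(f"Current per phase:    {per_phase_a:.1f} A")
print(f"12V input power:      {input_power_w:.0f} W")
```

At these assumed figures, each phase delivers something on the order of 14A - comfortable territory for quality power components, and consistent with the cool running we observed.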

We also found that the high-side MOSFETs stayed quite cool, even under heavy load. In fact, we have noticed that vendors using MOSFETs in the older, larger body packages generally deliver boards that run much cooler in this respect. A manufacturer's decision to move to a smaller package is driven mainly by the need for a smaller footprint, which allows more MOSFETs to be packed into an equivalent space. This not only increases the circuit's overall power density but also reduces each component's ability to efficiently dissipate conversion losses to the surrounding environment - the result is a hotter running system.
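
To put rough numbers to the package-size argument, the sketch below applies the standard junction-temperature estimate T_j = T_ambient + P x R_thJA; the thermal resistances and per-device losses are generic, datasheet-style assumptions, not figures for the specific parts MSI chose:

```python
# Why smaller MOSFET packages run hotter: junction temperature estimated
# with T_j = T_ambient + P * R_thJA. The thermal resistances and losses
# below are generic datasheet-style assumptions, not figures for the
# parts used on this board.

ambient_c = 45.0       # assumed air temperature inside a working case
loss_per_fet_w = 1.5   # assumed conduction + switching loss per device

r_thja_c_per_w = {
    "larger package (e.g. D2PAK)": 40.0,
    "smaller package (e.g. SO-8)": 62.5,
}

for package, r_thja in r_thja_c_per_w.items():
    t_junction = ambient_c + loss_per_fet_w * r_thja
    print(f"{package}: T_j is roughly {t_junction:.0f} C")
```

Note that even the hotter case lands below the ~150C maximum junction rating typical of power MOSFETs - which brings us to the next point.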

Because of this, it is not uncommon for users to believe that heatsinks attached to the MOSFETs are located where they are for the principal purpose of cooling these components. In actuality, this is rarely the case. More often than not, this simply turns out to be a convenient place to position additional masses of copper and aluminum, which are thermally coupled to the MCH (and sometimes the Southbridge) through one or more heatpipes. MOSFETs are generally capable of withstanding extremely elevated operating temperatures, and additional cooling often provides little to no actual benefit. For this reason, we urge you to disregard any fault you may have found with MSI for not covering the MOSFETs along the top edge of the board.

One small detail we really appreciated was the decision to reinforce the CPU heatsink mounting holes with rings of strengthening material. Little improvements like these consistently remind us of MSI's renewed commitment to detail. The area around the CPU socket is well clear of interference, so installing even the most massive air cooling solutions should not be a problem. The CPU and rear case fan headers are a little oddly positioned, however - be sure to connect these before mounting your heatsink if you plan to use a large HSF.


While pink and light blue would certainly not be our first choice - or any choice - in colors, colors usually aren't a major problem as long as the board performs well. Where we do find actual fault with MSI's color coding is the memory slots. Traditionally, same-colored slots indicate the appropriate installation locations for a kit of dual-channel memory. MSI has instead chosen to color the slots by the channel they belong to (pink for Channel A and light blue for Channel B). In the end this matters very little as long as the user understands where the modules should be placed. Which ones are the right ones, you may ask? We achieved our best overclocking results using Slot 2 of Channel A and Slot 2 of Channel B.

Comments

  • taylormills - Monday, February 4, 2008 - link

    Hi all,

    Just a newbie question.

    Does this indicate that sound cards will be moving to PCI Express?

    Just curious because I have an older board and am going to want to upgrade, and I find it hard to fit a sound card around my twin 8800 boards due to them taking up the available slots.

    Any info?
  • karthikrg - Saturday, February 2, 2008 - link

    how many ppl are using even crossfire 2x let alone think about crossfire 4x? 4 pcie slots IMHO is overkill. hope amd at least delivers crossfire x drivers in time. else it'll all be an utter waste.
  • ninjit - Friday, February 1, 2008 - link

    At the beginning of the article you mention that this is a DDR3 board, yet in the specifications chart you have lines for:

    [quote] DDR2 Memory Dividers [/quote]

    &

    [quote] Regular Unbuffered, non-ECC DDR2 Memory to 8GB total [/quote]
  • nubie - Friday, February 1, 2008 - link

    "many x16 devices are only capable of down-training to speeds of x4 or x8 and without this bridge chip the last x1 lane would be otherwise useless." This does interest me, I have had 3 nvidia cards (2x6600GT and 6200) running on a plain jane MSI neo4 OEM (Fujitsu Seimens bios), simply by cutting the ($25) 6200 down to a x1 connector and cutting the back out of one of the motherboard x1 slots to allow the 6600GT to fit physically.

    I thought that part of the PCIe standard was auto-negotiation, wouldn't any device NOT compatible with x1 be breaking the standard?

    http://picasaweb.google.com/nubie07/PCIEX1/photo#5...

    I am very curious about this, as the PCIe technology doesn't seem to be getting as much use as it could (i.e., it is MUCH more flexible than it is given credit for). The PCIe scaling analysis at Tomshardware showed that an 8800GTS was still quite capable at x8, so on PCIe 2.0 an x4 slot could be used for gaming at acceptable resolutions! (I am fully aware that only the first 2 slots are PCIe 2.0)

    The new Radeon "X2" card with 4 outputs could fit in this motherboard 3 times over, that is 12 displays on 1 PC with off-the-shelf technology!! With the quad-core and 12 displays, 2 PCs at around ~$1,000-$3,000 apiece could service a whole classroom of kids using learning software, typing tutor programs, or browsing the web. Even with regular old 2 output video cards you could get 8 displays on a much cheaper rig with sub-$50 video cards. So I wouldn't say "the performance potential of such a setup is marginal", unless I was measuring performance in such meaningless terms as how many $xxx video cards I can jam in a PC to get xx% increase.
  • kjboughton - Friday, February 1, 2008 - link

    You are correct when you say that PCIe devices are capable of auto-negotiating their link speeds; however, not all devices will allow for negotiated speeds of only x1. This includes most video cards, which will allow themselves to train to x16, x8, and x4 speeds but not x1. They are flexible to the extent possible, but nowhere does the PCIe specification require that all devices support all speeds...after all, cards that make use of an x8 mechanical interface are obviously incapable of x16 speeds, too... (See the short sketch after the comments for a toy illustration of this negotiation.)
  • smeister - Friday, February 1, 2008 - link

    What's with the memory reference voltage?
    On the specification page (pg 2)
    Memory Reference Voltage Auto, 0.90V ~ 1.25V

    It should be half the DDR3 memory voltage
    1.5V x 0.5 = 0.75V, so should be: Auto, 0.75V ~ 1.25V
  • kjboughton - Friday, February 1, 2008 - link

    If you want half of 1.50V then leave it on 'Auto'...regardless, the lowest manually selectable value is 0.90V.
  • DBissett - Thursday, January 31, 2008 - link

    I can't find it now, but a couple of days ago I found this X48 board listed on MSI's website along with an X48C which would take either DDR3 or DDR2. Would be great to be able to use it now with DDR2 and upgrade to DDR3 when the prices get sane and it becomes clear why DDR3 is better.

    Dave
  • feraltoad - Thursday, January 31, 2008 - link

    [quote] almost always recommend replacing the thermal interface material (TIM) [/quote]

    You state to replace the TIM for the PWM and chipset heatpipe coolers. I have a question regarding that. I have an IP35 Pro, and I bought a new case. I thought now might be a good time to replace my pushpins with bolts, but I am hesitant about removing the thermal pad. I know that direct contact with the heatpipe cooling system will result in better heat transfer, but I am afraid of shorting something out. Is it safe to have the cooler sitting directly on the PWM? Does the pad also function as an insulator? I can live with a bit higher temps, but I can't live with killing my MOBO. Anyone's comments with some experience on this would be greatly appreciated.
  • ButterFlyEffect78 - Thursday, January 31, 2008 - link

    I get 9631MB/s on my Nvidia EVGA 680i chipset at only 750MHz DDR2 with 4-4-3-5 1T.

    And my brother, who owns an Intel P35 Foxconn Mars board, gets 9132MB/sec at 950MHz DDR2 with 5-5-5-18 2T.

    So what is the point of moving to DDR3 when it offers no performance gains in memory bandwidth, even at a whopping 1600MHz? Is it just me who thinks CAS 7 is wayyyy too high to even consider pushing DDR3 to the market right now?

    I believe this is just what Intel wants so it can make AMD look old, like how they forced AMD a few years ago to make AM2 boards that only supported DDR2.
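
A quick footnote to the link-width exchange above: the toy model below illustrates the negotiation kjboughton describes - the link trains at the widest width both the slot and the card support, and fails when there is no common width. Real PCIe link training is far more involved, and the width sets here are hypothetical examples, not values from any particular card:

```python
# Toy model of PCIe link-width negotiation. Real link training (TS1/TS2
# ordered sets, lane polarity, and so on) is far more involved; this only
# shows why a card supporting x16/x8/x4 but not x1 gets no link in an
# x1-wired slot.

def negotiate_width(slot_widths, card_widths):
    """Return the widest width both sides support, or None if none exists."""
    common = set(slot_widths) & set(card_widths)
    return max(common) if common else None

print(negotiate_width([1], [16, 8, 4]))            # None - link will not train
print(negotiate_width([16, 8, 4, 1], [16, 8, 4]))  # 16 - full-width link
```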
