
  • moiettoi - Friday, June 27, 2008 - link

    Hi all

    This sounds like a great board, and for someone like me who uses 4x22" monitors and does heaps of multitasking it sounds perfect; I would gladly pay the price asked.

    BUT why is such a great board slowed right down by not having DDR3 memory? Because from what I've read, at the moment there is not that much difference between running this and what I have now, which is a quad core with DDR3 that runs great, though I do overwork it. So bigger would be better.

    You would think (and I'm sure they already know) that it would be common sense to make this board with DDR3; as far as I can see, that is its only fault.

    We will probably see that board come out soon, or next in line, once they have sold enough of these to satisfy their egos.

    Great board, but just not yet. I will be waiting for the next one out, which will have to carry DDR3 if they want to move their technology forward.

  • VooDooAddict - Thursday, February 07, 2008 - link

    For testers of large distributed systems this is an awesome thing to have sitting on your desk.

    You can have a small server room running on one of these.

    The biggest shortfall I see is cramming enough RAM on it.
  • iSOBigD - Tuesday, February 05, 2008 - link

    I'm actually very disappointed with the 3D rendering speed. Going from 1 core to 4 cores takes my rendering performance up by close to 400% (16 seconds down to 4-something seconds, etc.) in Max with any renderer (I've tried Scanline, MentalRay and VRay). I'm surprised that going from 4 to 8 only gives you 40-60% more speed. That's pretty pathetic, so I suspect the board is to blame, not the software.
  • martin4wn - Tuesday, February 05, 2008 - link

    Actually, 40-60% is not disappointing at all; it's quite impressive. You are running into Amdahl's law: only the parallel part of the app scales. Here's a simple worked example:

    Say the application is 94% parallel code and 6% serial. As you add cores, say the parallel part scales perfectly, so doubles in speed with every doubling in core count. Now say the runtime on one core is 16 seconds (your example). Of that, 1 second is serial code and the other 15 seconds is parallel code running serially.

    Now running on a 4-core machine, you still have the 1 s of serial code, but the parallel part drops to 15/4 = 3.75 seconds, for a total runtime of 4.75 s. Overall scaling is 3.4x. Now go to 8 cores: total runtime = 1 + 15/8 = 2.88 s. That's a gain of about 65% going from 4 cores to 8 cores, and overall scaling of 5.6x.

    So the numbers are actually consistent with what you are seeing. It's a great illustration of Amdahl's law: even an app that is 94% parallel only gains about 65% going from 4 to 8 cores, even with perfect scaling, and it's really hard to get good scaling at even moderate core counts. Once you get to 16 or more cores, expect scaling to fall off even more dramatically.
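    The walkthrough above can be sketched in a few lines of Python. The 94%/6% split and the 16 s single-core baseline are the example's assumptions, not measured figures:

```python
# Amdahl's-law sketch of the worked example above: a 16 s job that is
# 94% parallel (15 s parallel, 1 s serial), assuming the parallel part
# scales perfectly with core count.

def runtime(serial_s, parallel_s, cores):
    """Total runtime when only the parallel portion divides across cores."""
    return serial_s + parallel_s / cores

serial_s, parallel_s = 1.0, 15.0  # seconds, from the 16 s single-core example

for cores in (1, 2, 4, 8, 16):
    t = runtime(serial_s, parallel_s, cores)
    speedup = runtime(serial_s, parallel_s, 1) / t
    print(f"{cores:2d} cores: {t:5.2f} s, {speedup:4.2f}x overall")

# The 4 -> 8 core step: 4.75 s -> 2.88 s, only ~1.65x despite doubling cores.
```

    Plugging in 16 cores shows the fall-off the comment predicts: the serial second starts to dominate, and overall scaling stalls well short of the core count.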
  • ChronoReverse - Tuesday, February 05, 2008 - link

    This is why I'm quite happy with my quad core. The useful limit on the desktop would probably be a quad core with SMT. After that, faster individual cores will be needed regardless of how parallel our code gets. Face it: you're not getting 90% parallelizable software most of the time, and even then, 8 cores over 4 isn't getting you more than about a 50% boost in the best case for 90% parallel code.
  • FullHiSpeed - Tuesday, February 05, 2008 - link

    Why the heck does this D5400XS motherboard support only the QX9775 CPU? If you need 8 cores, you can get a lot more bang for the buck with the quad-core Xeon 5400 series, at only 80 W TDP each and up to 3 GHz. For a TOTAL of $508 ($254 per quad) you can have 8 cores @ 2 GHz.

    Last month I built a system with a Supermicro X7DWA-N motherboard ($500), 4 GB of DDR2-667 ($220) and a single 2.83 GHz Xeon E5440 ($773), which I use to test Gen 2 PCIe dual-channel 8 Gb/s Fibre Channel boards, two boards at once.
  • Starglider - Tuesday, February 05, 2008 - link

    Damnit. AMD could've destroyed this if they'd gotten their act together. Tyan makes a 4-socket Opteron board that fits into an E-ATX form factor.

    I was strongly tempted to get one before the whole Barcelona launch farce. If AMD hadn't made such horrible execution blunders and had been able to devote the kind of resources Intel has to a project like this, we could have four Barcelonas running at 3 to 3.6 GHz with eight DDR2 slots, each on a dedicated channel. Ah well. Guess I'll be waiting for Nehalem.
  • enigma1997 - Tuesday, February 05, 2008 - link

    Note what Francois said in his Feb 04 reply regarding memory timing. Do you think it would help the latency and make it closer to DDR2/DDR3? Thanks.
  • enigma1997 - Tuesday, February 05, 2008 - link

    CL3 FB-DIMMs from Kingston would be "insanely fast"?! Have a read of this article.
  • Visual - Tuesday, February 05, 2008 - link

    I must say, I am very disappointed.

    Not with the performance - everything is as expected on that front... I didn't even need to see benchmarks for it.

    But prices and availability are hell. AMD giving up on QuadFX is hell. Intel not letting us use DDR2 is hell.

    I was really hoping I could get a dual-socket board with a couple (or quad) PCI-express x16 slots and standard ram, coupled with a pair of relatively inexpensive quadcore CPUs. Why is that too much to ask?

    The ASUS L1N64-SLI WS board has been available for an eon now, costs less than $300 and has quite a good feature set. Quad-core Opterons for the same socket have also been available for more than a quarter now, some models as cheap as $200-$250.
    Unfortunately, for some god-damned reason, neither ASUS nor AMD is willing to make this board work with these CPUs. The board works just fine with dual-core Opterons, all the while using standard unbuffered, unregistered DDR2 modules, but not with quad cores? WTF.

    And that board is ancient now. I am quite certain AMD could, if they wanted, have a refresh already - using the newest and coolest chipsets with PCIe 2.0, HT 3.0, independent power planes for each CPU, etc.
    Intel could also certainly make a dual-socket board that works with cheap DDR2, has plenty of PCI Express slots, and takes the cheap $300 quad-core Xeons that are already out instead of the $1500 "Extremes".

    I feel like the industry is purposely slowing, throttling technological progress. It's like AMD and Intel just don't want to give us the maximum of their real capabilities, because that would devalue their existing products too quickly. They stand around idly most of the time, trying to sell off their old tech.
    Same as NVIDIA not letting us have SLI on all boards, or ATI not allowing CrossFire on nForce, for that matter.
    Same as a whole lot of other manufacturers too...
    I feel like there is some huge anti-progress conspiracy going on.
  • SiliconDoc - Thursday, February 07, 2008 - link

    You're onto something there; just make it ten times worse and you'll have the real picture. I've seen hardware years ago that puts current hard drives to shame. So for whatever reasons, things are limited, like the 56k modem was.
    A friend just bought an HD 2900 Pro (got it yesterday), 256-bit, 512 MB. There were rumors that there was a 512-bit version; he swore he saw it advertised. Well, to make a long story short, the $163 512-bit version was getting BIOS-flashed and overclocked up to the level of the $400 2900 XT (it was the same core, apparently), and it sold out real quickly and then got pulled... it's a ghost now...
    I looked for one since I just found out, and saw one at some place online for nearly $400, and one at a music store online, posted but not in stock - special order only, likely just a token web presence.
    In other words, they can pump those things out like mad, and depending on how much turkey they want in their bank... they start doing calculations, and when the consumer "catches" them, it's like anything else.
    Let's face it, prices have gone a bit wild lately, and the big boys must have ringing cash registers in their eyes.
    If they can pump 300 or 500 or 2 grand out of people instead of Disney World or Vegas, they'll do it, and they see the drooling...
    slobberer out
  • Anonymous Freak - Tuesday, February 05, 2008 - link

    Have you checked the prices of Xeons vs. equivalent Core 2 Extreme recently?

    According to Pricewatch, the Xeon 5472 (3.0 GHz, 1600 MHz bus) is about $1029-$1050. The Core 2 Extreme QX9650 (3.0 GHz, 1600 MHz bus) is $1038-$1166. I can't find the QX9770 on Pricewatch, but other searches suggest it is about $1600, while the equivalent Xeon is $1400.

    The Core 2 "Extreme" line has, since its inception, been more expensive than the equivalent Xeons. Heck, it might be cheaper to pick up the Xeon equivalent of the QX9775 than the QX9775 itself.
  • Anonymous Freak - Tuesday, February 05, 2008 - link

    You state: "We tested Skulltrail with only two FB-DIMMs installed, but even in this configuration memory latency was hardly optimal:"

    This is a major flaw in your benchmarking. As your own Mac Pro review shows, quad-channel FB-DIMM configurations have lower latency and higher bandwidth than dual-channel ones. You should have filled all four FB-DIMM sockets. The latency penalty of multiple AMBs only applies to multiple AMBs on the same channel. For example, in a 5400-based server with four sockets per channel, having four total FB-DIMMs (one per channel, say 4 GB each) produces better results than eight total FB-DIMMs (two per channel, 2 GB each), and a system with sixteen FB-DIMMs total (four per channel, 1 GB each) fares worst of all. Of course, that assumes the TOTAL amount of RAM remains the same for each configuration. If you have an application that can benefit from massive amounts of RAM, the extra RAM will far outweigh the performance penalty of the extra AMBs per channel. (In my example, moving from 16 GB of RAM using four 4 GB FB-DIMMs to 64 GB using sixteen 4 GB FB-DIMMs would benefit certain applications just from the amount of RAM.)

    In addition, the new chipset, and newer FB-DIMM modules with newer AMBs, produce better results than their first-generation counterparts. For example, your Mac Pro benchmark showed CPU-Z latencies of 87 ns (quad-channel) and 92 ns (dual-channel, worse) for the Mac Pro, vs. 52 ns for a Core 2 Duo with DDR2-800; the new benchmark shows 79 ns for the 5400 chipset in dual-channel mode (assuming the same ratio, quad-channel should show about 75 ns) vs. 55 ns for a Core 2 Quad with DDR2-800. Yeah, 75 ns is still slower than 55 ns, but it's better than the 87 ns the original Mac Pro scored. (The new Mac Pro should see an improvement over the old Mac Pro on par with this Skulltrail board.)
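    The quad-channel figure above is just a ratio estimate; a tiny Python sketch of that arithmetic, assuming the original Mac Pro's dual-to-quad-channel latency ratio carries over unchanged to the 5400 chipset:

```python
# Back-of-envelope check of the quad-channel estimate above, assuming the
# dual- to quad-channel latency ratio from the original Mac Pro numbers
# (92 ns dual vs 87 ns quad) carries over to the 5400 chipset.

macpro_dual_ns, macpro_quad_ns = 92.0, 87.0
skulltrail_dual_ns = 79.0  # measured dual-channel CPU-Z latency from the review

estimated_quad_ns = skulltrail_dual_ns * (macpro_quad_ns / macpro_dual_ns)
print(f"Estimated quad-channel latency: {estimated_quad_ns:.1f} ns")  # ~74.7 ns
```

    That works out to roughly 75 ns, still well above the ~55 ns of desktop DDR2-800, which is the commenter's point.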
  • Anand Lal Shimpi - Tuesday, February 05, 2008 - link

    You are correct on the FBD latency issue. We didn't have small enough FB-DIMMs on hand to run a 4 x 1 GB configuration, but the difference in latency is still not enough to change the situations where Skulltrail is outperformed by its desktop counterparts. The situation would improve a bit, but the point I was trying to make is that in applications that can't take advantage of all 8 cores, Skulltrail will be slower thanks to its higher-latency memory subsystem.

    Take care,
  • Googer - Monday, February 04, 2008 - link

    For a premium enthusiast product with a $500 price tag and server DNA, this thing had better come with an integrated SAS controller too. There are plenty of other server/workstation motherboards in this price range that offer SAS, and if performance is the purpose of Skulltrail's existence, there's no reason for it to be left out. 15,000 RPM drives for the win.
  • dansus - Monday, February 04, 2008 - link

    I would imagine you would see more of a difference if you used the multithreaded DLL (mt.dll) with x264 when encoding.

    Especially if you're doing a 2+ pass encode, where the first pass typically uses 50% CPU.

    I can see myself buying one later in the year as prices come down. At the very least, I can do two quad-core encodes at once.
  • JKing76 - Monday, February 04, 2008 - link

    It's no secret that at a certain point, the computer "enthusiast" market is more about bragging than performance. But this is the most absurd and pointless release of pure e-penis waggling I've ever seen, and as a computer engineer I am literally embarrassed that a legitimate company like Intel is responsible. The EPA should fine Intel for this debacle, penalize them for each machine sold, and confiscate the computers of anyone selfish and stupid enough to buy one.
  • SiliconDoc - Thursday, February 07, 2008 - link

    Wow. I get a kick out of the bloggers who so often find so many problems with really high-end machines. Strangely enough, they never seem to post their own "rig" stats when they are having a big fit of complaints.
    I suspect the real problem is massive e-penis envy. Expecting the government to shut down a private firm's product, confiscate purchasers' products unless they pass your "needs" test, and maybe hand out a Greenpeace fine and carbon tax (I know it crossed someone's mind) seems to me the biggest green streak of jealousy I've yet witnessed.
    The bottom line is, more than 99% of the freaks reading this review would wet their pants and float off into heavenly bliss if they found the "Skulltrail" (that's what I find offensive - the sick name) on their desk in the morning.
    I find the whole thing much like a bunch of guys at an auto show putting down the brand-new swing-up-door 10/80 stainless XXX sports car, when deep down inside not one of them would turn down the set of keys, no matter how often they'd claim otherwise.
    Suddenly, all that extra CPU horsepower would be a prudent reserve for the upcoming releases that no doubt will very soon make use of it all, since duals and quads are now becoming commonplace.
    It's all so amusing - like when the Joneses hate the new McMansion, basically because they aren't living in it.
  • Nihility - Monday, February 04, 2008 - link

    Why didn't AMD make this available with Phenom? It would have won them the performance crown (sorta, since this apparently doesn't scale very well).
  • legoman666 - Monday, February 04, 2008 - link

    The scalability has nothing to do with the platform; it has to do with the apps themselves. 2x Phenoms will scale no better than 2x Intel quads. There are simply few programs out there designed to take advantage of more than one core, much less 8.
  • chizow - Monday, February 04, 2008 - link


    "we don't have a problem recommending it, assuming you are running applications that can take advantage of it. Even heavy multitasking won't stress all 8 cores, you really need the right applications to tame this beast."

    I'm not sure how you could come to that conclusion unless you posted some caveats like 1) you're getting it for free from Intel, or 2) you're not paying for it yourself or have no concern about costs.

    Besides the staggering price tag ($500 board + 2 x QX9775 @ $1300-1500 each + the FB-DIMM premium), there are some real concerns about how much benefit this setup would yield over the best-performing single-socket solutions. In games, there's no support for Tri-SLI and beyond for NV parts, although 3-4 cards may be an option with ATI; 3 seems more realistic, as that last slot will be unusable with dual-slot cards.

    Then there's the actual benefit gained on a practical basis. In games, it looks like it's not even worth bothering with, as you'd most likely see a bigger boost from buying another card for SLI or CrossFire. Everything else is highly input-intensive: you spend most of your work day preparing data, then shave a few seconds off compute time so you can go to lunch 5 minutes sooner or catch an earlier train home.

    I guess in the end there's a place for products like this, to show off what's possible, but recommending it without a few hundred caveats makes little sense to me.
  • chinaman1472 - Monday, February 04, 2008 - link

    The systems are made for an entirely different market, not the average consumer or the hardcore gamer.

    Shaving off a few minutes really adds up. You think people only compile or render once per project? Big projects take time to finish, and if you can shave off 5 minutes every single time, across several computers, the thousands of dollars invested come back. Time is money.
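    As a rough illustration of the time-is-money argument above, here is a hypothetical Python sketch; every figure in it (minutes saved, render count, hourly rate, hardware premium) is an assumed number for illustration, not a figure from the thread:

```python
# Hypothetical back-of-envelope ROI sketch for the argument above. The rate,
# render count, and hardware cost are made-up assumptions, not quoted figures.

minutes_saved_per_render = 5
renders_per_day = 10
working_days_per_year = 250
hourly_rate_usd = 60.0          # assumed fully loaded cost of an artist/engineer
extra_hw_cost_usd = 5000.0      # assumed premium for a Skulltrail-class box

hours_saved_per_year = (minutes_saved_per_render * renders_per_day
                        * working_days_per_year / 60)
value_saved_usd = hours_saved_per_year * hourly_rate_usd

print(f"{hours_saved_per_year:.0f} h/year saved, worth ${value_saved_usd:,.0f}")
print(f"Premium recovered: {extra_hw_cost_usd / value_saved_usd:.0%} of a year")
```

    Under these assumptions, the hardware premium pays for itself well within a year; with fewer renders per day or a lower rate, the math tips the other way, which is exactly the disagreement in this thread.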
  • chizow - Monday, February 04, 2008 - link

    I didn't focus on real-world applications because the benefits are even less apparent. Save 4 s on calculation time in Excel? Spend an hour formatting records/spreadsheets to save 4 s... yeah, that's money well spent. The same is true for many real-world applications. The sad reality is that for the same money you could buy 2-3x as many single-CPU rigs and, in that case, gain more performance and productivity as a result.
  • Cygni - Monday, February 04, 2008 - link

    As we both noted, 'real world' isn't just Excel. It's also AutoCAD and 3ds Max. These are arenas where we aren't talking about shaving 4 seconds; we are talking about shaving whole minutes, and in extreme cases even hours, off renders.

    This isn't an office computer, and this isn't a casual gamer's machine. This is a serious workstation or extreme enthusiast rig, and you are going to pay the price premium to get it. Like I said, this is a CAD and 3D artist's dream machine... not for your secretary to make phone trees on. ;)

    In this arena? I can't think of any machines that even come close to it in performance.
  • chizow - Monday, February 04, 2008 - link

    Again, in both AutoCAD and 3ds Max, you'd be better served putting that extra money into another GPU, or even another workstation for a fraction of the cost: 2-3x the price for uncertain gains over a single-CPU solution, versus a second or third workstation for the same money. For a real-world example, ILM said it took around 24 hours or something ridiculous to render each Transformers frame. Say it took 24 hours with a single quad core and 2 x Quadro FX, and say Skulltrail cut that down to 18 or even 20 hours. Sure, that's a nice improvement, but you'd still be better off with 2 or even 3 single-CPU workstations for the same price. If it offered more GPU support and unbuffered DIMM support along with the dual-CPU support, it might be worth it, but it doesn't, and it actually offers less scalability for NV parts than cheaper enthusiast chipsets.
  • martin4wn - Tuesday, February 05, 2008 - link

    You're missing the point. Some people need all the performance they can get in one machine. Sure, for batch rendering a movie you just do each frame on a separate core and buy roomfuls of blade servers to run them on. But think of an individual artist at their own workstation. They are trying to get a perfect rendering of a scene, constantly tweaking attributes and re-rendering. They want all the power they can get in their own box - it's more efficient than trying to distribute the job across a network. Other examples include things like particle or fluid simulations, which are done best on a single shared-memory system where you can load the particles or fluid elements into a block of memory and let all the cores in your system loose on evaluating separate chunks of it.

    I write this sort of code for a living, and we have many customers buying up 8 core machines for individual artists doing exactly this kind of thing.
  • Chaotic42 - Tuesday, February 05, 2008 - link

    Anyone can come up with arbitrary workflows that don't use all of the power of this system. There are, however, some workflows which would use this system.

    I'm a cartographer, and I deal with huge amounts of data being processed at the same time. I have a mapping program cutting imagery on one monitor, Photoshop performing image manipulation on a second, Illustrator doing TIFF separates on a third, and in the background I have four Excel tabs and enough IE tabs to choke a horse.

    Multiple systems make no sense because you need so much extra hardware to run them (in the case of this system: two motherboards, two cases, etc.), and you'll also need space to put the workstations (assuming you aren't using a KVM). You would also need to clog the network with your multi-gigabyte files to transfer them from one system to another for different processing.

    That seems a bit more of a hassle than a system like the one featured in the article.

  • Cygni - Monday, February 04, 2008 - link

    I don't see any problem with what he said there.

    All you talked about was gaming, but let's be honest here: this is not a system that's going to appeal to gamers, and this isn't a system set up for anyone with price concerns.

    In reality, this is a CAD/CAM dream machine, and that is a market where $4,000-5,000 rigs are the low end. In the long run, even for small design or production firms, 5 grand is absolute peanuts and WELL worth spending twice a year to keep happy engineers banging away. The inclusion of SLI/CrossFire is going to move these things like hotcakes in this sector. There is nothing that will be able to touch it. And that's not even mentioning its uses for rendering...

    I guess what I'm saying is: try to realize the world is a little bit bigger than gaming.
  • Knowname - Sunday, February 10, 2008 - link

    On that note, are there any studies on the gains you get in CAD applications by upgrading your video card? How big a role does the GPU really play in the process? The only significant gain I can think of for CAD is quad desktop monitors per card with Matrox video cards. I don't see how the GPU (beyond the RAMDAC, or whatever it's called) really makes a difference. Please tell me, because I keep wasting my money on ATI cards (not to mention my G550, which I like, but it wasn't worth the money I spent when I could have gotten a 6600 GT...) just on the hunch they'd be better than NVIDIA due to the 2D filtering and such (not really a big deal now, but...).
  • HilbertSpace - Monday, February 04, 2008 - link

    A lot of the Intel 5000-series chipsets let you use riser cards for more memory slots. Is that possible with Skulltrail?
