Dual socket motherboards have been around for ages, but dual socket enthusiast motherboards have a far shorter history. Back in the days when instruction level parallelism seemed to have no end in sight, having more than one CPU just didn't make sense for the masses. Most Windows applications weren't multithreaded, and CPU prices just weren't what they are today.

Many of the same types of applications that benefit from multiple cores today were around back then: 3D rendering, animation and image processing were all multithreaded CPU hogs. The problem was that if you wanted more than one CPU, you generally had to choose between a tweakable, high performance enthusiast motherboard and a workstation board. Workstation motherboards were much more expensive, not nearly as flexible from a component standpoint, and hardly ever performed as well as their desktop counterparts - the only real benefits were a more robust design and, of course, the ability to support multiple CPUs.

Over the years we saw a few important dual-socket enthusiast motherboards arrive on the scene, the most popular of which was arguably ABIT's BP6. For all intents and purposes the BP6 was a desktop motherboard; it just had two CPU sockets. Intel's Celeron processors were cheap enough that you could pop in a couple, overclock them and have a pretty decent workstation based on an enthusiast desktop motherboard. Tradeoffs? There were none. It was a very popular board.

Times do change, and eventually AMD and Intel stopped getting amazing returns from simply increasing instruction level parallelism and clock speed in their CPUs. The two turned to thread level parallelism to carry them through the next decade of microprocessor evolution; seemingly overnight, everyone had multiple cores in their systems.

The advent of the multi-core x86 CPU all but eliminated the need for a dual socket enthusiast platform. If you needed more cores, you simply tossed a multi-core CPU into your desktop board and you were good to go. When Intel introduced the first quad-core desktop x86 processors, things got even worse for dual socket motherboards. Most applications have a tough time using more than two cores, so a single quad-core CPU covered virtually all bases - and it was affordable too.

AMD didn't have a quad-core CPU until the recent launch of Phenom. In order to fill the gap between the dual core Athlon 64 X2 and the delayed arrival of Phenom, AMD dusted off plans to introduce a dual socket enthusiast platform and called it Quad FX.

The idea was simple: build an enthusiast platform that used normal desktop components but had two sockets. With dual-core CPUs this meant you'd have four cores in a system, and when quad-core arrived you'd have a healthy eight, all on an enthusiast class motherboard.

AMD ultimately abandoned Quad FX (although it does promise an upgrade path to quad-core CPUs), largely because while you had to buy an expensive motherboard and two dual-core CPUs to put the Quad in Quad FX, Intel was shipping faster, single socket, quad-core CPUs.

Intel did see some merit in AMD's Quad FX platform and actually released an ill-prepared competitor, something it called V8. Intel basically took a workstation Xeon motherboard and recommended that enthusiasts purchase a pair of quad-core Xeon processors, giving you an 8-core alternative to Quad FX. The problem with the V8 platform was that it was expensive, offered no multi-GPU support and required expensive FB-DIMMs thanks to its Xeon heritage.


The original V8 board was straight from the server world

Last April, Intel announced that it would be releasing a successor to V8, codenamed Skulltrail. Designed to fix many of V8's problems, Skulltrail is a promise Intel kept despite AMD's abandonment of the Quad FX project.

Today we have a preview of Skulltrail, which Intel expects to make available this quarter. Unlike Intel's Centrino or vPro, Skulltrail isn't officially a "platform"; it's just a name for a motherboard and CPU combination, nothing more. The motherboard is the Intel D5400XS, based on Intel's 5000 series server/workstation chipset (yes, FB-DIMMs are still a requirement). The board supports any LGA-771 CPU, but Skulltrail is designed to be used with a new processor: the Core 2 Extreme QX9775.

Comments

  • chizow - Monday, February 4, 2008 - link

    quote:

    we don't have a problem recommending it, assuming you are running applications that can take advantage of it. Even heavy multitasking won't stress all 8 cores, you really need the right applications to tame this beast.


    Not sure how you could come to that conclusion unless you posted some caveats like 1) you're getting it for free from Intel or 2) you're not paying for it yourself or have no concern about costs.

    Besides the staggering price tag associated with it ($500 + 2 x Xeon 9770 @ $1300-1500 + FB-DIMM premium), there are some real concerns with how much benefit this set-up would yield over the best performing single socket solutions. In games, there's no support for Tri-SLI and beyond for NV parts, although 3-4 cards may be an option with ATI. 3 seems more realistic as that last slot will be unusable with dual-slot cards.

    Then there's the actual benefit gained on a practical basis. In games, it looks like it's not even worth bothering with, as you'd most likely see a bigger boost from buying another card for SLI or CrossFire. For everything else, these are highly input intensive apps, so you spend most of your work day preparing data to shave a few seconds off compute time so you can go to lunch 5 minutes sooner or catch an earlier train home.

    I guess in the end there's a place for products like this, to show off what's possible, but recommending it without a few hundred caveats makes little sense to me.
  • chinaman1472 - Monday, February 4, 2008 - link

    The systems are made for an entirely different market, not the average consumer or the hardcore gamer.

    Shaving off a few minutes really adds up. You think people only compile or render one time per project? Big projects take time to finish, and if you can shave off 5 minutes every single time and have it happen across several computers, the thousands of dollars invested come back. Time is money.
  • chizow - Monday, February 4, 2008 - link

    I didn't focus on real-world applications because the benefits are even less apparent. Save 4s on calculating time in Excel? Spend an hour formatting records/spreadsheets to save 4s...ya that's money well spent. The same is true for many real world applications. Sad reality is that for the same money you could buy 2-3x as many single-CPU rigs and in that case, gain more performance and productivity as a result.
  • Cygni - Monday, February 4, 2008 - link

    As we both noted, 'real world' isn't just Excel. It's also AutoCAD and 3dsmax. These are arenas where we aren't talking about shaving 4 seconds; we are talking about shaving whole minutes and in extreme cases even hours off renders.

    This isn't an office computer, and this isn't a casual gamer's machine. This is a serious workstation or extreme enthusiast rig, and you are going to pay the price premium to get it. Like I said, this is a CAD and 3D artist's dream machine... not for your secretary to make phone trees on. ;)

    In this arena? I can't think of any machines that are even close to it in performance.
  • chizow - Monday, February 4, 2008 - link

    Again, in both AutoCAD and 3DSMax, you'd be better served putting that extra money into another GPU, or even another workstation for a fraction of the cost. You're paying 2-3x the cost for uncertain increases over a single-CPU solution, or you could get a second/third workstation for the same price. But for a real world example, ILM said it took @24 hours or something ridiculous to render each Transformers frame. Say it took 24 hours with a single quad-core with 2 x Quadro FX. Say Skulltrail cut that down to 18 or even 20 hours. Sure, nice improvement, but you'd still be better off with 2 or even 3 single-CPU workstations for the same price. If it offered more GPU support and non-buffered DIMM support along with dual CPU support it might be worth it, but it doesn't, and it actually offers less scalability than cheaper enthusiast chipsets for NV parts.
  • martin4wn - Tuesday, February 5, 2008 - link

    You're missing the point. Some people need all the performance they can get on one machine. Sure, when batch rendering a movie you just do each frame on a separate core and buy roomfuls of blade servers to run them on. But think of an individual artist on their own workstation. They are trying to get a perfect rendering of a scene. They are constantly tweaking attributes and re-rendering. They want all the power they can get in their own box - it's more efficient than trying to distribute it across a network. Other examples include stuff like particle or fluid simulations. They are done best on a single shared memory system where you can load the particles or fluid elements into a block of memory and let all the cores in your system loose on evaluating separate chunks of it.

    I write this sort of code for a living, and we have many customers buying up 8 core machines for individual artists doing exactly this kind of thing.
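
    A minimal sketch of the chunked, shared-memory pattern described above, using C++11 threads; the Particle type and the function names here are illustrative, not the commenter's actual code:

        // Split a shared particle array into chunks and let one thread per
        // core update its own chunk, roughly as described in the comment above.
        #include <algorithm>
        #include <cstddef>
        #include <functional>
        #include <thread>
        #include <vector>

        struct Particle {
            float x, y, z;    // position
            float vx, vy, vz; // velocity
        };

        // Advance every particle in [begin, end) by one time step.
        static void step_chunk(std::vector<Particle>& particles,
                               std::size_t begin, std::size_t end, float dt) {
            for (std::size_t i = begin; i < end; ++i) {
                particles[i].x += particles[i].vx * dt;
                particles[i].y += particles[i].vy * dt;
                particles[i].z += particles[i].vz * dt;
            }
        }

        void step_all(std::vector<Particle>& particles, float dt) {
            // One worker per hardware thread; each worker gets a disjoint chunk
            // of the same in-memory array, so no locking is needed for this update.
            const std::size_t cores = std::max(1u, std::thread::hardware_concurrency());
            const std::size_t chunk = (particles.size() + cores - 1) / cores;

            std::vector<std::thread> workers;
            for (std::size_t c = 0; c < cores; ++c) {
                const std::size_t begin = c * chunk;
                const std::size_t end = std::min(begin + chunk, particles.size());
                if (begin >= end) break;
                workers.emplace_back(step_chunk, std::ref(particles), begin, end, dt);
            }
            for (auto& w : workers) w.join(); // wait for all chunks to finish
        }
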
  • Chaotic42 - Tuesday, February 5, 2008 - link

    Anyone can come up with arbitrary workflows that don't use all of the power of this system. There are, however, some workflows which would use this system.

    I'm a cartographer, and I deal with huge amounts of data being processed at the same time. I have a mapping program cutting imagery on one monitor, Photoshop performing image manipulation on a second, Illustrator doing TIFF separates on a third, and in the background I have four Excel tabs and enough IE tabs to choke a horse.

    Multiple systems make no sense because you need so much extra hardware to run them (in the case of this system, two motherboards, two cases, etc.) and you'll also need space to put the workstations (assuming you aren't using a KVM). You would also need to clog the network with your multi-gigabyte files to transfer them from one system to another for different processing.

    That seems a bit more of a hassle than a system like the one featured in the article.

  • Cygni - Monday, February 4, 2008 - link

    I don't see any problem with what he said there.

    All you talked about was gaming, but let's be honest here: this is not a system that's going to appeal to gamers, and this isn't a system set up for anyone with price concerns.

    In reality, this is a CAD/CAM dream machine, and that's a market where $4,000-5,000 rigs are the low end. In the long run, for even small design or production firms, 5 grand is absolute peanuts and WELL worth spending twice a year to have happy engineers banging away. The inclusion of SLI/Crossfire is going to move these things like hotcakes in this sector. There is nothing that will be able to touch it. And that's not even mentioning its uses for rendering...

    I guess what I'm saying is try to realize the world is a little bit bigger than gaming.
  • Knowname - Sunday, February 10, 2008 - link

    On that note, are there any studies on the gains you get in CAD applications by upgrading your video card? How much does the GPU really contribute to the process? The only significant gain I can think of for CAD is quad desktop monitors per card with Matrox video cards. I don't see how the GPU (beyond the RAMDAC or whatever it's called) really makes a difference. Please tell me, because I keep wasting my money on ATI cards (not to mention my G550, which I like, but it wasn't worth the money I spent on it when I could have gotten a 6600GT...) just on the hunch they'd be better than NVIDIA due to the 2D filtering and such (not really a big deal now, but...)
  • HilbertSpace - Monday, February 4, 2008 - link

    A lot of the Intel 5000 series chipsets let you use riser cards for more memory slots. Is that possible with skully?
