Analysis

The Supermicro Twin 1U combined with Intel's latest quad-core technology offers an amazing amount of processing power in a 1U box. Little more than two years ago, four cores in a 1U server was an uncommon setup; now we get four times as many cores in the same space. Even better, the second node increases power requirements by only 55%, while a second server would roughly double the power needed. Considering the very competitive price, we can conclude that the Supermicro 1U Twin is probably the most attractive offer on the market today for those looking for an HPC node or rendering farm node.
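To make the power comparison concrete, here is a minimal sketch of the arithmetic in Python; the single-node wattage is a hypothetical figure chosen for illustration, not a measurement from this review:

```python
# Power scaling sketch. The 300 W baseline is an assumed figure for one
# fully loaded 1U node; only the +55% vs. +100% relationship comes from
# the review.
single_node_w = 300.0

twin_both_nodes_w = single_node_w * 1.55  # second Twin node adds ~55%
two_servers_w = single_node_w * 2.0       # two separate 1U servers

print(f"Twin 1U, both nodes active: {twin_both_nodes_w:.0f} W")
print(f"Two separate 1U servers:    {two_servers_w:.0f} W")
print(f"Power saved by the Twin:    {two_servers_w - twin_both_nodes_w:.0f} W")
```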

The situation is different for the other markets that Supermicro targets: "data center and high-availability applications". What makes the 1U Twin so irresistible for the HPC and rendering crowd results in a few shortcomings for HA applications, such as heavy-duty web serving. Although there is little doubt in our mind that Supermicro has used a high-quality, high-efficiency power supply, the fact remains that it is a single point of failure that can take down both nodes. Of course, with decent UPS protection you can eliminate the number one killer of power supplies: power surges. It must also be said that several studies have shown that failing hardware causes only 10% of total downtime, and about half of that, or 5% overall, is the result of a failed PSU. With a high-quality PSU and a UPS with surge protection, that percentage will be much lower, so it will depend on your own situation whether this is a risk you are willing to take. More than a third of downtime is caused by software problems, and another third is planned downtime for upgrades and similar tasks; those are outages that the Supermicro Twin, with its two nodes, can avoid through software techniques such as NLB and other forms of clustering.
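As a back-of-the-envelope illustration of those downtime figures, the sketch below applies the quoted percentages to a hypothetical annual downtime budget; the 50-hour input is an assumption for illustration, not a number from the studies:

```python
# Downtime breakdown sketch, using the percentages quoted above:
# ~10% of downtime is hardware, about half of that the PSU, and roughly
# a third each is software problems and planned maintenance.
total_downtime_h = 50.0  # assumed annual downtime for a single server

psu_share = 0.10 * 0.50       # failed PSU: 5% of all downtime
software_share = 0.35         # software problems: more than a third
planned_share = 0.33          # planned upgrades and similar tasks

print(f"PSU-related downtime:     {total_downtime_h * psu_share:.1f} h/year")

# Two nodes plus clustering (e.g. NLB) can mask software failures and
# planned maintenance; the shared PSU risk is what remains.
avoidable_h = total_downtime_h * (software_share + planned_share)
print(f"Avoidable with two nodes: {avoidable_h:.1f} h/year")
```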

A lot of the SMBs we work with are looking to colocate their HA application servers and would love to run two web servers (NLB) and two database servers (clustered) in only 2U. Right now, those servers typically take 4U to 6U, and the Supermicro Twin could reduce those colocation costs considerably.
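As a rough sketch of what that space reduction could mean, assuming a hypothetical per-U monthly colocation rate (real rates vary widely per facility):

```python
# Colocation cost sketch. The per-U rate is an assumed figure; only the
# 4U-6U versus 2U footprint comparison comes from the text above.
price_per_u_month = 50.0  # assumed USD per rack unit per month

current_u = 6             # typical footprint today: 4U to 6U
twin_u = 2                # two Twin 1U chassis: four nodes in 2U

monthly_savings = (current_u - twin_u) * price_per_u_month
print(f"Monthly savings: ${monthly_savings:.0f}")
print(f"Yearly savings:  ${monthly_savings * 12:.0f}")
```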

The biggest shortcoming is one that Supermicro can probably resolve easily: the lack of a SAS controller. It is not the higher performance of SAS that makes us say that, but VMware's lack of support for SATA drives and controllers. A few Supermicro Twin 1U systems together with a shared storage solution (FC SAN, iSCSI SAN) could be an ideal virtualization platform: you could run many virtual machines across the different physical nodes, which results in consolidation and thus cost reduction, while the physical nodes offer high availability. However, VMware ESX Server does not support SATA controllers very well, so booting from the SAN is then the only option, which increases the complexity of the setup. A SAS controller would allow users to boot from two mirrored local disks.

A SAS controller and a redundant power supply would make the Supermicro Twin 1U close to perfect. But let's be fair: the Supermicro Twin 1U is an amazing product for its primary market. It's not every day that we meet a 16-core server that saves you 100W of power and cuts colocation rack space in half, all for a very competitive price. Two nodes with eight cores each will also remain a very interesting solution for applications such as rendering farms even when the quad-core Xeon MP ("Tigerton") and AMD's quad-core Opteron ("Barcelona") arrive, because performance will be competitive and the price of four-socket quad-core systems will almost certainly be quite high. The Supermicro Twin 1U is an interesting idea that has materialized into an excellent product.

Advantages:
  • Cuts necessary rack space in half, superb computing density
  • Very high performance/power ratio, simply superior power supply
  • Highest performance/price ratio today
  • Supermicro's excellent track record
Disadvantages:
  • No SAS controller (for now?)
  • Hard to virtualize with ESX server
  • Cold-swap PSU
Comments

  • JohanAnandtech - Monday, May 28, 2007 - link

    Those DIMM slots are empty :-)
  • yacoub - Monday, May 28, 2007 - link

    ohhh hahah thought they were filled with black DIMMs :D
  • yacoub - Monday, May 28, 2007 - link

    Also on page 8:

    quote:

    In comparison, with 2U servers, we save about 130W or about 30% thanks to Twin 1U system

    You should remove that first comma. It was throwing me off because, as written, it sounds like the 2U servers save about 130W, but then you get to the end of the sentence and realize you mean "in comparison with 2U servers, we save about 130W or about 30% thanks to the Twin 1U". You could also say "Compared with 2U servers, we save..." to make the sentence even clearer.

    Thanks for an awesome article, btw. It's nice to see these server articles from time to time, especially when they cover a product that appears to offer a solid TCO and compares well against the competition from big names like Dell.
  • JohanAnandtech - Monday, May 28, 2007 - link

    Fixed! Good point
  • gouyou - Monday, May 28, 2007 - link

    The part about InfiniBand's performance being much better as you increase the number of cores is really misleading.

    The graph mixes cores and nodes, so you cannot tell anything from it. We are in an era where a server has 8 cores: the scaling is completely different, as it will depend less on the network. BTW, was the graph made with single core servers? Dual cores?
  • MrSpadge - Monday, May 28, 2007 - link

    Gouyou, there's a link called "this article" in the part on InfiniBand which answers your question. In the original article you can read that they used dual 3 GHz Woodcrests.

    What's interesting is that the difference between InfiniBand and GigE is actually more pronounced for the dual core Woodcrests compared with single core 3.4 GHz P4s (at 16 nodes). The explanation given is that the faster dual core CPUs need more communication to sustain performance. So it seems like their algorithm uses no locality optimizations to exploit the much faster communication within a node.

    @BitJunkie: I second your comment, very nice article!

    MrS
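MrSpadge's explanation can be made concrete with a toy fixed-problem-size model: as per-step compute time shrinks (faster CPUs), a fixed communication cost consumes a larger share of every iteration, so a low-latency interconnect pays off more. All numbers below are illustrative assumptions, not measurements from the article:

```python
# Toy scaling model: each iteration computes, then exchanges data over
# the interconnect. All timings are assumed, for illustration only.

def parallel_efficiency(compute_s: float, comm_s: float) -> float:
    """Fraction of wall-clock time spent on useful computation."""
    return compute_s / (compute_s + comm_s)

network_s = {"GigE": 0.010, "InfiniBand": 0.001}  # seconds per exchange

for cpu, compute_s in [("slow single core", 0.100),
                       ("fast dual core", 0.025)]:
    for net, comm in network_s.items():
        eff = parallel_efficiency(compute_s, comm)
        print(f"{cpu:17s} + {net:10s}: efficiency {eff:.0%}")

# The faster CPU loses far more efficiency on GigE than on InfiniBand,
# which is why the gap between the two networks widens with faster nodes.
```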
  • BitJunkie - Monday, May 28, 2007 - link

    Nice article, I'm most impressed by the breadth and the detail you drilled into, and also the clarity with which you presented your thinking and results. It's always good to be stretched, and this is a great example of how to approach things in a structured, logical way.

    Don't mind the "it's an enthusiast site" comments. Some people will be stepping outside their comfort zone with this and won't thank you for it ;)
  • JohanAnandtech - Monday, May 28, 2007 - link

    Thanks, very encouraging comment.

    And I guess it doesn't hurt that the "enthusiast" is reminded that "PCs" can also be fascinating in a role other than "hardcore gaming machine" :-). Many of my students need the same reminder: being an ITer is more than booting Windows and your favorite game. My 2-year-old daughter can do that ;-)
  • yyrkoon - Monday, May 28, 2007 - link

    It is, however, nice to learn about InfiniBand. This is a technology I have been interested in for a while now, and I was under the impression it was not going to be implemented until PCIe v2.0 (maybe I missed something here).

    I would still rather see this technology in the desktop class PC, and if this is yet another enterprise-driven technology, then people such as myself, who were hoping to use it for decent home networking (remote storage), are once again left out in the cold.
  • yyrkoon - Monday, May 28, 2007 - link

    quote:

    And I guess it doesn't hurt that the "enthusiast" is reminded that "PCs" can also be fascinating in a role other than "hardcore gaming machine" :-). Many of my students need the same reminder: being an ITer is more than booting Windows and your favorite game. My 2-year-old daughter can do that ;-)


    And I am sure every gamer out there knows what iSCSI *is* . . .

    Even in 'IT', a 16-core 1U system is a specialty box, and while they may be semi-common in load balancing/failover scenarios (or maybe even used extensively in parallel processing, and even more possible uses...), they are still not all that common compared to the 'standard' server. Recently, a person I know deployed 40k desktops / 30k servers for a large company, and wouldn't you know it, not one had more than 4 cores... and I have personally contracted work from TV/radio stations (and even the odd small ISP), and outside of the odd 'Toaster', most machines in these places barely use 1 core.

    I also find technologies such as 802.3ad link aggregation, iSCSI, AoE, etc. interesting, and sometimes like playing around with things like openMosix or the latest/hottest Linux distro, but at the end of the day, other than experimentation, these things typically do not entertain me. Most of the above, and many other technologies, are for me just a means to an end, not entertainment.

    Maybe it is enjoyable staring at a machine of this type, not being able to use it to its full potential outside of the workplace? Personally I would not know, and honestly I really do not care, but if this is the case, perhaps you need to take notice of your 2-year-old daughter and relax once in a while.

    The point here? Perhaps *this* 'gamer' you speak of knows a good bit more about 'IT' than you give him credit for, and maybe even makes a fair amount of cash at the end of the day while doing so. Or maybe I am a *real* hardware enthusiast, who would rather be reading about technology instead of yet another 'product review'. Especially since any person worth their paygrade in IT should already know how this system (or anything like it) is going to perform beforehand.
