Iwill

Matt Chang of Iwill showed me their newest 8-way AMD Opteron monster. The H8502 can power 16 AMD Opteron cores and can accommodate no less than 128 GB of RAM.

It is a massive server machine, of course, but a system administrator could take this server to a LAN party if he wanted to: two SLI-capable PCI Express x16 slots are available. That is, if he can carry the 80 kg chassis…

All joking aside, the H8502 I/O board is pretty impressive: two x16 and one x4 PCI Express slots, courtesy of the NVIDIA nForce Pro chipset, and five PCI-X slots, thanks to the AMD 8132 PCI-X tunnel.

This monster is powered by a redundant 1350 Watt power supply.

Iwill also showed a very impressive DPK66-S Socket 771 board for Dempsey with no fewer than 16 DIMM slots for FB-DIMMs, good for 64 GB of RAM. Most boards are limited to 12 FB-DIMMs.

A Socket-F Opteron solution was also on display.

Iwill told us that ECC buffered DDR-II at 800 MHz will be supported on this board.

However, our sources tell us that the available engineering samples of the Socket-F Opteron are not capable of working with DDR-II 800. So right now, boards that were designed to support DDR-II 800 cannot be validated.

A new revision, which is expected to be available to the motherboard manufacturers around mid-April, should solve that. With DDR-II 800, only 4 DIMMs per CPU can be used. Once you lower the memory speed to DDR-II 667, you can use all 8 DIMMs per CPU.
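
To put those population rules in perspective, here is a minimal sketch of the resulting capacity trade-off; the 4 GB DIMM size and the 4-socket configuration are assumptions for illustration only, not figures Iwill quoted:

    # Minimal sketch of the Socket-F memory trade-off described above.
    # The 4 GB DIMM size and the 4-socket count are assumptions for
    # illustration; neither figure was quoted by Iwill.
    DIMM_SIZE_GB = 4
    SOCKETS = 4

    # DIMMs per CPU usable at each memory speed, per the article.
    dimms_per_cpu = {"DDR-II 800": 4, "DDR-II 667": 8}

    for speed, dimms in dimms_per_cpu.items():
        per_cpu_gb = dimms * DIMM_SIZE_GB
        total_gb = SOCKETS * per_cpu_gb
        print(f"{speed}: {dimms} DIMMs/CPU = {per_cpu_gb} GB per CPU, "
              f"{total_gb} GB in a {SOCKETS}-way box")

    # DDR-II 800: 4 DIMMs/CPU = 16 GB per CPU, 64 GB in a 4-way box
    # DDR-II 667: 8 DIMMs/CPU = 32 GB per CPU, 128 GB in a 4-way box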


Supermicro

Supermicro had a whole range of SAS-capable servers available, but the few pictures that I took were too blurry. Angela D. Rosario and Alex Hsu gave me some interesting facts about their company: Supermicro is now the 5th largest x86 server manufacturer (in volume) and has more than 40% of the white-box server market. Supermicro sells more than 700,000 servers each year, and has more than 150 different motherboards available.

We’ll probably test some of Supermicro’s servers in the near future.


Conclusion

The servers of the near future contain multi-core 64-bit CPUs, SAS storage and massive amounts of memory. What kind of memory is not so clear. Of the manufacturers we spoke to, most feel that regular DDR-2 will be more attractive than FB-DIMMs in 2006. This might change, of course, in 2007.

Socket-F and AM2 give us a sense of déjà vu. At CeBIT 2003, the Socket 940 and Socket 754 boards were there, but few Athlon 64 and Opteron CPUs were available. At CeBIT 2006, all motherboard manufacturers are ready with their Socket-F and AM2 boards, but AMD seems a bit late to the party again. Intel is launching the new Dempsey Xeon, but draws little attention to it; the spotlight is clearly on its successor, Woodcrest.

Comments

  • AkaiRo - Monday, March 13, 2006 - link

    When you talk about SAS you have to clarify if you are referring to SAS 3.5" or SAS SFF (Small Form Factor). SAS 3.5", which is what the companies you are talking about in the article are using, is only a waypoint on the roadmap. SAS 3.5" and low-end/mid-range SATA enclosures use U320 connectors. High End SATA enclosures can use fibre or RJ-45 connectors as well. However, there are SAS (and SATA) SFF enclosures out on the market already (HP's Modular Storage Array 50 enclosure).

    SAS/SATA SFF is the designated target for the majority of storage subsystems in the next few years because server manufacturers are going to focus increasingly on spindle count, which affects overall I/O more than anything else. The SAS SFF drives use the platters from the 15,000 rpm drives, which are 2.5" in size, which is why the largest SAS SFF drives for now are 146GB. There is quite an initiative by the biggest players who deal in servers, workstations/desktops, AND notebooks to move to a common platform for ALL three classes of machines, but it's a chicken-and-egg thing, with everyone waiting for someone else to provide the incentive to make the switch.
  • Calin - Tuesday, March 14, 2006 - link

    The 2.5 inch drives are physically too small to reach high capacities, and many buyers don't know anything about their hard drive except its capacity. As a result, a physically smaller, cooler, even supposedly higher-performance drive at a higher price will be at a disadvantage compared to a physically larger, warmer and lower-performance one at a lower price. Especially taking into account that you can buy 500GB 3.5 inch drives, but only 120GB 2.5 inch drives.
  • themelon - Monday, March 13, 2006 - link

    This is nothing new. Granted once you go beyond 4 you have to run them slower....
  • JohanAnandtech - Tuesday, March 14, 2006 - link

    8 DIMMs per CPU was very uncommon and required expensive components and engineering. I have seen it on the HP DL585, but there 8 DIMMs result in DDR266 speed, which is a serious performance penalty. Most DDR boards are still limited to 4 DIMMs per CPU.

    With DDR-2, 6-8 DIMMs per CPU is relatively easy to do, at least at DDR-II 667 speeds. You'll also see 6-8 DIMMs on affordable solutions, not only on high-end servers. That is new :-)
  • Beenthere - Monday, March 13, 2006 - link

    SAS don't impress me none at this stage. Yes it's more reliable than SATA drives but almost anything is. Drive performance is virtually identical with SAS and SCSI 320. All I see is a lower manufacturing cost that hasn't been passed on yet.
  • ncage - Monday, March 13, 2006 - link

    Improving performance is not the whole point of SAS. SCSI 320 is already fast as it is. Heck, SCSI 160 is fast. Anyways, I digress. It's the ability to use SATA cables in a server, which is a big deal when you're dealing with a little 1U case. It's also the ability to mix and match SATA with SCSI, which for some data centers could dramatically save money. If you mixed SATA/SCSI, you could have a combination of performance/redundancy/cost all in one package. Granted, "critical" data centers will probably be all SCSI. I wouldn't advise eBay to put SATA drives on their servers :). You can't expect each revision of storage connection technology to provide better performance... sometimes it's not about performance at all.
  • Calin - Tuesday, March 14, 2006 - link

    There are enough servers that don't need hard drive performance, and will run anything mirrored in RAM. As a result, one could use the same boxes, only with different hard drives for different tasks. Makes everything simpler if you have a single basic box.
  • dougSF30 - Monday, March 13, 2006 - link

    Rev E DC Opteron TDPs have also always been 95W. The SC Rev E parts were 89W.

    http://www.amdcompare.com/us%2Den/opteron/Default....

    You can look up the Rev E Opteron parts at the above link.

  • dougSF30 - Monday, March 13, 2006 - link

    These are likely not the parts you see at 68W with Rev F, so again, power is not rising (it is actually falling with Rev F).

    There has been a 68W "blade TDP" point that Rev E Opterons have been sold at, in addition to the 55W and 30W points.

    So, I suspect you are simply seeing 95W and 68W TDP families for Rev F, just like Rev E. Rev F will allow for higher frequency parts within those families, in part due to a DDR2 controller taking less power than DDR1, in part due to SiGe strain being incorporated into the 90nm process.
