Tyan is also after the GPU server market. The 4U FT72-B7015 barebone accepts two Xeon 5600s and up to 18 DIMMs, but that's not what sets it apart. That distinction goes to the ten PCIe x16 slots and the special backplane with its many 4-pin power cables: the barebone can support up to eight (!) GPU cards.

To feed those power-hungry beasts, a 2+1 redundant 1200W power supply configuration is available.

All this PCI Express goodness is made possible by two Tylersburg (Intel 5520) chipsets and four PLX PEX8647 PCIe switches. 
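
To put that in perspective, here is a minimal back-of-the-envelope lane budget. The 36-lanes-per-IOH figure is the standard Intel 5520 spec, but the slot wiring below is an illustrative assumption, not Tyan's documented topology.

    # Rough PCIe lane budget for the FT72-B7015 (a sketch; assumptions noted above)
    LANES_PER_IOH = 36      # PCIe 2.0 lanes per Intel 5520 (Tylersburg) IOH
    IOH_COUNT = 2
    X16_SLOTS = 10          # physical x16 slots on the barebone
    LANES_PER_X16 = 16

    native_lanes = IOH_COUNT * LANES_PER_IOH        # 72 lanes straight from the chipsets
    lanes_if_all_full = X16_SLOTS * LANES_PER_X16   # 160 lanes if every slot ran at full x16

    print(native_lanes, lanes_if_all_full)
    # 72 < 160: the four PEX8647 switches multiplex the native links across the slots,
    # which is how eight GPUs can hang off just two chipsets.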

It seems that the fight in this niche market is going to be fierce as other players like ASUS are also offering such server products.

 

17 Comments


  • rahvin - Monday, March 14, 2011 - link

    And why exactly would you even want Tegra (regardless of version) in a server? It brings nothing to a server. Atom has a use: low power, x86 compatibility, and it's ideal for low-use, high-IO workloads. What niche would Tegra serve in a rackmount server?
  • zhill - Monday, March 14, 2011 - link

    Good discussion of low-power servers and where they make sense and don't from James Hamilton at Amazon Web Services (this is his personal blog, not official Amazon):
    http://perspectives.mvdirona.com/2010/05/18/WhenVe...

    "Where very low-power, low-cost servers win is:
    1. Very cold storage workloads....The core challenge with cold storage apps is that overall system cost is dominated by disk but the disk needs to be attached to a server. We have to amortize the cost of the server over the attached disk storage. The more disk we attach to a single server, the lower the cost. But, the more disk we attach to a single server, the larger the failure zone. Nobody wants to have to move 64 to 128 TB every time a server fails. The tension is more disk to server ratio drives down costs but explodes the negative impact of server failures. So, if we have a choice of more disks to a given server or, instead, to use a smaller, cheaper server, the conclusion is clear. Smaller wins. This is a wonderful example of where low-power servers are a win.
    2. Workloads with good scaling characteristics and non-significant local resource requirements. Web workloads that just accept connections and dispatch can run well on these processors. However, we still need to consider the “and non-significant local resource” clause. If the workload scales perfectly but each interaction needs access to very large memories for example, it may be poor choice for Wimpy nodes. If the workload scales with CPU and local resources are small, Wimpy nodes are a win."
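    The amortization point in (1) is easy to see with a toy calculation; every figure below is hypothetical, chosen only to illustrate the trade-off (these are not AWS numbers):

        # Toy cost vs. failure-zone trade-off for cold storage (all numbers hypothetical)
        DISK_COST_USD, DISK_TB = 100.0, 2.0      # assumed per-disk cost and capacity

        def cost_per_tb(server_cost_usd, disks):
            # Server cost is amortized over the attached disk capacity.
            return (server_cost_usd + disks * DISK_COST_USD) / (disks * DISK_TB)

        big, small = 2000.0, 300.0               # assumed beefy vs. wimpy server cost

        # More disks behind the same big server drives $/TB down, but the failure zone grows:
        print(cost_per_tb(big, 24), 24 * DISK_TB)    # ~91.7 $/TB, 48 TB per failed server
        print(cost_per_tb(big, 64), 64 * DISK_TB)    # ~65.6 $/TB, 128 TB to re-replicate
        # A small, cheap server reaches a similar $/TB with far fewer disks behind it:
        print(cost_per_tb(small, 12), 12 * DISK_TB)  # ~62.5 $/TB, only 24 TB per failure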
  • rahvin - Monday, March 14, 2011 - link

    The other key thing is that there are a few circumstances that make virtualization a very bad choice. Virtual servers handle heavy IO very poorly, with significant latency, even if the individual workloads are terribly small. This is an area where the Atom servers are big sellers: negligible power load, small form factor (1U, 18" deep) and full dedicated hardware, even if it's crappy hardware. One of the models comes with two mini-ITX Atom boards in the same 1U, and they each have their own hard drive.

    This is a niche market, but based on Supermicro producing several models, I bet they are selling quite a few. I'd be curious to find out whether Supermicro would reveal how many of these Atom servers they sell. I know there is quite a bit of discussion out there about these things being the perfect custom firewall (one of those heavy-IO loads) on a corporate network.
  • mino - Monday, March 14, 2011 - link

    "However, it seems that the HTX slot is at the end of its lifetime. The upcoming Xeons seem to come with a PCI-express 3.0 controller integrated, so they should be able to offer a low latency interface of up to 12.8 GB/s, or twice as much."

    AFAIR HyperTransport has, thanks to the communication protocol itself, about half the latency of PCIe, regardless of the physical layer.
  • JMC2000 - Tuesday, March 15, 2011 - link

    That's what I didn't understand. HT3.0 has lower latency and higher bandwidth than PCI-E 3.0, so how can it be near the end of its life? HTX3.0 has been in use ever since the launch of C32/G34.
  • jcandle - Wednesday, March 16, 2011 - link

    Actually, the article is quite correct. There are a limited number of HTX expansion cards produced, and since most of these cards are designed for specialized tasks, there's a considerably small market for them to begin with. It's easier to kill off HTX in favor of PCI-E, where the same card can be used in both Intel and AMD platforms. With PCI-E 3.0, HTX is essentially dead. Now, that doesn't mean HT is dead; only HTX. Even now, with Infiniband cards, there are far better optimizations to be made on the software side to increase performance than eking out the very last drop of latency from HTX over PCI-E.
  • jcandle - Wednesday, March 16, 2011 - link

    Anyone notice the incorrect specs? According to their datasheet, the 6F+ has only two slots: the HTX and one PCI-E. If it's like the usual Supermicro boards, the remaining slots should be unpopulated. The 6F, as it appears, has only 68 PCI-E lanes, not the 80 indicated in the article (a quick tally is sketched after the slot lists below). Five full-lane PCI-E x16 slots in anything less than a 4U/workstation form factor is considerably more rare.

    H8QGL-6F/H8QGL-iF:
    3 PCI-Express 2.0 x16
    2 PCI-Express 2.0 x8 (using x16 slot)
    1 PCI-Express 2.0 x4 (using x16 slot)

    H8QGL-6F+/H8QGL-iF+:
    1 HyperTransport slot
    1 PCI-Express 2.0 x16
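    A quick tally of the slot lists above bears out the 68-lane figure (just arithmetic on the quoted numbers):

        # Lane totals for the slot lists above
        h8qgl_6f_lanes = 3 * 16 + 2 * 8 + 1 * 4   # = 68 PCI-E lanes on the H8QGL-6F/-iF
        h8qgl_6f_plus_lanes = 1 * 16              # = 16 PCI-E lanes on the -6F+/-iF+, plus the HTX slot
        print(h8qgl_6f_lanes, h8qgl_6f_plus_lanes)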
