  • PCTC2 - Thursday, January 17, 2013 - link

    It is very reminiscent of the shape of Intel/Quanta's boards.

    On a side note, the last link has a %20 at the front of the http.
  • JarredWalton - Thursday, January 17, 2013 - link

    Fixed link - thanks!
  • speculatrix - Thursday, January 17, 2013 - link

    An open blade standard would be great: being able to buy blades and chassis from multiple vendors, you could mix some high-efficiency, cheap-to-run ARM A15 blades with some high-end 56xx-processor blades. The network arrangement would allow easy upgrades from low-cost 10/100/1000 to CX4, SFP+, etc.

    Of course, the fans would also have to be modular so as to right-size the cooling and be upgradeable.
  • JohanAnandtech - Thursday, January 17, 2013 - link

    Not to mention that it would be great to get some real modular network switches. In most cases you can choose between overpriced and ridiculously overpriced switches.
  • Assimilator87 - Thursday, January 17, 2013 - link

    Is this standard limited to 2P? There are plenty of 2P boards that're EATX or SSI CEB, but when I was looking into a 4P Folding rig, the boards were substantially more expensive and they were all custom form factors. A standardized 4P layout would be awesome for DIYers.
  • Kevin G - Thursday, January 17, 2013 - link

    It would have been nice to have seen the location of the SATA ports closer to the front of the motherboard. Running cables over the motherboard and along the side of the chassis just doesn't seem to be optimal.

    It would have been even nicer if they created a standardized backplane for hot-swap usage that'd scale in terms of drive count while providing the basic front-panel IO (VGA, RS-232, power/reset buttons, etc.).

    I'm also curious whether there would be a four-socket G34 solution in this format. Dropping down to one DIMM per channel would, I think, create enough board space for the clever engineer. Four-socket LGA 1567 I think would be possible using risers for the DIMM slots, as several other LGA 1567 solutions do.
  • Beenthere - Thursday, January 17, 2013 - link

    As usual AMD is customer-centric and not the bully that InHell is. Good job by AMD as they have delivered again, unlike the unscrupulous talking heads who have been convicted umpteen times for violations of anti-trust laws and several times for U.S. tax evasion.
  • phoenix_rizzen - Thursday, January 17, 2013 - link

    The 3U mobo, with 24 DIMMs, dual CPUs, and 4 low-profile SAS controllers (like the LSI 9207-8e) stuffed into a 2U chassis with a couple of SSDs installed locally, would make a great head unit to a storage server. Connect a bunch of JBOD chassis stuffed with drives to the SAS controllers, and away you go.

    Don't know why they would limit the number of PCIe slots in the 2U chassis. There's plenty of half-height/half-length cards out there.
  • Ktracho - Thursday, January 17, 2013 - link

    Nowadays I don't think the HPC market would want only high-powered CPUs in their power-efficient compute servers. You can dramatically increase performance per watt by adding GPUs. Do a couple GPUs fit horizontally, especially if you have just one power supply?
  • JohanAnandtech - Friday, January 18, 2013 - link

    I am not a market researcher, but I do think that the non-GPU HPC market is still considerably larger than the GPU-enabled HPC market. There are thousands of smaller HPC apps whose developers are probably not going to make the investment to redesign their apps for GPUs, and even large companies like Ansys have only a few apps that are GPU accelerated (Fluent is; LS-DYNA is not, AFAIK).