
  • extide - Thursday, June 12, 2014 - link

    That's pretty sweet! So does this thing have virtualized storage and/or networking, or is it just a simple setup where each drive is mapped directly to a specific node? (I am talking about when using the 2x2.5" storage nodes.) Also, how is the networking done? Is there an internal switch? Very cool, though!
  • groundhogdaze - Thursday, June 12, 2014 - link

    Those nodes remind me of the old slot-type Pentiums. Remember those? That is some good server pr0n there.
  • MrSpadge - Thursday, June 12, 2014 - link

    Except that these are systems instead of CPU cards :)
  • creed3020 - Thursday, June 12, 2014 - link

    This is a very interesting product!

    I too would like to know how the storage nodes are made available to the compute nodes. The exciting part of this solution is the scalability. Additional nodes can be made available dynamically when compute needs grow, and potentially brought down when loads are lower. It should be very power efficient to utilize the nodes in such a fashion.

    The 2.5" form factor for the SSDs seems somewhat odd when you consider how the space could be better served by a PCIe-type form factor, but then Gigabyte would have to come up with some proprietary design, which breaks the freedom of dropping in whatever SSD you prefer or require.
  • Vepsa - Thursday, June 12, 2014 - link

    I'm guessing the engineering sample is 2.5" while final release will be mSATA based on the article. However, either would be fine for the usage these systems will get.
  • DanNeely - Thursday, June 12, 2014 - link

    I think you're confusing two different items. Each CPU module will have an mSATA connection for onboard storage. The enclosure also allows you to install 16 2x2.5" drive modules in place of CPU modules to increase the amount of on-chassis storage, if you need more than a single mSATA drive per CPU can offer. (It's not clear whether options other than 28 CPU + 16 SSD modules or 46 CPUs are possible, or whether different backplanes are used.)
  • DanNeely - Thursday, June 12, 2014 - link

    Probably for cost reasons / reusing existing designs. If they go with SATA/SAS they can recycle designs from existing high-density storage servers for the PCB layers carrying the connections to the drive modules; PCIe-based storage is new enough that they'd probably have to design something from scratch to implement it.
  • DanNeely - Thursday, June 12, 2014 - link

    The CPU and CPU + SSD module combinations don't add up to, or match, the total number of modules shown in the enclosure. 28 CPU + 16 storage gives 44 total modules vs. 46 for the CPU-only version. Also, I count 48 total modules in it: 18 in each of the two large rows and 12 in the smaller one.

    Is this an error in the article, or are 2/4 slots taken up by control hardware for the array as a whole?
  • Ian Cutress - Thursday, June 12, 2014 - link

    I've got the product guide booklet in front of me, and it states 28 CPU + 16 storage. Nice spot; I don't know what's happened to the other two.

    I concur with your count of 48, although both the first image I took and the product guide say 46. There are two off-color modules in the middle row, but I'm not sure what these are for. I'll fire off an email, but I know my GIGABYTE Server contact is on holiday this week. When I get a response I'll update this post.
  • Ian Cutress - Friday, June 13, 2014 - link

    I have an answer!

    "There are indeed 48 nodes, but two of them are occupied by traffic management controller nodes (the darker grey ones in the middle row) which must be there independently from the nodes configuration.

    For the storage configuration that's probably a typo, the correct one being 30+16 or 28+18."
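
    The arithmetic from the answer above can be sanity-checked in a few lines. This is purely illustrative; the row counts and the corrected 30+16 / 28+18 splits are taken from the comments in this thread:

    ```python
    # Bay counts as reported in the thread: two large rows of 18
    # and one smaller row of 12 bays in the enclosure.
    total_bays = 18 + 18 + 12          # 48 bays counted in the photo
    controller_nodes = 2               # traffic management controller nodes
    usable_nodes = total_bays - controller_nodes

    print(usable_nodes)                # 46 usable compute/storage bays

    # Both corrected storage configurations fill all 46 usable bays.
    for cpus, storage in [(30, 16), (28, 18)]:
        assert cpus + storage == usable_nodes

    # The originally printed 28 + 16 split leaves two bays unaccounted for.
    print(usable_nodes - (28 + 16))    # 2
    ```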
  • jwilkins - Saturday, October 18, 2014 - link

    Did this line get scrapped? I can't find anything about availability, or any info online that is dated after this show.