There's a bunch of cool stuff happening at this year's Supercomputing conference (SC12) in Salt Lake City, Utah. For starters, we'll get to see the updated list of the world's fastest supercomputers (and where Titan ranks on it). Intel will also be making a few very big announcements at the show, among them the official unveiling of its SSD DC S3700. We've already looked at the drive's architecture and reviewed it here, but Intel invited us to participate in a live webcast from the SC12 show floor.

The topic of the webcast will be Intel's SSD DC S3700. You'll have the opportunity to have your questions about the drive, and the SSD space in general, answered live from the show by me and Roger Peene of Intel's Data Center SSD Solutions group. Given the venue (SC12 is a supercomputing show, after all), try to keep a mostly enterprise focus (client or server), but I'm game to answer architectural questions and bigger-picture items as well.

If you want to have your question answered on the webcast, leave it in the comments below. I can't guarantee we'll get to all of them, but we'll try. The webcast goes live at 9AM Pacific/12PM Eastern on November 13; you can view it live here.


  • dave_rosenthal - Friday, November 09, 2012 - link

    1) Over, say, the next three years, which aspects of SSD performance do you see advancing the most, and which the least?

    2) The realistic way for an application to use NAND today is through an abstraction that makes it look like the flat, contiguous partition of its rotational-disk ancestor. Do you think that as SSDs get faster and applications use them in more sophisticated ways, the efficiency of this abstraction will hold up? Or will there be a real need to change the abstraction, for example to include things like atomic multi-block writes?
    Reply
  • James5mith - Friday, November 09, 2012 - link

    As much as I love SATA drives, in the business space SAS makes a huge difference. The fact that it is full-duplex, versus SATA's half-duplex, alone boosts what it can do. Are there any plans for Intel to release a native SAS-based solution using this controller or a derivative? Reply
  • NandFlashGuy - Friday, November 09, 2012 - link

    No, but the same technology should appear in next-generation SAS drives from Hitachi GST, since Intel has a joint development program on SAS SSDs with Hitachi:
    http://www.intel.com/pressroom/archive/releases/20...

    Hitachi's next SAS SSD will also be utilizing the latest 12-Gbit SAS interface, which was demonstrated earlier this year:
    http://www.tomshardware.com/news/Hitachi-Storage-s...
    Reply
  • Kevin G - Friday, November 09, 2012 - link

    Have you spotted any of the new Itanium 9500 series chips? The press release is out on Intel's site now. (For the curious: http://newsroom.intel.com/community/intel_newsroom... )

    Any chance of bringing one back for some testing?
    Reply
  • mckirkus - Friday, November 09, 2012 - link

    "reviewed it here"

    Should link here, assuming Intel has green lighted it this time ;)
    http://www.anandtech.com/show/6432/the-intel-ssd-d...
    Reply
  • TemjinGold - Friday, November 09, 2012 - link

    Do you have any plans to introduce a consumer-level drive with these new technologies? Reply
  • Shadowmaster625 - Friday, November 09, 2012 - link

    If I open 32 large files and append 32 bytes to the end of each file, and then close the files, how much NAND actually gets written in that process? Is it just 32 bytes x 32 (<1 page)? Or do 32 pages get written? Or do many hundreds or even thousands of pages get written? Do these writes get queued up so that only one page gets written? That's what I think the controller is doing, but it would be nice to know for sure. Reply
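For reference, the workload described in the question above could be sketched as follows. This is a minimal illustration of the access pattern only; the file names, sizes, and record contents are made up, and it says nothing about what the drive does underneath.

```python
import os
import tempfile

# Hypothetical sketch of the workload in question: 32 files, with
# 32 bytes appended to the end of each. All names/sizes are placeholders.
with tempfile.TemporaryDirectory() as d:
    paths = [os.path.join(d, f"file_{i:02d}.bin") for i in range(32)]

    # Create the "large" files first so the appends extend them.
    for p in paths:
        with open(p, "wb") as f:
            f.write(b"\x00" * 4096)

    # The append pattern asked about: 32 bytes at the tail of each file.
    for p in paths:
        with open(p, "ab") as f:
            f.write(b"\x01" * 32)

    # Each append dirties at least one filesystem block, so absent
    # coalescing by the OS or the controller, the drive sees 32
    # separate small writes rather than one combined write.
    sizes = [os.path.getsize(p) for p in paths]
```

Whether those 32 small writes become one NAND page, 32 pages, or more depends on the filesystem's block size and the controller's write-combining behavior, which is exactly what the question asks.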
  • extide - Friday, November 09, 2012 - link

    That depends largely on the application you are using to edit. It may write out all the bytes, or only the changed ones. Reply
  • stepz - Friday, November 09, 2012 - link

    Did they do any specific optimizations for databases? Specifically, how do they handle transaction logs, which generate large numbers of small sequential writes that need to be persistent? Is a battery-backed write cache still advisable for such workloads, or can the onboard super-capacitors handle it by doing write combining? Reply
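The transaction-log pattern asked about above can be sketched as follows: many small sequential records, each forced to stable storage with an fsync before the transaction is acknowledged. The record size and count here are made up for illustration.

```python
import os
import tempfile

# Hypothetical sketch of a transaction-log write pattern: small
# sequential records, each followed by a durability barrier.
fd, path = tempfile.mkstemp()
try:
    record = b"x" * 512  # stand-in for one log record
    for _ in range(100):
        os.write(fd, record)
        os.fsync(fd)  # force the record to stable storage
    size = os.fstat(fd).st_size
finally:
    os.close(fd)
    os.unlink(path)
```

This pattern is why the capacitor question matters: on a drive with power-loss-protected caching, each fsync can complete once data reaches the drive's cache rather than the NAND itself, which is what a battery-backed RAID cache otherwise provides.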
  • DukeN - Friday, November 09, 2012 - link

    Is there a concern with using these drives in a RAID array that is constantly being written to?

    Since these drives will be in an array most of the time, am I correct in assuming that wear-leveling doesn't work as usual? And does this reduce the drives' life expectancy?

    Any plans to bring TRIM to RAID arrays/controllers in the future?
    Reply
