There's a bunch of cool stuff happening at this year's Supercomputing conference in Salt Lake City, Utah. For starters, we'll get to see the updated list of the world's fastest supercomputers (and we'll get to see where Titan ranks on that list). Intel will also be making a few very big announcements at the show, among them the official unveiling of its SSD DC S3700. We've already looked at the drive's architecture and reviewed it here, and Intel has invited us to participate in a live webcast from the SC12 show floor.

The topic of the webcast will be Intel's SSD DC S3700. You'll have the opportunity to have your questions about the drive and/or the SSD space in general answered live from the show by me and Roger Peene of Intel's Data Center SSD Solutions group. Given the venue (SC12 is a supercomputing show, after all), try to stick to a mostly enterprise focus (client or server), but I'll be game for answering architectural questions or bigger-picture items as well.

If you want to have your question answered on the webcast, respond to this post with your question in the comments area. I can't guarantee that we'll get to all of them but we'll try. The webcast will be live at 9AM Pacific/12PM Eastern on November 13. You can view it live here.




  • mayankleoboy1 - Friday, November 9, 2012 - link

    Is there any possible way to increase SSD speeds for low-QD usage, typical of consumer SSDs? Or are SSDs going the way of x86: best speedups for parallel loads only?
  • Jaaap - Friday, November 9, 2012 - link

    According to Anand,
    Big multi-user (or virtualized) enterprise workloads almost always look fully random at a distance, ...

    Is it possible to enhance VT-d to mitigate this "problem"?
  • extide - Friday, November 9, 2012 - link

    Not really, there is nothing you can do. When you combine the disk IO from several machines all together, it is going to inherently be pretty random, as in reads and writes all over the place. That's just the nature of the beast.
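The "looks fully random at a distance" effect is easy to reproduce: interleave a few perfectly sequential per-VM access streams and measure how often consecutive accesses in the merged trace are actually adjacent. A toy sketch (the VM count, LBA layout, and 4KB block size are illustrative assumptions, not a real trace):

```python
import random

# Toy model: several VMs each issue perfectly sequential reads, but the
# hypervisor interleaves them onto one shared drive.
def merged_trace(num_vms=8, ios_per_vm=1000, seed=0):
    rng = random.Random(seed)
    # Each VM reads sequentially within its own disjoint LBA region.
    cursors = {vm: vm * 1_000_000 for vm in range(num_vms)}
    remaining = {vm: ios_per_vm for vm in range(num_vms)}
    trace = []
    while any(remaining.values()):
        vm = rng.choice([v for v, n in remaining.items() if n])
        trace.append(cursors[vm])
        cursors[vm] += 8          # next 4KB block (8 x 512B sectors)
        remaining[vm] -= 1
    return trace

def sequential_fraction(trace):
    # Fraction of accesses that immediately follow the previous LBA.
    seq = sum(1 for a, b in zip(trace, trace[1:]) if b == a + 8)
    return seq / (len(trace) - 1)

print(sequential_fraction(merged_trace(num_vms=1)))  # one VM: 1.0, fully sequential
print(sequential_fraction(merged_trace(num_vms=8)))  # eight VMs: mostly random at the drive
```

With one VM the merged trace is the VM's own sequential stream; with eight, the chance that two consecutive accesses belong to the same stream drops to roughly one in eight, so the drive sees what is effectively a random workload.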
  • mayankleoboy1 - Friday, November 9, 2012 - link

    Apart from the SATA3 interface, what is the biggest roadblock to increasing SSD speeds?
  • Kevin G - Sunday, November 11, 2012 - link

    Looking at some native PCIe SSDs (the Micron P320h in particular), you can see SSDs scale really high under certain workloads. The interface is always going to be a bottleneck with any decent controller and enough channels.

    The next logical place to put an SSD controller would be into an SoC, which is already commonplace in the embedded space. I suspect that x86 SoCs for laptops and possibly some desktops will start to incorporate an SSD controller on-die to reduce the number of components on a board as well as lower power consumption.

    The thing I'm watching is performance under specific queue depths. Consumer workloads don't reach the high queue depths required to make top-of-the-line SSDs shine. With the Micron P320h as an example, it didn't hit its stride until a QD of 256 was tested. Ultimately, what needs to happen in the consumer space for SSD performance to improve is a refinement of storage drivers and better file systems. I strongly suspect that there are optimizations there that can lead to improved performance for all SSDs, but it may require a full break away from hard drives. Economics don't favor such development yet, since the capacity/price balance still favors hard drives.
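The queue-depth point follows from Little's law: sustained IOPS is bounded by the number of outstanding requests divided by per-IO latency, so a QD-1 workload can't keep a many-channel controller busy no matter how fast it is. A rough sketch (the ~50µs read latency is an assumed figure for illustration, not a measured one):

```python
# Little's law: achievable IOPS <= outstanding requests / per-request latency.
def max_iops(queue_depth, latency_s):
    """Upper bound on IOPS for a given queue depth and per-IO latency."""
    return queue_depth / latency_s

latency = 50e-6  # assumed ~50us NAND read latency (illustrative)
print(max_iops(1, latency))    # QD=1: capped at 20000.0 IOPS, most channels idle
print(max_iops(256, latency))  # QD=256: enough parallelism to saturate the channels
```

This is why a drive like the P320h only "hits its stride" at very high queue depths: the deep queue is what lets the controller spread concurrent requests across all its NAND channels.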
  • FunBunny2 - Friday, November 9, 2012 - link

    All the independent drive vendors have had bad numbers the last few quarters. Enterprise is supplied by niche (mostly private) companies. The X25-E didn't go over so hot in enterprise. Sun/Oracle released a flash appliance years ago, and Violin has one now. It appears that IBM is headed that way, too. So: does the SSD form factor have a future? Or will Linus be proven right:

    ... but Flash-based storage has such a different performance profile from rotating media, that I suspect that it will end up having a large impact on filesystem design. Right now, most filesystems tend to be designed with the latencies of rotating media in mind.
    -- Linus Torvalds, 2007
  • eanazag - Friday, November 9, 2012 - link

    What are the supported and unsupported RAID configurations for this drive? 5, 50, 60, and the maximum number of drives. I assume 0, 1, and 10 work fine.
    Does TRIM pass through to these RAID modes yet, or is it simulated? Is TRIM still necessary for this drive?

    I'd like to see low-capacity drives (16 to 30 or 30 to 60 GB) with this controller for use as boot disks on a VM server; any chance? I won't ever go lower than 16 GB. These low-capacity drives would likely just see RAID 1.
  • Hulk - Friday, November 9, 2012 - link

    Are there any software and/or hardware technologies on the horizon that will help mitigate SSD endurance issues as process sizes continue to decrease?
  • Kougar - Friday, November 9, 2012 - link

    My question is basically the same point Duke already raised.

    Intel has already modified its drivers to allow RAID 0 to pass TRIM commands on 7-series chipsets. Given that the DC S3700's primary design goal is consistent IO performance and that it targets data center customers, it would make sense to include TRIM pass-through with RAID support. Will Intel be working to enable this (either with its own chipsets, or possibly with other vendors as well)?

    Secondly, does Intel assume TRIM is in place when it calculates the endurance of the SSD? Since TRIM affects write amplification, I would assume it has some impact on the drive longevity figures. Or would that impact be inconsequential with good idle garbage collection?
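The tradeoff behind this question can be made concrete with a back-of-the-envelope endurance model: rated lifetime scales inversely with write amplification, so if TRIM (or good idle garbage collection) cuts WA substantially, the longevity figures move accordingly. All numbers below are illustrative assumptions, not Intel's rated specs:

```python
# Toy endurance model: lifetime (days) = NAND write budget / daily NAND writes.
# Capacity, P/E cycles, WA, and workload figures are illustrative assumptions.
def endurance_days(capacity_gb, pe_cycles, write_amp, host_gb_per_day):
    nand_write_budget_gb = capacity_gb * pe_cycles  # total writes the cells can absorb
    nand_gb_per_day = write_amp * host_gb_per_day   # what host writes cost in NAND writes
    return nand_write_budget_gb / nand_gb_per_day

# Same hypothetical drive and workload, differing only in write amplification:
print(endurance_days(800, 30000, 10, 8000))  # high WA (no TRIM, fragmented): 300.0 days
print(endurance_days(800, 30000, 2, 8000))   # low WA (TRIM / idle GC): 1500.0 days
```

In this sketch a 5x reduction in write amplification translates directly into a 5x longer rated lifetime, which is why the assumption baked into the endurance math matters.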
  • extide - Friday, November 9, 2012 - link

    What technology do you guys see most likely replacing NAND in the next several years? And approximately when do you think that will happen? What will this new technology bring that allows it to scale beyond NAND?
