One constant in the storage business is that capacity per drive keeps increasing. Spinning hard-disk drives are fast approaching 20 TB, while solid-state drives range from 4 TB to 16 TB, or even more if you’re willing to entertain an exotic implementation. Today at the Data Centre World conference in London, I was quite surprised to hear that, due to risk management, we’re unlikely to see much demand for drives over 16 TB.

Speaking with a few individuals at the show about expanding capacities, I heard that storage customers who need high density are starting to specify maximum drive sizes based on their deployment needs. One message coming through is that storage deployments are managing risk through drive size – sure, a large-capacity drive allows for high density, but the failure of a large drive means a lot of data is lost in one go.

If we consider how data is used in the datacentre, there are several tiers based on how often the data is accessed. Long-term storage, known as cold storage, is accessed very infrequently and is populated with mechanical hard drives suited to long-term data retention. A large drive failure at this level might lose substantial archival data, or require long rebuild times. More regularly accessed storage, called nearline or warm storage, is accessed frequently but often acts as a localised cache of the long-term storage. For this case, imagine Netflix storing a good amount of its back-catalogue for users to access – losing a drive here means reaching back to colder storage, and rebuild times come into play. Hot storage, with constant read/write access, usually means DRAM or large database workloads with many operations per second. This is where a drive failure and rebuild can cause critical issues with server uptime and availability.

Ultimately, risk and downtime are a function of drive size and failure rate, and aside from engineering more reliable drives, the other variable for managing risk is drive size. 16 TB, based on the conversations I’ve had today, seems to be the inflection point: no-one wants to lose 16 TB of data in one go, regardless of how often it is accessed, or how much failover a storage array has built in.

I was told that, sure, drives above 16 TB do exist in the market, but aside from niche applications (where the risk is acceptable in exchange for higher density), volumes are low. This inflection point, one would imagine, is subject to change as the nature of data and data analytics changes over time. Samsung’s PM983 NF1 drive tops out at 16 TB, and while Intel has shown 8 TB samples of its long-ruler E1.L form factor, it has listed future QLC drives up to 32 TB. Of course, 16 TB per drive puts no limit on the number of drives per system – we have seen 1U units with 36 of these drives in the past, and Intel has been promoting up to 1 PB in a 1U form factor. It is worth noting that the market for 8 TB SATA SSDs is relatively small – no-one wants to rebuild that large a drive at 500 MB/s, which would take a minimum of 4.44 hours, dragging yearly server uptime down to 99.95% rather than the 99.999% metric (roughly 5m16 per year).
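
For the curious, here is a minimal sketch of that rebuild arithmetic in Python. The fixed 500 MB/s speed and the assumption that a full rebuild counts entirely as downtime are worst-case simplifications for illustration, not a model of any particular array.

```python
# Minimal sketch of the rebuild arithmetic above: streaming a full drive
# at a fixed ~500 MB/s SATA rebuild speed, and what that downtime does to
# a yearly uptime figure. Counting the whole rebuild as downtime is a
# deliberate worst-case simplification.

SECONDS_PER_YEAR = 365 * 24 * 3600

def rebuild_hours(capacity_tb: float, speed_mb_s: float = 500.0) -> float:
    """Hours to stream a full drive's capacity at a fixed rebuild speed."""
    return capacity_tb * 1_000_000 / speed_mb_s / 3600  # decimal TB -> MB

def yearly_uptime_pct(capacity_tb: float, speed_mb_s: float = 500.0) -> float:
    """Uptime percentage if one full rebuild per year stalls the server."""
    downtime_s = rebuild_hours(capacity_tb, speed_mb_s) * 3600
    return 100.0 * (1.0 - downtime_s / SECONDS_PER_YEAR)

for tb in (8, 16, 32):
    print(f"{tb:>2} TB: {rebuild_hours(tb):5.2f} h rebuild, "
          f"{yearly_uptime_pct(tb):7.3f}% uptime")
# 8 TB  ->  4.44 h, 99.949%  (the figures quoted above)
# 16 TB ->  8.89 h, 99.899%
# 32 TB -> 17.78 h, 99.797%
```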

Comments

  • occasional - Thursday, March 14, 2019

    No-one wants over 16TB ... FOR NOW
    Tech moves so fast.
    Before you know it, we'll have over 200TB SSDs as normal.
  • Xajel - Monday, March 18, 2019

    I would, if cost was low to justify having 2-3 drives for backups.
  • meh130 - Friday, March 22, 2019

    I think the larger SSDs have a strong use case in RAID arrays. There are several reasons for this. First, NVMe-attached SSDs present a complicated way to add capacity compared to SAS-connected SSDs. SAS-connected SSDs are easy to scale by adding SAS-connected drive enclosures, the same enclosures currently used for 10K SAS HDDs. NVMe drive enclosures would need to be attached either using PCIe cables or NVMe-over-Fabrics technology. All of the array vendors have settled on the latter approach, and it requires the drive enclosure to be a full-blown array with dual NVMe-over-Fabrics controllers fronting the SSDs. This significantly increases costs compared to an array with local NVMe-attached drives (which only requires additional PCIe switch ASICs to fan out the PCIe connections to the local drives).

    The result is an array with 24 local 16TB NVMe SSDs is much less expensive than an array with 24 local 8TB NVMe SSDs and 24 more 8TB NVMe SSDs in an NVMe over fabrics external enclosure.

    Rebuild times are a consideration, but declustered RAID technology can reduce that, as can triple-parity RAID (see the sketch after these comments).
  • vedru - Tuesday, October 29, 2019

    I believe this article is a little conservative and does not cover the real reasons behind the slow uptake of larger SSDs, or the possibilities there. Currently the largest mass-produced SSD is the Samsung PM1643, which goes up to 30.72 TB. The issue is that the price of the larger drives is still relatively high. However, rebuilding a RAID with 30 TB SSDs will likely take less time than rebuilding the same RAID with 12-16 TB HDDs, so the risk during a rebuild is smaller with the SSDs.
    Also, with HDDs, drive failures are more common and the overall run time is lower, whereas SSDs are limited by the number of writes. With chip development advancing, there are already storage systems available that put asymmetric load on the SSD drives to ensure they do not all reach the end of their write cycles at the same time. SSD drives are also composed of a number of chips which, with advances in firmware, can be managed so that if one chip in a drive fails, most or all of the data can still be read and copied to a replacement drive in the array without requiring a full rebuild. This would alleviate the fear of data loss in larger drives. I have no doubt that once price parity is reached between HDDs and SSDs, much larger SSDs will become common and current cold storage will be replaced by very large SSDs. It is just a matter of waiting for prices to compete with HDDs.
  • Bellfazair - Wednesday, September 30, 2020

    These kinds of articles are pointless. In 20 years, 16TB will be the new 16GB. 30 years ago my father paid $900 for a 40MB drive and said "No one will ever use this much space". The reality is that if the price were low enough, who wouldn't want 16TB in their laptop or PC? Hell, I'd like a 16TB drive in the new PS5. Given enough time, we will laugh at the absurdity that someone actually thought we wouldn't want a drive larger than 16TB. Just as people think Watson, the president of IBM from 1914 to 1956, was absurd when he said he thought there was a world market "for maybe five computers".
  • Jake5554 - Sunday, November 28, 2021

    Wrong. I have been waiting for an internal 16TB to come out for the last year or so, and I am getting impatient.
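
As a back-of-the-envelope illustration of meh130's declustered RAID point above: in a traditional rebuild, the single spare drive is the bottleneck, while a declustered layout spreads the rebuild work across every surviving drive. The 30.72 TB capacity (vedru's PM1643 example), the 500 MB/s per-drive speed, and the 24-bay array below are illustrative assumptions, not measurements of any particular product.

```python
# Back-of-the-envelope comparison of rebuild times: a traditional RAID
# rebuild bottlenecked by one spare drive versus a declustered layout
# where all surviving drives share the rebuild work. All numbers here
# are illustrative assumptions, not vendor specifications.

def classic_rebuild_hours(capacity_tb: float, spare_mb_s: float) -> float:
    """Traditional RAID: one spare drive absorbs the entire rebuild."""
    return capacity_tb * 1_000_000 / spare_mb_s / 3600

def declustered_rebuild_hours(capacity_tb: float, per_drive_mb_s: float,
                              survivors: int) -> float:
    """Declustered RAID: rebuild bandwidth scales with surviving drives."""
    return capacity_tb * 1_000_000 / (per_drive_mb_s * survivors) / 3600

# A 30.72 TB SSD at 500 MB/s in a 24-bay array (23 survivors):
print(f"classic:     {classic_rebuild_hours(30.72, 500):.1f} h")          # ~17.1 h
print(f"declustered: {declustered_rebuild_hours(30.72, 500, 23):.1f} h")  # ~0.7 h
```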
