One of the defining trends of the storage business is ever-increasing capacity per drive. Spinning hard-disk drives will soon reach 20 TB, while solid-state drives range from 4 TB to 16 TB, or even more if you’re willing to entertain an exotic implementation. Today at the Data Centre World conference in London, I was quite surprised to hear that, due to managed risk, we’re unlikely to see much demand for drives over 16 TB.

Speaking with a few individuals at the show about expanding capacities, I learned that storage customers who need high density are starting to specify maximum drive sizes based on their implementation needs. One message coming through is that storage deployments are managing risk through drive size: sure, a large-capacity drive allows for high density, but when a large drive fails, a lot of data is lost in one go.

If we consider how data is used in the datacentre, there are several tiers based on how often the data is accessed. Long-term storage, known as cold storage, is accessed very infrequently and is served by mechanical hard drives offering long-term data retention. A large drive failure at this level might lose substantial archival data, or require long rebuild times. More regularly accessed storage, called nearline or warm storage, is accessed frequently but often acts as a localised cache in front of the long-term storage. For this case, imagine Netflix holding a good chunk of its back catalogue for users to access: the loss of a drive here means falling back to colder storage, and rebuild times come into play. Hot storage, which sees constant read/write access, typically involves DRAM or large database workloads with many operations per second. This is where a drive failure and rebuild can cause critical issues with server uptime and availability.

Ultimately, drive size combined with failure rate translates into risk and downtime, and aside from engineering more reliable drives, the other variable for risk management is drive size. Based on the conversations I’ve had today, 16 TB seems to be the inflection point: no-one wants to lose 16 TB of data in one go, regardless of how often it is accessed or how much failover redundancy a storage array provides.

I was told that, sure, drives above 16 TB do exist in the market, but aside from niche applications (where risk is an acceptable trade-off for higher density), volumes are low. This inflection point, one would imagine, is subject to change as the nature of data and data analytics changes over time. Samsung’s PM983 NF1 drive tops out at 16 TB, and while Intel has shown 8 TB samples of its long-ruler E1.L form factor, it has listed future QLC drives up to 32 TB. Of course, 16 TB per drive puts no limit on the number of drives per system – we have seen 1U units with 36 of these drives in the past, and Intel has been promoting up to 1 PB in a 1U form factor. It is worth noting that the market for 8 TB SATA SSDs is relatively small – no-one wants to rebuild that large a drive at 500 MB/s, which would take a minimum of 4.44 hours, dragging server uptime down to 99.95% rather than the 99.999% metric (roughly 5 minutes 15 seconds of downtime per year).
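The back-of-the-envelope arithmetic above can be sketched in a few lines. This is purely illustrative (not any vendor's tool), and it assumes a best-case sequential rebuild running at the drive's full 500 MB/s interface speed with no other overhead:

```python
def rebuild_hours(capacity_tb: float, throughput_mb_s: float) -> float:
    """Minimum time to rewrite a full drive, in hours (decimal TB/MB)."""
    capacity_mb = capacity_tb * 1_000_000  # 1 TB = 1,000,000 MB
    return capacity_mb / throughput_mb_s / 3600


def uptime_after_downtime(downtime_hours: float) -> float:
    """Yearly uptime percentage if that downtime occurs once per year."""
    hours_per_year = 365 * 24
    return 100 * (1 - downtime_hours / hours_per_year)


t = rebuild_hours(8, 500)  # the 8 TB SATA SSD case from the article
print(f"rebuild time: {t:.2f} h")            # ~4.44 h
print(f"yearly uptime: {uptime_after_downtime(t):.3f} %")  # ~99.95 %
```

Scaling the same calculation to a 16 TB drive doubles the minimum rebuild window to nearly nine hours, which is the risk the article's sources are weighing.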

85 Comments

  • rahvin - Wednesday, March 13, 2019 - link

    Just remember, there is always someone older out there.

    The first hard drive I ever purchased was 80 megabytes, top of the line at the time, around $380, and one of the first IDE drives they produced, IIRC.
  • PeachNCream - Wednesday, March 13, 2019 - link

    I'm not sure why that's worth bragging about, but that age of yours is catching up with you if you read that as the 1GB drive being my first, rather than just one I happened to be installing in a customer's new PC. It's okay though; Anandtech's comments section is indeed a competitive zone in which it is necessary to bash our nerd junk against one another. I feel no need to compete with you on personal experience, but as it stands, my history with computing by no means started in the mid-90s. I just happened to be part owner of a computer store at that point. The rest you can guess about or make up with whatever helps you sleep better at night.
  • Opencg - Wednesday, March 13, 2019 - link

    Well, my e-penis remembers when 1KB was huge. World War 2. Suck it.
  • PeachNCream - Wednesday, March 13, 2019 - link

    I'm horrified and impressed at the same time because my epeen clearly doesn't measure up at this point.
  • sorten - Thursday, March 14, 2019 - link

    What about WWII? If you've been "computing" since WWII and you were 15 years old at the end of that war, that means you're 89 +- 1 years old. Do you need a nap?
  • bigvlada - Thursday, March 14, 2019 - link

    That's what you could get in Europe in 1980/1981. The Sinclair ZX81 had one kilobyte of RAM (and the most horrible keyboard in this part of the galaxy) and cost somewhere around 70 British pounds at the time. There were programs and games for that machine, but the process usually consisted of the user retyping the whole program from a book or magazine (hard mode: the magazine was in German :) ) before storing it on an audio tape. The great 8-bit rivalry (Sinclair ZX Spectrum vs Commodore C64) was still at least a year away.
  • Anonymous Blowhard - Thursday, March 14, 2019 - link

    My first hard drive was a 10MB MFM unit on an IBM XT.

    Now get off my lawn.
  • FunBunny2 - Thursday, March 14, 2019 - link

    "My first hard drive was a 10MB MFM unit on an IBM XT."

    If memory serves, that thingee was about the size of an artisan loaf of bread and weighed about 10 lbs. And my lawn, too.
  • abufrejoval - Thursday, March 14, 2019 - link

    No, much smaller than an 8" floppy disk drive.

    They were actually the same size as a 5 1/4" floppy drive in the PC. I worked on one that was an upgraded PC (not an XT), so it only had a 5MB Winchester drive.

    It also eventually had one of those really impressive Hercules graphics cards: super-high-resolution 720x400 monochrome on those wonderful green, slow-phosphor, maskless tube displays that were so easy on the eyes.
  • abufrejoval - Thursday, March 14, 2019 - link

    I think this site is more about starting early than being old.

    That said, I regularly swapped 5MB RK05 and 10MB DL10 disks on a PDP-11/34 with 64KB of magnetic core memory, which we eventually upgraded to 256KB of this fancy new DRAM, which unfortunately lost its contents when you switched off the power. That meant we either had to leave it on during the night or "reboot" in the morning; before that, we'd just put the box on powerless standby and resume the next morning.

    We called all hard disks "Winchester drives" back then, by the way, and only swapped the media, not the head assembly.

    And transporting data meant riding trains with reel tapes in your backpack, because punched cards were too bulky.

    That job paid for my first own computer, an Apple ][, and the last Apple product I ever purchased (well, actually it was a clone, with 48KB of RAM and lower-case letters!). It also featured removable media at 140KB per disk side (you could cut out a write-protect notch and then also use the reverse side).

    But when I benchmarked my first FusionIO SSD in January 2008 my eyes glazed over almost like they did when we compared access times of fixed head drums against moving head disks.

    Since I still have to work for one or two decades I cannot afford to be old.
