Two-Dimensional Magnetic Recording Due in 2017

Two-dimensional magnetic recording (TDMR) is yet another technology that should help increase areal density, and one that Seagate is investing in. The manufacturer believes that TDMR can increase areal density by 5% to 10%. Plans were announced several months ago, and during our conversation with Mark Re, he confirmed that Seagate was on track to release its first commercial TDMR-based HDDs in 2017.

TDMR technology enables makers of hard drives to increase the areal density of HDD platters by making tracks narrower and track pitches even smaller than they are today. While it is possible to shrink the writer (the part of an HDD's head that writes data), reading becomes a challenge: as magnetic tracks become narrower, they start to affect each other, an effect called inter-track interference (ITI). It becomes increasingly hard for HDD readers to perform read operations.

To mitigate the ITI effect of very narrow tracks, two-dimensional magnetic recording uses an array of heads to read data from either one or several nearby tracks (a method described in several scientific publications). This improves the signal-to-noise ratio delivered to the controller: multiple readers let the controller determine the correct data based on input from several locations, which in turn implies the need for powerful controllers. More importantly, an array of read heads will benefit HDDs featuring HAMR in the future: heat-assisted recording improves the write process, whereas multiple readers improve the read process.

We are also told that with the relevant programming, hard drives featuring an array of readers per head can deliver higher performance. This will clearly not make the new hard drives as fast as SSDs, but it will help Seagate's customers (particularly in the SAS space) to increase the performance of their storage devices. Right now, Seagate does not talk about plans to use multiple readers in commercial drives because such products are several years out, but it considers this a possibility.
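The joint-detection idea can be sketched as a toy model: each of two readers senses a weighted mix of the target track and its neighbor, and the controller inverts that mix to cancel the interference. All mixing weights and noise levels below are invented for illustration; real drives use far more sophisticated 2D detectors.

```python
# Toy model of multi-reader inter-track interference (ITI) cancellation.
import numpy as np

rng = np.random.default_rng(0)

# Random +/-1 bit sequences on the target track and an adjacent track.
target = rng.choice([-1.0, 1.0], size=1000)
neighbor = rng.choice([-1.0, 1.0], size=1000)

# Each reader senses a weighted mix of both tracks (ITI) plus noise.
# The mixing weights are assumed values for this sketch.
M = np.array([[1.0, 0.8],    # reader 1: mostly target, strong ITI
              [0.6, 1.0]])   # reader 2: mostly neighbor, strong ITI
noise = 0.15 * rng.standard_normal((2, 1000))
readback = M @ np.vstack([target, neighbor]) + noise

# A single reader must slice its raw signal directly...
single_err = np.mean(np.sign(readback[0]) != target)

# ...while the controller can jointly invert the mix from both readers.
recovered = np.linalg.solve(M, readback)
joint_err = np.mean(np.sign(recovered[0]) != target)

print(single_err, joint_err)  # joint detection makes fewer bit errors
```

The point is not the specific numbers but the structure: the second reader turns an unresolvable one-equation/two-unknowns problem into a solvable system, which is where the improved signal-to-noise ratio comes from.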

Seagate confirmed that TDMR lets HDD makers increase areal density by up to 10%, a noticeable amount compared to typical PMR platters. However, the additional capacity does not come free on the computing side: an array of readers increases the bandwidth requirements for the controller as well as the amount of information the chip needs to process. As a result, the whole TDMR platform becomes rather expensive: it features arrays of heads, new platters, new motors, as well as new controllers. This is why Seagate plans to use it for server applications first, sometime in early 2017. Seagate did not confirm whether such HDDs would use both TDMR and helium, but said that virtually all of its technologies could be mixed and matched to build the right solution for every possible application. Keep in mind that these are plans, which are subject to change.


  • mkozakewich - Thursday, July 7, 2016 - link

    Oh, I remember that article. Higher write temperatures mean better longevity, right?
  • twelvebore - Wednesday, July 6, 2016 - link

    Guessing that you don't buy storage by the petabyte then? You know, horizons and all that.
  • Ushio01 - Wednesday, July 6, 2016 - link

    With 2.5" SSDs available today offering lower power, higher capacity, higher performance and higher density than 3.5" HDDs, I wonder how much that offsets the higher cost per GB?
  • twelvebore - Wednesday, July 6, 2016 - link

    Higher capacity? A 10TB 2.5" SSD for <£500? Where?

    Lower power than an HDD that's powered off?

    Performance doesn't always matter.

    The article says several times, this is not about desktop. This is about data-centre, extreme capacity, price-sensitive. These HDDs are competing with magnetic tape, not SSD.
  • jwhannell - Wednesday, July 6, 2016 - link

    Flape.
  • patrickjp93 - Wednesday, July 6, 2016 - link

    Performance/Watt/$ is the most important metric, and HDD is already under immense pressure from archival SSDs.
  • patrickjp93 - Wednesday, July 6, 2016 - link

    For enterprise use that's a $2000 drive, unless you're using one without power loss protection and ECC... And Samsung already provides one.
  • Murloc - Wednesday, July 6, 2016 - link

    It doesn't offset the cost at all if the only thing that matters is $/GB.
  • amnesia0287 - Wednesday, July 6, 2016 - link

    You don't seem to understand how datacenters work. SSDs and modern JBOD infrastructure are changing the way this is approached. The thing you gotta realize is that you can pack SSDs INSANELY dense. Yes, the power difference of one SSD is minimal, but when you fill a rack with them, the combined power and cooling savings add up, especially if you are aiming for a minimum 2-3 year run cost.

    You also have to bear in mind that datacenters are pretty much exclusively using substantially more expensive (and hotter/louder) SAS drives.

    Capex is important, but you are totally ignoring Opex and TCO. Also, AFR is about 1/6th (0.5% vs 3%), which gives you more flexibility in your planning for consistency/redundancy. SSD failures are more or less predictable.

    Either way, the move to SSDs in the datacenter is VERY real, as density is king. Why waste money expanding into more datacenters and adding more racks? SSDs also solve a lot of problems that HDDs have, such as large array rebuilds.

    Also tech like RDMA combined with NVMe virtualization is going to fundamentally change the landscape.
  • zodiacfml - Friday, July 8, 2016 - link

    Correct. SSDs have higher density already, and rich companies can afford them.
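The capex-versus-opex argument in the comments above can be put into rough back-of-envelope form. Every figure below (drive prices, wattages, AFRs, electricity cost, cooling overhead) is an invented assumption for the sketch, not vendor data; change them and the conclusion changes with them.

```python
# Illustrative 3-year $/TB comparison for one rack of bulk storage.
# All inputs are assumptions, not real pricing.

def rack_tco_per_tb(drives, tb_per_drive, price_per_drive, watts_per_drive,
                    afr, years=3, power_cost_kwh=0.12, cooling_overhead=1.5):
    """Capex + energy (incl. cooling) + expected failure replacements, per TB."""
    capex = drives * price_per_drive
    kwh = drives * watts_per_drive / 1000 * 24 * 365 * years
    energy = kwh * power_cost_kwh * cooling_overhead
    replacements = drives * afr * years * price_per_drive
    return (capex + energy + replacements) / (drives * tb_per_drive)

hdd = rack_tco_per_tb(drives=500, tb_per_drive=10, price_per_drive=400,
                      watts_per_drive=8, afr=0.03)
ssd = rack_tco_per_tb(drives=500, tb_per_drive=8, price_per_drive=2000,
                      watts_per_drive=3, afr=0.005)
print(f"HDD ${hdd:.0f}/TB vs SSD ${ssd:.0f}/TB over 3 years")
```

With these particular assumptions the SSD's lower power and failure rate does not overcome its capex gap, which is the crux of the disagreement in the thread: the answer hinges entirely on the $/GB inputs and the weight given to density and rebuild behavior.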
