It's not just planar NAND that's running into physical limits lately. According to Seagate, its latest 1TB platter 3.5" drives have shrunk read/write heads as small as they can physically go. Similarly, tracks on those platters are placed as close together as physically possible. Pushing areal density is important to increase overall capacities (no one wants to see more platters per drive), but if we're at physical limits today then it's time for some architectural changes to push capacities going forward.

Seagate's solution is something it calls Shingled Magnetic Recording (SMR). The concept is pretty simple. Track width is traditionally defined by the size of the write head, which is larger than the read head. From the perspective of reading data back, the track is wider than necessary, in order to decrease the chances of picking up data from adjacent tracks. Seagate's SMR exploits this reality.

SMR shrinks the guard space between tracks and allows tracks to overlap one another, like roofing shingles. Although data is written to the entire width of the track, a smaller/trimmed portion of the track (the width of the read head) is all that the drive cares about. By allowing tracks to overlap, areal density can continue to scale without further shrinking the size of the heads. 

The obvious downside of SMR is actually very NAND-flash-like. When writing data sequentially to an empty platter, SMR is all advantages. When writing to a series of tracks that already contain data, however, the SMR write process is actually destructive. Since the writer remains full width and tracks now overlap, overwriting one track will damage the track shingled after it; those subsequent tracks then need to be rewritten as well.

Seagate's SMR groups tracks into bands, with the end of each band breaking the shingled track layout. Breaking the shingled layout regularly reduces max attainable density, but it makes it so that overwriting a portion of one track doesn't force a re-write of the complete disk. At worst, overwriting some sectors will force a re-write of an entire band, not an entire platter.
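As a rough illustration of the band mechanics described above, here's a toy model of the rewrite cost (the band size and the 0-indexed cost function are assumptions for illustration, not Seagate's actual firmware parameters):

```python
# Toy model of SMR band rewrites (illustrative only; the band size is an
# assumption, not Seagate's actual geometry).
# Tracks in a band are shingled in write order, so overwriting track k
# damages every later track in the same band; those tracks must be read
# out first and rewritten afterwards.

TRACKS_PER_BAND = 8  # assumed band size

def rewrite_cost(track_in_band: int, tracks_per_band: int = TRACKS_PER_BAND) -> int:
    """Number of tracks that must be (re)written to update one track.

    Updating track k (0-indexed) requires writing k itself plus every
    subsequent track in the band, since the wide writer clobbers them.
    """
    return tracks_per_band - track_in_band

# Worst case: updating the first track rewrites the whole band.
assert rewrite_cost(0) == TRACKS_PER_BAND
# Best case: updating the last track touches only that track.
assert rewrite_cost(TRACKS_PER_BAND - 1) == 1
# Average over a band: (N + 1) / 2 tracks.
avg = sum(rewrite_cost(k) for k in range(TRACKS_PER_BAND)) / TRACKS_PER_BAND
print(avg)  # 4.5 for an 8-track band
```

This also shows why smaller bands trade areal density for write performance: the worst-case rewrite shrinks with the band, but each band boundary wastes a guard region.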

Seagate claims it has already shipped one million SMR-enabled drives (I didn't actually know any SMR drives had been shipping), and plans on using the technology to increase areal densities beginning next year. In 2014 Seagate will move from a 1TB-per-platter design to 1.25TB per platter thanks to SMR. The increase in platter density will allow Seagate to ship a 4-platter/5TB drive next year. Seagate is hoping to hit higher densities without any performance degradation compared to existing SMR designs. The real question is whether or not Seagate can maintain similar full drive performance compared to a non-SMR drive.


27 Comments


  • Snoopykins - Monday, September 09, 2013 - link

    Very nice article, I always love the ones regarding new ideas to push beyond current limits. I for one would be interested in seeing some more real-world numbers. If Anandtech could find out which drives are currently using SMR and compare them price- and performance-wise to the ones that aren't, that would make for a pretty helpful article. It might help people decide whether or not to buy them, and it might also help put a finger on what the pricing is going to be like on the larger-capacity drives of the future. Your final sentence was "The real question is whether or not Seagate can maintain similar full drive performance compared to a non-SMR drive." I think you, Anand, of all people have what it takes to find out!
  • melgross - Monday, September 09, 2013 - link

    The question is whether an SMR drive can be differentiated from a "regular" drive through testing. If the rewriting process can be detected, then maybe yes, but if not, who knows?
  • Snoopykins - Monday, September 09, 2013 - link

    He could find out which ones are SMR from Seagate or by other means, then compare the prices. As for performance, if he can't find a way to detect a difference, then we probably won't either, and therefore they are effectively equal in real-world usage scenarios.
  • xdrol - Monday, September 09, 2013 - link

    This process is not much different from the one SSDs use; you can detect it there, so why couldn't you detect it here?
  • name99 - Monday, September 09, 2013 - link

    Depends on how well Seagate works to hide it.
    For example if they add 4GB of invisible flash, store immediate writes to the flash, AND can simultaneously write to flash (incoming stores) and read from flash while writing to disk, then it could all work out just fine.

    It's not impossible to make this transparent, it's just a question of whether they did or did not do the job well.
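The write-staging idea in the comment above can be sketched in a few lines: incoming writes land in a small non-volatile cache and are destaged to shingled bands in the background. This is a hypothetical model for illustration, not Seagate's actual firmware; the class name, capacity, and policy are all assumptions.

```python
# Minimal sketch of staging host writes in a cache ahead of slow SMR
# band rewrites (hypothetical model, not Seagate's implementation).
from collections import OrderedDict

class StagedWriteCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.pending = OrderedDict()   # block address -> data, oldest first

    def write(self, addr: int, data: bytes) -> bool:
        """Stage a write; returns False when the cache is full and the
        host write must stall until a background flush frees space."""
        if addr not in self.pending and len(self.pending) >= self.capacity:
            return False
        self.pending[addr] = data
        return True

    def flush_one(self):
        """Background step: destage the oldest pending write to its band."""
        return self.pending.popitem(last=False) if self.pending else None

cache = StagedWriteCache(capacity_blocks=2)
assert cache.write(10, b"a") and cache.write(11, b"b")
assert not cache.write(12, b"c")       # cache full: host-visible stall
cache.flush_one()                      # destage frees a slot
assert cache.write(12, b"c")
```

As long as the cache can absorb the write burst (and destaging keeps up), the band rewrites stay invisible to the host; sustained random writes that overflow it are exactly where the SMR penalty would surface.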
  • Kyle_PL - Friday, December 13, 2013 - link

    This "invisible flash" would be worn out very fast ... and the price of this HDD would be higher, so your solution doesn't make sense.
  • KAlmquist - Monday, September 23, 2013 - link

    The effect of SMR should show up in random read vs. random write performance. Consider what happens on a SMR drive when you write a block in the fourth track of a band. You have to read the three tracks which come before the track you are interested in, then write the block you actually want to change, and then rewrite (portions of) the three earlier tracks. That's seven physical I/O operations, so the disk has to rotate six times after the first I/O operation, which takes 50ms on a 7200 RPM disk. The bottom line is that random writes are going to be much slower than random reads.

    The above analysis only applies after the benchmark has reached steady state. To reach steady state, you have to write enough data to force the drive to actually write the data to the disk rather than storing it in cache. If the disk uses RAM for caching, that's not a big deal, but if the drive has 8GB of flash memory, it will take a long time to fill up the cache.
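The rotational-latency figure in the comment above checks out with simple arithmetic (a sketch; the seven-operation count is taken from the comment, and seek/settle time is ignored):

```python
# Back-of-the-envelope check of the 50ms figure: seven sequential I/O
# operations on the same band mean six full rotations after the first one.

RPM = 7200
ms_per_rotation = 60_000 / RPM          # 60,000 ms per minute / revs per minute
print(ms_per_rotation)                  # ~8.33 ms per revolution

io_operations = 7                       # read 3 tracks, write 1, rewrite 3
rotations_after_first = io_operations - 1
total_ms = rotations_after_first * ms_per_rotation
print(round(total_ms, 1))               # 50.0 ms, matching the comment
```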
  • Cow86 - Monday, September 09, 2013 - link

    The idea is valid, but I can imagine this will reduce write speeds on an already-filled disk section...because it'll have to rewrite the entire band instead of just overwriting the requested data. So slower write speeds in the later (filled) life of a HDD should be expected, I would think?
  • Azethoth - Tuesday, September 10, 2013 - link

    No, on average it has to rewrite half the band. It is only the worst case that rewrites the entire band (a change at the start). Best case is no extra rewrite, for changes at the end of the band.
  • Cow86 - Tuesday, September 10, 2013 - link

    Ah right...I sort of misunderstood...still, you'll have to rewrite more on average than just the desired new data...so slower speed on average? I'd be interested in seeing how much of an impact it might have as well.
