Intel's Ultrabook push has forced hard drive makers to do two things: take hybrid drives more seriously (because of Ultrabook performance requirements) and focus on thinner drive form factors. Seagate brought its NAND-equipped hybrid Momentus XT to market two years ago, but we haven't seen widespread OEM adoption, partially because of the lack of a second source for hybrid hard drives.

Today Western Digital announced a 5mm thick hybrid HDD with 32GB of MLC NAND on board. WD isn't sharing any further details about the architecture (NAND controller, what gets cached, etc.) or availability, but I should be able to see the drive in person later this week.

Comments

  • DanNeely - Monday, September 10, 2012 - link

    WD's large increase in NAND over Seagate's implementation could help performance a lot; 32GB is the same as the SSD cache drives showing up in a fair number of systems and is big enough for the OS, a few apps/games, and frequently accessed user data. As long as they've included enough NAND chips for decent bandwidth and a decent controller, it looks like they've got all the benefits of adding a cache drive without the increased volume penalty that limits its use by OEM laptop manufacturers and precludes aftermarket upgrades.
  • name99 - Monday, September 10, 2012 - link

    There are two issues here:

    (a) To make a hybrid disk work well, you need decent algorithms. You want to cache material that is used frequently RIGHT AWAY, but you don't want to cache material (especially large files) that is only used once, e.g. when copying large files or viewing movies.
    It's not trivial: you might think second access would be good enough, but it's common to copy a movie in, then soon thereafter read it. So how about third access? Well, it's also common to read that movie into your backup system soon after you copied it in. This is getting nasty: do we have to wait for a fourth access before we cache? Or do we track reads vs. writes? (A rough sketch of one such heuristic follows this comment.)

    Seagate's algorithms don't seem very good. Is it possible that their lack of competition so far reflects the fact that no one has come up with decent algorithms?

    (b) In many ways variable performance is worse than constantly bad performance, which your brain eventually adjusts to. I installed a Seagate Momentus XT in my GF's laptop, and after a month she told me she thought the computer was broken because sometimes it felt so fast and other times so slow.
    A similar data point: in Mountain Lion, Apple, who know a thing or two about UI, seem to have tried to optimize for "reproducibility" rather than raw performance, using algorithms that might have lower mean performance but also low variance; this is especially obvious in the new VM system.
    So it may be that this path is just flawed in principle; it's fighting against human nature.
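
A rough sketch of the access-count heuristic name99 describes, assuming a block-level admission policy that tracks reads and writes separately. The class name, thresholds, and request-size cutoff below are hypothetical illustrations, not any vendor's actual firmware logic:

```python
from collections import defaultdict

# Hypothetical block-level admission policy: promote a block to the NAND
# cache only after it has been *read* several times, and never promote
# blocks that were merely written once (e.g. a movie copied onto the disk).
READ_PROMOTE_THRESHOLD = 3   # made-up value; real firmware tuning is unknown
LARGE_REQUEST_BLOCKS = 2048  # treat requests this long as streaming I/O

class AdmissionPolicy:
    def __init__(self):
        self.read_counts = defaultdict(int)

    def on_request(self, op, start_block, length_blocks):
        """Return the set of blocks that should be admitted to the cache."""
        blocks = range(start_block, start_block + length_blocks)
        if op == "write":
            # Writing alone is no evidence of future reuse; reset counters
            # so a freshly copied file has to earn its way into the cache.
            for b in blocks:
                self.read_counts[b] = 0
            return set()
        if length_blocks >= LARGE_REQUEST_BLOCKS:
            # Long sequential reads (movie playback, backups) bypass the cache.
            return set()
        admitted = set()
        for b in blocks:
            self.read_counts[b] += 1
            if self.read_counts[b] >= READ_PROMOTE_THRESHOLD:
                admitted.add(b)
        return admitted
```

Even this toy version shows the tension name99 points out: raise the read threshold and genuinely hot data takes longer to reach the cache; lower it and a one-off backup read pollutes it.
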
  • hsew - Monday, September 10, 2012 - link

    I would just make an algorithm to cache small, regularly used files that the HDD cannot access quickly.
  • LesMoss - Monday, September 10, 2012 - link

    The problem is that a drive does not know what a "file" is. It just sees blocks.
  • Alexvrb - Monday, September 10, 2012 - link

    That is true of software-agnostic setups like Seagate's Momentus XT. This is not going to be like that; it is going to be more like Intel SRT or Dataplex, as MrSpadge discussed already. Software.

    This limits its uses, of course. The caching software is probably going to be limited primarily to Win7/8. But that covers the majority of use cases, especially Ultrabooks and high-end x86 tablets. So I think the benefits outweigh the drawbacks (software-agnostic vs. software-driven).

    In fact, if the drive performs well and is adopted by OEMs, we may see Seagate drop their current approach and follow suit.
  • MrSpadge - Monday, September 10, 2012 - link

    Valid points!

    Intel's SRT caching doesn't cache video files at all, a smart decision in my opinion (I'm using it in my main desktop). They're also caching on a per-block basis rather than per-file, which is also very smart: you won't need anywhere close to 20 GB just to cache your Win 7 install, even though it may use 20 GB of disk space. And nVelo's Dataplex caches on the first file access; otherwise the perceived speed increase wouldn't be worth the investment.

    They can do this because caching is done in the OS by a driver, which knows the file type and generally much more than a "dumb" HDD can (a rough sketch of that kind of filter follows this comment). Overall I like the idea of hybrid drives... but I feel the cache is in the wrong place here. I'd rather have a small NAND socket on the mainboard (maybe mSATA) that can be populated with a cache of whatever size you want, so caching works with any HDD and on any chipset. If the NAND wears out, simply toss a new one into the socket.
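
As a rough illustration of why a host-side driver can be smarter than the drive itself, here is a minimal sketch of a cache filter that sees file names and sizes (which a bare HDD never does), skipping video files while caching other files on first access. The function name, extension list, and size cutoff are assumptions for illustration, not Intel SRT's or nVelo's actual rules:

```python
import os

# Hypothetical host-side cache filter: because the driver runs in the OS,
# it can see file paths and types, not just block addresses.
SKIP_EXTENSIONS = {".mkv", ".mp4", ".avi", ".mov"}  # assumed list

def should_cache(path, size_bytes, max_cached_file=64 * 1024 * 1024):
    """Decide whether a file's blocks are worth putting in the NAND cache."""
    ext = os.path.splitext(path)[1].lower()
    if ext in SKIP_EXTENSIONS:
        return False   # streaming video gains little from caching
    if size_bytes > max_cached_file:
        return False   # very large files would evict hotter data
    return True        # otherwise cache on first access

# Example: should_cache("C:/Users/me/movie.mkv", 4_000_000_000) -> False
```
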
  • Alexvrb - Monday, September 10, 2012 - link

    Support for easily accessible mSATA by more laptop OEMs would be great! The other issue would be software standardization; ideally you'd let the OS or the chipset maker handle it. I wish AMD had an SRT-like caching implementation on some of their higher-end APU chipsets, especially now that they have some inexpensive, lower-power Trinity parts that have been helping drive Ultrathin prices down.

    I was thinking about this the other day: Crucial has some nice M4 mSATA SSDs, and there's the Mushkin Atlas mSATA. Both seem like they'd be good candidates for SRT, but again I'd like to see wider support for this sort of setup from more manufacturers.
  • zebrax2 - Monday, September 10, 2012 - link

    Wouldn't it be logical to cache files below a certain size? HDDs are fairly fast with large files; it's the small files that bog them down.
  • name99 - Monday, September 10, 2012 - link

    As has been pointed out already, the HDD doesn't know what files are. It doesn't get told about files; all it sees is a stream of BLOCK commands along the lines of
    "read 64KiB starting at block 0xFF1556576565"
    "write 32KiB starting at block 0x001454564FA"

    The best you could do is use the lengths of the read/write requests as one more piece of data to try to improve your caching. But none of this is trivial: you also need rapid-access data structures somewhere that track which blocks are being used in which ways, to look for the re-use patterns of interest, and the more information you want to use, the faster those data structures grow. (A rough sketch of such a bounded tracking structure follows this comment.)
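
A minimal sketch of the kind of bounded, drive-side tracking structure name99 alludes to, working only from block commands, using request length as an extra hint, and evicting old entries so the metadata cannot grow without limit. All names and sizes are illustrative assumptions:

```python
from collections import OrderedDict

# Hypothetical bounded tracking table: a drive can only afford so much RAM
# for per-extent metadata, so the oldest entries are evicted LRU-style as
# new block ranges are observed. Sizes and fields are made up.
MAX_TRACKED_EXTENTS = 65536

class ExtentTracker:
    def __init__(self):
        # (start_block, length_blocks) -> number of reads seen
        self.extents = OrderedDict()

    def observe(self, op, start_block, length_blocks):
        """Record one block command and return the extent's read count."""
        key = (start_block, length_blocks)
        if key in self.extents:
            self.extents.move_to_end(key)      # mark as recently seen
            if op == "read":
                self.extents[key] += 1
        else:
            self.extents[key] = 1 if op == "read" else 0
            if len(self.extents) > MAX_TRACKED_EXTENTS:
                self.extents.popitem(last=False)  # drop the coldest entry
        return self.extents[key]
```
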
  • Alexvrb - Monday, September 10, 2012 - link

    Software. See my post above. This makes the caching solution reliant on a particular OS/driver combo. But it also makes it much more powerful.
