The Application Experience

By this point I’ve talked a lot about the synthetic benchmark experience with Apple’s Fusion Drive, but what about the real world user experience? In short, it’s surprisingly good. While I would describe most SSD caching implementations I’ve used as being more HDD-like than SSD-like, Apple’s Fusion Drive ends up almost halfway between an HDD experience and an SSD experience.

Installing anything of reasonable size almost always hits the SSD first, which goes a long way towards making Fusion Drive feel SSD-like. This isn’t just true of application installs; copying anything in general hits the SSD first. The magic number appears to be 4GB, although with a little effort you can get the Fusion Drive to start writing to the HDD after only 1 - 2GB. I used Iometer to create a sequential test file on the Fusion Drive, watched for the point at which writes stopped going to the SSD, stopped the process, renamed the file and started the file creation again. The screenshot below gives you a good idea of the minimum amount of space Apple will keep free on the SSD for incoming writes:

You can see that if you’re quick enough, you can easily drop below 2GB of writes to the SSD before the HDD takes over. I can’t say for certain that this is the amount of free space remaining on the SSD, but that’s the most likely explanation; there’s no sense in exposing a 121GB SSD and not using all of it.
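If you want to reproduce this behavior without Iometer, a rough equivalent is to write a large file in fixed-size chunks and time each one; the SSD-to-HDD handoff shows up as a sharp drop in sustained write throughput. Below is a minimal sketch in Python - the chunk size, total size and file path are my own choices for illustration, not anything Apple documents:

```python
import os
import time

CHUNK = 256 * 1024 * 1024           # 256MB chunks (my choice, not Apple's)
TOTAL = 8 * 1024 * 1024 * 1024      # 8GB total, enough to cross over
PATH = "/tmp/fusion_test.bin"       # hypothetical path on the Fusion Drive

buf = os.urandom(CHUNK)             # incompressible data
written = 0
with open(PATH, "wb") as f:
    while written < TOTAL:
        start = time.time()
        f.write(buf)
        f.flush()
        os.fsync(f.fileno())        # make sure the write actually hits disk
        elapsed = time.time() - start
        written += CHUNK
        mbps = (CHUNK / (1024 * 1024)) / elapsed
        print(f"{written / 2**30:5.2f} GB written: {mbps:7.1f} MB/s")
        # A sudden drop from SSD-class to HDD-class throughput
        # marks the point where the HDD takes over.
os.remove(PATH)
```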

In most real world scenarios where you’re not aggressively trying to fill the SSD, Fusion Drive will keep at least 4GB of the SSD free. Note that when you first use a mostly empty Fusion Drive, almost anything you write, regardless of size, will go straight to the SSD. As capacity pressure increases, however, Apple’s policy shifts towards writing up to 4GB of any given file to the SSD and spilling the remainder onto the hard drive.
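Pieced together from these observations, the write-placement policy looks something like the sketch below. To be clear, this is my reconstruction of what the Fusion Drive appears to do, not Apple’s actual CoreStorage code; the 4GB constants come straight from the behavior described above.

```python
SSD_RESERVE = 4 * 2**30    # Fusion Drive appears to keep ~4GB of SSD free
PER_FILE_CAP = 4 * 2**30   # under pressure, up to ~4GB of a file hits the SSD

def place_write(file_size, ssd_free):
    """Split an incoming write between SSD and HDD.

    A reconstruction of observed behavior, not Apple's implementation.
    Returns (bytes_to_ssd, bytes_to_hdd).
    """
    if ssd_free - file_size >= SSD_RESERVE:
        # Plenty of headroom: the whole write lands on the SSD.
        return file_size, 0
    # Capacity pressure: send up to 4GB (or whatever fits above the
    # reserve) to the SSD and spill the rest onto the hard drive.
    to_ssd = max(0, min(PER_FILE_CAP, ssd_free - SSD_RESERVE))
    to_ssd = min(to_ssd, file_size)
    return to_ssd, file_size - to_ssd
```

Under this model a mostly empty drive takes the first branch for nearly every write, which matches the behavior above.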

I confirmed this policy by installing Apple's OS X developer tools as well as Xcode itself. The latter comes closer to the magic 4GB crossover point, but the bulk of the application still ended up on the SSD by default.

The same is true for data generated by an application. I used Xcode to build Adium, a 682MB project, and the entire compile process hit the SSD - the mechanical side of the Fusion Drive never lifted a finger. I then tried building a larger project: Firefox, whose source tree weighs in at nearly 2GB. In this case I did see a very short period of HDD activity, but the vast majority of the work was confined to the SSD.
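For anyone who wants to watch this split themselves, OS X’s iostat can report per-disk throughput once a second, making it easy to see whether the SSD or the HDD side is doing the work during a build. A quick wrapper is below; the disk0/disk1 identifiers are assumptions that vary by machine (run diskutil list to find yours):

```python
import subprocess

# disk0/disk1 are assumptions - run `diskutil list` to find the actual
# identifiers for the SSD and HDD halves of your Fusion Drive.
cmd = ["iostat", "-d", "-w", "1", "disk0", "disk1"]

# Prints one line of KB/t, tps and MB/s per disk every second; kick off
# a build (e.g. in Xcode) in another window and watch which disk moves.
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
try:
    for line in proc.stdout:
        print(line, end="")
except KeyboardInterrupt:
    proc.terminate()
```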

I grabbed a large video file (> 10GB) that I cloned over when I migrated my personal machine to the iMac and paid attention to its behavior as I copied the file to a new location. For the first 2GB of the transfer, the file streamed from the SSD and was written back to the SSD. For the next 2GB, the file was read off of the HDD but still written to the SSD. After around 4GB, both the source and target became the HDD instead. Fusion Drive ended up caching far more of that large video than I expected. In my opinion the right move here would be to force all large files onto the hard drive by default unless they were heavily accessed. Apple's approach is a reasonable compromise, but it's still far more aggressive about putting blocks on the SSD than I anticipated.

I repeated the test with a different video file that I had never accessed and got a completely different result: the entire file was stored on the hard drive portion of the Fusion Drive. I repeated the test once more with my iPhoto library, which I had been accessing frequently. To my surprise, the bulk of the library lived on the HDD, with only a few bursts of reads hitting the SSD while I was copying it. In both cases, the copy target ended up being the SSD, of course.

My AnandTech folder is over 32GB in size and it contains text, photos, presentations, benchmark results and pretty much everything associated with every review I’ve put together. Although this folder is very important, the truth is that the bulk of that 32GB is rarely accessed. I went to duplicate the folder and discovered that almost none of it resided on the SSD. The same was true for my 38GB Documents folder, the bulk of which, again, went unread.

Applications, on the other hand, were almost always on the SSD.

In general, Apple’s Fusion Drive appears to do a fairly good job of automating what I typically do manually: keeping my OS and applications on the SSD, and big media files on the HDD. About the only difference between how I manually organize my data and how Fusion Drive does it is that I put my documents and AnandTech folder on my SSD by default. I don’t do this just for performance, but also for reliability: my HDD is more likely to die than my SSD.
