Putting Fusion Drive’s Performance in Perspective

Benchmarking Fusion Drive is a bit of a challenge since it prioritizes the SSD for all incoming writes. If you don’t fill the Fusion Drive up, you can write tons of data to the drive and it’ll all hit the SSD. If you do fill the drive up and test with a dataset < 4GB, then you’ll once again just measure SSD performance.
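
Measuring that transition directly is possible if you're willing to use a synthetic workload. A minimal sketch of the idea (my own sketch, not something used for the numbers in this review): write a test file much larger than 4GB in fixed-size chunks, force each chunk to disk, and log per-chunk throughput. On a nearly full Fusion Drive the numbers should fall from SSD speeds to HDD speeds once the write buffer is exhausted. The file name and sizes below are placeholders:

    # Rough probe of the ~4GB SSD write buffer: write well past it in fixed-size
    # chunks and log per-chunk throughput. Assumes the current directory lives on
    # the Fusion Drive; chunk/total sizes are arbitrary choices.
    import os
    import time

    CHUNK = 256 * 1024 * 1024          # 256MB per chunk
    TOTAL = 16 * 1024 * 1024 * 1024    # 16GB total, well past the 4GB buffer
    PATH = "fd_probe.bin"              # hypothetical test file

    buf = b"\xA5" * CHUNK              # filler pattern; the actual contents don't matter here
    written = 0
    with open(PATH, "wb") as f:
        while written < TOTAL:
            start = time.time()
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())       # push the chunk to the drive so we aren't timing the page cache
            written += CHUNK
            rate = (CHUNK / 2**20) / (time.time() - start)
            print(f"{written / 2**30:5.1f} GB written, {rate:7.1f} MB/s")
    os.remove(PATH)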

In trying to come up with a use case that spanned both drives, I stumbled upon a relatively simple one. By now my Fusion Drive was over 70% full, which meant the SSD was running as close to capacity as possible (save its 4GB write buffer). I took my iPhoto library of 703 photos and simply exported them all as TIFFs. The resulting files were big enough that by the time I hit photo 297, the 4GB write buffer on the SSD was full and all subsequent exported photos were directed to the HDD instead. I timed the process, then compared it to an HDD-only partition on the same iMac as well as to a Samsung PM830 SSD connected via USB 3.0 to simulate a pure SSD configuration. The results are a bit biased in favor of the HDD-only configuration since the writes are mostly sequential:

iPhoto Library Export to TIFFs

The breakdown accurately sums up my Fusion Drive experience: nearly halfway between a hard drive and a pure SSD configuration. In this particular test the gains don't appear all that dramatic, but that's mostly because we're looking at relatively low queue depth sequential transfers; the gap between Fusion Drive and the HDD would grow for less sequential workloads. Unfortunately, I couldn't find a good application use case that generates 4GB+ of pseudo-random data in a repeatable enough fashion to benchmark.
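
Generating that much repeatable pseudo-random data synthetically is easy enough; the trouble is that a synthetic generator isn't a real application workload, which is what I was after. For completeness, a minimal sketch (assuming NumPy for generation speed; the seed, file count and sizes are arbitrary) looks like this:

    # Seed the PRNG so every run produces byte-identical files, making the
    # dataset repeatable across test runs. All constants here are placeholders.
    import numpy as np

    SEED = 2013
    FILE_COUNT = 64
    FILE_SIZE = 128 * 1024 * 1024      # 128MB per file -> 8GB total

    rng = np.random.default_rng(SEED)
    for i in range(FILE_COUNT):
        chunk = rng.integers(0, 256, size=FILE_SIZE, dtype=np.uint8).tobytes()
        with open(f"random_{i:03d}.bin", "wb") as out:
            out.write(chunk)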

If I hammered on the Fusion Drive enough with constant, very large sequential writes (up to 260GB for a single file), I could back the drive into a corner where it would no longer migrate data to the SSD until I rebooted (woohoo, I sort of broke it!). I suspect this is a bug that isn't triggered through normal automated testing (for obvious reasons), but it did create an interesting situation that I could exploit for testing purposes.
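
If you want to check whether the SSD side of the array is still seeing traffic after a workload like this (in other words, whether migration has resumed), watching per-device I/O is the simplest tell. A rough sketch, assuming the Fusion Drive's members show up as disk0 (SSD) and disk1 (HDD); diskutil cs list reports the actual physical volumes:

    # Poll iostat for the two physical members of the Fusion Drive and print the
    # most recent throughput line for each. disk0/disk1 are assumptions; adjust
    # to match your system. Stop with Ctrl-C.
    import subprocess
    import time

    SSD, HDD = "disk0", "disk1"        # hypothetical SSD and HDD member devices

    while True:
        report = subprocess.run(["iostat", "-d", "-c", "1", SSD, HDD],
                                capture_output=True, text=True).stdout
        print(report.strip().splitlines()[-1])   # KB/t, tps, MB/s columns per disk
        time.sleep(5)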

Launching any of the iMac's pre-installed applications that I used frequently confirmed they were still located on the SSD, but the same wasn't true of some of the latecomers. In particular, Photoshop CS6 remained partially on the SSD and partially on the HDD. It ended up being a good benchmark for pseudo-random read performance on Fusion Drive where the workload is too big (or in this case, artificially divided) to fit on the SSD alone. I measured Photoshop launch time on the Fusion Drive, on an HDD-only partition and on a PM830 connected via USB 3.0. The results, once again, mirrored my experience with the setup:

Photoshop CS6 Launch Time (Not Fully Cached)

Fusion Drive delivers a noticeable improvement over the HDD-only configuration, speeding up launch time by around 40%. An SSD-only configuration, however, cuts launch time by more than half. Note that if Photoshop were among my most frequently used applications, it would get moved over to the SSD entirely and deliver performance indistinguishable from a pure SSD configuration. In this case it hadn't, because my 1.1TB Fusion Drive was nearly 80% full, which brings me to a point I made earlier:

The Practical Limits of Fusion Drive

Apple's Fusion Drive is very aggressive about writing to the SSD; however, the more data you store, the more conservative the algorithm seems to become. This isn't really shocking, but it's worth pointing out: at lower total drive utilization the SSD became home to virtually everything I needed, yet as soon as my application needs outgrew what Fusion Drive could easily accommodate, the platform became a lot pickier about what got moved onto the SSD. This is very important to keep in mind. If 128GB of storage isn't enough for all of your frequently used applications, data and OS to begin with, you're going to have a distinctly more HDD-like experience with Fusion Drive. To put this to the test, I took my 200GB+ MacBook Pro image and moved it over to the iMac. Note that most of this 200GB was applications and data I actually used regularly.
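
If you're not sure which side of that 128GB line you fall on, a quick sanity check is to total up the directories you actually touch day to day and compare against the NAND capacity. A simple sketch (the paths are placeholders for whatever your real working set is):

    # Sum the size of the OS/application/data directories you use regularly and
    # compare the total against the 128GB of NAND. HOT_PATHS is a placeholder.
    import os

    HOT_PATHS = ["/Applications",
                 os.path.expanduser("~/Library"),
                 os.path.expanduser("~/Pictures")]

    total = 0
    for root in HOT_PATHS:
        for dirpath, _dirs, filenames in os.walk(root):
            for name in filenames:
                try:
                    total += os.path.getsize(os.path.join(dirpath, name))
                except OSError:        # broken symlinks, permission errors, etc.
                    pass

    print(f"Approximate working set: {total / 2**30:.1f} GB vs. 128 GB of NAND")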

By the end of my testing I was firmly in the category of users who need more solid state storage. Spotlight searches took longer than on a pure SSD configuration, not all application launches were instant, adding photos to iPhoto from Safari took longer, and so on. Fusion Drive may be good, but it's not magic. If you realistically need more than 128GB of solid state storage, Fusion Drive isn't for you.

Comments

  • tipoo - Friday, January 18, 2013 - link

    To your last point Name99, indeed they will.
  • name99 - Friday, January 18, 2013 - link

    As compared to all those other tablets out there with 128 and 256GB of storage? Like uuh, huh, wait, the names will come to me...

    When EVERYONE is doing things a certain way, not just Apple, it may be worth asking if there are other issues going on here (limited manufacturing capacity and exploding demand, for one) rather than immediately assuming Apple is out to screw you.
  • Death666Angel - Friday, January 18, 2013 - link

    Tons of Archos stuff, Samsung XE700, Gigabyte and Dell tablets etc. have >120GB storage.
  • name99 - Friday, January 18, 2013 - link

    So in other words the tablets that are trying to be laptop replacements, and that have to cope with the massive footprint of Windows 8.

    You may consider this to be proof against my point; I don't.
  • Hrel - Friday, January 18, 2013 - link

    "You can create Boot Camp or other additional partitions on a Fusion Drive, however these partitions will reside on the HDD portion exclusively."

    So you CAN create a Boot Camp partition on a Fusion Drive, it just won't utilize the SSD portion of that fusion drive at all. Or am I not understanding you?
  • Hrel - Friday, January 18, 2013 - link

    *facepalm, I read "you can't create..." nm me... whistle whistle whistle
  • Shadowmaster625 - Friday, January 18, 2013 - link

    May as well take that $400 to downtown Detroit...

    Seriously though, why in blazes are HDD manufacturers having such a hard time with this? How hard is it just to throw 4GB of SLC onto the little circuit board of a 1TB HDD? Yes, all you need is 4GB. The controller simply needs to perform a very simple algorithm... If the file you are writing is greater than 4MB in size, write directly to the HDD. It is a large sequential write and thus HDD performance will be adequate. If it's a small write (< 4MB), write that to the SLC cache. That one tiny little optimization will get you 90% of the performance of a Vertex 4 (depending on the bandwidth of this 4GB of SLC, of course). But really it doesn't need to be as fast as a Vertex 4. It just needs to be in that ballpark, for small random I/O. Large sequential I/O can just skip the NAND altogether.
  • Ben90 - Friday, January 18, 2013 - link

    Lol, stupid. System32 and SysWOW64 would fill your NAND on installation.
  • Hrel - Friday, January 18, 2013 - link

    Those entire folders wouldn't go on the NAND, they'd go on the HDD. Read the article on here about the MomentusXT from Seagate.
  • Hrel - Friday, January 18, 2013 - link

    found it for you http://www.anandtech.com/show/5160/seagate-2nd-gen...
