Fusion Drive: Under the Hood

I took the 27-inch iMac out of the box and immediately went to work on Fusion Drive testing. I started filling the drive with a 128KB sequential write pass (queue depth of 1). Using iStat Menus 4 to actively monitor the state of both drives, I noticed that only the SSD was receiving this initial write pass: the SSD was being written to at 322MB/s with no activity on the HDD.

After 117GB of writes the HDD took over, initially at speeds of roughly 133 - 175MB/s.

The initial test just confirmed that Fusion Drive is indeed spanning the capacity of both drives. The first 117GB ended up on the SSD and the remaining 1TB of writes went directly to the HDD. It also gave me the first indication of priority: Fusion Drive will try to write to the SSD first, assuming there's sufficient free space (more on this later).
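The spanning behavior described above is straightforward to reproduce. Below is a minimal sketch of a sequential fill pass in Python (my own illustration, not the tool used in the review; the file path, reporting interval, and throughput figures in the comments are assumptions based on the numbers above):

```python
import os
import time

BLOCK = 128 * 1024  # 128KB writes, queue depth 1 (one write in flight)

def sequential_fill(path, total_bytes, report_every):
    """Fill `path` with sequential 128KB writes, returning a list of
    (bytes_written, MB/s) samples. On a Fusion Drive, throughput should
    drop from SSD speeds (~320MB/s) to HDD speeds (~150MB/s) once the
    SSD portion is full and writes spill over to the hard drive."""
    block = b"\0" * BLOCK
    samples = []
    written = since_report = 0
    start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:
        while written < total_bytes:
            f.write(block)
            written += BLOCK
            since_report += BLOCK
            if since_report >= report_every:
                os.fsync(f.fileno())  # flush so timing reflects the device
                elapsed = time.perf_counter() - start
                samples.append((written, written / elapsed / 1e6))
                since_report = 0
    return samples
```

Watching where the per-interval throughput falls off gives roughly the same answer as watching per-drive activity in iStat Menus: the transition point marks the end of the SSD's spanned capacity.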

Next up, I wanted to test random IO as this is ultimately where SSDs trump hard drives in performance and typically where SSD caching or hybrid hard drives fall short. I first tried the worst case scenario, a random write test that would span all logical block addresses. Given that the total capacity of the Fusion Drive is 1.1TB, how this test was handled would tell me a lot about how Apple maps LBAs (Logical Block Addresses) between the two drives.

The results were interesting, though not unexpected. Both the SSD and HDD saw write activity, with more IOs obviously hitting the hard drive (which accounts for a larger percentage of all available LBAs). The average 4KB (QD16) random write performance was around 0.51MB/s, constrained by the hard drive portion of the Fusion Drive setup.
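For reference, a full-span random write pass can be sketched as follows (a simplified single-outstanding-IO illustration of the test shape, not the QD16 benchmark tool actually used; the function and its parameters are my own):

```python
import os
import random
import time

IO_SIZE = 4 * 1024  # 4KB random writes

def random_write_pass(path, span_bytes, num_ios, seed=0):
    """Issue `num_ios` 4KB writes at random offsets within the first
    `span_bytes` of `path`, returning average throughput in MB/s.
    (QD1 here for simplicity; the review's test ran at QD16.)"""
    rng = random.Random(seed)
    buf = b"\0" * IO_SIZE
    fd = os.open(path, os.O_WRONLY)
    try:
        start = time.perf_counter()
        for _ in range(num_ios):
            # Pick a random aligned 4KB offset anywhere in the span
            off = rng.randrange(span_bytes // IO_SIZE) * IO_SIZE
            os.pwrite(fd, buf, off)
        os.fsync(fd)
        return num_ios * IO_SIZE / (time.perf_counter() - start) / 1e6
    finally:
        os.close(fd)
```

Setting `span_bytes` to the full 1.1TB exercises every LBA region (the worst case above); constraining it to a small window, as in the 8GB test later in this page, concentrates the IO on LBAs the Fusion Drive can profitably promote.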

After stopping the random write task, however, data immediately began moving between the HDD and SSD. Since the LBAs were chosen at random, it's possible that some (identical or spatially similar) addresses were picked more than once, and those blocks were immediately marked for promotion to the SSD. This was my first experience with the Fusion Drive actively moving data between drives.

A full-span random write test is a bit unfair for a consumer SSD, much less a hybrid SSD/HDD setup with roughly a 1:8 ratio of LBAs. To get an idea of how good Fusion Drive is at dealing with random IO, I constrained the random write test to the first 8GB of LBAs.

The resulting performance was quite different. For the first pass, average performance was roughly 7 - 9MB/s, with most of the IO hitting the SSD and a smaller portion hitting the hard drive. After the three-minute test, I waited while the Fusion Drive moved data around, then repeated it. For the second run, total performance jumped to 21.9MB/s, with more of the IO being moved to the SSD although the hard drive was still seeing writes.

In the shot to the left, most random writes are hitting the SSD but some are still going to the HDD; after some moving of data and remapping of LBAs, nearly all random writes go to the SSD and performance is much higher.

On the third attempt, nearly all random writes went to the SSD, with performance peaking at 98MB/s and dropping to a minimum of 35MB/s as the SSD got more fragmented. This told me that Apple seems to dynamically map LBAs to the SSD based on frequency of access, a very proactive approach to ensuring high performance. Ultimately this is a big difference between standard SSD caches and what Fusion Drive appears to be doing. Most SSD caches work based on frequency of read access, whereas Fusion Drive appears to (at least partially) take into account which LBAs are frequently targeted for writes and map those to the SSD.

Note that subsequent random write tests produced very different results. As I filled the Fusion Drive with more data and applications (~80% full of real data and applications), I never saw random write performance reach these levels again. After each run I'd see short periods where data would move around, but random IO hit the Fusion Drive in around a 7:1 ratio of HDD to SSD accesses. Given the capacity difference between the drives, this ratio makes a lot of sense. If you have a workload composed of a lot of random writes that span all available space, Fusion Drive isn't for you. Given that most such workloads are confined to the enterprise space, that shouldn't really be a concern here.
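That steady-state split is roughly what you'd expect if random IOs landed uniformly across the two capacity regions, as a quick back-of-the-envelope check shows (capacities approximated from the figures earlier on this page):

```python
# If 4KB random writes land uniformly across all LBAs, the HDD:SSD
# access ratio should simply track the capacity ratio of the two drives.
ssd_gb, hdd_gb = 117, 1000  # approximate spanned capacities from this review
ratio = hdd_gb / ssd_gb
print(f"expected HDD:SSD ratio = {ratio:.1f}:1")  # prints "expected HDD:SSD ratio = 8.5:1"
```

The predicted ~8.5:1 is close to the observed ~7:1; the SSD being slightly over-represented is consistent with hot LBAs still getting promoted.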

Comments

  • robinthakur - Sunday, January 20, 2013 - link

    Lol exactly! When I was a student and had loads of free time, I built my own PCs and overclocked them (Celeron 300A FTW!), but over the years I really don't have the time anymore to tinker constantly and find myself using Macs increasingly now, booting into Windows whenever I need to use Visual Studio. Yes, they are more expensive, but they are very nicely designed and powerful (assuming money is no limiter).
  • mavere - Friday, January 18, 2013 - link

    "The proportion of people who can handle manually segregating their files is much, much smaller than most of us realize"

    I agreed with your post, but it always astounds me that commenters in articles like these need occasional reminders that the real world exists, and no, people don't care about obsessive, esoteric ways to deal with technological minutiae.
  • WaltFrench - Friday, January 18, 2013 - link

    Anybody else getting a bit of déjà vu? I recently saw a rehash of the compiler-vs-assembly (or perhaps, trick-playing to work around compiler-optimization bugs); the early comment was K&P, 1976.

    Yes, anybody who knows what they're doing, and is willing to spend the time, can hand-tune a machine/storage system, better than a general-purpose algorithm. *I* have the combo SSD + spinner approach in my laptop, but would have saved myself MANY hours of fussing and frustration, had a good Fusion-type solution been available.

    It'd be interesting to see how much time Anand thinks a person of his skill and general experience would take to install, configure and tune a SSD+spinner combo, versus the time he'd save per month from the somewhat better results vis-à-vis a Fusion drive. As a very rough SWAG, I'll guess that the payback for an expert, heavy user is probably around 2–3 years, an up-front sunk cost that won't pay back because it'll be necessary to repeat with a NEW machine before that time.
  • guidryp - Friday, January 18, 2013 - link

    These claims about the effort in setting up SSD/HD combo are getting quite silly.

    There is essentially ZERO time difference in setting up a SSD/HD partitioned combo vs Fusion. Your payback would be on Day 1.

    The only effort is simply deciding which partition to load new material on. That decision takes what? Microseconds.

    It is as simple as installing OS/apps on the SSD and media on the HD, vs installing OS/apps/media on Fusion. The effort is essentially the same.

    But that simple manual partition will perform better, create less system thrashing and less wear on all your drives.
  • Zink - Friday, January 18, 2013 - link

    But then you end up with an SSD filled up with no longer relevant data and you need to figure out how to free up space again. A combo drive takes care of that for you and keeps the SSD filled to the brim with most of the data that gets used. You can download any games, start any big video editing project, and know that you are getting 50%-100% of the benefit of the SSD without worrying about managing segregated data. With a segregated setup you end up playing games from the HDD or editing video files that are on the HDD and sometimes see 0% of the benefit of the SSD. Fusion seems like the future.
  • KitsuneKnight - Friday, January 18, 2013 - link

    If you can divide your data up as OS, Apps, and Media, and OS + Apps fits on the SSD, then sure, it's not too bad.

    Unfortunately, my Steam library is approximately 250 GBs... That alone would fill up most SSDs out there. And that's not even counting all my non-Steam games, which would help push most any SSD towards being totally full. If I'd bought too many recent games, it'd likely be quite a bit larger than that (AAA games seem to be ranging from 10-30 GBs these days).

    Unless you sprung for a 500 GB SSD (which aren't exactly cheap, even today), you'd be having quite a pickle on your hands. Likely having to move most of the library manually to the HDD (which is a bit of a pain with Steam). Which means it's suddenly much more complicated than OS/Apps on SSD, and Media on the HDD. Especially since SSDs massively improve the load time of large games (unlike the impact it has on media).

    And then there's the other examples I've already given: the artist I know that works on absurdly massive PSDs, and has many terabytes of them (what's the point of a SSD if it doesn't benefit your primary usage of a computer?), as well as my situation with VMs on my non-gaming machine (which actually has a SSD + HDD setup right now). A lot of people could probably do the divide you're talking about, but likely even more people could fit all their data in either a 128 or 256 GB SSD.
  • name99 - Friday, January 18, 2013 - link

    Then WTF are you complaining about?
    You can still buy an HD only mac mini and add your own USB3 SSD as boot disk.
    Or you can buy a fusion mac mini and split the two drives apart.

    It's not enough that things can be done your way, you ALSO want everyone else, who wants a simple solution, to have to suffer?
  • Mr Perfect - Friday, January 18, 2013 - link

    Intel SRT is useful for everyone, there's no reason to look down on it. Could I sit there and manually move files back and forth between the SSD and HD? Sure. But why? Seriously, I have better things to do with my time than move around the program of the week between storage mediums. Last week I was using Metro 2033, this week is World of Tanks. Next week I might finish one of those run-throughs of D:HR or Portal 2 that I left hanging. SRT takes care of all of that. This is 2013; an enthusiast-class workstation should damn well be able to handle something as simple as caching, and it can. Enterprise-class servers have been doing it for some time, so why isn't it good enough for a gaming rig?

    My one complaint with RST is the cache size limit. Why would Intel even impose a limit?
  • EnzoFX - Saturday, January 19, 2013 - link

    You're framing it in your own way so that only your solution works. Fail. Unnecessary stressing of the SSD? The better argument for most people would be putting that SSD to good use. Not trying to NOT use it.

    It further isn't simply about putting the files where they go and then being done with it. Files are changed, updated, and if you're on multiple drives, copied back and forth. Some people don't want to deal with that. Actually, no one should want to deal with that. There are only barriers, with every person having their own threshold for good solutions. Is it that hard to understand?
  • lyeoh - Saturday, January 19, 2013 - link

    Do you manually control the data in the 1st, 2nd level cache in your CPU too? There are plenty of decent caching algorithms created by very smart people. If the algorithms were that bad your CPU would be running very slow.

    There should be no need for you to WASTE TIME moving crap around from drive to drive. The OS can know how often you use stuff, and whether the accesses are sequential, random, or slow.

    If Windows 8's Storage Spaces was more like Fusion Drive out of the box (or better even), us geeks would be more impressed by Windows 8.
