A Custom Form Factor PCIe SSD

In the old days, increasing the maximum bandwidth supported by your PATA/SATA interface was largely ceremonial at first. Hard drives were rarely quick enough to need more bandwidth than they were given to begin with, and only after generations of platter density increases would you see transfer rate barriers broken. Then came SSDs.

Not only do solid state drives offer amazingly low access latency, but you can hit amazingly high bandwidth figures by striping accesses across multiple NAND Flash die. A 256GB SSD can be made up of 32 independent NAND die, clustered into 8 discrete packages. A good controller will be able to have reads/writes in flight to over half of those die in parallel. The result is a setup that can quickly exceed the maximum bandwidth that SATA can offer. Today that number is roughly 500MB/s for 6Gbps SATA, which even value consumer SSDs are able to hit without trying too hard. Rather than wait for another rev of the SATA spec, SSD controller makers started eyeing native PCIe based controllers as an alternative.
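
To put rough numbers on the NAND side, here’s a back-of-the-envelope sketch in Python. The per-die read rate is an illustrative assumption, not a measured figure for any particular drive; only the die/package counts and the SATA ceiling come from the discussion above:

    # Why striping reads across many NAND die quickly outruns 6Gbps SATA.
    # PER_DIE_READ_MBPS is an assumed, illustrative figure.
    SATA_6G_LIMIT_MBPS = 500              # practical ceiling of 6Gbps SATA
    PER_DIE_READ_MBPS = 50                # assumed sequential read rate of one die
    TOTAL_DIE = 32                        # e.g. a 256GB drive built from 32 die
    DIE_IN_FLIGHT = TOTAL_DIE * 5 // 8    # "over half" of the die kept busy -> 20

    aggregate_mbps = PER_DIE_READ_MBPS * DIE_IN_FLIGHT
    print(f"Aggregate NAND bandwidth: {aggregate_mbps} MB/s")    # 1000 MB/s
    print(f"6Gbps SATA ceiling:       {SATA_6G_LIMIT_MBPS} MB/s")
    print(f"Interface-bound: {aggregate_mbps > SATA_6G_LIMIT_MBPS}")  # True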

You can view a traditional SSD controller as having two sides: one that talks to the array of NAND flash, and one that talks to the host system’s SATA controller. The SATA side has been limiting max sequential transfers for a while now at roughly 550MB/s. The drive’s SATA interface talks to the host’s SATA controller, which itself inevitably sits on a PCIe bus. You can remove the middleman by giving the SSD controller a native PCIe interface instead. With SATA out of the way, you can now easily scale bandwidth by simply adding PCIe lanes. The first generation of consumer PCIe SSDs will use PCIe 2.0, since that’s what’s abundant, inexpensive and power efficient on modern platforms. Each PCIe 2.0 lane is good for roughly 500MB/s in each direction (1GB/s total per lane). Apple’s implementation uses two PCIe 2.0 lanes, for a total of 1GB/s of bandwidth in each direction (2GB/s aggregate).
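
The lane math is simple enough to restate in code; the sketch below just encodes the per-lane figure quoted above:

    # PCIe 2.0 link bandwidth for an x2 SSD interface, using the ~500MB/s
    # per-lane, per-direction figure from the text.
    PCIE2_LANE_MBPS_PER_DIR = 500   # ~500MB/s per lane, each direction
    LANES = 2                       # Apple's implementation: PCIe 2.0 x2

    per_direction_mbps = PCIE2_LANE_MBPS_PER_DIR * LANES   # 1000 MB/s each way
    aggregate_mbps = per_direction_mbps * 2                # 2000 MB/s both directions combined
    print(f"{per_direction_mbps} MB/s per direction, {aggregate_mbps} MB/s aggregate")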

The move to a PCIe 2.0 x2 interface completely eliminates the host side bottleneck. As I pointed out in my initial look at the new MacBook Air, my review sample’s 256GB SSD had no problems delivering almost 800MB/s in peak sequential reads/writes. Do keep in mind that you’ll likely see slower results on the 128GB drive.
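
If you want to approximate a peak sequential read figure yourself, a timed large-block read is the usual approach. The sketch below is a generic illustration, not the benchmark behind the numbers above, and the test file path is hypothetical; OS caching, file size and chunk size all affect the result:

    # Rough sequential read throughput: read a large file in big chunks and time it.
    import time

    def sequential_read_mbps(path, chunk_mb=8):
        chunk = chunk_mb * 1024 * 1024
        total_bytes = 0
        start = time.perf_counter()
        with open(path, "rb", buffering=0) as f:   # unbuffered raw reads
            while True:
                data = f.read(chunk)
                if not data:
                    break
                total_bytes += len(data)
        elapsed = time.perf_counter() - start
        return (total_bytes / (1024 * 1024)) / elapsed

    # Hypothetical usage: point it at a multi-GB file created beforehand.
    # print(sequential_read_mbps("/path/to/large_testfile.bin"))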

Users have spotted both Samsung- and SanDisk-based PCIe SSDs in the 2013 MacBook Airs. Thankfully Apple doesn’t obscure the controller maker too much in its drive names. An SM prefix denotes Samsung:

My review sample featured a Samsung controller. I know very little about the new Samsung controller, other than that it is a native PCIe solution that still leverages AHCI (this isn’t NVMe). Within days of Apple launching the new MBAs, Samsung announced its first consumer PCIe SSD controller: the XP941. I can only assume the XP941 is at least somewhat related to what’s in the new MBA.

The Samsung controller is paired with 512MB of DDR3 DRAM and 8 Samsung 10nm-class (10nm - 20nm process node) MLC NAND devices.

New PCIe SSD (top) vs. 2012 MBA SATA SSD (bottom) - Courtesy iFixit

Despite moving to PCIe, Apple continues to use its own proprietary form factor and interface for the SSD. This isn’t an M.2 drive; unfortunately the M.2 spec wasn’t far enough along in time for Apple to use it this generation. The overall drive is smaller than the previous design, partially enabled by Samsung’s smaller NAND packages.

Comments

  • p1esk - Monday, June 24, 2013 - link

    At longer lengths or if there's electrical interference, Cat6 will be better for 1Gbps than Cat5e.
  • spronkey - Monday, June 24, 2013 - link

    Technically Cat5e doesn't have to reach 1Gbps at 100m, which is what the 1Gbps standard calls for. So Cat6 is the 1Gbps standard :P
  • A5 - Monday, June 24, 2013 - link

    Though you'd have to buy some seriously cheap Cat5e to get something that doesn't hit that spec. But yeah, if you're going in-wall you may as well go Cat6 for when you want a 10G network.
  • Mikad - Monday, June 24, 2013 - link

    Thanks for the review.

    If the review is at some point updated with i7 battery life results, it would be great if you could also run the battery tests in Windows + Bootcamp. Some sites, like Pcper, have done this and seen about a 50% decrease in battery life. It would be great to have some kind of analysis of why battery life takes such a hit in Windows.
  • IntelUser2000 - Monday, June 24, 2013 - link

    Because the battery life gains from Haswell depend on how well the components cooperate, and sub-optimal drivers would easily cancel the improvements out.

    That's why you see Haswell Windows Ultrabooks doing pretty well, while the MacBook Air running Windows isn't doing so well.
  • Glindon - Monday, June 24, 2013 - link

    I don't think it really has much to do with drivers. There's nothing special about the components compared to a Windows ultrabook. I think it boils down to something at the EFI level, and Apple isn't going to prioritize it for a small subset of users. I'm guessing that running Windows in a VM (through Parallels) might give better battery life.
  • darwinosx - Monday, June 24, 2013 - link

    Yes there is, which you would know if you read the article.
    Apple uses custom sad, motherboard, chipsets and a few other components. That's why the sad is so much faster, for example.
  • erple2 - Monday, June 24, 2013 - link

    That, BTW, was one of the greatest autocorrects I've seen in a while. Full of fast sad indeed.
  • Bkord123 - Tuesday, June 25, 2013 - link

    hahaha!!!
  • Mikad - Monday, June 24, 2013 - link

    I've seen the drivers mentioned as the reason before, but as Glindon stated, the components are just the standard PC stuff, found in other ultrabooks. The drivers are probably the same too.
