Why Do We Need Faster SSDs?

A claim I've often seen around the Internet is that today's SSDs are already "fast enough" and that there is no point in faster SSDs unless you're an enthusiast or professional who needs maximum IO performance. There is some truth to that claim, but the big picture is much broader.

It's true that going from a SATA SSD to a PCIe SSD likely won't bring the same "wow" factor that going from a hard drive to an SSD did, and for the average user there may not be any noticeable difference at all. But by that logic, does a faster CPU or GPU bring any noticeable increase in performance unless your workload specifically benefits from it? No. What happens, though, if the faster component doesn't consume any more power than the slower one? You gain battery life!

If you go back in time and think of all the innovations and improvements we've seen over the years, there is one essential component that is conspicuously absent: the battery. Compared to other components, battery technology has seen no major breakthroughs, and as a result companies have had to rely on improving everything else to increase battery life. If you look at Intel's CPU strategy over the past few years, you'll notice that mobile and power savings have been the center of attention. It's not an increase in battery capacity that has brought us things like 12-hour battery life in the 13" MacBook Air, but more efficient chip architectures that provide more performance without consuming any more power. The term often used here is "race to idle": a faster chip completes a task sooner and can hence spend more time idling, which reduces overall power consumption.

SSDs are no exception to the rule here. A faster SSD will complete IO requests sooner and will thus consume less power in total because it will spend more time idling (assuming the drives being compared have similar power consumption at idle and under load). If the interface is the bottleneck, there will be cases where the drive could complete requests faster if the interface allowed it. This is where PCIe comes in.

To demonstrate the importance of the SSD from the battery life perspective, let's look at a hypothetical laptop. Assume it has a 50Wh battery and only two power states: light use and heavy use. In light use, the SSD in our laptop consumes 1W; under heavier load it consumes 3W. The other components consume the rest of the power, and to keep things simple let's assume their power consumption is constant and does not depend on the SSD.
 
Our Hypothetical Laptop
Power Consumption | Light Use | Heavy Use
Whole Laptop      | 7W        | 20W
SSD               | 1W        | 3W

Our hypothetical laptop spends 80% of its time in light use and 20% of its time under heavier load. With those characteristics, the average power consumption comes in at 9.6W, and with a 50Wh battery we should get a battery life of around 5.2 hours. This scenario is close to what you could expect from an ultraportable like the 2013 13" MacBook Air, which has a 54Wh battery, consumes around 6-7W while idling, and manages 5.5 hours in our Heavy Workload battery life test.
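For those who want to check the math, here's a minimal Python sketch of the baseline calculation; the power figures and the 80/20 split are simply the assumptions from the table above:

```python
# Baseline battery life for the hypothetical laptop described above.
battery_wh = 50.0      # battery capacity in watt-hours
light_power_w = 7.0    # whole-laptop power draw in light use
heavy_power_w = 20.0   # whole-laptop power draw in heavy use
light_share = 0.8      # 80% of the time is spent in light use
heavy_share = 0.2      # 20% of the time is spent under heavier load

avg_power_w = light_share * light_power_w + heavy_share * heavy_power_w
battery_life_h = battery_wh / avg_power_w

print(f"Average power: {avg_power_w:.1f} W")      # 9.6 W
print(f"Battery life:  {battery_life_h:.1f} h")   # ~5.2 hours
```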

Now the SSD part. In our scenario above, the average power consumption of our SSD was 1.4W, but that was for a SATA 6Gbps design. What if we took a PCIe SSD that was 20% faster in the light use scenario and 40% faster in heavy use? The SSD would spend the saved time idling (at a minimal <0.05W), and its average power consumption would drop to 1.1W. That's a 0.3W reduction in the average power consumption of the SSD, and thus of the system as a whole. In our hypothetical scenario, that works out to roughly a 10-minute increase in battery life.
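As a sketch of how those numbers fall out, here is the same calculation extended with the hypothetical PCIe SSD; the 20%/40% speedups, the <0.05W idle figure, and the "saved time is spent idling" assumption are all taken from the scenario above:

```python
# The faster (hypothetical) PCIe SSD finishes its IO sooner and idles
# at ~0.05 W for the time it saves, lowering its average power draw.
idle_w = 0.05  # assumed SSD idle power from the scenario above

def avg_ssd_power(active_w, speedup):
    # If the work completes 'speedup' times faster, the SSD is active for
    # 1/speedup of the period and idles for the remainder.
    active_fraction = 1.0 / speedup
    return active_fraction * active_w + (1.0 - active_fraction) * idle_w

sata_avg_w = 0.8 * 1.0 + 0.2 * 3.0      # 1.4 W average for the SATA drive
pcie_avg_w = 0.8 * avg_ssd_power(1.0, 1.2) + 0.2 * avg_ssd_power(3.0, 1.4)

saving_w = sata_avg_w - pcie_avg_w       # ~0.3 W
old_life_h = 50.0 / 9.6
new_life_h = 50.0 / (9.6 - saving_w)

print(f"SSD power saving: {saving_w:.2f} W")
print(f"Extra battery life: {(new_life_h - old_life_h) * 60:.0f} minutes")  # ~10 minutes
```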

Sure, ten minutes is just ten minutes, but bear in mind that a single component can't work miracles on battery life. It's when all components become a little faster and more efficient that we gain an extra hour or two. Conversely, if the development of one aspect suddenly stopped (say, if we were stuck with SATA 6Gbps forever), we would be giving up that hour a few years down the road, so it's crucial that all aspects keep being developed even if the improvements aren't immediately noticeable. Furthermore, the idea here is to demonstrate what faster SSDs provide in addition to increased performance: in the end the power savings depend on one's usage, and in more IO-intensive workloads the battery life gains can be much greater than 10 minutes. Ultimately we'll also see even bigger gains once the industry moves from PCIe 2.0 to 3.0, which doubles the bandwidth.

4K Video: A Beast That Craves Bandwidth

Above I tried to cover a usage scenario that applies to every mobile user regardless of workload. However, in the prosumer and professional market segments the need for higher IO performance already exists thanks to 4K video. At 24 frames per second, uncompressed 4K video (3840x2160, 12-bit RGB color) requires about 900MB/s of bandwidth, which is far beyond the limits of SATA 6Gbps. While working with compressed formats is common in 4K due to the storage requirements (an hour of uncompressed 4K video would take 3.22TB), it's also not uncommon for professionals to work with multiple video sources simultaneously, which even with compression can easily exceed the limits of SATA 6Gbps.
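As a quick back-of-the-envelope check of those figures, here is a short Python sketch; the frame geometry, bit depth, and frame rate are the ones quoted above, and the SATA usable-throughput figure in the comment is an approximation:

```python
# Bandwidth and storage requirements for uncompressed 4K video
# (3840x2160, 12-bit RGB, 24 fps), as quoted in the text.
width, height = 3840, 2160
bits_per_pixel = 12 * 3      # 12 bits per color channel, three channels (RGB)
fps = 24

bytes_per_frame = width * height * bits_per_pixel / 8
bandwidth_mb_s = bytes_per_frame * fps / 1e6      # ~896 MB/s
hour_tb = bandwidth_mb_s * 3600 / 1e6             # ~3.2 TB per hour

print(f"Bandwidth: {bandwidth_mb_s:.0f} MB/s")    # well beyond SATA 6Gbps (~550-560 MB/s usable)
print(f"One hour of footage: {hour_tb:.2f} TB")
```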

Yes, you could use RAID to at least partially overcome the SATA bottleneck, but that adds cost (a single PCIe controller is cheaper than two SATA controllers) and, especially with RAID 0, increases the risk of array failure (if one drive fails, the whole array is lost). While 4K is not ready for the mainstream yet, it's important that the hardware base be ready before mainstream adoption begins.

Comments

  • Kristian Vättö - Tuesday, March 18, 2014

    Bear in mind that SATA-IO is not just some random organization that does standards for fun - it consists of all the players in the storage industry. The current board has members from Intel, Marvell, HP, Dell, SanDisk etc...
  • BMNify - Thursday, March 20, 2014

    indeed, and yet it's now clear these and the other design-by-committee organizations are no longer fit for purpose, producing far too little far too late....

    ARM IP = the current generic CoreLink CCN-508, which can deliver up to 1.6 terabits per second of sustained usable system bandwidth, with a peak bandwidth of 2 terabits per second (256 gigabytes/s), scaling all the way up to 32 processor cores in total.

    Intel IP QPI = Intel's Knights Landing Xeon Phi, due in 2015, with its antiquated QPI interconnect and its expected ultra short-reach (USR) interconnection of only up to 500MB/s data throughput, seems a little (or a lot) short on real data throughput by then...
  • Hrel - Monday, March 17, 2014

    Cost: Currently PCI-E SSDs are inexplicably expensive. If this is gonna be the same way, they won't sell no matter how many PCI-E lanes Intel builds into its chipset. My main concern with using the PCI-E bus is cost. Can someone explain WHY they cost so much more? Is it just the niche market or is there an actual legitimate reason for it? Like, are PCI-E controllers THAT much harder to create than SATA ones?

    I very much doubt that's the case. If it is, then I guess prices will drop as that gets easier, but for now they've priced themselves out of competition.

    Why would I buy a 256GB SSD on PCI-E for $700 when I can buy a 256GB SSD on SATA for $120? That shit makes absolutely no sense. I could see like a 10-30% price premium, no more.
  • BMNify - Tuesday, March 18, 2014

    "Can someone explain WHY those cost so much more?"
    greed...
    due mostly to "not invented here"; that's the reason we are not yet using a version of Everspin's MRAM 240-pin, 64MByte DIMM with x72 configuration with ECC, for instance http://www.everspin.com/image-library/Everspin_Spi...

    it can be packaged for any of the above form factors (M.2 etc.) too, rather than have motherboard vendors put extra DDR3 RAM slots dedicated to this DDR3-slot-compatible Everspin MRAM today, with the needed extra DDR3 RAM controllers included in any CPU/SoC....

    rather than license this existing (for 5 years) commercial MRAM product and collaborate to improve the yield and help them shrink it down to 45nm to get it below all of today's fastest DRAM speeds etc., they all want an invented-here product and will make the world markets wait for no good reason...
  • Kristian Vättö - Tuesday, March 18, 2014

    Because most PCIe SSDs (the Plextor M6e being an exception) are just two or four SATA SSDs sitting behind a SATA to PCIe bridge. There is added cost from the bridge chip and the additional controllers, although the main reason is the laws of economics. Retail PCIe SSDs are low volume because SATA is still the dominant interface, and that increases production costs for the OEMs. Low order quantities are also more expensive for the retailers.

    In short, OEMs are just trying to milk enthusiasts with PCIe drives, but once we see PCIe entering the mainstream market, you'll no longer have to pay extra for it (e.g. the SF3700 combines SATA and PCIe in a single chip, so PCIe isn't more expensive with it).
  • Ammohunt - Thursday, March 20, 2014

    Disappointed there wasn't a SAS offering compared; 6Gb SAS != 6Gb SATA
  • jseauve - Thursday, March 20, 2014

    Awesome computer
  • westfault - Saturday, March 22, 2014

    "The SandForce, Marvell, and Samsung designs are all 2.0 but at least OCZ is working on a 3.0 controller that is scheduled for next year."

    When you say OCZ is developing a PCIe 3.0 controller, do you mean that they were working on one before they were purchased by Toshiba, or was this announced since they were acquired by Toshiba? I understand that Toshiba has kept the OCZ name, but is it certain that they have continued all R&D from before OCZ's bankruptcy?
  • dabotsonline - Monday, April 28, 2014

    Roll on SATAe with PCIe 4.0, let alone 3.0 next year!
  • MRFS - Tuesday, January 20, 2015

    I've felt the same way about SATAe and PCIe SSDs --
    kludgy and expensive, respectively.

    Given the roadmaps for PCIe 3.0 and 4.0, it makes sense to me
    to "sync" SATA and SAS storage with 8G and 16G transmission clocks
    and the 128b/130b "jumbo frame" now implemented in the PCIe 3.0 standard.

    Ideally, end users will have a choice of clock speeds, perhaps with pre-sets:
    6G, 8G, 12G and 16G.

    In actual practice now, USB 3.1 uses a 10G clock and 128b/132b jumbo frame:

    max headroom = 10G / 8.25 bits per byte = 1.212 GB/second.

    132 bits / 16 bytes = 8.25 bits per byte, using the USB 3.1 jumbo frame

    To save a lot of PCIe motherboards, which are designed for expansion,
    PCIe 2.0 and 3.0 expansion slots can be populated with cards
    which implement 8G clocks and 128b/130b jumbo frames.

    That one evolutionary change should put pressure on SSD manufacturers
    to offer SSDs with support for both features.

    Why "SATA-IV" does not already sync with PCIe 3.0 is anybody's guess.

    We tried to discuss this with the SATA-IO folks many moons ago,
    but they were quite committed to their new SATAe connector. UGH!
