Why Do We Need Faster SSDs?

A claim I've often seen around the Internet is that today's SSDs are already "fast enough" and that there is no point in faster SSDs unless you're an enthusiast or a professional who needs maximum IO performance. There is some truth to that claim, but the big picture is much broader.

It's true that going from a SATA SSD to a PCIe SSD likely won't bring the same "wow" factor that going from a hard drive to an SSD did, and for an average user there may not be any noticeable difference at all. However, put it this way: does a faster CPU or GPU bring any noticeable increase in performance unless you have a usage model that specifically benefits from it? No. But what happens if the faster component doesn't consume any more power than the slower one? You gain battery life!

If you go back in time and think of all the innovations and improvements we've seen over the years, one essential part is conspicuously absent: the battery. Compared to other components there haven't been any major breakthroughs in battery technology, and as a result companies have had to rely on improving the other components to increase battery life. If you look at Intel's CPU strategy over the past few years, you'll notice that mobile and power savings have been the center of attention. It's not an increase in battery capacity that has brought us things like 12-hour battery life in the 13-inch MacBook Air, but more efficient chip architectures that provide more performance without consuming any more power. The term often used here is "race to idle": a faster chip completes a task sooner and can hence spend more time idling, which reduces the overall power consumption.

SSDs are no exception to the rule. A faster SSD completes IO requests sooner and thus consumes less power in total because it spends more time idling (assuming the faster drive doesn't draw significantly more power at idle or under load). When the interface is the bottleneck, there are cases where the drive could complete its work faster if the interface were up to the task. This is where we need PCIe.

To demonstrate the importance of the SSD from a battery life perspective, let's look at a scenario with a hypothetical laptop. Let's assume our hypothetical laptop has a 50Wh battery and only two power states: light use and heavy use. The SSD in our laptop consumes 1W in light use and 3W under heavier load. The other components consume the rest of the power, and to keep things simple let's assume their power consumption is constant and does not depend on the SSD.
 
Our Hypothetical Laptop

Power Consumption   Light Use   Heavy Use
Whole Laptop        7W          20W
SSD                 1W          3W

Our hypothetical laptop spends 80% of its time in light use and 20% under heavier load. With those characteristics the average power consumption comes in at 9.6W, and with a 50Wh battery we should get around 5.2 hours of battery life. This is roughly what you could expect from an ultraportable like the 2013 13-inch MacBook Air, which has a 54Wh battery, consumes around 6-7W while idling, and manages 5.5 hours in our Heavy Workload battery life test.
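To make the arithmetic explicit, here is a minimal sketch (in Python) of the duty-cycle math behind those numbers; the 80/20 split, the 7W/20W system draw and the 50Wh battery are the hypothetical values from the table above, not measurements.

```python
# Hypothetical laptop from the table above: 80% light use, 20% heavy use.
LIGHT_SHARE, HEAVY_SHARE = 0.8, 0.2
SYSTEM_LIGHT_W, SYSTEM_HEAVY_W = 7.0, 20.0  # whole-laptop power draw
BATTERY_WH = 50.0

# Time-weighted average power and the resulting battery life.
avg_system_w = LIGHT_SHARE * SYSTEM_LIGHT_W + HEAVY_SHARE * SYSTEM_HEAVY_W
battery_life_h = BATTERY_WH / avg_system_w

print(f"Average draw: {avg_system_w:.1f}W")         # 9.6W
print(f"Battery life: {battery_life_h:.1f} hours")  # ~5.2 hours
```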

Now for the SSD part. In the scenario above, the average power consumption of our SSD was 1.4W, but that was a SATA 6Gbps design. What if we swapped in a PCIe SSD that was 20% faster in the light use scenario and 40% faster under heavy use? The SSD would spend the saved time idling (at a minimal <0.05W), and its average power consumption would drop to about 1.1W. That's a 0.3W reduction in the average power draw of the SSD, and hence of the system as a whole. In our hypothetical scenario, that translates to roughly a 10-minute increase in battery life.
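The "race to idle" effect can be sketched the same way, modeling "20% faster" as completing the same work in 1/1.2 of the time and spending the rest at near-idle power. The speed-up factors and the <0.05W idle draw are the assumptions from the paragraph above, not measured values.

```python
SSD_LIGHT_W, SSD_HEAVY_W = 1.0, 3.0  # SATA SSD draw from the table
SSD_IDLE_W = 0.05                    # assumed near-idle draw of the PCIe SSD

def effective_power(active_w, speedup, idle_w=SSD_IDLE_W):
    """Average draw when the same work finishes 'speedup' times faster
    and the saved time is spent idling."""
    active_fraction = 1.0 / speedup
    return active_w * active_fraction + idle_w * (1.0 - active_fraction)

sata_avg = 0.8 * SSD_LIGHT_W + 0.2 * SSD_HEAVY_W        # 1.4W
pcie_avg = (0.8 * effective_power(SSD_LIGHT_W, 1.2)
            + 0.2 * effective_power(SSD_HEAVY_W, 1.4))  # ~1.1W

# A ~0.3W lower system average: 9.6W -> ~9.3W on the same 50Wh battery.
old_life_h = 50.0 / 9.6
new_life_h = 50.0 / (9.6 - (sata_avg - pcie_avg))
print(f"Extra battery life: {(new_life_h - old_life_h) * 60:.0f} minutes")  # ~10
```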

Sure, ten minutes is just ten minutes, but bear in mind that no single component can work miracles on battery life. It's when all the components become a little faster and more efficient that we gain an extra hour or two. Conversely, if the development of one aspect suddenly stopped (say, if we were stuck with SATA 6Gbps for eternity), in a few years we would be giving up that hour, so it's crucial that every aspect keeps being developed even if the improvements aren't immediately noticeable. Furthermore, the idea here is to show what faster SSDs provide in addition to increased performance: in the end the power savings depend on one's usage, and in more IO-intensive workloads the battery life gains can be much larger than 10 minutes. Ultimately we'll also see even bigger gains once the industry moves from PCIe 2.0 to 3.0 with twice the bandwidth.

4K Video: A Beast That Craves Bandwidth

Above I tried to cover a usage scenario that applies to every mobile user regardless of workload. In the prosumer and professional market segments, however, the need for higher IO performance already exists thanks to 4K video. At 24 frames per second, uncompressed 4K video (3840x2160, 12-bit RGB color) requires about 900MB/s of bandwidth, far beyond what SATA 6Gbps can deliver. Working with compressed formats is common in 4K because of the storage requirements (an hour of uncompressed 4K video would take about 3.22TB), but it's also common for professionals to work with multiple video sources simultaneously, which even with compression can easily exceed the limits of SATA 6Gbps.
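As a quick sanity check, here is a back-of-the-envelope sketch of where those uncompressed 4K figures come from (24fps, 3840x2160, 12 bits per RGB channel):

```python
WIDTH, HEIGHT, FPS = 3840, 2160, 24
BITS_PER_CHANNEL, CHANNELS = 12, 3  # 12-bit RGB

bytes_per_frame = WIDTH * HEIGHT * CHANNELS * BITS_PER_CHANNEL / 8
bandwidth_mb_s = bytes_per_frame * FPS / 1e6
hour_tb = bytes_per_frame * FPS * 3600 / 1e12

print(f"{bandwidth_mb_s:.0f} MB/s")  # ~896 MB/s, well beyond SATA 6Gbps
print(f"{hour_tb:.2f} TB per hour")  # ~3.22 TB
```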

Yes, you could use RAID to at least partially overcome the SATA bottleneck, but that adds cost (a single PCIe controller is cheaper than two SATA controllers), and especially with RAID 0 the risk of array failure is higher (one disk fails and the whole array is gone). While 4K is not ready for the mainstream yet, it's important that the hardware base be ready before mainstream adoption begins.

Comments

  • mkozakewich - Friday, March 14, 2014 - link

    Ooh, or what if we had actual M.2 slots on desktop motherboards that could take a ribbon to attach 2.5" PCIe SSDs?
  • phoenix_rizzen - Thursday, March 13, 2014 - link

    Yeah. Seems strange that they wouldn't re-use the M.2 or mSATA connector for this. Why take up 2 complete SATA slots, and add an extra connector? What are they doing with the SATA connectors when running in SATAe mode?

    It almost would have made sense to make a cable that plugged into <whatever> at the drive end and just slotted into a PCIe x1 or x2 or x4 slot on the mobo. Skip the dedicated slot entirely. Then they wouldn't need that hokey power dongle off the drive connector.
  • frenchy_2001 - Friday, March 14, 2014 - link

    They were looking for backward compatibility with current storage and in that context, the decision makes sense. No need to think about how to plug it, it just slots right where the rest of the storage goes and can even accept its predecessor.
    It's a desktop/server/storage centric product, not really meant for laptop/portable.

    But I agree its place is becoming squished between full PCIe (already used in data centers) and miniPCIe/M.2 used in portables. Since the requirement is already 2x PCIe lanes (like the others), it will be hard to use for lots of storage: you can't fit 24 of those in a rack (which is how most servers use SATA/SAS), as few servers have 48 lanes of PCIe sitting around unused. So it seems reserved for desktops/workstations, and those can easily use PCIe storage...
  • phoenix_rizzen - Friday, March 14, 2014 - link

    Yeah, until you try to connect more than 2 of those to a motherboard. And good luck getting that to work on a mini-ATX/micro-ATX board. Why use up two whole SATA ports, and still use an extra port for the PCIe side of it?

    How are you going to make add-in controller cards for 4+ drives? There's no room for 4 of those connectors anywhere. And trying to do a multi-lane setup like SFF-8087 for this will be ridiculous.

    The connector is dumb, no matter how you look at it. Especially since it doesn't support power.
  • jasonelmore - Saturday, March 15, 2014 - link

    It looks like the only reason to be excited about this connector is putting older 2.5" or 3.5" form factor hard drives on a faster bus.

    Other than that, other solutions already exist and they do it quicker and with less power. It's just a solution to let people use old hardware longer.
  • phobos512 - Thursday, March 13, 2014 - link

    It's not an assumption. The cabling adds distance to the signal path, which increases latency. Electrons don't travel at infinite speed; merely the speed of light (in a vacuum; in a cable it is of course reduced).
  • ddriver - Thursday, March 13, 2014 - link

    You might be surprised how negligible the effect of the speed of electrons is on the total overall latency.
  • Khenglish - Thursday, March 13, 2014 - link

    It's negligible.

    The worst cables carry a signal at 66% of the speed of light, with the best over 90%. If we take the worst case scenario of 66% we get this:

    speed of light = 3*10^8 m/s
    1m / (.66 * 3*10^8 m/s) = 5ns per meter

    If we have a really long 5m cable that's 25ns. Kristian says it takes 115us to read a page. You never read less than 1 page at a time.

    25ns/115us = .0217% for a long 5m cable. Completely insignificant latency impact.
  • willis936 - Thursday, March 13, 2014 - link

    The real latency number to look at is the one cited on the nvme page: 2.8us. It's not so negligible then. It does affect control overhead a good deal.

    Also I have a practical concern of channel loss. You can't just slap a pcie lane onto a 1m cable. Pcie is designed to ride a vein of traces straight to a socket, straight to a card. You're now increasing the length of those traces, still putting it through a socket, and now putting it through a long, low cost cable. Asking more than 1.5GB/s might not work as planned going forward.
  • DanNeely - Thursday, March 13, 2014 - link

    Actually you can. Pcie cabling has been part of the spec since 2007; and while there isn't an explicit max length in the spec, at least one vendor is selling pcie2.0 cables that are up to 7m long for passive versions and 25m for active copper cables. Fiberoptic 3.0 cables are available to 300m.
