Why Do We Need Faster SSDs?

The claim I've often seen around the Internet is that today's SSDs are already "fast enough" and that there is no point in faster SSDs unless you're an enthusiast or professional with a desire for maximum IO performance. There is some truth to that claim but the big picture is much broader than that.

It's true that going from a SATA SSD to a PCIe SSD likely won't bring you the same "wow" factor as going from a hard drive to an SSD did, and for an average user there may not be any noticeable difference at all. However, when you put it that way, does a faster CPU or GPU bring you any noticeable increase in performance unless you have a usage model that specifically benefits from them? No. But what happens if the faster component doesn't consume any more power than the slower one? You gain battery life!

If you go back in time and think of all the innovations and improvements we've seen over the years, there is one essential part that is conspicuously absent—the battery. Compared to other components, battery technology hasn't seen any major improvements, and as a result companies have had to rely on making the other components more efficient to increase battery life. If you look at Intel's CPU strategy over the past few years, you'll notice that mobile and power saving have been the center of attention. It's not an increase in battery capacity that has brought us things like 12-hour battery life in the 13" MacBook Air, but more efficient chip architectures that provide more performance without consuming any more power. The term often used here is "race to idle": a faster chip completes a task sooner and can hence spend more time idling, which reduces overall power consumption.

SSDs are no exception to the rule. A faster SSD completes IO requests sooner and thus consumes less power in total because it spends more time idling (assuming similar power consumption at idle and under load for both drives). If the interface is the bottleneck, there will be cases where the drive could complete tasks faster if the interface allowed it. This is where we need PCIe.

To demonstrate the importance of the SSD from the battery life perspective, let's look at a scenario with a hypothetical laptop. Our hypothetical laptop has a 50Wh battery and only two power states: light use and heavy use. In light use the SSD in our laptop consumes 1W, and under heavier load it consumes 3W. The other components consume the rest of the power, and to keep things simple let's assume their power consumption is constant and does not depend on the SSD.
 
Our Hypothetical Laptop

Power Consumption    Light Use    Heavy Use
Whole Laptop         7W           20W
SSD                  1W           3W

Our hypothetical laptop spends 80% of its time in light use and 20% of the time under heavier load. With such characteristics, the average power consumption comes in at 9.6W and with a 50Wh battery we should get a battery life of around 5.2 hours. The scenario here is something you could expect from an ultraportable like the 2013 13" MacBook Air because it has a 54Wh battery, consumes around 6-7W while idling and manages 5.5 hours in our Heavy Workload battery life test.
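
To make the arithmetic explicit, here is a quick Python sketch of the calculation above (the numbers are just the hypothetical figures from the table):

    # Hypothetical laptop: 50Wh battery, 80% of the time in light use, 20% in heavy use
    battery_wh = 50.0
    p_light, p_heavy = 7.0, 20.0     # whole-laptop power draw in watts
    t_light, t_heavy = 0.8, 0.2      # fraction of time spent in each state

    avg_power = t_light * p_light + t_heavy * p_heavy    # 9.6W
    battery_life = battery_wh / avg_power                 # ~5.2 hours
    print(f"Average power: {avg_power:.1f}W, battery life: {battery_life:.1f} hours")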

Now the SSD part. In the scenario above, the average power consumption of our SSD was 1.4W, but that was a SATA 6Gbps design. What if we took a PCIe SSD that was 20% faster in the light use scenario and 40% faster in heavy use? The SSD would spend the saved time idling (with a minimal <0.05W power draw) and its average power consumption would drop to 1.1W. That's a 0.3W reduction in the average power consumption of the SSD, and thus of the system as a whole. In our hypothetical scenario, that works out to roughly a 10-minute increase in battery life.
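
To show where the 1.1W figure and the ten minutes come from, here is a rough Python sketch using the assumptions above (20%/40% speedups, roughly 0.05W SSD idle power, the same 80/20 split between light and heavy use):

    # SSD average power before and after the hypothetical PCIe speedup
    idle_power = 0.05                  # W, SSD idle power
    ssd_light, ssd_heavy = 1.0, 3.0    # W, SSD active power in each state

    def effective_power(active_power, speedup):
        # A drive that is 'speedup' times faster is active for 1/speedup of the
        # time and idles for the rest of it
        active_fraction = 1.0 / speedup
        return active_fraction * active_power + (1 - active_fraction) * idle_power

    sata_avg = 0.8 * ssd_light + 0.2 * ssd_heavy                      # 1.4W
    pcie_avg = 0.8 * effective_power(ssd_light, 1.2) \
             + 0.2 * effective_power(ssd_heavy, 1.4)                  # ~1.1W

    saving = sata_avg - pcie_avg                                      # ~0.3W
    old_life = 50.0 / 9.6                                             # hours
    new_life = 50.0 / (9.6 - saving)
    print(f"SSD power drops {saving:.2f}W, battery life gain: "
          f"{(new_life - old_life) * 60:.0f} minutes")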

Sure, ten minutes is just ten minutes, but bear in mind that a single component can't work miracles on battery life. It's when all components become a little faster and more efficient that we gain an extra hour or two. Conversely, if the development of one aspect suddenly stopped (say we were stuck with SATA 6Gbps for eternity), in a few years we would be an hour of battery life behind where we could have been, so it's crucial that every aspect keeps being actively developed even if the improvements aren't immediately noticeable. Furthermore, the idea here is to demonstrate what faster SSDs provide in addition to increased performance—in the end the power savings depend on one's usage, and in more IO intensive workloads the battery life gains can be much more significant than 10 minutes. Ultimately we'll see even bigger gains once the industry moves from PCIe 2.0 to PCIe 3.0, which doubles the available bandwidth.

4K Video: A Beast That Craves Bandwidth

Above I tried to cover a usage scenario that applies to every mobile user regardless of their workload. In the prosumer and professional segments, however, the need for higher IO performance already exists thanks to 4K video. At 24 frames per second, uncompressed 4K video (3840x2160, 12-bit RGB color) requires about 900MB/s of bandwidth, which is far beyond the limits of SATA 6Gbps. Working with compressed formats is common in 4K because of the storage requirements (an hour of uncompressed 4K video would take 3.22TB), but it's also not uncommon for professionals to work with multiple video sources simultaneously, which even with compression can easily exceed the limits of SATA 6Gbps.
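
The ~900MB/s and 3.22TB figures fall straight out of the frame size and frame rate; here is the same arithmetic as a short Python sketch:

    # Uncompressed 4K: 3840x2160, 12-bit RGB (36 bits per pixel), 24 fps
    width, height = 3840, 2160
    bits_per_pixel = 12 * 3
    fps = 24

    bytes_per_frame = width * height * bits_per_pixel / 8
    bandwidth = bytes_per_frame * fps          # ~896MB/s, well beyond SATA 6Gbps
    per_hour = bandwidth * 3600                # ~3.22TB for an hour of footage
    print(f"Bandwidth: {bandwidth / 1e6:.0f}MB/s, one hour: {per_hour / 1e12:.2f}TB")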

Yes, you could use RAID to at least partially overcome the SATA bottleneck, but that adds cost (a single PCIe controller is cheaper than two SATA controllers) and, especially with RAID 0, increases the risk of array failure (if one drive fails, the whole array is lost). While 4K is not ready for the mainstream yet, it's important that the hardware base is ready by the time mainstream adoption begins.

Comments

  • R0H1T - Thursday, March 13, 2014 - link

    "This is actually the same motherboard as our 2014 SSD testbed but with added SATAe functionality."

    Does this mean you're going to test next-gen SSDs with this (SATAe), and if so, perhaps sometime during the 2014 calendar year?
  • ddriver - Thursday, March 13, 2014 - link

    So why not use two-lane PCIe for the SSD instead - it looks like it uses less power and has higher bandwidth than SATAe?
  • DanNeely - Thursday, March 13, 2014 - link

    Mini ITX with a discrete GPU (or any other card) or mATX with dual GPU setups either don't have anywhere to put a PCIe SSD or don't have anywhere good to put one.
  • SirKnobsworth - Saturday, March 15, 2014 - link

    That's what M.2 is for.
  • Bigman397 - Friday, April 4, 2014 - link

    Which is a much better solution than retrofitting controllers and protocols meant for rotational media.
  • Kristian Vättö - Thursday, March 13, 2014 - link

    The motherboard in our 2014 testbed is the normal Z87 Deluxe without SATAe. There aren't any official SATAe products yet so we're not sure how we'll test those but the ASUS board is certainly an option.
  • MrPoletski - Thursday, March 13, 2014 - link

    I wonder what ridiculous speed SSDs we are going to start seeing with this tech. Quite exciting really.
  • nathanddrews - Friday, March 14, 2014 - link

    The Future!

    http://www.tomsitpro.com/articles/intel-silicon-ph...
  • thevoiceofreason - Thursday, March 13, 2014 - link

    "because after all we are using cabling that should add latency"
    Why would you assume that?
  • DiHydro - Thursday, March 13, 2014 - link

    When talking about one-nanosecond signals, a charge travels approximately 30 cm, or about 1 foot, in that time. If you add length to a signal path, you add delay to the transmission.
