Why Do We Need Faster SSDs?

A claim I've often seen around the Internet is that today's SSDs are already "fast enough" and that there is no point in faster SSDs unless you're an enthusiast or professional with a need for maximum IO performance. There is some truth to that claim, but the big picture is much broader.

It's true that going from a SATA SSD to a PCIe SSD likely won't bring the same "wow" factor that going from a hard drive to an SSD did, and for an average user there may not be any noticeable difference at all. But by that logic, does a faster CPU or GPU bring any noticeable increase in performance unless your usage model specifically benefits from it? No. But what happens if the faster component doesn't consume any more power than the slower one? You gain battery life!

If you go back in time and think of all the innovations and improvements we've seen over the years, there is one essential component that is conspicuously absent: the battery. Compared to other components there haven't been any major improvements in battery technology, and as a result companies have had to improve the other components to increase battery life. If you look at Intel's CPU strategy over the past few years, you'll notice that mobile and power saving have been the center of attention. It's not an increase in battery capacity that has brought us things like 12-hour battery life in the 13" MacBook Air, but more efficient chip architectures that provide more performance without consuming more power. The term often used here is "race to idle": a faster chip completes a task sooner and can hence spend more time idling, which reduces overall power consumption.

SSDs are no exception to the rule here. A faster SSD will complete IO requests sooner and will thus consume less power in total because it spends more time idling (assuming the two drives have similar power consumption at idle and under load). If the interface is the bottleneck, there will be cases where the drive could complete tasks faster if the interface could keep up. This is where we need PCIe.

To demonstrate the importance of the SSD from a battery life perspective, let's look at a hypothetical laptop. Let's assume it has a 50Wh battery and only two power states: light use and heavy use. In light use the SSD consumes 1W; under heavy use it consumes 3W. The other components consume the rest of the power, and to keep things simple let's assume their power consumption is constant and does not depend on the SSD.
 
Our Hypothetical Laptop
Power Consumption   Light Use   Heavy Use
Whole Laptop        7W          20W
SSD                 1W          3W

Our hypothetical laptop spends 80% of its time in light use and 20% of the time under heavier load. With those characteristics, the average power consumption comes in at 9.6W (0.8 × 7W + 0.2 × 20W), and with a 50Wh battery we should get around 5.2 hours of battery life. This scenario is something you could expect from an ultraportable like the 2013 13" MacBook Air, which has a 54Wh battery, consumes around 6-7W while idling and manages 5.5 hours in our Heavy Workload battery life test.
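
For readers who want to check the math, here is a minimal sketch in Python using the figures from the table above (the 80/20 split between light and heavy use is the scenario's assumption):

```python
# Battery life estimate for the hypothetical laptop described above.
LIGHT_SHARE, HEAVY_SHARE = 0.80, 0.20      # time split assumed in the scenario
SYSTEM_LIGHT_W, SYSTEM_HEAVY_W = 7.0, 20.0 # whole-laptop power draw
BATTERY_WH = 50.0

# Time-weighted average power, then hours of runtime from the battery.
avg_power_w = LIGHT_SHARE * SYSTEM_LIGHT_W + HEAVY_SHARE * SYSTEM_HEAVY_W
battery_life_h = BATTERY_WH / avg_power_w

print(f"Average power: {avg_power_w:.1f}W")          # 9.6W
print(f"Battery life:  {battery_life_h:.1f} hours")  # ~5.2 hours
```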

Now for the SSD part. In the scenario above, the average power consumption of our SSD was 1.4W, but that was a SATA 6Gbps design. What if we took a PCIe SSD that was 20% faster in the light use scenario and 40% faster in the heavy one? Our SSD would spend the saved time idling (at a minimal <0.05W) and its average power consumption would drop to about 1.1W. That's a 0.3W reduction in the average power consumption of the SSD, and of the system total. In our hypothetical scenario, that translates to roughly a 10-minute increase in battery life.
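
The race-to-idle model behind those numbers can be sketched as follows. The 1.2x/1.4x speedups and the ~0.05W idle draw are the assumptions stated above; the model simply shifts the time the faster drive saves into idle:

```python
# Race-to-idle: a faster drive is active for a smaller fraction of the time
# and idles the rest. Speedups and idle power are the article's assumptions.
IDLE_W = 0.05

def avg_ssd_power(active_w: float, speedup: float) -> float:
    """Average SSD power when the same work finishes `speedup` times faster."""
    active_share = 1.0 / speedup  # fraction of time still spent active
    return active_share * active_w + (1.0 - active_share) * IDLE_W

ssd_light = avg_ssd_power(1.0, 1.2)            # ~0.84W instead of 1W
ssd_heavy = avg_ssd_power(3.0, 1.4)            # ~2.16W instead of 3W
ssd_avg = 0.8 * ssd_light + 0.2 * ssd_heavy    # ~1.1W vs 1.4W for SATA

system_avg = 9.6 - (1.4 - ssd_avg)             # ~9.3W for the whole laptop
gain_min = (50.0 / system_avg - 50.0 / 9.6) * 60
print(f"SSD average: {ssd_avg:.2f}W, battery life gain: {gain_min:.0f} minutes")
```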

Sure, ten minutes is just ten minutes, but bear in mind that a single component can't work miracles on battery life. It's when all components become a little faster and more efficient that we gain an extra hour or two. Over a few years you would lose an hour of battery life if the development of one component suddenly stopped (e.g. if we were stuck with SATA 6Gbps for eternity), so it's crucial that all aspects keep being developed even if the improvements aren't immediately noticeable. Furthermore, the idea here is to demonstrate what faster SSDs provide in addition to increased performance: in the end the power savings depend on one's usage, and in more IO intensive workloads the battery life gains can be much more significant than 10 minutes. Ultimately we'll see even bigger gains once the industry moves from PCIe 2.0 to 3.0, which doubles the available bandwidth.

4K Video: A Beast That Craves Bandwidth

Above I covered a usage scenario that applies to every mobile user regardless of their workload. In the prosumer and professional market segments, however, the need for higher IO performance already exists thanks to 4K video. At 24 frames per second, uncompressed 4K video (3840x2160, 12-bit RGB color) requires about 900MB/s of bandwidth, which is far beyond the limits of SATA 6Gbps. While working with compressed formats is common in 4K due to the storage requirements (an hour of uncompressed 4K video takes about 3.22TB), it's not uncommon for professionals to work with multiple video sources simultaneously, which even with compression can easily exceed the limits of SATA 6Gbps.
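
Those figures are easy to verify with a quick back-of-the-envelope check:

```python
# Uncompressed 4K bandwidth and storage, per the figures quoted above
# (3840x2160, 12 bits per RGB channel, 24 frames per second).
width, height, fps = 3840, 2160, 24
bits_per_pixel = 3 * 12  # three channels at 12 bits each

bytes_per_frame = width * height * bits_per_pixel / 8
bandwidth_mb_s = bytes_per_frame * fps / 1e6
hour_tb = bytes_per_frame * fps * 3600 / 1e12

print(f"Bandwidth: {bandwidth_mb_s:.0f} MB/s")  # ~896 MB/s, i.e. ~900MB/s
print(f"One hour:  {hour_tb:.2f} TB")           # ~3.22 TB
```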

Yes, you could use RAID to at least partially overcome the SATA bottleneck, but that adds cost (a single PCIe controller is cheaper than two SATA controllers), and especially with RAID 0 the risk of array failure is higher (if one drive fails, the whole array is lost). While 4K is not ready for the mainstream yet, it's important that the hardware base be ready before mainstream adoption begins.
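
To illustrate the RAID 0 point: striping means the array is lost if any member drive fails, so failure probabilities compound. A toy example, assuming independent failures and a made-up 2% annual failure rate (the rate is purely illustrative, not a measured figure):

```python
# RAID 0 survives only if every member drive survives, so the chance of
# losing the array grows with each drive added to the stripe.
ANNUAL_FAILURE_RATE = 0.02  # illustrative assumption, not a real-world figure

def raid0_failure_prob(drives: int, p: float = ANNUAL_FAILURE_RATE) -> float:
    """Probability that at least one of `drives` independent drives fails."""
    return 1 - (1 - p) ** drives

print(f"Single drive:     {raid0_failure_prob(1):.1%}")  # 2.0%
print(f"Two-drive RAID 0: {raid0_failure_prob(2):.1%}")  # ~4.0%
```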

Comments

  • SunLord - Thursday, March 13, 2014

    I like the idea, but they should have rolled their own custom connector instead of twisting the SATA connector to meet their needs; it looks stupid. A custom high density connector and cable designed specifically for the task would make far more sense than this hodgepodge, but I guess they needed to cut corners to "keep costs down" on something already aimed at the high end, which is even stupider. A nice clean high density interface with a SATA adapter would have been far better.
  • androticus - Thursday, March 13, 2014

    Ugh. What an immensely cumbersome and kludgy design.
  • asuglax - Thursday, March 13, 2014

    Kristian, I completely agree with your final thoughts. I would actually take it a step further and say that Intel should completely do away with the DMI interface and corresponding PCH; they should limit the I/O off the processor to as many PCI-e lanes as possible, 3 DisplayPort outputs (which can be exposed as dual-mode), and however many memory channels. Enterprise could additionally have QPI. I would like to see I/O controllers embedded into the physical interconnects, where PCI-e could be routed to the interconnects and however many USB, SATA, or other connections could be switched and exposed through the devices (I suppose it could be argued that this would be a PCH in itself, only connected through PCI-e instead of DMI). Security protection measures (such as TPM's functionality) should be built in to all components and, while being independently operative, be able to communicate with one another through the presented I/O channels.
  • fteoath64 - Saturday, March 15, 2014

    @asuglax: Intel is known for this and has done it before: provide small incremental additions to the processor and chipset features so they can spin as many iterations of SKUs as possible over a period of time. If they made a radical change, they would risk not being able to manage the incremental changes they wanted. It is a strategy that allows for a large variety of product units, hence expanding the market for themselves. Lately, you see that they have reduced the number of CPU SKUs while expanding the mobile SKUs. This is possible since they are the majority leader in both market segments, which allows them to maximise profits with minimal changes to production. It is a different strategy from AMD's, and a completely different one yet from the ARM SoC vendors'. Intel's strategy seems to be coercing the market to move to a place and pace they want. The ARM guys just give their best shot on every product they have, so we get a lot more than we paid for.
    You just cannot teach an old dog new tricks.
  • Babar Javied - Thursday, March 13, 2014

    This SATA 3.2 business really doesn't make a lot of sense to me, and judging by the comments others seem to agree. Is this supposed to be a temporary thing, a middle man before we get to the good stuff like SATA 4.0? Is that why it's called SATA 3.2?

    So here is a genuine question: why not just use Thunderbolt? It is owned by Intel and they can implement it into their next chipset(s). Also, Thunderbolt uses PCIe lanes, so it is plenty fast without wasting lanes. Sure, the controller and cables are expensive, but once they are mass produced prices should come down, as is common with electronics.

    It seems to me that SATA is going through a lot of trouble to bring out 3.2 when it is only marginally better. I also get the feeling that SSDs are going to get even faster by using more channels (the current standard is 8) and NAND chips (the current standard is 16) as they become the new standard in storage. Of course the transition from HDD to SSD is not going to happen overnight, but it is going to happen, and I get the feeling that 750MB/s is going to become a bottleneck very quickly.

    And finally, by switching to Thunderbolt we would also help kickstart the adoption of that standard and hopefully see it flourish, allowing us to daisy chain monitors, storage drives (SSDs and HDDs), external graphics cards and so much more.
  • SirKnobsworth - Thursday, March 13, 2014

    There's no point to implementing Thunderbolt internally, which is what SATAe is for. For external purposes you can already buy Thunderbolt SSDs.
  • SittingBull - Thursday, March 13, 2014

    I don't feel you have proven that there is any need for these faster drive interfaces, as you hoped to in the title of your article. The need for, let alone the desire for, higher resolution video is anything but proven by anyone I know of. 4K video offers only dubious benefits, as only very large displays can show the difference between it and 1080p, i.e., 70 or 80 inches! The wider colour gamut would be nice but is not really compelling, and those are the only benefits I am aware of. I seriously doubt that the TV or electronics industry is going to be able to sell the 4K idea to the public as a whole. Even 720p is not shown to be lacking until we get into displays larger than 50 inches.

    It is always nice to read up on the tech of the future, and I thank you for explaining SATAe and the other interfaces that are in the works. Eventually these advances will be implemented, but I can't see it happening until there is some sort of substantial demand, and your entire article is built on the premise that we will need the bandwidth to support 4K video quite soon. But we don't ... :(
  • BMNify - Sunday, March 16, 2014

    SittingBull, perhaps you should stick your head out of the Native American Law Students offices and look to your alumni of the Indian Institute of Science for inspiration in the tech world today,

    given that it's clear public knowledge that the years of UHD development by NHK/BBC R&D http://www.bbc.co.uk/rd/blog/2013/06/defining-the-... , now ratified by the International Telecommunication Union, are the minimum baseline for any new SoC design to adhere to and comply with if vendors want to actually reuse their current UHD IP for the longest time scales...

    The main point, if the PR isn't covering it up through omission, is that the new Rec. 2020 colour space offers better colour coverage thanks to 10 bits per sample on UHD-1 consumer-grade panels, and later 12-bit UHD-2 grade panels for the 8192×4320 [8K] consumer market in four years or so.

    To put it simply: the antiquated Rec. 709 (HDTV and below) 8-bit colour = only 256 usable bands of colour per channel.

    The Rec. 2020 colour space at 10 bits per sample = 1024 usable bands of colour per channel, so you get far less banding in lower bit rate encodes/decodes and more compression for a given bit rate, i.e. better visual quality at a smaller size.

    As it happens, NHK announced they are to give another UHD-2/8K (7680 pixels wide by 4320 pixels high) broadcast demo at the coming NAB Show: "Japanese public broadcaster NHK is planning to give a demonstration of '8K' resolution content over a single 6MHz bandwidth UHF TV channel at the National Association of Broadcasters (NAB) Show coming up in Las Vegas, Nevada, April 5 to 10." In order to transmit the 8K signal, whose raw data requirement is 16 times greater than an HDTV signal, it was necessary to deploy additional technologies. These include ultra multi-level orthogonal frequency domain multiplex (OFDM) transmission and dual-polarized multiple input multiple output (MIMO) antennas, in addition to image data compression. The broadcast uses 4096-point QAM modulation and MPEG-4 AVC H.264 video coding.

    We could also debate how Qualcomm and the other Cortex vendors might finally provide the needed UHD-2 data throughput at far lower power with either integrated JEDEC Wide IO2 (25.6GBps/51.2GBps) or Hybrid Memory Cube 2.5D interposer-based architectures, using MRAM inline computation etc.

    Did you notice how an ARM SoC with its current NoC (network on chip) can already beat today's QPI real-life data throughput (1Tb/s, 2Tb/s etc.) at far lower power, never mind the slower DMI above? They only need to bring that NoC capability to the external interconnect to take advantage of it in any number of IO ports.
  • Popskalius - Friday, March 14, 2014

    I haven't even taken my Asus z87 Plus out of its shrink wrap and it's becoming obsolete.
  • SittingBull - Friday, March 14, 2014

    I just put together my own system with an Asus Z87 Plus motherboard, an i7 4770K, 16GB of RAM and an SSD. It is not and will not be obsolete anytime in the near future, i.e., for at least 3 years. Worry not. There isn't anything on the horizon our systems won't be able to deal with.
