Power Consumption

One of the key features of Haswell will be up to 10 days of connected standby. To enable that, the Serial ATA International Organization (SATA-IO) came up with a new addition to the SATA specification: DEVSLP (short for Device Sleep). It's an addition to the existing SATA 6Gbps standard, but it requires both chipset- and device-level support to function. Haswell will be the first platform to support DEVSLP, but Plextor was forward thinking and the M5M already supports it.

The idea behind DEVSLP is that the drive can enter a very low-power state and resume from it very quickly. For example, if an email comes in, the drive can't spend seconds powering itself up because that would waste battery charge -- it needs to be operational in a matter of milliseconds and then return to the low-power state once the task is complete. Plextor claims that the M5M can resume from DEVSLP in only 100ms and that while in DEVSLP it draws just 1mW of power.
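To put the 1mW figure in perspective, here is a rough back-of-the-envelope comparison against a conventional SATA slumber state. Plextor's 1mW claim is from above; the ~100mW slumber figure is an assumed ballpark for illustration only, not a measured value:

```python
# Rough energy comparison: DEVSLP vs. a conventional SATA slumber state.
# The 1 mW DEVSLP figure is Plextor's claim; the 100 mW slumber figure is
# an assumed ballpark for illustration, not a measured value.
DEVSLP_W = 0.001      # 1 mW, per Plextor's spec
SLUMBER_W = 0.100     # ~100 mW, assumed typical slumber draw
HOUR_S = 3600         # seconds in one hour

devslp_j = DEVSLP_W * HOUR_S    # joules burned per idle hour in DEVSLP
slumber_j = SLUMBER_W * HOUR_S  # joules burned per idle hour in slumber

print(f"DEVSLP:  {devslp_j:.1f} J per idle hour")
print(f"Slumber: {slumber_j:.1f} J per idle hour")
print(f"Savings: {slumber_j - devslp_j:.1f} J per idle hour")
```

Under those assumptions, DEVSLP cuts idle energy use by roughly two orders of magnitude, which is exactly what long connected-standby times depend on.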

Our platform obviously doesn't support DEVSLP (yet), so we can't test this in action. Also, as someone noted in our MyDigitalSSD mSATA SSD review, the mSATA spec is actually rated at 3.3V, while our results are based on measuring the current at the 5V rail coming from the power supply. We don't have the equipment to measure the current at the 3.3V rail (there's a voltage regulator in the adapter that lowers the voltage), so we have to measure the power consumed by both the drive and the adapter. I have no idea how much power the voltage regulator itself is drawing (I know the model and have the datasheet, so if anyone knows how to calculate the power, let me know), and some of the results here do seem pretty high. However, I doubt the M5M draws significantly more power than the M5 Pro given that they share the same components, so the M5 Pro makes a reasonable reference point.
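For what it's worth, if the adapter uses a simple series linear (LDO) regulator -- an assumption on my part, since the actual part could be a switching regulator, in which case this math does not apply -- then the same current flows through the regulator and the drive, and the measured 5V power splits in proportion to the voltages. A quick sketch of that split:

```python
# Split power measured at the 5 V rail between the drive and a series
# linear regulator. Assumes an LDO: input current equals output current,
# so power divides in proportion to voltage drop. NOT valid for a
# switching (buck) regulator, where efficiency must be known instead.
V_IN, V_OUT = 5.0, 3.3

def split_power(measured_w):
    """Given power measured at the 5 V rail, return (drive_w, regulator_w)."""
    current = measured_w / V_IN        # same current through both parts
    drive_w = V_OUT * current          # consumed by the drive at 3.3 V
    reg_w = (V_IN - V_OUT) * current   # burned as heat in the regulator
    return drive_w, reg_w

# Example: 1.00 W measured at the 5 V rail
drive, reg = split_power(1.00)
print(f"drive: {drive:.2f} W, regulator: {reg:.2f} W")
```

Under the LDO assumption, the regulator eats a fixed 34% of whatever we measure at the 5V rail, which would go some way toward explaining why the absolute numbers look high.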

Drive Power Consumption - Idle

Drive Power Consumption - Sequential Write

Drive Power Consumption - Random Write


36 Comments


  • kmmatney - Wednesday, April 17, 2013 - link

    " I strongly recommend having at least 25% free space with the M5M. The more you fill the drive, the more likely it is that you'll face inconsistent performance."

    Would this really affect the average user? Do you let the drives idle long enough so that normal garbage collection can kick in?
  • msahni - Wednesday, April 17, 2013 - link

    Hi there,
    First of all Kristian thanks for the reviews. You've finally answered my queries about the best mSATA SSD to get. (from the Intel 525 review)

    Could you please advise what is the best method to leave the 25% free space on the drive for over provisioning to enhance the performance.

    Cheers....
  • Minion4Hire - Wednesday, April 17, 2013 - link

    Anand answered that in another article. I believe you are supposed to shrink the partition, create a second partition out of the unallocated space, then delete the new partition. The act of deleting the partition prompts the OS to TRIM that portion of the drive, freeing it up for use as spare area. And since you won't be writing to it any more, it is permanently spare area (well, unless you repartition or something).
  • xdrol - Wednesday, April 17, 2013 - link

    Actually, Windows does not trim when you delete a partition, but rather when you create a new one.
  • Hrel - Wednesday, April 17, 2013 - link

    I have wondered for a long time whether the extra free space is really necessary. Home users aren't benchmarking, and drives are mostly idle. It's not often that you transfer 100GB at a time or install programs.
  • JellyRoll - Wednesday, April 17, 2013 - link

    Unrealistic workloads for a consumer environment result in unrealistic test results. How many consumer notebooks or laptops, hell, even enterprise mobile devices, will be subjected to this type of load? Answer: zero.
    Even on a consumer desktop this is NEVER going to happen.
  • JPForums - Thursday, April 18, 2013 - link

    It was stated a long time ago at AnandTech that their testing was harsher than typical consumer loads for the express purpose of separating the field. Under typical consumer workloads, there is practically no difference between modern drives. I don't know how many times I've read that any SSD is a significant step up from an HDD. It has pretty much been a standing assumption since the old JMicron controllers left the market. However, more information is required for those that need (or think they need) the performance to handle heavier workloads.

    Personally, everything else being equal, I'd rather have the drive that performs better/more consistently, even if it is only in workloads I never see. I don't think Kristian is trying to pull the wool over your eyes. He simply gives the readers here enough credit to make up their own mind about the level of performance they need.
  • Kristian Vättö - Wednesday, April 17, 2013 - link

    If the drive is nearly full and there's no extra OP, then it's possible that even normal (but slightly larger/heavier, like an app installation) usage will cause performance to become inconsistent, which will hurt overall performance (average IOPS will go down). Performance will of course recover with idle time, but by then the hit has already been experienced.
  • JellyRoll - Wednesday, April 17, 2013 - link

    Running a simple trace of an application install will show that this is not an accurate statement. This testing also does not benefit from TRIM because there is no filesystem during the test, which ends up making an overly negative portrayal.
  • JPForums - Thursday, April 18, 2013 - link

    Which test in particular are you referring to that has no access to TRIM, that otherwise would?

    As far as application traces go, I can confirm Kristian's statement is accurate on both a Corsair Force GT 120GB and a Crucial M4 128GB. Performance drops appreciably when installing programs with a large number of small files (or copying a large number of small files, e.g. libraries). As an aside, it can also tank the performance of Xilinx ISE, which is typically limited by memory bandwidth and single-threaded CPU performance.
