
34 Comments


  • vaayu64 - Thursday, August 21, 2014 - link

    Thanks for the review, as always =).
    If you have the opportunity to meet with SanDisk, can you please ask if there will be an mSATA version of their Extreme or Ultra SSDs?
  • vaayu64 - Thursday, August 21, 2014 - link

    Another question, does this X300 provide power loss protection?
    Regards
  • hojnikb - Thursday, August 21, 2014 - link

    Looking at the PCB, it appears not.
  • Samus - Thursday, August 21, 2014 - link

    That's too bad, since it clearly has a piece of (undisclosed capacity) memory on the PCB. Looks to be a 128MB DDR2 chip. I wonder if any user data is stored in there, or if it truly caches only the indirection table?
  • Kristian Vättö - Thursday, August 21, 2014 - link

    The X300s does not have capacitors to provide power-loss protection as that is generally an enterprise-only feature. SanDisk does have a good white paper about their power-loss protection techniques, though.

    http://www.sandisk.com/assets/docs/unexpected_powe...
  • Samus - Friday, August 22, 2014 - link

    Enterprise-only feature? Many mainstream drives have capacitors dating back to the Intel SSD 320 (X25-M v3)

    Some of the cheapest SSDs on the market have capacitors (Crucial MX100), so it's inconceivable to leave them out in 2014.
  • Kristian Vättö - Friday, August 22, 2014 - link

    "Many mainstream drives have capacitors dating back to the Intel SSD 320 (X25-M v3)"

    There are only a handful of client-grade drives that provide power loss protection in the form of capacitors (the Crucial M500, M550 & MX100 and the Intel SSD 730 & SSD 320 are the only ones I can remember).

    The SSD 320 was never strictly a client drive as Intel also targeted it towards the entry-level enterprise market, hence the power loss protection. The SSD 730, on the other hand, is derived from the DC S3500/S3700, so it is basically a client tuned enterprise drive.

    The power loss protection in the MX100 and Crucial's other client drives is not as complete as their marketing makes you think. Crucial only guarantees that the capacitors provide enough power to save the NAND mapping table, which means user data is still vulnerable to loss. That is why the M500DC uses different capacitors: the ones in the client drives do not provide enough power to save all writes in progress.

    SanDisk's approach is to use nCache (i.e. an SLC portion) to flush the NAND mapping table from the DRAM more often. SLC's lower write latency ensures that in case of a power loss the data loss is minimal, though it is true that some data may be lost. Crucial/Micron operates all NAND as MLC, which is why they need the capacitors to make sure that the NAND mapping table is safe.
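The nCache tradeoff described above — flush the mapping table more often so that less is exposed to a sudden power cut — can be sketched as a toy calculation. The numbers and the function name below are made up for illustration; real drives also flush on dirtiness thresholds, not just timers:

```python
def data_at_risk_mb(write_rate_mb_s, flush_interval_s):
    """Worst-case unflushed writes if power is cut just before a flush.
    Purely illustrative: write rate and interval are hypothetical."""
    return write_rate_mb_s * flush_interval_s

# Faster SLC program times let the drive flush the mapping table more
# often, shrinking the window a sudden power loss can wipe out.
mlc_paced = data_at_risk_mb(200, 1.0)   # flushing once per second
slc_paced = data_at_risk_mb(200, 0.1)   # flushing ten times per second
# mlc_paced is ten times larger than slc_paced
```

The point is only proportional: cutting the flush interval by 10x cuts the worst-case exposure by 10x, regardless of the actual write rate.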
  • hojnikb - Friday, August 22, 2014 - link

    On the subject of mapping tables: how do controllers like SandForce (and some Marvell implementations) work without DRAM? Do they dedicate a portion of flash for that, and how do they keep track of that portion's activity (e.g. block wear)?

    Also, since some of the manufacturers use pseudo-SLC (i.e. MLC/TLC acting as SLC), how is the endurance of those cells affected? Can the SLC portion last longer than normal MLC/TLC?
  • Kristian Vättö - Friday, August 22, 2014 - link

    The controller designs that don't utilize DRAM use the internal SRAM cache in the controller to cache the NAND mapping table. It just requires a different mapping table design since SRAM caches are much smaller than DRAM. Ultimately the mapping table is still stored in NAND, though.

    Pseudo-SLC can definitely last longer than MLC/TLC. With only one bit per cell, there is much more voltage headroom as there are only two voltage states.
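The design described above — a small SRAM cache in front of a mapping table that ultimately lives in NAND — behaves like an LRU cache over logical-to-physical entries. A minimal sketch, with hypothetical names and a deliberately tiny "SRAM" capacity:

```python
from collections import OrderedDict

class L2PCache:
    """Toy logical-to-physical (L2P) mapping cache: a small SRAM-like
    LRU cache in front of the full table stored in (slow) NAND."""
    def __init__(self, nand_table, capacity=4):
        self.nand_table = nand_table      # full L2P table resident in NAND
        self.capacity = capacity          # SRAM is far smaller than DRAM
        self.cache = OrderedDict()        # cached slice of the table
        self.nand_reads = 0               # count slow NAND lookups

    def lookup(self, lba):
        if lba in self.cache:
            self.cache.move_to_end(lba)   # hit: mark most recently used
            return self.cache[lba]
        self.nand_reads += 1              # miss: fetch entry from NAND
        ppa = self.nand_table[lba]
        self.cache[lba] = ppa
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return ppa

table = {lba: lba * 7 for lba in range(100)}   # fake L2P entries
c = L2PCache(table, capacity=4)
c.lookup(1); c.lookup(2); c.lookup(1)
# only the first two lookups had to touch NAND: c.nand_reads == 2
```

The DRAM-less penalty shows up as extra `nand_reads` on workloads whose working set exceeds the tiny cache, which is why such designs need a mapping-table layout tuned for small caches.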
  • hojnikb - Friday, August 22, 2014 - link

    So really, MLC/TLC and SLC dies do not differ much internally. I'm guessing that real SLC just uses less on-die error correction than MLC, but the cells shouldn't be that different at all. The same, I suppose, goes for TLC as well.

    If this is the case, it brings up an interesting question: if one were to buy an MLC drive and wanted SLC-grade endurance, one could (if access to the firmware was available) tweak the firmware so the whole drive would act as pSLC; obviously at a cost of performance. Something like nCache 2.0, but expanded to the whole capacity.

    I believe some cheap flash drive controllers offered something like that using their MPtools. I remember messing around with a cheap TLC-based flash drive; once done, I ended up with 1/3 of the capacity, but write speeds increased dramatically.
  • hojnikb - Friday, August 22, 2014 - link

    *at a cost of capacity :) :)
  • Kristian Vättö - Friday, August 22, 2014 - link

    Yeah, fundamentally SLC, MLC and TLC are the same. Of course there are some silicon-level optimizations to better fit the characteristics of each technology, but the underlying physics are the same.

    I'm thinking that pseudo-SLC is effectively just combining the voltage states of MLC/TLC. I.e. an output of 11 or 10 from the NAND would read as 1, which allows for higher endurance since it doesn't matter if the actual voltage state switches from 11 to 10 due to oxide wear-out.
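The state-combining idea above can be illustrated with a toy model. The 11/10/01/00 ordering below is illustrative only, not a real NAND gray code, and `pslc_read` is a hypothetical helper, not any vendor's actual read logic:

```python
# Toy model: an MLC cell has four voltage states encoding two bits.
# Pseudo-SLC keeps only one bit, so two adjacent MLC states collapse
# into one SLC state -- drift between them from oxide wear no longer
# flips the stored value.
MLC_STATES = ["11", "10", "01", "00"]   # lowest to highest voltage

def mlc_read(state):
    return state          # both bits matter, so drift corrupts data

def pslc_read(state):
    return state[0]       # keep one bit: 11 and 10 both read as "1"

# A cell programmed to "11" that drifts to the neighbouring "10" state:
assert mlc_read("11") != mlc_read("10")     # MLC sees corrupted data
assert pslc_read("11") == pslc_read("10")   # pSLC still reads "1"
```

This is the voltage-headroom argument in miniature: with only two meaningful states, the cell tolerates far more charge drift before a read error occurs.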
  • Spoony - Friday, August 22, 2014 - link

    I believe you'd lose half the capacity on your drive. The MLC drives store two bits per cell, so they would store a 1 and a 0 for example. If you now are only allowing it to store a 1, then you've halved the capacity of the cell. Across the entire drive, this would thus halve the total drive capacity.

    As far as performance (read/write speed) I think this would be affected less. SSDs rely on parallelism to extract performance from NAND. The array is just as parallel as before. There might be an impact to performance having to do with extracting less information from each cell, how much this would be I'm not sure.

    I think the changes to firmware would have to be much more substantial than just re-programming how many bits per cell are stored. There is most likely a lot of interesting logic around voltage handling at very small scales. Perhaps even looking at how voltages from neighbouring cells influence each other. I'm not sure how serious this firmware gets regarding physics, but it must have to do some sort of compensation because the drives seem pretty reliable.
  • hojnikb - Friday, August 22, 2014 - link

    Yeah, I've "edited" the post to reflect the loss of capacity. Obviously capacity drops, but it's still way cheaper than real SLC solutions.

    I bet write speeds would actually go up (since this is the exact reason why Samsung and SanDisk are doing pSLC) but reads would stay unaffected (since those are controller/interface limited anyway).
  • BillyONeal - Thursday, August 21, 2014 - link


    "eDrive is not really designed for big corporate operations as it lacks the tools for remote management"

    Erm, what is MBAM for then? http://technet.microsoft.com/en-us/library/hh82607... My work PC has remotely managed BitLocker.
  • Zink - Thursday, August 21, 2014 - link

    MBAM is "Malwarebytes Anti-Malware", a malware removal tool.
  • BillyONeal - Thursday, August 21, 2014 - link

    @Zink: It is also "Microsoft BitLocker Administration and Monitoring".
  • Kristian Vättö - Thursday, August 21, 2014 - link

    Looks like I should have done my research better. Thanks for the heads up, I've edited the review to remove the incorrect reference.
  • thecoolnessrune - Thursday, August 21, 2014 - link

    Yep, the company I work with also has all of our drives encrypted with BitLocker. It's managed by MBAM and integrated right into the rest of Active Directory management. Really simple for the domain administrators (and relevant IT HelpDesk personnel) to use and manage.

    eDrive can fit in the Enterprise environment quite well.
  • cbf - Thursday, August 21, 2014 - link

    Yup. As the other commenters indicate, the only thing we care about in the enterprise is BitLocker. Hell, even if it were my personal drive, I'd probably only use BitLocker. I just trust it more than the third-party solutions.

    So why don't you review this drive's encryption features using BitLocker? Anand showed how to do this last April: http://www.anandtech.com/show/6891/hardware-accele...
  • Kristian Vättö - Friday, August 22, 2014 - link

    That is not true. Windows 7 is still the dominant OS in the enterprise space with Windows 8 only having a marginal share:

    http://www.sysaid.com/company/press/382-global-win...

    Yes, that is one-year-old data but it shows that enterprises are not very keen on W8 and are adopting it very slowly. That in turn leaves a huge market for solutions like Wave ECS since the BitLocker in Windows 7 does not support Opal.

    Besides, eDrive/BitLocker is the same for every drive. I don't see the need to revisit it with this drive because the process is not any different.
  • cbf - Friday, August 22, 2014 - link

    Well, that market share article is from June 2013.

    While I don't think Windows 8.1 is taking the market by storm, I think it is creeping in. I've deployed it due to things like improved startup/hibernation, BitLocker improvements, etc. The Start menu just isn't that big a deal for my users.

    In any event, it looks like we'll see Win 9 in the next six months, which I predict enterprises will deploy as fast as they've ever deployed any new Windows OS, so that should settle the issue.
  • jabber - Saturday, August 23, 2014 - link

    Maybe not.

    Windows 9 is too soon. A lot of corps are only two years into their OS refresh; they aren't going to change till maybe 2017 at the earliest, and then 10 is round the corner. A lot haven't moved to 7 till this year, so they are going to hang around till 2020. Windows 10 will be the one that fits the schedule better.

    9 will probably bomb. Plus, anyone knows that 9 is purely a rushed damage-limitation exercise.
  • devione - Thursday, August 21, 2014 - link

    Hi Kristian,

    Really appreciate your efforts. However, would it be possible to see future reviews involving enterprise-grade SSDs? Thanks for your time.
  • Kristian Vättö - Friday, August 22, 2014 - link

    Yeah, we have something in the works :)
  • jay401 - Friday, August 22, 2014 - link

    By the way, the Samsung 840 Evo in 256GB and 512GB sizes just dropped $20 and $50 in price on Amazon. $119 and $199 respectively, though the 500GB did just bump back up slightly to $212.
  • 7amood - Friday, August 22, 2014 - link

    I appreciate the SEDs, but these aren't open source and can't be audited like TC, which is being audited right now. How can we know for sure that the SED encryption is secure and doesn't have backdoor code for spying?
  • fk- - Saturday, August 23, 2014 - link

    I'm still a bit confused about one thing - with all that security software listed in the table, what are the motherboard requirements to use the encryption on this drive? Do I still need a motherboard capable of setting ATA password if I want to password-protect the data on the drive?

    Or, to put it straight: is there any [software] way to password-protect the drive (and be prompted to enter the password on startup) on an older motherboard without UEFI, without ATA password capabilities and without Opal certification?
  • Kristian Vättö - Sunday, August 24, 2014 - link

    Wave's ECS should do that as long as the drive is Opal certified. I tested without UEFI and it worked fine. The ATA password is just a BIOS feature, whereas Opal is independent of the rest of the system (i.e. it should work with any motherboard or system).
  • mike8675309 - Sunday, August 24, 2014 - link

    I assume that when you use the tool to secure erase and enter the PSID, the drive is deactivating the encryption and then secure erasing itself. That tool and process must do something more complex so that it avoids creating an attack vector.
  • Death666Angel - Tuesday, August 26, 2014 - link

    I'm totally not getting the new drop-down menus in the consistency part of the review. I only get one set of data points in the chart even though I can select two (different) items. It changes whether I change the first or the second part. Can someone explain what it shows me and when?
  • doylecc - Tuesday, August 26, 2014 - link

    The drop-down menus in the consistency part of the review are not working properly. The only way I could make the charts show the performance with 25% over-provisioning was to choose another SSD from the menu (I chose the A-Data since it is right next to the X300) and then change back to the X300. When I did that, the chart would update.

    I had to repeat with the default over-provisioning menu to get the chart to change back. This is a pain and needs to be corrected!
  • Kristian Vättö - Wednesday, August 27, 2014 - link

    I've noticed that too. Let me see if there is something we can do to fix it -- my HTML skills are limited to copy-pasting so I need to ask someone else to have a look at the code.
  • Gonemad - Wednesday, August 27, 2014 - link

    I wonder if encryption would affect deduplication in any kind of setup. As far as I know, repeatable patterns that can be compressed are exactly the thing that encryption prevents, and any deduplication effort must happen before the drive is encrypted. Will encryption ALWAYS be transparent?
