
  • MrCommunistGen - Thursday, July 25, 2013 - link

    YES! I've been excitedly waiting for this review since the announcement! Reply
  • Byte - Thursday, July 25, 2013 - link

    Writes for the 120GB are still quite slow. Reply
  • chizow - Thursday, July 25, 2013 - link

    That's nearly universal for entry-level capacity SSDs, though. It's similar to RAID 0: when you can write to symmetrical NAND packages, you see a significant increase in write speeds. Reply
  • OUT FOX EM - Monday, July 29, 2013 - link

    Speaking of RAID 0, if you'll notice, all the drives of 250GB and higher perform around the same. You are MUCH better off getting 4x250GB drives instead of the 1TB. With most models the cost will actually be about the same, but the RAID will be up to 4x faster while maintaining the same capacity.

    Of course there are other drawbacks, like space inside your PC and the number of available SATA ports on your motherboard, but if those aren't a factor, buying multiple SSDs is a much better option in terms of performance. I don't see many reviews mention this fact.
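OUT FOX EM's cost/throughput claim can be sanity-checked with a quick sketch. The drive prices and per-drive write speeds below are illustrative assumptions, not figures from the review, and the linear scaling is an ideal upper bound:

```python
def raid0_estimate(n_drives, per_drive_write_mbps, per_drive_price_usd):
    """Ideal RAID 0: writes are striped across all members, so
    throughput scales linearly with drive count. Real controllers
    and chipset SATA ports fall short of this upper bound."""
    return {"write_mbps": n_drives * per_drive_write_mbps,
            "price_usd": n_drives * per_drive_price_usd}

# Assumed street prices/speeds, for illustration only.
array = raid0_estimate(4, 250, 170)   # 4x 250GB drives
single = raid0_estimate(1, 430, 650)  # one 1TB drive

print(array)   # ~4x the write throughput of the single drive
print(single)  # at a broadly comparable total price
```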
  • Jorgisven - Thursday, August 01, 2013 - link

    Much better in terms of performance, but I wouldn't recommend RAID 0 for 4 SSDs. RAID 6 is likely a better option, as it is fault tolerant without losing too much space. It's a bit of a personal decision, but the RAID concepts hold true whether it's SSD or not. Additionally, 4x250GB is likely a good percentage more expensive than the already expensive 1TB SSD. Reply
  • Democrab - Thursday, August 15, 2013 - link

    I'm not sure about you, but I'm only storing replaceable data on my SSDs. There are game saves, but they're automatically put on Google Drive too, so I get backups easily. It's easy to set something like that up and then just get the benefits of RAID 0, although I'd be using a RAID card, as the chipset would likely bottleneck it. Reply
  • yut345 - Thursday, December 12, 2013 - link

    I agree. Due to the volatile nature of SSDs, and the fact that if they go down your data can't really be recovered like it could be on a mechanical drive, I do not plan to store anything on the drive that I don't also back up somewhere else. Reply
  • m00dawg - Friday, August 23, 2013 - link

    With only 4 drives, a RAID10 would be much preferable. 1/2 the available space (same as a 4 drive RAID6 in this case), but without the need to calculate parity, worry (as much) about partitioning alignment, and you can still handle up to 2 drive failures (though only if they are on different stripes). Reply
  • fallaha56 - Friday, September 19, 2014 - link

    Sorry, but I disagree: this will defeat the point unless you're on a top-end RAID controller, and then you get no TRIM.

    When there are no moving parts, reliability becomes much less of an issue, especially for an OS drive with cloud and local backup, as most of us high-end users have.
  • Stas - Tuesday, September 24, 2013 - link

    That's what I did for the recent laser data processing builds. 4x250GB 840s and a 1TB HDD for nightly backup. Only data is stored on the array. Speeds are up to 1600MB/sec. Needless to say, the client is very happy :) Reply
  • yut345 - Thursday, December 12, 2013 - link

    That would depend on how large your files are and how much space of the drive you will be using up for storage. I would fill up a 250GB drive almost immediately and certainly slow it down, even though I store most of my files on an external drive. For me, a 1TB would perform better. Reply
  • Romberry - Saturday, July 27, 2013 - link

    Well... that sort of depends, doesn't it? The first 2.5-3GB or so go at close to 400MB/s before depleting the TurboWrite buffer and dropping down to around 110-120MB/s, and 2-3GB covers a lot of average files. Even a relatively small video fits. And as soon as the TurboWrite cache is flushed, you can burst again. All in all, long (very large file) steady-state transfer on the 120GB version is average, but more typical small and mid file sizes (below the 3GB TurboWrite limit) relatively scream. Seems to me that real-world performance is going to feel a lot quicker than those large-file steady-state numbers might suggest. The 120GB version won't be the first pick for ginormous video and graphics file work, but outside of that... 3GB will fit a LOT of stuff. Reply
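Romberry's rough figures (≈400MB/s until the ~3GB TurboWrite buffer runs out, ≈115MB/s after) make a simple transfer-time model. A sketch under those assumed rates, not measured values:

```python
def transfer_seconds(file_gb, cache_gb=3.0, burst_mbps=400.0, steady_mbps=115.0):
    """Rough write time on the 120GB EVO: the first cache_gb go at
    the TurboWrite burst rate, anything beyond that at the slower
    post-cache steady rate. Rates are the comment's estimates."""
    fast_gb = min(file_gb, cache_gb)
    slow_gb = max(file_gb - cache_gb, 0.0)
    return (fast_gb * 1024) / burst_mbps + (slow_gb * 1024) / steady_mbps

print(round(transfer_seconds(2.0), 1))   # a 2GB file fits in the buffer
print(round(transfer_seconds(10.0), 1))  # a 10GB file mostly doesn't
```

Per this model a 2GB file writes in ~5 seconds, while a 10GB file takes over a minute, which is why small and mid-size transfers "relatively scream" while huge ones crawl.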
  • MrSpadge - Saturday, July 27, 2013 - link

    Agreed! And if you're blowing past the 3 GB cache, you'll need some other SSD or RAID to actually supply your data any faster than the 128 GB 840 EVO can write. Not even GBit LAN can do this. Reply
  • nathanddrews - Thursday, July 25, 2013 - link

    RAPID seems intended for devices with built-in UPS - notebooks and tablets. Likewise, I wouldn't use it on my desktop without a UPS. Seems wicked cool, though. Reply
  • ItsMrNick - Thursday, July 25, 2013 - link

    I don't know if I'm as extreme as you. The fact is your O/S already keeps some unflushed data in RAM anyways - often times "some" means "a lot". If RAPID obeys flush commands from the O/S (and from Anand's article, it seems that it does) then the chances of data corruption should be minimal - and no different than the chances of data corruption without RAPID. Reply
  • Sivar - Thursday, July 25, 2013 - link

    You can always mount your drives in synchronous mode and avoid any caching of data in RAM.
    I wouldn't, though. :)
  • nathanddrews - Thursday, July 25, 2013 - link

    soo00 XTr3M3!!1 Sorry, I just found that humorous. I've actually been meaning to get a UPS for my main rig anyway, it never hurts. Reply
  • MrSpadge - Saturday, July 27, 2013 - link

    It does hurt your purse, though. Reply
  • sheh - Thursday, July 25, 2013 - link

    I wonder how it's any different from the OS caching. Seemingly, that's something that the OS should do the best it can, regardless of which drive it writes to, and with configurability to let the user choose the right balance between quick/unreliable and slower/reliable. Reply
  • Death666Angel - Friday, July 26, 2013 - link

    That was my thought as well. The OS should know what files it uses most and what to cache in RAM. Many people always try to have the most free RAM possible, I'd rather have most of my RAM used as a cache. Reply
  • halbhh2 - Saturday, July 27, 2013 - link

    Exactly. I found that even moving up from 8GB to 16GB had a great effect for me with an old Samsung F3 hard drive. The difference: just 30 minutes after boot, loading an often-used program like iTunes (for podcasts) was very similar in speed to my laptop, which has an 830 SSD and only 4GB. Both load in about 4 seconds; the 16GB desktop loads so fast because it has had time to cache a lot of iTunes. Before the RAM upgrade, that load time for iTunes on the desktop was about 14 seconds. Quite a difference due to Windows 7 caching. The extra improvement I'd get from installing an SSD in the desktop now would be modest, since I usually only need to reboot once or twice a week. Still, the sweet spot of price/performance for me is approaching, probably around $60-$70, and that won't be long. Reply
  • Klimax - Sunday, July 28, 2013 - link

    It's in the wrong place. Unlike OS-level caching (at least in Windows), which is a cooperation between the memory manager, cache manager, and file system driver, this sits too low in the chain: it sees only requests and nothing else. It also takes memory away from the OS, and takes too little of it to be effective. Reply
  • Coup27 - Thursday, July 25, 2013 - link

    Typo: "although I wouldn't recommend deploying the EVO in a write heavy serve Microsoft's eDrive standard isn't supported at launch"

    Excellent article. Samsung continue to push SSDs and I'm really excited about RAPID. Is the 840 Pro due for a successor any time soon? I am selling my current ATX Sandy Bridge + 830 and getting a mITX Haswell + (840 Pro?); I want the fastest Samsung consumer SSD available, and I'd be gutted to buy an 840 Pro only to see its successor released a few weeks later.
  • vLsL2VnDmWjoTByaVLxb - Thursday, July 25, 2013 - link

    Another typo last page:
    "Even though its performnace wasn't class leading, it was honestly good enough to make the recommendation a no-brainer. "
  • JDG1980 - Thursday, July 25, 2013 - link

    Will there be an 840 EVO Pro coming out later? To me, TLC is still a deal-breaker.
    By the way, what happens if power goes out during a TurboWrite (before the data has been written to the normal storage space)? Does this result in data loss, or, worse, bricking? I'd suspect Samsung at least avoided the latter, but I'd like to see some confirmation on this.
  • sherlockwing - Thursday, July 25, 2013 - link

    I guess you didn't read the endurance part of the review? Even if you write 100GiB a day, all of those drives last longer than their warranty (3 years); that's more than enough endurance. Reply
  • Coup27 - Thursday, July 25, 2013 - link

    Some people just don't want to accept the facts. TLC could get to 99.9% of MLC endurance and people would still want MLC. I've been deploying 840s in a light-duty enterprise environment and they've been fine. The only reason I use MLC at home is because I want the absolute fastest performance and I can afford it, not because I actually need it. Reply
  • Oxford Guy - Thursday, July 25, 2013 - link

    The SanDisk Extreme 240 was just on sale for $150. TLC NAND still seems like a solution in need of a problem. Reply
  • Spunjji - Friday, July 26, 2013 - link

    You can approach TLC pricing with an MLC drive in a sale, but the fact remains that when it comes to sustainable production pricing, TLC NAND has a 50% density advantage (and thus a manufacturing cost advantage) over MLC. Given that NAND price determines drive cost, and drive cost is the primary barrier of entry to SSDs, I'm fairly sure it has a problem to solve.

    FWIW I have not seen any drive touch the 120GB 840's price here in the UK, on sale or otherwise.
  • Oxford Guy - Friday, July 26, 2013 - link

    However, there is also the problem of increasing latency and decreasing lifespan from node shrinkage. Reply
  • MrSpadge - Saturday, July 27, 2013 - link

    Agreed: no real drive (read: not a rubbish sale) can touch the price of a 128 GB 840 here in Germany either. Reply
  • MamiyaOtaru - Friday, July 26, 2013 - link

    hell with that I want SLC Reply
  • Notmyusualid - Friday, July 26, 2013 - link

    Me too... Reply
  • Dal Makhani - Thursday, July 25, 2013 - link

    why still a dealbreaker? Unless you write a TON. Its a great drive and you dont really need MLC. Reply
  • Heavensrevenge - Thursday, July 25, 2013 - link

    Worrying about TLC is pretty pointless. I still have a 32MB Sony flash stick I used around 2003; its flash memory wasn't rated and wear-leveled the way flash is nowadays, and it's not dead or corrupted somehow lol. If you have a USB stick more than 5 years old, or flash cards for a camera that are a few years old and still working 100% fine from before people were so uselessly worried about flash endurance, then these drives will pose no problems whatsoever. Reply
  • Oxford Guy - Thursday, July 25, 2013 - link

    I thought the Vertex 2 firmware problems (especially the wake-from-sleep bug) were overblown until I had three of them die. I finally gave up on RMAs because the replacements died, too. Anandtech was so positive about OCZ and its Vertex 2. Funny how the drives didn't turn out to be so great. I don't remember the rave reviews covering the wonderful panic mode, either. Reply
  • HisDivineOrder - Saturday, July 27, 2013 - link

    Lots of people were raving about OCZ back then. Today, it's clear. Friends don't let friends OCZ. Reply
  • Shadowmaster625 - Thursday, July 25, 2013 - link

    Can you test RAPID by cutting power to a PC while doing normal everyday stuff like surfing the web, watching a YouTube video, or loading a game? I would like to know how likely it is for Windows to have an unrecoverable error if it loses power while this caching solution is active. Reply
  • Spunjji - Friday, July 26, 2013 - link

    I second that request. Reply
  • MrSpadge - Saturday, July 27, 2013 - link

    You'd need to be writing to the disk to provoke errors, not reading. Reply
  • B0GiE-uk- - Thursday, July 25, 2013 - link

    Seeing as this drive is similar to the 840 basic, it will be interesting to see the performance of the 840 Pro with the RAPID software enabled. It has the potential to be faster than the EVO. I have heard that the RAPID software will be backwards compatible. Reply
  • sheh - Thursday, July 25, 2013 - link

    Caching speed is based on RAM, flushing speed on drive. I don't think there will be any surprises. Reply
  • Heavensrevenge - Thursday, July 25, 2013 - link

    Finally we're seeing a transition to RAM caches. It's nice that a RAM disk is being utilized, and I hope the trend continues so that the HDD/SSD can actually be taken out of the storage hierarchy for the OS and operating memory, with EVERYTHING residing in a non-volatile RAM space together, for CRAZY increases in performance. HDDs are, in a way, a side effect of old memory being so small that there had to be a drive backing the RAM. Of course we still need traditional storage for actual archival purposes. But I'll hope for a migration towards a similarly fast combination of RAM+drive as the main root drive, built right onto the motherboard in a stick-like way, within 10 years, to cause a nice little computing revolution by re-architecting the classical storage hierarchy. That, I believe, is now quite possible and reasonable. Reply
  • DanNeely - Thursday, July 25, 2013 - link

    Modern OSes have been doing ram cache for years. Samsung is able to "cheat" with rapid because they've got a much better view of what the drive is doing internally to optimize for it (even if the data isn't normally exposed via standard APIs). Eventually OS authors will catch up and have SSD optimized caches instead of HDD optimized ones and it will again be a moot point. Reply
  • Jaybus - Thursday, July 25, 2013 - link

    Yes. It is doing the same thing as the O/S cache, but using a different algorithm to decide which blocks to cache, one that is tailored to SSDs. So the O/S is very likely to adopt something similar in the future.

    What is more interesting is TurboWrite. If you consider the onboard DRAM an L1 cache, then TW implements a more-or-less L2 cache in NAND by using some of the NAND array in SLC mode instead of TLC mode. In addition to greater endurance, SLC mode allows much faster P/E cycles than TLC (or MLC). And unlike the DRAM cache, the SLC-mode NAND cache is not susceptible to power-failure data loss. It still is not nearly as fast as DRAM, so the L1 DRAM cache is still needed (encryption would kill performance without DRAM). But because data can be moved from the DRAM cache to the SLC cache more quickly, DRAM is freed at a faster rate, which increases throughput. So unless you are writing an awful lot of data continuously, you essentially get SLC performance from a TLC drive. That is the EVO(lutionary) thing about this drive, much more so than the RAPID software.
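The two-tier behavior Jaybus describes can be sketched as a toy model. The buffer size and rates below are illustrative assumptions, not Samsung's specifications:

```python
class TurboWriteModel:
    """Toy model of an SLC-mode write buffer in front of TLC NAND:
    writes land in the SLC region at the fast rate until it fills,
    then go straight to TLC at the slow rate; idle time drains the
    buffer into TLC in the background."""
    def __init__(self, buf_gb=3.0, slc_mbps=400.0, tlc_mbps=115.0):
        self.buf_gb, self.used_gb = buf_gb, 0.0
        self.slc_mbps, self.tlc_mbps = slc_mbps, tlc_mbps

    def write(self, gb):
        """Return the seconds this write takes under the model."""
        fast = min(gb, self.buf_gb - self.used_gb)
        slow = gb - fast
        self.used_gb += fast
        return (fast * 1024) / self.slc_mbps + (slow * 1024) / self.tlc_mbps

    def idle(self, seconds):
        """Background flush: the buffer drains to TLC while idle."""
        self.used_gb = max(self.used_gb - seconds * self.tlc_mbps / 1024, 0.0)

m = TurboWriteModel()
burst = m.write(2.0)     # fits in the SLC buffer: fast
overflow = m.write(2.0)  # 1GB fast, 1GB at TLC speed: much slower
m.idle(60)               # after an idle minute the buffer has drained
recovered = m.write(2.0) # fast again
```

This is why bursty desktop workloads see SLC-like speeds while sustained large transfers fall back to TLC rates.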
  • Heavensrevenge - Thursday, July 25, 2013 - link

    Heh, yes, of course. I mean removing the hard drive/solid state drive from the storage hierarchy completely and putting all OS and cache data into non-volatile silicon where the RAM sits today, making all operations go as fast as RAM-disk speed, not just having it there as a way to hide latency. Like booting from modules plugged directly into the motherboard and everything :) THAT'S what I'd love to see: 1-2GB/s 4K read and write speeds all around, not just for special use cases. Because the fab process is becoming small enough to fit that amount of data there, we can actually get rid of that part of the storage hierarchy, if you know what I mean. Reply
  • Spunjji - Friday, July 26, 2013 - link

    I think there's always going to be a place for slower, denser storage in any sensible storage hierarchy. I think what you're looking forward to is MRAM/PRAM, though. :) Reply
  • Heavensrevenge - Saturday, July 27, 2013 - link

    MRAM or any NVRAM is basically the concept I was wanting :) Thank you for the reference!!
    The day/year/decade that type of memory becomes our RAM and OS/boot drive replacement in the storage hierarchy will be one of the best times in modern computing history.
    Honestly, all HDD/SSD manufacturers should stop wasting their R&D on this type of crap, even though SSDs are a wonderful "now" solution to the problem and I'll still recommend them for the time being.
    The sooner that type of memory is our primary first-level storage, directly addressable from the CPU, the better our modern world of computing will become and begin evolving again.
  • MrSpadge - Saturday, July 27, 2013 - link

    I don't think Samsung is doing anything better here, or working some SSD magic. They're just being much more aggressive with caching than Windows dares to be. Reply
  • Touche - Thursday, July 25, 2013 - link

    I don't think your tests are representative of most people's usage, especially for these drives. TurboWrite should prove to be a much better asset for most, so the drive's real-world performance is actually quite a bit better than this review indicates. Reply
  • Sivar - Thursday, July 25, 2013 - link

    Really well-written article.
    I have to admit, while most of Samsung's products are crap, their 840 and later SSDs are not bad at all.
    (The 830, while not prone to electronic failure, was built really poorly. Its SATA connector would snap off if you tilted your head the wrong way while looking at it.)
  • Coup27 - Thursday, July 25, 2013 - link

    Samsung have gotten into the world position they are in today by selling crap. I have used plenty of 830's and I have never had an issue with the SATA connector so I have no idea what you are doing with it. Reply
  • Coup27 - Thursday, July 25, 2013 - link

    Haven't ^^ (why is there no edit button?) Reply
  • piroroadkill - Thursday, July 25, 2013 - link

    So you accidentally broke a SATA connector, and now that's suddenly a flaw? I have two Samsung 830 256GB in my system, and somehow I didn't break the SATA connectors...
    I also fitted 4x Samsung 830 256GB to a server at work... and somehow I didn't break the SATA connectors...
  • HisDivineOrder - Saturday, July 27, 2013 - link

    True, this. SATA connectors are poorly designed, but that's the fault of the people who made the spec, not the specific one in the 830. I'm not saying it can't break. I've had SATA connectors break on a variety of devices. None of them were my 830, but I'm not saying it's impossible or whatever.

    I've seen WD, Seagate, and Hitachi drives all have a problem with the connector, though. Seems like SATA and HDMI were designed to make the connection as loose and easily broken as possible. I guess that gives them some small percentage of people buying all new product to replace something on said product that's small and plastic...
  • mmaenpaa - Thursday, July 25, 2013 - link

    Good article once again Anand,

    and very good performance for this price range.

    Regarding Torx, I believe this is one of the main reasons why it is used:

    "By design, Torx head screws resist cam-out better than Phillips head or slot head screws. Where Phillips heads were designed to cause the driver to cam out, to prevent overtightening, Torx heads were designed to prevent cam-out. The reason for this was the development of better torque-limiting automatic screwdrivers for use in factories. Rather than rely on the tool slipping out of the screw head when a torque level is reached, thereby risking damage to the driver tip, screw head and/or workpiece, the newer driver design achieves a desired torque consistently. The manufacturer claims this can increase tool bit life by ten times or more"


  • hybrid2d4x4 - Thursday, July 25, 2013 - link

    For what it's worth, my experience with screws is consistent with your post. I've never had a Torx screw slip out, which is definitely not the case with Phillips or the square or flathead varieties. I'd like to see them used more often. Reply
  • piroroadkill - Thursday, July 25, 2013 - link

    Agreed. I love Torx. Phillips and Pozidriv are the terrible bastard children of the screw universe. Always slipping and burring. Ugh. If everything was replaced with totally cam-out-free designs like Torx, Allen head, Robertson, etc., then I'd be more than happy. Reply
  • psuedonymous - Thursday, July 25, 2013 - link

    I'd LOVE for Torx to be used more often. They're much easier to work with (not once have I had a Torx screw fall off the screwdriver and roll under the desk), the screwheads are more robust, and they frankly look a lot nicer than Phillips or Pozidriv.

    It'd make pulling apart laptops all day a darn sight less onerous if Torx were the standard rather than Phillips.
  • camramaan - Friday, February 14, 2014 - link

    But then there would be less security in other areas of the mechanical world... not everyone carries a bunch of Torx bits everywhere they go, so breaking into or disassembling something built with Torx is more laborious and has to be pre-planned. I fully understand the sentiments, but the development of alternative screw heads was more for security than ease of use. Reply
  • ervinshiznit - Thursday, July 25, 2013 - link

    Typo? On the TurboWrite page you say "For most light use cases I can see TurboWrite being a great way to deliver more of an MLC experience but on a TLC drive."
    It should be "deliver more of an SLC experience, but on a TLC drive."
  • ciri - Sunday, July 28, 2013 - link

    SLC>MLC>TLC Reply
  • Guspaz - Thursday, July 25, 2013 - link

    The fact that RAPID sees any performance improvement at all illustrates to me a failure of the operating system's disk caching subsystem. That's all that RAPID really is, after all: a replacement for the Windows disk cache.

    I'd be curious to see the performance results of RAPID compared to the disk caching subsystems on other platforms, such as Linux and ZFS (which even on Linux has its own cache, called the "ARC"). Are the large improvements because Windows disk caching is particularly bad, or because RAPID is a better implementation than anybody else's?
  • themelon - Thursday, July 25, 2013 - link

    Windows is absolutely horrible at filesystem caching, and I don't think it does any sort of block caching. It seems to use more of a FIFO algorithm that has no sequential-write bypass no matter what you do. ZFS and the two block-device caches that were recently integrated into the Linux kernel, bcache and dm-cache, use more of an LRU method. All of them have at least basic sequential-bypass detection as well. bcache in particular is tunable to your load in almost all aspects of performance. Of course these do only block-side caching and currently have no filesystem-specific knowledge.

    There is some interesting work going on to track hot spots that will eventually allow for preemptive cache warming and/or hot relocation. Right now it is BTRFS-specific, but it is being integrated below the filesystem layer, so any filesystem will eventually be able to take advantage of it.

    ZFS on Linux is a waste of time in my opinion. ZFS's L2ARC and SLOG are great but limited by what I feel are architectural flaws in ZFS itself. I used to love ZFS, but the Linux kernel block stack has caught up to it in features and still offers all of the flexibility that it always has.
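The FIFO-vs-LRU distinction themelon draws is easy to demonstrate. A minimal LRU sketch for illustration (not bcache's or dm-cache's actual implementation):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU block cache: a hit refreshes the block's recency,
    so frequently reused ("hot") blocks survive eviction. A FIFO
    cache would evict purely by insertion order, throwing hot
    blocks out along with cold ones."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()

    def access(self, block):
        """Return True on a cache hit, False on a miss."""
        if block in self.blocks:
            self.blocks.move_to_end(block)   # refresh recency
            return True
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used
        self.blocks[block] = True
        return False

cache = LRUCache(2)
cache.access("a"); cache.access("b")
cache.access("a")          # hit: "a" is now most recently used
cache.access("c")          # evicts "b", the least recently used
print(cache.access("a"))   # True: the hot block survived
```

Under FIFO, the second access to "a" would not have saved it: insertion order alone would have evicted it when "c" arrived.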
  • aicom - Friday, July 26, 2013 - link

    Windows' cache system is better than you give it credit for. It does support sequential bypass (see FILE_FLAG_SEQUENTIAL_SCAN flag). It works with filesystem drivers with the Cc* APIs in the kernel. It also supports caching files over a network, even with other clients modifying the files. It does standard read-ahead and write-behind and is supplemented by an adaptive prefetcher (SuperFetch).

    The reason we're seeing such huge gains is because the programs being tested explicitly ask NOT to be cached. The whole point is to test the drive, so they pass FILE_FLAG_NO_BUFFERING to disable caching on the files being accessed.
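For the curious, the Linux analog of the FILE_FLAG_NO_BUFFERING trick aicom describes is O_DIRECT. A hedged sketch, not production code: O_DIRECT requires page-aligned buffers and block-multiple sizes, and some filesystems (e.g. tmpfs) reject it, hence the cached fallback:

```python
import mmap
import os

def _read_into_aligned(fd, size):
    buf = mmap.mmap(-1, size)  # anonymous mmap: page-aligned, as O_DIRECT needs
    n = os.preadv(fd, [buf], 0)
    return bytes(buf[:n])

def read_bypassing_cache(path, size, block=4096):
    """Read `size` bytes while skipping the page cache where possible,
    the way disk benchmarks do to measure the drive rather than RAM.
    Falls back to a normal cached read if the filesystem refuses
    O_DIRECT or the platform doesn't have it."""
    assert size % block == 0, "O_DIRECT needs block-multiple sizes"
    if hasattr(os, "O_DIRECT"):
        try:
            fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
            try:
                return _read_into_aligned(fd, size)
            finally:
                os.close(fd)
        except OSError:
            pass  # filesystem refused O_DIRECT: use the cached path below
    fd = os.open(path, os.O_RDONLY)
    try:
        return _read_into_aligned(fd, size)
    finally:
        os.close(fd)
```

This is exactly why RAPID shows huge gains in such benchmarks: the tool asks the OS not to cache, but RAPID sits below that request and caches anyway.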
  • MrSpadge - Saturday, July 27, 2013 - link

    Excellent post! Reply
  • Timur Born - Sunday, July 28, 2013 - link

    The question still arises: why does the Anand Storage Bench benefit from RAPID?! Is it because ASB also asks for the Windows cache to be bypassed, is it because the Windows cache flushes part of its pages every second, or does RAPID communicate with the drive (firmware) at a more fundamental level that allows further optimizations? Reply
  • watersb - Friday, July 26, 2013 - link

    Excellent points. I stick with ZFS because I trust it (after many hardware failures but no data loss) and because it is cross-platform.

    Mac HFS does "hot relocation", I believe. And NTFS has always tried to keep hot files in the middle of the disk in order to reduce hard disk seek times. So maybe I don't understand what is meant by hot relocation.
  • piroroadkill - Thursday, July 25, 2013 - link

    I agree. I'm pretty sure Windows' own disk caching is terrible. It's pretty poor even on the server side. They really need to work on that shit. Reply
  • tincmulc - Thursday, July 25, 2013 - link

    How is RAPID any better than SuperCache or FancyCache? Not only do they do the same thing, but they can also be configured to use more RAM or use OS-invisible memory (32-bit OS with more than 3GB of RAM), and they work for any drive, even HDDs. Reply
  • spazoid - Thursday, July 25, 2013 - link

    It's free. Free is better. Reply
  • jhh - Thursday, July 25, 2013 - link

    Are there any latency measurements in milliseconds as opposed to IOPS? With IOPS, the drive may be queuing requests, making it difficult to translate IOPS to milliseconds per request. Reply
  • Kibbles - Thursday, July 25, 2013 - link

    If I write 1GB/day on average to my SSD, since media files go on my home server, this drive would last me 395 years LOL! Reply
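Kibbles' figure is in the right ballpark of simple endurance math. The P/E cycle count and write amplification below are rough assumptions for TLC NAND, not Samsung's rated numbers:

```python
def endurance_years(capacity_gb, pe_cycles, daily_write_gb, write_amp=1.5):
    """Years until the NAND wears out: total program/erase capacity
    divided by what actually reaches the flash per day (host writes
    multiplied by write amplification)."""
    total_nand_writes_gb = capacity_gb * pe_cycles
    per_day_gb = daily_write_gb * write_amp
    return total_nand_writes_gb / per_day_gb / 365.0

# Assumed: 250GB drive, ~1000 P/E cycles, 1GB/day of host writes.
print(round(endurance_years(250, 1000, 1)))  # hundreds of years
```

At 100GB/day the same math still gives several years, which matches the review's point that the drives outlast their 3-year warranty.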
  • sheh - Thursday, July 25, 2013 - link

    Anand, would you consider writing an article on the other aspect of endurance: data retention time? With TLC entering the fray it's starting to get even more worrying.

    It'd be interesting to know how retention time changes throughout a drive's life, trends in the last few years, differences between manufacturers, the effect of the JEDEC standard, whether there's any idle-time refreshing for old written cells, etc.

    And an idea: I'd like to see drives where you can configure whether to use the drive as SLC/MLC/TLC. Switch to SLC for reliability/performance, TLC for capacity.
  • MrSpadge - Saturday, July 27, 2013 - link

    "And an idea: I'd like to see drives where you can configure whether to use the drive as SLC/MLC/TLC. Switch to SLC for reliability/performance, TLC for capacity."

    Or a drive which switches blocks from TLC operation to MLC as they run out of write cycles, and finally to SLC... at which point it should last pretty much indefinitely.
  • mgl888 - Thursday, July 25, 2013 - link

    Great article.
    Does RAPID require that you install a separate driver or does it just work automatically out of the box? What's the support like for Linux?
  • bobbozzo - Friday, July 26, 2013 - link

    It's a driver, for Windows. Reply
  • TheinsanegamerN - Saturday, July 27, 2013 - link

    And I don't think RAPID has a reason to exist on Linux; Linux is already much better with SSD writes than Windows. Reply
  • chizow - Thursday, July 25, 2013 - link

    Minor spelling correction:

    "counterfit" should be "counterfeit"
  • chizow - Thursday, July 25, 2013 - link

    Nice review Anand, I'm really glad to see almost all the top SSDs from numerous makers (Samsung, Crucial, SanDisk, Intel) are creeping up and exceeding SATA2 specs across the board and nearly saturating SATA3 specs.

    It really is amazing, though, how Samsung seems to be dominating the SSD landscape. I know this review is a bit skewed since you presumably tried to include almost all the Samsung capacity offerings (for comparison's sake), but the impact of the 840, 840 Pro, and now the 840 EVO on the SSD market is undeniable. They really have no weaknesses, other than perhaps the sequential write speeds on the 840/EVO.

    I guess this is why there are so many deals currently on the 840. I bought the 250GB version earlier this month and don't really regret it given the price I got it for, but the EVO is certainly a step up in nearly every aspect.
  • Riven98 - Thursday, July 25, 2013 - link

    Thanks for the great article. I had just been thinking that there had been a downturn in the number of articles like these, which are the main reasons I visit on an almost daily basis.
  • chrnochime - Friday, July 26, 2013 - link

    Still recommending a technology that's known not to last as long as MLC. Yes, the *extrapolated* result indicates that its lifetime is far longer than advertised, but really: when even the M500 is not that slow in the first place and costs about the same, why risk going with TLC? Not to mention Samsung's 830 has its fair share of horror stories as well... Reply
  • watersb - Friday, July 26, 2013 - link

    Excellent review.

    How does write amplification scale as the disk fills up? Wouldn't a full disk fail more rapidly than a half-full one?
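A crude steady-state model shows why a fuller drive wears faster: garbage collection must copy each reclaimed block's live pages before erasing it, and the fuller the physical NAND, the more live data each block holds. This is a textbook greedy-GC approximation under uniform random writes, with an assumed 7% spare area, not a measured EVO figure:

```python
def write_amplification(fill_fraction, overprovision=0.07):
    """Approximate steady-state write amplification. `fill_fraction`
    is how full the user-visible capacity is; `overprovision` is the
    hidden spare area. With utilization u of the physical NAND, each
    garbage-collected block carries ~u live data that must be
    rewritten before the erase, giving WA = 1 / (1 - u)."""
    u = fill_fraction / (1.0 + overprovision)
    return 1.0 / (1.0 - u)

print(round(write_amplification(0.5), 2))   # half full: mild WA
print(round(write_amplification(0.95), 2))  # nearly full: much worse
```

Under this model a nearly full drive rewrites several times more NAND per host write than a half-full one, which is the intuition behind watersb's question.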
  • BobAjob2000 - Tuesday, January 28, 2014 - link

    Hopefully wear leveling and TRIM/garbage collection algorithms should take care of your concerns. They take existing unchanged 'cold' data and move it around to make way for regularly changed 'hot' data. This should help with both data longevity and write amplification, as it guides new writes to the 'freshest' unused or rarely written blocks on the disk and also helps ensure that data does not go 'stale' after being untouched for years. Different vendors use different algorithms that have evolved and improved over time. I think Samsung (being a RAM manufacturer) can possibly provide bigger RAM caches for their disks, which may benefit garbage collection and wear leveling by improving the available 'thinking space' for caching and sorting/organizing 'hot' data.
    It's all about managing the 'temperature' of your data, somewhat like a data 'weather forecast': very useful in the short term or for simple, predictable patterns, but less practical for long-term or unseasonal data storms.
    I would like to see these things tested with 'what if' scenarios, though, to demonstrate the differences between vendors' algorithms.
  • xtreme2k - Friday, July 26, 2013 - link

    Can anyone tell me why I am paying 90% of the price for 33% of the endurance of a drive? Reply
  • MrSpadge - Saturday, July 27, 2013 - link

    Because endurance doesn't matter (very likely also for you), but price does. Reply
  • log - Friday, July 26, 2013 - link

    Can you partition this drive and still take advantage of its features? Thanks Reply
  • Timur Born - Friday, July 26, 2013 - link

    I don't quite understand exactly why the Samsung RAPID software cache brings higher performance in *practice* than Windows' own cache? Using two software caches will lead to the same information being stored in RAM twice or even thrice, which is exactly what the Windows cache tries to avoid since XP days.

    That the usual benchmark programs get fooled is plain to see, as they think they are working without a software cache. So the higher values there are not surprising. But I am a bit puzzled why the Anand Storage Bench results increase, too?! Why is RAPID software caching better than Windows' own cache in this scenario? Or does the ASB bypass Windows' cache, too (like most benchmarks)?

    By the way: ATTO allows the Windows cache to be turned ON for testing. My "old" Crucial M4 256 gets very high read results once ATTO makes use of Windows' cache. Only the write rates remain significantly smaller.

    Therefore an ATTO test with combinations of either or both software caches (RAPID and Windows) would be interesting.
  • MrSpadge - Saturday, July 27, 2013 - link

    I think it's because Samsung is being much more aggressive with caching than Windows dares to be, i.e. it holds files far longer before writing them, so they can be combined more efficiently but are at risk of being lost for longer. Reply
  • Timur Born - Sunday, July 28, 2013 - link

    I am not convinced about that yet, especially since you can turn off drive cache flushing via Device-Manager and thus should get an even more aggressive Windows cache behavior than what RAPID offers (which is reported to adhere to Windows' flush commands).

    The Windows cache is designed to keep data in RAM for as long as it's not needed for something else. Even more important, data is *directly* executed from inside the Windows cache instead of being copied back and forth between separate memory regions. This keeps duplication to a minimum (implemented since XP as far as I remember). So at least for reads the Windows cache is very useful, especially in combination with Superfetch, which is *not* disabled for SSDs btw (even Prefetch for the boot phase isn't disabled, but in practice it doesn't make much of a difference whether you boot with or without Prefetch from an SSD).

    There is something funky going on with Windows' cache and the drive's onboard cache of my Crucial M4 in combination with ATTO (Windows cache enabled). Different block sizes get very different results, with some *larger* block sizes not benefiting from Windows' cache either at read or write, the latter depending on the block size chosen. Turning the drive's own cache flushing on/off via Device-Manager can have an impact on that, too.

    In some cases I get less throughput with Windows cache than without (i.e. 512 kb block size with drive flushing on). This may be an issue of ATTO, though, because I also got some measurements where ATTO claimed a write speed of zero (0)! Turning off either drive cache flushing or the Windows cache or both helps ATTO to get meaningful measurements again.

    So the main question remains: How and why would RAPID affect "real-world" performance on top of the Windows cache and does the Anand Storage Bench deliberately circumvent the Windows cache?

    The reason I was looking at this review was that I am currently looking for a new SSD to build a desktop PC and the 840 EVO looks like the thing to buy. So once I get my hands on one myself I will just try RAPID myself. ;)
  • Timur Born - Sunday, July 28, 2013 - link

    Just did a quick test: on my 8 GB RAM system Windows 8 uses almost exactly 1 GB for write caching and all available RAM for read caching. It doesn't matter whether the 1 GB consists of one or several files, or whether they fit into the cache as a whole or not (the first 1 GB is cached if not). Reply
  • 1Angelreloaded - Friday, July 26, 2013 - link

    Hold on a second, correct me if I'm wrong on this paradox. Did Samsung not scale back on NAND production in order to drive the price up for bloated profits? Now, as stated at their Korea press conference, they want "SSDs for everyone". WTF is going on here, and why aren't SSDs at more reasonable pricing by now, around $0.33 per gig? They had a clear shot at burying HDDs after the flood and the price hike. Reply
  • FunBunny2 - Friday, July 26, 2013 - link

    Don't confuse capitalists with intelligence. They look at unit margin and ignore gross profit. IOW, they'd rather sell 100 at $2 margin than 1,000 at $1 margin. They're stupid. Reply
  • MrSpadge - Saturday, July 27, 2013 - link

    There's also the factor of market saturation to take into account. You can't sell an infinite number of drives. Reply
  • Notmyusualid - Friday, July 26, 2013 - link

    Exciting technology indeed! Impressive numbers, nice identification of spare computer resources, and put to good use too. I'd imagine this would be the go-to drive for most users...

    But I'd like my clocks available for my applications thanks.

    In addition, I'm not willing to put my data on any non-enterprise disk now, cost be damned. Burned too many times now.

    Interesting product though....
  • z28dreams - Friday, July 26, 2013 - link

    I recently saw the Plextor M5P (pro) for $190 on sale.

    If the 840 evo comes out in the same price range, which would be a better buy?

    It looks like the write speeds of the M5P are better, but I'm not sure about overall performance.
  • K_Space - Friday, July 26, 2013 - link

    Help a noob here: how is RAPID any different from a custom nonvolatile RAM disk with your selected cached files stored on it and these being written to the SSD at an interval? Is it mainly because RAPID can write in blocks and is more intelligent in its choice of cached files? Reply
  • wpapolis - Saturday, July 27, 2013 - link

    Hey there all,

    I have a MacBook 13" from late 2008, the first gen of the unibody construction (Model MB467*/A).

    My bus speed is SATA 3Gbit/s.

    What's the best SSD for me?

    Trim doesn't work automatically for me, though I have found the commands to use in terminal to enable it.

    This Samsung drives looks really good, but it seems like I won't be able to use RAPID, or perhaps even TRIM. Plus I am limited by my bus speed. Should I still go for this Samsung just because the price might be the same as lower featured alternatives?

    What do you guys suggest? I want one in the 250GB range.


  • TheinsanegamerN - Saturday, July 27, 2013 - link

    If you have a Mac, the Samsung is your best bet. TRIM can be enabled quite easily on a Mac if it is not done automatically, so you can use TRIM. As for RAPID, it replaces Windows' terrible I/O caching process; OS X does not have this problem, so you don't have to worry about that. Now, the SATA 2 interface will be a bottleneck, but it will still be much faster than a hard drive. I'd go for either this EVO drive or the 840 250GB. Reply
  • wpapolis - Saturday, July 27, 2013 - link

    Yes, you reaffirmed what I was already thinking.

    Plus, when I upgrade this MacBook, I have the option to move the drive. Though, I have to say, performance is still pretty good, but each OS upgrade seems to make things a bit more sluggish.

    With 8GB RAM, and a current SSD, things should be good for a bit longer.

    Thanks for the feedback,

  • Grim0013 - Sunday, July 28, 2013 - link

    I wonder what, if anything, the impact of Turbo Write is on drive endurance, as in, does the SLC buffer have the effect of "shielding" the TLC from some amount of write amplification (WA)? More specifically, I was thinking that in the case of small random writes (high WA), many of them would be going to the SLC first; then, when the data is transferred to the TLC, I wonder if the buffering affords the controller the opportunity to write the data in such a way as to reduce WA on the TLC?

    In fact, I wonder if that is something that is done... if the controller is able to characterize certain types of files as being likely to be frequently modified, then just keep them in the SLC semi-permanently. Stuff like the page file and other OS stuff that is constantly modified... I'm not very well-versed on this stuff so I'm just guessing. It just seems like taking advantage of SLC's crazy P/E endurance in addition to its speed could really help make these things bulletproof.
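The "shielding" intuition above can be put into back-of-envelope numbers. TurboWrite's real behavior isn't public, so the page size and write pattern here are illustrative assumptions: the point is just that a buffer which absorbs small random writes and flushes them as full pages programs far fewer TLC pages.

```python
PAGE = 8192  # bytes per TLC page (assumed, per the article's NAND table)

def tlc_pages_written(write_sizes, buffered):
    """Count TLC page programs for a list of write sizes (bytes)."""
    if not buffered:
        # Each small write programs (at least) one whole TLC page.
        return len(write_sizes)
    # An SLC buffer accumulates small writes and flushes full pages.
    total = sum(write_sizes)
    return -(-total // PAGE)  # ceiling division

writes = [512] * 1000                                 # 1000 small 512 B writes
direct = tlc_pages_written(writes, buffered=False)    # one page program each
via_slc = tlc_pages_written(writes, buffered=True)    # coalesced into full pages
```

Under these assumptions, 1000 scattered 512-byte writes cost 1000 TLC page programs unbuffered but only 63 after coalescing — which is exactly why an SLC staging area can reduce WA on the TLC, independent of its speed benefit.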
  • shodanshok - Sunday, July 28, 2013 - link

    Yea, I was thinking the same thing. After all, Sandisk already did it on the Ultra Plus and Ultra II SSDs: they have a small pseudo-SLC zone used both for greater performance and reducing WA. Reply
  • shodanshok - Sunday, July 28, 2013 - link

    I am not so excited about RAPID: data integrity is a delicate thing, so I am not happy to trust Samsung (or others) to replace the key, well-tested caching algorithm natively built into the OS.

    Anyway, Windows' write caching is not so quick because the OS, by default, flushes its in-memory cache each second. Moreover, it normally issues a barrier event to flush the disk's DRAM cache. This last behavior can be disabled, but the flushing of the in-memory cache cannot be changed, as far as I know.

    Linux, on the other hand, uses a much more aggressive caching policy: it issues an in-memory cache flush (pagecache) every 30 seconds, and it aggressively tries to coalesce multiple writes into a single transaction. This parameter is configurable through the /proc interface. Moreover, if you have a BBU or power-tolerant disk subsystem, you can even disable the barrier instruction normally issued to the disk's DRAM cache.
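For reference, the Linux knobs described above live under `/proc/sys/vm` and are expressed in centiseconds; the 30-second figure corresponds to `dirty_expire_centisecs` (how old dirty pagecache data may get before it must be written back), while the flusher threads themselves wake per `dirty_writeback_centisecs`. The defaults below are the usual kernel defaults; distros may differ.

```python
def centisecs_to_secs(cs):
    """/proc/sys/vm dirty-writeback tunables are in centiseconds."""
    return cs / 100

# Age at which dirty pagecache data *must* be written back:
dirty_expire = centisecs_to_secs(3000)        # 30 s by default
# How often the flusher threads wake to look for expired dirty data:
writeback_interval = centisecs_to_secs(500)   # 5 s by default

def read_vm_tunable(name):
    """Read a live value on a Linux box,
    e.g. read_vm_tunable('dirty_expire_centisecs')."""
    with open(f"/proc/sys/vm/{name}") as f:
        return int(f.read())
```

Writing new values to the same files (or via `sysctl`) is how the interval the commenter mentions gets tuned.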
  • Timur Born - Sunday, July 28, 2013 - link

    My Windows 8 setup uses almost exactly 1 GB of RAM for write caching, regardless of whether it's writing to a 5400 rpm 2.5" HD, a 5400 rpm 3.5" HD or a Crucial M4 256 GB SSD. That's exactly the size of the RAPID cache. The "flush its cache each second" part becomes a problem when the source and destination are on the same drive, because once Windows starts writing, the disk queue starts to climb.

    But even then it should mostly be a problem for spinning HDs that don't really like higher queue numbers. Even more so when you copy multiple files via Windows Explorer, which reads and write files concurrently even on spinning HDs.

    So I wonder if RAPID's only real advantage is its ability to coalesce multiple small writes into single big ones over durations longer than one second?!
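The coalescing being speculated about here can be shown with a toy merge of contiguous writes. This is purely hypothetical — RAPID's internals are not public — but it illustrates why holding writes longer lets a cache turn many small I/Os into a few big sequential ones.

```python
def coalesce(writes):
    """Merge contiguous (offset, length) writes into larger I/Os."""
    merged = []
    for off, length in sorted(writes):
        if merged and merged[-1][0] + merged[-1][1] == off:
            # This write starts exactly where the previous I/O ends: extend it.
            merged[-1] = (merged[-1][0], merged[-1][1] + length)
        else:
            merged.append((off, length))
    return merged

# Eight contiguous 4 KiB writes collapse into one 32 KiB I/O:
ios = coalesce([(i * 4096, 4096) for i in range(8)])
```

A one-second flush window catches fewer such merge opportunities than a longer one, which is the trade-off (throughput vs. data at risk) discussed above.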
  • Timur Born - Sunday, July 28, 2013 - link

    By the way, my personal experience is that CPU power saving features, as set up in both the default "Balanced" and the "High Performance" power profiles, have far more of an impact on SSD performance than caching stuff. I can up my M4's 4K random performance by 60% and more just by messing with CPU power savings to be less aggressive (or off). Reply
  • shodanshok - Monday, July 29, 2013 - link

    If I remember correctly, Windows uses at most 1/8 of total RAM for write caching. How much RAM do you have? Reply
  • Timur Born - Tuesday, July 30, 2013 - link

    8 gb, so you may be correct. Or you may mix it up with the 1/8 part of dirty cache that is being flushed by the Windows cache every second. Or both may be 1/8. ;-) Reply
  • zzz777 - Monday, July 29, 2013 - link

    I'm interested in caching writes to a RAM disk and then to storage. This reminds me of the concept of a write-back cache: for almost everyone the possibility of data corruption is so low that there's no reason not to enable it. Can this SSD/RAM-disk combination write quickly enough that home users also don't have to worry about using it? Beyond that, I'm not a normal home user: I want to see benchmarks for virtualization. I want the quickest way to create, modify and test a VM before putting it on live hardware. Reply
  • Wwhat - Monday, July 29, 2013 - link

    For me, I'd still rather go for the Pro version. Reply
  • andreaciri - Thursday, August 01, 2013 - link

    I have to decide whether to buy an 840 now, or an EVO when it becomes available, for my MacBook. Considering that RAPID is only supported under Windows, and that I'm more interested in read performance than write, is the 840 a good choice? Reply
  • eamon - Thursday, August 01, 2013 - link

    Unless you want to run some kind of continual I/O server, I suspect performance will be fast enough not to matter; I'd only look at pricing if I were you... Reply
  • Busverpasser - Thursday, August 08, 2013 - link

    Hi there, great review, thanks a lot. Actually I do have a question... The article says "The performance story is really good (particularly with the larger capacities), performance consistency out of the box is ok (and gets better if you can leave more free space on the drive)..."

    Does leaving more free space mean that this space is supposed to be unpartitioned or just not filled with data? When I bought my Intel Postville SSD some time ago, I left some space unpartitioned but never really knew whether that was the right thing to do :D. Can someone give me a hint here?
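To put numbers on the free-space question above: any space the controller can treat as spare — the factory GiB-vs-GB gap plus TRIMmed free space in the filesystem — effectively enlarges the over-provisioning pool, whether or not it is partitioned. The arithmetic below uses illustrative figures (a hypothetical 250 GB drive built from 256 GiB of NAND, before any TurboWrite reservation).

```python
def effective_spare_pct(nand_gib, user_gb, free_user_gb):
    """Factory OP (raw NAND minus advertised capacity) plus TRIMmed
    free user space, as a percentage of raw NAND."""
    nand_bytes = nand_gib * 2**30        # NAND is binary GiB
    user_bytes = user_gb * 1000**3       # advertised capacity is decimal GB
    spare = (nand_bytes - user_bytes) + free_user_gb * 1000**3
    return 100 * spare / nand_bytes

# 250 GB drive, 256 GiB of NAND, user keeps 50 GB free and TRIMmed:
pct = effective_spare_pct(256, 250, 50)
```

The key caveat is TRIM: the controller only knows filesystem free space is reusable if it has been trimmed, which is why "just leave it free" works on a TRIM-enabled OS.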
  • xchaotic - Wednesday, August 14, 2013 - link

    @Busverpasser just leave more space free, it doesn't have to be unpartitioned.
    Worst case if you need that extra space for a while, you'll get lower performance, but more storage whenever you need it.
  • speculatrix - Saturday, August 17, 2013 - link

    the table titled "Samsung SSD 840 EVO TurboWrite Buffer Size vs. Capacity" should be titled "Capacity vs Usage vs Endurance" Reply
  • rdugar - Friday, August 23, 2013 - link

    Am in the market for an SSD finally to replace an HDD on a Windows 7 laptop. Was almost set on the 128GB Samsung 840 Pro, but saw the comment on poor performance at almost full capacities.

    Price, reliability and endurance being the most important to me, which one should I go for?

    128Gb Samsung 840 Pro? approx $119 after coupons, etc.
    120 GB Samsung 840 EVO? probably $99 or so
    256 GB Samsung 840 EVO? probably $165 or so
    Other brand and model?

    If I have to spend $120 odd, may as well spend another $50 and get double the capacity....
  • tfop - Saturday, August 24, 2013 - link

    I have a question regarding the NAND comparison table.
    How do these page and block sizes affect the right cluster size and alignment for the partition?
    If I'm getting this right, the SSD 840 EVO would need an 8 KiB cluster size and 2 MiB alignment.
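The alignment arithmetic behind tfop's question can be checked directly. The 2048 KiB erase block follows the article's table (8 KiB pages x 256 pages per block); note that some sources quote 1536 KiB for this TLC, so treat the figure as an assumption.

```python
def is_aligned(offset_bytes, unit_bytes):
    """A partition start is aligned if it is a multiple of the unit."""
    return offset_bytes % unit_bytes == 0

PAGE = 8 * 1024                      # 8 KiB page (from the article's table)
ERASE_BLOCK = PAGE * 256             # 2 MiB erase block under that assumption

# Windows' default 1 MiB partition offset is page-aligned,
# but not erase-block-aligned under the 2 MiB assumption:
one_mib = 1024 * 1024
page_ok = is_aligned(one_mib, PAGE)
block_ok = is_aligned(one_mib, ERASE_BLOCK)
```

In practice page alignment is what matters most for everyday performance; erase-block alignment mainly influences garbage-collection efficiency, and the controller's mapping layer blunts much of the effect either way.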
  • Gnomer87 - Wednesday, August 28, 2013 - link

    I have a couple of questions:

    First, how much data is typically written to the average consumer HDD on a daily basis these days? I am thinking it's nowhere close to 50GiB. I guess what I am really interested in knowing, is how much data the operating system(windows 7) writes to the drive for various maintenance uses(if there are any beside defragmenting). In my mind, simply booting up the computer shouldn't mean any writes to the drive at all. Ergo, given my typical use, a 120GB SSD of that caliber, should last a lifetime. Am I right in thinking this? I mean, reading doesn't affect the durability right?

    Secondly: I've been considering getting an SSD for use as an OS drive for a long time; the reason of course was to speed up boot time. However, I've long wondered WHY Windows boots so slowly from HDDs in the first place. After all, the amount of data loaded during boot-up isn't large. In my case the processes post-boot take up around 200 MB. Assuming the actual amount of data loaded from the drive is about the same, it really shouldn't take that long. My HDD is capable of reading up to 120 MB/s in optimal situations, so it's obvious the boot-up process isn't optimal by a long shot.

    But why this slow? It can take over a minute before she (my computer) is done loading and starting all processes. Last semester I took a course on operating systems at the local university. I must confess I was a horrible student; I didn't show up much. But I do remember a few key elements, namely the scheduler and how it continually does context switches, letting each process use the CPU and thus creating parallelism. Now what was really interesting was resource management. It's the scheduler that decides which process is currently running on the CPU, and the scheduler runs in between context switches, effectively letting each user process run and access resources such as the hard drive. Now, what happens if all the processes want data from the drive at the same time? Would each process continually interrupt the other processes' loading of data, and thus cause the HDD to seek constantly?

    Could that explain why booting takes such idiotic amounts of time? An extremely inefficient resource management that basically ignores the inherent seek-time related weaknesses of an HDD? SSDs, as we know, barely have seek-time, and thus the performance loss from context switching should be negligible.

    I know my cousins SSD powered computer boots near instantly, once it's done with the usual BIOS stuff, the OS is booted and ready for use in mere seconds. And yes, we are talking a completely cold boot here, no sleep or anything like that.
  • abhilashjain30 - Friday, September 20, 2013 - link

    I purchased a Samsung 120GB EVO 3 days back from OnlySSD, and drive performance is very good compared to the 120GB 840 Basic series. Reply
  • MVR - Thursday, November 14, 2013 - link

    It will be very interesting when they start loading these up with more than 512MB of DRAM cache. Imagine a drive with 4-8+ GB on board. The response times would be insane. It is only a matter of time considering you can buy 8GB of SODIMM memory for $70. They could probably put it on board for $50 added cost to the drive - then these would truly act like PCIe SSD cards, except it would totally max out the SATA3 throughput limit. Reply
  • MVR - Thursday, November 14, 2013 - link

    Of course SATA revision 3.2 at 16gbit/sec would sure enjoy it. Imagine a pair of those in RAID 0 :) Reply
  • Wao - Sunday, November 24, 2013 - link

    I'm going to replace my old noisy hard disk with the Samsung 840 EVO 1TB model. I am wondering if I really need to enable TRIM in OS X. I checked the data sheet; it only says "Yes" for garbage collection and TRIM support. Does that mean this model has its own garbage collection built in, or do I really need to enable TRIM in OS X? Honestly, I don't like to hack around with the system files.
    Thanks!
  • iradel - Monday, November 25, 2013 - link

    In the "IMFT vs. Samsung NAND Comparison" table, how did you get a Pages per Block value of 256 for 19nm TLC (a.k.a. the 840 EVO)? 8KB * 256 pages per block would imply an erase block size of 2048KB, whereas I've read that the 840 EVO has an EBS of 1536KB (which would mean 192 pages per block).

    Where did you get the 256 value?
  • sambrightman - Sunday, September 20, 2015 - link

    I have the same question. I've read both the 840 and 840 EVO have 1536KiB EBS due to TLC, this is the only place saying 2MiB. Did you find an answer? Reply
  • Scraps - Tuesday, November 26, 2013 - link

    What would be the optimum configuration for this situation: a MacBook Pro with two Samsung EVO 1TB drives. Would striped RAID 0 be best? Reply
  • code42 - Wednesday, December 18, 2013 - link

    Can I use the Samsung 840 Pro 1TB with a NAS solution? Can someone propose a nice setup? Thanks Reply
  • Hal9009 - Wednesday, December 18, 2013 - link

    Just received my new ASUS N550JV and replaced the slow HD with an 840 EVO-Series 750GB SSD and 16GB of G.SKILL (2 x 8GB) 204-pin DDR3, plus a fresh copy of Win 7 x64... could not be happier. Samsung makes great SSDs. Reply
  • 7beauties - Saturday, December 28, 2013 - link

    I bought the Samsung 840 EVO 1TB because Maximum PC gave it a 9 Kick *ss award, but they described it as being MLC. Good ole Anand tells it like it is. This is TLC. I was pretty steamed with Samsung because they describe this as their "new 3 bit MLC NAND," which I wouldn't have bought over Crucial's M500 960GB MLC SSD. Though Anand tries to calm fears about TLC's endurance, I can't understand what a "GiB" is or how I can calculate my drive's life span. Reply
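On the "GiB" question above: a GiB is 2^30 bytes, versus a GB at 10^9 (about 7% smaller), and the life-span arithmetic is straightforward once you pick an endurance rating. The P/E cycle count and write amplification below are illustrative assumptions commonly quoted for 19nm TLC, not Samsung specifications.

```python
GIB = 2**30   # gibibyte: binary unit used for NAND and RAM
GB = 10**9    # gigabyte: decimal unit used for advertised capacity

def lifespan_years(capacity_gib, pe_cycles, daily_writes_gib, write_amp=1.5):
    """Total NAND writes the drive can absorb, divided by daily writes.
    pe_cycles and write_amp are assumptions, not vendor ratings."""
    total_writes_gib = capacity_gib * pe_cycles / write_amp
    return total_writes_gib / daily_writes_gib / 365

# ~931 GiB of user NAND in a "1 TB" drive, 1000 P/E cycles assumed,
# a heavy 50 GiB written per day:
years = lifespan_years(931, 1000, 50)
```

Even with these conservative inputs the drive outlives any realistic consumer workload (decades, not years), which is the article's broader point about TLC endurance fears.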
  • verjic - Thursday, February 13, 2014 - link

    I have a question. Some real-life tests I found show the Kingston V300 and the Samsung at practically the same speed, and copying 2 GB of 26,000 files is even about 30% slower on the Samsung! Also, installing a program like Photoshop takes longer on the Samsung than the Kingston; the difference is not so big, but it's around 10-15%. Why is that? In all the tests of the Kingston and Samsung, everyone says the Samsung is better, but I don't see how. If anyone can explain it to me, please. Reply
  • verjic - Thursday, February 13, 2014 - link

    I'm talking about 120 Gb version Reply
  • verjic - Thursday, February 13, 2014 - link

    Also, what are the Write/Read IOMeter Bootup and Write/Read IOMeter IOMix tests - what do their speeds mean? Thank you Reply
  • AhDah - Thursday, May 15, 2014 - link

    The TRIM validation graph shows a tremendous performance drop after a few gigs of writes; even after a TRIM pass, the write speed is only 150 MB/s.
    Does this mean that once the drive is 75-85% filled up, the write speed will always be slow?

    I'm tempted to get the Crucial M550 because of this shortfall.
  • njwhite2 - Wednesday, October 15, 2014 - link

    Kudos to Anand Lal Shimpi! This is one of the finest reviews I have ever read! No jargon. No unexplained acronyms. Quantitative testing of compared items instead of reviewer bias. Explanation of why the measured criteria are important to the end user! Just fabulous! I read dozens of reviews each week, so I'm surprised I had not stumbled upon AnandTech before. I'm (for sure) going to check out their smartphone reviews. Most of those on other sites are written by Apple fans or Android fans and really don't tell the potential purchaser what they need to know to make the best choice for them. Reply
  • IT_Architect - Thursday, October 22, 2015 - link

    I would be interested in how reliable they are. The reason I ask: one time, back when Intel's SLC technology was just under two years old and there was no MLC or TLC, I needed speed to load a database from scratch 6 times an hour during incredible traffic times. I was getting requests at the rate of 66 per second per server, each of which required many reads of the database. I couldn't swap databases without breaking sessions, and mirror and unmirror did not work well. I would have had to pay a ton to duplicate a redundant array in SSDs. Then I asked the data center how many of these drives they had out there. They (SoftLayer) queried and came back with 700+. Then I asked them how many they'd had go bad. They queried their records and it was none, not so much as a DOA. I reasoned from that that I would be just as likely to have a chassis or disk controller go bad. None of them have any moving parts, and the drives are low power. Those were enterprise drives, of course, because that's all there was at the time.

    In 2011 I bought a Dell M6600. Dell was shipping them with the Micron SSD. I was concerned about the lifespan and I do a lot of reading and writing with it and work constantly with virtual machines while prototyping, and VM files are huge. It calculated out to 4 years. While researching, I came across that situation where Dell had "cold feet" about OEMing them due to lifespan. Micron/Intel demonstrated to them 10x the rated lifespan, which convinced Dell. There was plenty of other trouble with consumer-level SSDs at the time, which gave the technology a bad name. The Micron/Intel was one of the very few solid citizens at the time. I went with it, although I didn't buy my M6600 with it because Dell had such a premium on them. I had two problems with the drive, which by the way is still in service today. The first was the drive just stopped doing anything one day. I called Micron and it turned out to be a bug in the firmware. If I had two drives arrayed, it would have stopped both at the same time. I upgraded the firmware and never had that problem again. The next time I was troubleshooting the laptop and putting the battery in and out and the computer would no longer boot. I again called Micron. It was by design. They said disconnect the power, pull the battery, and wait one hour. I did, and it has worked perfectly since. If I had an array, it would have stopped both at the same time.

    Today, the market is much more mature and the technology no longer has a bad name. A redundant array is no substitute for a backup anyway; a redundant array brings business continuity and speed. Are we just as likely or more so to have a motherboard go out? We don't have redundant motherboards without having another entire computer. Unlike power supplies and CPUs, SSDs are low-current devices. I'm considering the possibility that we may be at the point, even for consumer-level drives, where redundant arrays of SSDs are just plain silly.
  • Gothmoth - Sunday, January 08, 2017 - link

    In real life my RAPID test showed no benefits AT ALL!!

    All it does is make low-level benchmarks look better.
    You should test with real applications. RAPID is a useless feature.
