What's Wrong with Samsung?

Samsung is the largest SSD maker in the world. Samsung makes the drives Apple offers across its entire MacBook/MacBook Pro lineup. Samsung makes the drives you get if you order a Lenovo X300. In fact, if you're buying almost any major OEM system with an SSD in it, chances are Samsung makes that drive.

It's just too bad that those drives aren’t very good.

This is the 4KB random write performance of Samsung's latest SSD, based on the RBB controller:


[Chart: 4KB random write speed, brand new drive. 4.4MB/s: 3x the speed of a VelociRaptor, but 1/3 the speed of a cheaper Indilinx drive.]

Speedy, but not earth shattering. Now let's look at performance once every LBA on the drive has been written to. This is the worst-case performance we've been testing for the past year:


[Chart: 4KB random write speed after every LBA has been written. ...and now we're down to mechanical hard drive speeds.]

Holycrapwtfbbq? Terrible.

Now, to be fair to Samsung, this isn't JMicron-terrible performance. It's just not-worth-the-money performance.
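
For anyone who wants to reproduce the dirty-state test, the recipe is simple: write to every LBA on the drive once, then measure 4KB random writes again. Below is a minimal sketch of the fill pass in Python; the device path is hypothetical, the script destroys everything on that drive, and the charts above come from our usual benchmark tools, not this script.

    import os

    DEVICE = r"\\.\PhysicalDrive1"   # hypothetical raw device: everything on it is destroyed
    CHUNK = 1024 * 1024              # write in 1MB sequential chunks

    # Pass 1: touch every LBA so the controller is left with no clean blocks.
    buf = os.urandom(CHUNK)
    disk = open(DEVICE, "wb", buffering=0)
    try:
        while True:
            disk.write(buf)          # keep writing until the device is full
    except (IOError, OSError):
        pass                         # end of device reached
    finally:
        disk.close()

    # Pass 2 (not shown): rerun the 4KB random write benchmark and compare
    # against the drive's fresh-out-of-box score.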

The Samsung RBB-based SSDs are rebranded by at least two manufacturers: OCZ and Corsair.

The OCZ Summit and the Corsair P256 both use the Samsung RBB platform.


[Photo: The Corsair and OCZ Samsung RBB drives.]

The drive most OEMs are now shipping is an even older, lower-performing Samsung SSD based on a previous-generation controller.

I talked to some of the vendors who ship Samsung RBB-based SSDs and got some sales data. They simply can't give these drives away: Indilinx-based drives outsell those based on the Samsung RBB controller by over 40:1. If end users are smart enough to choose Indilinx and Intel, why aren't companies like Apple and Lenovo?

Don't ever opt for the SSD upgrade from any of these OEMs if you've got the option of buying your own Indilinx or Intel drive and swapping it in there. If you don't know how, post in our forums; someone will help you out.

Samsung realized it had an issue with its used-state performance and was actually the first to introduce background garbage collection; official TRIM support will come later. Great, right? Not exactly.
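
Conceptually, background garbage collection is easy to picture: during idle time the controller quietly consolidates blocks full of stale data so that new writes always land on pre-erased flash. Here's a toy sketch of the general idea in Python; this is my illustration of the technique, not Samsung's actual firmware logic:

    # Toy model of idle-time garbage collection, not Samsung's firmware.
    # Each erase block tracks how many of its pages are valid vs. stale.
    blocks = [
        {"valid": 96,  "stale": 32},
        {"valid": 16,  "stale": 112},   # mostly stale: the cheapest block to reclaim
        {"valid": 120, "stale": 8},
    ]

    def collect_one(blocks):
        # Greedy policy: pick the block with the most stale pages, relocate its
        # valid pages elsewhere, then erase it. Doing this while the drive is
        # idle keeps a pool of pre-erased blocks ready for incoming writes.
        victim = max(blocks, key=lambda b: b["stale"])
        reclaimed = victim["stale"]              # stale pages become free space
        victim["valid"], victim["stale"] = 0, 0  # valid data copied out, block erased
        return reclaimed

    print(collect_one(blocks))   # -> 112 pages reclaimed from the dirtiest block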

There’s currently no way for an end user to flash the firmware on any of these Samsung drives. To make matters worse, there’s no way for companies like OCZ or Corsair to upgrade the firmware on these drives either. If you want a new firmware on the drive, it has to go back to Samsung. I can’t even begin to point out how ridiculous this is.

If you're lucky enough to get one of the Samsung drives with background garbage collection, the performance drop I talked about above doesn't really matter. How can you tell? Open Device Manager, bring up your SSD's properties, click the Details tab and select Hardware Ids from the dropdown. Your firmware version is listed at the end of the hardware ID string:

Version 1801Q doesn’t support BGC. Version 18C1Q (or later) does.
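
If you'd rather not click through Device Manager, the same firmware string is exposed through WMI. Here's a minimal sketch that shells out to Windows' built-in wmic tool from Python; it assumes wmic is on your PATH and that the SSD shows up as an ordinary disk drive:

    import subprocess

    # Ask WMI for every disk's model name and firmware revision.
    output = subprocess.check_output(
        ["wmic", "diskdrive", "get", "Model,FirmwareRevision"],
        universal_newlines=True)

    for line in output.splitlines():
        line = line.strip()
        if line and not line.startswith("FirmwareRevision"):
            print(line)  # on a Samsung RBB drive, 18C1Q or later means BGC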

How can you ensure you get a model with the right firmware revision? Pick a religion and start praying, because that’s the best you can do.

Now the good news. When brand new, the Samsung drives actually boast competitive sequential write, sequential read and random read speeds.

These drives are also highly compatible and very well tested; they have to be for all of the major OEMs to use them. It's their random write performance that's most disappointing. TRIM support is coming later this year and will help keep the drives performing like new, but even then they are still slower than the Indilinx alternatives.

There’s no wiper tool and there’s currently no method to deploy end-user flashable firmware updates. Even with TRIM coming down the road, the Samsung drives just don’t make sense.

Comments

  • GourdFreeMan - Tuesday, September 1, 2009

    Yes, rewriting a cell will refill the floating gate with trapped electrons to the proper voltage level unless the gate has begun to wear out, so backing up your data, secure erasing your drive and copying the data back will preserve the life (within reason) of even drives that use minimalistic wear leveling to safeguard data. Charge retention is only a problem for users if they intend to use the drive for archival storage, or operate the drive at highly elevated temperatures.
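
    (If you want to script that backup/erase/restore cycle, here's a rough sketch of the erase step using hdparm's ATA Secure Erase from a Linux boot disk. The device path is hypothetical and the commands wipe the drive completely, so back up first:)

        import subprocess

        DEVICE = "/dev/sdX"  # hypothetical device node: secure erase destroys all data on it

        # ATA Secure Erase: set a temporary drive password, then issue the erase.
        # The controller resets every cell, restoring fresh-state performance.
        subprocess.check_call(["hdparm", "--user-master", "u",
                               "--security-set-pass", "tmp", DEVICE])
        subprocess.check_call(["hdparm", "--user-master", "u",
                               "--security-erase", "tmp", DEVICE])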

    It is a bigger problem for flash engineers, however, and one of the reasons why MLC cannot be moved easily to more bits per cell without design changes. To store n-bits in a single cell you need 2^n separate energy levels to represent them, and thus each bit is only has approximately 1/(2^(n-1)) the amount of energy difference between states when compared to SLC using similar designs and materials.
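
    (To put rough numbers on that scaling, using the approximation above -- a quick sketch:)

        # Margin between states, normalized to SLC, per the 1/2^(n-1) approximation.
        for n in (1, 2, 3):                  # bits per cell: SLC, MLC, hypothetical 3-bit
            levels = 2 ** n                  # distinct charge levels needed
            margin = 1.0 / (2 ** (n - 1))    # relative energy difference between states
            print("%d bit(s)/cell: %2d levels, %.2fx the SLC margin" % (n, levels, margin))
        # -> 1 bit: 2 levels, 1.00x; 2 bits: 4 levels, 0.50x; 3 bits: 8 levels, 0.25x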
  • Zheos - Tuesday, September 1, 2009

    Man, you seem to know a lot about what you're talking about :)

    Yeah, now I understand why an SSD for a database or file storage server would be quite a bad idea.

    But for personal Windows & everyday application storage, it seems like a pure win to me if you can afford one :)

    I was only worried about its lifespan, but thanks to you and your quick replies (and for the math and technical detail about how it really works ;) I'm sold on the fact that I will buy one soon.

    The G2 from Intel seems like the best choice for now, but I'll just wait and see how things go once TRIM is enabled on almost every SSD, and I'll make my decision in a couple of months =)


  • GourdFreeMan - Wednesday, September 2, 2009

    It isn't so much that SSDs make a bad storage server, but rather that you can't neglect to make periodic backups, as with any type of storage, if your data has great monetary or sentimental value. In addition to backups, RAID (1-6) is also an option if cost is no object and you want to use SSDs for long term storage in a running server. Database servers are a little more complicated, but SSDs can be an intelligent choice there as well if your usage patterns aren't continuous heavy small (i.e. <= 4K) writes.

    I plan on getting a G2 myself for my laptop after Intel updates the firmware to support TRIM and Anand reviews the effects in Windows 7, and I have already been using an Indilinx-based SLC drive in my home server.

    If you do anything that stresses your hard drive(s), or just like snappy boot times and application load times, you will probably be impressed by the speed of a new SSD. The cost per GB and the lack of long-term reliability studies are really the only things holding them back from taking the storage market by storm.
  • ninevoltz - Thursday, September 17, 2009

    GourdFreeMan could you please continue your explanation? I would like to learn more. You have really dived deeply into the physical properties of these drives.
  • GourdFreeMan - Tuesday, September 1, 2009

    Minor correction to the second paragraph in my post above -- "each bit is only has" should read "each representation only has" in the last sentence.
  • philosofool - Monday, August 31, 2009

    Nice job. This has been a great series.

    I'm getting an SSD once I can get one at $1/GB. I want a system/program files drive of at least 80GB and then a conventional HDD (a tenth of the cost per GB) for user data.

    Would keeping user data on a conventional HDD affect these results? It would seem like it wouldn't, but I would like to see the evidence.

    I would really like to see more benchmarks for these drives that aren't synthetic. Have you tried things like Crysis or The Witcher load times? (Both seemed to me to have pretty slow loads for maps.) I don't know if these would be affected, but as real world applications, I think it makes sense to try them out.
  • Anand Lal Shimpi - Monday, August 31, 2009

    Personally I keep docs on my SSD but I keep pictures/music on a hard drive. Neither gets touched all that often in the grand scheme of things, but one is a lot smaller :)

    In The SSD Anthology I looked at Crysis load times. Performance didn't really improve when going to an SSD.

    Take care,
    Anand
  • Eeqmcsq - Monday, August 31, 2009

    I would have thought that the read speed of an SSD would have helped cut down some of the compile time. Is there any tool that lets you analyze disk usage vs cpu usage during the compile time, to see what percentage of the compile was spent reading/writing to disk vs CPU processing?
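
    (One crude way to check this yourself is to sample CPU load and disk throughput once a second while the compile runs in another window -- a hypothetical sketch using Python's psutil library, stopped with Ctrl+C:)

        import time
        import psutil

        # Print CPU utilization and disk throughput every second during a build.
        # If the CPU pegs near 100%, a faster drive won't shorten the compile much.
        prev = psutil.disk_io_counters()
        while True:
            time.sleep(1)
            cur = psutil.disk_io_counters()
            cpu = psutil.cpu_percent(interval=None)
            rd = (cur.read_bytes - prev.read_bytes) / (1024.0 * 1024.0)
            wr = (cur.write_bytes - prev.write_bytes) / (1024.0 * 1024.0)
            print("CPU %5.1f%%  read %6.2f MB/s  write %6.2f MB/s" % (cpu, rd, wr))
            prev = cur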

    Is there any way you can add a temperature test between an HDD and an SSD? I read a couple of Newegg reviews that say their SSDs got HOT after use, though I think that may have just been 1 particular brand that I don't remember. Also, there was at least one article online that tested an SSD vs an HDD and the SSD ran a little warmer than the HDD.

    Also, garbage collection does have one advantage: It's OS independent. I'm still using Ubuntu 8.04 at work, and I'm stuck on 8.04 because my development environment WORKS, and I won't risk upgrading and destabilizing it. A garbage collecting SSD would certainly be helpful for my system... though your compiling tests are now swaying me against an SSD upgrade. Doh!

    And just for fun, have you thought about running some of your benchmarks on a RAM drive? I'd like to see how far SSDs and SATA have to go before matching the speed of RAM.

    Finally, any word from JMicron and their supposed update to the much "loved" JMF602 controller? I'd like to see some non-stuttering cheapo SSDs enter the market and really bring the $$$/GB down, like the Kingston V-series. Also, I'd like to see a refresh in the PATA SSD market.

    "Am I relieved to be done with this article? You betcha." And I give you a great THANK YOU!!! for spending the time working on it. As usual, it was a great read.
  • Per Hansson - Monday, August 31, 2009

    Photofast has released Indilinx-based PATA drives:
    http://www.photofastuk.com/engine/shop/category/G-...
  • aggressor - Monday, August 31, 2009

    Whatever happened to the price drops that OCZ announced when the Intel G2 drives came out? I want 128GB for $280!
