Be sure to read our latest SSD article: The SSD Relapse for an updated look at the SSD market.

The Prelude

I spent about three weeks working on my review of Intel’s X25-M SSD. Most of that time went into research, writing, and re-writing, not only to review the product but also to cover the rest of the SSDs on the market and their deficiencies. Truth be told, I spent more time working on SSDs that weren’t the X25-M than on the Intel drive itself. The Intel drive just worked as it should; the rest of them didn’t.

If you read the article, you know I was pretty harsh on some of the SSDs out at the time and if you’ve ever used any of those SSDs, you know why. Needless to say, there was some definite fallout from that review. I’m used to negative manufacturer response after a GPU review, but I’m always a bit surprised when it happens in any other segment.

I took a day or two off after that review went live; I think it was a day. Afterwards, I immediately started working on a follow-up. There was a strange phenomenon a few people noticed, something I unfortunately picked up on only after the review went live: if you filled the X25-M up and re-benchmarked it, it got slower. And I had no idea why.

A few weeks later, I had it figured out. But then Nehalem was right around the corner. I’d tackle it after that. But then a new batch of SSDs from OCZ and other vendors were almost ready. I told myself I’d do them all at the same time. Then CES happened.

The More Things Change, The More They Stay The Same

If you look at the SSD market today, you’d assume that it’s very different from what it was just six months ago when the X25-M launched. People are worried that the Intel drive has issues with degrading performance over time. Some vendors are now shipping “revised” JMicron drives with multiple controllers, supposedly fixing all of the problems I talked about last year.

I hate to break it to you guys. As different as the world may seem today, it’s all very much the same.

The Intel drive is still the best of the best. Yes, it and other SSDs do get slower over time, and later in this article I’ll explain why it happens and why it’s not as big of a deal as you’d think. The issues I complained about with the JMicron drives last year are still alive and well today; they’re just somewhat obscured.

Delay after delay kept me from writing this article, but I believe it’s for the best. What went into what you’re about to read is nearly six months of research, testing and plain old work with SSDs.

The timing couldn’t be more perfect. We’re about to see a new wave of SSDs hit the market and it’s time to separate the fact from the fiction, the benchmarks from reality and the men from the boys. The last time I wrote an article about SSDs I ruffled quite a few feathers. That’s never my aim, but we’ll see what comes of this one.

Bringing You Up to Speed: The History Lesson
250 Comments

  • SkullOne - Wednesday, March 18, 2009 - link

    Fantastic article. Definitely one of the best I've read in a long time. Incredibly informative. Everyone who reads this article is a little bit smarter afterwards.

All the great information about SSDs aside, I think the best part is how OCZ was willing to take the blame for earlier failures and fix the problems. Companies like that are the ones that will get my money in the future, especially when it’s time for me to move from HDD to SSD.
  • Apache2009 - Wednesday, March 18, 2009 - link

I got a Vertex SSD. Why does suspend cause a system halt? My laptop has an nVidia chipset and works fine with an HDD. Does anybody know?
  • MarcHFR - Wednesday, March 18, 2009 - link

    Hi,

    You wrote that there is spare-area on X25-M :

    "Intel ships its X25-M with 80GB of MLC flash on it, but only 74.5GB is available to the user"

It's a mistake. 80 GB of flash looks like 74.5 GB to the user because 80,000,000,000 bytes of flash is 74.5 GB from the user's point of view (with 1 KB = 1024 bytes).

You didn't point out the other problem of the X25-M: LBA "optimisation". After doing a lot of random I/O writes, the sequential write speed can drop to only 10 MB/s :/
  • Kary - Thursday, March 19, 2009 - link

    The extra space would be invisible to the end user (it is used internally)

Also, addressing is normally done in binary; as a result, actual sizes in memory devices (flash, RAM...) are typically powers of two:
64 GB
128 GB

80 GB... not compatible with binary addressing

    (though 48GB of a 128GB drive being used for this seems pretty high)
  • ssj4Gogeta - Wednesday, March 18, 2009 - link

    Did you bother reading the article? He pointed out that you can get any SSD (NOT just Intel's) stuck into a situation when only a secure erase will help you out. The problem is not specific to Intel's SSD, and it doesn't occur during normal usage.
  • MarcHFR - Wednesday, March 18, 2009 - link

The problem I've pointed out has nothing to do with the performance degradation related to writes on a filled page; it's a performance degradation related to an LBA optimisation that is specific to the Intel SSD.
  • VaultDweller - Wednesday, March 18, 2009 - link

    So where would Corsair's SSD fit into this mix? It uses a Samsung MLC controller... so would it be comparable to the OCZ Summit? I would expect not since the rated sequential speeds on the Corsair are tremendously lower than the Summit, but the Summit is the closest match in terms of the internals.
  • kensiko - Wednesday, March 18, 2009 - link

No, the OCZ Summit uses the newest Samsung controller. The Corsair uses the previous controller, with lower performance.
  • VaultDweller - Wednesday, March 18, 2009 - link

    So what's the difference?

    The Summit is optimized for sequential performance at the cost of random I/O, as per the article. That is clearly not the case with the Corsair drive, so how does the Corsair hold up in terms of random I/O? That's what I'm interested in, since the sequential on the Corsair is "fast enough" if the random write performance is good.
  • jatypc - Wednesday, March 18, 2009 - link

A detailed description of how SSDs operate makes me wonder: imagine, hypothetically, that I have an SSD that is more than 90% full (e.g., 95%) and that 90% is read-only content (or almost read-only, such as exe and other application files). The remaining 10% is free or frequently written to (e.g., the page/swap file). Then, from what I understood in the article, using the drive results in very fast aging of those 10% of the SSD, because the other 90% is occupied by read-only stuff. If the disk in question has, for instance, 32 GB, those 10% are 3.2 GB (e.g., the size of a typical swap file), and after writing it approx. 10,000 times, the respective part of the disk would be dead. Being occupied by a swap file, that number of writes could be reached in one or two years... Am I right?
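[Editor's note: the capacity discussion in this thread comes down to units. A quick sketch of the arithmetic (the 80 GB figure is from the comments; nothing here involves hidden spare area):

```python
# Drive makers advertise in decimal gigabytes: 80 GB = 80 * 10^9 bytes
advertised_bytes = 80 * 10**9

# Operating systems report capacity in binary units: 1 GiB = 2^30 bytes
reported = advertised_bytes / 2**30
print(f"{reported:.1f}")  # prints "74.5"
```

So the "missing" ~5.5 GB on the X25-M is purely a decimal-vs-binary units difference, exactly as MarcHFR says; any spare area the controller reserves is invisible on top of that.]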
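[Editor's note: jatypc's worst-case scenario can be sanity-checked with back-of-the-envelope math. A sketch using the comment's own numbers (32 GB drive, 10% writable, ~10,000 P/E cycles for MLC); the daily write volume is a hypothetical figure chosen for illustration, and the model deliberately ignores that real firmware also performs static wear leveling, rotating the read-only blocks so the load spreads across the whole drive:

```python
drive_gb = 32
writable_fraction = 0.10      # only 10% of the drive absorbs all writes
pe_cycles = 10_000            # assumed MLC program/erase endurance

writable_gb = drive_gb * writable_fraction        # 3.2 GB hot region
lifetime_writes_gb = writable_gb * pe_cycles      # 32,000 GB before wear-out

daily_write_gb = 20           # hypothetical heavy swap workload
years = lifetime_writes_gb / daily_write_gb / 365
print(f"{years:.1f}")         # prints "4.4"
```

Even with no wear leveling at all, 20 GB of swap traffic per day takes over four years to exhaust a 3.2 GB hot region; with static wear leveling spreading writes across all 32 GB, the figure grows by roughly another 10x.]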
