I'm not sure what it is about SSD manufacturers and overly complicated product stacks. Kingston has no fewer than six different SSD brands in its lineup: the E Series, M Series, SSDNow V 100, SSDNow V+ 100, SSDNow V+ 100E and SSDNow V+ 180. The E and M Series are just rebranded Intel drives; they use Intel's X25-E and X25-M G2 controllers, respectively, with a Kingston logo on the enclosure. The SSDNow V 100 is an update to the SSDNow V Series drives, both of which use the JMicron JMF618 controller. Don't get this confused with the 30GB SSDNow V Series Boot Drive, which actually uses a Toshiba T6UG1XBG controller, also used in the SSDNow V+. Confused yet? It gets better.

The standard V+ is gone, replaced by the new V+ 100, which is what we're here to take a look at today. This drive uses the same T6UG1XBG controller but with updated firmware. The new firmware enables two things: very aggressive, OS-independent garbage collection and higher overall performance. The former is very important because this is the same controller used in Apple's new MacBook Air. In fact, the performance of the Kingston V+100 drive mimics that of Apple's new SSDs:

Apple vs. Kingston SSDNow V+100 Performance

Drive                        | Sequential Write | Sequential Read | Random Write | Random Read
Apple TS064C 64GB            | 185.4 MB/s       | 199.7 MB/s      | 4.9 MB/s     | 19.0 MB/s
Kingston SSDNow V+100 128GB  | 193.1 MB/s       | 227.0 MB/s      | 4.9 MB/s     | 19.7 MB/s

Sequential speed is higher on the Kingston drive but that is likely due to the size difference. Random read/write speeds are nearly identical. And there's one phrase in Kingston's press release that sums up why Apple chose this controller for its MacBook Air: "always-on garbage collection". Remember that NAND is written to at the page level (4KB) but erased at the block level (512 pages). Unless told otherwise, SSDs hold on to data as long as possible, because erasing a block of NAND usually means re-writing the block's still-valid data to a new location before the whole block, valid and invalid pages alike, can be erased. Garbage collection is the process by which a block of NAND is cleaned for future writes.


Diagram inspired by IBM Zurich Research Laboratory
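To make the page/block asymmetry concrete, here's a minimal sketch of what reclaiming a single block involves (illustrative Python with made-up structures, not any vendor's actual algorithm): still-valid pages must be copied to a fresh block before the old one can be erased.

```python
PAGE_SIZE = 4 * 1024       # NAND is programmed one 4KB page at a time...
PAGES_PER_BLOCK = 512      # ...but erased one 512-page (2MB) block at a time


class Block:
    def __init__(self):
        # Each slot is None (erased) or a dict holding the page's data and a valid flag.
        self.pages = [None] * PAGES_PER_BLOCK
        self.erase_count = 0   # every erase consumes one program/erase cycle

    def erase(self):
        self.pages = [None] * PAGES_PER_BLOCK
        self.erase_count += 1


def garbage_collect(victim: Block, spare: Block) -> int:
    """Reclaim 'victim' by relocating its valid pages into 'spare', then erasing it.
    Returns how many pages had to be re-written (the source of write amplification)."""
    free_slots = (i for i, p in enumerate(spare.pages) if p is None)
    moved = 0
    for page in victim.pages:
        if page is not None and page["valid"]:
            spare.pages[next(free_slots)] = page   # extra NAND write
            moved += 1
    victim.erase()   # only now is the whole block free for new writes
    return moved
```

The more valid data a block still holds when it's reclaimed, the more copying (and wear) each erase costs.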

If you're too lax with your garbage collection algorithm, write speed will eventually suffer: each write ends up carrying a large penalty, driving write latency up and throughput down. Be too aggressive with garbage collection and drive lifespan suffers instead. NAND can only be written and erased a finite number of times; aggressively cleaning NAND before it's absolutely necessary keeps write performance high at the expense of wearing the NAND out more quickly.
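A rough way to quantify that trade-off (a back-of-the-envelope model, not anything Kingston publishes): if the controller reclaims blocks while a fraction v of their pages still hold valid data, each erase frees only (1 - v) of a block for new host writes but still costs a full block's worth of NAND writes, so write amplification works out to roughly 1 / (1 - v).

```python
def write_amplification(valid_fraction: float) -> float:
    """Toy model: reclaiming a block that is still 'valid_fraction' full frees
    (1 - valid_fraction) of a block for host data, yet the relocated pages plus
    that new data add up to one full block of NAND writes."""
    return 1.0 / (1.0 - valid_fraction)

# A lazy collector that waits until blocks are mostly stale:
print(write_amplification(0.10))   # ~1.1x -- little extra wear, but cleanup stalls writes
# An aggressive collector that recycles blocks still half full of valid data:
print(write_amplification(0.50))   # 2.0x -- every host write costs two NAND writes
```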

Intel was the first to really show us what real-time garbage collection looked like. Here is a graph showing the sequential write speed of Intel's X25-V:

The almost periodic square wave formed by the darker red line above shows a horribly fragmented X25-V attempting to clean itself up: at every write request the controller tries to clean some blocks, and with enough writes the drive eventually returns to peak performance. The garbage collection isn't seamless, but it does restore performance over time.

Now look at Kingston's SSDNow V+100, both before fragmentation and after:

There's hardly any difference. Actually, the best way to see this at work is to look at power draw while firing random write requests all over the drive. The SSDNow V+100 shows wild swings in power consumption during our random write test, ranging from 1.25W to 3.40W, with several swings occurring within a window of a couple of seconds. The V+100 tries to reorganize writes and recycle blocks more aggressively than we've seen from any other SSD.

The benefit is that you get peak performance out of the drive regardless of how much you use it, which is perfect for an OS without TRIM support (ahem, OS X). Now you can see why Apple chose this controller.

There is a downside, however: write amplification. For every 4KB we randomly write to a location on the drive, the actual amount of data written to NAND is much, much greater. It's the cost of constantly cleaning and reorganizing the drive for performance. While I haven't had any 50nm, 4xnm or 3xnm NAND physically wear out on me, the V+100 is the drive most likely to blow through its program/erase cycles. Keep in mind that at the 3xnm node you no longer have 10,000 cycles, but closer to 5,000 before your NAND dies. On nearly all drives we've tested this isn't an issue, but I would be concerned about the V+100. Concerned enough to recommend running it with at least 20% free space at all times. The more free space you have, the better job the controller can do at wear leveling.
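To put the endurance question in perspective, here's a rough, hypothetical calculation; the 10GB/day workload and the write-amplification figures below are illustrative assumptions, not measured values for the V+100.

```python
def years_of_life(capacity_gb, pe_cycles, host_gb_per_day, write_amp):
    """Very rough endurance estimate: total NAND write budget divided by the
    (amplified) amount of data actually written to NAND each day."""
    total_write_budget_gb = capacity_gb * pe_cycles
    nand_gb_per_day = host_gb_per_day * write_amp
    return total_write_budget_gb / nand_gb_per_day / 365

# 128GB of 3xnm NAND rated for ~5,000 P/E cycles, 10GB of host writes per day:
print(years_of_life(128, 5000, 10, write_amp=3))    # ~58 years
print(years_of_life(128, 5000, 10, write_amp=20))   # ~9 years
```

Even pessimistic write amplification leaves a healthy margin for a typical desktop workload, but it's exactly that multiplier, not the raw P/E rating, that a drive this aggressive about cleaning eats into.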

Comments

  • dagamer34 - Saturday, November 13, 2010 - link

    If you're buying an SSD, I see no reason why your OS should still be Windows XP.
  • Oxford Guy - Sunday, November 14, 2010 - link

    Some may not want to pay the Microsoft tax.
  • Out of Box Experience - Monday, November 15, 2010 - link

    What you see is irrelevant to what I use!

    I see several good reasons to use XP and none for using Windows 7.

    The number 1 OS is still XP, with the highest user base, so why does OCZ think the public will spend an extra $200 on Windows 7 just to use their overhyped SSDs?

    Why doesn't OCZ just build SSDs for the majority of people on XP, instead of making their customers jump through all these hoops just to get synthetic speeds from their drives that have little to do with real-world results?
  • sprockkets - Sunday, November 21, 2010 - link

    Oh, I don't know: TRIM support, built-in alignment support, built-in optimization for SSDs after running WEI?

    But if you want to stick with a 9-year-old OS that lacks basic security, has poor desktop video rendering, etc., go right on ahead.
  • Anand Lal Shimpi - Thursday, November 11, 2010 - link

    Representing the true nature of "random" access on the desktop is very difficult to do. Most desktops don't exhibit truly random access, instead what you get is a small file write followed by a table update somewhere else in the LBA space (not sequential, but not random). String a lot of these operations together and you get small writes peppered all over specific areas of the drive. The way we simulate this is running a 100% random sweep of 4KB writes but constraining it to a space of about 8GB of LBAs. This is overkill as well for most users, however what the benchmark does do is give an indication of worst case small file, non-sequential write performance. I agree with you that we need more synthetic tests that are representative of exactly what desktop random write behavior happens to be, however I haven't been able to come across the right combination to deliver just that. Admittedly I've been off chasing phones and other SSD issues as of late (you'll read more about this in the coming weeks) so I haven't been actively looking for a better 4KB test.
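    For what it's worth, the constrained-LBA idea is easy to approximate in software. The sketch below is a hypothetical stand-in for the workload described above, not the exact configuration we use in our tests: 4KB writes to random, 4KB-aligned offsets confined to an 8GB region of a pre-allocated test file.

    ```python
    import os
    import random

    PAGE = 4096                 # 4KB writes
    SPAN = 8 * 1024**3          # constrain the random sweep to ~8GB of LBAs
    WRITES = 10_000             # number of random writes to issue

    payload = os.urandom(PAGE)  # incompressible 4KB buffer

    # Assumes 'testfile.bin' already exists and is at least 8GB; a real test
    # would target the raw device (with O_DIRECT) to bypass the filesystem cache.
    with open("testfile.bin", "r+b") as f:
        for _ in range(WRITES):
            offset = random.randrange(SPAN // PAGE) * PAGE   # 4KB-aligned offset
            f.seek(offset)
            f.write(payload)
            os.fsync(f.fileno())   # push each write to the drive
    ```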

    Now OS X feeling snappier vs. SandForce I can completely agree with. I don't believe this is 100% attributable to the data you see here, Apple also has the ability to go in and tweak its firmwares specifically for its software. I believe the ultra quick response time you see from boot and resume comes from firmware optimizations specific to OS X. Although I am going to toss this Kingston drive in my MBP to see what things look like at that point.

    Take care,
    Anand
  • iwodo - Friday, November 12, 2010 - link

    ARH, Thx,

    "toss this Kingston drive in my MBP to see what things look like at that point."

    I never thought of that. I kept thinking mSATA was blocking anyone from testing it.

    I am looking forward to seeing your SSD issues coverage and the MBP testing.

    Tech news has been dull for a number of years; SSDs finally make things interesting again.
  • sunjava04 - Friday, November 12, 2010 - link

    Hey Anand,

    Would you provide a separate set of SSD test results on a MacBook Pro?

    Like me, many MacBook unibody or new MacBook Pro users mainly use their machines for browsing, Office, iPhoto and iTunes. We'd like an SSD to make the experience better and faster. I've searched many websites and blogs, but there's no clear answer for this.
    Even Apple is keeping quiet about TRIM support!

    After reading your article, I'm still not sure which SSD is good for my MacBook unibody. I got an idea of how garbage collection works, which was very helpful, but I don't know how long an SSD lasts under general-purpose use.

    I'd really appreciate it if you could provide a descriptive guideline on SSDs for OS X.
    Please also tell us: is it worth waiting for Intel's 3rd gen?
    I desperately need an SSD for my MacBook unibody!
    I don't mind paying a premium as long as performance stays where it is! Also, I can store movies and other data on an external hard drive!

    Sincerely,
    Rishi
  • iwodo - Saturday, November 13, 2010 - link

    Just today I read another SSD comparison review, namely Intel vs. SandForce.

    While the SandForce drive wins all the synthetic benchmarks like sequential read/write and random read/write,

    it was booting slower, starting apps slower and finishing tasks slower than the Intel SSD,
    and by a noticeable percentage (10 - 30%).

    I am beginning to think there are things SandForce doesn't do well at all. But again, we have yet to find out what.
  • Out of Box Experience - Tuesday, November 16, 2010 - link

    SandForce controllers give you the "illusion" of speed by writing less data to flash than controllers without hardware compression.

    If I wanted to test the speed of a copy and paste involving 200MB of data on a SandForce-based drive, how can I tell exactly how much data ends up in the flash cells?

    I mean, would Windows tell me how much data is represented in the flash cells (200MB), or how much compressed data is actually in the cells (maybe only 150MB)?

    The only way I can see to fairly compare an SSD with hardware compression and one without is to be sure you are actually writing the same amount of data to the flash cells (in this case, 200MB).

    If SandForce-based SSDs will only tell you how much data is represented and not what is actually on the drive, then I think the best way would be to use data that cannot be compressed.

    The tests I described in another post here involved copying and pasting 200MB of data which took 55 seconds on an ATOM computer with a Vertex 2
    200MB / 55 sec = 3.6MB/sec

    But if the 200MB was only a representation and the actual amount of data was for example 165MB in flash, then the actual throughput of my Vertex was even worse than I thought (In this case - 165MB / 55sec = 3.0MB/sec)

    I need to know exactly how much data is indeed in flash or I need to start using non-compressible data for my tests

    Make sense?
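    A hypothetical sketch of generating the two kinds of 200MB test payloads described above (file names and sizes are just for illustration): data from os.urandom() is effectively incompressible, so a compressing controller has to write essentially all of it, while a file of zeros compresses almost completely.

    ```python
    import os

    SIZE = 200 * 1024 * 1024   # 200MB test payload

    # Effectively incompressible: the controller has to store roughly all 200MB.
    with open("incompressible.bin", "wb") as f:
        f.write(os.urandom(SIZE))

    # Highly compressible: a compressing controller stores far less than 200MB.
    with open("compressible.bin", "wb") as f:
        f.write(b"\x00" * SIZE)

    # Copy each file to the SSD under test and time it; the gap between the two
    # results shows how much of the drive's apparent speed comes from compression.
    ```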
  • Out of Box Experience - Monday, November 15, 2010 - link

    There has to be a missing piece in our performance tests, something that these companies know and we don't.
    ------------------------------------------

    Like smoke & mirrors?

    SandForce controllers compress the data to give you the impression of speed.

    Check the speed without compression and then compare drives.
