I'm not sure what it is about SSD manufacturers and overly complicated product stacks. Kingston has no fewer than six different SSD brands in its lineup: the E Series, M Series, SSDNow V 100, SSDNow V+ 100, SSDNow V+ 100E and SSDNow V+ 180. The E and M Series are just rebranded Intel drives; they use Intel's X25-E and X25-M G2 controllers, respectively, with a Kingston logo on the enclosure. The SSDNow V 100 is an update to the SSDNow V Series drives, both of which use the JMicron JMF618 controller. Don't confuse it with the 30GB SSDNow V Series Boot Drive, which actually uses a Toshiba T6UG1XBG controller, the same controller used in the SSDNow V+. Confused yet? It gets better.

The standard V+ is gone and replaced by the new V+ 100, which is what we're here to take a look at today. This drive uses the T6UG1XBG controller but with updated firmware. The new firmware enables two things: very aggressive OS-independent garbage collection and higher overall performance. The former is very important as this is the same controller used in Apple's new MacBook Air. In fact, the performance of the Kingston V+100 drive mimics that of Apple's new SSDs:

Apple vs. Kingston SSDNow V+100 Performance

Drive                        | Sequential Write | Sequential Read | Random Write | Random Read
Apple TS064C 64GB            | 185.4 MB/s       | 199.7 MB/s      | 4.9 MB/s     | 19.0 MB/s
Kingston SSDNow V+100 128GB  | 193.1 MB/s       | 227.0 MB/s      | 4.9 MB/s     | 19.7 MB/s

Sequential speed is higher on the Kingston drive, but that is likely due to the size difference. Random read/write speeds are nearly identical. And there's one phrase in Kingston's press release that sums up why Apple chose this controller for its MacBook Air: "always-on garbage collection". Remember that NAND is written at the page level (4KB) but erased at the block level (512 pages). Unless told otherwise, SSDs try to retain data as long as possible, because erasing a block of NAND usually means erasing a bunch of valid as well as invalid data and then re-writing the valid data to a new block. Garbage collection is the process by which a block of NAND is cleaned for future writes.
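
To make the page/block asymmetry concrete, here is a minimal garbage-collection sketch in Python (the 4KB page and 512-page block figures above are used purely for illustration; exact geometry varies by NAND part, and this is not any vendor's actual algorithm):

PAGE_SIZE = 4 * 1024          # NAND is written one 4KB page at a time
PAGES_PER_BLOCK = 512         # but erased a whole block at a time

def collect_block(block):
    """Reclaim one block: copy out the still-valid pages, then erase the block.

    block is a list of pages, each either None (stale data) or a bytes payload.
    Returns the relocated pages and the extra NAND writes the cleanup caused.
    """
    valid = [p for p in block if p is not None]
    extra_nand_writes = len(valid) * PAGE_SIZE   # valid data must be rewritten elsewhere first
    block[:] = [None] * PAGES_PER_BLOCK          # only then can the block be erased and reused
    return valid, extra_nand_writes

# Example: a block where only ~10% of the pages still hold valid data.
block = [b"x" * PAGE_SIZE if i % 10 == 0 else None for i in range(PAGES_PER_BLOCK)]
valid, extra = collect_block(block)
print(f"relocated {len(valid)} pages, costing {extra // 1024} KB of extra NAND writes")

Those relocation writes are exactly what an aggressive garbage collector spends the drive's idle time, and endurance, on.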


Diagram inspired by IBM Zurich Research Laboratory

If you're too lax with your garbage collection algorithm, write speed will eventually suffer: each write ends up carrying a large penalty, driving write latency up and throughput down. Be too aggressive with garbage collection and drive lifespan suffers instead. NAND can only be written/erased a finite number of times; aggressively cleaning blocks before it's absolutely necessary keeps write performance high at the expense of wearing the NAND out more quickly.

Intel was the first to really show us what real-time garbage collection looked like. Here is a graph showing the sequential write speed of Intel's X25-V:

The almost periodic square wave formed by the darker red line above shows a horribly fragmented X25-V attempting to clean itself up: at every write request the controller tries to clean some blocks, and with enough writes the drive eventually returns to peak performance. The garbage collection isn't seamless, but it does eventually restore performance.

Now look at Kingston's SSDNow V+100, both before fragmentation and after:

There's hardly any difference. Actually, the best way to see this at work is to look at power draw when firing random write requests all over the drive. The SSDNow V+100 has wild swings in power consumption during our random write test, ranging from 1.25W to 3.40W, with the swings happening several times within a window of a couple of seconds. The V+100 aggressively tries to reorganize writes and recycle bad blocks, more aggressively than we've seen from any other SSD.

The benefit of this is you get peak performance out of the drive regardless of how much you use it, which is perfect for an OS without TRIM support - ahem, OS X. Now you can see why Apple chose this controller.

There is a downside, however: write amplification. For every 4KB we randomly write to a location on the drive, the actual amount of data written to NAND is much, much greater. It's the cost of constantly cleaning/reorganizing the drive for performance. While I haven't had any 50nm, 4xnm or 3xnm NAND physically wear out on me, the V+100 is the drive most likely to blow through those program/erase cycles. Keep in mind that at the 3xnm node you no longer have 10,000 cycles, but closer to 5,000 before your NAND dies. On nearly all drives we've tested this isn't an issue, but I would be concerned about the V+100. Concerned enough to recommend running it with at least 20% free space at all times. The more free space you have, the better job the controller can do at wear leveling.
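
As a rough back-of-the-envelope illustration of why that free-space recommendation matters (the numbers below are hypothetical and not measurements of the V+100), this is how capacity, rated P/E cycles and write amplification combine into total host writes:

def host_writes_tb(capacity_gb, pe_cycles, write_amplification):
    """Total host writes (in TB) before the rated program/erase cycles are exhausted."""
    total_nand_writes_gb = capacity_gb * pe_cycles
    return total_nand_writes_gb / write_amplification / 1024

# A 128GB drive built from 3xnm NAND rated for roughly 5,000 cycles:
for wa in (1.1, 3.0, 10.0):
    print(f"write amplification {wa:>4}: ~{host_writes_tb(128, 5000, wa):,.0f} TB of host writes")

More free space gives the controller more clean blocks to shuffle data into, which keeps write amplification toward the low end of that range.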

Comments

  • Taft12 - Thursday, November 11, 2010 - link

    Can you comment on any penalty for 3Gbps SATA?

    I'm not convinced any SSD shows a performance impact from the older standard except in the most contrived of benchmarks.
  • Sufo - Thursday, November 11, 2010 - link

    Well, I've seen speeds spike above 375MB/s, though of course this could well be erroneous reporting on Windows' side. I haven't actually hooked the drive up to my 3Gbps ports, so in all honesty I can't compare the two - perhaps I should run a couple of benches...
  • Hacp - Thursday, November 11, 2010 - link

    It seems that you recommend drives despite the results of your own storage bench. It shows that the Kingston is the premier SSD to have if you want a drive that handles multi-tasking well.

    Sandforce is nice if you do light tasks, but who the hell buys an ssd that only does well handling light tasks? No one!
  • JNo - Thursday, November 11, 2010 - link

    "Sandforce is nice if you do light tasks, but who the hell buys an ssd that only does well handling light tasks? No one!"

    Er... I do. Well, obviously I would want a drive that does well handling heavy task loads as well, but there are limits to how much I can pay, and the cost per gig of some of the better performers is significantly higher. Maybe money is no object for you, but if I'm *absolutely honest* with myself, I only *very rarely* perform the type of very heavy loads that Anand uses in his heavy load bench (it has almost ridiculous levels of multi-tasking). So the premium for something that benefits me only 2-3% of the time is unjustified.
  • Anand Lal Shimpi - Thursday, November 11, 2010 - link

    That's why I renamed our light benchmark a "typical" benchmark, because it's not really a light usage case but rather more of what you'd commonly do on a system. The Kingston drive does very well there and in a few other tests, which is why I'd recommend it - however concerns about price and write amplification keep it from being a knock out of the park.

    Take care,
    Anand
  • OneArmedScissorB - Thursday, November 11, 2010 - link

    "Sandforce is nice if you do light tasks, but who the hell buys an ssd that only does well handling light tasks? No one!"

    Uh...pretty much every single person who buys one for a laptop?
  • cjcoats - Thursday, November 11, 2010 - link

    I have what may be an unusual access pattern -- seeks within a file -- that I haven't seen any "standard" benchmarks for, and I'm curious how drives do under it, particularly the Sandforce drives that depend upon (inherently sequential?) compression. Quite possibly, heavy database use has the same problem, but I haven't seen benchmarks on that, either.

    I do meteorology and other environmental modeling, and frequently we want to "strip mine" the data in various selective ways. A typical data file might look like:

    * Header stuff -- file description, etc.

    * Sequence of time steps, each of which is an
    > array of variables, each of which is a
    + 2-D or 3-D grid of values

    For example, you might have a year's worth of hourly meteorology (about 9000 time steps), for ten variables (of which temperature is the 2nd), on a 250-row by 500-column grid.

    So for this file, that's 0.5 MB per variable, 5 MB per time step, total size 45 GB, with one file per year.

    Now you might want to know, "What's the temperature for Christmas Eve?" The logical sequence of operations to be performed is:

    1. Read the header
    2. Compute timestep-record descriptions
    3. Seek to { headersize + 8592*5MB + 500KB }
    4. Read 0.5 MB

    Now with a "conventional" disk, that's two seeks and two reads (assuming the header is not already cached by either the OS or the application), returning a result almost instantaneously.
    But what does that mean for a Sandforce-style drive that relies on compression, and implicitly on reading the whole thing in sequence? Does it mean I need to issue the data request and then go take a coffee break? I remember too well when this sort of data was stored in sequential ASCII files, and such a request would mean "Go take a 3-martini lunch." ;-(
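
    In code, the whole access boils down to something like this (a Python sketch using the sizes from the example above; the header size and file name are made up):

    MB = 1024 * 1024
    HEADER_SIZE = 64 * 1024             # placeholder; the real size comes from reading the header
    VAR_SIZE = MB // 2                  # 0.5 MB per variable per time step
    STEP_SIZE = 10 * VAR_SIZE           # 5 MB per time step (ten variables)
    TIMESTEP = 8592                     # the hour we want
    VAR_INDEX = 1                       # temperature is the 2nd variable (0-based index 1)

    with open("met_hourly.bin", "rb") as f:   # hypothetical file name
        header = f.read(HEADER_SIZE)          # steps 1-2: read the header, compute record layout
        f.seek(HEADER_SIZE + TIMESTEP * STEP_SIZE + VAR_INDEX * VAR_SIZE)   # step 3: one seek
        temperature_grid = f.read(VAR_SIZE)   # step 4: one 0.5 MB read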

  • FunBunny2 - Thursday, November 11, 2010 - link

    I've been asking for similar for a while. What I want to know from a test is how an SSD behaves as a data drive for a real database, DB2/Oracle/PostgreSQL with tens of gigs of data doing realistic random transactions. The compression used by SandForce becomes germane, in that engine writers are incorporating compression/security in storage. Whether one should use consumer/prosumer drives for real databases is not pertinent; people do.
  • Shadowmaster625 - Thursday, November 11, 2010 - link

    Yes I have been wondering about exactly this sort of thing too. I propose a seeking and logging benchmark. It should go something like this:

    Create a set of 100 log files. Some only a few bytes. Some with a few MB of random data.

    Create one very large file for seek testing. Just make an uncompressed zip file filled with 1/3 videos and 1/3 temporary internet files and 1/3 documents.

    The actual test should be two steps:

    1 - Open one log file and write a few bytes onto the end of it. Then close the file.

    2 - Open the seek test file and seek to random location and read a few bytes. Close the file.

    Then I guess you just count the number of loops this can run in a minute. Maybe run two threads, each working on 50 files.
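
    Roughly, the inner loop might look like this in Python (file names and byte counts are placeholders, and the 100 log files plus the big seek file from the setup above have to exist first):

    import os, random, time

    LOG_FILES = [f"log_{i:03d}.dat" for i in range(100)]   # the 100 pre-created log files
    SEEK_FILE = "seektest.zip"                             # the one very large seek-test file
    SEEK_FILE_SIZE = os.path.getsize(SEEK_FILE)

    def one_iteration():
        # Step 1: append a few bytes to a randomly chosen log file, then close it.
        with open(random.choice(LOG_FILES), "ab") as log:
            log.write(os.urandom(16))
        # Step 2: seek to a random location in the big file, read a few bytes, close it.
        with open(SEEK_FILE, "rb") as big:
            big.seek(random.randrange(SEEK_FILE_SIZE))
            big.read(16)

    # Count how many loops complete in one minute (run two of these in threads for the 2x50 split).
    deadline = time.time() + 60
    loops = 0
    while time.time() < deadline:
        one_iteration()
        loops += 1
    print(f"{loops} loops per minute")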
  • Shadowmaster625 - Thursday, November 11, 2010 - link

    Intel charging too much, surely you must be joking!

    Do you know what the Dow Jones Industrial Average would be trading at if every DOW component (such as Intel) were to cut their margins down to the level of companies like Kingston? My guess would be about 3000. Something to keep in mind as we witness Bernanke's helicopter induced meltup...
