Final Words

Kingston, like many of its competitors, desperately needs a simplified product lineup. On one hand, Kingston has hedged its bets: with three different controller makers supplying hardware for its six SSDs, Kingston can't go wrong. On the other hand, the Toshiba-powered SSDNow V+100 runs the gamut from mediocre random write speed to chart-topping performance in some of our workloads. The real world test results are strong enough for me to recommend the drive; however, two things don't sit well with me.

First is the price. The V+100 commands a nearly $50 premium over competing SandForce drives. While I can understand paying some premium for Toshiba's name and, hopefully, reliability, that's a bit much. Compared to the RealSSD C300 the premium is negligible, so the complaint only applies as long as SandForce is in the running.


Kingston sells SSDs both as standalone drives and as part of an upgrade kit

The second issue is the overly aggressive garbage collection. Sequential performance on the V+100 simply doesn't change regardless of how much fragmentation you throw at the drive. The drive is quick to clean up after itself and keeps performance high as long as it has the free space to do so. This is great for delivering consistent performance; however, it doesn't come for free. I am curious to see how the aggressive garbage collection impacts drive lifespan. Kingston ships the V+100 with a 3-year warranty, and to Kingston's credit I have yet to see any drives die as a result of worn-out NAND. Even if the V+100 has higher effective write amplification than the competition, your usage model will determine whether or not you ever bump into that limit.

Toshiba is clearly close to knocking this one out of the park. The V+100 is a tangible improvement over the original V+, and the drive is doing several things right. It isn't flawless, but it has finally earned a spot on the list of drives to consider.

SandForce continues to be the sensible choice, at least in terms of performance per dollar for a boot/application drive. I am careful to call it a boot/application drive because if you start storing a lot of incompressible data on the drive (e.g. movies, music, photos), SandForce quickly loses much of its performance advantage. Then you're left with Crucial's RealSSD C300, which delivers more consistent performance regardless of data type, at the expense of lower steady-state write performance. Without TRIM, the C300 can quickly get into a not-so-great performance state.

If you don't want a SandForce drive and are running an OS without TRIM support, the V+100 is probably a better option than the C300 thanks to its aggressive garbage collection. I realize this isn't the simplest recommendation, but that's the reality of today's SSD market. There are a lot of great options, but nothing is absolutely perfect.

Comments

  • Taft12 - Thursday, November 11, 2010 - link

    Can you comment on any penalty for 3Gbps SATA?

    I'm not convinced any SSD can exhibit any performance impact from the older standard except in the most contrived of benchmarks.
  • Sufo - Thursday, November 11, 2010 - link

    Well, I've seen speeds spike above 375MB/s, though of course this could well be erroneous reporting on Windows' side. I haven't actually hooked the drive up to my 3Gbps ports, so in all honesty I can't compare the two - perhaps I should run a couple of benches...
  • Hacp - Thursday, November 11, 2010 - link

    It seems that you recommend drives despite the results of your own storage bench. It shows that the Kingston is the premier SSD to have if you want a drive that handles multi-tasking well.

    SandForce is nice if you do light tasks, but who the hell buys an SSD that only does well handling light tasks? No one!
  • JNo - Thursday, November 11, 2010 - link

    "Sandforce is nice if you do light tasks, but who the hell buys an ssd that only does well handling light tasks? No one!"

    Er... I do. Well, obviously I would want a drive that handles heavy task loads well too, but there are limits to how much I can pay, and the cost per gig of some of the better performers is significantly higher. Maybe money is no object for you, but if I'm *absolutely honest* with myself, I only *very rarely* perform the type of very heavy loads that Anand uses in his heavy load bench (it has almost ridiculous levels of multi-tasking). So the premium for something that benefits me only 2-3% of the time is unjustified.
  • Anand Lal Shimpi - Thursday, November 11, 2010 - link

    That's why I renamed our light benchmark a "typical" benchmark, because it's not really a light usage case but rather more of what you'd commonly do on a system. The Kingston drive does very well there and in a few other tests, which is why I'd recommend it; however, concerns about price and write amplification keep it from being a knock out of the park.

    Take care,
    Anand
  • OneArmedScissorB - Thursday, November 11, 2010 - link

    "Sandforce is nice if you do light tasks, but who the hell buys an ssd that only does well handling light tasks? No one!"

    Uh...pretty much every single person who buys one for a laptop?
  • cjcoats - Thursday, November 11, 2010 - link

    I have what may be an unusual access pattern -- seeks within a file -- that I haven't seen any "standard" benchmarks for, and I'm curious how drives do under it, particularly the SandForce drives that depend upon (inherently sequential?) compression. Quite possibly, heavy database use has the same problem, but I haven't seen benchmarks on that, either.

    I do meteorology and other environmental modeling, and frequently we want to "strip mine" the data in various selective ways. A typical data file might look like:

    * Header stuff -- file description, etc.

    * Sequence of time steps, each of which is an
      * array of variables, each of which is a
        * 2-D or 3-D grid of values

    For example, you might have a year's worth of hourly meteorology (about 9000 time steps), for ten variables (of which temperature is the 2nd), on a 250-row by 500-column grid.

    So for this file, that's 0.5 MB per variable, 5 MB per time step, total size 45 GB, with one file per year.

    Now you might want to know, "What's the temperature for Christmas Eve?" The logical sequence of operations to be performed is:

    1. Read the header
    2. Compute timestep-record descriptions
    3. Seek to { headersize + 8592*5MB + 500KB }
    4. Read 0.5 MB

    Now with a "conventional" disk, that's two seeks and two reads (assuming the header is not already cached by either the OS or the application), returning a result almost instantaneously.
    But what does that mean for a Sandforce-style drive that relies on compression, and implicitly on reading the whole thing in sequence? Does it mean I need to issue the data request and then go take a coffee break? I remember too well when this sort of data was stored in sequential ASCII files, and such a request would mean "Go take a 3-martini lunch." ;-(
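
    In code, the access pattern above amounts to one seek and one small read at a computed offset. A minimal sketch in Python, assuming the layout described (fixed-size header, 0.5 MB per variable, ten variables per time step); the constants and file name are illustrative, not any real file format:

        HEADER_SIZE = 4096                 # assumed fixed-size header
        GRID_BYTES  = 250 * 500 * 4        # 250x500 grid of 4-byte values = 0.5 MB
        N_VARS      = 10                   # variables per time step
        STEP_BYTES  = N_VARS * GRID_BYTES  # 5 MB per time step

        def read_field(path, timestep, var_index):
            """Seek directly to one variable of one time step and read it."""
            offset = HEADER_SIZE + timestep * STEP_BYTES + var_index * GRID_BYTES
            with open(path, "rb") as f:
                f.seek(offset)             # the seek in question
                return f.read(GRID_BYTES)  # one 0.5 MB read

        # Temperature (the 2nd variable, index 1) at time step 8592:
        # field = read_field("met_2010.dat", 8592, 1)

    Whatever the drive does internally, what it is handed here is a single small random read at a large offset - that is the pattern a benchmark would have to reproduce to answer the question.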

  • FunBunny2 - Thursday, November 11, 2010 - link

    I've been asking for similar for a while. What I want to know from a test is how an SSD behaves as a data drive for a real database, DB2/Oracle/PostgreSQL with tens of gigabytes of data doing realistic random transactions. The compression used by SandForce becomes germane, in that engine writers are incorporating compression/security in storage. Whether one should use consumer/prosumer drives for real databases is not pertinent; people do.
  • Shadowmaster625 - Thursday, November 11, 2010 - link

    Yes I have been wondering about exactly this sort of thing too. I propose a seeking and logging benchmark. It should go something like this:

    Create a set of 100 log files. Some only a few bytes. Some with a few MB of random data.

    Create one very large file for seek testing. Just make an uncompressed zip file filled with 1/3 videos and 1/3 temporary internet files and 1/3 documents.

    The actual test should be two steps:

    1 - Open one log file and write a few bytes onto the end of it. Then close the file.

    2 - Open the seek test file and seek to random location and read a few bytes. Close the file.

    Then I guess you just count the number of loops this can run in a minute. Maybe run two threads, each working on 50 files.
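
    A minimal sketch of that loop in Python, assuming the 100 log files and the large seek-test file already exist; fsync is used so the log appends actually hit the drive rather than sitting in the OS cache:

        import os, random, time

        LOG_DIR   = "logs"           # pre-created: 100 files, log000.txt .. log099.txt
        SEEK_FILE = "seektest.bin"   # pre-created large file for random reads
        SEEK_SIZE = os.path.getsize(SEEK_FILE)

        def one_iteration():
            # Step 1: append a few bytes to the end of a random log file
            log = os.path.join(LOG_DIR, "log%03d.txt" % random.randrange(100))
            with open(log, "ab") as f:
                f.write(b"tick\n")
                f.flush()
                os.fsync(f.fileno())   # force the append out to the drive

            # Step 2: seek to a random location in the big file, read a few bytes
            with open(SEEK_FILE, "rb") as f:
                f.seek(random.randrange(SEEK_SIZE - 16))
                f.read(16)

        # Count how many loops complete in one minute
        deadline, loops = time.time() + 60, 0
        while time.time() < deadline:
            one_iteration()
            loops += 1
        print("loops per minute:", loops)

    A real run would also want to drop or bypass the OS page cache between passes, otherwise the seek reads mostly measure RAM rather than the drive.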
  • Shadowmaster625 - Thursday, November 11, 2010 - link

    Intel charging too much? Surely you must be joking!

    Do you know what the Dow Jones Industrial Average would be trading at if every Dow component (such as Intel) were to cut their margins down to the level of companies like Kingston? My guess would be about 3000. Something to keep in mind as we witness Bernanke's helicopter-induced meltup...
