Random/Sequential Read & Write Performance

To start with, let's look at how the Corsair Force F40 and Intel SSD 311 stack up. Remember that the F40 is based on SandForce's SF-1200 controller, meaning it gets its high performance from real-time compression and deduplication techniques that reduce what it actually writes to NAND. Easily compressible data is written as quickly as possible, while data that doesn't compress well is written much more slowly. As a cache the drive is likely to encounter data from both camps, although Intel's SRT driver does filter out sequential file operations, so large incompressible movies and images should be kept out of the cache altogether.
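To make the compressibility effect concrete, here's a minimal sketch (plain zlib standing in for SandForce's proprietary compression, so the ratios are only illustrative) showing how much of a 128KB transfer would actually need to hit NAND for easily compressed versus fully random data:

```python
import os
import zlib

BLOCK = 128 * 1024  # 128KB, matching the Iometer sequential transfer size

# Highly compressible data: a repeated pattern, similar in spirit to Iometer's default fill
compressible = (b"AnandTech" * (BLOCK // 9 + 1))[:BLOCK]
# Fully random data: effectively incompressible, like Iometer's "full random" option
incompressible = os.urandom(BLOCK)

for name, buf in (("compressible", compressible), ("incompressible", incompressible)):
    written = len(zlib.compress(buf))  # bytes that would actually reach NAND in this toy model
    print(f"{name:>15}: {len(buf)} bytes in -> ~{written} bytes written "
          f"({written / len(buf):.0%} of original)")
```

The random buffer comes back essentially the same size it went in, which is why the incompressible-write results below sit so far under the F40's peak numbers.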

Iometer - 128KB Sequential Write

Peak sequential write performance is nearly double that of Intel's SSD 311. Toss incompressible (fully random) data at the drive, however, and it's noticeably slower. In practice I'd say the F40 is probably about the speed of the 311 in sequential writes, perhaps a bit quicker.

Iometer - 128KB Sequential Read

For only having five NAND devices on board, Intel's SSD 311 boasts extremely high sequential read performance. At best the F40 equals it, but in reality the sequential read performance is likely a bit lower.

Iometer - 4KB Random Write, 8GB LBA Space, QD=3

Random write performance is higher than the SSD 311's across the board, even with incompressible data. Random read/write performance is incredibly important for a cache, especially if most sequential data is kept off the cache to begin with. Things could be quite good for the F40 here.
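For readers unfamiliar with what this test exercises, here's a rough single-threaded approximation of the access pattern on a POSIX system (a hypothetical scratch file stands in for the drive, writes go through the OS page cache, and only one request is in flight instead of three, so it illustrates the pattern rather than reproducing Iometer's numbers):

```python
import os, random, time

SPAN = 8 * 1024**3      # 8GB LBA space, as in the Iometer test
IO_SIZE = 4096          # 4KB transfers
COUNT = 20000           # number of random writes to issue
PATH = "scratch.bin"    # hypothetical scratch file standing in for the drive

buf = os.urandom(IO_SIZE)                      # incompressible payload
fd = os.open(PATH, os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, SPAN)                         # sparse 8GB file; only touched blocks use space

start = time.perf_counter()
for _ in range(COUNT):
    offset = random.randrange(0, SPAN // IO_SIZE) * IO_SIZE  # 4KB-aligned random offset
    os.pwrite(fd, buf, offset)
os.fsync(fd)
elapsed = time.perf_counter() - start
os.close(fd)

print(f"{COUNT * IO_SIZE / elapsed / 1024**2:.1f} MB/s "
      f"({COUNT / elapsed:.0f} IOPS) at 4KB random write")
```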

Iometer - 4KB Random Write, 8GB LBA Space, QD=32

Iometer - 4KB Random Read, QD=3

Random read performance unfortunately doesn't look as good for the F40. Again, Intel's SSD 311 performs a lot like an X25-M G2, which happens to do very well in our random read test. At best the F40 is an equal performer, but at worst it delivers about 75% of the SSD 311's performance.
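Queue depth is simply how many requests the host keeps outstanding at once. Here's a sketch of a QD=3 random read pattern using three worker threads against the same hypothetical scratch file from the write example (again, the OS page cache will flatter any numbers this produces):

```python
import os, random
from concurrent.futures import ThreadPoolExecutor

SPAN = 8 * 1024**3
IO_SIZE = 4096
QD = 3                    # three requests kept in flight, as in the Iometer test
COUNT = 30000
PATH = "scratch.bin"      # hypothetical scratch file from the write sketch above

fd = os.open(PATH, os.O_RDONLY)

def read_one(_):
    offset = random.randrange(0, SPAN // IO_SIZE) * IO_SIZE  # 4KB-aligned random offset
    return len(os.pread(fd, IO_SIZE, offset))

with ThreadPoolExecutor(max_workers=QD) as pool:   # QD workers = QD outstanding requests
    total = sum(pool.map(read_one, range(COUNT)))

os.close(fd)
print(f"read {total / 1024**2:.0f} MB in 4KB random chunks at QD={QD}")
```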

Without a clear victory here, we'll likely see mixed results in our storage benchmark suite.

Comments

  • c4v3man - Friday, May 13, 2011 - link

    ...because there are still some boards out there that haven't been recalled, and the potential bad press from someone using the cache feature and losing their data due to a failed port would be very damaging to their reputation. Z68 chipsets are unaffected due to their launch date...

    Anyone else think this may be the case?
  • Comdrpopnfresh - Friday, May 13, 2011 - link

    In its current implementation, Intel's Smart Response Technology is self-defeating and a wasted allocation of SSD potential:

    -It provides no benefit to, and can even bottleneck, write performance to the storage system
    -It turns read operations into an artificial source of NAND wear; an SSD used as a cache incurs inflated write counts that an SSD used as storage does not (see the rough wear estimate after this comment)
    -It squanders the SSD benefits that would be realized if the SSD were used as primary storage
    -I strongly doubt that data integrity is maintained under Maximized mode in the event of a power loss
    -A lot of the processes that would see the most benefit from SSD speeds don't get it under SRT (e.g. antivirus)

    To mitigate the downsides of SRT you can:
    -use a larger SSD
    -use an SLC rather than an MLC SSD

    But those two solutions are mutually constraining, and the resulting configuration is at odds with the entire exercise of SSD caching and the intended purpose of Intel's SRT.

    The reason these downsides hamper SRT but don't show up in SSD caching in the enterprise space is that the product constraints here are paradoxical and divergent from the intentions and goals of the technology:
    1- If you take an SSD large enough to serve as primary storage and use it as an SSD cache with SRT, every result concludes it's better off as storage rather than cache. No benefit. The maximum a large SSD can dedicate to caching is 64GB; the rest becomes a user-accessible partition that's just along for the ride (dragged through the mud?)
    2- SRT is intended for small SSDs, but the minimum size is 18.6GB. Even with Intel's own specially purposed SSD, all signs point to a larger SSD being necessary to get the most from SRT. But the point of SRT is to spend less and get away with a smaller SSD (if you consider 'getting away with' underutilizing SSD benefits, all the while coughing and hacking along the way, that's your prerogative)
    3- The chosen SSD needs sequential write speeds higher than the HDD being cached, or it will bottleneck writes to storage. You can use an SLC model, but doing so means a smaller drive and higher cost, both in contention with SRT's purpose
    4- To get the speeds needed to make SRT worthwhile, MLC is only an option on higher-capacity SSD models, because they have more channels to move data along. But then you're not using a small drive, and you're castrating an otherwise delightful primary-storage SSD.
    5- The higher write-cycle load can be mitigated by using SLC, or MLC of a large enough size for wear-leveling to slow degradation. This brings back the same problems as in 3 and 4.

    In the enterprise space the scales are much larger, the budgets higher, and hardware is deployed to attain the proper ends or because it is the best option for the usage parameters. Most likely large, expensive SLC drives are used as caches for arrays whose performance requirements necessitate them: it all fits together like a hand in a glove.

    As for having real potential as an alternative to primary SSD storage: the hardware allocated is either wasted or doesn't yield a comparably worthwhile solution. It's like taking that cozy enterprise glove and trying to make a mitten out of it.
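To put rough numbers on the wear argument in the comment above, here's a back-of-the-envelope sketch; every figure is an assumption chosen only to illustrate the mechanism (the cache size, P/E rating, daily churn and write amplification are all hypothetical, not measured from the drives in this review):

```python
# Hypothetical figures purely to illustrate the commenter's wear argument:
# every cache miss that gets promoted turns a disk *read* into an SSD *write*.
cache_size_gb = 20        # usable cache capacity (assumed)
pe_cycles = 3_000         # rated program/erase cycles for consumer MLC (assumed)
daily_promotes_gb = 15    # data newly promoted into the cache per day (assumed)
write_amp = 1.5           # controller write amplification on small writes (assumed)

nand_writes_per_day_gb = daily_promotes_gb * write_amp
drive_writes_per_day = nand_writes_per_day_gb / cache_size_gb
years_to_wear_out = pe_cycles / drive_writes_per_day / 365

print(f"{nand_writes_per_day_gb:.1f} GB of NAND writes/day "
      f"= {drive_writes_per_day:.2f} drive writes/day "
      f"-> roughly {years_to_wear_out:.0f} years to exhaust rated cycles")
```

With heavier churn or a smaller, cheaper MLC drive the math tightens quickly, which is presumably part of why Intel went with SLC NAND for the SSD 311.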
  • JNo - Sunday, May 15, 2011 - link

    +1

    SSDs are much better used as a main OS drive. Unless money is no object, any money for a cache SSD is better spent on a bigger main SSD. A small cache SSD couldn't speed up my 2TB media drive anyway, as most films are 9GB+ and would be sequential, so they'd be ignored by the cache. Game loads are the other area, but again, you're better off with a larger drive and using Steam Mover to move the ones you want onto the SSD as and when you need them, rather than accepting the slower speed, integrity risks, evictions and higher wear of a cache SSD setup. For an OS drive you're much safer using Enhanced mode (less performance), as Maximized sounds too risky, and virus scans are still slow. Overall I can barely think of a situation where SRT benefits many people much in a remotely economical fashion.
  • adamantinepiggy - Monday, May 16, 2011 - link

    I'm curious how badly this caching beats up the SSD. Like compdrpopn above, I assume there's a reason Intel chose SLC NAND, presumably its much higher write-cycle rating compared to MLC and its faster write speeds when completely full. Figure a normal consumer SSD doesn't have the majority of its cells rewritten constantly, nor is it generally completely full, while an SSD used to cache a hard drive *is* going to be constantly full and have every cell rewritten.

    Example: one installs the OS, MS Office, and any other standard apps to a normal boot SSD. In regular usage, other than pagefiles and temp files, the majority of the cells retain the same data pretty much forever over the life of the SSD. With an SSD used in an HD-caching capacity, I assume it's going to cache until completely full very quickly, and then overwrite all that data continuously as it caches different/new HD data. That's a lot of write/erase cycles if the SSD acting as a cache for the HD gets flushed often. How is a typical MLC SSD gonna handle this wear pattern?

    Now take this with a grain of salt, as I'm just conjecturing with my limited understanding of how Intel is actually caching the HD. But coupled with the question of why Intel would press a relatively "expensive" SLC SSD into service as its drive of choice for this particular usage, it leads me to believe that this type of HD-caching duty is gonna beat up normal MLC SSDs, since the wear pattern is not the same as the expected use patterns designed into consumer SSD firmware.
  • gfody - Friday, May 13, 2011 - link

    wouldn't an M4 or C300 perform better as a cache, since they have much lower latency than other SSDs?
  • Action - Saturday, May 14, 2011 - link

    I would second this comment. On the surface it would seem to me that SandForce drives would be handicapped in this particular application, as the overhead and latency of compressing the data would have a negative impact. A non-SandForce drive would be the desirable one to use, and the M4 or C300 would appear to be the ones to try first over the Vertex or Agility drives in this application.
  • sparky0002 - Friday, May 13, 2011 - link

    "Give us a 312" is sort of the message here.

    Double up on the NAND chips, use all 10 channels, and as a side effect it would be 40GB.

    The current 311 is too limited in write speed for most enthusiasts. The only option would be to run it in Enhanced mode so that writes go direct to the platters.

    If that is the case, then it would be nice to run a system off a good fast SSD, with a massive traditional disk as storage and an SSD 311 to cache it.

    Now the question becomes: if the platter has all your games and music and movies on it, just how good is Intel's cache dirtying policy? Load up a few games so they're in cache, then go listen to 25GB of music (lol) and see if the games are still in cache.
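Intel doesn't document exactly how SRT decides what to evict, but a toy least-recently-used model (an assumption for illustration, not Intel's actual algorithm) shows why the scenario above is worth testing:

```python
from collections import OrderedDict

class LRUCache:
    """Toy block cache with least-recently-used eviction (assumed policy,
    not necessarily what Intel's SRT actually implements)."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()

    def access(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)   # hit: mark most recently used
            return "hit"
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)     # evict least recently used
        self.blocks[block_id] = True            # miss: promote into the cache
        return "miss"

cache = LRUCache(capacity_blocks=20)            # pretend the cache holds 20 "units"
for blk in range(5):                            # load a few games into the cache
    cache.access(("game", blk))
for blk in range(25):                           # then stream 25 units of music
    cache.access(("music", blk))

still_cached = [b for b in cache.blocks if b[0] == "game"]
print("game blocks still cached:", still_cached)  # empty: the games were evicted
```

In practice the sequential-access filter mentioned earlier should keep most of a 25GB music session out of the cache, which is exactly the behavior the commenter is asking to see verified.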
  • Casper42 - Friday, May 13, 2011 - link

    So I cracked open my wife's new SB-based HP laptop, and while there isn't a ton of room, it really makes me wonder whether laptop vendors shouldn't be including the MBA-style SSD socket inside their laptops.

    1) You could do a traditional Dual Drive design and have a 128GB-256GB Boot SSD in the sleeker MBA Form Factor with a traditional HDD in the normal 2.5" slot for storing data.
    2) With this new feature being retroactive on a lot of existing laptops using the right chipset, and on future laptop models as well, why not offer a combo with a 64GB SSD stick pre-configured for SRT alongside the same traditional 2.5" HDD? This could be a $100-150 upgrade, and I would assume it would produce even better results when boosting traditionally slower laptop drives.

    Especially on 15.6" models, I just can't see that they couldn't squeeze this in there somewhere.

    So perhaps as a follow-up, you could grab an average 5400 and 7200 RPM laptop drive and run through the tests again with either of the two SSDs you've tested so far, or, if there's an adapter out there, the actual MBA 64GB SSD stick drive.

    Thx
  • IlllI - Friday, May 13, 2011 - link

    how is this different from readyboost?

    how is this different than the cache that typical hard drives have had for years now?

    other than performance... isn't it basically the same idea?

    and if so, i wonder how it seems to be much better/faster than those other two concepts
  • cbass64 - Friday, May 13, 2011 - link

    As far as I know, ReadyBoost only caches small, random reads. Any large reads are sent directly to the HDD. Writes aren't cached at all.

    Caches on HDDs are tiny... 32, maybe 64MB. Just not big enough to make a real difference. If you made the cache any larger, the price of the drive would go way up. Plus they use cheap flash.
