OCZ has released a new series of SSDs called Synapse Cache today. The announcement differs a bit from typical SSD launches because OCZ is bundling Dataplex caching software with these drives, hence the "Cache" in the series name. As for the rest of the specs, you are pretty much looking at yet another 2.5" SF-2281 based SSD.

OCZ Synapse Cache Series
Raw Capacity        64GB          128GB
Available Capacity  32GB          64GB
Read Speed          550MB/s       550MB/s
Write Speed         490MB/s       510MB/s
4KB Random Read     10,000 IOPS   19,000 IOPS
4KB Random Write    75,000 IOPS   80,000 IOPS

Capacities are limited to 64GB and 128GB, but there is 50% over-provisioning, meaning that only 32GB and 64GB will be usable. It wouldn't make much sense to use a bigger SSD for caching anyway, and Intel limits its Smart Response Technology (our review) to 64GB as well. To briefly summarize the idea of caching: the software analyzes your usage and moves the most frequently accessed data to the SSD, while keeping less frequently used files on the HDD. Especially with smaller SSDs like 64GB, caching can be very useful because it can be hard to decide what goes on the SSD and what doesn't - now the software decides for you.
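The promotion logic described above can be sketched in a few lines. This is a deliberately simplified, hypothetical model of frequency-based caching - it is not how Dataplex actually works, and the block names and capacity figure are made up for illustration:

```python
from collections import Counter

class CacheSimulator:
    """Toy model of frequency-based SSD caching.
    Hypothetical sketch only; not OCZ/Dataplex's actual algorithm."""

    def __init__(self, ssd_capacity_blocks):
        self.capacity = ssd_capacity_blocks
        self.access_counts = Counter()   # block -> access frequency
        self.cached = set()              # blocks currently on the SSD

    def access(self, block):
        """Record an access and report where the read was served from."""
        hit = block in self.cached
        self.access_counts[block] += 1
        self._promote(block)
        return "SSD" if hit else "HDD"

    def _promote(self, block):
        if block in self.cached:
            return
        if len(self.cached) < self.capacity:
            self.cached.add(block)
            return
        # SSD is full: evict the coldest cached block,
        # but only if the new block is accessed more often.
        coldest = min(self.cached, key=lambda b: self.access_counts[b])
        if self.access_counts[block] > self.access_counts[coldest]:
            self.cached.remove(coldest)
            self.cached.add(block)

cache = CacheSimulator(ssd_capacity_blocks=2)
cache.access("os.dll")   # first touch: served from HDD, then promoted
cache.access("os.dll")   # now a cache hit, served from the SSD
```

Real implementations work at the block level rather than on whole files, and track far more than a raw access count, but the trade-off is the same: hot data earns its place on the SSD, cold data stays on the HDD.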

As a whole, SSD caching with OCZ Synapse might be a good option for people without an Intel Z68 chipset. Which approach is more effective remains to be seen, though. Pricing is unfortunately unknown, so it's hard to say how attractive Synapse really is. Keeping the price close to regular SSDs will be important, because most people probably won't be willing to pay a lot extra for caching software. OCZ claims immediate availability, but none of the biggest retailers have Synapse listed as of today.

Source: OCZ

37 Comments

  • Paul Tarnowski - Wednesday, September 21, 2011 - link

    That was supposed to be BIGGER, standby caps. Although what I wrote works too.
  • MrSpadge - Wednesday, September 21, 2011 - link

    "And to make caching effective on an SSD, you ideally want the OS running the show. It knows what is going to have lots of access (non-sequential, small read/write, frequent seeks)."

    Intel's driver knows that stuff, too. Not sure if it gets it from the OS or does its own profiling... which would seem unnecessary.

    "But with these caching systems, what we're going to need to see are built-in batteries, or bigger, standby caps. A computer should have enough time to flush all writes."

    If your cache works in safe mode, i.e. no data is written exclusively to the cache, losing power is just as dangerous as without the cache. If you're also caching writes, one could argue that the chance of data loss on power failure is actually reduced: the cache SSD should be able to write faster than your HDD, which shortens the time spent writing and thus the window in which a power loss can interrupt a write operation.

    MrS
  • MrSpadge - Wednesday, September 21, 2011 - link

    Intel (and probably other software solutions) can cache frequently accessed parts of large files, which you can't at the file system level. With huge data files for games this is a big plus for "not-filesystem".

    If it's done within the HDD it's very simple and reliable, and works with any OS. But it's less flexible, which I don't like.

    MrS
  • Bob-o - Wednesday, September 21, 2011 - link

    > Intel (and probably other software solutions) can cache frequently
    > accessed parts of large files, which you can't at the file system level.

    I don't think you understand how filesystems work. . .
  • applestooranges - Thursday, September 22, 2011 - link

    I looked into this as well... I think what Spadge was saying is that it's not always optimal to cache an "entire file" when the OS or an app might only be using parts of it. Like my stupid .PST file for Outlook. It would be a waste of my cache capacity to save the whole .PST file in the cache, but a smart caching software solution would probably just cache the relevant accesses to that file (block level?). This is one reason why some of those other so-called "caching" attempts are no good. Kind of like boot drives - kind of primitive to keep all that data on the SSD when only some of it, often a small fraction, is actually used very often (if at all), so you are wasting cache SSD capacity on static files. At least this is what I'm getting from looking at some of the whitepapers and presentations available.
  • Bob-o - Thursday, September 22, 2011 - link

    And my point was, filesystems don't work with entire files. Go research what a "block" is...
  • bunnyfubbles - Thursday, September 22, 2011 - link

    you're not thinking with too much tunnel vision

    caching is extremely useful for me

    OS and primary apps go on a SSD, everything else (mostly games) goes on a SSD + HDD cache array

    so really that sweeping need for this to be implemented on the file system level to be useful is a bit of an exaggeration
  • bunnyfubbles - Thursday, September 22, 2011 - link

    whoops, shouldn't be a "not" in that first sentence, time for bed I suppose :P
  • applestooranges - Thursday, September 22, 2011 - link

    Hey, if all I have to do is plug in the SSD and install the cache driver (Dataplex?), then that sounds pretty darn good to me. I just wish I could find it for sale. Anybody know where they are selling it? Not on Amazon or Newegg yet...
  • Visual - Wednesday, September 21, 2011 - link

    Write IOPS are higher than read - how does that work? If it's just write-buffering them in some cache, that's really deceptive. If it's counting on grouping them up into bigger packets, even more so.
