Random Read/Write Speed

The four corners of SSD performance are random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger, which is why we run the same four Iometer tests in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than what a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo-randomly generated data and fully random data for each write to show you the maximum and minimum performance offered by SandForce-based drives in these tests. The average performance of SF drives will likely fall somewhere in between the two values you see in the graphs for each drive. For an understanding of why this matters, read our original SandForce article.
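
If you want to approximate this workload yourself, the rough Python sketch below issues 4KB writes at random offsets within an 8GB span of a preallocated test file and reports average throughput. It is only a stand-in for our actual Iometer configuration (it runs one IO at a time rather than three, and it goes through the filesystem), and the file path is a placeholder.

    import os, random, time

    # Rough stand-in for the 4KB random write test described above (not Iometer itself):
    # 4KB writes at random, 4KB-aligned offsets within an 8GB span, run for 3 minutes.
    # Assumes TEST_FILE is a preallocated 8GB file on the drive under test.
    TEST_FILE = "/mnt/testdrive/iotest.bin"   # placeholder path
    SPAN = 8 * 1024**3                        # 8GB test span
    BLOCK = 4096                              # 4KB transfers
    RUNTIME = 180                             # 3 minutes

    buf = os.urandom(BLOCK)                   # fully random (incompressible) data
    fd = os.open(TEST_FILE, os.O_RDWR | os.O_DSYNC)   # O_DSYNC so each write is flushed

    done, start = 0, time.time()
    while time.time() - start < RUNTIME:
        offset = random.randrange(SPAN // BLOCK) * BLOCK
        os.pwrite(fd, buf, offset)
        done += BLOCK

    os.close(fd)
    print(f"Average: {done / (time.time() - start) / 1024**2:.1f} MB/s")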

Desktop Iometer - 4KB Random Read

Desktop Iometer - 4KB Random Write

Desktop Iometer - 4KB Random Write (QD=32)

Random IO performance is relatively low by today's standards, but it's not terrible. I was expecting worse, but the JMF667H turns out to be rather competitive with popular big-brand drives like the Samsung 840 EVO and Crucial M500.

Sequential Read/Write Speed

To measure sequential performance I ran a one-minute 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.
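
A sequential version of the earlier sketch is even simpler: the same loop, but reading 128KB blocks back to back instead of jumping to random offsets. Note that without O_DIRECT the OS page cache will flatter the numbers, which is one reason we use Iometer for the real tests.

    import os, time

    # Rough sequential-read counterpart to the random sketch above: 128KB reads at
    # queue depth 1, walking linearly through the test file for one minute.
    TEST_FILE = "/mnt/testdrive/iotest.bin"   # placeholder path
    BLOCK = 128 * 1024
    RUNTIME = 60

    fd = os.open(TEST_FILE, os.O_RDONLY)
    offset, done, start = 0, 0, time.time()
    while time.time() - start < RUNTIME:
        data = os.pread(fd, BLOCK, offset)
        if len(data) < BLOCK:                 # wrap around at end of file
            offset = 0
            continue
        offset += BLOCK
        done += BLOCK

    os.close(fd)
    print(f"Average: {done / (time.time() - start) / 1024**2:.1f} MB/s")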

Desktop Iometer - 128KB Sequential Read

The same goes for sequential performance: it's not bad, but there are far better options at 120/128GB.

Desktop Iometer - 128KB Sequential Write

 

AS-SSD Incompressible Sequential Read/Write Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.
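
The effect is easy to see outside of a benchmark: SandForce controllers compress and deduplicate data before writing it to NAND, so the amount of flash traffic depends heavily on what the data looks like. The short Python illustration below uses zlib purely as a stand-in for whatever the controller does internally.

    import os, zlib

    # Why incompressible data is the worst case for SandForce: easy data shrinks
    # dramatically before it hits the NAND, random data does not shrink at all.
    compressible = b"\x00" * (1024 * 1024)    # repeating data (best case)
    incompressible = os.urandom(1024 * 1024)  # fully random data, like AS-SSD writes

    for name, data in (("compressible", compressible), ("incompressible", incompressible)):
        ratio = len(zlib.compress(data)) / len(data)
        print(f"{name}: compressed to {ratio:.1%} of the original size")
    # The zero-filled buffer compresses to a fraction of a percent, while the random
    # buffer stays at ~100%, which mirrors the gap between SandForce's best- and
    # worst-case write performance.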

Incompressible Sequential Read Performance

Incompressible Sequential Write Performance

Comments

  • apoe - Friday, February 7, 2014 - link

    People think US internet speeds are slow? When I lived in China, 50 kb/s was considered really fast. In the US, with a standard ISP, I can download Steam games at 6 MB/s, 120 times faster. According to Forbes, the US is in the top 10 for fastest internet speeds, but nothing tops South Korea. It also helps that plenty of larger datacenters are located here.
  • xKrNMBoYx - Tuesday, February 4, 2014 - link

    Try downloading a game or program that is 20-50+ GB on a regular basis. That eats bandwidth and your data cap. Without optical drives you're left having to download everything or do a multi-step transfer from disc to one computer to another. There are external optical drives, but that is another story. Optical drives won't become obsolete until ISPs invest more in speed and reliability at lower prices. And then there's the group of people who wouldn't use the internet either.
  • JMcGrath - Sunday, February 9, 2014 - link

    @Morawka -

    Forget Blu-ray, it will be left far behind in the (fairly) near future. I won't go as far as saying that BD is obsolete, or will be for quite a while - it has far too much influence in the current movie industry - even with 4K already hitting the market a lot faster than people ever thought possible.

    However, as other people have stated, BD is simply not a feasible solution going forward. It has served its purpose for many years now, but just like CD and DVD it will be replaced by larger and faster storage media.

    I think it's too hard to say what will become the dominant technology in the near future, and hopefully we won't have to go through another BD vs. HD-DVD type war again(!), but there are a number of different technologies in the works, many of which have already shown working prototypes aimed at replacing the aging BD tech.

    Most of these technologies have gone with smaller track widths and new laser technologies, additional layers, or a combination of the two. However, there is one new technology that sounds very promising, and one I believe (and hope) will become the adopted standard - Holographic Discs!

    As the focus shifts to ultra-high resolutions and "retina"-type displays, deeper color depths and shading, and higher true refresh rates (4K/60 or 4K/120, for example), new technologies will be needed. Most internet connections - even the fastest available in most areas - won't support these extreme bitrates, and BD simply can't keep up either.

    I have seen demos of everything from true 24-bit color panels, to 60Hz and 120Hz 4K via HDMI 2.0 or DP 1.2+, to multi-panel/multi-head displays de-multiplexed (demuxed) to show true 23:9 content at 11280x4320 @ 120Hz using multiple DP/HDMI connections.

    When talking about just the current standard 4K/30 on an RGB 4:4:4, 12-bit panel, you're talking about:

    3840 * 2160 * 36 * 30 = 8,957,952,000 bps / 8 = 1,119,744,000 bytes/s...
    = 1.04GB/s, that's 3.67TB/hour (uncompressed, true 2160P)!!

    That's 62.6GB/minute, or just over 7.33TB for a 2-hour-long movie, and this is excluding audio!!

    Now, add in new technologies (coming very soon) like 24-bit color, 60FPS, and the *real* widescreen aspect, and you're looking at closer to 367.6GB/minute and 43TB for a 2-hour movie!
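
    If you want to check those figures yourself, a few lines of Python reproduce the 4K/30, 12-bit-per-channel case above (binary GB/TB units assumed):

        # Uncompressed bitrate for 3840x2160, RGB 4:4:4 at 12 bits per channel, 30fps
        width, height, bits_per_pixel, fps = 3840, 2160, 3 * 12, 30

        bits_per_sec = width * height * bits_per_pixel * fps   # 8,957,952,000 bps
        bytes_per_sec = bits_per_sec // 8                       # 1,119,744,000 bytes/s

        print(f"{bytes_per_sec / 1024**3:.2f} GiB/s")                          # ~1.04
        print(f"{bytes_per_sec * 60 / 1024**3:.1f} GiB/minute")                # ~62.6
        print(f"{bytes_per_sec * 3600 / 1024**4:.2f} TiB/hour")                # ~3.67
        print(f"{bytes_per_sec * 7200 / 1024**4:.2f} TiB for a 2-hour movie")  # ~7.33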

    I haven't kept a real close eye on holographic disc technology lately, but I know it was originally created by GE (who actually had a working, though smaller, 4TB/layer prototype ~3 years ago!). The discs themselves look identical to a CD/DVD/BD, but rather than using a single laser on one linear track, the drive uses multiple lasers at different angles. The possibilities are really endless considering the technology itself is no different than current media: just add 2 more lasers @ 45 degrees and you increase density to 300%, add 2 more @ 30 degrees and you've increased it to 500%, 2 more lasers @ 15 degrees... you get the idea.

    The last I remember reading about the technology was that they had the working 4TB/layer model that I mentioned, but were also working on using additional lasers and a finer track, which would allow as much as 40-80TB in the future!!

    BD won the last round because Sony had such a large influence on the market, especially with the PS3 hitting the market at the same time as BD/HD-DVD players and HDTVs were becoming mainstream. It remains to be seen what the driving factor will be this time around, but with a company as large as GE behind the wheel and the demand for backup in large data centers, I think holographic discs stand a good chance of winning the next round.

    For everyone out there who works in a large DC using automated tape backups or cloud-based backups, imagine being able to not only store 80/160/320TB on a single disc the size of a CD, but to do it in less than 2 hours!! Considering you could write 80TB in 2 hours, and assuming they release PC writers @ 2X, 4X, 8X, etc., you could back up an entire enterprise data center in less than an hour, throw it in a small fireproof/waterproof safe, and you're done!
  • patrickjchase - Thursday, January 30, 2014 - link

    This is tangential, but...

    I have similar backup needs, and faced similar issues with OD unreliability (and also with HDD failures, for that matter). I ended up developing my own archiver that stripes backup files across multiple drives/disks (optical or HDD). It calculates and embeds strong block-level checksums, and provides RAID6-style, Reed-Solomon-code based redundancy within each block-sized stripe. In particular it can tolerate up to 2 block-checksum failures in each stripe (for example, if I stripe across 7 Blu-Ray disks I can tolerate read errors from any 2 within any given block-sized stripe), which means that it can tolerate a *lot* of optical disk read errors. I intentionally degraded (read: scratched up) a backup set such that every disk yielded a very large number of read errors, but the backup payload as a whole was recoverable.

    With that in mind, I find that optical (Blu-Ray) media remain very useful for backups due to their superior shock/vibration/environmental tolerance as compared to hard drives. If I were using them without my archiver I'd be pretty worried, though :-).
  • Navvie - Friday, January 31, 2014 - link

    I'd be very, very interested in seeing this software!
  • Solandri - Friday, January 31, 2014 - link

    We did that on Usenet in the 1990s. When posting a big binary (e.g. a TV show episode) you had to break it up into multiple parts to fit within the Usenet post length limit. So you might break the TV show into 50 compressed archive files (usually RAR). The problem was Usenet would frequently fail to propagate a file. So even though you posted 50, many sites might only get 49 or 47. The solution was to add parity files. So you'd post the original 50 archive files (RAR) and 5 parity files (PAR).

    Any 50 of those 55 files would allow you to recreate the original video file. You could vary the number of parity files, but about 10% was typical.

    When I was backing up stuff to DVD, I found and downloaded newer versions of the old parity programs. I broke up my backups into enough archive files and parity files that I could lose large portions of several disks, or even an entire disk, and still recover my backup. Your block-level parity checksum scheme sounds like it would be more robust and transparent, but I only had to use freely downloadable tools.

    http://en.wikipedia.org/wiki/Parity_file
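
    A stripped-down, single-parity version of the same idea fits in a few lines of Python; PAR files and the Reed-Solomon scheme described above generalize it so that any k of n parts are enough to rebuild the rest:

        from functools import reduce

        # Toy parity-file scheme: N equal-sized parts plus one XOR parity part.
        # Any single missing part can be rebuilt from the survivors; real PAR/RS
        # files tolerate multiple losses.
        def xor_parity(parts):
            return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), parts)

        parts = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # stand-ins for archive files
        parity = xor_parity(parts)

        lost = 2                                       # pretend part 2 was unreadable
        survivors = parts[:lost] + parts[lost + 1:]
        rebuilt = xor_parity(survivors + [parity])     # XOR everything that's left
        print(rebuilt)                                 # b'CCCC'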
  • Navvie - Monday, February 3, 2014 - link

    When I read patrickjchase's comment my first thought was "that's exactly like usenet."
  • peter64 - Friday, January 31, 2014 - link

    Yes, thank you Dell for making devices that are easily user upgradeable. I hate all these other notebooks being completely sealed. You can't even replace the battery.
  • peter64 - Friday, January 31, 2014 - link

    I bet if Dell didn't put that removable optical drive in there, your notebook wouldn't have a 2nd hard drive at all.

    Thanks, Dell, for giving people options and post-purchase upgradeability in a time of sealed, non-user-upgradeable devices.
  • Johnmcl7 - Thursday, January 30, 2014 - link

    I think it would have been the ideal solution for a single drive perhaps last year, but I think it's too late and too expensive now that the Crucial M500 960GB is under £330. While that's still a bit more expensive than this drive, it's much neater (one drive instead of two) and I assume power consumption and heat would be better as well. That's the option I'd go for on a machine now. If they'd managed a 2TB drive for this price it would be a lot more attractive, as that would put it beyond what's affordable with SSDs at the moment. I realise there are technical difficulties with 2TB 2.5in drives (I don't know if there are any standard drives available in this capacity) but they have to move forward at some point.
