Individual Application Performance

PCMark Vantage does a great job of summarizing system performance, but I thought I'd pick a couple of applications to showcase real world strengths/weaknesses of these drives.

The first test is our Photoshop CS4 benchmark by the Retouch Artists. However, I made one small change to the way this test is run. Normally I set the number of history states in Photoshop to 1, which significantly reduces the impact of the HDD/SSD on the test and makes it a better measure of CPU/memory speed. Since this is an SSD article, I've left the setting at its default value of 20. The numbers are now a lot lower and the performance a lot more disk bound.

Adobe Photoshop CS4 - Retouch Artists Benchmark

I didn't run all of the drives through this test, just one from each major controller. The results speak for themselves. The Indilinx drives are actually the fastest MLC drives here. Even the Samsung is faster than the Intel drives in this test. Why? Sequential write speed. Even the VelociRaptor has a higher sequential write speed than the X25-M. So while sequential write speed isn't the most important metric to look at when evaluating an SSD, there are real world situations where it does matter.

Intel's performance here is just embarrassing. Sequential write speed is something Intel needs to take more seriously in the future. Throw in any amount of random read/write operations alongside your Photoshop usage and the Intel drives would redeem themselves, but this is a very realistic snapshot of their Achilles' heel.

Many of you have been asking for compiler benchmarks so I did just that. I grabbed the latest source for Pidgin (a popular IM application) and followed the developer's instructions on building it in Windows:

Compile Pidgin

Nada. I thought perhaps it wasn't stressful enough so I tried building two instances in parallel:

Compile Pidgin...Twice Simultaneously

And...nothing. It seems that building Pidgin is more CPU-bound than IO-bound, or at least its IO access isn't random enough to really benefit from an SSD. I'll keep experimenting with other compiler tests, but this one appears to be a bust for SSD/HDD performance testing.
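One way to sanity-check the CPU-vs-IO-bound conclusion above is to compare the CPU time a build's child processes consume against the wall-clock time of the run: if the two are close, the disk isn't the bottleneck. The sketch below is a hypothetical helper (the function name and example command are mine, not from the article), using Python's resource module on a Unix-like system:

```python
import resource
import subprocess
import time

def cpu_vs_wall(cmd):
    """Run cmd and return (cpu_seconds, wall_seconds) for its child processes.

    If cpu is close to wall (or exceeds it, with parallel jobs), the command
    is CPU-bound; if wall greatly exceeds cpu, the command is waiting on
    something, typically disk IO.
    """
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    wall = time.perf_counter() - start
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    cpu = (after.ru_utime - before.ru_utime) + (after.ru_stime - before.ru_stime)
    return cpu, wall
```

Pointing a helper like this at the Pidgin build would, given the results above, presumably show CPU time tracking wall time closely, which is why swapping HDD for SSD made no measurable difference.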

295 Comments

  • zodiacfml - Wednesday, September 2, 2009 - link

    Very informative, answered more than anything in my mind. Hope to see this again in the future with these drive capacities around $100.
  • mgrmgr - Wednesday, September 2, 2009 - link

    Any idea if the (mid-Sept release?) OCZ Colossus's internal RAID setup will handle the problem of RAID controllers not being able to pass Windows 7's TRIM command to the SSD array? I'm intent on getting a new Photoshop machine with two SSDs in RAID-0 as soon as Win7 releases, but the word here and elsewhere so far is that RAID will block the TRIM function.
  • kunedog - Wednesday, September 2, 2009 - link

    All the Gen2 X-25M 80GB drives are apparently gone from Newegg . . . so they've marked up the Gen1 drives to $360 (from $230):
    http://www.newegg.com/Product/Product.aspx?Item=N8...

    Unbelievable.
  • gfody - Wednesday, September 2, 2009 - link

    What happened to the gen2 160gb on Newegg? For a month the ETA was 9/2 (today) and now it's as if they never had it in the first place. The product page has been removed.

    It's like Newegg is holding the gen2 drives hostage until we buy out their remaining stock of gen1 drives.
  • iwodo - Tuesday, September 1, 2009 - link

    I think it acts as a good summary. However, someone wrote last time about the Intel drives handling Random Read / Write extremely poorly during Sequential Read / Write.

    Has Anand investigated this yet?

    I am hoping the next-gen Intel SSD coming in Q2 '10 will bring some substantial improvement.
  • statik213 - Tuesday, September 1, 2009 - link

    Does the RAID controller propagate TRIM commands to the SSD? Or will having RAID negate TRIM?
  • justaviking - Tuesday, September 1, 2009 - link

    Another great article, Anand! Thanks, and keep them coming.

    If this has already been discussed, I apologize. I'm still exhausted from reading the wonderful article, and have not read all 17 pages of comments.

    On PAGE 3, it talks about the trade-off of larger vs. smaller pages.

    I wonder if it would be feasible to make a hybrid drive, with a portion of the drive using small pages for faster performance when writing small files, and the majority of it being larger pages to keep the management of the drive reasonable.

    Any file could be written anywhere, but the controller would bias small writes to the small pages, and large writes to large files.

    Externally it would appear as a single drive, of course, but deep down in the internals, it would essentially be two drives. Each of the two portions would be tuned for maximum performance in different areas, but able to serve as backup or overflow if the other portion became full or ever got written to too many times.

    Interesting concept? Or a hare-brained idea by an ignorant amateur?
  • CList - Tuesday, September 1, 2009 - link

    Great article, wonderful to see insightful, in depth analysis.

    I'd be curious to hear anyone's thoughts on the implications of running virtual hard disk files on SSDs. I do a lot of work these days on virtual machines, and I'd love to get them feeling more snappy - especially on my laptop, which is limited to 4GB of RAM.

    For example;
    What would the constant updates of those vmdk (or "vhd") files do to the disk's lifespan?

    If the OS hosting the VM is windows 7, but the virtual machine is WinServer2003 will the TRIM command be used properly?

    Cheers,
    CList
  • pcfxer - Tuesday, September 1, 2009 - link

    Great article!

    "It seems that building Pidgin is more CPU than IO bound.."

    Obviously, Mr. Anand doesn't understand how compilers work ;). Compilers will always be CPU and memory bound; reduce the memory in your computer to, say, 256MB (or lower) and you'll see what I mean. The levels of recursion necessary to follow the productions (grammars that define the language) use up memory but would rarely touch the drive unless the OS had terrible resource management.
  • CMGuy - Wednesday, September 2, 2009 - link

    While I can't comment on the specifics of software compilers, I know that faster disk IO makes a big difference when you're performing a full build (compilation and packaging) of software.
    IDEs these days spend a lot of their time reading/writing small files (that's a lot of small, random disk IO), and a good SSD can make a huge difference here.
