Original Link: http://www.anandtech.com/show/7708/ocz-vertex-460-240gb-review

The last few months have not been easy for OCZ. After long-lasting financial issues, the company filed for bankruptcy on November 27th, and a week later Toshiba announced that it would be acquiring the assets for $35 million.

Yesterday OCZ announced that the acquisition has been completed and was finally able to shed some light on the details of the deal. To my surprise, OCZ will continue to operate as an independent subsidiary and won't be integrated into Toshiba's own SSD team. I'm guessing Toshiba sees financial potential in OCZ and is hence keeping things as they are. The only change aside from the change of ownership is a new brand logo and name: OCZ is now called OCZ Storage Solutions to further emphasize the company's focus. Last I heard, OCZ was looking for a buyer for its PSU business, but it seems one hasn't been found yet.

Update 1/31: We finally have an official statement regarding warranties.

Update 2/1: OCZ has a buyer for its PSU division and we'll have more details in a couple of weeks. The RAM and cooling divisions were discontinued a long while ago, though.

Comparison of OCZ's Barefoot 3 Based SSDs
                  | Vector 150           | Vertex 460           | Vector               | Vertex 450
Controller        | Indilinx Barefoot 3  | Indilinx Barefoot 3  | Indilinx Barefoot 3  | Indilinx Barefoot 3
NAND              | 19nm Toshiba         | 19nm Toshiba         | 25nm IMFT            | 20nm IMFT
Over-Provisioning | 12%                  | 12%                  | 7%                   | 7%
Encryption        | AES-256              | AES-256              | N/A                  | AES-256
Endurance         | 50GB/day for 5 years | 20GB/day for 3 years | 20GB/day for 5 years | 20GB/day for 3 years
Warranty          | 5 years              | 3 years              | 5 years              | 3 years
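For context, the endurance ratings translate into total write volume over the warranty period; a quick back-of-the-envelope conversion:

```python
# Convert a "GB/day for N years" endurance rating into total GB written.
def rating_to_tbw(gb_per_day, years):
    """Total GB the rating allows over the warranty period."""
    return gb_per_day * 365 * years

print(rating_to_tbw(50, 5))  # Vector 150: 91250 GB, roughly 91 TB
print(rating_to_tbw(20, 3))  # Vertex 460: 21900 GB, roughly 22 TB
```

In other words, the Vector 150's rating allows roughly four times the total writes of the Vertex 460, despite the two drives using NAND with identical part numbers.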

The Vertex 460 closely resembles OCZ's flagship Vector 150. In terms of hardware, the only difference between the two is that the Barefoot 3 controller in the Vertex 460 is clocked slightly lower than the one in the Vector 150: 352MHz versus 397MHz. Controller speed isn't proportional to overall performance, but there are scenarios (such as intensive read/write workloads) where a faster controller helps.

Both drives actually use exactly the same NAND (identical part numbers), but each Vector 150 goes through more testing and validation cycles to make sure the higher endurance criteria are met. Even though the NAND should be the same in both drives, bear in mind that endurance specifications are always minimums -- one part can be more durable than another as long as both meet spec. By doing additional validation, OCZ is able to pick the highest-endurance parts for the Vector 150, while lower-quality chips (still good enough to meet the mainstream endurance requirements) end up in the Vertex 460.

The choice of identical NAND in both models is indeed odd, but I'm guessing Toshiba had a hand in this. The Vertex 450 used Micron NAND, and obviously Toshiba doesn't want a competitor's NAND in its products; hence the Vertex 450 is replaced by the 460 and Toshiba NAND.

OCZ Vertex 460 Specifications
                              | 120GB    | 240GB    | 480GB
Sequential Read               | 530MB/s  | 540MB/s  | 545MB/s
Sequential Write              | 420MB/s  | 525MB/s  | 525MB/s
4KB Random Read               | 80K IOPS | 85K IOPS | 95K IOPS
4KB Random Write              | 90K IOPS | 90K IOPS | 90K IOPS
Steady-State 4KB Random Write | 12K IOPS | 21K IOPS | 23K IOPS

Like the Vector 150, the Vertex 460 switches to 12% over-provisioning. This seems to be an industry-wide trend, and to be honest I'm happy with it. The few percent of extra spare area make a huge difference in IO consistency, which ultimately translates into a better user experience. 
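As a rough illustration of where a figure like 12% comes from, consider a "240GB" drive built from 256GiB of raw NAND (the raw capacity here is my assumption, not an OCZ spec):

```python
# Back-of-the-envelope over-provisioning math for a "240GB" drive.
# Assumes 256GiB of raw NAND; the raw capacity is an assumption, not a spec.
GIB = 1024**3   # NAND is sized in binary gigabytes
GB = 1000**3    # advertised capacity uses decimal gigabytes

raw_bytes = 256 * GIB
user_bytes = 240 * GB

op_fraction = (raw_bytes - user_bytes) / raw_bytes
print(f"Effective spare area: {op_fraction:.1%}")   # roughly 12-13%
```

Note that vendors count over-provisioning in different ways (relative to raw or to user capacity), so the exact percentage depends on the convention used.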

Test System

CPU                | Intel Core i5-2500K running at 3.3GHz (Turbo and EIST enabled)
Motherboard       | AsRock Z68 Pro3
Chipset            | Intel Z68
Chipset Drivers    | Intel + Intel RST 10.2
Memory             | G.Skill RipjawsX DDR3-1600 4 x 8GB (9-9-9-24)
Video Card         | Palit GeForce GTX 770 JetStream 2GB GDDR5 (1150MHz core clock; 3505MHz GDDR5 effective)
Video Drivers      | NVIDIA GeForce 332.21 WHQL
Desktop Resolution | 1920 x 1080
OS                 | Windows 7 x64

Thanks to G.Skill for the RipjawsX 32GB DDR3 DRAM kit

Performance Consistency

In our Intel SSD DC S3700 review Anand introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed some additional testing to show that. The reason we don't have consistent IO latency with SSDs is because inevitably all controllers have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying that can result in higher peak performance at the expense of much lower worst-case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below we take a freshly secure erased SSD and fill it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. We run the test for just over half an hour, nowhere near what we run our steady state tests for but enough to give a good look at drive behavior once all spare area fills up.

We record instantaneous IOPS every second for the duration of the test and then plot IOPS vs. time and generate the scatter plots below. Each set of graphs features the same scale. The first two sets use a log scale for easy comparison, while the last set of graphs uses a linear scale that tops out at 40K IOPS for better visualization of differences between drives.
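The data reduction behind those scatter plots is simple; a minimal sketch, assuming the logging tool emits cumulative IO completion counts once per second (the sample numbers below are invented for illustration):

```python
# Turn per-second cumulative IO completion counts into the instantaneous
# IOPS values that get plotted. The sample counts are made up.
def instantaneous_iops(cumulative_ios):
    """cumulative_ios[t] = total IOs completed by second t."""
    return [b - a for a, b in zip(cumulative_ios, cumulative_ios[1:])]

samples = [0, 42000, 83000, 121000, 135000, 148000]  # hypothetical log
print(instantaneous_iops(samples))  # [42000, 41000, 38000, 14000, 13000]
```

Plotting these per-second values against time produces exactly the kind of scatter plot shown below, including the characteristic drop once spare area runs out.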

The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews however, we vary the percentage of the drive that gets filled/tested depending on the amount of spare area we're trying to simulate. The buttons are labeled with the advertised user capacity had the SSD vendor decided to use that specific amount of spare area. If you want to replicate this on your own all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives we've tested here but not all controllers are guaranteed to behave the same way.
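The partition arithmetic can be sketched as follows. This assumes 256GiB of raw NAND behind the drive's user capacity and counts spare area as a fraction of raw capacity; both are assumptions on my part, and conventions vary between vendors, so treat the numbers as illustrative:

```python
# Partition size needed to leave a target fraction of raw NAND as spare
# area. The 256GiB raw capacity is an assumption, not an OCZ spec.
GIB, GB = 1024**3, 1000**3
RAW_BYTES = 256 * GIB

def partition_bytes_for_op(raw_bytes, target_op):
    """User-visible partition size leaving target_op of raw capacity spare."""
    return raw_bytes * (1 - target_op)

size = partition_bytes_for_op(RAW_BYTES, 0.25)
print(f"Partition to ~{size / GB:.0f}GB to simulate 25% total spare area")
```

Any LBA space outside the partition is never written to, so (after a TRIM) the controller can treat it as additional spare area.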

The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing in that case is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).

The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.

  OCZ Vertex 460 240GB OCZ Vector 150 240GB Corsair Neutron 240GB SanDisk Extreme II 480GB Samsung SSD 840 Pro 256GB
25% OP

Performance consistency is more or less a match for the Vector 150. There is essentially no difference, only slight variation that may well be caused by the nature of garbage collection algorithms (i.e. the result is never exactly the same). 


TRIM Validation

To test TRIM, I first filled all user-accessible LBAs with sequential data and then tortured the drive with 4KB random writes (100% LBA span, QD=32) for 60 minutes. After the torture I TRIM'ed the drive (a quick format in Windows 7/8) and ran HD Tach to verify that TRIM is functional.

And TRIM works. The HD Tach graph also shows the impact of OCZ's performance mode, although in a negative light. Once half of the LBAs have been filled, all data has to be reorganized, which causes a drop in write performance as the drive reorganizes existing data at the same time HD Tach is writing to it. Once the reorganization is over, performance recovers to close to its original level.

AnandTech Storage Bench 2013

When Anand built the AnandTech Heavy and Light Storage Bench suites in 2011 he did so because we did not have any good tools at the time that would begin to stress a drive's garbage collection routines. Once all blocks have a sufficient number of used pages, all further writes will inevitably trigger some sort of garbage collection/block recycling algorithm. Our Heavy 2011 test in particular was designed to do just this. By hitting the test SSD with a large enough and write intensive enough workload, we could ensure that some amount of GC would happen.

There were a couple of issues with our 2011 tests that we've been wanting to rectify however. First off, all of our 2011 tests were built using Windows 7 x64 pre-SP1, which meant there were potentially some 4K alignment issues that wouldn't exist had we built the trace on a system with SP1. This didn't really impact most SSDs but it proved to be a problem with some hard drives. Secondly, and more recently, we've shifted focus from simply triggering GC routines to really looking at worst-case scenario performance after prolonged random IO.

For years we'd felt the negative impacts of inconsistent IO performance with all SSDs, but until the S3700 showed up we didn't think to actually measure and visualize IO consistency. The problem with our IO consistency tests is that they are very focused on 4KB random writes at high queue depths and full LBA spans–not exactly a real world client usage model. The aspects of SSD architecture that those tests stress however are very important, and none of our existing tests were doing a good job of quantifying that.

We needed an updated heavy test, one that dealt with an even larger set of data and one that somehow incorporated IO consistency into its metrics. We think we have that test. The new benchmark doesn't even have a formal name; we've just been calling it The Destroyer (although AnandTech Storage Bench 2013 is likely a better fit for PR reasons).

Everything about this new test is bigger and better. The test platform moves to Windows 8 Pro x64. The workload is far more realistic. Just as before, this is an application trace based test–we record all IO requests made to a test system, then play them back on the drive we're measuring and run statistical analysis on the drive's responses.

Like most modern benchmarks, the Destroyer is crafted out of a series of scenarios. For this benchmark we focused heavily on photo editing, gaming, virtualization, general productivity, video playback and application development. Rough descriptions of the various scenarios are in the table below:

AnandTech Storage Bench 2013 Preview - The Destroyer
Workload                | Description                                                                                                                 | Applications Used
Photo Sync/Editing      | Import images, edit, export                                                                                                 | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox
Gaming                  | Download/install games, play games                                                                                          | Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite
Virtualization          | Run/manage VM, use general apps inside VM                                                                                   | VirtualBox
General Productivity    | Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan  | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, Ad-Aware
Video Playback          | Copy and watch movies                                                                                                       | Windows 8
Application Development | Compile projects, check out code, download code samples                                                                     | Visual Studio 2012

While some tasks remained independent, many were stitched together (e.g. system backups would take place while other scenarios were taking place). The overall stats give some justification to what we've been calling this test internally:

AnandTech Storage Bench 2013 Preview - The Destroyer, Specs
                    | The Destroyer (2013)                     | Heavy 2011
Reads               | 38.83 million                            | 2.17 million
Writes              | 10.98 million                            | 1.78 million
Total IO Operations | 49.8 million                             | 3.99 million
Total GB Read       | 1583.02 GB                               | 48.63 GB
Total GB Written    | 875.62 GB                                | 106.32 GB
Average Queue Depth | ~5.5                                     | ~4.6
Focus               | Worst-case multitasking, IO consistency  | Peak IO, basic GC routines

SSDs have grown in their performance abilities over the years, so we wanted a new test that could really push high queue depths at times. The average queue depth is still realistic for a client workload, but the Destroyer has some very demanding peaks. When we first introduced the Heavy 2011 test, some drives would take multiple hours to complete it; today most high performance SSDs can finish the test in under 90 minutes. The Destroyer? So far the fastest we've seen it go is 10 hours. Most high performance SSDs we've tested seem to need around 12–13 hours per run, with mainstream drives taking closer to 24 hours. The read/write balance is also a lot more realistic than in the Heavy 2011 test. Back in 2011 we just needed something that had a ton of writes so we could start separating the good from the bad. Now that the drives have matured, we felt a test that was a bit more balanced would be a better idea.

Despite the balance recalibration, there is just a ton of data moving around in this test. Ultimately the sheer volume of data here and the fact that there's a good amount of random IO courtesy of all of the multitasking (e.g. background VM work, background photo exports/syncs, etc...) makes the Destroyer do a far better job of giving credit for performance consistency than the old Heavy 2011 test. Both tests are valid; they just stress/showcase different things. As the days of begging for better random IO performance and basic GC intelligence are over, we wanted a test that would give us a bit more of what we're interested in these days. As Anand mentioned in the S3700 review, having good worst-case IO performance and consistency matters just as much to client users as it does to enterprise users.

We are reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the throughput of the drive during the time that it was running the Destroyer workload. This can be a very good indication of overall performance. What average data rate doesn't do a good job of is taking into account response time of very bursty (read: high queue depth) IO. By reporting average service time we heavily weigh latency for queued IOs. You'll note that this is a metric we've been reporting in our enterprise benchmarks for a while now. With the client tests maturing, the time was right for a little convergence.
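The two metrics reduce to straightforward arithmetic over the trace results; a minimal sketch with made-up sample numbers (the field layout is hypothetical, not AnandTech's actual trace format):

```python
# Compute the two Destroyer metrics from per-IO records.
# All sample numbers below are invented for illustration.
def destroyer_metrics(io_bytes, service_times_us, wall_clock_s):
    """Average data rate in MB/s and average service time in microseconds."""
    avg_data_rate = sum(io_bytes) / 1e6 / wall_clock_s
    avg_service_time = sum(service_times_us) / len(service_times_us)
    return avg_data_rate, avg_service_time

# Three IOs completed in 1 ms of wall-clock time:
rate, svc = destroyer_metrics([4096, 8192, 4096], [90.0, 150.0, 120.0], 0.001)
print(f"{rate:.1f} MB/s, {svc:.0f} us average service time")
```

Averaging service time rather than just throughput is what lets the metric punish drives whose queued IOs occasionally stall, even if their aggregate data rate looks healthy.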

AT Storage Bench 2013 - The Destroyer (Data Rate)

The Vertex 460 is only a hair slower than the Vector 150 in our Storage Bench 2013. The difference actually falls within the margin of error, so it seems that in spite of the lower clock speed, the performance is essentially the same.

AT Storage Bench 2013 - The Destroyer (Service Time)

Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
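The access pattern itself is simple to sketch; the snippet below generates 4KB-aligned random offsets within an 8GB span, purely as an illustration of the test configuration (Iometer handles this internally):

```python
# Illustration of the test's access pattern: 4KB-aligned random offsets
# within an 8GB region. Not Iometer itself, just the pattern it produces.
import random

SPAN = 8 * 1024**3   # 8GB test region
BLOCK = 4096         # 4KB IOs

def random_offsets(n, seed=0):
    """n random 4KB-aligned byte offsets within the 8GB span."""
    rng = random.Random(seed)
    return [rng.randrange(SPAN // BLOCK) * BLOCK for _ in range(n)]

offsets = random_offsets(4)
assert all(off % BLOCK == 0 and off < SPAN for off in offsets)
```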

Desktop Iometer - 4KB Random Read

Desktop Iometer - 4KB Random Write

Desktop Iometer - 4KB Random Write (QD=32)

In random read/write tests the Vertex 460 and Vector 150 trade punches, but the differences are minor. Our Vector 150 sample was tested a few months back, so minor firmware differences could account for the increased random read/write results, or it may simply be margin of error.

Sequential Read/Write Speed

To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.

Desktop Iometer - 128KB Sequential Read

Same goes for sequential performance: the Vertex 460 and Vector 150 perform more or less equally.

Desktop Iometer - 128KB Sequential Write

Sequential performance gives the Vector 150 a slight edge in both read and write results, but the difference is only a few percent at most.

AS-SSD Incompressible Sequential Read/Write Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.

Incompressible Sequential Read Performance

Incompressible Sequential Write Performance

Performance vs. Transfer Size

ATTO is a useful tool for quickly benchmarking performance across various transfer sizes. You can get the complete data set in Bench. At smaller transfer sizes, the read speeds of Vertex 460 are slightly slower than the speeds of Vector 150 and Vertex 450. Write performance is mostly the same, though, for all drives.

Click for full size

AnandTech Storage Bench 2011

Two years ago we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. Anand assembled the traces out of frustration with the majority of what we have today in terms of SSD benchmarks.

Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.

Originally the benchmarks were kept short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.

1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. Our thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.

2) We tried to cover as many bases as possible with the software incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). We've included a large amount of email downloading, document creation and editing as well. To top it all off we even use Visual Studio 2008 to build Chromium during the test.

The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:

AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown
IO Size | % of Total
4KB     | 28%
16KB    | 10%
32KB    | 10%
64KB    | 4%

Only 42% of all operations are sequential, the rest range from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.

AnandTech Storage Bench 2011 - Heavy Workload

Heavy Workload 2011 - Average Data Rate

The full results, including disk busy times and read/write separations, can be found in our Bench.

AnandTech Storage Bench 2011 - Light Workload

Our light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio does better mimic a typical light workload (although even lighter workloads would be far more read centric). There's lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback as well as some application installs and gaming.

The I/O breakdown is similar to the heavy workload at small IOs; however, you'll notice that there are far fewer large IO transfers. Interestingly, the 480GB drive actually comes out ahead in this case, suggesting it's more capable at light workloads.

AnandTech Storage Bench 2011 - Light Workload IO Breakdown
IO Size | % of Total
4KB     | 27%
16KB    | 8%
32KB    | 6%
64KB    | 5%

Light Workload 2011 - Average Data Rate

Power Consumption

As expected, power consumption is also on par with the Vector 150. The difference from the Vertex 450 is surprisingly big, actually: the Vertex 460 draws more than a watt less under load, so the difference in clock speeds may be having an impact here. I'm still a bit disappointed that OCZ hasn't implemented any support for low power states (HIPM+DIPM and DevSleep).

Drive Power Consumption - Idle

Drive Power Consumption - Sequential Write

Drive Power Consumption - Random Write

Final Words

The biggest problem I have with the Vertex 460 is OCZ's current situation. The Vertex 460 itself won't be affected by the acquisition terms because it only becomes available after the deal closes, meaning Toshiba will be covering the warranty. However, I don't feel comfortable recommending OCZ's products until the dust settles and we know more about the company's future. The drive itself is good, just like the Vector 150, but it doesn't enjoy any major advantage over drives from manufacturers with stable, proven long-term reliability.

Update 1/22: We have just received word that Toshiba has finalized the purchase of the OCZ Technology Group, making it a wholly owned subsidiary of the Toshiba Group. OCZ will continue to act independently as OCZ Storage Solutions, focusing on SSDs, meaning the future of OCZ's product line is essentially secure.

NewEgg Price Comparison (1/21/2014)
                      | 120/128GB | 240/256GB | 480/512GB
OCZ Vertex 460 (MSRP) | $100      | $190      | $360
OCZ Vector 150        | $120      | $215      | $440
OCZ Vertex 450        | $90       | $160      | -
Samsung SSD 840 EVO   | $110      | $175      | $345
Samsung SSD 840 Pro   | $130      | $200      | $465
Crucial M500          | $90       | $155      | $310
SanDisk Extreme II    | $120      | $230      | $300
Seagate SSD 600       | $110      | $170      | $300

OCZ's pricing is relatively competitive, although I'd like to see the 480GB SKU priced a little more aggressively. $300 is really starting to be the sweet spot for 480-512GB drives, and with drives like the SanDisk Extreme II at that price, there is barely any reason to pay more. The 120GB and 240GB SKUs are priced a bit more competitively, but there are still better deals to be found. 

In summary, the Vertex 460 is a reasonable replacement for the Vertex 450, though pricing on the old model is actually a bit lower for now as resellers clear their stock. 
