Original Link: http://www.anandtech.com/show/8407/ocz-arc-100-240gb-review
OCZ ARC 100 (240GB) SSD Review
by Kristian Vättö on August 26, 2014 7:00 AM EST
Ever since Ryan Petersen, the founder and former CEO of OCZ, resigned almost exactly two years ago, the company has been heading in a new direction. Starting with the launch of the Barefoot 3 platform and the Vector SSDs in late 2012, OCZ has been trying to rebrand itself as a premium manufacturer of high-performance SSDs instead of a budget brand. The old OCZ would have taken the Barefoot 3 controller and stuffed it inside several other models to cater to more price points, but the new OCZ played its cards right: the Vector remained the only Barefoot 3 based product for months until OCZ introduced the Vertex 450, which was not exactly cheap but was a more mainstream version of the Vector with a shorter three-year warranty.
Now, almost two years after the introduction of the Barefoot 3, OCZ is back in the mainstream SSD game with the ARC 100. The asynchronous NAND from the Agility days is long gone and the ARC 100 uses Toshiba's latest 64Gbit A19nm MLC NAND. In theory the performance will drop a bit, but the ARC 100 should hit lower price points. Here's the quick overview:
OCZ Consumer SSD Lineup

| | ARC 100 | Vertex 460 | Vector 150 |
| --- | --- | --- | --- |
| Controller | Barefoot 3 M10 | Barefoot 3 M10 | Barefoot 3 M00 |
| NAND | 64Gbit A19nm | 64Gbit 19nm | 64Gbit 19nm |
| Sequential Speed | Up to 490MB/s | Up to 545MB/s | Up to 550MB/s |
| Random Speed | Up to 80K IOPS | Up to 95K IOPS | Up to 100K IOPS |
| Accessories | - | Cloning Software & Desktop Adapter | Cloning Software & Desktop Adapter |
| Endurance | 20GB per day | 20GB per day | 50GB per day |
| Warranty | 3 Years | 3 Years | 5 Years |
The smaller process node NAND and the lack of accessories are the secret behind the ARC 100's lower cost. Performance takes a slight hit compared to the Vertex 460, though that is expected since NAND performance decreases as the lithography shrinks. Fortunately, endurance is still rated at the same 20GB per day for three years, which is more than enough for typical client workloads.
OCZ ARC 100 Specifications

| | 120GB | 240GB | 480GB |
| --- | --- | --- | --- |
| Controller | OCZ Barefoot 3 M10 | OCZ Barefoot 3 M10 | OCZ Barefoot 3 M10 |
| NAND | Toshiba 64Gbit A19nm MLC | Toshiba 64Gbit A19nm MLC | Toshiba 64Gbit A19nm MLC |
| 4KB Random Read | 75K IOPS | 75K IOPS | 75K IOPS |
| 4KB Random Write | 80K IOPS | 80K IOPS | 80K IOPS |
| Steady-State 4KB Random Write | 12K IOPS | 18K IOPS | 20K IOPS |
| Endurance | 20GB/day for 3 years | 20GB/day for 3 years | 20GB/day for 3 years |
Sadly there is still no support for low power states (slumber and DevSleep), so idle power consumption remains high compared to the competition. The same goes for encryption support as the ARC 100 only supports ATA passwords, whereas the industry is moving towards more secure and easily manageable TCG Opal encryption. OCZ's PCIe controller, the JetExpress, will support both, but in the meantime OCZ's SSDs remain limited to the desktop crowd.
The ARC 100 uses the slower bin of the Barefoot 3, which is clocked at 352MHz. The faster M00 version found inside the Vector 150 runs at 397MHz instead, but the two are otherwise the same. Our 240GB sample (256GiB of raw NAND) has sixteen dual-die packages with each die being 8GiB (64Gbit) in capacity.
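As a quick sanity check on those numbers, the over-provisioning implied by 256GiB of raw NAND behind 240GB of user capacity works out as follows (a back-of-the-envelope calculation, not OCZ's official figure):

```python
# Raw NAND: 16 packages x 2 dies x 8GiB per 64Gbit die = 256GiB
raw_bytes = 16 * 2 * 8 * 1024**3     # 274,877,906,944 bytes
user_bytes = 240 * 1000**3           # 240GB, decimal as marketed

spare = raw_bytes - user_bytes
op_of_raw = spare / raw_bytes * 100    # spare as a share of raw NAND (~12.7%)
op_of_user = spare / user_bytes * 100  # spare relative to user capacity (~14.5%)

print(f"{op_of_raw:.1f}% of raw, {op_of_user:.1f}% of user capacity")
```

Depending on whether you express the spare area relative to raw or user capacity, you get roughly 12.7% or 14.5%, which matches the ~12% default over-provisioning discussed later in this review.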
For AnandTech Storage Benches, performance consistency, random and sequential performance, performance vs transfer size and load power consumption we use the following system:
| Component | Configuration |
| --- | --- |
| CPU | Intel Core i5-2500K running at 3.3GHz (Turbo & EIST enabled) |
| Motherboard | AsRock Z68 Pro3 |
| Chipset Drivers | Intel 9.1.1.1015 + Intel RST 10.2 |
| Memory | G.Skill RipjawsX DDR3-1600 4 x 8GB (9-9-9-24) |
| Video Card | Palit GeForce GTX 770 JetStream 2GB GDDR5 (1150MHz core clock; 3505MHz GDDR5 effective) |
| Video Drivers | NVIDIA GeForce 332.21 WHQL |
| Desktop Resolution | 1920 x 1080 |
| OS | Windows 7 x64 |
For slumber power testing we use a different system:
| Component | Configuration |
| --- | --- |
| CPU | Intel Core i7-4770K running at 3.3GHz (Turbo & EIST enabled, C-states disabled) |
| Motherboard | ASUS Z87 Deluxe (BIOS 1707) |
| Chipset Drivers | Intel 9.4.0.1026 + Intel RST 12.9 |
| Memory | Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T) |
| Graphics | Intel HD Graphics 4600 |
| Desktop Resolution | 1920 x 1080 |
| OS | Windows 7 x64 |
- Thanks to Intel for the Core i7-4770K CPU
- Thanks to ASUS for the Z87 Deluxe motherboard
- Thanks to Corsair for the Vengeance 16GB DDR3-1866 DRAM kit, RM750 power supply, Hydro H60 CPU cooler and Carbide 330R case
Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we do not have consistent IO latency with SSDs is because inevitably all controllers have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience as inconsistent performance results in application slowdowns.
To test IO consistency, we fill a secure erased SSD with sequential data to ensure that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.
We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
Each of the three graphs has its own purpose. The first one is of the whole duration of the test in log scale. The second and third one zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second one uses log scale for easy comparison whereas the third one uses linear scale for better visualization of differences between drives. Click the dropdown selections below each graph to switch the source data.
For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.
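The logic of the consistency test can be sketched in Python. This is a deliberately simplified, queue-depth-1 illustration against a scratch file (the real test hits the raw device at QD32 with a proper IO generator and runs for over half an hour), but it shows the essential loop: 4KB incompressible writes at random offsets across a span, with IOPS recorded every second:

```python
import os
import random
import time

def consistency_test(path, span_bytes, duration_s, block=4096):
    """Issue 4KB random writes across the span and log IOPS each second.
    Simplified QD1 sketch of the methodology described above."""
    blocks = span_bytes // block
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    os.ftruncate(fd, span_bytes)
    iops_log = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        tick, ios = time.monotonic() + 1.0, 0
        while time.monotonic() < tick and time.monotonic() < end:
            # os.urandom gives incompressible data, defeating compression tricks
            os.pwrite(fd, os.urandom(block), random.randrange(blocks) * block)
            ios += 1
        iops_log.append(ios)
    os.close(fd)
    return iops_log

# Tiny demo run: 64MB span, 3 seconds (the real test spans all LBAs)
log = consistency_test("arc_scratch.bin", 64 * 1024**2, 3)
os.remove("arc_scratch.bin")
```

The added over-provisioning runs mentioned above correspond to shrinking `span_bytes` relative to the drive's full capacity, leaving the remainder as guaranteed spare area for the controller.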
The performance consistency takes a small hit compared to the Vector 150 and Vertex 460, but compared to the other value drives the ARC 100 offers amazing consistency. Most of the performance gain is due to the higher default over-provisioning (12% vs 7% in other value SSDs), although the Barefoot 3 platform has always done well when it comes to consistency. While there is quite a bit of variation in IOPS, the average is still somewhere between 15K and 20K IOPS, whereas for example the MX100 only provides about 5K IOPS at steady-state.
AnandTech Storage Bench 2013
Our Storage Bench 2013 focuses on worst-case multitasking and IO consistency. Similar to our earlier Storage Benches, the test is still application trace based – we record all IO requests made to a test system and play them back on the drive we are testing and run statistical analysis on the drive's responses. There are 49.8 million IO operations in total with 1583.0GB of reads and 875.6GB of writes. I'm not including the full description of the test for better readability, so make sure to read our Storage Bench 2013 introduction for the full details.
AnandTech Storage Bench 2013 - The Destroyer

| Workload | Description | Applications Used |
| --- | --- | --- |
| Photo Sync/Editing | Import images, edit, export | Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox |
| Gaming | Download/install games, play games | Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite |
| Virtualization | Run/manage VM, use general apps inside VM | VirtualBox |
| General Productivity | Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan | Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware |
| Video Playback | Copy and watch movies | Windows 8 |
| Application Development | Compile projects, check out code, download code samples | Visual Studio 2012 |
We are reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the throughput of the drive during the time that it was running the test workload. This can be a very good indication of overall performance. What average data rate doesn't do a good job of is taking into account response time of very bursty (read: high queue depth) IO. By reporting average service time we heavily weigh latency for queued IOs. You'll note that this is a metric we have been reporting in our enterprise benchmarks for a while now. With the client tests maturing, the time was right for a little convergence.
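The difference between the two metrics can be illustrated with a toy IO trace (hypothetical numbers, not data from the actual benchmark). Average data rate is total bytes moved over total elapsed time, while average service time weighs every IO's completion latency equally, so a handful of slow queued IOs drags it up even when throughput looks healthy:

```python
# Each IO: (bytes transferred, service time in microseconds).
# Two fast 128KB sequential IOs and two slow queued 4KB IOs (made-up values).
trace = [(128 * 1024, 300), (4096, 5000), (4096, 4800), (128 * 1024, 320)]

total_bytes = sum(b for b, _ in trace)
total_us = sum(t for _, t in trace)  # treating IOs as serial for simplicity

avg_data_rate_mbs = (total_bytes / 1e6) / (total_us / 1e6)  # MB/s
avg_service_time_us = total_us / len(trace)

print(f"{avg_data_rate_mbs:.1f} MB/s, {avg_service_time_us:.0f} us")
```

Note how the two small-but-slow IOs barely dent the data rate yet dominate the average service time, which is exactly why we report both.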
The good IO consistency translates into good performance in our 2013 Storage Bench. The ARC 100 is without a doubt the fastest value drive on the market for heavy IO workloads, as the 840 EVO and MX100 do not even come close.
AnandTech Storage Bench 2011
Back in 2011 (which seems like so long ago now!), we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. The MOASB, officially called AnandTech Storage Bench 2011 – Heavy Workload, mainly focuses on peak IO performance and basic garbage collection routines. There is a lot of downloading and application installing that happens during the course of this test. Our thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives. The full description of the Heavy test can be found here, while the Light workload details are here.
While the ARC 100 does well under heavy workloads, the same cannot be said about our 2011 Storage Benches, which are more relevant to the typical client user. The ARC 100 is not slow, but it loses its advantage against the MX100 and 840 EVO. Part of this is simply due to the fact that lighter workloads are plenty fast even on "slower" SSDs.
Random Read/Write Speed
The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.
Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). We perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time.
Random speeds show a small decrease in performance over the other OCZ drives, which is due to the combination of the slower M10 controller and A19nm NAND.
Sequential Read/Write Speed
To measure sequential performance we run a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.
In sequential performance the ARC 100 appears to be optimized for reads: read speed is up compared to the Vector 150 and Vertex 460, while write speed has decreased slightly.
AS-SSD Incompressible Sequential Read/Write Performance
The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers, but most other controllers are unaffected.
Performance vs. Transfer Size
ATTO is a useful tool for quickly benchmarking performance across various transfer sizes. You can get the complete data set in Bench. The ARC 100 no longer has the weird drop in read performance at IO size of 32KB, but it is generally slower than the Vertex 460 and Vector 150. It is still better than the MX100 in both read and write, though.
As the Barefoot 3 platform still does not support low power states, the idle power consumption remains the biggest weakness of the ARC 100. That is not a problem for desktop users, but it pretty much excludes the whole mobile market. Load power consumption is decent, although it is a bit higher than what Vector 150 and Vertex 460 have.
The ARC 100 delivers exactly what I expected, which is both its strength and its weakness. The Barefoot 3 platform provides excellent performance consistency and has proven reliable over the last two years, but its performance in lighter workloads is only mediocre. I am glad that OCZ pursues consistency, but the truth is that average client workloads are more about peak performance, as IOs tend to happen in bursts. Think about application installs and launches, for instance: they stress the drive heavily but only for a short period of time, after which the drive mostly idles until another similar request comes in. It is true that an average user will most likely not notice the difference between two modern SSDs, but I still would have liked to see the ARC 100 better optimized for lighter client workloads.
Another shortcoming of the ARC 100 is its lack of support for lower power states and TCG Opal. With most of today's PCs being laptops, OCZ is missing the needs of a huge market. I am guessing that there are some limitations in the Barefoot 3 silicon itself that prohibit OCZ from implementing proper low power state support – or at least that is what I hope because otherwise there is no good explanation as to why the Barefoot 3 continues to use so much power at idle. OCZ's next generation controller will support both DevSleep and Opal encryption, but in the meantime I can only recommend Barefoot 3 based SSDs for desktops.
NewEgg Price Comparison (8/25/2014)

| | 120/128GB | 240/256GB | 480/512GB |
| --- | --- | --- | --- |
| OCZ ARC 100 | $75 | $120 | $240 |
| OCZ Vector 150 | $85 | $140 | $280 |
| OCZ Vertex 460 | $90 | $140 | $245 |
| Samsung SSD 850 Pro | $130 | $200 | $400 |
| Samsung SSD 840 EVO | $90 | $165 | $250 |
| SanDisk Extreme Pro | - | $200 | $380 |
| SanDisk Extreme II | $70 | $140 | $295 |
| Intel SSD 730 | - | $190 | $340 |
| Intel SSD 530 | $90 | $140 | $250 |
Pricing appears to be competitive, although beating the MX100 is very tough. At capacities of 120GB and 240GB the ARC 100 is effectively the same price as the MX100, but I would like to see the 480GB drop in price to be more competitive. The mainstream market is all about price, so the ARC 100 cannot be $20 more expensive than the MX100 if OCZ wants to compete. Then again, 480GB/512GB SSDs aren't the normal target for mainstream users, so it may not matter too much.
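Since the mainstream segment is won on price per gigabyte, it is worth normalizing the listed NewEgg prices by capacity; for the ARC 100 that works out as:

```python
# (price USD, capacity GB) for the OCZ ARC 100, from the NewEgg table above
arc100 = [(75, 120), (120, 240), (240, 480)]

per_gb = [round(price / cap, 3) for price, cap in arc100]
print(per_gb)  # 120GB, 240GB, 480GB respectively
```

The 240GB and 480GB models land at an even $0.50/GB, while the 120GB model carries the usual small-capacity premium at about $0.63/GB.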
I have to say that the MX100 is still my recommendation for most people because the feature set and value are just amazing, but the ARC 100 is a compelling alternative for desktop users. The better performance consistency makes the ARC 100 more suitable for heavier workloads, so for a user with a heavy-ish IO workload and a tight budget the ARC 100 is a great option.