Original Link: http://www.anandtech.com/show/8180/adata-sp610-ssd-256gb-512gb-review

SandForce's situation has shaken up the SSD industry quite a bit. The constant delays of SF3700 and acquisitions have kept everyone on their toes because many OEMs have relied solely on SandForce for their controllers. If you go back two years or so, nearly every OEM without their own controller technology was using SandForce's controllers. But in the last six months or so, we have seen a shift away from SandForce. Sure, many are still using SandForce and will continue to do so, but quite a few OEMs that used to be SandForce-only have started to include other controllers in their portfolio as well. This is logical. Relying on a single supplier poses a huge risk because if something happens to that company, your whole SSD business is in trouble.

With the SF-2000 series things were different: the controller was on time for SATA 6Gbps and SandForce was still a privately owned company. Now the situation has changed. The SF3700 has been pushed back several times and the latest word is a Q4'14 release. In addition, SandForce has been acquired three times in the last three years: first in late 2011 by LSI, then Avago acquired LSI in Q4'13, and a few weeks ago Seagate announced that they will be acquiring LSI's Flash Components Division, a.k.a. the old SandForce. The first two acquisitions didn't have any major impact on SandForce (other than perhaps added SF3700 delays), but the Seagate acquisition presents a substantial risk to SSD OEMs. Seagate has its own SSD business, so what if they decide to stop licensing the SandForce platform and keep the technology for their own exclusive use -- or at least delay the release to other companies?

That risk may well come to pass, and from what I have heard we will hear something significant in the coming months. But a risk always has two sides: a negative and a positive one. Shifting away from SandForce opens the market for new suppliers, which creates new opportunities for the SSD OEMs. Moreover, new suppliers always mean more competition, which is ultimately beneficial for end users. We have already seen JMicron's plans and they are aggressively going after SandForce's current and ex-customers, but there is another company that has been laying low and is now looking to make a comeback.

That company is Silicon Motion, or SMI as they are often called within the industry. Silicon Motion has been in the SSD industry for years but this is the first time an SSD with one of their controllers has found its way into our test lab. Silicon Motion's controllers have had more presence in the industrial SSD market, which would explain why we haven't seen its controllers before. With the SM2246EN, Silicon Motion is entering the consumer market and they already have a handful of partners. ADATA's Premier SP610, which we are reviewing today, is one of them, but Corsair's Force LX and PNY's Optima also use the same controller (although PNY is playing dirty and using SandForce controllers in the Optima as well).

The SM2246EN is a 4-channel design and features a single-core 32-bit ARC (Argonaut RISC Core) CPU. The benefit of ARC is that it is configurable: the client can tailor the CPU to fit the task, for example by adding extra instructions and registers. Generally the result is a more efficient design because the CPU has been built specifically for the task at hand instead of being an all-around solution like most ARM cores are. In the end, the CPU will only be processing a limited set of tasks set by the firmware, so a specialized CPU can work well.

Each channel can talk to up to eight dies at the same time. Silicon Motion does not list a maximum capacity for the controller, but with Micron's 128Gbit 20nm MLC NAND ADATA has been able to achieve a one terabyte capacity. That is in fact sixteen NAND dies per channel, but the magic lies in the fact that the controller can address more than eight dies per channel, just not at the same time. Each NAND package has a certain number of Chip Enable (CE) signals, which define the number of dies that can be accessed simultaneously, but the number of CEs is usually lower than the number of dies, especially when dealing with packages with more than four dies.
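As a sanity check on the numbers above, the die math for the 1TB drive works out like this (a quick sketch; the 128Gbit die density and 4-channel, 8-CE layout are from the article, the rest is arithmetic):

```python
DIE_GBIT = 128                 # Micron 20nm MLC die density in gigabits
die_gb = DIE_GBIT / 8          # -> 16GB per die
channels = 4                   # SM2246EN channel count
ces_per_channel = 8            # dies addressable simultaneously per channel

capacity_gb = 1024             # the 1TB SP610 (1024GB of NAND)
total_dies = capacity_gb / die_gb                  # 64 dies in total
dies_per_channel = total_dies / channels           # 16 dies per channel
dies_per_ce = dies_per_channel / ces_per_channel   # 2 dies share each CE

print(total_dies, dies_per_channel, dies_per_ce)   # 64.0 16.0 2.0
```

In other words, at 1TB each CE line is shared by two dies, which is why only half of the dies on a channel can be active at any given moment.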

Encryption support, including TCG Opal, is present at the hardware level. DevSleep is also supported, although ADATA told me that neither encryption nor DevSleep is enabled in the SP610 at the moment. Encryption support (AES-256 and TCG Opal 2.0) is coming through a firmware update in Q3'14, though. IEEE-1667 won't be supported, so Microsoft's eDrive is out of the question, unfortunately, but TCG Opal 2.0 alone is sufficient if you use third-party software for encryption (e.g. Wave).

ADATA Premier SP610 Specifications
Capacity 128GB 256GB 512GB 1TB
Controller Silicon Motion SM2246EN
NAND Micron 128Gbit 20nm MLC
DRAM 128MB 256MB 512MB 1GB
Sequential Read 560MB/s 560MB/s 560MB/s 560MB/s
Sequential Write 150MB/s 290MB/s 450MB/s 450MB/s
4KB Random Read 66K IOPS 75K IOPS 73K IOPS 73K IOPS
4KB Random Write 35K IOPS 67K IOPS 72K IOPS 72K IOPS
Encryption AES-256 & TCG Opal 2.0 in Q3'14
Warranty Three years

ADATA has always been generous with the bundled software and peripherals. The retail package of the SP610 includes a 3.5" desktop adapter as well as a 9.5mm spacer for laptops that use 9.5mm hard drives. For cloning the existing drive, ADATA includes Acronis True Image HD, which in my experience is one of the most convenient tools for OS cloning.

Photography by Juha Kokkonen

The SP610 comes in a variety of capacities ranging from 128GB up to 1TB (1024GB, but marketed as 1TB). Like many SSD OEMs, ADATA buys NAND in wafers and does the binning and packaging on their own. I always thought the reason behind this was lower cost, but ADATA told me that the cost is actually about the same as buying pre-packaged NAND straight from Micron. However, by doing the packaging ADATA can utilize special packages that go into some industrial and embedded solutions, which are not offered by Micron or other NAND manufacturers. There are four NAND packages on each side of the PCB, which means that we are dealing with dual-die packages at 256GB and quad-die packages at 512GB.

ADATA does not provide a specific endurance rating for the SP610, but since we are dealing with MLC NAND, there should be no concerns regarding the durability of the drive.

The PCB is about half the size of a normal PCB in a 2.5" SSD. This is not the first time we've seen a smaller PCB as several OEMs have used it as a method to cut costs. I doubt the cost savings are big but any and all savings are welcome given the tight competition in the market. It would be interesting to see the 1TB offering as well, to see if the internal layout has been modified at all.

Test System

CPU Intel Core i5-2500K running at 3.3GHz (Turbo and EIST enabled)
Motherboard AsRock Z68 Pro3
Chipset Intel Z68
Chipset Drivers Intel + Intel RST 10.2
Memory G.Skill RipjawsX DDR3-1600 4 x 8GB (9-9-9-24)
Video Card Palit GeForce GTX 770 JetStream 2GB GDDR5 (1150MHz core clock; 3505MHz GDDR5 effective)
Video Drivers NVIDIA GeForce 332.21 WHQL
Desktop Resolution 1920 x 1080
OS Windows 7 x64

Thanks to G.Skill for the RipjawsX 32GB DDR3 DRAM kit

Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we do not have consistent IO latency with SSDs is because inevitably all controllers have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience as inconsistent performance results in application slowdowns.

To test IO consistency, we fill a secure erased SSD with sequential data to ensure that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.
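Our consistency testing is done with our own tooling, but for readers who want to reproduce something similar on Linux, a roughly equivalent fio job file might look like the sketch below (this is my approximation, not our exact methodology; `/dev/sdX` is a placeholder for the target drive and the whole run is destructive):

```ini
[global]
filename=/dev/sdX
direct=1
ioengine=libaio

; Precondition: fill the secure-erased drive sequentially so that
; every user-accessible LBA has data associated with it
[precondition]
rw=write
bs=128k

; Consistency test: 4KB random writes across all LBAs at QD32 with
; incompressible (refilled) buffers, logging average IOPS every second
[consistency]
stonewall
rw=randwrite
bs=4k
iodepth=32
refill_buffers
time_based
runtime=2000
log_avg_msec=1000
write_iops_log=consistency
```

The per-second IOPS log produced by `write_iops_log` is what the graphs below plot.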

We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
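The added spare area is created simply by restricting the test to a subset of the LBA space. A quick sketch of the arithmetic behind the "25% OP" configurations in the graphs (the helper function is mine):

```python
def lba_limit_for_op(raw_gb, op_fraction):
    """Usable capacity to expose so that op_fraction of the raw NAND is spare."""
    return raw_gb * (1 - op_fraction)

# The 512GB SP610 tested with 25% spare area becomes the "384GB" drive
print(lba_limit_for_op(512, 0.25))  # 384.0
```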

Each of the three graphs has its own purpose. The first covers the whole duration of the test in log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale for better visualization of the differences between drives. Click the buttons below each graph to switch the source data.

For more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

[Interactive graph -- drives: ADATA SP610, ADATA SP920, JMicron JMF667H (Toshiba NAND), Samsung SSD 840 EVO mSATA, Crucial MX100; toggle: default or 25% OP]

Ouch, this doesn't look too promising. The SMI controller seems to be very aggressive when it comes to steady-state performance, meaning that as soon as there is an empty block it prioritizes host writes over internal garbage collection. The result is fairly inconsistent performance because for a second the drive is pushing over 50K IOPS but then it must do garbage collection to free up blocks, which results in the IOPS dropping to ~2,000. Even with added over-provisioning, the behavior continues, although now more IOs happen at a higher speed because the drive has to do less internal garbage collection to free up blocks.

This may have something to do with the fact that the SM2246EN controller only has a single core. Most controllers today are at least dual-core, which means that in the simplest scenario one core can be dedicated to host operations while the other handles internal routines. Of course the utilization of cores is likely much more complex and manufacturers are not usually willing to share this information, but it would explain why the SP610 has such a large variance in performance.

As we are dealing with a budget mainstream drive, I am not going to be that harsh with the IO consistency. Most users are unlikely to put the drive under a heavy 4KB random write load anyway, so for light and moderate usage the drive should do just fine because ~2,000 IOPS at the lowest is not even that bad -- and it's still a large step ahead of any HDD.

[Interactive graph -- drives: ADATA SP610, ADATA SP920, JMicron JMF667H (Toshiba NAND), Samsung SSD 840 EVO mSATA, Crucial MX100; toggle: default or 25% OP]

Just to put things in perspective, however, even the over-provisioned "384GB" SP610 ends up offering worse consistency than the 128GB JMicron JMF667H SSD. Pricing will need to be very compelling if this drive is going to stand up against drives like the Crucial MX100.

[Interactive graph -- drives: ADATA SP610, ADATA SP920, JMicron JMF667H (Toshiba NAND), Samsung SSD 840 EVO mSATA, Crucial MX100; toggle: default or 25% OP]

TRIM Validation

To test TRIM, I first filled all user-accessible LBAs with sequential data and then tortured the drive with 4KB random writes (100% LBA, QD=32) for 30 minutes. After the torture I TRIM'ed the drive (quick format in Windows 7/8) and ran HD Tach to make sure TRIM is functional.

And it is.

AnandTech Storage Bench 2013

Our Storage Bench 2013 focuses on worst-case multitasking and IO consistency. Similar to our earlier Storage Benches, the test is still application trace based: we record all IO requests made to a test system, play them back on the drive we are testing, and run statistical analysis on the drive's responses. There are 49.8 million IO operations in total with 1583.0GB of reads and 875.6GB of writes. I'm not including the full description of the test here for better readability, so make sure to read our Storage Bench 2013 introduction for the full details.
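The trace totals above also tell us the average transfer size, which is worth keeping in mind when interpreting the results (quick arithmetic; I am treating 1GB as 10^9 bytes):

```python
total_ios = 49.8e6                      # IO operations in the Destroyer trace
total_bytes = (1583.0 + 875.6) * 1e9    # reads + writes combined

avg_io_bytes = total_bytes / total_ios
print(round(avg_io_bytes / 1024, 1))    # ~48.2 KiB per IO on average
```

An average transfer of roughly 48KiB means the trace is far from a pure small-block workload, which matters later when we compare data rates against service times.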

AnandTech Storage Bench 2013 - The Destroyer
Workload Description Applications Used
Photo Sync/Editing Import images, edit, export Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox
Gaming Download/install games, play games Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite
Virtualization Run/manage VM, use general apps inside VM VirtualBox
General Productivity Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, AdAware
Video Playback Copy and watch movies Windows 8
Application Development Compile projects, check out code, download code samples Visual Studio 2012

We are reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the throughput of the drive during the time that it was running the test workload. This can be a very good indication of overall performance. What average data rate doesn't do a good job of is taking into account response time of very bursty (read: high queue depth) IO. By reporting average service time we heavily weigh latency for queued IOs. You'll note that this is a metric we have been reporting in our enterprise benchmarks for a while now. With the client tests maturing, the time was right for a little convergence.

Storage Bench 2013 - The Destroyer (Data Rate)

Well, this doesn't make much sense at first. The service times aren't anything special, yet the average data rate is surprisingly high. For instance, the 256GB JMF667H has a lower (i.e. better) service time, yet its average data rate is also lower.

Storage Bench 2013 - The Destroyer (Service Time)

For this to make some sense, let's look at the service times for reads and writes separately.

Storage Bench 2013 - The Destroyer (Read Service Time)

Storage Bench 2013 - The Destroyer (Write Service Time)

Now this makes more sense. Due to the relatively poor IO consistency of the SP610, the write service times are quite high. That increases the average service time substantially but the impact on average data rate is smaller, and the reason for that is quite simple.

To keep things simple, imagine that it takes five seconds to write 128KB of sequential data and one second to write 4KB of random data. In other words, it takes six seconds to write 132KB of data, which works out to 22KB/s. However, the 128KB write operation completed at 25.6KB/s, whereas the throughput of the 4KB write operation was only 4KB/s. 22KB/s is much closer to 25.6KB/s than to 4KB/s, meaning that the average data rate gives more emphasis to the large transfer because it simply counts for more when looking at the amount of data that was transferred.

But now add in the service time. It took a total of six seconds to complete two IOs, which means that the average service time is three seconds. It doesn't matter that the other IO is much larger because service time only looks at the completion time of an IO, regardless of its size. Because of that, the service times give more information about small IO performance than the average data rate where the small IOs are buried under the larger transfers.
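The example above can be put into code to make the two averages concrete (same hypothetical numbers as in the text):

```python
# (size_in_KB, completion_time_in_seconds) for the two hypothetical IOs
ios = [(128, 5.0),   # one 128KB sequential write
       (4, 1.0)]     # one 4KB random write

total_kb = sum(size for size, _ in ios)
total_time = sum(t for _, t in ios)

avg_data_rate = total_kb / total_time      # 132KB / 6s = 22 KB/s
avg_service_time = total_time / len(ios)   # 6s / 2 IOs = 3 s

print(avg_data_rate, avg_service_time)     # 22.0 3.0
```

The data rate is dominated by the large transfer, while the service time weighs both IOs equally regardless of size; that is exactly the asymmetry behind the SP610's results.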

In the context of the SP610, this means that it performs well with large IOs as the data rate is high. Only about 30% of the IOs in the trace are 4KB, whereas 40% are 64KB and 20% are 128KB. With small IOs the performance is not bad but for instance the JMF667H does slightly better, resulting in lower service times. Despite the mediocre performance consistency on the previous page, overall the SP610 does very well in the 2013 Storage Bench and potentially offers users better performance than the MX100, 840 EVO and JMF667H.

AnandTech Storage Bench 2011

Back in 2011 (which seems like so long ago now!), we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on peak IO performance and basic garbage collection routines. There is a lot of downloading and application installing that happens during the course of this test. Our thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives. The full description of the Heavy test can be found here, while the Light workload details are here.

Heavy Workload 2011 - Average Data Rate

The SP610 also does well in our old Storage Bench. It's about equal to the MX100 and 840 EVO, although I'm still surprised how well the JMF667H performs with Toshiba NAND. In the Light Workload test the SP610 is actually a bit faster than the MX100 but I wouldn't get too excited about a 10% difference.

Light Workload 2011 - Average Data Rate

Random Read/Write Speed

The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.

Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). We perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time.

Desktop Iometer - 4KB Random Read

Desktop Iometer - 4KB Random Write

Desktop Iometer - 4KB Random Write (QD=32)

Random performance is mediocre but certainly fine for a lower-end controller.

Sequential Read/Write Speed

To measure sequential performance we run a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.

Desktop Iometer - 128KB Sequential Read

Sequential performance, especially read speed, is very good. The high read speed might explain why the SP610 did so well in the 2013 Storage Bench when looking at the average data rate.

Desktop Iometer - 128KB Sequential Write

AS-SSD Incompressible Sequential Read/Write Performance

The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers, but it doesn't impact most of the other controllers much, if at all.

Incompressible Sequential Read Performance

Incompressible Sequential Write Performance

Performance vs. Transfer Size

ATTO is a useful tool for quickly benchmarking performance across various transfer sizes. You can get the complete data set in Bench. Read performance is again excellent and beats all the other value SSDs. I'm surprised by how good the read performance actually is. I'm guessing the custom RISC CPU helps with its more efficient design but it could simply be a matter of firmware design as well. Write performance, on the other hand, is average and comparable to the JMF667H. I wonder if Crucial's advantage in write performance is due to higher binned NAND because generally speaking Micron and other NAND manufacturers keep the best NAND to themselves.

Click for full size

Power Consumption

One of Silicon Motion's design targets was a very power efficient controller, which is one of the reasons why they chose a custom RISC CPU instead of a general purpose ARM core, and it looks like that has paid off. While the SM2246EN controller supports DevSleep, the SP610 for some reason does not. However, it still supports the slumber state, and its slumber power consumption is very competitive with other drives. Power consumption under load is also modest and stays below 3.5W.

SSD Slumber Power (HIPM+DIPM) - 5V Rail

Drive Power Consumption - Sequential Write

Drive Power Consumption - Random Write

Final Words

I am positively surprised by the new SMI controller and SP610. I have learned to be skeptical about value controllers because in the past the sacrifice in performance has not been worth the relatively small savings in cost. Usually the problem is that the value controller is also combined with cheaper (i.e. slower) NAND, resulting in a mediocre drive at best. Fortunately, the SP610 does not have that problem. Even though the SMI controller is paired with Micron's 128Gbit 20nm MLC, which is generally slower than 64Gbit parts and Toshiba's NAND, the drive is still extremely competitive under heavy workloads. It is also faster (sometimes substantially) than the MX100 and 840 EVO in light to medium workloads, which have been my recommended value drives.

IO consistency is really the only complaint I have regarding performance. It is not horrible, but I would still rather see consistent behavior instead of a "clean later" approach where the drive pushes maximum IOPS whenever it can. For typical client workloads this is not necessarily bad, because IOs tend to happen in bursts and the drive should have enough time to do garbage collection between the bursts, but there is still a chance that performance may degrade if the controller runs out of empty blocks. Because of that, I would recommend keeping some empty space (maybe 10-15%) to ensure a steady supply of empty blocks.

The lack of DevSleep support is also a minor drawback. The reason it is minor is that DevSleep only matters if you have a Haswell laptop, as older platforms do not support it. In other words, if you are running a laptop with a previous generation or older Intel CPU (or any AMD CPU/APU for that matter), you have no need to worry about DevSleep because your device does not support it. Obviously, if you are running a desktop, power consumption should not be a concern in the first place because there is no battery life to worry about.

NewEgg Price Comparison (6/25/2014)
  120/128GB 240/256GB 480/512GB 960GB/1TB
ADATA Premier SP610 $80 $130 $260 $470
ADATA Premier Pro SP600 $65 $110 - -
ADATA Premier Pro SP920 $90 $150 - -
ADATA XPG SX900 $80 $130 $245 -
SanDisk Extreme Pro - $200 $400 $600
SanDisk Extreme II - $171 $308 -
SanDisk Ultra Plus $87 $110 - -
Crucial MX100 $78 $111 $215 -
Crucial M550 $104 $157 $300 $440
Plextor M6S $100 $150 $400 -
Intel SSD 730 - $210 $425 -
Intel SSD 530 $110 $165 $330 -
OCZ Vector 150 $115 $280 $408 -
OCZ Vertex 460 $86 $158 $293 -
Samsung SSD 840 EVO $80 $145 $250 $420
Samsung SSD 840 Pro $120 $190 $410 -

The pricing is competitive but not low enough to make the SP610 the king of value SSDs. It is very hard to compete against Crucial/Micron and Samsung in price because they are both NAND manufacturers and have access to cheaper and newer technology NAND. From what I have heard, Micron's 128Gbit 20nm MLC is currently the cheapest NAND on the open market but of course that is not as cost efficient as Micron's 128Gbit 16nm MLC used in the MX100 or Samsung's 128Gbit 19nm TLC used in the 840 EVO.

With the current pricing, the SP610 falls into the infamous middle class, meaning that it is not cheap enough to be the ultimate value drive, but it is also not fast enough to compete against the fastest (albeit more expensive) drives. Given the performance of the SP610, I would gladly pay $10-20 more for it (depending on the capacity) over the MX100 or 840 EVO, but I do not find it worth the up-to-$50 premium in the 1TB class. The issue is that for light and moderate workloads the performance difference is negligible, so I would rather save the cash or put it towards another component upgrade.

The SP610 can, however, be a good compromise if you are not entirely sure whether your workload needs a high performance SSD or not, because it is significantly cheaper than the high-end drives like the Extreme Pro, yet it is not much more expensive than the value drives while providing generally better performance.

All in all, I am pleased to see more competition in the value SSD segment. Crucial and Samsung have dominated that for too long but the SM2246EN is turning out to be a platform that can challenge Crucial's and Samsung's drives in the three main aspects: price, performance, and features. With a slightly lower price tag and the updated firmware with TCG Opal 2.0 support, the SP610 could certainly warrant a recommendation over other offerings.
