Original Link: http://www.anandtech.com/show/7438/intel-ssd-530-240gb-review
Intel SSD 530 (240GB) Review
by Kristian Vättö on November 15, 2013 1:45 PM EST
The consumer SSD market is currently at a turning point. SATA 6Gbps is starting to be a bit long in the tooth but SATA Express support is still very limited. There is currently only one motherboard (ASUS Maximus VI Extreme) that has an M.2 slot and even that is limited to PCIe x1, making M.2 rather pointless at this point. On top of the lack of motherboard support, all M.2 SSDs are OEM only at the moment, which makes sense because it's pointless to sell a product that can't be used anywhere yet. The only way to get the full M.2 PCIe experience right now is to buy one of the few laptops with M.2 PCIe SSDs, such as the new Sony VAIO Pro 13 we just reviewed and the 2013 MacBook Air (though it has a proprietary connector).
The wait for moving to PCIe has put manufacturers in an odd situation. Ever since SSDs started gaining popularity there's always been a ton of room for improvement. Fundamentally the SSD market has gone through three different stages. First everyone focused on fast sequential speeds because that was the most important benchmark with hard drives. Shortly after it was understood that it's not the sequential speeds that make SSDs fast but the small random transfers that make hard drives crawl. IOPS quickly became a word that every manufacturer was shouting. Getting the IOPS as high as possible for marketing reasons was an important goal for many, but in the process many overlooked another very important metric: performance consistency. In the last year or so we've finally seen manufacturers paying attention to making performance more consistent instead of just focusing on the peak numbers.
The problem now is that every significant segment from a performance angle has been covered. Almost every SSD in the market is able to saturate the SATA 6Gbps bus. Random IO has also more or less stayed the same for the last year, which suggests that we've hit a hardware bottleneck. IO consistency is the only aspect that still requires some tweaking but most of the latest SSDs do a pretty good job at it too. There's nothing that manufacturers can do with SATA 6Gbps to really take performance to the next level. What makes things even more complex is the physics of NAND because read and program times increase as we move to smaller lithographies.
Since driving performance up is getting harder and harder, manufacturers need to rely on other methods to improve their products. Transitioning to smaller lithography NAND is one of the most common ways because it helps to reduce cost. Even though smaller lithography NAND is actually a step back when it comes to NAND endurance and potentially performance, it's still a powerful marketing tool because consumers tend to think that smaller equals better performance and lower power (we can thank CPUs and GPUs for that mindset). Another rising aspect has been power consumption and especially Windows 8's DevSleep has been a big part of that.
With an overview of the state of the SSD market out of the way, let's focus on the actual Intel SSD 530. The above probably would have been a good tease for the first consumer M.2 drive or a revolutionary SATA 6Gbps drive but unfortunately that's not the case. Intel didn't make much noise when they released the SSD 530 in August and there's a reason for that. Like its predecessor, the SSD 520, the SSD 530 is still SF-2281 based but unlike the SSD 335, the SSD 530 uses a newer silicon revision of the SF-2281. The new B02 stepping doesn't change performance in any way but it lowers power consumption especially when the drive is idling.
Similar to the SSD 335, the SSD 530 moves from 25nm IMFT NAND to 20nm IMFT NAND. I went through the major differences in the SSD 335 review, but in short, Intel's 20nm NAND has slightly slower erase times and is otherwise comparable to their 25nm NAND. Some of you probably would have preferred that Intel had stuck with 25nm NAND in their high-end consumer offering but the fact is that 25nm NAND is no longer cost effective and hasn't been for over a year. Let me show you a cool graph of NAND price scaling:
Courtesy of Mark Webb, taken from his 2013 Flash Memory Summit presentation
Now let's apply that graph to the case of Intel's 20nm NAND. "N generation" in the graph means 25nm NAND and "N+1 generation" stands for 20nm NAND. In April 2011 IMFT announced that they have started sampling 20nm MLC NAND (you should now be looking at the "N+1 Generation announced for samples" part, which is at about quarter 7). The Intel SSD 335 was released six quarters later in October 2012. As you can see in the graph, that's the exact spot when 20nm NAND prices started to settle down and they're about 30-40% cheaper than 25nm NAND.
Since 25nm NAND was more mature and had higher performance and endurance, Intel kept using it in the SSD 520 while the mainstream SSD 300-series switched to 20nm NAND. However, the 20nm process has now matured and both performance and endurance are close to what the 25nm process offered, making it viable for Intel's high-end consumer SSD as well. I'd like to point out that while the graph above is based on historical data, each generation from every manufacturer faces its own challenges. Delays and unexpected wafer cost increases can shift the N+1 graph to the right, meaning smaller price benefits and a longer period for the new generation to become cost efficient.
Unlike Intel's previous SSDs, the SSD 530 is available in three different form factors: 2.5", mSATA and M.2 (80mm). The capacities for the 2.5" version range from 80GB to 480GB, whereas the mSATA is limited to 240GB and M.2 to 360GB (those are the highest capacities you can get with 64Gbit NAND; mSATA only has room for four packages and 80mm M.2 for six NAND packages). The 2.5" version has also gone through a facelift. Aluminum has remained as the building material of the chassis but the Intel logo is now bigger and more centered and there's a sticker representing a die shot in the upper right corner.
|Intel SSD 530 2.5" Specifications|
|NAND||Intel 20nm MLC|
|4KB Random Read||24K IOPS||41K IOPS||45K IOPS||48K IOPS|
|4KB Random Write||80K IOPS|
|Endurance||20GB/day for 5 years|
Intel rates the SSD 530 at 20GB of host writes per day for five years. This is a typical rating for high-end consumer SSDs with a 5-year warranty and for heavier workloads Intel advises you to get an enterprise-grade drive. For the record, the Intel SSD 520 was also rated at 20GB/day for five years so there's been no degradation in that sense.
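For perspective, the rating is easy to convert into total host writes over the warranty period (simple arithmetic on the 20GB/day figure, not an additional Intel spec):

```python
# Convert Intel's 20GB/day, 5-year endurance rating into total host writes
GB_PER_DAY = 20
DAYS_PER_YEAR = 365
YEARS = 5

total_gb = GB_PER_DAY * DAYS_PER_YEAR * YEARS
print(total_gb)  # 36500, i.e. about 36.5TB of host writes over the warranty
```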
The PCB itself looks similar to the SSD 520. There is a total of 16 NAND packages (eight on each side of the PCB) and no DRAM, similar to every other SandForce based drive. However, what's interesting is that the actual controller is Intel branded. I'm still waiting for Intel to clarify the reason but this isn't the first time we've seen a manufacturer branded SandForce controller (Toshiba has been doing this for a while).
|CPU||Intel Core i5-2500K running at 3.3GHz (Turbo and EIST enabled)|
|Motherboard||AsRock Z68 Pro3|
|Chipset Drivers||Intel 188.8.131.525 + Intel RST 10.2|
|Memory||G.Skill RipjawsX DDR3-1600 4 x 8GB (9-9-9-24)|
|Video Card||XFX AMD Radeon HD 6850 XXX (800MHz core clock; 4.2GHz GDDR5 effective)|
|Video Drivers||AMD Catalyst 10.1|
|Desktop Resolution||1920 x 1080|
|OS||Windows 7 x64|
In our Intel SSD DC S3700 review Anand introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result needed some additional testing to show that. The reason we don't have consistent IO latency with SSDs is because inevitably all controllers have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying that can result in higher peak performance at the expense of much lower worst-case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.
To generate the data below we take a freshly secure erased SSD and fill it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. We run the test for just over half an hour, nowhere near what we run our steady state tests for but enough to give a good look at drive behavior once all spare area fills up.
We record instantaneous IOPS every second for the duration of the test and then plot IOPS vs. time and generate the scatter plots below. Each set of graphs features the same scale. The first two sets use a log scale for easy comparison, while the last set of graphs uses a linear scale that tops out at 40K IOPS for better visualization of differences between drives.
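The per-second IOPS series behind the scatter plots can be derived by bucketing IO completion timestamps into one-second bins. A minimal sketch of that bookkeeping (our actual tooling records instantaneous results directly; `iops_per_second` is a hypothetical stand-in):

```python
from collections import Counter

def iops_per_second(completion_times):
    """Bucket IO completion timestamps (seconds) into 1s bins and
    return (second, IOPS) points suitable for a scatter plot."""
    bins = Counter(int(t) for t in completion_times)
    return sorted(bins.items())

# Example: four IOs completing in second 0, two in second 1
points = iops_per_second([0.1, 0.4, 0.7, 0.9, 1.2, 1.8])
print(points)  # [(0, 4), (1, 2)]
```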
The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews however, we vary the percentage of the drive that gets filled/tested depending on the amount of spare area we're trying to simulate. The buttons are labeled with the advertised user capacity had the SSD vendor decided to use that specific amount of spare area. If you want to replicate this on your own all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives we've tested here but not all controllers are guaranteed to behave the same way.
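The effective over-provisioning you get from leaving part of the drive unpartitioned can be estimated as follows (a sketch only; the exact raw NAND capacity varies by drive and controllers may account for spare area differently):

```python
def effective_op_percent(raw_capacity_gb, partitioned_gb):
    """Estimate effective over-provisioning: the share of capacity
    never touched by user data, relative to the partitioned span."""
    spare = raw_capacity_gb - partitioned_gb
    return 100.0 * spare / partitioned_gb

# Hypothetical example: 256GB of user-accessible capacity
# with only a 192GB partition created on it
print(round(effective_op_percent(256, 192), 1))  # 33.3
```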
The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing in that case is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
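Write amplification, the ratio of NAND writes to host writes, is what drives that dropoff. Given host and NAND write counters (many drives expose these via SMART; the example values below are hypothetical), the factor is trivial to compute:

```python
def write_amplification(host_bytes_written, nand_bytes_written):
    """Write amplification factor: bytes the NAND actually absorbed
    per byte the host asked to write. 1.0 is ideal; steady-state
    read-modify-write cycles push it well above that."""
    return nand_bytes_written / host_bytes_written

# Hypothetical counters: 100GB written by the host, 250GB hit the NAND
print(write_amplification(100e9, 250e9))  # 2.5
```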
The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.
|Intel SSD 530 240GB||Intel SSD 335 240GB||Corsair Neutron 240GB||Samsung SSD 840 EVO 250GB||Samsung SSD 840 Pro 256GB|
Even though the SF-2281 is over two and a half years old, its performance consistency is still impressive. Compared to the SSD 335, there's been quite a significant improvement as it takes nearly double the time for the SSD 530 to enter steady-state. Increasing the over-provisioning doesn't seem to have a major impact on performance, which is odd. On one hand it's a good thing as you can fill the SSD 530 without worrying that its performance will degrade but on the other hand, the steady-state performance could be better. For example the Corsair Neutron beats the SSD 530 by a fairly big margin with 25% over-provisioning.
To test TRIM, I filled the drive with incompressible sequential data and proceeded with 60 minutes of incompressible 4KB random writes at a queue depth of 32. I measured performance after the torture as well as after a single TRIM pass with AS-SSD since it uses incompressible data and hence suits this purpose.
|Intel SSD 530 Resiliency - AS-SSD Incompressible Sequential Write|
|Clean||After Torture (60 min)||After TRIM|
|Intel SSD 530 240GB||315.1MB/s||183.3MB/s||193.3MB/s|
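Put in relative terms (simple arithmetic on the table above):

```python
# Sequential write speeds in MB/s, from the resiliency table
clean, tortured, trimmed = 315.1, 183.3, 193.3

# Share of the clean-drive speed retained at each stage
print(round(100 * tortured / clean, 1))  # 58.2 after 60min of torture
print(round(100 * trimmed / clean, 1))   # 61.3 even after a TRIM pass
```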
SandForce's TRIM has never been fully functional when the drive is pushed into a corner with incompressible writes and the SSD 530 doesn't bring any change to that. This is really a big problem with SandForce drives if you're going to store lots of incompressible data (such as MP3s, H.264 videos and other highly compressed formats) because sequential speeds may suffer even more in the long run. As an OS drive the SSD 530 will do just fine since it won't be full of incompressible data, but I would recommend buying something non-SandForce if the main use will be storage of incompressible data. Hopefully SandForce's third generation controller will bring a fix to this.
AnandTech Storage Bench 2013
When Anand built the AnandTech Heavy and Light Storage Bench suites in 2011 he did so because we didn't have any good tools at the time that would begin to stress a drive's garbage collection routines. Once all blocks have a sufficient number of used pages, all further writes will inevitably trigger some sort of garbage collection/block recycling algorithm. Our Heavy 2011 test in particular was designed to do just this. By hitting the test SSD with a large enough and write intensive enough workload, we could ensure that some amount of GC would happen.
There were a couple of issues with our 2011 tests that we've been wanting to rectify however. First off, all of our 2011 tests were built using Windows 7 x64 pre-SP1, which meant there were potentially some 4K alignment issues that wouldn't exist had we built the trace on a system with SP1. This didn't really impact most SSDs but it proved to be a problem with some hard drives. Secondly, and more recently, we've shifted focus from simply triggering GC routines to really looking at worst-case scenario performance after prolonged random IO.
For years we'd felt the negative impacts of inconsistent IO performance with all SSDs, but until the S3700 showed up we didn't think to actually measure and visualize IO consistency. The problem with our IO consistency tests is that they are very focused on 4KB random writes at high queue depths and full LBA spans–not exactly a real world client usage model. The aspects of SSD architecture that those tests stress however are very important, and none of our existing tests were doing a good job of quantifying that.
We needed an updated heavy test, one that dealt with an even larger set of data and one that somehow incorporated IO consistency into its metrics. We think we have that test. The new benchmark doesn't even have a name, we've just been calling it The Destroyer (although AnandTech Storage Bench 2013 is likely a better fit for PR reasons).
Everything about this new test is bigger and better. The test platform moves to Windows 8 Pro x64. The workload is far more realistic. Just as before, this is an application trace based test–we record all IO requests made to a test system, then play them back on the drive we're measuring and run statistical analysis on the drive's responses.
Imitating most modern benchmarks Anand crafted the Destroyer out of a series of scenarios. For this benchmark we focused heavily on Photo editing, Gaming, Virtualization, General Productivity, Video Playback and Application Development. Rough descriptions of the various scenarios are in the table below:
|AnandTech Storage Bench 2013 Preview - The Destroyer|
|Photo Sync/Editing||Import images, edit, export||Adobe Photoshop CS6, Adobe Lightroom 4, Dropbox|
|Gaming||Download/install games, play games||Steam, Deus Ex, Skyrim, Starcraft 2, BioShock Infinite|
|Virtualization||Run/manage VM, use general apps inside VM||VirtualBox|
|General Productivity||Browse the web, manage local email, copy files, encrypt/decrypt files, backup system, download content, virus/malware scan||Chrome, IE10, Outlook, Windows 8, AxCrypt, uTorrent, Ad-Aware|
|Video Playback||Copy and watch movies||Windows 8|
|Application Development||Compile projects, check out code, download code samples||Visual Studio 2012|
While some tasks remained independent, many were stitched together (e.g. system backups would take place while other scenarios were taking place). The overall stats give some justification to what we've been calling this test internally:
|AnandTech Storage Bench 2013 Preview - The Destroyer, Specs|
|The Destroyer (2013)||Heavy 2011|
|Reads||38.83 million||2.17 million|
|Writes||10.98 million||1.78 million|
|Total IO Operations||49.8 million||3.99 million|
|Total GB Read||1583.02 GB||48.63 GB|
|Total GB Written||875.62 GB||106.32 GB|
|Average Queue Depth||~5.5||~4.6|
|Focus||Worst-case multitasking, IO consistency||Peak IO, basic GC routines|
SSDs have grown in their performance abilities over the years, so we wanted a new test that could really push high queue depths at times. The average queue depth is still realistic for a client workload, but the Destroyer has some very demanding peaks. When we first introduced the Heavy 2011 test, some drives would take multiple hours to complete it; today most high performance SSDs can finish the test in under 90 minutes. The Destroyer? So far the fastest we've seen it go is 10 hours. Most high performance SSDs we've tested seem to need around 12–13 hours per run, with mainstream drives taking closer to 24 hours. The read/write balance is also a lot more realistic than in the Heavy 2011 test. Back in 2011 we just needed something that had a ton of writes so we could start separating the good from the bad. Now that the drives have matured, we felt a test that was a bit more balanced would be a better idea.
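The table above also lets us back out average transfer sizes (simple division, assuming decimal gigabytes in the totals):

```python
# IO counts and volumes from the Destroyer spec table
reads, writes = 38.83e6, 10.98e6
gb_read, gb_written = 1583.02, 875.62  # decimal GB

avg_read = gb_read * 1e9 / reads       # bytes per read
avg_write = gb_written * 1e9 / writes  # bytes per write
print(round(avg_read / 1024), round(avg_write / 1024))  # 40 78
```

In other words, reads average roughly 40KB and writes roughly 78KB, consistent with a workload mixing small random IO and larger sequential transfers.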
Despite the balance recalibration, there's just a ton of data moving around in this test. Ultimately the sheer volume of data here and the fact that there's a good amount of random IO courtesy of all of the multitasking (e.g. background VM work, background photo exports/syncs, etc...) makes the Destroyer do a far better job of giving credit for performance consistency than the old Heavy 2011 test. Both tests are valid; they just stress/showcase different things. As the days of begging for better random IO performance and basic GC intelligence are over, we wanted a test that would give us a bit more of what we're interested in these days. As Anand mentioned in the S3700 review, having good worst-case IO performance and consistency matters just as much to client users as it does to enterprise users.
We're reporting two primary metrics with the Destroyer: average data rate in MB/s and average service time in microseconds. The former gives you an idea of the throughput of the drive during the time that it was running the Destroyer workload. This can be a very good indication of overall performance. What average data rate doesn't do a good job of is taking into account response time of very bursty (read: high queue depth) IO. By reporting average service time we heavily weigh latency for queued IOs. You'll note that this is a metric we've been reporting in our enterprise benchmarks for a while now. With the client tests maturing, the time was right for a little convergence.
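Both metrics fall straight out of the playback log. A minimal sketch over hypothetical (bytes, service-time) records:

```python
def destroyer_metrics(ios, wall_time_s):
    """ios: list of (bytes_transferred, service_time_us) tuples.
    Returns (average data rate in MB/s, average service time in us)."""
    total_bytes = sum(b for b, _ in ios)
    avg_rate = total_bytes / wall_time_s / 1e6       # MB/s over the run
    avg_service = sum(t for _, t in ios) / len(ios)  # mean latency in us
    return avg_rate, avg_service

# Hypothetical log: three IOs played back over a 2-second window
rate, svc = destroyer_metrics([(4096, 80), (131072, 400), (4096, 120)], 2.0)
print(round(rate, 3), svc)  # 0.07 200.0
```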
The SSD 530 does okay in our new Storage Bench 2013. The improvement over the SSD 335 is again quite significant, which is mostly thanks to the improved performance consistency. However, the SF-2281 simply can't challenge the more modern designs and for ultimate performance the SanDisk Extreme II is still the best pick.
Random Read/Write Speed
The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger and thus we have the four Iometer tests we use in all of our reviews.
Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo randomly generated data for each write as well as fully random data to show you both the maximum and minimum performance offered by SandForce based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why this matters, read our original SandForce article.
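The access pattern boils down to 4KB-aligned offsets drawn uniformly from an 8GB span. We generate the workload with Iometer; the snippet below is only an illustration of the pattern:

```python
import random

SPAN = 8 * 1024**3  # 8GB test span
IO_SIZE = 4096      # 4KB transfers

def random_offsets(n, seed=0):
    """n uniformly distributed, 4KB-aligned offsets within the span."""
    rng = random.Random(seed)
    return [rng.randrange(0, SPAN // IO_SIZE) * IO_SIZE for _ in range(n)]

offsets = random_offsets(1000)
assert all(o % IO_SIZE == 0 and 0 <= o < SPAN for o in offsets)
```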
Random speeds are slightly down from the SSD 520/525. The decrease could be NAND related, as read and program times go up as we move to smaller lithographies, which translates to a decrease in overall performance. Performance is better than the SSD 335's, but the SSD 530 also uses a higher bin of IMFT's 20nm NAND, so that's not a surprise.
Sequential Read/Write Speed
To measure sequential performance I ran a 1 minute long 128KB sequential test over the entire span of the drive at a queue depth of 1. The results reported are in average MB/s over the entire test length.
Read performance has never been one of SandForce's biggest strengths but write performance is top of the class (except when dealing with incompressible data).
AS-SSD Incompressible Sequential Read/Write Performance
The AS-SSD sequential benchmark uses incompressible data for all of its transfers. The result is a pretty big reduction in sequential write speed on SandForce based controllers.
Performance vs. Transfer Size
ATTO is a useful tool for quickly benchmarking performance across various transfer sizes. You can get the complete data set in Bench. SandForce's read performance has never been top notch and the SSD 530 is no exception. Compared to the SSD 335, read speeds are better across all smaller IO sizes but as you can see the SSD 530 is beaten by most of today's SSDs. The story is similar with write performance, although differences are noticeably smaller. The SF-2281 is really starting to show its age because it used to dominate all ATTO benchmarks thanks to the use of highly compressible data but those times seem to be over.
AnandTech Storage Bench 2011
Two years ago we introduced our AnandTech Storage Bench, a suite of benchmarks that took traces of real OS/application usage and played them back in a repeatable manner. Anand assembled the traces out of frustration with the majority of what we have today in terms of SSD benchmarks.
Although the AnandTech Storage Bench tests did a good job of characterizing SSD performance, they weren't stressful enough. All of the tests performed less than 10GB of reads/writes and typically involved only 4GB of writes specifically. That's not even enough to exceed the spare area on most SSDs. Most canned SSD benchmarks don't even come close to writing a single gigabyte of data, but that doesn't mean that simply writing 4GB is acceptable.
Originally I kept the benchmarks short enough that they wouldn't be a burden to run (~30 minutes) but long enough that they were representative of what a power user might do with their system.
Not too long ago I tweeted that I had created what I referred to as the Mother of All SSD Benchmarks (MOASB). Rather than only writing 4GB of data to the drive, this benchmark writes 106.32GB. It's the load you'd put on a drive after nearly two weeks of constant usage. And it takes a *long* time to run.
1) The MOASB, officially called AnandTech Storage Bench 2011 - Heavy Workload, mainly focuses on the times when your I/O activity is the highest. There is a lot of downloading and application installing that happens during the course of this test. My thinking was that it's during application installs, file copies, downloading and multitasking with all of this that you can really notice performance differences between drives.
2) I tried to cover as many bases as possible with the software I incorporated into this test. There's a lot of photo editing in Photoshop, HTML editing in Dreamweaver, web browsing, game playing/level loading (Starcraft II & WoW are both a part of the test) as well as general use stuff (application installing, virus scanning). I included a large amount of email downloading, document creation and editing as well. To top it all off I even use Visual Studio 2008 to build Chromium during the test.
The test has 2,168,893 read operations and 1,783,447 write operations. The IO breakdown is as follows:
|AnandTech Storage Bench 2011 - Heavy Workload IO Breakdown|
|IO Size||% of Total|
Only 42% of all operations are sequential, the rest range from pseudo to fully random (with most falling in the pseudo-random category). Average queue depth is 4.625 IOs, with 59% of operations taking place in an IO queue of 1.
Many of you have asked for a better way to really characterize performance. Simply looking at IOPS doesn't really say much. As a result I'm going to be presenting Storage Bench 2011 data in a slightly different way. We'll have performance represented as Average MB/s, with higher numbers being better. At the same time I'll be reporting how long the SSD was busy while running this test. These disk busy graphs will show you exactly how much time was shaved off by using a faster drive vs. a slower one during the course of this test. Finally, I will also break out performance into reads, writes and combined. The reason I do this is to help balance out the fact that this test is unusually write intensive, which can often hide the benefits of a drive with good read performance.
There's also a new light workload for 2011. This is a far more reasonable, typical every day use case benchmark. Lots of web browsing, photo editing (but with a greater focus on photo consumption), video playback as well as some application installs and gaming. This test isn't nearly as write intensive as the MOASB but it's still multiple times more write intensive than what we were running in 2010.
As always I don't believe that these two benchmarks alone are enough to characterize the performance of a drive, but hopefully along with the rest of our tests they will help provide a better idea.
The testbed for Storage Bench 2011 has changed as well. We're now using a Sandy Bridge platform with full 6Gbps support for these tests.
AnandTech Storage Bench 2011 - Heavy Workload
We'll start out by looking at average data rate throughout our new heavy workload test:
Performance in our 2011 Storage Bench is a bit below the average of SF-2281, although the difference isn't anything to worry about. I decided to include only the most important graphs but you can find the complete dataset in our Bench.
AnandTech Storage Bench 2011 - Light Workload
Our light workload actually has more write operations than read operations. The split is as follows: 372,630 reads and 459,709 writes. The relatively close read/write ratio does better mimic a typical light workload (although even lighter workloads would be far more read centric).
The I/O breakdown is similar to the heavy workload at small IOs, however you'll notice that there are far fewer large IO transfers:
|AnandTech Storage Bench 2011 - Light Workload IO Breakdown|
|IO Size||% of Total|
Thanks to the newer SF-2281 silicon, idle power consumption is down by ~36%. That's a welcome improvement since it puts the SSD 530 nearly on par with Samsung's offerings. Power consumption under load has not changed, though. Unfortunately I can't measure the real idle power consumption with HIPM+DIPM enabled since that requires a laptop for testing (the registry change does not work with desktop chipsets).
Ultimately there is one crucial thing that the SSD 530 provides over the SSD 520: price. The SSD 520 has been rather expensive and it hasn't really been able to compete, in price or performance, with the newer SSDs with smaller lithography NAND. As a result, I haven't been able to recommend the SSD 520 for ages as it simply hasn't provided any value for the extra cost. With the SSD 530 Intel's pricing is more reasonable and closer to other high-end SSDs. In fact, Intel's pricing is very competitive if you compare the SSD 530 to OCZ Vector 150 or SanDisk Extreme II, although those two also outperform the SSD 530 by a fairly big margin.
|NewEgg Price Comparison (11/12/2013)|
|Capacity||120/128GB||180GB||240/256GB||480/512GB|
|Intel SSD 530||$120||$170||$200||N/A ($419)|
|Intel SSD 520||N/A||$195||$250||N/A|
|Intel SSD 335||N/A||$155||$180||N/A|
|OCZ Vector 150||$135||N/A||$240||$490|
|OCZ Vertex 450||$115||N/A||$220||$460|
|Samsung SSD 840 EVO||$100||N/A||$175||$340|
|Samsung SSD 840 Pro||$150||N/A||$250||$570|
|SanDisk Extreme II||$150||N/A||$230||$460|
|Seagate SSD 600||$110||N/A||$150||$380|
We aren't able to find the 480GB model in stock anywhere right now (at least, not at any of the major online resellers), but it's interesting to note that Intel's ARK page shows a bulk price of just $419; by comparison, the 240GB has a bulk price of $219, so if Intel can truly sell the 480GB for close to $400 it's at least worth a look. Still, the competition is fierce, with the M500 and 840 EVO getting closer to $300 than $400 for 480-512GB capacities.
Other than price, power consumption is the only other major improvement in the SSD 530. Performance is mostly similar to the SSD 520, although I don't think this surprises anyone. The SF-2281 is well over two years old now, so there are no tricks left to increase performance.
I'm still of the opinion that Intel should offer a consumer oriented drive with its own SATA 6Gbps controller (i.e. the one used in the DC S3500/S3700). However, I do understand that it may not be cost effective, especially as the controller was designed for the enterprise to begin with, making it a poor fit for the consumer market with its slimmer margins. It will be interesting to see what Intel's approach will be with SATA Express as it gives Intel a new chance to design something in-house. With SATA 6Gbps Intel was very late to the game, which forced them to use third party controllers (first Marvell and then SandForce). With SATA 3Gbps, on the other hand, Intel was one of the first players to come up with a good controller and firmware (the X25-M series), so I certainly hope that we will see something similar this time around.