The Plextor M3

Plextor sent us a 256GB model of their M3 series. Below is a table containing the specifications of their M3 line.

Plextor M3 Specifications
Model                      PX-64M3    PX-128M3   PX-256M3   PX-512M3
Raw NAND Capacity          64GiB      128GiB     256GiB     512GiB
Formatted Capacity         59.6GiB    119.2GiB   238.5GiB   476.9GiB
Number of NAND Packages    8          8          8          8
Number of Die per Package  1          2          4          8
Sequential Read            520MB/s    510MB/s    510MB/s    525MB/s
Sequential Write           175MB/s    210MB/s    360MB/s    445MB/s
4K Random Read             55K IOPS   70K IOPS   70K IOPS   56K IOPS
4K Random Write            40K IOPS   50K IOPS   65K IOPS   30K IOPS
Cache (DDR3)               128MB      256MB      512MB      512MB

The Plextor M3 is available in all the standard capacities. In light of the performance specifications, the M3 looks very promising. It beats its closest match, the Crucial m4, in every respect, and it's competitive even with SandForce based SSDs; the stated random read figures in particular are great.

NewEgg Price Comparison (4/2/2012)
                     64GB    128GB   256GB   512GB
Plextor M3           $110    $180    $340    $660
Crucial m4           $88     $155    $315    $630
Intel 520 Series     $110    $180    $345    $800
Samsung 830 Series   $105    $185    $300    $780
OCZ Vertex 3         $90     $178    $340    $770

Price-wise, the M3 is not the cheapest SSD, especially at the smaller capacities. There is roughly a $10-15 premium on the 64GB and 128GB models, while the 256GB and 512GB models are more competitively priced. Crucial's m4, however, comes in cheaper than the M3 at every capacity, so that will be the key matchup: Plextor has to win on performance or come down in price.

The external design of the Plextor M3 is very solid. When I first saw it, it reminded me of the Samsung 830 with its brushed metal finish. Only the Plextor logo is printed on the front; the model and other important information are on a sticker on the back of the drive. The drive package includes a 3.5" bracket, a quick installation guide, and a software CD with a clone & backup utility and a performance analyzer. Plextor backs the M3 with a top notch 5-year warranty as well.

Each of the main components (controller, NAND devices, and DRAM) has its own little thermal pad. Since the chassis is also made of metal, heat dissipation should not be a problem.

Inside we find Marvell’s 88SS9174-BLD2 controller (or just 9174). This is the same controller that's in Crucial's m4, but the firmware is custom developed by Plextor. It’s actually a bit surprising, yet very refreshing, to see a Marvell based SSD for a change. Everyone seems to have a SandForce solution these days. We have seen that Marvell can be competitive; you just need to take the time to customize the firmware to get good performance. The stock SandForce firmware is fast enough, so it's obvious that many companies choose to go with the easiest option.

Flip the PCB over and we find eight Toshiba 24nm 2-bit-per-cell MLC NAND devices. They are paired with two 256MB DDR3-1333 chips from Nanya, for a total of 512MB of DDR3 cache.

Toshiba uses a Toggle-Mode interface and the current iteration (2.0) of Toggle-Mode NAND is good for up to 400MT/s per interface. Rating speed by transfers is a bit annoying as it doesn't tell us the actual bandwidth—for that we need the width of the channel and transfers per second. The channel in this case is 8 bits wide, so that works out to be 3.2Gbps per interface, or 400MB/s. With eight NAND packages, the maximum throughput works out to be 3200MB/s, over four times more than what SATA 6Gb/s can provide. Of course, reading from NAND and dumping the data into a register is one thing; it's another matter to actually transfer the data to a host controller over the interface.
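As a sanity check, the arithmetic above works out as follows. This is a quick sketch; the ~600MB/s figure for SATA 6Gb/s is our assumption based on the interface's 8b/10b encoding overhead:

```python
# Back-of-the-envelope NAND bandwidth math for the M3's configuration.
transfers_per_sec = 400e6   # Toggle-Mode 2.0: up to 400 MT/s per interface
bus_width_bits = 8          # each channel is 8 bits wide

# 400 MT/s * 8 bits = 3.2 Gbps per interface, i.e. 400 MB/s
per_interface_mb_s = transfers_per_sec * bus_width_bits / 8 / 1e6

packages = 8
aggregate_mb_s = per_interface_mb_s * packages   # 3200 MB/s across the drive

# SATA 6Gb/s moves ~600 MB/s of payload after 8b/10b encoding (assumption)
sata_6gbps_mb_s = 600
print(per_interface_mb_s, aggregate_mb_s, aggregate_mb_s / sata_6gbps_mb_s)
```

So the NAND array could in theory source data more than five times faster than the host interface can accept it, which is why the interface, not the NAND, is the bottleneck.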

A quick word about firmware updates before we get to the benchmarks. Our drive came with firmware 1.01, the latest at the time, and all our tests were run with it. Plextor has since released firmware 1.02, which is supposed to fix some issues; the release notes do not claim any performance increase. The update process itself is very simple: download a small ISO (~3MB) from Plextor's site, burn it to a CD or write it to a USB stick, and boot from that. Press Enter and it automatically flashes the drive. I even had all my other drives plugged in and there was no problem.

The Test

CPU: Intel Core i5-2500K running at 3.3GHz (Turbo and EIST enabled)
Motherboard: ASRock Z68 Pro3
Chipset: Intel Z68
Chipset Drivers: Intel 9.1.1.1015 + Intel RST 10.2
Memory: G.Skill RipjawsX DDR3-1600 2 x 4GB (9-9-9-24)
Video Card: XFX AMD Radeon HD 6850 XXX (800MHz core clock; 4.2GHz GDDR5 effective)
Video Drivers: AMD Catalyst 10.1
Desktop Resolution: 1920 x 1080
OS: Windows 7 x64

Our regular readers may notice that my testbed is not exactly the same as Anand's. Anand's setup is based on an Intel motherboard with the H67 chipset, whereas mine is an ASRock board with Intel's Z68 chipset. The important bit is that both feature native SATA 6Gb/s support and both setups use the same drivers. Other features and components don't really affect SSD testing; for example, the average CPU usage during write speed tests is less than 5%.

113 Comments

  • epobirs - Thursday, April 5, 2012 - link

    Kind of sad to see a review where Plextor is treated as an unknown. For quite a long time they were the brand against which all others were judged, for one simple reason: if Plextor said their drive functioned at speed X, it did. If other companies were claiming a new high performance mark and Plextor hadn't produced a matching product yet, it often meant those other companies were lying about their performance.
  • cjcoats - Thursday, April 5, 2012 - link

    I'm a scientific user (environmental model), and I have a transaction-pattern I've never seen SSD benchmarks use:

    My dataset transactions are of the form "read or write the record for variable V for time T" (where record-size S may depend upon the variable; typical values range from 16K to 100MB).

    The datasets have a matching form:

    * a header that indexes the variables, their records, and various other metadata

    * a sequence of data records for the time steps of the variables.

    This may be implemented using one of various standard scientific dataset libraries (netCDF, HDF, ...).

    A transaction basically is of the form:

    Seek to the start of the record for variable V, time T.

    Read or write S bytes at that location.

    NONE of the SSD benchmarks I've seen have this kind of "seek, then access" pattern. I have the suspicion that SandForce based drives will do miserably with it, but have no hard info.

    Any ideas?
  • bji - Thursday, April 5, 2012 - link

    SSDs have no "seek". Your program's concept of "seek" just sets which blocks it will read next, but from an SSD's perspective there is little to no difference between the access pattern used for a particular operation and any other random access pattern. The only significant difference is between random and sequential access.

    My point being, your case sounds like it is covered exactly by the "random write" and "random read" benchmarks. It doesn't matter which part of the file you are "seeking" to, just that your access is non-sequential. All random access is the same (more or less) to an SSD.

    This is most of the performance win of SSDs over HDDs - no seek time. SSDs have no head to move and no latency waiting for the desired sectors to arrive under the head.
  • cjcoats - Friday, April 6, 2012 - link

    I guessed you'd understand the obvious: seek() interacts with data-compression.

    A seek to a 500MB point may depend upon sequentially decompressing the preceding 500 MB of data in order to figure out what the data-compression has done with that 500MB seek-point!

    That's how you have to do seeks in conjunction with the software "gzlib", for example.

    So how do SandForce drives deal with that scenario ??
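    The cost described above is easy to demonstrate in software. A minimal sketch with Python's gzip module (the data and sizes here are arbitrary): a forward seek in a gzip stream has no index into the compressed data, so the library implements it by decompressing and discarding everything before the target offset.

```python
# Seeking in a gzip stream: GzipFile.seek() must decompress and throw
# away all preceding data, because deflate output has no random-access index.
import gzip
import io
import os

raw = os.urandom(8 * 1024 * 1024)        # 8 MB of incompressible test data
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as f:
    f.write(raw)

buf.seek(0)
with gzip.GzipFile(fileobj=buf, mode="rb") as f:
    f.seek(len(raw) - 4096)              # decompresses ~8 MB behind the scenes
    tail = f.read(4096)                  # then returns the last 4 KB

assert tail == raw[-4096:]
```

    Note that this decompress-and-discard work is done by the CPU in the host, which is also why drive-level compression (which operates on independent blocks) doesn't hit the same problem.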
  • Cerb - Saturday, April 7, 2012 - link

    Nobody can say exactly the results for your specific uses, but it would probably be best to focus on other aspects of the drives, given performance of the current lot. You might get a 830, while a 520 could be faster at your specific application, but you'd more than likely be dealing with <10% either way. If it was more than that, a good RAID card would be a worthy addition to your hardware.

    If you must read the file up to point X, then that's a sequential read. If you read an index and then just read what you need, then that's a random read.

    Compression of the data *IN*SOFTWARE* is a CPU/RAM issue, not an SSD issue. For that, focus on incompressible data results.

    TBH, though, if you must read 500MB into it to edit a single small record, you should consider seekable data formats, instead of wasting all that CPU time.
  • bji - Saturday, April 7, 2012 - link

    They don't use streaming ciphers. They use block ciphers that encrypt each block individually and independently. Once the data makes it to the drive, there is no concept of 'file', it's just 'sectors' or whatever the block concept is at the SATA interface level. As far as the SSD is concerned, no two sectors have any relationship and wear levelling moves them around the drive in seemingly arbitrary (but designed to spread writes out) ways.

    Basically what happens is that the drive represents the sequence of sectors on the drive using the same linear addressing scheme as is present in hard drives, but maintains a mapping for each sector from the linear address that the operating system uses to identify it, to whatever unrelated actual block and sub-block address on the device that it is physically located at. Via this mapping the SSD controller can write blocks wherever makes the most sense, but present a consistent linear sector addressing scheme to the operating system. The SSD can even move blocks around in the background and during unrelated writes, which it definitely does to reduce write amplification and once again for wear levelling purposes. The operating system always believes that it wrote the sector at address N, and the SSD will always deliver the same data back when address N is read back, but under the covers the actual data can be moved around and positioned arbitrarily by the SSD.
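    The mapping described above can be sketched in a few lines. This is a hypothetical, heavily simplified model (real FTLs track stale pages, garbage-collect, and persist the map), but it shows the key idea: the host's linear sector address is just a key into a table, and every write lands on a fresh physical page.

```python
# Toy flash translation layer: logical sector -> physical page indirection.
class TinyFTL:
    def __init__(self, physical_pages):
        self.flash = [None] * physical_pages  # simulated physical pages
        self.map = {}                         # logical sector -> physical page
        self.next_free = 0                    # naive log-structured allocator

    def write(self, lba, data):
        page = self.next_free                 # always write out-of-place
        self.next_free += 1
        self.flash[page] = data
        self.map[lba] = page                  # remap; old page becomes stale

    def read(self, lba):
        return self.flash[self.map[lba]]      # host never sees the indirection

ftl = TinyFTL(16)
ftl.write(0, b"v1")
ftl.write(0, b"v2")        # same logical sector, different physical page
assert ftl.read(0) == b"v2"
assert ftl.map[0] == 1     # the data moved, but LBA 0 still reads back correctly
```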

    Given the above, and given that blocks are being written all over the flash all the time regardless of how linearly the operating system thinks it has arranged them, there really isn't any concept of contiguously compressed blocks and having to start back at the beginning of some stream of data to uncompress data.

    Keep in mind also that the SandForce drives do de-duplication as well (as far as I know), which means that for many blocks that have the same contents, only one copy needs to actually be stored in the flash and the block mapping can point multiple operating system sector addresses at the same physical flash block and sub-block segment that has the data. Of course it would have to do copy-on-write when the sector is written but that's not hard once you have all of the rest of the controller machinery built.
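    A hypothetical sketch of that dedup-plus-copy-on-write idea (the hashing scheme and reference counting here are illustrative, not how any real controller is documented to work): identical sector contents are stored once and reference-counted, and writing to a shared sector allocates new storage instead of mutating the shared copy.

```python
# Toy content-addressed store with reference counting and copy-on-write.
import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}   # content hash -> (data, refcount)
        self.map = {}      # logical sector -> content hash

    def write(self, lba, data):
        old = self.map.get(lba)
        if old is not None:                   # release the old reference
            d, rc = self.blocks[old]
            if rc == 1:
                del self.blocks[old]          # last reference: reclaim
            else:
                self.blocks[old] = (d, rc - 1)
        h = hashlib.sha256(data).hexdigest()
        d, rc = self.blocks.get(h, (data, 0))
        self.blocks[h] = (d, rc + 1)          # identical content is shared
        self.map[lba] = h

    def read(self, lba):
        return self.blocks[self.map[lba]][0]

s = DedupStore()
s.write(0, b"same")
s.write(1, b"same")        # deduplicated: only one stored copy
assert len(s.blocks) == 1
s.write(1, b"changed")     # copy-on-write: sector 0 is unaffected
assert s.read(0) == b"same" and s.read(1) == b"changed"
```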

    SSD controllers must be really interesting tech to work on. I can't imagine all of the cool algorithmic tricks that must be going on under the covers, but it's fun to try.
  • BolleY2K - Thursday, April 5, 2012 - link

    Don't forget about the Yamaha CRW-F1... ;-)
  • Metaluna - Saturday, April 7, 2012 - link

    Heh, I still have one of those Yamahas, along with some old Kodak Gold CD-Rs. I have no real use for them anymore but hate to toss them.
  • Hourglasss - Thursday, April 5, 2012 - link

    You made a forgivable mistake with OCZ's Vertex 4. You said the M3 was the fastest non-SandForce drive. The Vertex 4 is built on OCZ's new Everest 2 controller, which they developed in-house after acquiring Indilinx. So the M3 is fast, but it's second among non-SandForce drives.
  • zipz0p - Thursday, April 5, 2012 - link

    I am glad that I'm not the only one who noticed this! :)
