Inside the Vertex 4

At a high level, Everest 2 is architecturally very similar to its predecessor. The SoC is still built on a 65nm G process and maintains the same basic architecture. Clock speeds are up from 333MHz to 400MHz. There are also microarchitectural tweaks at work here – limits present in Everest 1 have been removed in Everest 2. Everest 2's sequencer is much improved, but getting more detail than that is basically impossible.


Everest 1's Block Diagram, Similar to Everest 2

Just as with previous OCZ drives, the Vertex 4 ships with its own custom-designed PCB. Unlike most SSDs we've seen, the Vertex 4 places its Everest 2 controller in the center of the PCB – with NAND fanning out in a circle around it. The Indilinx SoC uses the drive housing as a heatsink; two thermal pads help conduct the heat away from the chip:


OCZ Vertex 4 512GB

In the 512GB version we find quad-die packaged Intel 25nm synchronous MLC NAND devices, sixteen of them in total. The 256GB version keeps the same number of packages but drops the die count per package from 4 to 2. Despite launching with Intel NAND, OCZ claims broad support for flash from other vendors. Touting that advantage really only matters if there's cheaper NAND available on the market; today, Intel NAND is still priced competitively enough to make it an obvious fit.


OCZ Vertex 4 512GB

OCZ is currently testing 20nm Intel NAND and 24nm Toggle Mode NAND with the Everest 2. Both are functional at this point, but neither is optimized in the current firmware. Should Toggle Mode NAND pricing or performance offer a measurable advantage, OCZ will introduce a separate product based on it. Currently, low-density 24nm Toggle NAND isn't price competitive, and thus it isn't used at launch. OCZ did add that a 1TB Vertex 4 would almost certainly use Toggle NAND, as Toshiba's high-density pricing (64GB, octal-die packages) is better than IMFT's these days. We're still at least a quarter away from seeing 20nm NAND used in volume, so enabling support for either of these options will come via a firmware update.

With its sights fixed on OEMs, OCZ is far more concerned about drive longevity than it once was. OEMs don't like having to re-qualify components; they want a steady supply of the same product for the lifespan of whatever system they're selling. If a system is shipping to a government buyer, that lifespan can be extremely long. As a result, OCZ wanted to build a controller that was as forward-looking as possible. Everest 2 needed to be able to migrate to 20nm without requiring another qualification pass from OCZ's customers. As NAND cell sizes shrink, error rates go up. Only so much can be dealt with at the NAND factory; the controller is extremely important to maintaining data integrity.

The original Octane could correct up to 78 random bits for every 1KB of data using its BCH ECC engine. While that was more than sufficient for 25nm NAND, OCZ is planning for the future with Everest 2 and implemented a more robust ECC engine capable of correcting up to 128 random bits for every 1KB of data. OCZ believes Everest 2's ECC capabilities are enough to ensure reliable operation with 20nm IMFT NAND and perhaps the first 1x-nm IMFT NAND as well. For consumers, however, this has no bearing on the Vertex 4's performance or reliability as a drive today (Octane's ECC engine was enough).
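
To put those figures in perspective, here's a rough back-of-the-envelope calculation (mine, not OCZ's published math) of the raw bit error rate each engine can tolerate, simply dividing the correctable bit count by the 8192 bits in a 1KB codeword:

# Rough sketch: tolerable raw bit error rate implied by a BCH engine
# that corrects N random bit errors per 1KB (8192-bit) codeword.
# Illustrative only; real codeword and spare-area sizes differ.
CODEWORD_BITS = 1024 * 8

for name, correctable_bits in [("Everest 1 (Octane)", 78), ("Everest 2 (Vertex 4)", 128)]:
    ber = correctable_bits / CODEWORD_BITS
    print(f"{name}: up to {correctable_bits} bits/KB -> raw BER ~{ber:.3%}")

# Everest 1 (Octane): up to 78 bits/KB -> raw BER ~0.952%
# Everest 2 (Vertex 4): up to 128 bits/KB -> raw BER ~1.562%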

1GB of DDR3-800 On-Board

The Everest 2 controller is flanked by a 512MB Micron DDR3-800 DRAM device. Another 512MB chip sits on the flip side of the PCB, bringing the total to a whopping 1GB of DDR3 memory on-board. OCZ makes no effort to hide the DRAM's purpose: Everest 2 will prefetch read requests from NAND into DRAM for quick servicing to the host. When serviced from DRAM, reads should complete as fast as the interface will allow – in other words, the limit is the 6Gbps SATA interface, not the SSD.
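
A 6Gbps SATA link with 8b/10b encoding tops out at roughly 600MB/s of payload, which is the ceiling OCZ is alluding to. As for the prefetching itself, Indilinx doesn't document its algorithm; the Python below is only a minimal sketch of the general read-ahead idea, with the page size, depth and policy chosen arbitrarily for illustration:

# Minimal sketch of read-ahead caching into a DRAM buffer. The policy,
# page size and depth here are assumptions for illustration only --
# they are not Everest 2's actual (undocumented) algorithm.
NAND_PAGE = 8192      # bytes, assumed page size
PREFETCH_DEPTH = 32   # pages pulled into DRAM ahead of the host

dram_cache = {}       # page number -> data held in the DRAM buffer

def read_page_from_nand(page):
    # Placeholder for a (comparatively slow) NAND read
    return b"\x00" * NAND_PAGE

def host_read(page):
    # Serve from DRAM when possible; the bottleneck then becomes SATA, not NAND
    if page in dram_cache:
        return dram_cache.pop(page)
    data = read_page_from_nand(page)
    # Speculatively pull the following pages into DRAM for future reads
    for ahead in range(page + 1, page + 1 + PREFETCH_DEPTH):
        if ahead not in dram_cache:
            dram_cache[ahead] = read_page_from_nand(ahead)
    return data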


OCZ Vertex 4 256GB

In order to get 128GB, 256GB and 512GB drives to market as quickly as possible, OCZ is shipping them all with 1GB of DRAM on-board. The 128GB and 256GB drives simply won't use all of the DRAM, however. A future revision of the Vertex 4 will pair the 128/256GB drives with 512MB of memory instead to save on costs.


OCZ Vertex 4 256GB

The amount of memory bandwidth available to the Everest 2 controller is insane – we're talking about 3.2GB/s, as much as many modern-day smartphone SoCs, and as much as a desktop PC had a decade ago.
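
That 3.2GB/s figure is consistent with DDR3-800's 800 million transfers per second over a 32-bit interface; the interface width is my inference from the quoted number, not something OCZ has confirmed:

# Where 3.2GB/s plausibly comes from (32-bit bus width is an assumption).
transfer_rate = 800e6   # DDR3-800: 800 million transfers per second
bus_width_bytes = 4     # 32-bit DRAM interface (inferred, not confirmed)
print(transfer_rate * bus_width_bytes / 1e9, "GB/s")   # -> 3.2 GB/s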

OCZ wouldn't tell me whether the cost of shipping 1GB of DDR3 memory outweighs the savings from no longer having to pay SandForce for silicon. Even though it owns Indilinx, R&D and manufacturing aren't free. All of that factored in, the Everest 2 controller likely costs less than SandForce's SF-2281, but it's not clear to me whether the added cost of DRAM offsets that gap. None of this matters to end users, but it's an interesting discussion nonetheless. OCZ will have to deliver aggressive pricing regardless of its internal cost structure.

AES-256 Encryption

Similar to the Octane/Everest 1, all data written to NAND in the Vertex 4 goes through Everest 2's 256-bit AES encryption engine. Modern SSDs scramble data before writing to NAND to begin with (certain data patterns are more prone to errors in flash than others) and encryption offers security benefits in addition to working as a good scrambling engine. If you're going to support scrambling, the jump to enabling encryption isn't all that far.
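
To see why a block cipher makes such a good scrambler, consider what happens to a worst-case repetitive pattern: the ciphertext comes out looking statistically random. The sketch below uses AES-256 in CTR mode via the Python cryptography package purely for illustration; OCZ doesn't disclose which cipher mode or key handling Everest 2 actually uses.

# Encrypting a highly repetitive pattern yields near-uniform output --
# exactly what a NAND scrambler wants. AES-256-CTR here is illustrative;
# Everest 2's actual mode isn't disclosed.
import os
from collections import Counter
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)                # AES-256 key (the drive generates its own)
nonce = os.urandom(16)
plaintext = b"\xff" * 4096          # worst-case repetitive pattern

encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

print("distinct plaintext byte values: ", len(Counter(plaintext)))    # 1
print("distinct ciphertext byte values:", len(Counter(ciphertext)))   # ~256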

Similar to other SSDs, the Vertex 4's encryption key is generated randomly at the factory. Unfortunately, also similar to other SSDs, there's no client-facing tool to reset or manage the key. I believe the key is regenerated upon a secure erase and it can be tied to an ATA password; however, what I'd really like to see is a bundled software package that allows users to generate a new key and require a password at boot (not all ATA password implementations are particularly secure). I know there are third-party applications that offer this functionality today, but I'd like to see something ship with one of these FDE drives by default so more consumers can actually use the feature. There's no point to having a self-encrypting drive that gives up your data as soon as you plug it into another system. While I'm making requests, I'd also like to see a way for OS X users to take advantage of built-in full-disk encryption.

Ndurance 2.0 and a 5-year Warranty

With Everest 2 OCZ supports redundant NAND arrays, similar to the latest Intel and SandForce controllers. By including redundant NAND on-board, the drive could withstand the failure of more than a single die without any data loss. The Vertex 4 doesn't have OCZ's redundant NAND technology enabled, although the enterprise version of the drive (Intrepid 3) will likely turn it on.
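
Die-level redundancy of this kind is conceptually similar to RAID parity striped across NAND dies. OCZ doesn't detail its scheme, so the sketch below only shows the textbook single-parity case (surviving more than one die failure requires a stronger code), with hypothetical die contents:

# Textbook XOR parity across dies: the contents of any one failed die can
# be rebuilt from the survivors plus the parity die. Illustrative only --
# not OCZ's documented implementation.
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data_dies = [bytes([die] * 16) for die in range(7)]   # 7 hypothetical data dies
parity_die = xor_blocks(data_dies)                    # an 8th die holds parity

# Simulate losing die 3 and rebuilding it from the survivors plus parity.
survivors = data_dies[:3] + data_dies[4:]
rebuilt = xor_blocks(survivors + [parity_die])
assert rebuilt == data_dies[3]
print("die 3 rebuilt successfully")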

OCZ is also doing some granular manipulation of voltages at the NAND level in order to get the most endurance out of these drives (not all NAND is created equal; adjusting to the characteristics of individual NAND devices can yield more p/e cycles from the drive). While this isn't really a concern for 25nm NAND on the Vertex 4 today, it is likely a feature we'll see played up with the move to 20nm and for eMLC versions of the drive targeted at the enterprise.

OCZ's confidence in reliability is at an all-time high: the Vertex 4 ships with a 5-year warranty, up from 3 years on the Octane, Vertex 2 and Vertex 3.

Comments

  • Kristian Vättö - Wednesday, April 4, 2012 - link

    240GB Vertex 3 is actually faster than 480GB Vertex 3:

    http://www.anandtech.com/bench/Product/352?vs=561
    http://www.ocztechnology.com/res/manuals/OCZ_Verte...
  • MarkLuvsCS - Wednesday, April 4, 2012 - link

    The 256GB and 512GB versions should perform nearly identically because they have the same number of NAND packages – 16. The 512GB version just uses 32GB NAND packages vs. 16GB in the 256GB version. The differences between the 256 and 512GB drives are negligible.
  • Iketh - Wednesday, April 4, 2012 - link

    that concept of yours depends entirely on how each line of SSD is architected... it goes without saying that each manufacturer implements different architectures....

    your comment is what is misleading
  • Glock24 - Wednesday, April 4, 2012 - link

    "...a single TRIM pass is able to restore performance to new"

    I've seen statements similar to this on previous reviews, but how do you force a TRIM pass? Do you use a third party application? Is there a console command?
  • Kristian Vättö - Wednesday, April 4, 2012 - link

    Just format the drive using Windows' Disk Management :-)
  • Glock24 - Wednesday, April 4, 2012 - link

    Well, I will ask this another way:

    Is there a way to force the TRIM command that will not destroy the data on the drive?
  • Kristian Vättö - Wednesday, April 4, 2012 - link

    If you've had TRIM enabled throughout the life of the drive, then there is no need to TRIM it as the empty space should already be TRIM'ed.

    One way of forcing it would be to duplicate a big file (e.g. an archive or movie file) until the drive runs out of space, then delete the copies.
  • PartEleven - Wednesday, April 4, 2012 - link

    I was also curious about this, and hope you can clarify some more. So my understanding is that Windows 7 has TRIM enabled by default if you have an SSD, right? So are you saying that if you have TRIM enabled throughout the life of the drive, Windows should automagically TRIM the empty space regularly?
  • adamantinepiggy - Wednesday, April 4, 2012 - link

    http://ssd.windows98.co.uk/downloads/ssdtool.exe

    This tool will initiate a TRIM manually. The problem is that unless you can monitor the SSD, you won't know it has actually done anything. I know it works with Crucial drives on Win7 as I can see the SSDs initiate a TRIM from the monitoring port of the SSD when I use this app. I can only "assume" it works on other SSDs too, but since I can't monitor them, I can't know for sure.
  • Glock24 - Wednesday, April 4, 2012 - link

    I'll try that tool.

    For those using Linux, I've used a tool bundled with hdparm called wiper.sh:

    wiper.sh: Linux SATA SSD TRIM utility, version 3.4, by Mark Lord.

    Linux tune-up (TRIM) utility for SATA SSDs
    Usage: /usr/sbin/wiper.sh [--max-ranges <num>] [--verbose] [--commit] <mount_point|block_device>
    Eg: /usr/sbin/wiper.sh /dev/sda1
