In the process of assimilating SanDisk, Western Digital has been re-using its hard drive branding on consumer SSDs: WD Green, Blue and Black can each refer to either a mechanical hard drive or an SSD. The WD Blue brand covers the most mainstream products, which for SSDs has meant SATA drives. The first WD Blue SSD, introduced in 2016, used planar TLC NAND and a Marvell controller with the usual amount of DRAM for a mainstream SSD. The next year, the WD Blue was updated with 3D TLC NAND that kept it competitive with the Crucial MX series and the Samsung 850 EVO. 2018 passed with no changes to the WD Blue hardware, but prices were slashed to keep pace with the rest of the industry: the 1TB drive that debuted at an MSRP of $310 is now selling for $120.

SanDisk's 64-layer 3D TLC NAND is nearing the end of its product cycle, but neither SanDisk nor the other NAND flash manufacturers are in a hurry to switch over to 96-layer NAND, so it's not quite time for another straightforward refresh of the WD Blue. Instead, Western Digital has chosen to migrate the WD Blue brand to a different market segment. Now that the WD Black is well established as a high-end NVMe product, there's room for an entry-level NVMe SSD, and that role falls to the new WD Blue SN500. This is little more than a re-branding of an existing OEM product (the WD SN520), in the same way that the current WD Black SN750 is based on the WD SN720. The SN520 was announced more than a year ago, but as an OEM product we were unable to obtain a review sample. Like the high-end SN720 and SN750, the SN520 and WD Blue SN500 use Western Digital's in-house NVMe SSD controller architecture, albeit in a cut-down implementation with just two PCIe lanes and no DRAM interface. The high-end version of this controller architecture has proven to be very competitive (especially for a first-generation product), but so far we have only the SN500's spec sheet by which to judge the low-end variant.

WD Blue SN500 Specifications
Capacity             250 GB                  500 GB
Form Factor          M.2 2280, single-sided
Interface            NVMe, PCIe 3.0 x2
Controller           Western Digital in-house
NAND                 SanDisk 64-layer 3D TLC
DRAM                 None (Host Memory Buffer not supported)
Sequential Read      1700 MB/s               1700 MB/s
Sequential Write     1300 MB/s               1450 MB/s
4KB Random Read      210k IOPS               275k IOPS
4KB Random Write     170k IOPS               300k IOPS
Peak Power           5.94 W                  5.94 W
PS3 Idle Power       25 mW                   25 mW
PS4 Idle Power       2.5 mW                  2.5 mW
Write Endurance      150 TB                  300 TB
Warranty             5 years                 5 years
MSRP                 $54.99 (22¢/GB)         $77.99 (16¢/GB)

High-end client/consumer NVMe SSDs all use PCIe 3.0 x4 interfaces, but the entry-level NVMe market is split between four-lane and two-lane controllers. Two-lane controllers are generally cheaper and their smaller size makes them attractive for small form factor devices that can't fit a full 22x80mm M.2 card. The WD SN520 is a 22x30mm design that is also available in 42mm and 80mm card lengths, but the retail WD Blue SN500 will only be sold in the 80mm length that is most common for consumer M.2 drives.
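
As a rough sanity check on what a two-lane link can deliver, the sketch below (our own back-of-the-envelope arithmetic, not a figure from Western Digital) compares the theoretical PCIe 3.0 x2 ceiling against the SN500's rated sequential read:

```c
/* Back-of-the-envelope estimate: theoretical PCIe 3.0 x2 throughput versus
 * the SN500's rated 1700 MB/s sequential read. Uses only the raw signaling
 * rate and 128b/130b line encoding from the PCIe 3.0 spec; packet and
 * protocol overhead are ignored, so the real ceiling is slightly lower. */
#include <stdio.h>

int main(void)
{
    const double raw_gt_per_s = 8.0;           /* PCIe 3.0: 8 GT/s per lane */
    const double encoding     = 128.0 / 130.0; /* 128b/130b line encoding   */
    const int    lanes        = 2;

    /* GT/s * encoding gives usable Gb/s per lane; divide by 8 for GB/s. */
    double link_gb_per_s = raw_gt_per_s * encoding * lanes / 8.0;

    printf("PCIe 3.0 x%d theoretical ceiling: ~%.2f GB/s\n",
           lanes, link_gb_per_s);
    printf("SN500 rated sequential read: 1.70 GB/s (~%.0f%% of the link)\n",
           100.0 * 1.70 / link_gb_per_s);
    return 0;
}
```

After encoding overhead, two lanes top out just under 2 GB/s, so the SN500's 1700 MB/s rating already sits fairly close to the interface limit; a four-lane drive has roughly twice that headroom.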

The switch from SATA to NVMe means the new WD Blue SN500 will offer much higher peak performance, but the use of a DRAMless controller means there may be some corner cases where heavy workloads show little improvement or even regress in performance. The SN500's controller does not use the NVMe Host Memory Buffer, but does include an undisclosed amount of memory on-board that serves a similar purpose. This means that omitting the external DRAM from the drive should not have as severe a performance impact as it does for DRAMless SATA drives like the WD Green SSD.
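
Whether a drive asks the host for a memory buffer is easy to verify from software, since the Host Memory Buffer (HMB) sizes a controller requests are advertised in the HMPRE/HMMIN fields of its NVMe Identify Controller data. Below is a minimal Linux sketch (assuming root access and a little-endian host; the device path is just an example) that reads those fields through the standard NVMe admin passthrough ioctl. A drive that doesn't use HMB, as Western Digital says is the case here, should report both fields as zero:

```c
/* Minimal sketch: issue an NVMe Identify Controller command through the
 * Linux admin passthrough ioctl and print the Host Memory Buffer fields.
 * HMPRE (preferred size) and HMMIN (minimum size) live at bytes 272-275
 * and 276-279 of the 4 KB Identify Controller structure, in 4 KiB units.
 * Build with gcc and run as root, e.g.: ./hmbcheck /dev/nvme0 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/nvme_ioctl.h>

int main(int argc, char **argv)
{
    const char *dev = (argc > 1) ? argv[1] : "/dev/nvme0";
    uint8_t id[4096];
    struct nvme_admin_cmd cmd;

    int fd = open(dev, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    memset(&cmd, 0, sizeof(cmd));
    memset(id, 0, sizeof(id));
    cmd.opcode   = 0x06;                      /* Identify */
    cmd.nsid     = 0;
    cmd.addr     = (uint64_t)(uintptr_t)id;   /* buffer for the 4 KB result */
    cmd.data_len = sizeof(id);
    cmd.cdw10    = 1;                         /* CNS=1: Identify Controller */

    if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0) {
        perror("NVME_IOCTL_ADMIN_CMD");
        close(fd);
        return 1;
    }

    uint32_t hmpre, hmmin;                    /* little-endian host assumed */
    memcpy(&hmpre, id + 272, sizeof(hmpre));
    memcpy(&hmmin, id + 276, sizeof(hmmin));

    printf("HMPRE: %u MiB preferred, HMMIN: %u MiB minimum\n",
           hmpre / 256, hmmin / 256);         /* 4 KiB units -> MiB */
    puts(hmpre == 0 ? "Drive does not request a Host Memory Buffer"
                    : "Drive can make use of a Host Memory Buffer");

    close(fd);
    return 0;
}
```

The same fields are also reported by nvme-cli's "nvme id-ctrl" output for anyone who would rather not compile anything, and the OS ultimately decides how much memory (if any) an HMB-capable drive actually gets.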

Even if the new WD Blue SN500 succeeds at offering far better performance than the current WD Blue SATA SSD, it will still be a big step backward in terms of capacity: the SATA product line ranges from 250GB to 2TB, but the SN500 will only be offered in 250GB and 500GB capacities. We hope that Western Digital has an upgraded WD Green in the works to keep affordable 1TB+ drives in their portfolio.

The MSRPs for the WD Blue SN500 are a few dollars higher than current retail pricing for the mainstream SATA SSDs they are intended to succeed. Western Digital has not mentioned when the SN500 will hit the shelves, but there will probably not be much delay after today's announcement, since this hardware has been shipping to OEMs for a year already.

12 Comments

  • deil - Thursday, March 14, 2019

    Well (16¢/GB) for NVME speeds, that's wow at least today.
  • MDD1963 - Monday, March 18, 2019

    well, for 'half-spec' NVME speeds and still 3x SATA spec, it's pretty darn inexpensive... Now we need mainboards to have 6x NVME slots instead of 6x SATA ports..
  • BPB - Thursday, March 14, 2019

    This seems like a nice choice for my older PC that needs a PCIe card to use an NVMe drive. The system would never take full advantage of more expensive drives. I may finally upgrade to an NVMe drive on that system now that I can get a reasonable size for cheap.
  • TelstarTOS - Thursday, March 14, 2019

    Where are 2TB drives, WD?
  • jabber - Thursday, March 14, 2019

    So would leaving say 10GB+ free for over provisioning help with these lesser performance driven drives?
  • Cellar Door - Thursday, March 14, 2019

    You don't need to do that - you won't see any real world difference unless you are running a professional workload. In which case you should be starting out with a different drive.
  • npz - Thursday, March 14, 2019

    I would assume that the use of NAND as a partial substitute for DRAM also has its drawbacks, since the controller has to juggle the metadata/flash translation table operations, etc., using resources that could otherwise have been dedicated to I/O only.
  • abufrejoval - Thursday, March 14, 2019

    I am a bit confused when you assert a DRAMless design yet speak of an "undisclosed amount of memory on-board" and categorically exclude a host memory buffer...

    I guess the controller would include a certain amount of RAM, more likely static because it takes an IBM p-Series CPU to mix DRAM and logic on a single die.

    I guess there could in fact be a PoP RAM chip and we couldn't tell from looking at the plastic housing, but could they afford that?

    That leaves embedded MRAM or ReRAM which I believe WD is working on, but would it already be included on this chip?

    And I wonder if a HMB-less design can actually be verified or where and how you can see what amount of host memory is actually being requested by an NVMe drive.

    BTW: How do they actually use that memory? The optimal performance would actually be achieved by having the firmware execute on the host CPU on its own DRAM, but for that the drive would have to upload the firmware, which is a huge security risk unless it were to be eBPF type code (hey, perhaps I should patent that!)

    What remains is PCI bus master access, which would explain how these drives may not be speed daemons.
  • Billy Tallis - Thursday, March 14, 2019

    When WD introduced the second generation WD Black SSD, they briefed the media on their controller architecture in general and answered some questions about the SN520. The controller ASIC includes SRAM buffers, but they don't disclose the exact capacity. It's probably tens of MB, comparable to the amount of memory used by HMB drives and far too small to be worth using a separate DRAM device. WD specifically stated that HMB was not used, and that they had sufficient memory on the controller itself to make using HMB unnecessary. (And even without such a statement, it's trivial to inspect HMB configuration from software, since the drive has to ask the OS to give it a buffer to use, and the OS gets to choose how much memory to give the SSD access to.)

    None of the above buffers have anything to do with executing SSD controller firmware; that's always 100% on-chip even for drives that have multi-GB DRAM on board. SSDs use discrete DRAM or HMB or (in this case) on-controller buffers to cache the flash translation layer's mappings between logical block addresses and physical NAND locations.
  • abufrejoval - Thursday, March 14, 2019

    Thanks for the feedback. Took the opportunity to find and read your piece on the HMB from last June.

    I’ve tested FusionIO drives when they came out with 160GB of SLC and in fact I still operate a 2.4TB eMLC unit in one of my machines (similar performance levels as a 970Pro, but “slightly” higher energy consumption).

    Those had a much “fatter” HMB design and in fact operated most of their "firmware" as OS drivers on the host, including all mapping tables. The controller FPGA would only run the "analog" parts, low level flash chip writing, reading and erasure with all their current adjustments, including also perhaps some of the ECC logic.

    Of course, you couldn’t boot these and on shutdown all these maps would be saved on a management portion of the device. But on a power failure they could be reconstructed from the write journal and full scans of status bits and translation info on the data blocks.

    That approach was ok for their data center use and it helped them attain performance levels that nobody else could match—at least at the time, because massive server CPUs are difficult to beat even with today’s embedded controllers.

    It also allowed for a higher performance “native” persistent block interface that eliminated most of the typical block layer overhead: Facebook is rumored to have motivated and used that interface directly for some years.

    NVMe has eliminated much of the original overhead yet following the same reasoning which puts smartNIC logic as eBPF into kernel space on Linux, you could argue that you could follow a similar approach for SSDs, where a kernel would load safe eBPF type code from the SSD to manage the translation layer, wear management and SLC to xLC write journal commits....

    Didn’t I even read about some Chinese DC startup doing that type of drive?

    With a split between CPU and RAM across PCIe x4, HMB seems borderline usable, especially since the buffers can both be denied and reclaimed by the host. Translation table accesses via bus master from the SSD controller, with all that arbitration overhead and bandwidth limitations… I doubt it scales.
