Dell & HPE Issue Updates to Fix 40K Hour Runtime Flaw in Enterprise SSDs
by Anton Shilov on March 27, 2020 4:00 PM EST

In a second SSD snafu in as many years, Dell and HPE have revealed that the two vendors have shipped enterprise drives with a critical firmware bug, one that will eventually cause data loss. The bug, seemingly related to an internal runtime counter in the SSDs, causes them to fail once they reach 40,000 hours of runtime, losing all data in the process. As a result, both companies have needed to issue firmware updates for their respective drives, as customers who have been running them 24/7 (or nearly as much) are starting to trigger the bug.
Ultimately, both issues, while announced and documented separately, seem to stem from the same basic flaw. HPE and Dell both used the same upstream supplier (believed to be SanDisk) for SSD controllers and firmware for certain, now-legacy, SSDs that the two computer makers sold. And with the oldest of these drives having reached 40,000 hours of runtime (4 years, 206 days, and 16 hours), this has led to the discovery of the firmware bug and the need to quickly patch it. To that end, both companies have begun rolling out firmware updates to affected customers.
As reported by Blocks & Files, the actual firmware bug seems to be a relatively simple off-by-one error that nonetheless has significant repercussions.
The fault fixed by the Dell EMC firmware concerns an Assert function which had a bad check to validate the value of a circular buffer’s index value. Instead of checking the maximum value as N, it checked for N-1. The fix corrects the assert check to use the maximum value as N.
Overall, Dell EMC shipped a number of the faulty SAS-12Gbps enterprise drives over the years, ranging in capacity from 200 GB to 1.6 TB. All of which will require the new D417 firmware update to avoid an untimely death at 40,000 hours.
Meanwhile, HPE shipped 800 GB and 1.6 TB drives using the faulty firmware. These drives were, in turn, used in numerous server and storage products, including HPE ProLiant, Synergy, Apollo 4200, Synergy Storage Modules, the D3000 Storage Enclosure, and StoreEasy 1000 Storage, and require HPE's firmware update to secure their stability.
As for the supplier of the faulty SSDs, while HPE declined to name its vendor, Dell EMC did reveal that the affected drives were made by SanDisk (now a part of Western Digital). Furthermore, based on an image of HPE’s MO1600JVYPR SSDs published by Blocks & Files, it would appear that HPE’s drives were also made by SanDisk. To that end, it is highly likely that the affected Dell EMC and HPE SSDs are essentially the same drives from the same maker.
Overall, this is the second time in less than a year that a major SSD runtime bug has been revealed. Late last year HPE ran into a similar issue at 32,768 hours with a different series of drives. So as SSDs are now reliable enough to be put into service for several years, we're going to start seeing the long-term impact of such a long service life.
Related Reading:
- Western Digital Introduces WD Gold Enterprise SSDs
- Western Digital Starts Sales of WD_Black P50 USB 3.2 Gen 2x2 SSDs
- Western Digital Ultrastar DC SS540 SAS SSDs: Up to 15.36 TB, Up to 3 DWPD
Sources: Blocks & Files, ZDNet
51 Comments
ken.c - Friday, March 27, 2020 - link
We lost a pair of mirrored drives in a mongodb server to this. They both just kicked the bucket at the same time. :)

olafgarten - Friday, March 27, 2020 - link
RAID doesn't help when all the drives fail simultaneously!

InTheMidstOfTheInBeforeCrowd - Saturday, March 28, 2020 - link
I agree. It would be rather fruitless for law enforcement to raid company premises in search of documents revealing illicit activities only to find the company's storage array(s) being well beyond their best-before date ;)

brontes - Saturday, March 28, 2020 - link
Crazy! Are you from the future?

> IMPORTANT: Due to the SSD failure not occurring until attaining 40,000 hours of operation and based on the dates these drives began shipping from HPE, these drives are NOT susceptible to failure until October 2020 at the earliest.
olafgarten - Saturday, March 28, 2020 - link
That is incorrect: according to the source link, the first drives were shipped in late 2015, and so could possibly start failing now. Any drive put into operation from September 5th 2015 would fail.

olafgarten - Saturday, March 28, 2020 - link
No edit facility, but there should be a 'today' at the end of the sentence.

InTheMidstOfTheInBeforeCrowd - Sunday, March 29, 2020 - link
Please read that blog article again. It's not exactly a "source". Note that the September 5, 2015 date mentioned is pure speculation, based on a blind assumption that a drive would have accrued 40,000 operational hours today. Which is likely a misinterpretation of the notice they got from SanDisk/WD, confusing the event of this notice being published with the event of actual drives failing...

InTheMidstOfTheInBeforeCrowd - Sunday, March 29, 2020 - link
Addendum: Also note that the RoHS conformity declarations for both the SDLTOCKR and SDLTOCKM series (check the supplier part number of the Dell/HPE SSDs...) were signed in June 2016, which would indicate that those SSDs were not sold in 2015 or earlier...

Gigaplex - Sunday, March 29, 2020 - link
This was found because drives started failing. If the first failure couldn't occur before October 2020, then they wouldn't have spotted it.

69369369 - Friday, March 27, 2020 - link
HDD Master Race!