
31 Comments


  • shabby - Saturday, July 15, 2017 - link

Out of stock at Newegg...
  • Hul8 - Saturday, July 15, 2017 - link

Article text consistently uses "DPWD" (instead of "DWPD").
  • Ryan Smith - Monday, July 17, 2017 - link

Thanks!
  • coder543 - Saturday, July 15, 2017 - link

    "DPWD"

I'm fairly certain this isn't the first time I've seen AnandTech make this mistake. I think someone's MacBook has learned the wrong spelling and is autocorrecting to it.

DWPD is Drive Writes Per Day. I have no idea what a DPWD is. Drive Per Write Day?
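For context, a DWPD rating ties to total endurance as TBW = DWPD × capacity × days of warranty. A minimal sketch in Python (the 1 DWPD and 5-year figures below are illustrative assumptions, not numbers from the article):

```python
def tbw_from_dwpd(dwpd, capacity_tb, warranty_years=5):
    """Total terabytes written implied by a DWPD rating over the warranty period."""
    return dwpd * capacity_tb * warranty_years * 365

# Hypothetical example: a 25 TB drive rated for 1 DWPD over a 5-year warranty
print(tbw_from_dwpd(1, 25))  # 45625 TB written
```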
  • ddriver - Sunday, July 16, 2017 - link

    "I have no idea what a DPWD is. Drive Per Write Day?"

    Yep, that's exactly what it means, as I have elaborated at least twice.

    So a 2 DPWD figure means two drives per write day, meaning that by the middle of the write day, your drive will be dead. The formula for calculating the lifetime estimate is 24 / DPWD, and gives you how many hours the drive will remain useful in a continuous write scenario.
  • SiSiX - Sunday, July 16, 2017 - link

    DPWD = Donuts Per Working Day? Data Purchased With Donuts? Drives Per Working Dog?
    Clearly we need more info on this new unit of measure :)
  • Ryan Smith - Monday, July 17, 2017 - link

DPWD = Documents Produced With Defects
  • BurntMyBacon - Tuesday, July 18, 2017 - link

Win!
  • jimjamjamie - Monday, July 17, 2017 - link

Never skip write day
  • Samus - Monday, July 17, 2017 - link

Someone's just been talking about DP'ing a lot.
  • MadAd - Saturday, July 15, 2017 - link

    "Since it is impossible to pack 100 or 200 chips into a 3.5” SSD"

Wow, I wonder how much capacity per chip that is; surely someone can make an educated guess at the NAND density in these things?
  • mdrejhon - Monday, July 17, 2017 - link

    > Since it is impossible to pack 100 or 200 chips into a 3.5” SSD

    On one motherboard, maybe.

    But a 3.5" enclosure can hold up to ~10 thin motherboards stacked, so you could just put 10 chips per motherboard. The trick is in only using thin components (like chips) where the motherboards are stacked.

    Obviously, cooling may be an issue, unless the drive has internal fans (now that would be fun).
  • petteyg359 - Monday, July 17, 2017 - link

I don't see how it is impossible. Assume a 3.5" form factor has room for at least four times the PCB area of a 2.5", maybe more depending on how you stack multiple PCBs. You could cram 24 or 32 chips (based on the size of chips I've seen in SSDs with the case removed) into a single 2.5" space if you can put the controller on a separate board, and that is just one side. Put chips on both sides, fill two boards, and you've got 96-128 chips with plenty of room to spare in that 3.5" chassis. Unless these Hynix chips have abnormally large surface area, I'm not seeing the impossibility here.
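The estimate above can be written out as arithmetic (the per-side chip counts are the commenter's guesses, not measured figures):

```python
# Commenter's assumptions: 24-32 chips per side of a 2.5"-sized PCB
# (controller moved to a separate board); two double-sided boards in a 3.5" bay.
for per_side in (24, 32):
    total = per_side * 2 * 2  # 2 sides per board, 2 boards
    print(per_side, "per side ->", total, "chips")
# 24 per side -> 96 chips; 32 per side -> 128 chips, matching the 96-128 range
```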
  • eSyr - Saturday, July 15, 2017 - link

    > Since it is impossible to pack 100 or 200 chips into a 3.5” SSD

    Looks like everyone forgot about Seagate's 60 TB SSD already.
  • ZeDestructor - Saturday, July 15, 2017 - link

    TLC vs eMLC, whatever that actually means these days.

    Also, actually for sale vs tech demo.
  • IndianaKrom - Saturday, July 15, 2017 - link

Article repeatedly says GB instead of TB.
  • ZeDestructor - Saturday, July 15, 2017 - link

    Now, hold on just ONE minute there..

    "The Viking UHC-Silo drives use planar eMLC NAND memory in custom packaging with raw NAND capacities of 25 TB and 50 TB, and consequently are currently the highest capacity SSDs available on the market."

    How the hell did they manage that using planar eMLC?! Every other SSD with similar density has been using 3D NAND!
  • gfody - Sunday, July 16, 2017 - link

I'm picturing ~150 microSDs soldered together in rows, all connected to SAS expanders and finally to some RAID controller.
  • ddriver - Sunday, July 16, 2017 - link

    But then again it won't be that slow.

    It is actually quite the achievement to have a drive with such a capacity perform that poorly, saturating only half of the SAS interface for reading and only 1/4 for writing.
  • ddriver - Sunday, July 16, 2017 - link

Oh, scratch that, I see it is just 6 Gbit SAS. But still, neither the read nor the write speed hits it.
  • BurntMyBacon - Tuesday, July 18, 2017 - link

I'm going to guess that it is a thermal issue rather than a raw performance issue. Packing that many dies in a 3.5" form factor is bound to make it hard to cool. It would make sense that they implemented an artificial sustained read/write limitation to prevent overheating. Won't know anything for sure unless we can get one in for review.
  • Kevin G - Saturday, July 15, 2017 - link

50 packages in a 3.5" form factor? That might be possible if they're using two PCBs and stacking. A single 3.5" PCB can hold a 6 by 4 array of packages on a side. That's 24 per side, and double that for the second side to get 48 on the primary PCB. The daughter PCB might be able to squeeze in a 6 by 5 array per side for another 60 packages. Grand total would be 108.

The real problem would be keeping everything cool, as that would be a very dense PCB, and while each NAND package doesn't consume much power, there are a lot of them.

I'm also kind of surprised that we're not seeing dual-port 12 Gbit SAS as an option. The raw number of packages would permit a massive amount of raw bandwidth. I would suggest NVMe, but that segment is centered around 2.5" drives.
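The package count above works out as follows (the grid sizes are the commenter's guesses about the board layout, not specs):

```python
# Primary PCB: a 6 x 4 grid of NAND packages per side, both sides populated.
primary = 6 * 4 * 2       # 48 packages
# Daughter PCB: a 6 x 5 grid per side, both sides populated.
daughter = 6 * 5 * 2      # 60 packages
print(primary + daughter)  # 108 packages total, as in the comment
```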
  • SiSiX - Sunday, July 16, 2017 - link

    High working bandwidth in massively cramped 3.5" form factor = HEAT BAD, HULK SMASH DATA, NO MORE DATA NO MORE HEAT, HULK GO SLEEP
Seriously though, it makes sense to limit the write speeds, especially for something like this. Still half again faster for writes than an HDD, but with what, 2-5 times the capacity? (And roughly twice as fast for reads.)
  • Kevin G - Sunday, July 16, 2017 - link

I don't disagree, but the hottest part would be the controller itself. The best option would be to put it on the underside of the PCB where it can easily come into direct contact with the case. Add a few fins to the case to increase surface area too. Not the best solution, but there aren't many other solutions out there.

Even if they don't go the 12 Gbit route, an option for a dual-ported version would be handy for failover, even if it couldn't work in an active-active scenario. Failover alone shouldn't cost too much more in terms of power consumption.
  • ET - Sunday, July 16, 2017 - link

Disappointed. Hoped for an article on Viking ships.
  • ads295 - Sunday, July 16, 2017 - link

Keep refreshing the page, you might find an "article" on them "From Around The Web"...
  • lilmoe - Monday, July 17, 2017 - link

Joke's on you, I was expecting a new range series.
  • J_E_D_70 - Tuesday, July 18, 2017 - link

Exactly. I was wondering why Viking ships would need SSDs. Without electricity, what good are they? The SSDs I mean, not the Viking ships.
  • mdw9604 - Monday, July 17, 2017 - link

When they are $4 per TB I'll be interested.
  • BurntMyBacon - Tuesday, July 18, 2017 - link

    I'll look forward to your comments once they become interesting in ... a decade or two(?).

Gotta pick me up one of them $100 25TB drives to replace the 250GB drive I picked up for $100 a few years back.
  • serendip - Monday, July 17, 2017 - link

The proof is in the eating - or the teardown, if someone's got $10k to throw away. That's crazy density for a 3.5" chassis; I'm wondering how they can cram so many chips and boards inside without needing big heat sinks on the outside.
