
  • Zorlac - Thursday, July 18, 2013 - link

    Wouldn't M.2 (NGFF) be the perfect motherboard connector for PCIe/NVMe SSDs?
  • IanCutress - Thursday, July 18, 2013 - link

    M.2 fits into the mSATA space, whereas SFF-8639 will be more like a SATA-type connector. For the SFF-8639 stuff, if you don't have the connector on the motherboard (which at the minute none do; perhaps we're waiting until the Haswell refresh or Broadwell), then a PCIe-to-SFF-8639 adapter would be needed.
  • danjw - Thursday, July 18, 2013 - link

    Actually, Asus shipped motherboards with M.2 connectors with the release of Haswell. They indicated that they were the only ones to do so. In particular, I know the Maximus VI Extreme and Gene both have these ports.
  • Streetwind - Thursday, July 18, 2013 - link

    Okay, just so I understand this correctly... NVMe becomes relevant for the enthusiast consumer with the advent of SATA Express drives?
  • istenrot - Thursday, July 18, 2013 - link

    The NVMe SSD connector is probably SATA Express.
  • hrrmph - Thursday, July 18, 2013 - link

    To add to that, here is the SFF-8639 connector spec (link below), but there's no mention of NVMe, so it looks like NVMe is purely a protocol.

    Also, the AnandTech article referenced by istenrot has a header that reads 'Trend, Two Connectors,' and the article further explains that the intended uses of the two new connectors are different.

    Putting it all together, it looks like there might be two new physical connectors: one for HDDs (SATA Express) and one for SSDs (SFF-8639), with the NVMe protocol being applied to the BIOS, OS, drive controller, and the drives themselves, with connections via cables using the SFF-8639 connector.

    No mention of NVMe on SATA Express yet?

    Again, there is no mention of NVMe in the SFF-8639 spec either:
  • iwod - Thursday, July 18, 2013 - link

    1. Just why would operating temperatures be so hot? A 400 MHz Cortex-R CPU doesn't seem powerful enough to generate that amount of heat.

    2. NVMe is something new I've never heard of before. Will this be part of the SATA Express specification? If not, why go with SATA Express anyway and not just jump directly to NVMe?
  • MrSpadge - Thursday, July 18, 2013 - link

    Either many drives stacked tightly together, or just other external heat sources. In the first case, slowing all the SSDs down a bit might help.
  • Kevin G - Thursday, July 18, 2013 - link

    The additional heat could come from the PCI-e block added to the controller. The chip is likely using an old process node (I'd guess 40 nm), and if they're pushing PCI-e 3.0 speeds, it could get reasonably warm. The more likely reason for the thermal guard is use cases in mobile environments where cooling is at an extreme premium. Throttling won't likely happen, as consumer usage scenarios are rather bursty in nature: transactions are not large enough to keep the controller perpetually active enough to need throttling.
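    Throttling of the kind described here can be sketched as a simple threshold check. The temperatures and speeds below are illustrative assumptions, not Samsung's actual firmware values.

```python
# Minimal sketch of threshold-based thermal throttling ("thermal guard").
# All numbers are made-up assumptions for illustration only.

THROTTLE_TEMP_C = 70.0    # assumed trip point
NORMAL_MBPS = 400         # assumed full write speed
THROTTLED_MBPS = 100      # assumed reduced write speed

def write_speed(controller_temp_c: float) -> int:
    """Return the allowed write speed for the current controller temperature."""
    if controller_temp_c >= THROTTLE_TEMP_C:
        return THROTTLED_MBPS
    return NORMAL_MBPS

print(write_speed(45.0))   # bursty consumer load: full speed
print(write_speed(72.5))   # sustained heat: throttled
```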

    NVMe is complementary to SATA Express. The best way to think of it is that NVMe is a new logical protocol for connecting a drive over PCI-e, while SATA Express defines more of the physical portion (connector and cabling). The one logical aspect SATA Express provides is AHCI encapsulation, so that a legacy OS and firmware can use a new SSD without having to worry about an NVMe-compatible driver.

    NVMe doesn't require SATA Express connections, as there are full-sized PCI-e cards in the works that will use this technology. These new controllers are aiming for full 16x-lane PCI-e 3.0 bandwidth. These full-sized NVMe cards do not have to provide the AHCI encapsulation that SATA Express does.
  • Sabresiberian - Sunday, July 21, 2013 - link

    Because some notebooks are so poorly designed that heat from other components could drive the temperature seen by the SSD beyond any reasonable limit. Sad, but true.

    (To be a bit more fair, it is quite possible in some parts of the world that a notebook could be used in ambient temperatures approaching 40C already, so headroom to reach 70C is far less than for most of us in our usage scenarios.)
  • MrSpadge - Thursday, July 18, 2013 - link

    Yay for TurboWrite! Finally starting to make smart use of that NAND :)
    This could also make 64 GB drives for caches viable again.
  • Paul Tarnowski - Thursday, July 18, 2013 - link

    Heh. Once I sounded out NVMe, I was completely unable to focus on the actual article.

  • Solid State Brain - Thursday, July 18, 2013 - link

    If the drive is using only 1 bit out of 3 bits, then wouldn't write amplification be four times higher than normal when writing data to the pseudo-SLC buffer? There might be an actual endurance concern here, if users are allowed to fill it even just a few times a day.
  • DanNeely - Thursday, July 18, 2013 - link

    Endurance should be SLC-like for the buffer. The reason MLC and TLC have lower endurance numbers is that when you need to split the (analog) stored value into 4 or 8 levels instead of 2, you can tolerate far less drift before the value becomes ambiguous.
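    The drift-tolerance argument can be made concrete with a toy calculation, assuming a normalized 0..1 readout and evenly spaced levels (real NAND thresholds are not this uniform): the decision window per stored level shrinks as bits per cell grow.

```python
def read_window(bits_per_cell: int) -> float:
    """Width of each decision window for an idealized 0..1 cell readout."""
    levels = 2 ** bits_per_cell    # SLC = 2, MLC = 4, TLC = 8 levels
    return 1.0 / levels

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3)]:
    print(f"{name}: tolerates drift within a {read_window(bits):.3f}-wide window")
```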
  • Solid State Brain - Thursday, July 18, 2013 - link

    So in pseudo-SLC mode they're still using all 3 bits? While it makes sense that if they're only using two states (all bits on, all bits off) this mode would have much higher tolerance to errors (and therefore longer endurance), I don't understand how it can be faster this way, since every additional bit in a cell takes time to write. Is setting all the bits in a cell to the same value at once faster in NAND memory?
  • Sugardaddy - Friday, July 19, 2013 - link

    There aren't really any bits. Like DanNeely said, the cell returns an analog value when read. Let's assume that's a value from 0 to 1.
    For SLC, you would treat anything below 0.5 as 0 and anything above as 1. So even if the cell gets old and returns 0.8 instead of 1, or 0.2 instead of 0, you don't have any trouble.

    For 3-bit TLC, the cell's return value is treated as:
    0.000 - 0.125 = 0
    0.125 - 0.250 = 1
    ...
    0.750 - 0.875 = 6
    0.875 - 1.000 = 7
    to store 2³ = 8 different values. Now obviously, if the cell returns 0.2 when it's supposed to return 0, it can't be used anymore.

    By using it in SLC mode, they simply say they use the cells from their TLC process but read anything below 0.5 as 0 and anything above as 1, (presumably) getting the speeds and reliability of SLC.
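    This thresholding scheme can be written out as a tiny decoder. It assumes the same idealized, evenly spaced thresholds as in the description above; real controllers calibrate thresholds per block and adjust them as cells wear.

```python
def decode(analog: float, bits_per_cell: int) -> int:
    """Quantize a normalized 0..1 cell readout into its stored value."""
    levels = 2 ** bits_per_cell
    return min(int(analog * levels), levels - 1)   # clamp the top edge

# A worn cell that returns 0.2 when 0 was stored:
print(decode(0.2, 1))   # SLC mode: 0, still read correctly
print(decode(0.2, 3))   # TLC mode: 1, a misread
```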
  • Solid State Brain - Friday, July 19, 2013 - link

    I'm aware that when a transistor outputs a voltage it's actually an analog value that has to be "interpreted". What isn't clear to me is that if TLC cells really work as a single "tank" rather than smaller ones (one for each "bit"), how would using them in SLC mode be any faster than using them normally (TLC mode)?
  • Jaybus - Friday, July 19, 2013 - link

    All modes use the exact same physical cell structure, a floating-gate MOSFET transistor. The floating gate is insulated by a SiO2 layer. Electrons cannot flow in or out of the gate unless an electric field is established that forces (tunnels) electrons through the SiO2 and into or out of the floating gate. A write tunnels electrons through the SiO2 insulating layer from the N-channel into the floating gate. Since a write can only add charge to the floating gate, there is also an erase operation that must be performed before a write. The erase operation tunnels electrons through the SiO2 layer from the floating gate into the P-channel.

    This tunneling process is limited. Some of the SiO2 molecules are altered by the tunneling causing them to trap electrons in the molecular electron cloud rather than letting them pass through. The buildup of negative charge in the SiO2 insulating layer eventually means that the charge stored in the floating gate by a write operation decreases with each P/E cycle. It wears out.

    SLC writes (programs) a cell to one of two levels, high voltage or low voltage. MLC programs it to one of 4 voltage levels, TLC to one of 8 voltage levels.

    The only difference is how much charge is forced into the floating gate during a write and how the corresponding voltage is interpreted during a read. Put simply, if the controller "treats" the cell as SLC, then it in fact, by gosh, IS SLC.

    I think the reason for the confusion is that they say they are "treating" TLC as SLC. They obviously wanted to make clear that there is not a separate cache chip. There is only one big NAND flash array, most of which they "treat" as TLC and some of which they "treat" as an SLC cache.
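    The wear mechanism described above can be caricatured in a toy model: each P/E cycle shifts the effective readout a little, and the mode with wider decision windows (SLC) survives more cycles before a programmed level is misread. The drift rate below is an arbitrary assumption, not from any datasheet.

```python
DRIFT_PER_CYCLE = 0.0001   # assumed readout shift per P/E cycle (illustrative)

def cycles_until_misread(bits_per_cell: int) -> int:
    """Toy estimate of P/E cycles before drift crosses a decision boundary."""
    window = 1.0 / (2 ** bits_per_cell)   # decision window width
    margin = window / 2                   # drift allowed from the window center
    return round(margin / DRIFT_PER_CYCLE)

print(cycles_until_misread(1))   # SLC mode lasts longer in this toy model
print(cycles_until_misread(3))   # TLC mode is misread after far fewer cycles
```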
  • Mr Perfect - Thursday, July 18, 2013 - link

    Is the TurboWrite cache in addition to the storage flash, or part of it? In other words, if the 120GB drive has 9GB of cache, do users now have 111GB of user flash and 9GB of cache, or 120GB of user flash and an additional 9GB just for caching?
  • Solid State Brain - Thursday, July 18, 2013 - link

    The 120 GB drive is supposed to have 3 GB of TurboWrite cache, which should use 12 GB of TLC NAND memory (as SLC = 2^1 = 2 states, TLC/3-bit MLC = 2^3 = 8 states; 8/2 = 4, not 3). This means there's either an additional TLC NAND die somewhere on the drive's PCB dedicated to the buffer, or that the trimmed spare capacity is used instead. If it's the latter (the likely case in my opinion), then if the drive doesn't have enough trimmed free space (or a combination of trimmed space and overprovisioning space), the usable TurboWrite cache would be lower than 3 GB.
  • speculatrix - Monday, July 22, 2013 - link

    Anandtech wrote: "the amount of storage in the buffer decreases by a factor three"

    Surely using three-bit-per-cell flash means your capacity is reduced to one eighth?
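    One way to settle the arithmetic: usable capacity scales with the bits per cell (the log2 of the voltage-level count), not with the level count itself. That makes TLC-as-SLC a factor-of-three reduction, as the article states; a factor of eight would only follow if capacity scaled directly with the 8 levels.

```python
import math

def bits_per_cell(levels: int) -> int:
    """Bits stored per cell, given its number of voltage levels."""
    return int(math.log2(levels))

tlc = bits_per_cell(8)   # 3 bits per cell
slc = bits_per_cell(2)   # 1 bit per cell
print("capacity reduction factor:", tlc // slc)   # 3, not 8
```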
