22 Comments

  • kensiko - Friday, January 06, 2012 - link

    LOL, it's like the current Marvell SATA 6Gbps controller: it doesn't perform well at all.
  • r3loaded - Friday, January 06, 2012 - link

    You'd have thought they'd at least use PCIe 3.0 to take advantage of the 1GB/s transfer speed on that single lane.
  • therealnickdanger - Friday, January 06, 2012 - link

    I don't think this means the whole device is limited to x1. If I understand the article properly, EACH controller uses an x1 connection. Instead of having an x16 card with SATA connections to each NAND module, they could have an x16 card with 16 x1 connections to NAND modules.

    "the first platform to use it will feature an array of these controllers behind a PCIe switch"

    I read this to mean that there will be multiple x1 controllers working together for a single drive, likely configurable into x4, x8, or even x16 arrays. If each controller addresses its own NAND before being processed in a RAID-like fashion, then this could be one beast of a device.

    I may be optimistic and projecting a little here, but anything less than that seems like a complete step backward.
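If the x1 controllers really are pooled behind a PCIe switch in a RAID-0-like fashion, the striping itself is simple to picture. A toy sketch in Python (the `stripe` function and the 4 KB chunk size are illustrative assumptions, not OCZ's actual firmware behavior):

```python
def stripe(data: bytes, controllers: int, chunk: int = 4096):
    """Distribute data round-robin across N controllers, RAID-0 style.
    A toy model of what a PCIe switch plus firmware would do in
    hardware: each controller then writes its share to its own NAND."""
    buffers = [bytearray() for _ in range(controllers)]
    for i in range(0, len(data), chunk):
        buffers[(i // chunk) % controllers] += data[i:i + chunk]
    return buffers

# 32 KB spread over 4 x1 controllers: each gets 8 KB to write in parallel.
parts = stripe(b"x" * 32768, controllers=4)
print([len(p) for p in parts])  # [8192, 8192, 8192, 8192]
```

In this model the writes to each controller's NAND can proceed concurrently, which is where the "beast of a device" scaling would come from.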
  • therealnickdanger - Friday, January 06, 2012 - link

    Looks like I was correct - completely scalable from x1 to x16:

    http://www.storagereview.com/ocz_zdrive_r5_kiliman...
  • Death666Angel - Friday, January 06, 2012 - link

    The x1 limitation is indeed strange. And I'm not in the market for such a thing anyway. But it's an interesting product, and somewhere down the line we'll maybe see 50 to 100GB of NAND on the mainboard for the OS with such a device. :-)
  • xdrol - Friday, January 06, 2012 - link

    One module uses an x1 interface.

    16 modules use an x16 interface.
  • Death666Angel - Saturday, January 07, 2012 - link

    Still seems counterintuitive to me. Why would anyone ever need x1 when that can already be saturated by SATA3? All this does is add more PCB space and silicon cost. Why not go x4 already? Oh well, I'm no chip designer...
  • jjj - Friday, January 06, 2012 - link

    Marvell was talking recently about PCIe-based SSDs in Ultrabooks next year; maybe it's related to the x1 issue, I'm just not sure how yet (no idea how many PCIe lanes IB Ultrabooks will have).
  • jjj - Friday, January 06, 2012 - link

    PS: next year as in this year
  • jjj - Friday, January 06, 2012 - link

    The Marvell press release has a bit more info: http://investor.marvell.com/phoenix.zhtml?c=120802...
  • NickB. - Friday, January 06, 2012 - link

    Doesn't a single PCIe 2.0 lane max out at 500MB (bytes, not bits) per second in each direction... and doesn't SATA 3 max out at about 600MB/s total? Seems *a little* apples/oranges to me, but anyway...

    The thing I'm curious about is latency. For a normal PCIe SSD to get between the processor and the flash, you go through the onboard chipset to the PCIe chip on the SSD, which then passes to the SATA-based controller. Sure, savings for the manufacturers in component count will be a bonus, and this setup should use less power, but there should be some improvement in latency as well.

    Also, isn't there a PCIe interface directly on the Sandy and Ivy Bridge chips? In most of the architecture diagrams I've seen, it shows up as an x16 interface for graphics cards, but is there anything keeping someone from using that for an SSD, like, say, in an Ultrabook that uses the integrated graphics anyway?
  • xdrol - Friday, January 06, 2012 - link

    OCZ's previous PCIe SSD cards had a PCIe x4 interface; they could build such a card with 4 modules. That's a total of 2 GB/s, versus SATA3's 600 MB/s.

    And nothing stops them from making an x16 card (8 GB/s).

    See fig. 2 at the bottom of the article: 8 modules on a PCIe x8 card.
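The lane math here is easy to check: PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding, which works out to roughly 500 MB/s of usable bandwidth per direction per lane. A quick sketch (the function name is mine, and the figures are the nominal rates, not measured throughput):

```python
# Per-direction usable bandwidth for PCIe 2.0: 5 GT/s per lane with
# 8b/10b encoding works out to roughly 500 MB/s per lane.
PCIE2_MB_PER_LANE = 500
SATA3_MB = 600  # SATA 6Gbps, also 8b/10b encoded

def pcie2_bandwidth_mb(lanes: int) -> int:
    """Aggregate per-direction bandwidth of a PCIe 2.0 link, in MB/s."""
    return lanes * PCIE2_MB_PER_LANE

for lanes in (1, 4, 8, 16):
    bw = pcie2_bandwidth_mb(lanes)
    print(f"x{lanes}: {bw} MB/s ({bw / SATA3_MB:.1f}x SATA3)")
```

So a single x1 link is actually a bit slower than SATA3's ceiling, while x4 and x16 cards leave it far behind.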
  • NickB. - Friday, January 06, 2012 - link

    Yup, I think that might have been added after I posted... or I just missed it :)

    Either way, this is interesting. Could be an alternative to mSATA... maybe?

    Maybe it's wrong of me, but for some reason I have it in my head that this could allow for the disk equivalent of moving memory management onto the CPU from the motherboard chipset. CPU->SSD controller directly via PCIe for up to 128GB... or CPU->PCIe switch->SSD controller for over 128GB. I keep thinking that this could give you a screaming fast machine because of the latency improvements... but if not, at the very least it could allow for cheaper SSD-based machines with the storage built into the motherboard.
  • ericloewe - Friday, January 06, 2012 - link

    Apple uses those PCIe lanes for graphics (x8) and Thunderbolt controllers (x4 per controller, allowing up to two, as in the big iMac).

    So, I'd say they can be used for anything, which makes sense intuitively. Graphics benefit a lot because they need high bandwidth and very low latency, but storage should benefit just as much.
  • MGSsancho - Saturday, January 07, 2012 - link

    Plus it allows for much more flexibility in configuration for many companies. Not every product shipped will saturate all lanes 100% of the time. A server might need those PCIe lanes for storage and networking while forgoing display adapters.
  • Matysek - Friday, January 06, 2012 - link

    We're doomed.
  • FunBunny2 - Friday, January 06, 2012 - link

    What do you mean "we", Kemosabe?
  • FunBunny2 - Friday, January 06, 2012 - link

    My recollection of PCIe SSDs (at least, the Fusion-io variety) was that the card was (to some delta) just raw NAND, with a software "controller" loaded into main memory and executed by the CPU. This offload was (still is?) a source of controversy.

    Anand: how about a refresher on how PCIe SSDs are implemented? Frankly, this doesn't make sense.
  • nubie - Friday, January 06, 2012 - link

    At a consumer level this makes perfect sense.

    Extra lanes on an entry level motherboard are going to be x1 only, same for laptops on Mini-PCIe.

    I am hoping there will be x1 and Mini-PCIe versions; you could make some very svelte ITX or wearable PCs if it were in that form factor.

    Don't forget that small and quick is sometimes a goal, not just blazing speed above all else. This is a good baby step, you have to start somewhere, KISS is a good principle.
  • MGSsancho - Saturday, January 07, 2012 - link

    Exactly. Or mobos with x4 lanes.
  • toyotabedzrock - Friday, January 06, 2012 - link

    And how are these individual controllers pooled together as a single storage medium?

    If there is no hardware to handle this, the CPU will end up limiting it a bit.
  • iamkyle - Monday, February 13, 2012 - link

    Marvell's involvement already puts a sour taste in my mouth.

    If you have one of their SATA3 controllers on your motherboard, you'll know why.
