
  • colinstu - Saturday, August 02, 2014 - link

    One of the stranger-looking things I've seen. I don't see how sticking it on the RAM module is any better than just sticking it on the motherboard itself, unless they kept it no taller than a normal module.

    Love the transition to SSDs and how main system RAM is growing in size. It's only a matter of time before RAM and SSDs are one and attached very close to the CPU (...and ultimately... completely inside the processor itself). Quad-core with HT, clocked at 4GHz, with 64GB of RAM and 512GB of SSD all on a single chip... it sounds like a long shot, but it'll happen some day...
  • tygrus - Sunday, August 03, 2014 - link

    I'd be happy with a system where each plug-in module had 1 or 2 CPU cores, # GPU cores, 4 to 16 GB RAM, and between 100 and 512 GB of SSD. Additional expansion ports handle the network, video output (no GPU cores: just a raw feed from other modules, or a framebuffer?), sound etc. Distributed file system and distributed computing with a switch fabric between modules and a unified OS view of the hardware. CPU and GPU cores are closer to RAM. Cool the system with a tunnel of air across low-profile heatsinks. More modules = more CPU/GPU/RAM/SSD.
  • kr1s69 - Saturday, August 02, 2014 - link

    I don't get it. Why would I want ultra-compact SSDs but standard desktop-sized RAM? Surely if size is that important, the PC would be using SO-DIMMs?
  • Lonyo - Saturday, August 02, 2014 - link

    But your board height is limited by your heatsink Z-height as well. It would work for non-Thin-ITX motherboards, where the height limit is the height of a standard heatsink/fan but you want additional storage without having slots for it.

    If your motherboard only has one mSATA slot, this lets you add more drives in a case that doesn't support many 2.5"/3.5" drives, in theory; otherwise it's a question of waiting for someone to make a case suited to such a design.
  • Laststop311 - Saturday, August 02, 2014 - link

    "no mini-ITX motherboards support 2280 M.2 drives without using an M.2 to SATA connector"

    This is not correct! The ASUS Z97 mini-ITX board has a native M.2 slot on the back of the motherboard, and it's even a 10 Gbit/s PCIe interface.
  • MadMan007 - Saturday, August 02, 2014 - link

    The statement is correct because it is specifically talking about M.2 2280, the longest common M.2 length.
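For readers puzzling over the "2280" designation in the comments above: M.2 size codes simply encode the module's width and then its length in millimetres, which a quick sketch can decode (the function name is just illustrative):

```python
# Decode an M.2 size code such as "2280": the first two digits are the
# card width in mm, the remaining digits its length in mm.
def decode_m2(code: str) -> tuple[int, int]:
    return int(code[:2]), int(code[2:])

for code in ("2242", "2260", "2280", "22110"):
    width, length = decode_m2(code)
    print(f"M.2 {code}: {width} mm wide, {length} mm long")
```

So 2280 is 22 mm x 80 mm, while 22110 (22 mm x 110 mm) is longer still but rarely seen on consumer boards.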
  • Ian Cutress - Saturday, August 02, 2014 - link

    Ah, true, I had missed that. But putting M.2 on the rear requires extra routing and preferably another couple of PCB layers, which adds work and takes the motherboard out of the cheap price range.
  • savagemike - Saturday, August 02, 2014 - link

    They should have added a jack to charge my phone too.
    Seriously though, this will come down to the price delta. If it's cheap to add, then some people might say 'why not'; otherwise it goes nowhere. Clever as it might be, I don't know that it's solving some pressing issue for a lot of people. Not sure about the CFast pricing, but M.2 is still kind of rare and expensive compared to other options. At least the last time I checked...
  • Stephen Barrett - Sunday, August 03, 2014 - link

    Wow, that is unique.
  • Kevin G - Monday, August 04, 2014 - link

    Except I've seen the basic principle before.
  • zodiacfml - Sunday, August 03, 2014 - link

    Disappointed. I thought this was an SSD on a DIMM.
  • shabby - Sunday, August 03, 2014 - link

    Same here, totally let down.
  • jamescox - Sunday, August 03, 2014 - link

    It seems like we should have an M.2 connector that sits perpendicular to the motherboard rather than parallel, if this is actually desirable. It would look almost like a DIMM in that case. Technically, motherboard manufacturers could just add a riser card with the M.2 slot on it if they want to conserve horizontal board space. This solution seems strange since you still need to connect a SATA cable to the top of it, which takes even more Z-axis space.

    The M.2 connector mostly sits flat since it was meant for mobile use. For desktop, you can use SATA Express or an actual PCI Express card. It doesn't seem like a half-height PCI Express card would be much taller than this combined module, but it would take an expansion slot, which you may not have in a SFF system. Personally, I wouldn't mind a low-profile (Z-axis) form factor rather than the current small form factor (small X/Y axes). You can get low-profile boards, but these are performance-limited. I would want 4 (SO-DIMM) memory slots, M.2, and a way to mount an actual high-performance video card (preferably in the same plane as the motherboard, without using a riser card).
  • Kevin G - Monday, August 04, 2014 - link

    The catch is that the longest M.2 drives may be too tall for some mini-ITX cases. Of course, this varies on a case-by-case basis.

    But yes, there should be more vertically aligned M.2 slots.
  • HollyDOL - Monday, August 04, 2014 - link

    Unless it is a very cheap solution suitable for the average office, I can't see any point in this. When the SSD wears out, which it will, you have to buy the whole kit again, including the completely healthy RAM (given it's post-warranty)?
    Personally, I am against this over-integration at all costs, and especially strongly against mixing components with several levels of difference between their designed lifetimes. For home use, or for those of us who like to play with hardware, there won't be much difference, since we'll buy a new component much earlier than the old one dies (and we wouldn't buy such a kit except for very specific purposes anyway), but for offices? The productive lifetime of a PC is much longer. Honestly, anything built on Core 2 or later will still easily suffice, and that's talking about an 8-year-old machine (if I remember correctly). I strongly doubt this SSD could live 8 years if we built new machines based on it now.
    Integration, and getting higher performance thanks to it, is nice, but it is slowly turning buildable PCs into monoliths without upgrade possibilities, wasting more and more resources :-(
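The wear-out worry above can be put in rough numbers with a back-of-the-envelope endurance estimate. All figures here are illustrative assumptions for a small budget drive under a light office workload, not specs for this product:

```python
# Back-of-the-envelope SSD lifetime: rated endurance divided by daily writes.
# Both figures below are illustrative assumptions, not vendor specifications.
tbw = 72              # rated endurance, terabytes written (typical small drive)
daily_writes_gb = 20  # assumed light office workload, GB written per day

lifetime_years = (tbw * 1000) / daily_writes_gb / 365
print(f"Estimated write-endurance lifetime: {lifetime_years:.1f} years")
```

Under those assumptions the drive clears the 8-year mark, but double the daily writes or halve the endurance rating and it doesn't.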
  • fluxtatic - Tuesday, August 05, 2014 - link

    I don't think it's like that. The DIMMs have M.2 slots, not built-in M.2 SSDs. So when one or the other dies, replace the dead component and move on.

    Then again, I think this is a terrible idea. Unless the other RAM OEMs pick this up and standardize on it, your choice for RAM is Apacer, whoever the hell that is.

    This looks like a solution in search of a problem to me. Outside of some very specific one-off builds, I just don't see a place for this in the market.
  • zodiacsoulmate - Tuesday, August 05, 2014 - link

    Have a question:
    If we can reach the top speed of PCIe M.2 SSDs and put 5 in RAID, will it replace RAM?
    At 15 GB/s, I mean, why not just use SSD for RAM?
  • HollyDOL - Wednesday, August 06, 2014 - link

    That's talking about maximum speed... but what RAM needs is fast response times. For comparison, imagine electric current... voltage (U) being bandwidth but current (A) being responsiveness...
    Or the flow of a river... the Amazon might be the biggest river in the world, but is it the fastest? Same here... your RAIDed SSD is just the Amazon compared to RAM in terms of latency. (And yes, an HDD here would be a stagnating, rotting lake :-))
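The bandwidth-vs-latency point in this exchange can be made concrete with a toy transfer-time model: time = access latency + size / bandwidth. The latency and bandwidth figures below are rough assumptions for DRAM versus an NVMe-class SSD RAID, not measurements:

```python
# Toy model: transfer time = access latency + size / bandwidth.
# For a tiny 64-byte access, latency dominates, so RAM wins by orders of
# magnitude even against a hypothetical 15 GB/s SSD RAID. All figures assumed.
def transfer_ns(size_bytes: float, latency_ns: float, bw_gbs: float) -> float:
    return latency_ns + size_bytes / (bw_gbs * 1e9) * 1e9

ram = transfer_ns(64, latency_ns=100, bw_gbs=25)       # roughly DRAM-like
ssd = transfer_ns(64, latency_ns=100_000, bw_gbs=15)   # roughly SSD-RAID-like
print(f"64 B from RAM: {ram:.0f} ns, from SSD: {ssd:.0f} ns")
```

Even with the SSD's huge bandwidth, the small access is hundreds of times slower, which is exactly the Amazon-river point: width of the pipe doesn't help when each sip takes ages to start.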
