26 Comments

  • SirKnobsworth - Thursday, August 14, 2014 - link

    I'm curious why they would route the 4 M.2 PCIe lanes from the CPU rather than use the abundant lanes on the PCH (there are 6 open lanes as it stands). It might give a slight performance boost to the SSD but it still seems like an odd choice.
  • Smuggy - Thursday, August 14, 2014 - link

    Yeah, I was a little confused by the lane routing, so the comment "Note that when the M.2 slot is occupied, the PCIe slot will reduce to PCIe 3.0 x8" means that the M.2 is connecting to the CPU directly, right? So are the other lanes used up by the wifi module, which can't be disabled to free them up? Or something else entirely...
  • dakishimesan - Thursday, August 14, 2014 - link

    The PCH only supplies PCIe gen 2 lanes. Only the CPU supplies gen 3. Not sure if gen 3 is absolutely necessary for SSDs though.
  • dakishimesan - Thursday, August 14, 2014 - link

    http://www.intel.com/content/www/us/en/chipsets/pe...
  • dakishimesan - Thursday, August 14, 2014 - link

    for reference:
    PCIe gen1 x4 = 1000 MB/s
    PCIe gen2 x4 = 2000 MB/s
    PCIe gen3 x4 = 4000 MB/s ... overkill, possibly. Those new Intel PCIe SSDs can hit ~1500 MB/s burst, still well within gen2 x4, for now.
  • Kevin G - Thursday, August 14, 2014 - link

    The key phrase is 'for now'. It wouldn't surprise me if the next generation of SSD controllers can go beyond 2000 MB/s, thus making PCIe 3.0 a requirement.
  • JeremyAU - Friday, August 15, 2014 - link

    Yes, 3D NAND is here and now. SATA won't have enough bandwidth to cope with the next gen of SSDs, hence the M.2 PCIe and SATA Express specifications.
  • JeremyAU - Friday, August 15, 2014 - link

    As PCIe is considered a 'serial' interface, shouldn't those measurements be Mbits, not MBytes? Huge difference in throughput.
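For what it's worth, the quoted figures do come out in megabytes once link encoding is accounted for. A quick sketch using the standard PCIe line rates (nothing here is specific to this board):

```python
# Sanity check of the per-generation figures above, in bytes.
# PCIe quotes a raw line rate in GT/s; gen1/gen2 use 8b/10b
# encoding (8 data bits per 10 line bits), gen3 uses 128b/130b.

def lane_mb_per_s(gt_per_s, data_bits, line_bits):
    """Effective payload bandwidth of one lane in MB/s."""
    bits_per_s = gt_per_s * 1e9 * data_bits / line_bits
    return bits_per_s / 8 / 1e6

gen1 = lane_mb_per_s(2.5, 8, 10)     # 250 MB/s per lane
gen2 = lane_mb_per_s(5.0, 8, 10)     # 500 MB/s per lane
gen3 = lane_mb_per_s(8.0, 128, 130)  # ~985 MB/s per lane

for name, lane in [("gen1", gen1), ("gen2", gen2), ("gen3", gen3)]:
    print(f"{name} x4 ≈ {4 * lane:.0f} MB/s")
```

Gen1/gen2 lose 20% to 8b/10b encoding, while gen3's 128b/130b overhead is under 2% — which is why a real gen3 x4 link tops out slightly below the round 4000 MB/s figure.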
  • SirKnobsworth - Thursday, August 14, 2014 - link

    The wired and wireless networking are run off the PCH, which has 8 PCIe 2.0 lanes.
  • Morawka - Thursday, August 14, 2014 - link

    I applaud ASUS for the M.2 lane choices. This board is obviously only going to host one GPU, and last gen they had M.2 on an x2 link. This is the fastest M.2 implementation on the market currently.

    It all makes sense when you consider this is a mini-ITX board that will have, at most, one GPU, and even a Titan will not be hampered by PCIe 3.0 x8.
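The x8 point holds up numerically; a rough sketch with standard PCIe line rates (assumptions mine, nothing measured from this board):

```python
# A PCIe 3.0 lane runs at 8 GT/s with 128b/130b encoding, so an x8
# gen3 link offers roughly the same bandwidth as a full x16 gen2
# link (5 GT/s, 8b/10b) — which current single GPUs barely saturate.

def link_gb_per_s(lanes, gt_per_s, data_bits, line_bits):
    """Effective one-direction link bandwidth in GB/s."""
    return lanes * gt_per_s * (data_bits / line_bits) / 8

gen3_x8 = link_gb_per_s(8, 8.0, 128, 130)  # ~7.9 GB/s
gen2_x16 = link_gb_per_s(16, 5.0, 8, 10)   # 8.0 GB/s
print(f"gen3 x8 ≈ {gen3_x8:.1f} GB/s, gen2 x16 = {gen2_x16:.1f} GB/s")
```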
  • owan - Thursday, August 14, 2014 - link

    At the same time, there really isn't much of a need for the full 16 PCIe 3.0 lanes on the primary PEG slot. With a board this small, with this much going on, trace routing may have been a real factor as well.
  • bebimbap - Thursday, August 14, 2014 - link

    I always felt M.2 on gen 3 would be best served on an ITX platform where you use the back side of the motherboard, like ASRock did; then you don't need space in your case for the SSD, or the mounts that go along with it. I almost feel the memory should go on the back side of the motherboard too. It could be closer to the socket, but I'm not sure how much of an engineering challenge that would be, especially since most DIMMs are populated on both sides. Things like LAN chips or the ROG management chips could go on the back side as well. This would free up space for more stuff, like dual LAN.
  • iTzSnypah - Thursday, August 14, 2014 - link

    I don't see why ASUS didn't put the RAM slots right above the PCIe slot. Seems like a much better use of space.
  • Ian Cutress - Thursday, August 14, 2014 - link

    The issue here is the specifications for the data lines. If you start moving the DRAM slots, you then have to reroute the signal lines from their pins (which might be on the other side of the CPU), which influences the noise, performance and margins for overclocking as they pass by other data and power signal lines. The more you adjust where the pins are, the more you break compatibility if you don't compensate, and then re-compensate for each microcode update. This is why Intel has a memory support limit (e.g. DDR3-1600): anything beyond that they don't cater for, and it's why OEM motherboards typically only support the memory frequency listed by Intel. ASUS do a fair amount of simulation for DRAM support and overclocking, but for something that significant you either break support (and people complain that random Chinese OEM DRAM doesn't work) or have to wait until Intel makes the change/adjusts the next platform.
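A back-of-envelope illustration of how tight those routing margins are, using ballpark FR4 propagation figures rather than ASUS's actual design rules:

```python
# Rough skew budget for DDR3 data lines. On FR4, signals propagate
# at very roughly 6-7 ps per mm of trace, while a DDR3-1600 bit
# lasts only 625 ps. Illustrative ballpark figures only.

PS_PER_MM = 6.5            # assumed FR4 propagation delay
BIT_TIME_PS = 1e6 / 1600   # DDR3-1600: 1600 Mbit/s per pin -> 625 ps/bit

for mismatch_mm in (1, 5, 10):
    skew_ps = mismatch_mm * PS_PER_MM
    print(f"{mismatch_mm:2d} mm trace mismatch -> {skew_ps:4.1f} ps skew "
          f"({100 * skew_ps / BIT_TIME_PS:.0f}% of a bit time)")
```

Even a centimetre of added mismatch eats roughly a tenth of the bit window before any noise or crosstalk is considered, which is why relocating slots means requalifying the whole memory subsystem.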
  • ZeDestructor - Thursday, August 14, 2014 - link

    Interestingly though, why not use SO-DIMMs rather than full-length DIMMs? With the recent influx of high-end SO-DIMMs, DDR3-2133 SO-DIMMs are not that rare anymore.
  • WithoutWeakness - Thursday, August 14, 2014 - link

    It's an ROG board. This is ASUS's flagship mITX board and they wouldn't dare cripple it with SO-DIMM slots when competitors are sure to put full-size DIMM slots on their boards. 90%+ of people who buy this board will put 8-16GB of 1600/1866 RAM in it and would be served just as well by SO-DIMM slots. However, the remaining 10% will want to stick in DDR3-3000+ and OC the hell out of it, and that's just not going to happen with SO-DIMMs.
  • owan - Thursday, August 14, 2014 - link

    In addition to the trace length and other specifications, the RAM slots occupy nearly the full length/width of the board. Where above the PCIe slot are they supposed to go? The space you're talking about is not only occupied by the PCH, mPCIe slot, and sound system, but is in the same area as the rear I/O, which is valuable real estate. Placing the RAM slots anywhere other than where they are right now doesn't even make sense from a layout standpoint, let alone the technical hurdles.
  • DanNeely - Thursday, August 14, 2014 - link

    In addition to what Ian said about the signalling challenges (which could be worked around by rotating the socket too, I suppose), unless you're willing to give up some expansion ports there isn't enough room to fit full-size DIMMs left to right on a mITX board. The board is square, so you'd have the same tiny clearance left-right as you do top-bottom, and I/O connectors are deeper front to back than the tiny sliver of space not taken by the DIMM.
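For scale, the arithmetic is unforgiving (standard JEDEC and mini-ITX dimensions, not measured from this particular board):

```python
# A full-size JEDEC DIMM is 133.35 mm long; mini-ITX is 170 mm square.
BOARD_MM = 170.0
DIMM_MM = 133.35

leftover = BOARD_MM - DIMM_MM
print(f"{leftover:.2f} mm of board edge left beside a DIMM slot")
```

Under 37 mm remains on that axis for latches and anything else, so full-size slots essentially have to run along a board edge.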
  • Morawka - Thursday, August 14, 2014 - link

    RAM needs to be as close as possible to the CPU socket for low latency. They made the right choice.
  • kwrzesien - Thursday, August 14, 2014 - link

    I would gladly give up the Start button on this mITX ROG to get a pair of SATA ports next to the USB 3.0 header.
  • Mkii - Thursday, August 14, 2014 - link

    Makes me wonder why they took the eSATA port off the back panel.
  • DanNeely - Thursday, August 14, 2014 - link

    Probably because USB3 has eaten a large chunk of the market for them; and when they ran out of space to fit everything in, they decided the button board would be useful to more people.
  • Erukian - Thursday, August 14, 2014 - link

    Does anyone know if the Noctua NH-D15 is compatible with this board?
  • Qrash - Thursday, August 14, 2014 - link

    Clearly I don't have one (yet), but from counting pixels and scaling to the board's dimensions (170 mm) I get the edge of the 150 mm wide NH-D15 to be right at the edge of the mPCIe module, which means the sound daughterboard might be in the way if it is too tall. I think the heatsink and the square corners of the fans sit 48 mm above the motherboard, judging from the drawing on the NH-D15's compatibility webpage.
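The pixel-counting method above can be sketched like this; the pixel values are made up for illustration, and only the 170 mm board width is a real dimension:

```python
# Convert a distance measured in photo pixels to millimetres by
# scaling against the known 170 mm mini-ITX board width.
BOARD_MM = 170.0

board_px = 1000            # hypothetical: board width measured in the photo
socket_to_module_px = 441  # hypothetical: socket centre to mPCIe module edge

mm_per_px = BOARD_MM / board_px
distance_mm = socket_to_module_px * mm_per_px
print(f"{socket_to_module_px} px ≈ {distance_mm:.0f} mm")
```

A 150 mm wide cooler centred on the socket reaches 75 mm each way, so the measured socket-to-module distance only needs to beat that for horizontal clearance.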
  • Qrash - Thursday, August 14, 2014 - link

    Doh! The Maximus VI Impact is marked on the NH-D15 compatibility webpage as compatible, so if the CPU socket and the positions of the add-on cards haven't moved on the new motherboard, then it should be compatible too.
  • 457R4LDR34DKN07 - Monday, August 18, 2014 - link

    Would love to see a review with an SM951 in the M.2 slot.
