With the launch of AMD's new second-generation "Rome" EPYC processors, AMD's numerous hardware partners have been lining up to announce their own wares built around the new chips. And this doesn't just go for OEMs and integrators; it also extends to component suppliers, who have been eagerly awaiting the first x86 server processor with PCIe 4.0 support. To that end, yesterday Samsung sent out an announcement confirming that their previously announced PM1733 enterprise SSD supports AMD's new processors, and that the drives will be shipping this quarter, just in time for inclusion in Rome systems.

Samsung first announced the PM1733 last fall, as part of its general SSD roadmap update for the then-forthcoming year. The PM1733 is the successor to their PM1723b enterprise SSD, and is designed to take advantage of the bandwidth advances provided by PCIe 4.0. Based on the company’s proprietary controller and paired with 512 Gb 5th Generation V-NAND memory chips, the drive family is rated for read speeds of up to 8 GB/second.

Interestingly, Samsung is offering both PCIe 4.0 x4 and x8 versions of the SSDs, with the U.2 versions using an x4 interface while the half-height half-length (HHHL) card versions will use x8. The U.2 drives with dual-port capability will offer capacities of up to 30.72 TB, whereas the HHHL cards will carry up to 15.36 TB of usable V-NAND flash.

Overall, Samsung's published specifications focus on the drive's read performance rather than its write performance. Along with its 8 GB/sec rated sequential read speed, Samsung says the PM1733 can handle random reads of up to 1500K IOPS, which would make the PM1733 among the fastest drives announced so far.
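
For context on those figures, here is a rough back-of-the-envelope sketch of usable PCIe link bandwidth. It is an approximation that models only 128b/130b line encoding and ignores packet/protocol overhead, and the 4 KB transfer size used to convert the IOPS figure is an assumption on our part, not a Samsung spec:

```python
# Rough usable PCIe bandwidth per direction, modelling only 128b/130b line
# encoding (PCIe 3.0/4.0); packet and protocol overhead are ignored.

LANE_RATE_GT = {"3.0": 8.0, "4.0": 16.0}   # GT/s per lane
ENCODING = 128 / 130                        # 128b/130b for PCIe 3.0 and 4.0

def link_gbytes_per_sec(gen: str, lanes: int) -> float:
    """Approximate usable GB/s for a PCIe 3.0/4.0 link of the given width."""
    return LANE_RATE_GT[gen] * lanes * ENCODING / 8   # Gbit/s -> GB/s

for gen, lanes in [("3.0", 4), ("4.0", 4), ("4.0", 8)]:
    print(f"PCIe {gen} x{lanes}: ~{link_gbytes_per_sec(gen, lanes):.1f} GB/s")
# PCIe 3.0 x4: ~3.9 GB/s  -> too narrow for an 8 GB/s sequential rating
# PCIe 4.0 x4: ~7.9 GB/s  -> roughly matches the PM1733's 8 GB/s spec
# PCIe 4.0 x8: ~15.8 GB/s -> ample headroom on the HHHL card

# The 1500K random-read IOPS figure, assuming 4 KB reads, works out to roughly
# 1.5e6 * 4096 / 1e9 ~= 6.1 GB/s, which also fits within a PCIe 4.0 x4 link.
print(f"1500K x 4KB reads: ~{1.5e6 * 4096 / 1e9:.1f} GB/s")
```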

Samsung will start shipments of its PM1733 SSDs this quarter. Among x86 platforms, the PCIe 4.0 capabilities of the drives will only be available on AMD EPYC 7002 and AMD Ryzen 3000 platforms, but the devices will also work with current PCIe 3.0 platforms from Intel and AMD. The manufacturer has not disclosed prices of the new PM1733 SSDs so far.

Source: Samsung

45 Comments

View All Comments

  • Kevin G - Sunday, August 11, 2019 - link

    I would disagree with that. 10 Gbit off of an x1 PCIe 4.0 link is ideal for the embedded market, which is sensitive to things like board cost, etc. Even outside of the embedded market, it would enable onboard LAN to easily move to PCIe 4.0 and replace the 1 Gbit links currently sitting on an x1 PCIe 1.0/2.0/3.0 connection (see the bandwidth sketch after the comments).

    Also of note is that with AMD's Rome launch, they increased the number of PCIe lanes per socket. There is an extra x1 lane which OEMs are now using for BMC management, which could now be on a 10 Gbit LAN connection. This has utility when using VNC to get into a system running a GUI.
    Reply
  • Xajel - Friday, August 09, 2019 - link

    Usually companies think about targeting more users with the same product by maintaining backward compatibility. If they only offered a PCIe 4.0 x1 card, then most people on PCIe 3.0 would suffer a bottleneck, as x1 PCIe 3.0 is not enough.

    The only solution is to make the card physically x2 so that it works at PCIe 3.0 x2 when no PCIe 4.0 is available, and uses just one lane when it is.
    Reply
  • ADB1979 - Friday, August 09, 2019 - link

    I have PCIe x2 cards; they can only be used as follows, with caveats.

    An open-ended PCIe x1 slot (uncommon on low/mid-tier boards).
    A PCIe x2 slot (they exist in theory but not in practice; I've never seen one, and they haven't been mentioned since PCIe 1.0 was the new thing).
    A PCIe x4 slot, these are very uncommon on standard desktop boards, but common on (expensive) Workstation and Server boards.
    A PCIe x8 slot, these are very uncommon on standard desktop boards, but common on (expensive) Workstation and Server boards.
    A PCIe x16 slot, common on mid-tier and up boards, however...

    Caveats: not all motherboards allow PCIe x4, x8 and x16 slots to function at x2; on others, you have to set them up manually. For example, one motherboard of mine split the video output in half (onboard video, HDMI or DP), making it unusable in either of the x16 slots, while a different motherboard worked straight out of the box, no settings required; both used the same chipset (I will not name the boards or manufacturers).

    As for PCIe 4.0 x1 slots being used for 10G Ethernet, this is by far the best option, as it will provide max bandwidth and PCIe x1 slots are on virtually all boards, and I have never encountered an issue running any PCIe x1 device in any x16 slot (I have a PCIe x1 WiFi card in an x16 slot on the PC I am writing this on)...
    Reply
  • CrystalCowboy - Friday, August 09, 2019 - link

    The PCI Express spec does not include x2 cards.
    Reply
  • ADB1979 - Saturday, August 10, 2019 - link

    As I said, they WERE talked about when PCIe was brand new; however, the slots never materialised, and clearly PCIe x2 was dropped quite early.

    The cards, however, did materialise, and support for them is spotty.

    List of examples below.

    https://www.scan.co.uk/search?q=sata+pcie+card

    https://duckduckgo.com/?q=pcie+%22x2+slot%22&t...

    Now PCIe x2 is quite common for NVMe (PCIe) drives and is widely supported there, but as PCIe x2 slots do not exist and support for x2 cards is spotty, x2 cards will never become common, even in the era of PCIe 4.0, 5.0 and beyond.
    Reply
  • ADB1979 - Saturday, August 10, 2019 - link

    Here is a motherboard with an actual PCIe x2 slot, or rather an open-ended PCIe x8 slot that is electrically wired as an x2 slot.

    https://www.supermicro.com/en/products/motherboard...
    Reply
  • romrunning - Friday, August 09, 2019 - link

    Even more of a tangent: why can't I get 10Gb as standard LOM ports for servers at this point? I'd appreciate not having to get add-in cards sucking up my limited slots (and space).
    Reply
  • CoreLogicCom - Friday, August 09, 2019 - link

    HP, at least, has been offering 10Gb ports in the LOM since at least Gen 9. You get two per LOM.
    Reply
  • Zibi - Saturday, August 10, 2019 - link

    Actually, with Intel Cascade Lake you can. Starting with the C624 chipset, Intel provides two 10 Gb ports. Unfortunately, most big vendors tend to use just the C621. Still, even with LOMs on servers with the C621, you can have NICs like 4x10 Gb or 2x25 Gb.
    Reply
  • Dug - Tuesday, August 13, 2019 - link

    Because most installations use multiple cards for redundancy, fiber, or RDMA. There's no reason to include something that most won't use.
    But if you need more PCIe slots in your server and can't afford to use one of those slots for networking, then you are a rare case.
    Reply
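
As a rough sanity check on the lane-bandwidth figures discussed in the comments above, here is a minimal sketch comparing narrow PCIe links against a 10GbE port. It assumes 8b/10b encoding for PCIe 1.0/2.0 and 128b/130b for 3.0/4.0, with no overhead modelled beyond line encoding:

```python
# Usable per-direction bandwidth of narrow PCIe links vs. a 10GbE NIC.
# Back-of-the-envelope only: line encoding is the sole overhead modelled.

LANE_RATE_GT = {"1.0": 2.5, "2.0": 5.0, "3.0": 8.0, "4.0": 16.0}   # GT/s per lane
ENCODING = {"1.0": 8 / 10, "2.0": 8 / 10, "3.0": 128 / 130, "4.0": 128 / 130}

def usable_gbit(gen: str, lanes: int) -> float:
    """Approximate usable Gbit/s per direction for a PCIe link."""
    return LANE_RATE_GT[gen] * lanes * ENCODING[gen]

for gen, lanes in [("2.0", 1), ("3.0", 1), ("3.0", 2), ("4.0", 1)]:
    verdict = "enough" if usable_gbit(gen, lanes) >= 10 else "short"
    print(f"PCIe {gen} x{lanes}: ~{usable_gbit(gen, lanes):.2f} Gbit/s ({verdict} for 10GbE)")

# PCIe 4.0 x1 (~15.75 Gbit/s) comfortably feeds a 10GbE port, while PCIe 3.0 x1
# (~7.88 Gbit/s) falls short -- hence the suggestion above of a physical-x2 card
# that runs at PCIe 3.0 x2 (~15.75 Gbit/s) on older platforms and drops to a
# single lane on PCIe 4.0 ones.
```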
