X399 Motherboards: The MSI X399 Creation

For the motherboard situation, AMD clarified that all X399 motherboards on the market today will be able to run the new 250W processors. The differences will be in how well each motherboard can overclock, with AMD noting that the newer models and revisions should perform better, given that they were built with a higher power rating in mind. Boards like the X399 Creation should also help in pushing the first-generation Ryzen Threadripper parts.

Box. Has board inside.

As noted back at Computex, the MSI X399 Creation is a very visually busy motherboard. Lots of angles, and lots of shades of grey. I know it is customary in some Asian languages and magazines to be very dense, and this is kind of what that looks like. Most of the time I prefer a simpler, elegant design. This design does not scream elegance.

The key headline for this motherboard is the power delivery. MSI has put 16 phases on the processor, and another three for the "uncore" portion of the chip, or as AMD calls it, the SoC. In order to fit them all in, the DRAM slots sit slightly further down the board than usual, but this also allows MSI to fit a larger heatsink, which connects to the heatsink near the rear panel of the board.

In case you forget the name: Creation.

Storage on the motherboard comes in two forms: eight SATA ports, and seven M.2 drives. That is not a typo: MSI has enabled this motherboard with seven M.2 slots. Three are on the board itself and are found under the chipset heatsink. Here are two of them:

The other four come from an add-in PCIe card, which we also saw at Computex. It uses a dual-slot design and looks like a GPU:

But inside are four M.2 slots, with a thermal pad on the heatsink to assist with cooling.
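
MSI does not break down the card's lane allocation on this page, but as a rough sketch, assuming each M.2 slot is wired as PCIe 3.0 x4 and the card sits in a CPU-connected x16 slot bifurcated to x4/x4/x4/x4, the lane and bandwidth math works out roughly as follows:

```python
# Rough lane/bandwidth math for a quad-M.2 add-in card.
# Assumed (not confirmed on this page): each M.2 slot is wired as
# PCIe 3.0 x4, and the card occupies a CPU-connected x16 slot
# bifurcated to x4/x4/x4/x4.

PCIE3_GBPS_PER_LANE = 0.985   # 8 GT/s with 128b/130b encoding
LANES_PER_SLOT = 4            # one NVMe M.2 drive
SLOTS_ON_CARD = 4

lanes_needed = LANES_PER_SLOT * SLOTS_ON_CARD
per_drive = LANES_PER_SLOT * PCIE3_GBPS_PER_LANE
aggregate = lanes_needed * PCIE3_GBPS_PER_LANE

print(f"Lanes the card needs: x{lanes_needed}")
print(f"Per-drive ceiling:    ~{per_drive:.1f} GB/s")
print(f"Aggregate ceiling:    ~{aggregate:.1f} GB/s")
```

Under those assumptions the card consumes a full x16 link and tops out around 3.9 GB/s per drive, which is why it is paired with a platform that has PCIe lanes to spare.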

MSI states that this was built specifically with Threadripper in mind, so I'm going to annoy our SSD reviewer, Billy Tallis, into handing over a few more drives.

Also on the board is an extensive rear panel, with USB 3.1 ports, USB 3.0 ports, Ethernet, and Wi-Fi:

Comments

  • SetiroN - Monday, August 6, 2018 - link

    The memory configuration is going to be a huge bottleneck.

    Just try to use a 32-core Epyc with only 4 channels populated: performance is hindered so badly that you end up making very little use of the additional cores unless you're not accessing memory at all. (A rough sketch of the bandwidth math is included after the comments.)

    This all feels like an afterthought.
  • artk2219 - Monday, August 6, 2018 - link

    So you're telling me AMD is shaving off features from their more expensive server parts so that there's some market differentiation? For shame! Seriously though, it is annoying that TR4 and SP3 are "2 different sockets"; it would have been nice to be able to use Epycs in TR4.
  • drajitshnew - Monday, August 6, 2018 - link

    My "guess" is that while TR4 (SP3r2) and SP3 are both 4094 pins, in TR4 the pins leading to the second pair of dies are just that: pins. They are just for physical support and are not electrically connected to anything. Hence, to maintain backwards compatibility, AMD disabled the memory & PCIe of the second pair of dies.
  • eastcoast_pete - Monday, August 6, 2018 - link

    While I also believe that there is no such thing as too much computing power, the 32 (and 24?) core TRs are the CPU equivalents of a 1,000 HP engine in a car: great for bragging rights, but only useful in very specific situations, and otherwise not faster than mere 8 core chips. In this case, the applications where 32 cores can make a difference are those that are not that dependent on memory speed/access. I would love to see some benchmarks for compiling and complex CAD situations.

    Overall, the question is/remains how well AMD executed on this second round of "NUMA on a chip".
    Lastly, about EPYC vs. TR: AMD learned from the master (Intel). It's not about not letting people run server chips on desktop boards, it's about blocking people from doing the opposite: using much less expensive desktop CPUs in server boards and for server applications. That is also why desktop CPUs and chipsets basically never support ECC RAM, which is a requirement for many servers. TR is almost "EPYC", but just not quite, so you still have to buy EPYC and pay epic prices for your servers. But then, Intel does the same, and gouges us even worse.
  • mapesdhs - Monday, August 6, 2018 - link

    Not sure how these are about blocking people from doing the opposite, since they do support ECC, so surely one could use these CPUs just as they are with a good quality consumer mbd and they'd do just fine for a wide range of server tasks, using ECC memory if desired. If companies cared about cost that much then this is an option. Most though won't do that. There's a belief that companies will cram a consumer chip onto a pro board if they can, but really that's very rare as most bulk buyers of workstations and servers get them from OEMs, very few build their own.

    Nobody's gouging anyone btw, it's still a free market choice whether to buy Intel or not.
  • smilingcrow - Monday, August 6, 2018 - link

    In theory TR boards can support ECC, but I've heard reports that validation of ECC RAM is not exactly a priority, and with all the work Ryzen boards required regarding RAM, that's not a surprise.
    So has anybody here built a TR ECC system, and how did you get on? First-hand reports are always better.
  • Oxford Guy - Tuesday, August 7, 2018 - link

    ECC RAM is sold at slower speeds than typical enthusiast RAM. I fail to see why validation would be necessary. The fastest ECC RAM I know of is only 2666. If there is anything faster it should still fit within the TR2 spec.
  • imaheadcase - Monday, August 6, 2018 - link

    So why did the CPU race slow to a crawl these past few years? Have we actually reached a "safe" limit for CPUs until some new tech can make them faster? I know the need isn't as great as it used to be, but remember the days when CPU speed leaped so much each generation... like 500 MHz jumps with each new CPU, it seemed. Now we are seeing boosts... which is basically like saying "We can go this high, but it's just a limit because we're not sure of ourselves".
  • DigitalFreak - Monday, August 6, 2018 - link

    Two reasons come to mind: technology and competition. It's becoming increasingly difficult to move to smaller process nodes (see Intel 10nm), which are necessary to make faster chips. As for competition, Intel didn't have any until AMD's Zen architecture, and they're not going to put a lot of money into R&D if they don't have to. Unfortunately for them, AMD caught them with their pants down, and their 10nm process has had nothing but problems.
  • DigitalFreak - Monday, August 6, 2018 - link

    *which are necessary to make faster chips
    Faster chips without crazy heat output and power requirements, or huge die sizes.
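
As a back-of-the-envelope check on the quad-channel point raised in the comments, here is a rough sketch of peak DRAM bandwidth per core. The memory speeds are assumptions (DDR4-2933 for second-generation Threadripper, DDR4-2666 for first-generation EPYC), and sustained bandwidth will be lower than these theoretical peaks:

```python
# Back-of-envelope peak DRAM bandwidth per core for a 32-core part.
# Assumed memory speeds: DDR4-2933 (Threadripper 2, 4 channels) and
# DDR4-2666 (EPYC 7001, 8 channels); real sustained bandwidth is lower.

def peak_gbs(channels: int, mt_per_s: int, bus_bytes: int = 8) -> float:
    """Theoretical peak bandwidth in GB/s: channels * transfers/s * bytes per transfer."""
    return channels * mt_per_s * bus_bytes / 1000

CORES = 32
tr2 = peak_gbs(channels=4, mt_per_s=2933)
epyc = peak_gbs(channels=8, mt_per_s=2666)

print(f"Threadripper 2, 4ch: {tr2:.0f} GB/s total, {tr2 / CORES:.2f} GB/s per core")
print(f"EPYC 7001, 8ch:      {epyc:.0f} GB/s total, {epyc / CORES:.2f} GB/s per core")
```

Even at theoretical peaks, the quad-channel configuration leaves a 32-core chip with roughly half the per-core bandwidth of an eight-channel platform, which is the bottleneck being described above.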
