Murphy's Law

Anything That Can Go Wrong, Will Go Wrong

For those of you who may not know, I am an Academic Director of MCT at Howest University here in Belgium. I perform research in our labs on big data analytics, virtualization, cloud computing, and server technology in general. We do all of our testing in that lab, and I also do launch article testing for AnandTech.

Like most academic institutions, we have a summer vacation during which our labs are locked and we are told to get some sunlight. AMD's Rome launch happened just as our lab closure started, so I had the Rome server delivered to my home lab instead. The only issue was that our corresponding Intel server was still in the academic lab. Normally this isn't really a problem: even when the lab is open, I issue tests through remote access, rebooting the systems and processing the data that way. If a hardware change is needed I have to be physically present, but usually that isn't an issue.

However, as Murphy's Law would have it, our Domain Controller crashed during testing for this review, while the labs were closed, and we could no longer reach our older servers. This limited our testing somewhat: while I could test the Rome system during normal hours at the home lab (I can't really run it overnight - it is a server, and therefore loud), I couldn't issue any benchmarks to our Naples / Cascade Lake systems in the lab.

As a result, our only option was to limit ourselves to the benchmarks already completed on the EPYC 7601, Skylake, and Cascade Lake machines. Rest assured that we will be back with our usual Big Data/AI and other real-world tests once we can get our complete testing infrastructure up and running.

Benchmark Configuration and Methodology

All of our testing was conducted on Ubuntu Server 18.04 LTS, except for the EPYC 7742 server, which was running Ubuntu 19.04. The reason was simple: we were told that 19.04 had validated support for Rome, and with only two weeks of testing time we wanted to complete as much as possible. Rome support (including the X2APIC/IOMMU patches needed to utilize 256 threads) is available in Linux kernel 4.19 and later.
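For readers setting up a similar system, a quick sanity check of the kernel and CPU topology before benchmarking can save some head-scratching. The sketch below is purely illustrative and is our own convenience script rather than part of any benchmark suite; the 256-thread figure assumes a dual EPYC 7742 with SMT enabled.

```python
# Minimal pre-benchmark sanity check (illustrative sketch, not a benchmark tool).
import os
import platform

# Kernel 4.19 or later carries the X2APIC/IOMMU patches needed for 256 threads.
major, minor = (int(x) for x in platform.release().split(".")[:2])
assert (major, minor) >= (4, 19), f"Kernel {platform.release()} is too old for Rome"

# A dual EPYC 7742 with SMT enabled should expose 2 x 64 x 2 = 256 logical CPUs.
print(f"Logical CPUs visible to the OS: {os.cpu_count()}")

# The x2apic CPU flag in /proc/cpuinfo shows hardware support; checking the
# kernel log (e.g. dmesg) shows whether the kernel actually enabled it.
with open("/proc/cpuinfo") as f:
    flags = next(line for line in f if line.startswith("flags")).split()
print("x2apic flag present:", "x2apic" in flags)
```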

You will notice that the DRAM capacity varies among our server configurations. This is simply a result of the Xeons having access to six memory channels while the EPYC CPUs have eight. As far as we know, all of our tests fit in 128 GB, so DRAM capacity should not have much influence on performance.
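For what it's worth, one rough way to verify that a benchmark's working set stays under that 128 GB figure is to sample /proc/meminfo while it runs. The sketch below is our own illustration of the idea, not part of our test harness, and the `sleep` command is just a placeholder for whatever benchmark you would launch.

```python
# Illustrative sketch: track peak memory consumed while a benchmark runs.
import subprocess
import time

def mem_available_gib():
    # /proc/meminfo reports values in kB; convert to GiB.
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1]) / 1024 / 1024
    raise RuntimeError("MemAvailable not found")

baseline = mem_available_gib()
proc = subprocess.Popen(["sleep", "10"])  # placeholder for the actual benchmark
peak_used = 0.0
while proc.poll() is None:
    peak_used = max(peak_used, baseline - mem_available_gib())
    time.sleep(1)
print(f"Peak additional memory used: {peak_used:.1f} GiB (should stay well under 128 GiB)")
```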

 

AMD Daytona - Dual EPYC 7742

AMD sent us the "Daytona XT" server, a reference platform built by ODM Quanta (D52BQ-2U).

CPU AMD EPYC 7742 (2.25 GHz, 64c, 256 MB L3, 225W)
RAM 512 GB (16x32 GB) Micron DDR4-3200
Internal Disks SAMSUNG MZ7LM240 (bootdisk)
Micron 9300 3.84 TB (data)
Motherboard Daytona reference board: S5BQ
PSU PWS-1200

Although the 225W TDP CPUs need extra heatpipes and heatsinks, they are still running on air cooling...

AMD EPYC 7601 (2U Chassis)

CPU Two EPYC 7601  (2.2 GHz, 32c, 8x8MB L3, 180W)
RAM 512 GB (16x32 GB) Samsung DDR4-2666 @2400
Internal Disks SAMSUNG MZ7LM240 (bootdisk)
Intel SSD3710 800 GB (data)
Motherboard AMD Speedway
PSU 1100W PSU (80+ Platinum)

Intel's Xeon "Purley" Server – S2P2SY3Q (2U Chassis)

CPU Two Intel Xeon Platinum 8280  (2.7 GHz, 28c, 38.5MB L3, 205W)
Two Intel Xeon Platinum 8176  (2.1 GHz, 28c, 38.5MB L3, 165W)
RAM 384 GB (12x32 GB) Hynix DDR4-2666
Internal Disks SAMSUNG MZ7LM240 (bootdisk)
Micron 9300 3.84 TB (data)
Motherboard Intel S2600WF (Wolf Pass baseboard)
Chipset Intel Wellsburg B0
PSU 1100W PSU (80+ Platinum)

We enabled hyper-threading and Intel virtualization acceleration.
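If you want to double-check both settings from the OS side, something along the lines of the sketch below works on a recent Linux install; it is only an assumption of ours that `lscpu` (part of util-linux) is available, and the `vmx` flag applies to the Intel systems (AMD's equivalent is `svm`).

```python
# Illustrative check that Hyper-Threading and VT-x are exposed to the OS.
import subprocess

# "Thread(s) per core: 2" indicates SMT/Hyper-Threading is enabled.
lscpu = subprocess.run(["lscpu"], capture_output=True, text=True).stdout
threads_per_core = next(
    line.split(":")[1].strip()
    for line in lscpu.splitlines()
    if line.startswith("Thread(s) per core")
)
print("Thread(s) per core:", threads_per_core)

# The vmx flag in /proc/cpuinfo indicates Intel VT-x support is exposed.
with open("/proc/cpuinfo") as f:
    flags = next(line for line in f if line.startswith("flags")).split()
print("VT-x (vmx) flag present:", "vmx" in flags)
```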

Comments

  • nathanddrews - Wednesday, August 7, 2019 - link

    Binned for OC? We'll find out soon enough!
  • DigitalFreak - Thursday, August 8, 2019 - link

    At this point it looks like all TR will get you is "official" ECC support and more PCIe lanes. Maybe cheaper motherboards than EPYC.
  • willis936 - Thursday, August 8, 2019 - link

    Half the memory channels (this is a big one), half the PCIe lanes, max of 1 socket per mobo. Those are important features for datacenter customers, and their absence from Threadripper makes it less desirable than EPYC in the datacenter.
  • rocky12345 - Thursday, August 8, 2019 - link

    Yes, but Threadripper is made for high-end desktops, for video editing and some gaming. I do not see the big data center guys going after TR all that much. You may see some TR systems end up there, but that is not what TR is made for; that is why we have EPYC & Xeon CPUs.

    I do have to agree, though, with those asking where TR fits in price-wise, since we are going to have a 16/32 mainstream desktop CPU from AMD shortly.

    I also think that this time around the 32/64 3990 TR will be 10x better than the older 2990 TR, simply because the memory controller is no longer in each CPU complex; on the 2990X, memory bandwidth and latency really suffered when all cores were being used. On the 3990X (or whatever it will be called) this should not be an issue.

    If AMD is smart they will not release a 64/128 3000 series TR, since it would have to be priced too far out of reach for even the most techy guy with money, and the only ones that would have them would be review sites and YT reviewers, and only because they got them sent for free for reviews. 32/64, plus the better memory performance of the new chips as a whole, would be more than enough to make a 32/64 TR 3990X an instant success. Just my opinion of course; AMD will probably do something stupid and release a higher core count TR that next to no one will be able to afford, just to be able to say "hey, we got the best high-end CPU on the planet" - too bad no one is going to buy it because the price is too high, but we have the best, so who cares.
  • rocky12345 - Thursday, August 8, 2019 - link

    Oops, dammit, forgot to make paragraphs; did not mean to have it all bunched up like that.
  • Mark Rose - Friday, August 9, 2019 - link

    Why wouldn't they release a 64 core Threadripper? Assuming they double the price of the 32 core, it would be $3400. That's affordable to a lot of people working in tech, and should be affordable to just about any business that has employees waiting on their 32 core Threadripper. AMD would sell a ton.

    That being said, I wouldn't personally buy one as I don't have a need. I'd be more likely to buy a 16-core 3000 series Threadripper myself.
  • Manch - Friday, August 9, 2019 - link

    Higher Clocks
  • sor - Wednesday, August 7, 2019 - link

    It will be a feature/packaging thing. The motherboards would be TR4 and offer enthusiast features, overclocked memory, etc., not highly reliable server-oriented boards. The processors themselves might be fairly comparable to their EPYC counterparts, as some Xeons were occasionally comparable to their desktop ones.
  • close - Thursday, August 8, 2019 - link

    TR was supposed to be a stopgap measure until the consumer Ryzen range stretched high enough and the server EPYC range stretched low enough. I guess there is a place for further differentiation, especially in terms of the platform (motherboard) used, where you have a server-like CPU on a more consumer-like MB to create basically a workstation. Maybe OC will also fit in here.
  • Death666Angel - Friday, August 9, 2019 - link

    "TR was supposed to be a stopgap measure" where can I see AMD stating that? Considering Intel has fared pretty well with the consumer/HEDT/server differentiation, I don't think AMD needs to axe TR. I don't see them giving us EPYC with OC functions and 8 memory channles seems overkill for 16 or 32 desktop cores. I also haven't seen a statement to the effect you claim, so I highly doubt it at the moment.
