AMD 3990X Against $20k Enterprise CPUs

For those looking at a server replacement CPU, AMD's big discussion point here is that getting to 64 cores on Intel hardware is relatively hard. The most direct route is a dual-socket system with two of Intel's 28-core processors at a hefty $10k apiece. AMD's argument is that users can consolidate down to a single socket, while also getting better memory support, PCIe 4.0, and no cross-socket memory domain issues.

AMD 3990X Enterprise Competition

AnandTech            AMD 3990X     AMD 7702P     Intel 2x 8280
SEP                  $3990         $4450         $20018 (2x $10009)
Cores/Threads        64 / 128      64 / 128      56 / 112
Base Frequency       2900 MHz      2000 MHz      2700 MHz
Turbo Frequency      4300 MHz      3350 MHz      4000 MHz
PCIe                 4.0 x64       4.0 x128      3.0 x96
DDR4 Frequency       4x 3200       8x 3200       12x 2933
Max DDR4 Capacity    512 GB        2 TB          3 TB
TDP                  280 W         200 W         410 W (2x 205 W)

Unfortunately I was unable to get hold of our Rome CPUs from Johan in time for this review; however, I do have data from several dual Intel Xeon setups that I tested a few months ago, including the $20k system.

Corona 1.3 Benchmark

This time with Corona the competition is hot on the heels of AMD's 64-core CPUs, but even $20k of hardware can't match it.

3D Particle Movement v2.1

The non-AVX version of 3DPM puts the Zen 2 hardware out front, with everything else waiting in the wings.

3D Particle Movement v2.1 (with AVX)

When we add in the AVX-512 hand-tuned code, the situation flips: Intel's 56 cores score almost 2.5x higher than AMD's, despite the lower core count.

Blender 2.79b bmw27_cpu Benchmark

Blender doesn't seem to like the additional access latency from the 2P systems.

AES Encoding

For AES encoding, as the benchmark runs out of memory, it appears that none of Intel's CPUs can match AMD here.

7-Zip 1805 Combined

For the 7-Zip combined test, there's little difference between AMD's 32-core and 64-core parts, but both represent a sizable jump over the Intel hardware.

POV-Ray 3.7.1 Benchmark

LuxMark v3.1 C++

AppTimer: GIMP 2.10.4

Verdict

In our tests here (with more in our benchmark database), AMD's 3990X takes the crown over Intel's dual-socket offering. The only thing holding me back from handing it over outright is the same hesitation from the previous page: it doesn't do enough to differentiate itself from AMD's own 32-core CPU. Where AMD does win is in the 'money is less of an issue' scenario, where a single-socket 64-core CPU can help consolidate systems, save power, and save money. Intel's CPUs have a TDP of 205 W each (more if you decide to use the turbo, which we did here), for 410 W total, while AMD maxed out at 280 W in our tests. Technically Intel's 2P setup has access to more PCIe lanes, but AMD's lanes are PCIe 4.0 rather than PCIe 3.0, and with the right switch they can drive many more devices than Intel's (if you're saving $16k, a switch is peanuts).

We acknowledge that our tests here aren't in any way a comprehensive suite of server-level workloads, but for the user base AMD is aiming for, we'd take the 64-core (or even the 32-core) in most circumstances over two Intel 28-core CPUs, and spend the extra money on memory, storage, or a couple of big fat GPUs.

Comments

  • GreenReaper - Saturday, February 8, 2020 - link

    64 sockets, 64 cores, 64 threads per CPU - x64 was never intended to surmount these limits. Heck, affinity groups were only introduced in Windows XP and Server 2003.

    Unfortunately they hardcoded the 64-CPU limit by using a DWORD, and had to bolt on Processor Groups as a hack in Win7/2008 R2 for the sake of a stable kernel API.
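
    The workaround looks roughly like this; a minimal C sketch, with the group and CPU numbers picked purely for illustration:

    ```c
    #define _WIN32_WINNT 0x0601   /* Windows 7+, for the processor group APIs */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Each processor group holds at most 64 logical CPUs. */
        WORD groups = GetActiveProcessorGroupCount();
        for (WORD g = 0; g < groups; g++)
            printf("group %u: %lu logical CPUs\n", g, GetActiveProcessorCount(g));

        /* Pin the current thread to logical CPU 5 of group 1 (hypothetical numbers). */
        GROUP_AFFINITY ga = {0};
        ga.Group = 1;
        ga.Mask  = (KAFFINITY)1 << 5;
        if (!SetThreadGroupAffinity(GetCurrentThread(), &ga, NULL))
            printf("SetThreadGroupAffinity failed: %lu\n", GetLastError());
        return 0;
    }
    ```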

    Linux's sched_setaffinity() had the foresight to use a length parameter and a pointer: https://www.linuxjournal.com/article/6799
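
    A minimal sketch of that interface (the CPU number is just for illustration):

    ```c
    #define _GNU_SOURCE            /* for sched_setaffinity() and the CPU_* macros */
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        /* The mask is a caller-sized buffer plus a length, not a fixed
           64-bit word, so the same call scales to any CPU count. */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(3, &set);                     /* pin to logical CPU 3 */

        if (sched_setaffinity(0, sizeof(set), &set) != 0) {  /* pid 0 = calling thread */
            perror("sched_setaffinity");
            return 1;
        }
        /* Beyond CPU_SETSIZE (1024) CPUs, CPU_ALLOC() sizes the mask dynamically. */
        return 0;
    }
    ```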

    I compile my kernels to support a specific number of CPUs, as there are costs to supporting more, albeit relatively small ones (it assumes that you might hot-add them).
  • Gonemad - Friday, February 7, 2020 - link

    Seeing a $4k processor clubbing a $20k processor to death and taking its lunch (in more than one metric) is priceless.

    If you know what you need, you can save 15 to 16 grand building an AMD machine, and that's incredible.

    It shows how greedy and lazy Intel has become.

    It may not be the best chip for, say, a gaming machine, but it can beat a 20-grand Intel setup, and that secures the chip a spot; it's far from useless.
  • Khenglish - Friday, February 7, 2020 - link

    I doubt that really anyone would practically want to do this, but in Windows 10 if you disable the GPU driver, games and benchmarks will be fully CPU software rendered. I'm curious how this 64 core beast performs as a GPU!
  • Hulk - Friday, February 7, 2020 - link

    Not very well. Modern GPUs have thousands of specialized processors.
  • Kevin G - Friday, February 7, 2020 - link

    The shaders themselves are remarkably programmable. The only thing really separating them from more traditional CPUs in terms of capability is how they handle interrupts for IO; otherwise they'd be functionally complete. Granted, the per-thread performance would be abysmal compared to modern CPUs, which are fully pipelined, OoO monsters. One other difference is that since GPU tasks are embarrassingly parallel by nature, these shaders have hardware thread management to quickly switch between them and partition resources, achieving some fairly high utilization rates.

    The real specialization is in the fixed-function units for the TMUs and ROPs.
  • willis936 - Friday, February 7, 2020 - link

    Will they really? I don’t think graphics APIs fall back on software rendering for most essential features.
  • hansmuff - Friday, February 7, 2020 - link

    That is incorrect. Software rendering is never done by Windows just because you don't have rendering hardware. Games no longer come with software renderers like they used to many, many moons ago.
  • Khenglish - Friday, February 7, 2020 - link

    I love how everyone had to jump in and say I was wrong without spending 30 seconds to disable their GPU driver, try it themselves, and find out they're wrong.

    There are a lot of issues with the Win10 software renderer (full-screen mode is mostly broken, and only DX11 seems supported), but it does work. My Ivy Bridge gets fully loaded at 70W+ just to pull off 7 fps at 640x480 in Unigine Heaven, but this is something you can do.
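
    For what it's worth, the software path being exercised there appears to be Microsoft's WARP rasterizer, which you can also request explicitly instead of disabling the driver; a minimal D3D11 sketch in C, assuming the Windows SDK headers and linking against d3d11.lib:

    ```c
    #define COBJMACROS            /* C-style COM macros for Release() */
    #include <d3d11.h>
    #include <stdio.h>

    int main(void)
    {
        ID3D11Device        *dev = NULL;
        ID3D11DeviceContext *ctx = NULL;
        D3D_FEATURE_LEVEL    fl;

        /* D3D_DRIVER_TYPE_WARP selects the software rasterizer. */
        HRESULT hr = D3D11CreateDevice(
            NULL, D3D_DRIVER_TYPE_WARP, NULL, 0,
            NULL, 0,                    /* default feature levels */
            D3D11_SDK_VERSION, &dev, &fl, &ctx);

        if (FAILED(hr)) {
            printf("D3D11CreateDevice failed: 0x%08lx\n", (unsigned long)hr);
            return 1;
        }
        printf("WARP device created, feature level 0x%x\n", fl);
        ID3D11DeviceContext_Release(ctx);
        ID3D11Device_Release(dev);
        return 0;
    }
    ```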
  • extide - Friday, February 7, 2020 - link

    No -- the Windows UI will drop back to software mode but games have not included software renderers for ~two decades.
  • FunBunny2 - Friday, February 7, 2020 - link

    " games have not included software renderers for ~two decades."

    which is a deja vu experience: in the beginning DOS was a nice, benign, control program. then Lotus discovered that the only way to run 1-2-3 faster than molasses uphill in winter was to fiddle the hardware directly, which DOS was happy to let it do. it didn't take long for the evil folks to discover that they could too, and the virus was born. one has to wonder how much exposure this latest GPU hardware presents.
