System Benchmarks

Power

The two main aspects of power for a system like this are the idle and load measurements. For the CPU on its own, we have already tested the Threadripper Pro 3995WX in our 3995WX review, and it draws at most its rated TDP of 280 W. For the system as a whole, however, we have the base power consumption with no load, and then the additional load from the GPU to consider. The official board power of the RTX 6000 24 GB here is 295 W, so CPU plus GPU should come to 575 W.

  • Measurements taken at-wall
  • 75 W: Idle on OS
  • 630 W: Initial Peak with 100% CPU+GPU Load
  • 640 W: Sustained Peak with 100% CPU+GPU Load

For our metrics, we left the system at idle in the standard power profile but disabled sleep modes and display-off modes. After 30 minutes, the at-wall power of the system measured 75 W in a long idle state. When firing up both the CPU and GPU with a high-performance cryptography workload, the system jumped to 630 W at the wall. Over the next 20 minutes, as the fans sped up to deal with the increased thermals, we saw a peak power of 640 W. This is well within the limits of the 1000 W power supply, and even with a second GPU there should be headroom to spare.
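The figures above can be sanity-checked with some simple arithmetic. A minimal sketch using the review's numbers; the assumption that a second GPU adds its full 295 W board power at the wall is ours, not the article's:

```python
# Power-budget sketch for the P620 as tested. All wattages are from the
# review; the two-GPU estimate is an illustrative assumption.

CPU_TDP_W = 280          # Threadripper Pro 3995WX rated TDP
GPU_TBP_W = 295          # RTX 6000 24 GB official board power
PSU_RATED_W = 1000

SUSTAINED_WALL_W = 640   # measured at-wall, 100% CPU+GPU load

component_budget = CPU_TDP_W + GPU_TBP_W              # 575 W
# The at-wall figure also covers motherboard, RAM, fans, drives, and PSU
# conversion losses, which is why it exceeds the CPU+GPU budget alone.
rest_of_system = SUSTAINED_WALL_W - component_budget  # 65 W

# Rough estimate if a second identical GPU were fully loaded as well:
two_gpu_estimate = SUSTAINED_WALL_W + GPU_TBP_W       # 935 W
headroom = PSU_RATED_W - two_gpu_estimate             # 65 W at the wall

print(f"CPU+GPU budget: {component_budget} W")
print(f"Everything else at the wall: {rest_of_system} W")
print(f"Two-GPU estimate: {two_gpu_estimate} W (headroom {headroom} W)")
```

Note that the at-wall headroom understates the real margin, since the PSU's 1000 W rating refers to DC output, while wall draw includes conversion losses.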

Temperature

A workstation wouldn’t be a good under-the-desk unit if the thermals spiraled out of control. Lenovo’s system is straightforward enough, with the unit split into two: the top half with the CPU has a passive intake in the front and blows out the rear, while the PCIe area has an active intake at the front and blows out of the back. We’re under the impression that the full retail units will have a dedicated air baffle between these two zones, but in our testing, the thermals were relatively tame.

We fired up the system in a traditional home/home office environment, with an ambient temperature of around 26ºC, using the default fan setting (which is the lowest). Both the CPU and the GPU were given a strong sustained cryptography load with the fans left on automatic. Even at full load for over 24 hours, the peak CPU temperature was only 78ºC, and the peak GPU temperature was only 81ºC.

  • At Fan Level 1 (lowest) in BIOS, 100% CPU+GPU load for 24 hours
  • CPU Peak at 78ºC
  • GPU Peak at 81ºC
  • Ambient of 26ºC

With such a sustained heat soak, the chassis was certainly warm to the touch, so we took additional measurements with an IR meter. In the middle of the side panel we recorded 30ºC. Given that heat rises, the top of the case is where to look: at the front of the top panel we saw 33ºC, while at the rear it was 37ºC. Around the CPU exhaust fan, which has to deal with the hot exhaust air, our meter recorded 50ºC on the fan casing.
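What transfers to a room with a different ambient temperature is the rise over ambient, not the absolute reading. A small sketch converting the IR measurements above into deltas over the 26ºC ambient:

```python
# Chassis IR readings from the review, expressed as deltas over ambient.

AMBIENT_C = 26
readings_c = {
    "side panel (middle)": 30,
    "top of case, front": 33,
    "top of case, rear": 37,
    "CPU exhaust fan casing": 50,
}

for spot, temp in readings_c.items():
    print(f"{spot}: {temp} C (+{temp - AMBIENT_C} C over ambient)")
```

In a 20ºC office, for example, the same +24ºC rise at the exhaust fan casing would put it around 44ºC rather than 50ºC.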

Something like this, a big 640 W system pumping out hot air, will eventually warm up a room. That is always the danger with high-powered workstations running extended workloads – consumer gaming loads are at least somewhat varied, so after a few hours a gaming system gets a break.

Noise

Thermal results mean little without rating the noise of the system, and Lenovo has a setting in the BIOS that allows the owner to select one of seven fan profiles. As shipped, the default was the lowest fan setting.

At this level, the system was inaudible over the background hum of other electronics, and when the system was pushed to the limit, we measured 36 dBA at 30 cm from the rear exhaust, with the GPU hitting a peak of 80ºC. Using the highest fan setting, the system was 36 dBA at idle, rising to 46 dBA at load (or 50+ dBA if you get even closer than 30 cm to the rear). In this mode, however, the GPU only peaked at 73ºC.
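The "50+ dBA if you get closer than 30 cm" observation follows from simple distance scaling. A rough sketch using the free-field point-source rule (about 6 dB per halving of distance); this is an idealisation – near-field acoustics right next to a chassis are messier:

```python
import math

# Free-field SPL distance scaling: L2 = L1 + 20*log10(d_ref / d).
# Reference figure: the review's 46 dBA measured at 30 cm under load.

def spl_at_distance(spl_ref_db, d_ref_m, d_m):
    """Estimate SPL at distance d_m given a reading at d_ref_m."""
    return spl_ref_db + 20 * math.log10(d_ref_m / d_m)

print(f"{spl_at_distance(46, 0.30, 0.15):.1f} dBA at 15 cm")  # ~52 dBA
print(f"{spl_at_distance(46, 0.30, 1.00):.1f} dBA at 1 m")
```

Halving the distance from 30 cm to 15 cm adds roughly 6 dB, which lines up with the 50+ dBA figure noted above.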

POST Time

With more professional systems, the time from pressing the power button to when the OS is loaded is a lot longer than on consumer systems, mostly due to the management control that goes on, as well as the fact that POST time is not a direct optimization target in the same way. Enterprise systems typically have baseboard management controllers (BMCs), security features, and more memory channels to train, all of which add to POST time. For example, our launch systems for dual-socket AMD Rome and Intel Cascade Lake take 3-5 minutes after pressing the power button.

Workstations sit somewhere in the middle of all this. While they use processors with professional management features inside, only some add a BMC for control. Workstations do have more memory channels than consumer-grade hardware, so something between the ~20 seconds of consumer systems and the 3+ minutes of enterprise systems is a reasonable expectation.

For our test here, we shut down our system but kept the power cable plugged in. We started the timer from the moment we pressed the power button, and stopped it when the operating system was visible and accepted inputs. We also noted the POST time listed in the Startup section of Task Manager. This was repeated 5 times, with obvious outliers removed.

In our test, we saw a very consistent 60 seconds to enter the operating system. Given the manual timing and the wait for the monitor to switch to the OS resolution, a timing resolution to the nearest second is adequate here. In each of our tests, the BIOS time as reported in Task Manager was also fairly consistent, showing 41.0 to 41.5 seconds.

  • Turn on POST Time: 41.0 to 41.5 seconds
  • Turn on to OS Time: 60 seconds
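The methodology above – repeat the stopwatch measurement, drop obvious outliers, report to the nearest second – can be sketched as follows. The sample values here are hypothetical, for illustration only:

```python
import statistics

# Outlier-trimmed boot timing, as described in the methodology: drop
# runs far from the median, average the rest, round to the nearest
# second (the limit of manual stopwatch timing).

def trimmed_boot_time(samples_s, max_dev_s=5.0):
    """Average of samples within max_dev_s of the median, rounded."""
    med = statistics.median(samples_s)
    kept = [s for s in samples_s if abs(s - med) <= max_dev_s]
    return round(statistics.mean(kept))

runs = [60.2, 59.8, 60.5, 74.0, 60.1]  # 74 s: a one-off outlier run
print(trimmed_boot_time(runs))  # -> 60
```

The 5-second deviation threshold is an arbitrary choice for the sketch; the article simply says "obvious outliers removed".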

While these aren’t the fastest times in the world, this isn’t the sort of machine that will be turned off very often – and when it is, it at least gets back into a usable state quickly enough.

USB Speed

This system has eight USB 3.2 Gen 2 ports, rated at 10 Gbps. Four are located on the rear, all Type-A; of the four on the front, two are Type-C and two are Type-A. One of the front Type-A ports is also wired for always-on power, enabling it to act as a 5 W charging port for smartphones and similar devices even when the machine is powered off but still plugged in.

For our test of USB speed, we take an NVMe SSD installed in a USB enclosure: a Western Digital SN750 1TB drive, typically a PCIe 3.0 x4 drive, placed into an ASUS ROG Arion M.2 SSD enclosure built for USB 3.2 Gen 2 (10 Gbps) speeds. The enclosure has a Type-C output, and ASUS supplies both C-to-A and C-to-C cables to connect the drive to any number of devices. With this combination of hardware, our theoretical maximum speed should be in the 10 Gbps range, or 1250 MB/sec.

In our testing, only the front USB ports recognized our drive; the rear USB ports failed to register any attached device. We suspect this may be due to our unit being a test sample rather than a full retail system. Nonetheless, we tested the front ports with our drive using CrystalDiskMark, and all four performed identically within margin.

As you can see, the measured peak speeds are 1022 MB/s read and 942 MB/s write, indicating roughly 20% overhead versus the theoretical maximum.

For a more real-world use case, we transferred files to and from the drive totaling 30 GB – a dozen compressed files relating to a video game. Our realistic real-world speed was 709 MB/s, another 30% off the measured peak of the port.
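The overhead percentages above come from straightforward ratios against the 10 Gbps link. A minimal sketch of the arithmetic, using the review's measured figures:

```python
# USB 3.2 Gen 2 throughput efficiency, from the review's measurements.

LINK_MBPS = 10_000 / 8        # 10 Gbps raw link -> 1250 MB/s
PEAK_READ = 1022              # CrystalDiskMark sequential read, MB/s
REAL_WORLD = 709              # 30 GB file-copy average, MB/s

peak_efficiency = PEAK_READ / LINK_MBPS   # ~82% of the raw link
real_vs_peak = REAL_WORLD / PEAK_READ     # ~69% of the measured peak

print(f"Raw link ceiling: {LINK_MBPS:.0f} MB/s")
print(f"Peak vs raw link: {peak_efficiency:.0%}")
print(f"Real-world vs peak: {real_vs_peak:.0%}")
```

The first gap is protocol and controller overhead; the second reflects file-system and drive behaviour during a bulk copy rather than a synthetic sequential benchmark.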

DPC Latency

Deferred Procedure Calls (DPCs) are the mechanism by which Windows handles interrupt servicing. Rather than waiting on a processor to acknowledge each request immediately, the system queues interrupt requests by priority. Critical interrupts are handled as soon as possible, whereas lower-priority requests, such as audio, sit further down the line. If the audio device requires data, it has to wait until its request is processed before the buffer is filled.

If the device drivers of higher-priority components in a system are poorly implemented, they can cause delays in request scheduling and processing time. This can lead to an empty audio buffer and the characteristic audible pauses, pops, and clicks. The DPC latency checker measures how much time is spent processing DPCs from driver invocation; lower values allow reliable audio transfer at smaller buffer sizes. Results are measured in microseconds.

  • DPC Latency: 180 microseconds

Our result on the Lenovo P620 was 180 microseconds. This is lower than the 300 microseconds usually recommended for good operation, and below the 200 microseconds we usually put as an upper limit for consumer grade systems. Out of all the motherboards we test, 180 μs is actually rather high: our Intel Z490 motherboards were 104-131 μs, and our AMD X470/B550/X570 motherboards varied from 46-156 μs. The only time we typically go higher is with more professional grade systems, and that is because DPC latency, while a target, isn’t always a priority.
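To make the buffer-size connection concrete: if the audio driver can be stalled for the worst-case DPC latency, the playback buffer must hold at least that much audio or it underruns. A minimal sketch, assuming a common 48 kHz sample rate (our assumption; the review does not specify one), giving a lower bound only – real audio buffers are far larger:

```python
import math

# Minimum audio buffer occupancy needed to ride out a worst-case DPC
# stall: samples consumed = latency (s) * sample rate (Hz).

SAMPLE_RATE_HZ = 48_000  # assumed sample rate for illustration

def min_buffer_samples(dpc_latency_us, rate_hz=SAMPLE_RATE_HZ):
    """Samples drained while the worst-case DPC stall is serviced."""
    return math.ceil(dpc_latency_us * 1e-6 * rate_hz)

print(min_buffer_samples(180))   # the P620's measured figure
print(min_buffer_samples(300))   # the 'good operation' guideline
```

At 180 µs that is only a handful of samples, which is why this result, while high for our charts, is still comfortably inside the safe zone.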

Comments

  • GeoffreyA - Tuesday, February 16, 2021 - link

    "If all PCs were like this, the world would be a brighter place."

    Indeed, feel the same way too. Sheer excellence for once.
  • pattiobear - Tuesday, March 2, 2021 - link

    More engineering and thoughtful design? Sure.
    Proprietary/nonstandardized parts and connections? Not as much.
  • willis936 - Tuesday, February 16, 2021 - link

    That’s a nice looking machine and is fairly priced. My only concern is the use of small, high power fans for cooling. It makes sense in rack mounted situations, but this is a large workstation with lots of free space. They should stretch their legs with larger heatsinks or water cooling.

    Also, in the conclusion:
    “ On to that, the system also has a PCIe bracket area that is both tool-less and easy two use”

    Unless this is a pun I missed, this should be “to” not “two”.
  • DanNeely - Tuesday, February 16, 2021 - link

    the case/CPU fans look like full size ones. And while the memory fans are only 40mm, the amount of airflow needed to cool the ram should be low enough that a quiet fan should be sufficient; no need for the eleventeen zillion RPM ones to push >100W each in 1U chassis. The 46db Ian measured at load fits with quiet fans not screamers; if the ram fans were driving the noise profile I would have expected him to say something because they'd have a very different sound than bigger fans do when speeding up.
  • andychow - Tuesday, February 16, 2021 - link

    If they are offering a 42% discount on a new system, you know they are screwing you over if you ever pay retail.

    This is an 8 channel memory system, comes base with 1 stick? They shipped it with 2 sticks? Rather strange marketing decision.
  • Spunjji - Wednesday, February 17, 2021 - link

    "you know they are screwing you over if you ever pay retail"
    That's just it - nobody ever pays the quoted prices on these things. The question is whether you get the "discount" that's for everybody or the one that's for special customers.
  • stevielee - Tuesday, February 16, 2021 - link

    Noticed that just behind the bars of Memory is a photo of Rick Astley from his early "Never Gonna Give You Up" days. Could there be a 'hidden' review meaning about this Lenovo Thinkstation P620? As in being "Rick Rolled" if you buy into the whole kahuna?
  • WaltC - Tuesday, February 16, 2021 - link

    Nice review, Ian...;) All I can say looking at those options is a person could do much better just starting with the motherboard and putting the system together himself with his own component choices! Reducing an 8-channel capability to dual channel ROOB is a big blooper--should at least be supplying 4 channels with 4x8GB, imo. The motherboard is PCIe4, isn't it? Odd to see the NVMe's restricted to PCIe3. Leaving a boatload of performance on the table, looks like.
  • WaltC - Tuesday, February 16, 2021 - link

    Meant to say 4x16GB...8GB, even 4-channeled, really isn't even a minimum...;)
  • TomWomack - Tuesday, February 16, 2021 - link

    I am a bit wary of the 'power supply can easily be replaced' line - it is a very custom power supply, if Lenovo stop making them you're screwed, and it's much more likely that Lenovo would stop making this specific thing than that ATX power supplies become unavailable.

    In my experience power supplies are very much the first thing to fail, which is why I'm concerned about this - I've taken to buying second-hand dual-PSU servers and keeping one PSU aside so that I have one spare per system.
