Power, Temperature, & Noise

Finally, let's talk about power, temperature, and noise. At a high level, the Titan V should not be substantially different from other high-end NVIDIA cards. It has the same 250W TDP, and the cooler is nearly identical to NVIDIA’s other vapor chamber cooler designs. In short, NVIDIA has carved out a specific niche on power consumption that the Titan V should fall nicely into.

Unfortunately, no utilities seem to be reporting the voltage or HBM2 temperature of the Titan V at this time. These would be of particular interest considering that Volta is fabbed on TSMC's bespoke 12FFN process as opposed to 16nm FinFET. This also marks the first time NVIDIA has brought HBM2 to gaming use-cases, where HBM2 temperatures and voltages could be illuminating.
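For those who want to poke at this themselves, the public telemetry is easy to query through NVIDIA's NVML library. Below is a minimal sketch using the pynvml Python bindings, assuming a single-GPU system; it reads core temperature and clocks, and notably nothing in the public API here reports HBM2 temperature or GPU voltage.

# Minimal NVML polling sketch (pynvml bindings). Illustrative only:
# it shows what public telemetry exists. As noted above, nothing here
# surfaces HBM2 temperature or GPU voltage for the Titan V.
from pynvml import (nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
                    nvmlDeviceGetTemperature, nvmlDeviceGetClockInfo,
                    NVML_TEMPERATURE_GPU, NVML_CLOCK_GRAPHICS, NVML_CLOCK_MEM)

nvmlInit()
gpu = nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

print("GPU temp:    ", nvmlDeviceGetTemperature(gpu, NVML_TEMPERATURE_GPU), "C")
print("Core clock:  ", nvmlDeviceGetClockInfo(gpu, NVML_CLOCK_GRAPHICS), "MHz")
print("Memory clock:", nvmlDeviceGetClockInfo(gpu, NVML_CLOCK_MEM), "MHz")
nvmlShutdown()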

NVIDIA Titan V and Xp Average Clockspeeds

                    NVIDIA Titan V    NVIDIA Titan Xp    Percent Difference
Idle                135MHz            139MHz             -
Boost Clocks        1455MHz           1582MHz            -8.0%
Max Observed Boost  1785MHz           1911MHz            -6.6%
LuxMark Max Boost   1355MHz           1911MHz            -29.0%
Battlefield 1       1651MHz           1767MHz            -6.6%
Ashes: Escalation   1563MHz           1724MHz            -9.3%
DOOM                1561MHz           1751MHz            -10.9%
Ghost Recon         1699MHz           1808MHz            -6.0%
Deus Ex (DX11)      1576MHz           1785MHz            -11.7%
GTA V               1674MHz           1805MHz            -7.3%
Total War (DX11)    1621MHz           1759MHz            -7.8%
FurMark             1200MHz           1404MHz            -14.5%

Interestingly, LuxMark only brings the Titan V to 1355MHz, well short of its maximum boost clock, a behavior that differs from every other card we've benched in recent memory. Other compute and gaming tasks do push clocks higher, with an observed peak of 1785MHz.

The other takeaway is that the Titan V is consistently outclocked by the Titan Xp. For gaming, at least, Volta's performance gains do not appear to come from clockspeed improvements, unlike the bulk of Pascal's gains over Maxwell.
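As a rough illustration of how average clockspeeds like those in the table above can be collected, the sketch below polls NVML once a second while a benchmark runs and averages the samples. This is illustrative only, not our exact methodology, and the 300-sample window is an arbitrary choice.

# Rough sketch: log average core clocks during a benchmark run by
# polling once a second, then averaging the samples. (Illustrative
# only; not the exact methodology behind the table above.)
import time
from pynvml import (nvmlInit, nvmlDeviceGetHandleByIndex,
                    nvmlDeviceGetClockInfo, NVML_CLOCK_GRAPHICS)

nvmlInit()
gpu = nvmlDeviceGetHandleByIndex(0)

samples = []
for _ in range(300):  # ~5 minutes of once-a-second samples
    samples.append(nvmlDeviceGetClockInfo(gpu, NVML_CLOCK_GRAPHICS))
    time.sleep(1)

print("Average clock: %.0f MHz" % (sum(samples) / len(samples)))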

Meanwhile, it's worth noting that the HBM2 memory on the Titan V has only one observed clock state: 850MHz. This never deviates, even under FurMark or extended compute and gaming loads. By comparison, AMD's Vega cards, the other consumer/prosumer graphics cards with HBM2, downclock their HBM2 in high-temperature situations like FurMark and also feature a low-power 167MHz idle state.
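For the curious, NVML can also enumerate the memory clock states a card's driver exposes, which is one way to check for the kind of idle state Vega has. A minimal sketch, again via pynvml:

# Sketch: list the memory clock states the driver exposes. A card with
# multiple HBM2 P-states (e.g. a low-power idle state) would report
# several entries; a single entry matches the constant clock observed.
from pynvml import (nvmlInit, nvmlDeviceGetHandleByIndex,
                    nvmlDeviceGetSupportedMemoryClocks)

nvmlInit()
gpu = nvmlDeviceGetHandleByIndex(0)
for mhz in nvmlDeviceGetSupportedMemoryClocks(gpu):
    print("Supported memory clock: %d MHz" % mhz)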

Idle Power Consumption

Measuring power at the wall, the Titan V's higher idle and lower load readings jump out.

Load Power Consumption - Battlefield 1

Load Power Consumption - FurMark

Meanwhile under load, the Titan V's power consumption at the wall is slightly but consistently lower than the Titan Xp's. This is despite the fact that both cards share the same 250W TDP, and NVIDIA's figures have tended to be quite consistent here ever since Maxwell introduced better power management.
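For context, NVML also reports board power against the enforced limit, which is one way to see how close a card is running to its 250W TDP. Wall measurements will of course read higher, since they include the CPU, the PSU's conversion losses, and the rest of the system. A quick sketch:

# Sketch: compare the GPU's reported board power against its enforced
# power limit (NVML returns both in milliwatts). Wall readings will be
# higher, as they include the CPU, PSU losses, and the rest of the system.
from pynvml import (nvmlInit, nvmlDeviceGetHandleByIndex,
                    nvmlDeviceGetPowerUsage, nvmlDeviceGetEnforcedPowerLimit)

nvmlInit()
gpu = nvmlDeviceGetHandleByIndex(0)
draw = nvmlDeviceGetPowerUsage(gpu) / 1000.0           # watts
limit = nvmlDeviceGetEnforcedPowerLimit(gpu) / 1000.0  # watts
print("Board power: %.1f W of %.1f W limit" % (draw, limit))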

Idle GPU Temperature

Load GPU Temperature - Battlefield 1

Load GPU Temperature - FurMark

Over the course of benchmarking, GPU-Z reported a significant amount of thermal throttling on the Titan V. That continued in Battlefield 1, where the card oscillated between being limited by GPU underutilization and by temperature. And in FurMark, the Titan V was consistently temperature-limited.

Without HBM2 voltages, it is hard to say whether the constant 850MHz memory clock is related to the Titan V's higher idle system draw. At 815mm², GV100 is quite large, but then again elements like Volta's tensor cores go unused in gaming. In Battlefield 1, system power draw is actually lower than with the Titan Xp, yet GPU-Z suggests that thermal limits, rather than the power limit, are responsible. Typically, what we've seen with other 250W TDP NVIDIA cards is that they hit their TDP limits more often than their temperature limits, so this is an unusual development.
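The throttle reasons themselves can also be read out programmatically. NVML exposes a throttle-reason bitmask comparable to the "PerfCap Reason" that GPU-Z displays; the sketch below checks whether the power limit or the temperature limit is currently holding clocks back.

# Sketch: decode NVML's clock throttle-reason bitmask, which reports
# comparable information to GPU-Z's "PerfCap Reason" field. This shows
# whether a card is currently held back by its power limit or by heat.
from pynvml import (nvmlInit, nvmlDeviceGetHandleByIndex,
                    nvmlDeviceGetCurrentClocksThrottleReasons,
                    nvmlClocksThrottleReasonSwPowerCap,
                    nvmlClocksThrottleReasonSwThermalSlowdown,
                    nvmlClocksThrottleReasonHwThermalSlowdown)

nvmlInit()
gpu = nvmlDeviceGetHandleByIndex(0)
mask = nvmlDeviceGetCurrentClocksThrottleReasons(gpu)

if mask & nvmlClocksThrottleReasonSwPowerCap:
    print("Throttling: power limit (TDP)")
if mask & (nvmlClocksThrottleReasonSwThermalSlowdown |
           nvmlClocksThrottleReasonHwThermalSlowdown):
    print("Throttling: temperature limit")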

Idle Noise Levels

Load Noise Levels - Battlefield 1

Load Noise Levels - FurMark

Featuring an improved cooler, Titan V essentially manages the same noise metrics as its Titan siblings.

Comments
  • mode_13h - Wednesday, December 27, 2017 - link

    It's true. All they had to do was pay some grad students to optimize HPC and deep learning software for their GPUs. They could've done that for the price of a couple of marketing people's salaries.
  • CiccioB - Monday, January 1, 2018 - link

    That would not be a surprise.
    AMD's strategy on SW support has always been to let others (usually not professionals) do the job at their own cost. The result is that AMD HW has never had decent SW support outside of gaming (and that's only because Sony and MS spend money on improving gaming performance for their consoles).
  • tipoo - Friday, December 22, 2017 - link

    Sarcasm? There's no Vega built up to this scale.
  • mode_13h - Wednesday, December 27, 2017 - link

    It *is* pretty big and burns about as much power. Yet it's nowhere near as fast at deep learning. Even with its lower purchase price, it's still not operationally cost-competitive with GV100.

    If you look at its feature set, it was really aimed at HPC and deep learning. In the face of Volta's tensor cores, it kinda fell flat on the latter front.
  • Keermalec - Wednesday, December 20, 2017 - link

    What about mining benchmarks?
  • tipoo - Friday, December 22, 2017 - link

    Would be in line with the CUDA improvements. I.e., two 1080s would be much better at mining. Most of the uplift is in tensor performance, which no mining algorithm uses.
  • Cryio - Wednesday, December 20, 2017 - link

    Wait wait wait.

    Crysis Warhead at 4K, Very High with 4 times Supersampling? I think you mean Multisampling.

    I don't think this could manage 4K60 at max settings with 4xSSAA, lol.
  • Ryan Smith - Thursday, December 21, 2017 - link

    "I think you mean Multisampling."

    Nope, supersampling. =)
  • mode_13h - Wednesday, December 27, 2017 - link

    Tile rendering FTMFW.
  • Kevin G - Wednesday, December 20, 2017 - link

    "For our full review hopefully we can track down a Quadro GP100"

    YES. The oddity here is that the GP100 might end up being better than the Titan V at gaming due to having 128 ROPs vs. 96 ROPs, and even higher memory bandwidth.

    Outside of half-precision matrix multiplication, the Titan V should be roughly 43% faster in professional workloads, due mainly to the difference in ALU counts. Boost clocks differ by a meager 25MHz. Major deviations beyond that 43% difference would be where the architectures differ. There is a chance benchmarks would come in below that 43% mark if memory bandwidth comes into play.
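For reference, the 43% figure presumably falls out of the FP32 ALU counts, assuming 5120 CUDA cores for the Titan V and 3584 for the Quadro GP100:

# Quick check of the ~43% ALU-count figure, assuming 5120 FP32 CUDA
# cores for the Titan V and 3584 for the Quadro GP100.
titan_v_alus, quadro_gp100_alus = 5120, 3584
print("ALU advantage: %.1f%%" % ((titan_v_alus / quadro_gp100_alus - 1) * 100))
# -> ALU advantage: 42.9%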
