Power, Temperature, & Noise

Finally, let's talk about power, temperature, and noise. At a high level, the Titan V should not be substantially different from other high-end NVIDIA cards. It has the same 250W TDP, and the cooler is nearly identical to NVIDIA’s other vapor chamber cooler designs. In short, NVIDIA has carved out a specific niche on power consumption that the Titan V should fall nicely into.

Unfortunately, no utilities seem to be reporting the Titan V's voltages or HBM2 temperatures at this time. These would be of particular interest, considering that Volta is fabbed on TSMC's bespoke 12nm FFN process as opposed to 16nm FinFET. This also marks the first time NVIDIA has brought HBM2 to gaming use-cases, where HBM2 temperatures and voltages could be illuminating.
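
For readers who want to poke at this themselves, NVIDIA's NVML interface (queried here through the pynvml Python bindings, as an illustrative assumption) exposes core temperature, board power, and clocks, but offers no query for HBM2 temperature or voltage, which is consistent with what the utilities are showing. A minimal sketch:

    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

    # Core GPU temperature in degrees C; NVML has no HBM2 temperature query
    temp_c = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)

    # Board power draw, reported by NVML in milliwatts
    power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0

    # Current core and memory clocks in MHz
    core_mhz = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
    mem_mhz = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_MEM)

    print(f"{temp_c}C, {power_w:.1f}W, core {core_mhz}MHz, mem {mem_mhz}MHz")
    pynvml.nvmlShutdown()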

NVIDIA Titan V and Titan Xp Average Clockspeeds

                     NVIDIA Titan V   NVIDIA Titan Xp   Percent Difference
Idle                 135MHz           139MHz            -
Boost Clocks         1455MHz          1582MHz           -8.0%
Max Observed Boost   1785MHz          1911MHz           -6.6%
LuxMark Max Boost    1355MHz          1911MHz           -29.0%
Battlefield 1        1651MHz          1767MHz           -6.6%
Ashes: Escalation    1563MHz          1724MHz           -9.3%
DOOM                 1561MHz          1751MHz           -10.9%
Ghost Recon          1699MHz          1808MHz           -6.0%
Deus Ex (DX11)       1576MHz          1785MHz           -11.7%
GTA V                1674MHz          1805MHz           -7.3%
Total War (DX11)     1621MHz          1759MHz           -7.8%
FurMark              1200MHz          1404MHz           -14.5%
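
The percent difference column is simply the Titan V clock taken relative to the Titan Xp clock. As a worked example of the arithmetic (values copied from the table above):

    # Percent difference = (Titan V clock / Titan Xp clock - 1) * 100
    clocks_mhz = {
        # workload: (Titan V, Titan Xp), in MHz, from the table above
        "Battlefield 1": (1651, 1767),
        "DOOM": (1561, 1751),
        "FurMark": (1200, 1404),
    }

    for workload, (titan_v, titan_xp) in clocks_mhz.items():
        pct = (titan_v / titan_xp - 1) * 100
        print(f"{workload}: {pct:.1f}%")  # Battlefield 1: -6.6%, and so on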

Interestingly, LuxMark only brings Titan V to 1355MHz instead of its maximum boost clock, a behavior that differs from every other card we've benched in recent memory. Other compute and gaming tasks do bring the clocks higher, with a reported peak of 1785MHz.

The other takeaway is that the Titan V is consistently outclocked by the Titan Xp. In gaming, Volta's performance gains do not appear to come from clockspeed improvements, unlike Pascal, where higher clockspeeds drove the bulk of the improvement over Maxwell.

Meanwhile, it's worth noting that the HBM2 memory on the Titan V has only one observed clock state: 850MHz. This never deviates, even under FurMark or extended compute and gaming workloads. By comparison, the other consumer/prosumer HBM2 cards on the market, AMD's Vega cards, downclock their HBM2 in high-temperature situations like FurMark, and also feature a low-power 167MHz idle state.
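
This is easy to verify: sampling the memory clock over NVML while a workload runs should return a single state. A short sketch in the same vein as the telemetry example above (pynvml again assumed):

    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    # Sample the memory clock once a second for a minute; on Titan V we
    # would expect every sample to read 850MHz, idle or under load
    observed = set()
    for _ in range(60):
        observed.add(pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_MEM))
        time.sleep(1.0)

    print("Observed memory clock states (MHz):", sorted(observed))
    pynvml.nvmlShutdown()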

[Graph: Idle Power Consumption]

Measuring power at the wall, the Titan V's higher idle and lower load power readings immediately jump out.

[Graph: Load Power Consumption - Battlefield 1]

[Graph: Load Power Consumption - FurMark]

Meanwhile under load, the Titan V's power consumption at the wall is slightly but consistently lower than the Titan Xp's. This is despite the fact that both cards have the same 250W TDP, and NVIDIA's TDP figures have been pretty consistent ever since Maxwell introduced better power management.

[Graph: Idle GPU Temperature]

[Graph: Load GPU Temperature - Battlefield 1]

[Graph: Load GPU Temperature - FurMark]

During the course of benchmarking, GPU-Z reported a significant amount of thermal throttling on the Titan V. That continued in Battlefield 1, where the card oscillated between being limited by GPU underutilization and by temperature. And in FurMark, the Titan V was consistently temperature-limited.

Without HBM2 voltage readings, it is hard to say whether the constant 850MHz memory clock is related to the Titan V's higher idle system draw. At 815mm², the GV100 die is quite large, but then again elements like Volta's tensor cores go unused in gaming. In Battlefield 1, system power draw is actually lower than with the Titan Xp, but GPU-Z suggests that thermal limits are the cause. Typically, what we've seen with other 250W TDP NVIDIA cards is that they hit their TDP limits more often than their temperature limits, so this is an unusual development.
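
For what it's worth, NVML exposes throttle reason flags comparable to GPU-Z's PerfCap Reason field, so the thermal-versus-power question can be checked programmatically. A minimal sketch, again assuming the pynvml bindings:

    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    # Bitmask of the currently active clock throttle reasons
    reasons = pynvml.nvmlDeviceGetCurrentClocksThrottleReasons(handle)

    if reasons & pynvml.nvmlClocksThrottleReasonSwPowerCap:
        print("Power-limited (TDP cap)")
    if reasons & (pynvml.nvmlClocksThrottleReasonSwThermalSlowdown
                  | pynvml.nvmlClocksThrottleReasonHwThermalSlowdown):
        print("Temperature-limited")

    pynvml.nvmlShutdown()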

[Graph: Idle Noise Levels]

[Graph: Load Noise Levels - Battlefield 1]

[Graph: Load Noise Levels - FurMark]

Featuring a nearly identical cooler, the Titan V essentially manages the same noise levels as its Titan siblings.

Comments

  • mode_13h - Wednesday, December 27, 2017 - link

    I don't know if you've heard of OpenCL, but there's no reason why a GPU needs to be programmed in a proprietary language.

    It's true that OpenCL has some minor issues with performance portability, but the main problem is Nvidia's stubborn refusal to support anything past version 1.2.

    Anyway, lots of businesses know about vendor lock-in and would rather avoid it, so it sounds like you have some growing up to do if you don't understand that.
  • CiccioB - Monday, January 1, 2018 - link

    Grow up.
    I repeat: no one is wasting millions on uncertified, unsupported libraries. Let's not even talk about entire frameworks.
    If you think that researchers with budgets of millions are nerds working in a garage with lock-in avoidance as their first thought in the morning, well, grow up, kid.
    Nvidia provides the resources to let them exploit their expensive HW to the most of its potential, reducing time and other associated costs, including when upgrading to better HW. That's what counts when investing millions in a job.
    For your kid's homemade AI joke, you can use whatever alpha library with zero support and certification. Others have already grown up.
  • mode_13h - Friday, January 5, 2018 - link

    No kid here. I've shipped deep-learning based products to paying customers for a major corporation.

    I've no doubt you're some sort of Nvidia shill. Employee? Maybe you bought a bunch of their stock? Certainly sounds like you've drunk their kool aid.

    Your line of reasoning reminds me of how people used to say businesses would never adopt Linux. Now, it overwhelmingly dominates cloud, embedded, and underpins the Android OS running on most of the world's handsets. Not to mention it's what most "researchers with budgets of millions" use.
  • tuxRoller - Wednesday, December 20, 2017 - link

    "The integer units have now graduated their own set of dedicates cores within the GPU design, meaning that they can be used alongside the FP32 cores much more freely."

    Yay! Nvidia caught up to GCN 1.0!
    Seriously, this goes to show how good the GCN arch was. It was probably too ambitious for its time; those old GPUs have aged really well, and it took a long time for games to catch up.
  • CiccioB - Thursday, December 21, 2017 - link

    "Nvidia caught up to GCN 1.0!"
    Yeah! It is known to the entire universe that it is Nvidia that trails AMD in performance.
    Luckily they managed to get Volta out in time, before the bankruptcy.
  • tuxRoller - Wednesday, December 27, 2017 - link

    I'm speaking about architecture, not performance.
  • CiccioB - Monday, January 1, 2018 - link

    New, bigger, costlier architectures with lower performance = fail
  • tuxRoller - Monday, January 1, 2018 - link

    Ah, troll.
  • CiccioB - Wednesday, December 20, 2017 - link

    Useless card
    Vega = #poorvolta
  • StrangerGuy - Thursday, December 21, 2017 - link

    AMD can pay me half their marketing budget and I will still do better than them...by doing exactly nothing. Their marketing is worse than being in a state of non-existence.
