DeepBench Inference: Convolutions

Moving on to convolutions, 8-bit multiply/32-bit accumulate again makes an appearance with INT8 inferencing.

The most striking aspect of the average convolution performance is the Titan Xp's superior INT8 throughput. The numbers are correct, being in line with DeepBench's own Titan Xp inference results, and padding is not responsible for the disparity either.

DL Inference: DeepBench - Convolutions

Breaking out the convolutions into application-specific workloads, we see that the Resnet, Speaker ID, and Vision workloads all showcase the Titan Xp's superior INT8 performance.

DL Inference: DeepBench - Convolutions (DeepSpeech)

DL Inference: DeepBench - Convolutions (Resnet)

DL Inference: DeepBench - Convolutions (Speaker ID)

DL Inference: DeepBench - Convolutions (Vision)

Nothing obvious stands out from the kernels themselves, but if anything, this is likely due to the maturity of the DP4A libraries and drivers on Pascal compared to their Volta counterparts. There's also the chance that Volta is handling these operations solely through its INT32 cores.
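For context, DP4A is a single instruction that performs four 8-bit multiplies and sums the products into a 32-bit accumulator, which is what "8-bit multiply/32-bit accumulate" refers to. A rough sketch of the operation's semantics (an illustration only, not NVIDIA's actual implementation):

```python
# A rough sketch (for illustration only) of the DP4A-style operation used
# for INT8 inference: four 8-bit multiplies whose products are summed into
# a single 32-bit accumulator.

def dp4a(a, b, c):
    """Dot product of two 4-element int8 vectors, accumulated into int32 c."""
    for x, y in zip(a, b):
        assert -128 <= x <= 127 and -128 <= y <= 127, "operands must fit in int8"
        c += x * y
    # Wrap to a signed 32-bit value, as the hardware accumulator would.
    c &= 0xFFFFFFFF
    return c - 0x100000000 if c >= 0x80000000 else c

# A convolution's inner loop is many such accumulations chained together:
acc = 0
acc = dp4a([127, -128, 5, 3], [2, 2, 2, 2], acc)  # acc is now 14
```

Because the products never leave the 32-bit accumulator, INT8 convolutions avoid most of the precision loss of pure 8-bit arithmetic while still quadrupling throughput per instruction relative to FP32.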

  • SirCanealot - Tuesday, July 03, 2018 - link

    No overclocking benchmarks. WAT. ¬_¬ (/s)

    Thanks for the awesome, interesting write up as usual!
  • Chaitanya - Tuesday, July 03, 2018 - link

This is more of an enterprise product than a consumer one, so even if overclocking were enabled, it's something the targeted demographic is not going to use.
  • Samus - Tuesday, July 03, 2018 - link

wooooooosh
  • MrSpadge - Tuesday, July 03, 2018 - link

He even put the "end sarcasm" tag (/s) to point out this was a joke.
  • Ticotoo - Tuesday, July 03, 2018 - link

Where oh where are the macOS drivers? It took six months to get the Pascal Titan drivers.
Hopefully soon.
  • cwolf78 - Tuesday, July 03, 2018 - link

Nobody cares? I wouldn't be surprised if support gets dropped at some point. macOS isn't exactly going anywhere.
  • eek2121 - Tuesday, July 03, 2018 - link

Quite a few developers and professionals use Macs, as do college students. By manufacturer market share, Apple probably has the biggest share; if not, it's definitely in the top 5.
  • mode_13h - Tuesday, July 03, 2018 - link

I doubt it. Linux rules the cloud, and that's where all the real horsepower is. Lately, anyone serious about deep learning is using Nvidia on Linux. It's only second-tier players, like AMD and Intel, who really stand to gain anything by supporting niche platforms like Macs and maybe even Windows/Azure.

    Once upon a time, Apple actually made a rackmount OS X server. I think that line has long since died off.
  • Freakie - Wednesday, July 04, 2018 - link

Lol, those developers and professionals use their Macs to remote into their compute servers, not to do any of the number crunching themselves.

The idea of using a personal computer for anything except writing and debugging code is next to unheard of in an environment that requires the kind of power these GPUs are meant to output. The machines they use for the actual computations are, 99.5% of the time, dedicated servers used for nothing but heavy compute tasks, usually with no graphical interface, just a straight command line.
  • philehidiot - Wednesday, July 04, 2018 - link

    If it's just a command line why bother with a GPU like this? Surely integrated graphics would do?

    (Even though this is a joke, I'm not sure I can bear the humiliation of pressing "submit")
