DeepBench Inference: RNN and Sparse GEMM

Rounding out the last of our DeepBench inference tests are RNN and Sparse GEMM, both available in single precision only. The FP16 parameter can still be selected, but given the uniformly low results across the board, this appears to be an artifact rather than a supported mode.

DL Inference: DeepBench - RNN (LSTM)

DL Inference: DeepBench - RNN (GRU)

DL Inference: DeepBench - Sparse GEMM

While RNNs could, in principle, also benefit from reduced-precision acceleration, DeepBench and NVIDIA only support single-precision RNN inference at this time.
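As a rough illustration of the operation the Sparse GEMM test measures, here is a minimal CPU-side sketch using SciPy: a sparse weight matrix (most entries pruned to zero) multiplied by a dense activation batch. The matrix sizes and 90% sparsity below are illustrative assumptions, not DeepBench's actual problem set, and DeepBench itself runs these kernels on the GPU in single precision.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)

# Illustrative sizes and sparsity; DeepBench's real kernel shapes differ.
m, k, n = 1024, 1024, 256
density = 0.1  # 90% of weights pruned to zero

# Sparse weight matrix in CSR format, single precision.
w = sparse.random(m, k, density=density, format="csr",
                  dtype=np.float32, random_state=0)
# Dense activation batch.
x = rng.standard_normal((k, n)).astype(np.float32)

# Sparse GEMM: only the stored nonzero weights contribute.
y = w @ x
```

The payoff of sparse GEMM is that work scales with the nonzero count rather than the full m×k×n product, which is why pruned networks can see inference speedups when the sparse kernels are well optimized.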

65 Comments

  • mode_13h - Monday, July 9, 2018 - link

    Nice. You gonna water-cool it?

    https://www.anandtech.com/show/12483/ekwb-releases...
  • wumpus - Thursday, July 12, 2018 - link

    Don't forget double precision GFLOPS. Just because fp16 is the next new thing doesn't mean nVidia forgot their existing CUDA customers and left out the doubles. I'm not sure what you would really benchmark, billion-point FFTs or something?
  • mode_13h - Thursday, July 12, 2018 - link

    Yeah, good point. Since GPUs don't support denormals, you run into the limitations of fp32 much more quickly than on many CPU implementations.

    I wonder if Nvidia will continue to combine tensor cores AND high-fp64 performance in the same GPUs, or if they'll bifurcate into deep-learning and HPC-centric variants.
  • byteLAKE - Friday, July 13, 2018 - link

    Yes, indeed. Mixed precision does not come out of the box and requires development. We've done some research and actual projects in the space (described here https://medium.com/@marcrojek/how-artificial-intel...) and the results show a speedup.
  • ballsystemlord - Monday, September 30, 2019 - link

    Both myself and techpowerup get 14.90Tflops SP. Can you check your figures?

    https://www.techpowerup.com/gpu-specs/titan-v.c305...
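For reference, the 14.90 TFLOPS single-precision figure the commenter cites follows directly from Titan V's published specifications (5120 FP32 CUDA cores at a 1455 MHz boost clock, counting an FMA as two FLOPs):

```python
# Peak FP32 throughput = cores x 2 FLOPs per FMA x clock rate
cuda_cores = 5120          # Titan V FP32 CUDA cores
boost_clock_hz = 1455e6    # advertised boost clock, 1455 MHz
peak_tflops = cuda_cores * 2 * boost_clock_hz / 1e12
print(f"{peak_tflops:.2f} TFLOPS")  # prints "14.90 TFLOPS"
```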
