HPE DLBS TensorRT: ResNet50 and ImageNet

Another unique aspect of HPE DLBS is its benchmark for TensorRT, NVIDIA's inference optimization engine. In recent years, NVIDIA has pushed to integrate TensorRT with new DL features such as INT8/DP4A and the tensor core 16-bit accumulator mode for inferencing.

Using a Caffe model, TensorRT adjusts the model as needed for inferencing at a given precision.
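The INT8 path depends on quantization: each tensor is mapped to 8-bit integers via a scale factor, which TensorRT derives through calibration. A minimal sketch of symmetric linear quantization with max-abs calibration, purely illustrative and not TensorRT's actual calibrator:

```python
def quantize_int8(values, scale):
    """Symmetric linear quantization: real values -> int8 codes via a per-tensor scale."""
    return [max(-127, min(127, round(v / scale))) for v in values]

def dequantize(qvalues, scale):
    """Map int8 codes back to approximate real values."""
    return [q * scale for q in qvalues]

# Toy max-abs calibration: pick the scale so the largest magnitude maps to 127.
weights = [0.02, -1.27, 0.64, 0.5]
scale = max(abs(v) for v in weights) / 127
q = quantize_int8(weights, scale)
approx = dequantize(q, scale)
print(q)       # int8 codes in [-127, 127]
print(approx)  # values close to the originals, off by at most half a scale step
```

The quantization error is bounded by half the scale step, which is why calibrating the scale to the actual activation range matters for INT8 accuracy.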

DL Inference: DLBS TensorRT- ResNet50 and ImageNet Throughput

In addition, we ran batch sizes 64, 512, and 1024 for Titan X (Maxwell) and Titan Xp, and batch sizes 128, 256, and 640 for Titan V; the results were within 1 to 5% of the charted batch sizes, so we've omitted them from the graph.

The high INT8 performance of Titan Xp somewhat corroborates the GEMM/convolution results; both workloads appear to be utilizing DP4A. Meanwhile, it's not clear how Titan V implements DP4A. All we know is that it is supported by the Volta instruction set, and Volta does have those separate INT32 units.
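For reference, DP4A's semantics are simple to state: it takes two 32-bit words, treats each as four packed signed 8-bit lanes, computes their dot product, and accumulates the result into a 32-bit integer. A plain-Python sketch of the operation (the `dp4a` helper here is an illustrative stand-in mirroring the CUDA `__dp4a` intrinsic, not hardware code):

```python
import struct

def dp4a(a: int, b: int, c: int) -> int:
    """Simulate the CUDA __dp4a intrinsic: unpack two 32-bit words into four
    signed 8-bit lanes each, dot them, and accumulate into 32-bit integer c."""
    a_lanes = struct.unpack("4b", struct.pack("<i", a))
    b_lanes = struct.unpack("4b", struct.pack("<i", b))
    acc = c + sum(x * y for x, y in zip(a_lanes, b_lanes))
    # Wrap to 32-bit two's complement, as the hardware accumulator would.
    acc &= 0xFFFFFFFF
    return acc - 0x100000000 if acc >= 0x80000000 else acc

# Pack lanes (1, 2, 3, 4) and (5, 6, 7, 8), then accumulate onto 10:
a = struct.unpack("<i", struct.pack("4b", 1, 2, 3, 4))[0]
b = struct.unpack("<i", struct.pack("4b", 5, 6, 7, 8))[0]
print(dp4a(a, b, 10))  # 1*5 + 2*6 + 3*7 + 4*8 + 10 = 80
```

One such instruction does four INT8 multiply-accumulates per clock per unit, which is where the 4x INT8-over-FP32 throughput claims come from.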


  • mode_13h - Monday, July 9, 2018 - link

    Nice. You gonna water-cool it?

    https://www.anandtech.com/show/12483/ekwb-releases...
  • wumpus - Thursday, July 12, 2018 - link

    Don't forget double precision GFLOPS. Just because fp16 is the next new thing, NVIDIA didn't forget their existing CUDA customers and leave out the doubles. I'm not sure what you would really benchmark, billion-point FFTs or something?
  • mode_13h - Thursday, July 12, 2018 - link

    Yeah, good point. Since GPUs don't support denormals, you run into the limitations of fp32 much more quickly than on many CPU implementations.

    I wonder if Nvidia will continue to combine tensor cores AND high-fp64 performance in the same GPUs, or if they'll bifurcate into deep-learning and HPC-centric variants.
  • byteLAKE - Friday, July 13, 2018 - link

    Yes, indeed. Mixed precision does not come out of the box and requires development. We've done some research and actual projects in the space (described here: https://medium.com/@marcrojek/how-artificial-intel...), and the results show a speedup.
  • ballsystemlord - Monday, September 30, 2019 - link

    Both myself and techpowerup get 14.90Tflops SP. Can you check your figures?

    https://www.techpowerup.com/gpu-specs/titan-v.c305...
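
The 14.90 TFLOPS figure the commenter cites follows from the standard peak-throughput formula: CUDA cores × 2 FLOPs per FMA × boost clock. A quick check, assuming Titan V's commonly listed specs (5120 CUDA cores, 1455 MHz boost clock):

```python
def peak_tflops(cores, flops_per_clock, clock_ghz):
    """Theoretical peak = cores * FLOPs per clock per core * clock (GHz) / 1000."""
    return cores * flops_per_clock * clock_ghz / 1000.0

# Titan V single precision: one FMA (2 FLOPs) per CUDA core per clock.
print(round(peak_tflops(5120, 2, 1.455), 2))  # ~14.9 TFLOPS
```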
