Recurrent Neural Networks: LSTM 

Our loyal readers know that we love real-world enterprise benchmarks. So in our quest for better benchmarks and better data, Pieter Bovijn, the head of research at the Dutch MCT IT Bachelor program, turned a real-world AI model into a benchmark.

The input of the model is time series data, which is used to make predictions on how the time series will behave in the future. As this is a typical sequence prediction problem, we used a Long Short-Term Memory (LSTM) network as our neural network. A type of RNN, an LSTM selectively "remembers" patterns over a certain duration of time.
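
For readers who want a concrete picture, a minimal sketch of such a model in Keras could look like the following. The window length, feature count, and layer sizes are illustrative assumptions, not the exact MCT configuration:

```python
import numpy as np
import tensorflow as tf

# Illustrative dimensions only -- the real MCT benchmark uses its own data shapes.
WINDOW = 60      # number of past time steps fed to the network
FEATURES = 8     # number of values measured at each time step

model = tf.keras.Sequential([
    # The LSTM layer selectively "remembers" patterns across the input window
    tf.keras.layers.LSTM(128, input_shape=(WINDOW, FEATURES)),
    # A dense head predicts the next value of the series
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Dummy data standing in for the real 50 GB time series
x = np.random.rand(1024, WINDOW, FEATURES).astype("float32")
y = np.random.rand(1024, 1).astype("float32")
model.fit(x, y, epochs=5, batch_size=128)
```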

LSTMs, however, come with the disadvantage that they are a lot more bandwidth intensive. We quote a recent paper on the topic:

LSTMs exhibit quite inefficient memory access pattern when executed on mobile GPUs due to the redundant data movements and limited off-chip bandwidth.

So we were very curious about how the LSTM network would behave. After all, our server Xeons have ample bandwidth, with a massive 38.5 MB of L3 cache and six channels of DDR4-2666/2933 (128-141 GB/s per socket). We ran this test with 50 GB of data and trained the model for 5 epochs.

Of course, you have to make the most of the available AVX/AVX2/AVX-512 SIMD power. That is why we tested with three different setups (a sketch of the CPU tuning knobs involved follows the list):

  1. We used out-of-the-box TensorFlow installed with Conda
  2. We tested with the Intel-optimized TensorFlow from the PyPI repository
  3. We compiled TensorFlow from source using Bazel, which allowed us to use the very latest version of TensorFlow
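
Regardless of which build you end up with, CPU performance also depends on how the MKL-DNN backend is threaded and pinned. Below is a minimal sketch of the knobs involved; the thread counts and environment variable values are illustrative assumptions, not our exact settings:

```python
import os

# Hints for Intel's MKL-DNN backend -- values here are illustrative, not tuned.
# They must be set before TensorFlow is imported.
os.environ["OMP_NUM_THREADS"] = "28"          # e.g. physical cores per socket
os.environ["KMP_BLOCKTIME"] = "1"
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"

import tensorflow as tf

print("TensorFlow", tf.__version__)

# TF 2.x threading API: intra-op parallelism for the math kernels,
# inter-op parallelism for independent graph nodes.
tf.config.threading.set_intra_op_parallelism_threads(28)
tf.config.threading.set_inter_op_parallelism_threads(2)
```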

The results are very interesting.

[Graph: LSTM MCT Benchmark]

The most intensive TensorFlow applications are typically run on GPUs, so extra care must be taken when you test on a CPU. AMD's Zen core only has two 128-bit FMACs and is limited to (256-bit) AVX2. Intel's high-end Xeons have two 256-bit FMACs and one 512-bit FMAC. In other words, on paper Intel's Xeon can deliver four times more FLOPs per clock cycle than AMD. But only if the software is right. Intel has been working intensively with Google to optimize TensorFlow for Intel's new Xeons out of necessity: it has to offer a credible alternative in those situations where an NVIDIA Tesla is simply too expensive. Meanwhile, AMD hopes that ROCm catches on and that in the future software engineers run TensorFlow on a Radeon Pro.
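
A quick back-of-the-envelope calculation makes that gap concrete, counting FP32 lanes and treating a fused multiply-add as two FLOPs (per core, per cycle):

```python
# Peak FP32 FLOPs per clock per core, counting an FMA as 2 FLOPs.
# Zen: two 128-bit FMA units -> 2 units * 4 FP32 lanes * 2 = 16 FLOPs/cycle
zen_flops_per_cycle = 2 * (128 // 32) * 2

# Skylake-SP (high-end): the two 256-bit units fuse into one 512-bit operation,
# plus a dedicated 512-bit unit -> effectively 2 * 16 FP32 lanes * 2 = 64 FLOPs/cycle
skylake_flops_per_cycle = 2 * (512 // 32) * 2

print(zen_flops_per_cycle, skylake_flops_per_cycle)   # 16 64
print(skylake_flops_per_cycle / zen_flops_per_cycle)  # 4.0 -- the "four times" above
```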

Of course, the big question is how this compares to a GPU. Let us see how our NVIDIA Titan RTX deals with this workload.

[Graph: LSTM MCT Benchmark (vs GPU)]

First of all, we noticed that FP16 did not make much of a difference. Secondly, we were quite amazed that our Titan RTX was less than 3 times faster than our dual Xeon setup.
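
For reference, on current TensorFlow 2.x releases, enabling FP16 on a card like the Titan RTX is a one-liner through the Keras mixed-precision API (this API was still experimental when this article was written, so treat the snippet as a sketch rather than our exact mechanism):

```python
import tensorflow as tf

# Run compute in float16 while keeping variables in float32; the Titan RTX's
# tensor cores are only exercised when kernels can exploit FP16.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# Any model built after this point inherits the policy.
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(128, input_shape=(60, 8)),
    tf.keras.layers.Dense(1, dtype="float32"),  # keep the output layer in FP32
])
```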

Investigating further with NVIDIA's System Management Interface (SMI), we found that the GPU ran at its highest turbo speed of 1.9 GHz, which is higher than the expected 1.775 GHz. Meanwhile, utilization dropped to 40% from time to time.
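
The same clock and utilization figures can also be polled programmatically. A small sketch using the NVML Python bindings (assuming the nvidia-ml-py / pynvml package is installed) shows the idea:

```python
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

try:
    for _ in range(10):  # sample for roughly ten seconds
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        sm_clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_SM)
        print(f"SM clock: {sm_clock} MHz, GPU utilization: {util.gpu}%")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```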

Ultimately this is another example of how real-world applications behave differently from benchmarks, and how important software optimization is. If we had just used Conda, the results above would have been very different: using the right optimized software made the application run 2 to 6 times faster. This is also another data point suggesting that CNNs might be one of the best use cases for GPUs. You should, of course, use a GPU to decrease training times of complex LSTMs. Still, this kind of neural network is a bit more tricky - you cannot simply add more GPUs to further decrease training time.

Comments

  • Gondalf - Tuesday, July 30, 2019 - link

    Kudos to the article from a technical point of view :), a little less for the weak analysis of the server market. Johan says that Intel is slowing down in servers but that the server market is growing fast.
    Unfortunately it is not: Q1 this year was the worst quarter for the server market in 8 quarters, with growth of only 1%. Q2 will likely be on a negative trend; moreover, there is a general consensus that 2019 will be a negative year, with a drop in global revenue.
    So the recent Intel drop is consistent with a drop in demand in China in Q2.

    It should be underlined that a GPU has to be driven by a host: wherever a GPU like a Tesla goes in, there are one or two Xeons on the motherboard.
    A GPU is only an accelerator, and without a CPU it is useless. Intel's slides about upcoming threats from competitors relate to the existence of AMD in HPC, IBM, and some sparse ARM-based SKUs for custom applications.
    A GPU is welcome; it helps to sell more Xeons.
  • eastcoast_pete - Tuesday, July 30, 2019 - link

    More a question than anything else: What is the state of AI-related computing on AMD (graphics) hardware? I know NVIDIA is very dominant, but is it mainly due to an existing software ecosystem?
  • BenSkywalker - Wednesday, July 31, 2019 - link

    AMD has two major hurdles to overcome when specifically looking at AI/ML on GPUs: essentially non-existent software support and essentially non-existent hardware support. AMD has chosen the route of focusing on general-purpose cores that can perform solidly on a variety of traditional tasks, both in hardware and software. AI/ML benefit enormously from specialized hardware that in turn takes specialized software to utilize.

    This entire article is stacking up $40k worth of Intel CPUs against a consumer nVidia part, and Intel gets crushed whenever nVidia can use its specialized hardware. Throw a few Tesla V100s in to give us something resembling price parity and Intel would be eviscerated.

    AMD needs tensor cores, a decade's worth of tools development, and a decade's worth of pipeline development (university training, integration into new systems, and build-out onto those systems - not hardware pipeline) in order to get to where nVidia is now, even if nVidia were standing still.

    The software ecosystem is the biggest problem long term. Everyone working in the field uses CUDA whenever they can; even if AMD mopped the floor with nVidia on the hardware side, for their GPUs to get traction they would need all the development tools nVidia has spent a decade building. And right now their GPUs are held back by nVidia's specialized hardware anyway.
  • abufrejoval - Tuesday, July 30, 2019 - link

    Some telepathy must be involved: Just a day or two before this appeared online, I was looking for Johan de Gelas' last appearance on AT in 2018 and thinking that it was high time for one of my favorite authors to publish something. Ever so glad you came out with the typical depth, quality and relevance!

    While GAFA and BATX seem to lead AI and the frameworks, their problems and solutions mostly fit their own needs, and as it turns out the vast majority of use cases cannot afford the depth and quality they require, nor do they benefit from it either: if the responsibility of your AI is to monitor for broken drill bits from vibration, sound, and normal and thermal visuals, the ability to identify cats in every shape and color has no benefit.

    The big guys typically need to solve a sharply defined problem in a single domain at a very high quality: they don't combine visual with audio, and the inherent context in time-series video is actually ignored, as their AIs stare at each frame independently, hunting for known faces or things to tag and correlate with social graphs and products.

    Iterating over ML approaches, NN designs and adequate hyperparameters for training requires months, even with clusters of DGX workstations and highly experienced ML experts. What makes all that effort worthwhile is that the inference part can then run at relatively low power on your mobile phone inside WeChat, Facebook, Instagram, Google keyboard/translate (or some other "innocent" background app) at billions of instances: Trial and train until you have trained the single sufficiently good network design in days, weeks or even months and then you can deploy inference to billions of devices on battery power.

    Few of us smaller IT companies can replicate that, but again, few of us need to, because we have a vastly higher number of small problems to solve, with a few orders of magnitude less of a difference in training:inference effort: 1 Watt of difference makes or breaks the usability of an inference model on mobile target devices, while 100 Watts of difference in a couple of servers running a dozen instances of a less optimized, well-trained model won't justify an ML-expert team working through another five pizzas.

    As the complexity of your approach (e.g. XGBoost or RF) is perhaps much smaller, or your networks are much simpler than those of GAFA/BATX, you actually worry about how to scale in, not out, and about batching dozens of trainings for model iteration, mixing that with some QA or even production inference streams on GPUs - which Linux understands or treats little better than a printer with DMA.

    Intel quite simply understands that while you get famous with the results you get from training AIs, e.g. on GPUs, the money is made from inference at the lowest power and lowest operational overhead: Linux (or Unix for that matter) knows how to manage virtual memory (preferably uniform) and CPUs (preferably few); a memory hierarchy deeper than the manual for your VCR, and more types and numbers of cores than Unix's first hard disk had blocks, confuse it.

    But I'd dare say that AMD understood this much earlier and much better. When they came up with HSA on their first APUs, that GPGPU blend, which allowed switching the compute model with a function call, made CUDA look very brutish indeed.

    Writing code able to take full advantage of these GPGPU capabilities is still a nightmare, because high-level languages have abstraction levels far too low for what these APUs or VNNI CPUs can execute in a single clock cycle, but from the way I read it, the Infinity Fabric is about making those barriers as low as they can possibly be in terms of hardware and memory space.

    And RISC-V goes beyond what all x86 advocates still suffer from: An instruction set that's not designed for modular expandability.
  • FunBunny2 - Wednesday, July 31, 2019 - link

    "Trial and train until you have trained the single sufficiently good network design in days, weeks or even months and then you can deploy inference to billions of devices on battery power."

    When and if this capability is used for something useful, e.g. a cure for cancer, rather than yet another scheme to extract moolah from rubes, then I'll be interested.
  • keg504 - Tuesday, July 30, 2019 - link

    Why do you say on the testing page that AMD is colour coded in orange, and then put them in grey?
  • 808Hilo - Wednesday, July 31, 2019 - link

    Client/server renamed again...
    There is no AI. That stuff is very, very dumb. Look at the diagram above. Nothing new: data, a script does something, parsing and readout of vastly unimportant info. I have not seen a single meaningful AI app. It's now year 25 of the Internet and I am terribly bored. Next please.
  • J7SC_Orion - Wednesday, July 31, 2019 - link

    This explains very nicely why Intel has been raiding GPU staff and pouring resources into Xe Discrete Graphics... if you can't beat them, join them?
  • tibamusic.com - Saturday, August 3, 2019 - link

    Thank you very much.
  • Threska - Saturday, August 3, 2019 - link

    What a coincidence. The latest humble bundle is "Data Analysis & Machine Learning by O'Reilly"

    https://www.humblebundle.com/books/data-analysis-m...
