(All deep learning applications have been implemented using Nvidia’s TensorFlow NGC Docker container.)
This application prices a portfolio of LIBOR swaptions under the LIBOR Market Model using a Monte-Carlo simulation. It also computes the Greeks.
In each Monte-Carlo path, the LIBOR forward rates are generated randomly at all required maturities following the LIBOR Market Model, starting from the initial LIBOR rates. The swaption portfolio payoff is then computed and discounted to the pricing date. Averaging the per-path prices gives the final net present value of the portfolio.
The full algorithm is illustrated in the processing graph below:
More details can be found in Prof. Mike Giles’ notes [1].
This benchmark uses a portfolio of 15 swaptions with maturities between 4 and 40 years and 80 forward rates (and hence 80 delta Greeks). To study the performance, the number of Monte-Carlo paths is varied between 128K and 2,048K.
[1] M. Giles, “Monte Carlo evaluation of sensitivities in computational finance,” HERCMA Conference, Athens, Sep. 2007.
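For illustration, a minimal single-factor Python/NumPy sketch of the path generation and averaging is given below. The function name, arguments, and the per-rate volatility structure are assumptions made for clarity, not the benchmark's actual implementation; the drift term follows the spot-measure LIBOR Market Model dynamics described in [1].

```python
import numpy as np

def lmm_portfolio_npv(L0, sigma, delta, n_steps, n_paths, payoff, seed=0):
    """Hypothetical one-factor LIBOR Market Model Monte-Carlo pricer sketch.

    L0     -- initial forward LIBOR rates, shape (n_rates,)
    sigma  -- assumed log-normal volatility per forward rate, shape (n_rates,)
    delta  -- accrual period length in years (e.g. 0.25)
    payoff -- callable mapping simulated rates to the portfolio payoff,
              discounted to the pricing date
    """
    rng = np.random.default_rng(seed)
    n_rates = L0.size
    total = 0.0
    for _ in range(n_paths):
        L = L0.copy()
        for n in range(n_steps):                 # advance one accrual period
            z = rng.standard_normal()            # single Brownian driver
            for i in range(n + 1, n_rates):      # rates that have not yet fixed
                # drift under the spot measure (see [1])
                mu = sigma[i] * sum(delta * sigma[j] * L[j] / (1 + delta * L[j])
                                    for j in range(n + 1, i + 1))
                L[i] *= np.exp((mu - 0.5 * sigma[i] ** 2) * delta
                               + sigma[i] * np.sqrt(delta) * z)
        total += payoff(L)
    return total / n_paths                       # Monte-Carlo estimate of the NPV
```

The delta Greeks reported by the benchmark would additionally require pathwise or adjoint sensitivities as described in [1]; these are omitted from the sketch.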
This benchmark application prices a portfolio of American call options using a binomial lattice (the Cox-Ross-Rubinstein method).
For a given size N of the binomial tree, the option payoff at the N leaf nodes is computed first (the option's value at maturity for the different terminal stock prices, with the up/down moves calibrated to the Black-Scholes model). The pricer then steps backwards in time towards the root node: at each node, the values of the two child nodes are weighted by the pre-computed pseudo-probabilities that the price goes up or down, summed, and discounted at the risk-free rate; for American options, the result is compared with the immediate exercise payoff and the larger value is kept. After repeating this process for all time steps, the root node holds the present value.
The algorithm is illustrated in the graph below:
This binomial pricing method is applied for every option in the portfolio.
For this benchmark, we use 1,024 steps (the depth of the tree). We vary the number of options in the portfolio to study the performance.
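A minimal Python/NumPy sketch of the Cox-Ross-Rubinstein backward induction for a single option is shown below; the function name and parameters are illustrative, and the benchmark would apply this per option across the portfolio.

```python
import numpy as np

def crr_american_call(S0, K, r, sigma, T, n_steps=1024):
    """Hypothetical CRR binomial pricer sketch for one American call."""
    dt = T / n_steps
    u = np.exp(sigma * np.sqrt(dt))            # up-move factor
    d = 1.0 / u                                # down-move factor
    p = (np.exp(r * dt) - d) / (u - d)         # risk-neutral pseudo-probability
    disc = np.exp(-r * dt)                     # per-step discount factor
    # payoff at the leaf nodes (value at maturity for each terminal stock price)
    j = np.arange(n_steps + 1)
    V = np.maximum(S0 * u ** j * d ** (n_steps - j) - K, 0.0)
    # step backwards towards the root, combining child nodes and discounting
    for step in range(n_steps, 0, -1):
        V = disc * (p * V[1:] + (1.0 - p) * V[:-1])
        j = np.arange(step)
        S = S0 * u ** j * d ** (step - 1 - j)  # stock prices at this level
        V = np.maximum(V, S - K)               # early-exercise check (American)
    return V[0]                                # root node holds the present value
```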
This benchmark application prices a portfolio of European options using the Black-Scholes-Merton formula.
The pricer calculates both the call and put price for a batch of options, each defined by its current stock price, strike price, and maturity. It applies the Black-Scholes-Merton formula to each option in the portfolio.
For this benchmark, we repeat the application of the formula 100 times to increase the overall runtime for the performance measurements. The number of options in the portfolio is varied to study the performance.
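The closed-form prices follow directly from the formula; a minimal vectorised Python sketch is given below, with a constant risk-free rate and volatility assumed across the batch for brevity.

```python
import numpy as np
from scipy.stats import norm

def black_scholes_merton(S, K, T, r=0.03, sigma=0.2):
    """Hypothetical vectorised Black-Scholes-Merton sketch.

    S, K, T  -- arrays of spot prices, strikes, and maturities (the portfolio)
    r, sigma -- assumed constant rate and volatility for simplicity
    """
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    call = S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
    put = K * np.exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)
    return call, put
```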
This application benchmarks the training of a deep Recurrent Neural Network (RNN), as illustrated below.
RNNs are at the core of many deep learning applications in finance, as they show excellent prediction performance for time-series data.
For benchmark purposes, we focus on a single layer of such a network, as this is the fundamental building block of more complex deep RNN models. We use TensorFlow, as optimised by Nvidia in their NGC Docker container.
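A minimal tf.keras sketch of what a single-layer RNN training benchmark could look like is shown below; the shapes, synthetic data, and training settings are assumptions for illustration, not the benchmark's actual configuration.

```python
import numpy as np
import tensorflow as tf

# illustrative shapes: batch size, sequence length, input features, hidden units
batch, timesteps, features, units = 128, 32, 16, 256
x = np.random.rand(batch, timesteps, features).astype("float32")
y = np.random.rand(batch, units).astype("float32")

# a single SimpleRNN layer -- the building block being benchmarked
model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(units, input_shape=(timesteps, features)),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=3, batch_size=batch)   # the timed training loop
```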
This application benchmarks the inference performance of a deep Recurrent Neural Network (RNN), as illustrated below.
RNNs are at the core of many deep learning applications in finance, as they show excellent prediction performance for time-series data.
For benchmark purposes, we focus on a single layer of such a network, as this is the fundamental building block of more complex deep RNN models. We use TensorFlow, as optimised by Nvidia in their NGC Docker container.
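Correspondingly, a minimal inference sketch (again with assumed shapes) only runs the forward pass:

```python
import numpy as np
import tensorflow as tf

batch, timesteps, features, units = 128, 32, 16, 256
x = np.random.rand(batch, timesteps, features).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(units, input_shape=(timesteps, features)),
])
model.predict(x, batch_size=batch)            # warm-up (graph build, allocation)
preds = model.predict(x, batch_size=batch)    # the timed inference pass
```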
This application benchmarks the training of a deep Long Short-Term Memory (LSTM) network. The LSTM is a modified version of the vanilla RNN that overcomes the problems of vanishing or exploding gradients during back-propagation, allowing it to learn complex long-term dependencies better than a plain RNN.
RNNs are at the core of many deep learning applications in finance, as they show excellent prediction performance for time-series data. In fact, LSTMs are often the preferred form of RNN in practical applications.
For benchmark purposes, we focus on a single layer of such a network, as this is the fundamental building block of more complex deep LSTM models. We use TensorFlow, as optimised by Nvidia in their NGC Docker container.
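The training sketch from the RNN section carries over directly; under the same assumed shapes, only the layer type changes:

```python
import tensorflow as tf

# same illustrative shapes as in the RNN training sketch above
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(256, input_shape=(32, 16)),  # LSTM cell replaces SimpleRNN
])
model.compile(optimizer="adam", loss="mse")
```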
This application benchmarks the inference performance of a deep Long Short-Term Memory (LSTM) network. The LSTM is a modified version of the vanilla RNN that overcomes the problems of vanishing or exploding gradients during back-propagation, allowing it to learn complex long-term dependencies better than a plain RNN.
RNNs are at the core of many deep learning applications in finance, as they show excellent prediction performance for time-series data. In fact, LSTMs are often the preferred form of RNN in practical applications.
For benchmark purposes, we focus on a single layer of such a network, as this is the fundamental building block of more complex deep LSTM models. We use TensorFlow, as optimised by Nvidia in their NGC Docker container.
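As with the RNN inference sketch, only the layer type changes under the same assumed shapes:

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(128, 32, 16).astype("float32")
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(256, input_shape=(32, 16)),
])
model.predict(x, batch_size=128)           # warm-up run
preds = model.predict(x, batch_size=128)   # the timed inference pass
```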
System | Operating System | Memory (RAM) | Compiler | ECC | Precision Mode | Other |
---|---|---|---|---|---|---|
Intel Haswell | RedHat EL 6.6 (64-bit) | 128 GB | Intel Compiler 17.0 | on | double | 2x Hyperthreading |
IBM Power8 | Ubuntu 14.10 (64-bit) | 256 GB | IBM Compiler 13.1 | on | double | 8x SMT |
System | Operating System | Memory (RAM) | Compiler | ECC | Precision Mode | Other |
---|---|---|---|---|---|---|
Intel Haswell | RedHat EL 6.6 (64-bit) | 128 GB | Intel Compiler 17.0 | on | double | no Hyperthreading |
System | Operating System | Memory (RAM) | Compiler | ECC | Precision Mode | Other |
---|---|---|---|---|---|---|
Intel Haswell | RedHat EL 6.6 (64-bit) | 128 GB | Intel Compiler 17.0 | on | double | 2x Hyperthreading |
Processor | Cores | Logical Cores | Frequency | GFLOPs (double) | Max. Memory | Max. Memory B/W |
---|---|---|---|---|---|---|
Dual Intel Xeon E5-2698 v3 CPU (Haswell) | 2 x 16 | 2 x 32 | 2.30 GHz | 2 x 663 | 768 GB | 2 x 68 GB/s |
Dual IBM Power8 iSeries 8286-42A CPU | 2 x 12 | 2 x 96 (SMT8) | 3.52 GHz | 2 x 338 | 1 TB | 2 x 384 GB/s |
Processor | Cores | Logical Cores | Frequency | GFLOPs (double) | Max. Memory | Max. Memory B/W |
---|---|---|---|---|---|---|
Dual Intel Xeon E5-2698 v3 CPU (Haswell) | 2 x 16 | 2 x 32 | 2.30 GHz | 2 x 663 | 768 GB | 2 x 68 GB/s |
Processor | Cores | Logical Cores | Frequency | GFLOPs (double) | Max. Memory | Max. Memory B/W |
---|---|---|---|---|---|---|
Dual Intel Xeon E5-2698 v3 CPU (Haswell) | 2 x 16 | 2 x 32 | 2.30 GHz | 2 x 663 | 768 GB | 2 x 68 GB/s |
Performance charts (higher is better):
*the sequential version runs on a single core of an Intel Xeon E5-2698 v3 CPU
*the results are normalised to the P100 GPU performance