Note that the library has been rebranded to the “oneAPI Deep Neural Network Library (oneDNN)” since this article was published. See the oneAPI Deep Neural Network Library (oneDNN) page for more information about the rebranding.
The Apache MXNet (incubating) community has announced the v1.2.0 release of the Apache MXNet* deep learning framework. One of the most important features in this release is the Intel-optimized CPU backend: MXNet now integrates the Intel® Math Kernel Library for Deep Neural Networks (Intel® MKL-DNN) to accelerate neural network operators such as Convolution, Deconvolution, FullyConnected, Pooling, Batch Normalization, Activation, LRN, and Softmax, as well as common operators such as sum and concat. More details are available in the release notes and release blog. This article describes how to use the new backend and how much faster v1.2.0 runs on CPU platforms.
In deployment environments, latency is usually the sensitive metric, so additional optimizations are applied to reduce latency and improve real-time behavior, especially at batch size one.
As the following chart shows, the latency of single-image inference (batch size one) is significantly reduced.
Figure 1. Latency is calculated as (1000 × batch size / throughput); the unit is ms.
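The caption's throughput-to-latency conversion can be sketched in a few lines of Python. The 83.7 images/second figure used in the example is the BS=1 ResNet-50 throughput quoted in this article:

```python
def latency_ms(batch_size, throughput):
    """Convert throughput (images/second) into per-batch latency in ms."""
    return 1000.0 * batch_size / throughput

# ResNet-50 at BS=1 with the MKL-DNN backend (83.7 images/second)
print(round(latency_ms(1, 83.7), 2))  # -> 11.95 ms per image
```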
For large batch sizes, such as BS=32, throughput is greatly improved with the Intel-optimized backend.
As the following chart shows, throughput at batch size 32 is roughly 23.4x to 56.9x higher than with the original CPU backend.
The new backend also scales well with batch size. In the chart below, throughput of the original CPU backend stays roughly constant at about eight images/second regardless of batch size.
The new implementation shows very good batch scalability: for ResNet-50*, throughput rises from 83.7 images/second (BS=1) to 199.3 images/second (BS=32).
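The scaling claims above are easy to verify with a back-of-the-envelope check (this is not part of the benchmark script, just arithmetic on the numbers quoted in this article):

```python
# ResNet-50 throughputs quoted above for the MKL-DNN backend (images/second)
bs1, bs32 = 83.7, 199.3

# batch scalability of the new backend from BS=1 to BS=32
print(round(bs32 / bs1, 2))   # -> 2.38x

# versus the original CPU backend, which stays near 8 images/second
print(round(bs32 / 8.0, 1))   # -> 24.9x at BS=32, within the 23.4-56.9x range above
```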
Benchmark script: benchmark_score.py
CMD to reproduce the results:
$ export KMP_AFFINITY=granularity=fine,compact,1,0
$ export vCPUs=`cat /proc/cpuinfo | grep processor | wc -l`
$ export OMP_NUM_THREADS=$((vCPUs / 2))
Install from PyPI
Install Prerequisites: wget and latest pip (If Needed)
$ sudo apt-get update
$ sudo apt-get install -y wget python gcc
$ wget https://bootstrap.pypa.io/get-pip.py && sudo python get-pip.py
Install MXNet with oneMKL-DNN Acceleration
MXNet with the oneMKL-DNN backend has been available since the 1.2.0 release.
$ pip install mxnet-mkl==1.2.0 [--user]
Please note that the mxnet-mkl package is built with USE_BLAS=openblas. To get the additional performance boost from MKL BLAS, install MXNet from source instead.
Install MXNet without oneMKL-DNN Acceleration
$ pip install mxnet==1.2.0 [--user]
Install from Source Code
Download MXNet Source Code from GitHub*
$ git clone --recursive https://github.com/apache/incubator-mxnet
$ cd incubator-mxnet
$ git checkout 1.2.0
$ git submodule update --init --recursive
Build with oneMKL-DNN Backend
$ make -j USE_OPENCV=1 USE_MKLDNN=1 USE_BLAS=mkl
Note 1: When this command is run, oneMKL-DNN is downloaded and built automatically.
Note 2: The MKL2017 backend has been removed from the MXNet main branch, so MXNet can no longer be built with the MKL2017 backend from source.
Note 3: To use MKL as the BLAS library, users may need to install Intel® Parallel Studio for best performance.
Note 4: If MXNet cannot find the MKLML libraries, add the MKLML library path to LD_LIBRARY_PATH and LIBRARY_PATH first.
| Item | Configuration |
|---|---|
| CPU/GPU Model, Core, Socket# | Intel® Xeon® Platinum 8180, 56, 2S |
| CPU/GPU TFLOPS (FP32) | 8.24T = 2.3G × 56 × 64 (AVX-512) |
| CPU Config | Turbo on, HT on, NUMA on |
| RAM Bandwidth | 255 GB/s = 2.66 × 12 × 8 (2666 MHz DDR4) |
| RAM Capacity | 192 GB = 16 GB × 12 × 1 |
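The derived figures in the configuration table can be checked with simple arithmetic; the 64 FP32 FLOPs/cycle/core factor reflects two AVX-512 FMA units per core (2 units × 16 lanes × 2 ops):

```python
# peak FP32 compute: AVX frequency (2.3 GHz) x 56 cores x 64 FLOPs/cycle
tflops = 2.3e9 * 56 * 64 / 1e12
print(round(tflops, 2))   # -> 8.24 TFLOPS

# peak memory bandwidth: 2.66 GT/s x 12 DIMMs x 8 bytes per transfer
bandwidth = 2.66 * 12 * 8
print(round(bandwidth))   # -> ~255 GB/s
```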
- Apache MXNet (incubating) 1.8.0 Release
- Apache MXNet (incubating)
- Announcing Apache MXNet 1.2.0
- Accelerating Deep Learning on CPU with Intel MKL-DNN
- Build and install Apache MXNet (incubating) from source
- Some Tips for Improving MXNet Performance
- Accelerating Deep Learning on CPU with Intel MKL-DNN (Chinese version)
Notices and Disclaimers
Performance results are based on testing as of July 2018 and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure.
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit: http://www.intel.com/performance.
This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications, and roadmaps.
The benchmark results may need to be revised as additional testing is conducted. The results depend on the specific platform configurations and workloads utilized in the testing, and may not be applicable to any particular user’s components, computer system or workloads. The results are not necessarily representative of other benchmarks and other benchmark results may show greater or lesser impact from mitigations.
Intel, the Intel logo, and Xeon are trademarks of Intel Corporation in the U.S. and/or other countries.
Other names and brands may be claimed as the property of others.
© Intel Corporation.