Intel® oneAPI Deep Neural Network Library
Increase Deep Learning Framework Performance on CPUs and GPUs
Building Blocks to Optimize AI Applications
The Intel® oneAPI Deep Neural Network Library (oneDNN) helps developers improve productivity and enhance the performance of their deep learning frameworks. Use the same API to develop for CPUs, GPUs, or both. Then implement the rest of the application using SYCL*. This library is included in both the Intel® oneAPI Base Toolkit and Intel® oneAPI DL Framework Developer Kit.
The library is built around three concepts:
- Primitive: Any low-level operation from which more complex operations are constructed, such as convolution, data format reorder, and memory
- Engine: A hardware processing unit, such as a CPU or GPU
- Stream: A queue of primitive operations on an engine
Top benefits:
- Supports key data types, including 16-bit and 32-bit floating point, bfloat16, and 8-bit integer
- Implements rich operators, including convolution, matrix multiplication, pooling, batch normalization, activation functions, recurrent neural network (RNN) cells, and long short-term memory (LSTM) cells
- Accelerates inference performance with automatic detection of Intel® Deep Learning Boost technology
Download as Part of the Toolkit
oneDNN is included as part of the Intel oneAPI Base Toolkit, which is a core set of tools and libraries for developing high-performance, data-centric applications across diverse architectures.
Download the Stand-Alone Version
A stand-alone download of oneDNN is available. You can download binaries from Intel or choose your preferred repository.
Develop in the Free Intel® DevCloud
Get what you need to build and optimize your oneAPI projects for free. With an Intel® DevCloud account, you get 120 days of access to the latest Intel® hardware—CPUs, GPUs, FPGAs—and Intel® oneAPI tools and frameworks. No software downloads. No configuration steps. No installations.
Help oneDNN Evolve
oneDNN is part of the oneAPI industry standards initiative. We welcome you to participate.
Documentation & Code Samples
Documentation
Code Samples
Learn how to access oneAPI code samples in a tool command line or IDE.
- oneDNN Get Started
- oneDNN with SYCL Interops
- oneDNN Library Convolutional Neural Network (CNN) Inference (FP32)
View All Code Samples (GitHub)
Specifications
Processors:
- Intel Atom® processors with Intel® Streaming SIMD Extensions
- Intel® Core™ processors
- Intel® Xeon® processors
- Intel® Xeon® Scalable processors
GPUs:
- Intel® Processor Graphics Gen9 and above
- Xe Architecture
Host & target operating systems:
- Linux*
- Windows*
- macOS*
Languages:
- SYCL (Note: requires the Intel oneAPI Base Toolkit)
- C and C++
Compilers:
- Intel® oneAPI DPC++/C++ Compiler
- Intel® C++ Compiler Classic
- GNU C++ Compiler*
- Clang*
For more information, see the system requirements.
Threading runtimes:
- Intel® oneAPI Threading Building Blocks
- OpenMP*
- SYCL
Get Help
Your success is our success. Access these forum and GitHub resources when you need assistance.
Stay in the Know with All Things CODE
Sign up to receive the latest trends, tutorials, tools, training, and more to
help you write better code optimized for CPUs, GPUs, FPGAs, and other
accelerators—stand-alone or in any combination.
Product and Performance Information
Performance varies by use, configuration, and other factors. Learn more at www.Intel.cn/PerformanceIndex.