Open Domain Question Answering System - A Deep Learning Based NLP Solution (White Paper)

ID: 689013
Updated: 10/6/2020
Version: Latest
Public


Natural language processing (NLP) systems such as chatbots and document classification systems built on deep neural networks have dramatically improved our ability to extract knowledge from the vast amounts of text stored on the web and on Wikipedia. These deep neural networks require parallel processing across multiple cores to achieve real-time latency. Intel® Math Kernel Library (Intel® MKL) is the fastest and most widely used math library for Intel® based systems, speeding up numerical application performance with highly optimized, vectorized, and threaded math functions. Intel® VTune™ Profiler collects key profiling data and presents it in a powerful interface that simplifies performance analysis. Intel and Kakao Enterprise explored these Intel technologies to improve the performance of the neural networks behind Kakao Enterprise's NLP API service. After integrating its code with Intel MKL, which utilizes the AVX-512 capabilities of 2nd Generation Intel® Xeon® Scalable processors, Kakao Enterprise measured a 1.14x speed-up in processing time.

Technologies Used:

  • Intel® MKL
  • Intel® VTune™ Profiler
  • AVX-512
  • Intel® Xeon® Scalable processors
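To illustrate how an application picks up Intel MKL's optimized kernels without code changes, here is a minimal sketch using NumPy, which delegates matrix multiplication to whatever BLAS library it was built against (MKL when using, e.g., the Intel Distribution for Python). The matrix size and dtype below are arbitrary choices for illustration, not values from the paper:

```python
import numpy as np

# NumPy routes matrix multiplication to its BLAS backend's *gemm routine.
# When NumPy is linked against Intel MKL, this call runs MKL's threaded,
# AVX-512-vectorized kernels on supporting Intel Xeon Scalable processors.
n = 512
rng = np.random.default_rng(0)
a = rng.standard_normal((n, n)).astype(np.float32)
b = rng.standard_normal((n, n)).astype(np.float32)

c = a @ b  # dispatched to the BLAS sgemm kernel for float32 inputs

# np.show_config() reports which BLAS/LAPACK library NumPy was linked
# against, so you can confirm whether MKL is actually in use.
```

Neural-network frameworks behave similarly: their dense-layer and attention computations reduce to the same GEMM calls, which is why swapping in an MKL-backed build can speed up inference without modifying model code.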
"