Optimize the Latest Deep Learning Workloads Using PyTorch* Optimized by Intel


Overview

For developers focused on deep learning use cases—predictive modeling, recommendation systems, natural language processing, object detection, and many more—it is paramount to extract maximum workload performance using newer technologies like bfloat16, graph-level optimizations, and custom kernels.

This session focuses on the performance and ease-of-use benefits of Intel® Extension for PyTorch* and the Intel® oneAPI Deep Neural Network Library (oneDNN) for deep learning training and inference of large models such as the Deep Learning Recommendation Model (DLRM).
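
A minimal sketch of how these pieces typically come together for inference, assuming Intel Extension for PyTorch (the intel_extension_for_pytorch package) and a recent PyTorch are installed; the tiny model and tensor shapes below are illustrative placeholders, not material from the session:

```python
import torch
import intel_extension_for_pytorch as ipex  # importing the module registers Intel's optimizations with PyTorch

# Illustrative stand-in model; any torch.nn.Module in eval mode works the same way.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 1),
).eval()

# Apply operator- and graph-level optimizations and prepare the weights for bfloat16.
model = ipex.optimize(model, dtype=torch.bfloat16)

# Run inference under CPU autocast so activations are also computed in bfloat16.
x = torch.randn(128, 512)
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    y = model(x)
print(y.shape)  # torch.Size([128, 1])
```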

Join senior deep learning engineer Eikan Wang to learn more about the following topics:

  • Using oneDNN to deliver optimal training and inference workload performance for the PyTorch* framework on Intel hardware
  • oneDNN-based graph optimizations and custom kernel implementations to boost performance of DLRM modules in PyTorch
  • How the extension library for PyTorch can be dynamically loaded as a Python module, offering a more modular design for custom compound operations that are critical to accelerating key deep learning modules, for example, the interaction module from DLRM (a training-side sketch follows this list)
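
Expanding on that last point: the extension is picked up simply by importing it as a Python module, and a single optimize call then prepares a model/optimizer pair for bfloat16 training. The toy model, loss, and training loop below are assumed for illustration and are not the DLRM interaction module itself:

```python
import torch
import intel_extension_for_pytorch as ipex  # dynamic load: the import alone registers the extension's kernels

# Toy stand-in for a larger recommendation model (not the DLRM interaction module).
model = torch.nn.Sequential(
    torch.nn.Linear(512, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 64),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = torch.nn.MSELoss()

# One call rewrites the model/optimizer pair for bfloat16-aware, fused execution.
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

for _ in range(3):  # a few illustrative training steps
    x = torch.randn(32, 512)
    target = torch.randn(32, 64)
    with torch.cpu.amp.autocast(dtype=torch.bfloat16):
        output = model(x)
    loss = criterion(output.float(), target)  # accumulate the loss in float32
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```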


Get the Software

  • Get the Intel Extension for PyTorch as part of the AI Frameworks and Tools.
  • Get oneDNN as part of the Intel® oneAPI Base Toolkit. (Want this tool stand-alone only? Get it here.)


Other Resources

  • Sign up for an Intel® Tiber™ AI Cloud account—a free development sandbox with access to the latest Intel hardware and oneAPI software.
  • Explore oneAPI, including developer opportunities and benefits.
  • Subscribe to Code Together—an interview series that explores the challenges at the forefront of cross-architecture development. Each bi-weekly episode features industry VIPs who are blazing new trails through today’s data-centric world. Available wherever you get your podcasts.

Eikan Wang
Senior deep learning engineer, Intel Corporation

Eikan is part of the Graphics and Software group, where he is the technical lead on PyTorch framework optimization for Intel architecture. He is also one of the major contributors to low-precision inference solutions on Intel architecture. He has four years of full-stack AI experience, ranging from AI applications to framework, library, and compiler optimizations. Eikan received his bachelor’s degree in mathematics from Huaiyin Institute of Technology.

You May Also Like

Related Articles

Deliver Blazing-Fast Python Data Science and AI Performance on CPUs—with Minimal Code Changes

Use Intel Deep Learning Optimizations in TensorFlow*

Related Video

Accelerate AI Inferencing from Development to Deployment
