
Accelerate Deep Learning with Intel® Extension for TensorFlow*


Overview

Intel and Google* have been collaborating to deliver optimized machine learning implementations of compute-intensive TensorFlow* operations, such as the convolution filters that require large matrix multiplications.
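As an illustration (not taken from the session itself), here is a minimal NumPy sketch of why convolution is matmul-bound: lowering a convolution to a matrix multiplication (the "im2col" trick) turns it into exactly the kind of large matrix product these optimizations target.

```python
import numpy as np

# Sketch: a 1-D convolution expressed as a matrix multiplication (im2col).
x = np.array([1., 2., 3., 4., 5.])
k = np.array([1., 0., -1.])  # filter of width 3

# Build the "im2col" matrix: one row of inputs per output position.
cols = np.stack([x[i:i + 3] for i in range(len(x) - 2)])

# The convolution (cross-correlation) is now a single matmul.
y = cols @ k
print(y)  # [-2. -2. -2.]
```

In real frameworks the same lowering happens on 4-D image tensors, producing the large matrix multiplications that libraries like oneDNN accelerate.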

In this session, Penporn Koanantakook of Google delivers an overview of the Intel and Google collaboration, which includes the Intel® Extension for TensorFlow* and other key AI developer tools—Intel® oneAPI Deep Neural Network Library (oneDNN) and Intel® Neural Compressor.

This session covers:

  • Optimizations that have been implemented, such as operation fusion, primitive caching, and vectorization of int8 and bfloat16 data types.
  • A live demonstration of the Intel Neural Compressor automatically quantizing a network to improve performance by 4x with a 0.06% accuracy loss.
  • An overview of the PluggableDevice mechanism in TensorFlow, co-architected by Intel and Google to deliver a scalable way for developers to add new device support as plug-in packages.
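To make the quantization idea concrete, here is a minimal NumPy sketch of symmetric post-training int8 quantization, the basic technique that Intel Neural Compressor automates (the values and scale scheme here are illustrative, not the tool's actual implementation):

```python
import numpy as np

# Minimal sketch of post-training int8 quantization: map float32 weights
# to int8 with a single per-tensor scale, then dequantize to measure error.
w = np.array([0.5, -1.2, 3.3, -0.7], dtype=np.float32)

scale = np.abs(w).max() / 127.0  # symmetric per-tensor scale
q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)

w_hat = q.astype(np.float32) * scale  # dequantized approximation
print(q.tolist())                     # [19, -46, 127, -27]
print(float(np.abs(w - w_hat).max()))
```

Storing and multiplying int8 values instead of float32 is what yields the large speedups; the tool's job is to pick scales (and which layers to quantize) so the accuracy loss stays negligible.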

Note: This presentation was current as of TensorFlow v2.8. Starting with TensorFlow v2.9, the oneDNN optimizations are on by default and no longer require setting the TF_ENABLE_ONEDNN_OPTS=1 environment variable.
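In practice the environment variable works in both directions; a quick sketch of how a shell session might set it for each TensorFlow version range:

```shell
# TensorFlow v2.8 and earlier: opt in to the oneDNN optimizations.
export TF_ENABLE_ONEDNN_OPTS=1

# TensorFlow v2.9 and later: they are on by default; set 0 to opt out
# (e.g. for an apples-to-apples comparison against the stock kernels).
export TF_ENABLE_ONEDNN_OPTS=0
```

The variable must be set before the Python process imports TensorFlow; changing it afterward has no effect on an already-initialized runtime.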

Featured Software

Get all of the following as stand-alone products or as part of AI Tools:

  • oneDNN: An open source, cross-platform library that provides implementations of deep learning building blocks that use the same API for CPUs, GPUs, or both.
  • Intel Extension for TensorFlow: An open source extension that plugs Intel hardware acceleration into stock TensorFlow through the PluggableDevice mechanism.
  • Intel Neural Compressor: A unified, low-precision inference interface across multiple deep learning frameworks.


You May Also Like

Related Articles

oneDNN AI Optimizations Enabled as a Default in TensorFlow

Use Deep Learning Optimizations from Intel in TensorFlow

How Intel Optimized TensorFlow v2.5

Related Videos

Accelerate and Benchmark AI Workloads on Intel Platforms

Tips & Tricks for an AI & HPC Convergence
