Optimized ONNX* Models Run on AI PCs

@IntelDevTools


Overview

Optimizing a network that spans heterogeneous compute can be simplified substantially by applying OpenVINO™ toolkit optimizations to Open Neural Network Exchange (ONNX*) models. ONNX offers numerous benefits to developers, providing a common infrastructure for machine learning with standardized operators and a common model format. For Intel systems that combine CPUs, integrated GPUs, and NPUs, this streamlines model inferencing, as this session demonstrates.
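
As a quick illustration of that common format (a sketch, not material from the session, and using a hypothetical model.onnx file), an ONNX model can be loaded and validated with the onnx Python package:

    # Load a model, check it against the ONNX spec, and inspect its
    # standardized operator set and the first few operators in the graph.
    import onnx

    model = onnx.load("model.onnx")              # hypothetical file name
    onnx.checker.check_model(model)              # validate against the ONNX spec
    print(model.opset_import[0].version)         # operator set (opset) version
    print([node.op_type for node in model.graph.node][:10])  # first few ops

Because the format and operators are standardized, the same file can then be handed to any ONNX Runtime execution provider, including the OpenVINO Execution Provider described below.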

Using the OpenVINO toolkit as a back end, models can be inferenced and deployed with the ONNX Runtime APIs. This session shows the performance gains achieved through the simple process of enabling the OpenVINO Execution Provider on an AI PC and evaluating the results.
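
A minimal sketch of that process, assuming the onnxruntime-openvino package is installed; the model file, input shape, and device_type value are hypothetical placeholders:

    import numpy as np
    import onnxruntime as ort

    # Dispatch supported graph nodes to OpenVINO; fall back to the
    # default CPU provider for anything the OpenVINO EP does not cover.
    session = ort.InferenceSession(
        "model.onnx",
        providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
        provider_options=[{"device_type": "CPU"}, {}],
    )

    input_name = session.get_inputs()[0].name
    dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example shape
    outputs = session.run(None, {input_name: dummy_input})
    print(outputs[0].shape)

Listing CPUExecutionProvider after the OpenVINO Execution Provider lets ONNX Runtime fall back to its built-in CPU kernels for any operators the OpenVINO back end does not handle.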

Topics covered include:

  • Learn the characteristics of an AI PC and the benefits these systems offer developers.
  • Understand the techniques for inferencing and deploying ONNX models on an AI PC.
  • Evaluate the performance of ONNX models on AI PC systems with a combination of OpenVINO toolkit, ONNX, and OpenVINO Execution Provider for ONNX Runtime (a timing sketch follows this list).
  • Learn how to build a stand-alone app for an AI PC with OpenVINO Execution Provider for ONNX Runtime.
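
The performance evaluation mentioned above can be approximated with a simple timing loop. The sketch below assumes the onnxruntime-openvino package, a hypothetical model.onnx file and input shape, and that "CPU", "GPU", and "NPU" are accepted device_type values for the installed OpenVINO Execution Provider version:

    import time
    import numpy as np
    import onnxruntime as ort

    MODEL = "model.onnx"                                       # hypothetical path
    dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example input

    for device in ("CPU", "GPU", "NPU"):
        try:
            sess = ort.InferenceSession(
                MODEL,
                providers=["OpenVINOExecutionProvider"],
                provider_options=[{"device_type": device}],
            )
        except Exception as err:  # a device may be absent on a given system
            print(f"{device}: unavailable ({err})")
            continue

        name = sess.get_inputs()[0].name
        sess.run(None, {name: dummy})                # warm-up run
        start = time.perf_counter()
        for _ in range(20):
            sess.run(None, {name: dummy})
        avg_ms = (time.perf_counter() - start) / 20 * 1000
        print(f"{device}: {avg_ms:.1f} ms per inference")

Running the same loop on the CPU, integrated GPU, and NPU of an AI PC gives a first-order view of where a given model runs most efficiently.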

Skill level: All levels

 

Featured Software

Download the following resources:

  • OpenVINO Toolkit
  • OpenVINO Execution Provider
  • Intel® NPU Acceleration Library

You May Also Like

Related Articles

Heterogeneous AI Powerhouse: Unveil the Hardware and Software Foundation of Intel® Core™ Ultra Processors for the Edge

OpenVINO Toolkit Workflow: Model Preparation

Related Videos

Prototype and Deploy LLM Applications on Intel NPUs

Build Next-Gen, Portable, Power-Efficient AI on an AI PC
