AI PC Development Tools

Seamlessly transition projects from early AI development on the PC to cloud-based training to edge deployment. Learn what is required of AI workloads and what is available to get started today.

Streamline AI Integration

Intel provides a suite of powerful development tools designed to streamline the integration of AI into applications. These tools take advantage of Intel® hardware in AI PCs to deliver high performance with low power consumption, enabling developers to build powerful AI-infused applications without deep AI expertise.

Core AI PC Development Kit Technologies

OpenVINO™ Toolkit

  • Enable flexible AI model deployment across Intel CPUs, GPUs, and NPUs.
  • Optimize models for efficient deployment.
  • Use pre-optimized models that are ready for production.

Open Neural Network Exchange (ONNX*)

  • Create cross-platform inference with ONNX* Runtime.
  • Improve model performance across multiple platforms.

Web Neural Network API (WebNN)

  • Deploy AI entirely within a web browser.
  • Take advantage of lower-level acceleration libraries to run AI more efficiently.
  • Run with near-native performance in the browser.
  • Use ONNX Runtime Web or TensorFlow.js for ease of use at a higher level of abstraction.


For AI application development and optimization, see Tools for Application Development and AI Frameworks.

Configure Your AI PC Development Kit

OpenVINO Toolkit

This open-source toolkit is for developers desiring high-performance, power-efficient AI inferencing across multiple operating systems and hardware architectures. It enables flexible AI model deployment across Intel CPUs, GPUs, and NPUs, and its distribution includes tools to compress, quantize, and optimize models for efficient deployment in end-user applications.

The OpenVINO™ toolkit manages AI workloads across CPUs, GPUs, and NPUs for optimal deployment. You can accelerate AI inference and generative AI (GenAI) workloads, achieve lower latency, and increase throughput while maintaining accuracy through optimization tools such as the Neural Network Compression Framework (NNCF). The toolkit also natively supports models from AI frameworks such as PyTorch*, TensorFlow*, and ONNX, and provides a set of prevalidated models that developers can download to build their own cutting-edge AI applications.
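
The deployment flow can be sketched in a few lines of Python. The following is a minimal, illustrative example, not Intel's reference code: it assumes the openvino Python package (2023.x or later API) and a model already converted to OpenVINO IR format, with the model path and input shape as placeholders.

    # Minimal OpenVINO inference sketch. "model.xml" and the input shape
    # are placeholders; "AUTO" lets OpenVINO choose among CPU, GPU, and NPU.
    import numpy as np
    import openvino as ov

    core = ov.Core()
    print(core.available_devices)                 # for example: ['CPU', 'GPU', 'NPU']

    model = core.read_model("model.xml")          # load the IR model (placeholder path)
    compiled = core.compile_model(model, "AUTO")  # defer device selection to OpenVINO

    input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input
    result = compiled(input_data)[compiled.output(0)]
    print(result.shape)

Passing "AUTO" defers device selection to the runtime; a specific device string such as "CPU", "GPU", or "NPU" can be used instead when targeting one accelerator.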

Browse Prevalidated Models

Browse the OpenVINO™ Model Hub, which includes the latest OpenVINO toolkit performance benchmarks for a select list of leading GenAI models and LLMs on Intel CPUs, built-in GPUs, NPUs, and accelerators.

  • Model Performance: Find out how top models perform on Intel hardware.
  • Hardware Comparison: Find the right Intel hardware platform for your solution.

Explore AI Model Benchmarks

Download this comprehensive white paper on optimizing LLMs with compression techniques. Learn to use the OpenVINO toolkit to compress LLMs, integrate them into AI applications, and deploy them on your PC with maximum performance.

Unlock the Power of LLMs
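
As a hedged sketch of the compression step the white paper covers, NNCF post-training weight compression can be applied to an LLM in OpenVINO IR format. The model paths and the INT4 mode below are illustrative, assuming the nncf and openvino packages:

    # Illustrative NNCF weight-compression sketch for an LLM in OpenVINO
    # IR format; the model paths and compression mode are placeholders.
    import nncf
    import openvino as ov

    core = ov.Core()
    model = core.read_model("llm/openvino_model.xml")  # placeholder IR model

    # Post-training weight compression: INT4 symmetric quantization of the
    # weights shrinks the model's footprint at a small cost in accuracy.
    compressed = nncf.compress_weights(model, mode=nncf.CompressWeightsMode.INT4_SYM)
    ov.save_model(compressed, "llm/openvino_model_int4.xml")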

ONNX Model and ONNX Runtime

ONNX is a machine learning model format, and ONNX Runtime is a cross-platform inference and training machine learning accelerator. For developers desiring broader platform coverage (mobile, tablets, and PCs) than the OpenVINO toolkit, ONNX may be a good choice. It works with Intel platforms and allows developers to improve model performance while targeting multiple platforms with ease. A key component of ONNX Runtime is its execution providers (EPs), which enable hardware acceleration technologies to run AI models. Intel platforms have two optimized EPs: the OpenVINO™ Execution Provider and the DirectML EP.
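
Selecting an EP is a one-line change when creating an inference session. The following minimal sketch assumes the onnxruntime Python package built with the OpenVINO Execution Provider and a placeholder model.onnx:

    # ONNX Runtime inference, preferring the OpenVINO Execution Provider
    # and falling back to the default CPU provider if it is unavailable.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession(
        "model.onnx",  # placeholder model path
        providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
    )

    input_name = session.get_inputs()[0].name
    input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder shape
    outputs = session.run(None, {input_name: input_data})
    print(outputs[0].shape)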

ONNX is an AI model format based on an open source project with support from Microsoft*. Its goal is to facilitate the exchange of machine learning models between different frameworks with the benefits of:

  • Interoperability across frameworks: ONNX can act as a bridge between several popular AI frameworks, including the OpenVINO toolkit, PyTorch, and TensorFlow (see the export sketch after this list).
  • Ease of deployment on AI PCs: ONNX Runtime can take advantage of the hardware capabilities of AI PCs that use CPUs and GPUs.
  • Language compatibility: The ONNX project includes samples that show how to use different programming languages such as C++ and C# to bind to ONNX Runtime.
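
As a sketch of that bridge, a PyTorch model can be exported to the ONNX format and then consumed by any ONNX-aware runtime. The model and tensor shapes below are placeholders, assuming the torch and torchvision packages:

    # Export a PyTorch model to ONNX so it can be consumed by ONNX Runtime,
    # the OpenVINO toolkit, or other ONNX-aware tools.
    import torch
    import torchvision

    model = torchvision.models.resnet18(weights=None).eval()  # placeholder model
    dummy_input = torch.randn(1, 3, 224, 224)                 # placeholder input

    torch.onnx.export(
        model,
        dummy_input,
        "model.onnx",            # file consumed by the runtimes above
        input_names=["input"],
        output_names=["output"],
    )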

ONNX

WebNN

As machine learning evolves, bridging software and hardware for scalable, web-based solutions has been an ongoing challenge. The WebNN API enables AI models to run with near-native performance in the browser. The API is also enabled in many popular browsers on Intel platforms. Web applications gain the ability to create, compile, and run machine learning models. Web application developers use higher-level frameworks such as ONNX Runtime Web and TensorFlow.js, which use WebNN to provide high-performance AI model inferencing. WebNN is currently an experimental feature in popular browsers and is undergoing extensive community testing.

For instructions on enabling WebNN in your browser, see WebNN Installation Guides.

Tools for Application Development and AI Frameworks

Develop AI Applications

  • Intel® C++ Essentials: Compile, debug, and use our most popular performance libraries for SYCL* across diverse architectures.
  • Intel® Distribution for Python*: Use this distribution to make Python applications more efficient and performant.
  • Intel® Deep Learning Essentials: Access tools to develop, compile, test, and optimize deep learning frameworks and libraries.
  • Intel® VTune™ Profiler: Optimize application performance, system performance, and system configuration.

Optimize and Tune Training Models for Deep Learning and Inference

  • AI Frameworks and Tools: Unlock the full capabilities of your Intel hardware with software tools at all levels of the AI stack.
  • Get the most performance from your end-to-end pipeline on all your available hardware.
  • Accelerate end-to-end data science and machine learning pipelines using Python tools and frameworks.