AI Frameworks and Tools

 

Software tools at all levels of the AI stack unlock the full capabilities of your Intel® hardware. All Intel® AI tools and frameworks are built on the foundation of a standards-based, unified oneAPI programming model that helps you get the most performance from your end-to-end pipeline on all your available hardware.

 

 

 


AI Tool Selector

Customize your download options by use case (data analytics, machine learning, deep learning, or inference optimization) or individually from conda*, pip, or Docker* repositories. Download using a command line installation or offline installer package that is compatible with your development environment.

 

Configure & Download

Get Started Guide | Documentation

Featured

Productive, easy-to-use AI tools and suites span multiple stages of the AI pipeline, including data engineering, training, fine-tuning, optimization, inference, and deployment.

 

 

OpenVINO™ Toolkit

Write Once, Deploy Anywhere

Optimize, tune, and run comprehensive AI inference using the included optimizer, runtime, and development tools. The toolkit includes:

  • A repository of open source, pretrained, and preoptimized models ready for inference
  • A model optimizer for your trained model
  • An inference engine to run inference and output results on multiple processors, accelerators, and environments

 

 

Intel® Gaudi® Software

Speed Up AI Development

  • Optimized for deep learning training and inference
  • Integrates with popular frameworks TensorFlow* and PyTorch*
  • Provides a custom graph compiler
  • Supports custom kernel development
  • Enables an ecosystem of software partners
  • Offers resources on GitHub* and a community forum

 

 

Intel® Tiber™ AI Cloud

Build, test, and optimize multiarchitecture applications and solutions—and get to market faster—with an open AI software stack built on oneAPI. 

Featured tutorials (an account is required):

  • Introduction to OpenVINO™ Toolkit on Intel CPUs or GPUs
  • Run DeepSeek* on Intel GPUs
  • Get Started With Intel® Gaudi® AI Accelerator

 

 

Open Platform for Enterprise AI (OPEA)

This open platform project enables the creation of open, multiprovider, robust, and composable generative AI (GenAI) solutions that take advantage of the best innovation across the ecosystem. Upcoming projects include:

  • A Chatbot on Intel® Xeon® 6 Processors and Intel® Gaudi® 2 AI Accelerators
  • Document Summarization on Intel Gaudi 2 AI Accelerators
  • Visual Question Answering (VQA) on Intel Gaudi 2 AI Accelerators
  • A Copilot Designed for Code Generation in Microsoft Visual Studio Code* on Intel Gaudi 2 AI Accelerators

Deep Learning & Inference Optimization

Open source deep learning frameworks run with high performance across Intel devices through optimizations powered by oneAPI, along with open source contributions by Intel.

PyTorch*

Reduce model size and workloads for deep learning and inference in apps.

Learn More | Get Started
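As one concrete example of reducing model size for inference (a generic PyTorch technique shown here for illustration, not Intel's specific extension), dynamic quantization stores `Linear` weights as int8:

```python
import torch
import torch.nn as nn

# A small float32 network standing in for a real model.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).eval()

# Dynamic quantization converts Linear weights to int8, shrinking the model
# and often speeding up CPU inference.
qmodel = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

with torch.no_grad():
    out = qmodel(torch.randn(1, 128))
print(out.shape)  # torch.Size([1, 10])
```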

TensorFlow*

Increase training and inference performance on Intel® hardware.

Learn More | Get Started

ONNX Runtime

Accelerate inference across multiple platforms.

Learn More | Get Started

JAX*

Perform complex numerical computations on high-performance devices using Intel® Extension for TensorFlow*.

Learn More | Get Started

DeepSpeed*

Automate parallelism, optimize communication, manage heterogeneous memory, and compress models.

Learn More | Get Started

PaddlePaddle*

Built using Intel® oneAPI Deep Neural Network Library (oneDNN), PaddlePaddle delivers fast performance on Intel® Xeon® Scalable processors.

Learn More | Get Started

Intel® AI Reference Models

Access a repository of pretrained models, sample scripts, best practices, and step-by-step tutorials.

Learn More

Deep Learning Essentials

Access advanced tools to develop, compile, test, and optimize deep learning frameworks and libraries.

Learn More

Intel® Neural Compressor

Reduce model size and speed up inference with this open source library.

Learn More


Machine Learning & Data Science

Classical machine learning algorithms in open source frameworks use oneAPI libraries. Intel also offers further optimizations in extensions to these frameworks.

scikit-learn*

Dynamically speed up scikit-learn* applications on Intel CPUs and GPUs.

Learn More
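The usual pattern (a sketch assuming the `scikit-learn-intelex` package is installed) is to patch before importing any scikit-learn estimators, after which existing code runs unchanged on the optimized implementations:

```python
from sklearnex import patch_sklearn
patch_sklearn()  # swap in Intel-optimized implementations; call before sklearn imports

import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(1000, 8)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_.shape)  # (4, 8)
```

`unpatch_sklearn()` restores the stock implementations, which makes before/after benchmarking straightforward.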

XGBoost

Speed up gradient boosting training and inference on Intel hardware.

Learn More | Get Started

Intel® Distribution for Python*

Get near-native code performance for numerical and scientific computing.

Learn More
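Illustrative only: the distribution ships NumPy and SciPy builds linked against oneMKL, so ordinary vectorized code like this picks up the optimized BLAS with no source changes:

```python
import numpy as np

# A matmul like this dispatches to whatever BLAS the NumPy build links against
# (oneMKL in the Intel Distribution for Python; OpenBLAS in stock wheels).
a = np.ones((512, 256))
b = np.ones((256, 128))
c = a @ b
print(c.shape, c[0, 0])  # (512, 128) 256.0
```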

Modin*

Accelerate pandas workflows and scale data using this DataFrame library.

Learn More | Get It Now


Libraries

oneAPI libraries deliver code and performance portability across hardware vendors and accelerator technologies.

Intel® oneAPI Deep Neural Network Library

Deliver optimized neural network building blocks for deep learning applications.

Learn More

Intel® oneAPI Data Analytics Library

Build compute-intensive applications that run fast on Intel® architecture.

Learn More

Intel® oneAPI Math Kernel Library

Experience high performance for numerical computing on CPUs and GPUs.

Learn More

Intel® oneAPI Collective Communications Library

Train models more quickly with distributed training across multiple nodes.

Learn More


Platform Tools

Open Edge Platform

Develop, deploy, and manage AI applications at the edge.

Learn More

Intel® Arc™ B-Series Graphics

Accelerate AI and graphics workloads with Intel's latest discrete GPUs.

Learn More

Performance Data for Intel® AI Data Center Products

Find the latest benchmark data, including detailed hardware and software configurations.

Learn More


Developer Resources from AI Ecosystem Members

Browse All

Hugging Face*

Intel collaborates with Hugging Face* to develop Optimum for Intel, which simplifies training, fine-tuning, and inference optimization of Hugging Face Transformers and Diffusers models on Intel hardware.

PyTorch Foundation

Intel is a premier member of and a top contributor to the PyTorch Foundation. Intel contributions optimize PyTorch training and inference across Intel CPUs, GPUs, and AI accelerators.

Red Hat*

Red Hat* and Intel collaborate to ensure that Red Hat OpenShift AI* works seamlessly with Intel® AI hardware and software in an end-to-end enterprise AI platform across a hybrid cloud infrastructure.

Microsoft*

Microsoft* and Intel collaborate to optimize the AI stack from cloud services to AI PCs, spanning solutions such as Microsoft Azure*, DirectML*, Phi open models, ML.NET, and more.

Develop, train, and deploy your AI solutions quickly with performance- and productivity-optimized tools from Intel. 

