
Intel® AI Analytics Toolkit (AI Kit)

Achieve End-to-End Performance for AI Workloads Powered by oneAPI

Accelerate Data Science & AI Pipelines

The AI Kit gives data scientists, AI developers, and researchers familiar Python* tools and frameworks to accelerate end-to-end data science and analytics pipelines on Intel® architecture. The components are built using oneAPI libraries for low-level compute optimizations. This toolkit maximizes performance from preprocessing through machine learning, and provides interoperability for efficient model development.

Using this toolkit, you can:

  • Deliver high-performance deep learning training on Intel® XPUs and integrate fast inference into your AI development workflow with Intel®-optimized deep learning frameworks for TensorFlow* and PyTorch*, pretrained models, and low-precision tools.
  • Achieve drop-in acceleration for data preprocessing and machine learning workflows with compute-intensive Python packages: Modin*, scikit-learn*, and XGBoost*.
  • Gain direct access to analytics and AI optimizations from Intel to ensure that your software works together seamlessly.
Download the Toolkit

Accelerate end-to-end machine learning and data science pipelines with optimized deep learning frameworks and high-performing Python* libraries.

Get It Now

Release Notes

Get Started Guide Linux*

Get Started Guide Windows*

Code Samples

See All Toolkits

What's New

  • Accelerate your deep learning training and inference workloads with support in Intel® oneAPI Deep Neural Network Library (oneDNN) for Intel® Xe Matrix Extensions on Intel® Data Center GPU Flex Series and Intel® Data Center GPU Max Series.
  • Run Intel® Extension for TensorFlow* and Intel® Extension for PyTorch* on discrete Intel GPUs.
  • Scale your DataFrame processing to large or distributed compute resources with the Heterogeneous Data Kernels (HDK) back end for Intel® Distribution of Modin*.
  • Get your AI projects started quickly with open source pretrained reference kits that include models, training data, end-to-end pipeline user guides, and Intel oneAPI components.
  • Run natively on Windows* with full feature parity to Linux* (except for distributed training).

Features

Optimized Deep Learning 

  • Leverage popular, Intel-optimized frameworks—including TensorFlow and PyTorch—to use the full power of Intel architecture and yield high performance for training and inference.
  • Expedite development by using the open source, pretrained, machine learning models that are optimized by Intel for best performance. 
  • Take advantage of automatic accuracy-driven tuning strategies along with additional objectives like performance, model size, or memory footprint using low-precision optimizations.
     

Data Analytics and Machine Learning Acceleration

  • Increase machine learning model accuracy and performance with algorithms in scikit-learn and XGBoost, optimized for Intel architecture.
  • Scale out efficiently to clusters and perform distributed machine learning by using Intel® Extension for Scikit-learn*.

High-Performance Python*

  • Take advantage of the most popular and fastest growing programming language for AI and data analytics with underlying instruction sets optimized for Intel architecture.
  • Process larger scientific data sets more quickly using drop-in performance enhancements to existing Python code.
  • Achieve highly efficient multithreading, vectorization, and memory management, and scale scientific computations efficiently across a cluster.

 

Simplified Scaling across Multi-node DataFrames

  • Seamlessly scale and accelerate pandas workflows across multiple cores and nodes with only a one-line code change using the Intel Distribution of Modin, an extremely lightweight parallel DataFrame library.
  • Accelerate data analytics with high-performance back ends, such as OmniSci*.

Benchmarks

These benchmarks illustrate the performance capabilities of the AI Kit.

In the News

CERN Uses Intel® Deep Learning Boost & oneAPI to Juice Inference without Accuracy Loss

Researchers at CERN and Intel showcase promising results with low-precision optimizations that exploit heterogeneous operations on CPUs for convolutional Generative Adversarial Networks (GANs).

Learn More

LAIKA Studios* & Intel Join Forces to Expand the Possibilities in Stop-Motion Film Making

See how LAIKA Studios* and the Intel Applied Machine Learning team used tools from the AI Kit to realize the limitless scope of stop-motion animation.

Learn More

Accelerate PyTorch* with oneAPI Libraries

Harnessing Intel® Deep Learning Boost and oneAPI libraries, Intel and Facebook* collaboratively improved PyTorch CPU performance across multiple training and inference workloads.

PyTorch with oneDNN

PyTorch with Intel® oneAPI Collective Communications Library (oneCCL)

MLPerf* Results for Deep Learning Training and Inference

Reflecting the broad range of AI workloads, Intel submitted training and inference results for MLPerf* v0.7. The results in each use case demonstrate the continued improvement of Intel® Xeon® Scalable processors as a universal platform for CPU-based machine learning training and inference.

MLPerf Training | MLPerf Inference

An Open Road to Swift DataFrame Scaling

This podcast looks at the challenges of data preprocessing, especially time-consuming, data-wrangling tasks. It discusses how Intel and OmniSci are collaborating to provide integrated solutions that improve DataFrame scaling.

Listen

Superior Machine Learning Performance on the Latest Intel® Xeon® Scalable Processors

Intel Extension for Scikit-learn gives data scientists the performance and ease of use they need to run machine learning algorithms as a simple drop-in replacement for stock scikit-learn. This article showcases the speedups achieved on the latest Intel Xeon Scalable processors compared to processors from NVIDIA* and AMD*.

Learn More

Optimize Performance of Gradient Boost Algorithms

Intel has been constantly improving training and inference performance for XGBoost algorithms. The following blogs compare the training performance of XGBoost 1.1 on a CPU with third-party GPUs, and showcase how to speed up inference with minimal code changes and no loss of quality.

Training | Inference

Accelerate Lung Disease Diagnoses with Intel® AI

Accrad developed CheXRad, an AI-powered solution to rapidly detect COVID-19 and 14 other thoracic diseases in the clinics and hospitals of Africa. With the help of Intel, they were able to train, optimize, and deploy in less time and at a lower operational cost than available alternatives.

Learn More


What’s Included

Intel® Optimization for TensorFlow*

In collaboration with Google*, Intel has directly optimized TensorFlow for Intel architecture using oneDNN primitives to maximize performance. This package:

  • Provides the latest TensorFlow binary version compiled with CPU-enabled settings
  • Adds extensions to further boost TensorFlow training and inference
  • Takes advantage of the latest Intel hardware features
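As a hedged illustration of how these optimizations are enabled in practice: in stock TensorFlow builds, oneDNN optimizations are toggled by the `TF_ENABLE_ONEDNN_OPTS` environment variable (a TensorFlow setting, not one specific to this toolkit), which must be set before TensorFlow is imported:

```python
import os

# oneDNN optimizations in stock TensorFlow are controlled by this
# environment variable; it is read once, at import time, so set it
# before any `import tensorflow` statement runs.
os.environ.setdefault("TF_ENABLE_ONEDNN_OPTS", "1")

onednn_flag = os.environ["TF_ENABLE_ONEDNN_OPTS"]
print("oneDNN optimizations requested:", onednn_flag == "1")
```

The Intel® Optimization for TensorFlow* binaries ship with these settings already enabled, so no flag is needed there.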

 

Intel® Optimization for PyTorch*

In collaboration with Facebook*, Intel has contributed many optimizations directly to this popular deep learning framework to provide superior performance on Intel architecture. This package provides the binary version of the latest PyTorch release for CPUs, and adds Intel extensions and oneCCL bindings for efficient distributed training.
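A minimal sketch for checking whether the oneDNN backend (historically named MKL-DNN inside PyTorch) is present in a given PyTorch build; the guard keeps the sketch runnable where PyTorch is not installed:

```python
# Stock PyTorch CPU builds already ship with oneDNN; this queries
# whether that backend is available in the current installation.
try:
    import torch
    onednn_available = torch.backends.mkldnn.is_available()
except ImportError:
    onednn_available = None  # PyTorch not installed in this environment

print("oneDNN backend available:", onednn_available)
```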

 

Model Zoo for Intel® Architecture

Access pretrained models, sample scripts, best practices, and step-by-step tutorials for many popular open-source, machine learning models optimized by Intel to run on Intel Xeon Scalable processors.

 

Intel® Neural Compressor

Provide a unified, low-precision inference interface across multiple deep learning frameworks optimized by Intel with this open source Python library.

Intel® Extension for Scikit-learn*

Seamlessly speed up your scikit-learn applications on Intel® CPUs and GPUs across single and multiple nodes. This extension package dynamically patches scikit-learn estimators to use Intel® oneAPI Data Analytics Library (oneDAL) as the underlying solver, achieving speedups for your machine learning algorithms. The toolkit also includes stock scikit-learn to provide a comprehensive Python environment with all required packages installed. The extension supports up to the last four versions of scikit-learn, providing flexibility to use it with your existing packages.
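A minimal sketch of the dynamic patching described above, assuming the `sklearnex` package name and falling back to stock scikit-learn when the extension is not installed:

```python
# patch_sklearn() must run before the estimators are imported so that
# the oneDAL-backed solvers are substituted in.
try:
    from sklearnex import patch_sklearn
    patch_sklearn()
except ImportError:
    pass  # stock scikit-learn fallback for this sketch

import numpy as np
from sklearn.cluster import KMeans

X = np.random.default_rng(0).random((100, 2))
labels = KMeans(n_clusters=3, n_init=10).fit_predict(X)
print(labels.shape)
```

Because the patching is transparent, the downstream estimator code is unchanged either way.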

 

Intel® Optimization for XGBoost*

In collaboration with the XGBoost community, Intel has been directly upstreaming many optimizations to provide superior performance on Intel CPUs. This well-known machine learning package for gradient-boosted decision trees now includes seamless, drop-in acceleration for Intel architecture to significantly speed up model training and improve accuracy for better predictions.
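Since the optimizations are upstreamed, no special package is needed; as a hedged sketch, the `hist` tree method is the CPU training path that benefits, and the guard keeps the snippet runnable without XGBoost installed:

```python
import numpy as np

try:
    import xgboost as xgb

    # Toy binary classification problem; `hist` selects the
    # histogram-based CPU training path.
    rng = np.random.default_rng(0)
    X = rng.random((200, 4))
    y = (X[:, 0] > 0.5).astype(int)
    clf = xgb.XGBClassifier(tree_method="hist", n_estimators=20)
    preds = clf.fit(X, y).predict(X)
except ImportError:
    preds = None  # XGBoost not installed in this environment
```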

 

Intel® Distribution of Modin*

Accelerate your pandas workflows and scale data preprocessing across multiple nodes using this intelligent, distributed DataFrame library with an API identical to pandas. The library integrates with OmniSci in the back end for accelerated analytics. This component is available only via the Anaconda* distribution of the toolkit; to download and install it, refer to the Installation Guide.
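A sketch of the identical-API claim: swapping one import line turns pandas code into Modin code. The fallback to stock pandas (used here when Modin is absent) works precisely because the API is the same for this snippet:

```python
# The "one line of code change": import Modin's drop-in pandas API.
try:
    import modin.pandas as pd  # parallel, distributed DataFrame
except ImportError:
    import pandas as pd        # stock pandas fallback for this sketch

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
total = int(df["a"].sum())
print(total)
```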

 

Intel® Distribution for Python*

Achieve greater performance through acceleration of core Python numerical and scientific packages built with Intel® Performance Libraries. This package includes the Numba* just-in-time compiler, which compiles decorated Python code so that it can use the latest Single Instruction Multiple Data (SIMD) features and multicore execution to fully exploit modern CPUs. You can also program multiple devices with the same programming model, DPPy (Data Parallel Python), without rewriting CPU code as device code.
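A minimal sketch of the just-in-time compilation path: Numba's `@njit` compiles the decorated loop to machine code that can be vectorized, while the no-op fallback decorator keeps the sketch runnable when Numba is not installed:

```python
import numpy as np

try:
    from numba import njit
except ImportError:
    def njit(func):       # no-op fallback so the sketch still runs
        return func

@njit
def dot(a, b):
    # Plain Python loop; under @njit it is compiled to SIMD-friendly
    # machine code instead of running in the interpreter.
    s = 0.0
    for i in range(a.shape[0]):
        s += a[i] * b[i]
    return s

result = dot(np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0]))
print(result)  # 32.0
```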

Data Science Workstations Powered by the AI Kit

Original equipment manufacturer (OEM) partners offer Intel®-based data science workstations, which are laptop, desktop, or tower configurations that include:

  • Intel® Core™ or Intel® Xeon® processors that are matched for data science work
  • Large memory capacities to enable in-memory processing of large datasets, which shortens the time required to sort, filter, label, and transform your data
  • Intel® Optane™ persistent memory that provides an affordable alternative to DRAM for extremely large-capacity workloads and in-memory databases
  • AI Kit software with applications and libraries that accelerate end-to-end AI and data analytics pipelines on Intel architecture


Data Science Workstations

The AI Kit comes preinstalled on select OEM data science workstations. Use the Installation Guide to download the AI Kit on your workstation.

  • Dell Precision* data science workstations: See the Installation Guide.
  • Z by HP* data science workstations: AI Kit components are preloaded through the Z by HP Data Science Stack, an application for customizing data science environments.
  • Lenovo ThinkStation* and ThinkPad* P series workstations: factory installed.

Documentation & Code Samples

 Documentation
  • Installation Guides:
    Intel | Anaconda | Docker* | Dell Precision Data Science Workstation
  • Package Managers: Conda | APT | YUM/DNF/Zypper
  • Get Started Guides:
    Linux | Windows | Containers | scikit-learn
  • Release Notes
  • Maximize TensorFlow Performance on CPUs: Considerations and Recommendations for Inference Workloads


View All Documentation

Code Samples
  • Get Started:
    TensorFlow
    | PyTorch | Modin | XGBoost | scikit-learn
  • End-to-End Machine Learning for Census Workload
  • TensorFlow Performance Analysis
  • Multi-node Training with PyTorch
  • PyTorch Training with Intel® Advanced Matrix Extensions and bfloat16 Data


View All Code Samples

Training

Accelerate End-to-End AI Pipelines Using the Intel® AI Analytics Toolkit
Optimize the Latest Deep Learning Workloads Using Intel® Extension for PyTorch*
Achieve AI Performance from the Data Center to the Edge Using oneAPI Toolkits
AI Analytics Part 1: Optimize End-to-End Data Science and Machine Learning Acceleration
AI Analytics Part 2: Enhance Deep Learning Workloads on 3rd Generation Intel Xeon Scalable Processors
AI Analytics Part 3: Walk through the Steps to Optimize End-to-End Machine Learning Workflows
Maximize CPU Resources for XGBoost Training and Inference
Intel® Extension for TensorFlow*: Tips & Tricks for AI & HPC Convergence
Achieve High-Performance Scaling for End-to-End Machine-Learning and Data Analytics Workflows

Specifications

Processors:
  • Intel Xeon processors
  • Intel Xeon Scalable processors
  • Intel Core processors
  • Intel Data Center GPU Flex Series
  • Intel Data Center GPU Max Series

 

Language:
  • Python

 

Operating systems:
  • Linux
  • Windows

 

Development environments:
  • Compatible with Intel® compilers and others that follow established language standards
  • Linux: Eclipse* IDE

 

Distributed environments:
  • MPI (MPICH-based, Open MPI)


Support varies by tool. For details, see the system requirements.

 

Get Help

Your success is our success. Access these support resources when you need assistance.

  • AI Kit Support Forum
  • Deep Learning Frameworks Support Forum
  • Machine Learning and Data Analytics Support Forum


For more help, see our general oneAPI Support.
