Developer Tools and Software for Intel® Data Center GPU Max Series

Drive breakthrough acceleration for HPC and AI workloads with the combined power of Intel® Data Center GPU Max Series and Intel® Xeon® Scalable processors—powered by oneAPI and Intel® AI developer tools.

  • Tools and Libraries
  • AI Workflows
  • HPC
  • Success Stories

Unleash the Power of Intel Data Center GPU Max Series through Software

Intel Data Center GPU Max Series combined with oneAPI helps developers deliver high-performance, cross-architecture applications and solutions. Intel toolkits provide tools, compilers, libraries, and AI middleware to unleash hardware performance while freeing developers from proprietary environments.

Convenient Software Suites for AI and HPC

Accelerate AI and HPC innovation with Intel's portfolio of compilers, libraries, and tools. Intel provides the software you need to solve the world's most demanding technical challenges.

The Intel® oneAPI Base Toolkit is a starting point for heterogeneous development across CPUs, GPUs, and FPGAs. It is open source, based on open standards, and features an industry-leading C++ compiler that implements SYCL*, an evolution of C++ for heterogeneous computing. A range of performance libraries provide portable acceleration. Enhanced profiling, design assistance, and debug tools are also included.

AI Tools add components for data scientists and AI developers, including optimizations that let popular AI frameworks run training and inference on Intel Data Center GPU Max Series.

The Intel® oneAPI HPC Toolkit delivers what developers need to build, analyze, optimize, and scale HPC applications with the latest techniques in vectorization, multithreading, multi-node parallelization, and memory optimization. Use it to build code with Intel C++ and Fortran compilers, scale with Intel® MPI library, and analyze MPI application behavior.
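The toolkit itself targets C++ and Fortran, but the core multi-node idea it supports, decomposing a problem across ranks and reducing the partial results, can be illustrated with a minimal Python sketch (the names and the thread-based stand-in for MPI ranks here are ours, for illustration only):

```python
# Illustrative only: the decompose-and-reduce pattern that multi-node MPI
# codes follow, sketched with Python threads standing in for MPI ranks.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each "rank" reduces its own slice of the domain.
    return sum(x * x for x in chunk)

data = list(range(1_000))
nranks = 4                                        # stand-in for MPI ranks
chunks = [data[i::nranks] for i in range(nranks)]  # strided decomposition
with ThreadPoolExecutor(max_workers=nranks) as pool:
    total = sum(pool.map(partial_sum, chunks))     # global reduction
```

In a real MPI application the decomposition is over processes on separate nodes and the final `sum` is a collective reduction (e.g. `MPI_Allreduce`), but the structure is the same.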

Get Started

GPU drivers must be installed before the toolkits can be used on Intel Data Center GPU Max Series:

  • Linux
  • Windows: not supported
Intel® oneAPI Base Toolkit

Use this core set of tools and libraries for developing high-performance, data-centric applications across diverse architectures.

Download the Intel oneAPI Base Toolkit
Accelerate HPC with Intel® oneAPI HPC Toolkit
  • Optimize code and tune performance with the Intel Fortran and C++ compilers with SYCL support, as well as oneAPI libraries and analysis and porting tools.
  • oneAPI compilers activate Intel® Xe Matrix Extensions (Intel® XMX) for acceleration.
  • Intel® MPI Library activates Intel® Xe Link for faster direct GPU-to-GPU communications.
Download the Intel oneAPI HPC Toolkit

                                                         

Boost Deep Learning Training and Inference with AI Tools
  • Intel® oneAPI Deep Neural Network Library (oneDNN) in the Intel oneAPI Base Toolkit uses Intel XMX to accelerate AI training and inference.
  • Streamline AI visual inferencing and deploy quickly using the Intel® Distribution of OpenVINO™ toolkit.
  • Intel® Extension for PyTorch* and Intel® Extension for TensorFlow* accelerate the use of popular deep learning frameworks for Intel CPUs and GPUs.
Download AI Tools

                                                         

Create Multiarchitecture Code Efficiently with Code Migration Tools

Migrate CUDA* code to C++ with SYCL for easy portability across multiple vendors’ architectures, including Intel® Data Center GPUs. The Intel® DPC++ Compatibility Tool, based on open source SYCLomatic, automates most of the process. 

Get the Intel® DPC++ Compatibility Tool

   

Open Source SYCLomatic

More Resources

Get Started Guides & Articles

oneAPI GPU Optimization Guide

Compare CPUs, GPUs, and FPGAs for oneAPI Compute Workloads

Intel® VTune™ Profiler

Intel Distribution of OpenVINO Toolkit Get Started Guide

Training, Webinars & Tutorials

Intel oneAPI 2023 Release: Preview the Tools

Tune Applications on CPUs & GPUs with an LLVM*-Based Compiler from Intel

Profile Heterogeneous Computing Performance with Intel VTune Profiler

Migrate CUDA* Code to SYCL

SYCL Origins: A True Standard with a Growing Ecosystem

Quickly Migrate Existing CUDA Code to SYCL

Intel DPC++ Compatibility Tool Get Started Guide

Migrating the MonteCarloMultiGPU from CUDA to SYCL


AI Inference and Training Workflows

Intel Data Center GPU Max Series is ideal for AI inference and training workflows. AI Tools provide optimized extensions for AI frameworks such as TensorFlow* and PyTorch*. Optimize and deploy AI inference with the Intel® Distribution of OpenVINO™ toolkit.  

Get Started

The following Linux containers are part of the Intel® AI Reference Models project. Each container quickly replicates the complete software environment that demonstrates the best-known performance for its model and dataset combination.

Intel® AI Reference Models

 

PyTorch Model Containers

ResNet* 50 Version 1.5 int8 Inference

(ImageNet 2012 dataset)

ResNet 50 Version 1.5 bfloat16 Training
(ImageNet 2012 dataset)

BERT Large FP16 Inference

(Stanford Question Answering [SQuAD] dataset)

BERT Large FP16 Training

(MLCommons dataset)

 

TensorFlow Model Containers

ResNet 50 Version 1.5 int8, FP16, and FP32 Inference

(ImageNet 2012 dataset)

ResNet 50 Version 1.5 bfloat16 Training

(ImageNet 2012 dataset)

BERT Large FP16, bfloat16, and FP32 Inference

(SQuAD dataset)

 

Additional Video and Coding Tutorials (Not Containerized)

Introduction to Intel Extension for PyTorch*

Intel® Extension for PyTorch* Getting Started Sample

PyTorch GPU Tutorial

Large Language Models (LLM)

Llama v2 Launch with Meta* AI

Intel® Extension for PyTorch* LLM Feature Get Started

Generative AI

Accelerate Stable Diffusion on Intel GPUs with Intel® Extension for OpenXLA*

Intel® Extension for OpenXLA (GitHub*)

Hugging Face Transformers

A broad set of more than 85 Hugging Face transformer training and inference models

Hugging Face Transformer Models

Install and Build Intel XPU Back End for NVIDIA Triton* Inference Server

Run Hugging Face Inductor Triton Benchmarks


High-Performance Computing

Intel Data Center GPU Max Series is built for high-performance computing. The Intel® oneAPI HPC Toolkit is an add-on to the Intel® oneAPI Base Toolkit. Together they deliver what developers need to build, analyze, optimize, and scale HPC applications with the latest techniques in vectorization, multithreading, multi-node parallelization, and memory optimization.

Get Started

A wide variety of HPC applications and open source projects are tested on Intel Data Center GPU Max Series. Many are already optimized, and more optimizations are becoming available. Intel's combination of compilers, optimized libraries, porting tools, and contributions to open source projects helps you to quickly start your scientific discoveries.

The following recipes are a subset of HPC workloads enabled for Intel® Data Center GPU Max Series.

 

System Test
  • Stream Triad (BabelSTREAM)
  • DGEMM
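For orientation, the kernel that the Stream Triad (BabelSTREAM) system test times is simply `a[i] = b[i] + scalar * c[i]`. A pure-Python sketch of that kernel (the actual benchmark runs it in SYCL/OpenMP on the GPU and measures memory bandwidth):

```python
# Stream Triad kernel: a[i] = b[i] + scalar * c[i]
# Pure-Python sketch of the operation BabelSTREAM times on the GPU.
def stream_triad(b, c, scalar):
    return [bi + scalar * ci for bi, ci in zip(b, c)]

b = [1.0] * 8
c = [2.0] * 8
a = stream_triad(b, c, 3.0)  # every element becomes 1.0 + 3.0 * 2.0 = 7.0
```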
Life Sciences
  • LAMMPS
  • AutoDock-GPU
Financial Services Industry
  • Binomial Options
  • Black-Scholes
  • Monte Carlo
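As a plain-Python reference point for the Black-Scholes and Monte Carlo workloads above (not the GPU recipes themselves), a closed-form European call pricer next to a seeded Monte Carlo estimate of the same option; the GPU versions parallelize exactly this per-path loop:

```python
# Black-Scholes call price plus a Monte Carlo estimate of the same option.
# Plain-Python reference; the GPU workloads parallelize the per-path loop.
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(s, k, r, sigma, t):
    d1 = (math.log(s / k) + (r + 0.5 * sigma * sigma) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

def mc_call(s, k, r, sigma, t, n=100_000, seed=0):
    rng = random.Random(seed)           # seeded for reproducibility
    drift = (r - 0.5 * sigma * sigma) * t
    vol = sigma * math.sqrt(t)
    payoff = 0.0
    for _ in range(n):                  # one simulated terminal price per path
        st = s * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        payoff += max(st - k, 0.0)
    return math.exp(-r * t) * payoff / n

closed = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)    # about 10.45
estimate = mc_call(100.0, 100.0, 0.05, 0.2, 1.0)  # converges to the same value
```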
Physics
  • DPEcho
Additional Video and Coding Tutorials
  • Quickly Migrate Existing CUDA Code to SYCL
  • Migrating the MonteCarloMultiGPU Sample from CUDA to SYCL
  • Port Thermal Solver Code
  • Offload Fortran Workloads
  • Offload Fortran Workloads to Intel® GPUs Using OpenMP*
  • Accelerating Lower-Upper (LU) Factorization Using Fortran, Intel® oneAPI Math Kernel Library & OpenMP Offload to Intel GPUs

Success Stories

Intel® oneAPI Tools Help Prepare Code for Aurora


The Aurora supercomputer at Argonne National Laboratory, an HPE Cray system built on Intel® architecture, will be one of the first exascale systems in the US.

Convergence of HPC, AI & Big Data Analytics in the Exascale Era


"We're seeing encouraging early application performance results on our development systems using Intel Max Series GPU accelerators – applications built with Intel's oneAPI compilers and libraries. For leadership-class computational science, we value the benefits of code portability from multivendor, multiarchitecture programming standards such as SYCL and Python AI frameworks such as PyTorch accelerated by Intel libraries. We look forward to the first exascale scientific discoveries from these technologies on the Aurora system next year."

— Timothy Williams, deputy director, Argonne Computational Science Division

Zuse Institute Berlin (ZIB) Ported easyWAVE Tsunami Simulation Application

Learn how porting from CUDA to oneAPI delivered performance on CPUs, GPUs, and FPGAs.

Chasing Exascale: TACC’s Frontera Uses oneAPI to Accelerate Scientific Insights

Dr. Dan Stanzione of Texas Advanced Computing Center (TACC) discusses advancing HPC to exascale with oneAPI and Intel multiarchitecture to scale workloads on the Frontera supercomputer.

Note: All information provided is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.

Intel® Developer Cloud

Intel® Data Center GPU Max Series is Available Now 

Build and optimize oneAPI multiarchitecture applications using the latest optimized Intel oneAPI and AI Tools, and test your workloads across Intel CPUs and GPUs. No hardware installations, software downloads, or configuration necessary. Free for 120 days with extensions possible.

Try Intel Tiber AI Cloud Today
