Developer Resources from Intel and the PyTorch* Foundation
Intel, a premier member of the PyTorch* Foundation, is a leading contributor of features and optimizations to open source PyTorch. These contributions include the oneAPI Deep Neural Network Library (oneDNN), which accelerates PyTorch across CPUs, GPUs, and AI accelerators.
Intel and PyTorch Case Studies
"The PyTorch Foundation is thrilled to welcome Intel as a premier member, marking a significant milestone in our mission to empower the global AI community. Intel's extensive expertise and commitment to advancing cutting-edge technologies align perfectly with our vision of fostering open source innovation. Together, we will accelerate the development and democratization of PyTorch, and use the collaboration to shape a vibrant future of AI for all."
— Ibrahim Haddad, executive director, PyTorch Foundation
Use PyTorch on Intel Platforms
Learn how to get started with PyTorch on Intel-based platforms, or how to get the most out of it, across data centers, the cloud, and AI PCs. These joint offerings are based on the OpenVINO™ toolkit, AI Tools, and Intel® Gaudi® software.
Multiplatform
- Get Started with PyTorch on Intel GPUs
- PyTorch Performance Tuning Guide
- 10 Tips for Quantizing LLMs and Vision Language Models (VLM) with AutoRound and torchao
- Accelerate PyTorch Models Using Quantization Techniques with Intel® Extension for PyTorch*
- Accelerate Inference on x86-64 Machines with oneDNN Graph
- How to Accelerate Model Serving with TorchServe and OpenVINO Toolkit
- Unlock the Latest Features in PyTorch 2.6 for Intel Platforms
- Accelerate PyTorch 2.7 on Intel GPUs
- Deploy Compiled PyTorch Models on Intel GPUs with AOTInductor
- PyTorch Export Quantization with Intel GPUs
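Several of the resources above cover quantizing models for faster inference. As a minimal sketch of the general idea, the snippet below uses PyTorch's built-in dynamic quantization on a toy model (this is stock PyTorch, not the Intel Extension for PyTorch or AutoRound APIs the linked articles describe, and the model and shapes are made up for illustration):

```python
import torch

# Toy float32 model; a real workload would load a pretrained network.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 4),
)

# Dynamic quantization stores Linear weights as int8 and quantizes
# activations on the fly, trading a little accuracy for smaller weights
# and faster int8 matmuls on CPU.
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

out = qmodel(torch.randn(2, 16))
print(out.shape)  # outputs stay float32; only the Linear internals are int8
```

The linked guides apply the same weight-quantization principle with more sophisticated schemes (static quantization, LLM-aware rounding via AutoRound, torchao) and Intel-specific kernels.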
AI PC
- Deploy Text Generation or Image Generation with OpenVINO Toolkit and torch.compile
- A Closer Look at Running AI on Intel® Arc™ GPUs
- Generative AI (GenAI) for AI PC Notebooks
- Intel® oneAPI DPC++/C++ Compiler Boosts PyTorch Inductor Performance
- How Intel Uses PyTorch to Empower GenAI through Intel® Arc™ GPUs
Data Center & Cloud
- Accelerate GenAI for PyTorch 2.5 on Intel® Xeon® Processors
- Use torch.compile for Accelerated CPU Inference with PyTorch Inductor
- Get Started with PyTorch Training on Intel® Gaudi® Accelerators
- Access Tutorials for Intel® Gaudi® Technology with PyTorch
- Run Models with an Intel Gaudi Accelerator Using PyTorch with Docker* Images
- Optimize Text and Image Generation Using PyTorch
- Optimize Stable Diffusion Upscaling with Diffusers and PyTorch
- Use PyTorch and DINOv2 for Multi-label Plant Species Classification
More Resources
AI Development Resources
Explore tutorials, training, documentation, and support resources for AI developers.
AI Tools
Download Intel-optimized end-to-end AI tools and frameworks.
Intel® AI Hardware
Learn what type of device best suits your AI workload, spanning CPUs, GPUs, and AI accelerators.