Deliver Innovation with Intel® AI Processors
Whether you need AI in the data center or at the edge, our scalable, high-performance AI processors can help you unlock new possibilities, drive efficiency, and achieve more.
Overcome Critical Performance, Scalability, and Cost Challenges
Get the tools and technologies you need to deliver AI innovation wherever it’s required—and the freedom to choose the right technology for your workloads. Your data infrastructure is likely already running on and optimized for Intel.
With our portfolio of AI processors and integrated accelerators, AI developers in all industries can enable high-performance and efficient AI solutions at the edge or in the data center.
Explore Intel® AI Processors
Intel® Xeon® Scalable Processors
From edge to cloud, boost performance for machine and deep learning training and inferencing without using specialized hardware.
Intel® Max Series Product Family
Accelerate science and discovery with breakthrough CPU and GPU performance and fewer bottlenecks for memory-bound workloads.
Habana® Gaudi® and Gaudi®2
Deliver high-efficiency, scalable compute via this deep learning processor that takes the place of GPUs for training and inference workloads in the data center.
Discover the Impact of Built-In AI Accelerators
Intel® Xeon® Scalable processors offer integrated features that make advanced AI possible anywhere—no GPU required. Overcome space and cost challenges by taking advantage of purpose-built optimizations.
Up to 10x higher PyTorch performance for both real-time inference and training workloads with built-in Intel® AMX BF16 vs. the prior generation with FP32.1
Approximately 70% of data center AI inferencing runs on Intel® Xeon® processors.2
Up to 20 key machine and deep learning workloads get better performance on Intel® Xeon® processors compared to NVIDIA and AMD offerings.3
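The AMX BF16 gains cited above are reached through standard PyTorch mixed-precision APIs rather than any code changes specific to Intel hardware. A minimal sketch, assuming a recent PyTorch build and an illustrative toy model (the model and shapes here are hypothetical, not from the source): wrapping inference in CPU autocast with bfloat16 lets PyTorch dispatch matrix multiplies to AMX tile instructions via oneDNN when run on a 4th Gen Intel® Xeon® processor.

```python
import torch

# Illustrative toy model; AMX acceleration applies to the Linear
# (matmul) layers when BF16 kernels are selected by oneDNN.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 10),
).eval()

x = torch.randn(8, 64)

# CPU autocast to bfloat16: on AMX-capable Xeons this enables the
# BF16 fast path; on other CPUs it still runs, just without AMX.
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(x)

print(tuple(out.shape))
```

On hardware without AMX the same code falls back to ordinary BF16 or FP32 kernels, so it is safe to ship unconditionally.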
Put Intel® AI to Work for Your Organization Today
Browse AI Solutions Marketplace
Find Intel® partners and partner software offerings that can enhance, accelerate, and simplify your AI efforts.
Explore Intel® AI Developer Resources
Get development tools and resources to help you prepare, build, deploy, and scale your AI solutions.
Simplify AI with the Intel® Geti Platform
Unlock faster time to value with our new software that removes complexity from model development and enhances team collaboration.
See How Others Are Achieving AI Innovation with Intel
Get to Know Intel® AI
AI Solutions
Learn about our simplified approach to high-performance AI and explore use cases in your industry.
AI Hardware
Browse our flexible end-to-end portfolio for AI acceleration.
Advanced Analytics Solutions
Find out how to get better performance across the pipeline and minimize disruptions.
AI Software Tools and Resources
See how you can simplify and streamline development with our end-to-end AI toolkits and optimizations.
More AI Resources
Support
Find support information, documentation, downloads, community posts, and more.
Product and Performance Information
1. See [A16] and [A17] at intel.com/processorclaims: 4th Gen Intel® Xeon® Scalable processors. Results may vary.
2. Based on Intel market modeling of the worldwide installed base of data center servers running AI inference workloads as of December 2021.
3. See [44] at intel.com/processorclaims: 3rd Gen Intel® Xeon® Scalable processors. Results may vary.
Performance varies by use, configuration, and other factors. Learn more at intel.com/performance. Performance results are based on testing as of the dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure. Your costs and results may vary.