The range of computing applications today is incredibly varied—and it’s growing more so, especially with the proliferation of data, edge computing, and artificial intelligence. However, different workloads require different types of compute.
Intel is uniquely positioned to deliver a diverse mix of scalar, vector, matrix, and spatial architectures deployed in CPU, GPU, accelerator, and FPGA sockets. This gives our customers the ability to use the most appropriate type of compute where it’s needed. Combined with scalable interconnect and a single software abstraction, Intel’s multiple architectures deliver leadership across the compute spectrum to power the data-centric world.
From system boot to productivity applications to advanced workloads like cryptography and AI, most computing needs can be covered by scalar-based central processing units, or CPUs. CPUs work across a wide range of topologies with consistent, predictable performance.
Intel delivers two world-class CPU microarchitectures: the Intel Atom® processor and the Intel® Core™ processor, which also serves as the basis for our Intel® Xeon® processor line. Our scalable range of CPUs gives customers the choice to balance performance, power efficiency, and cost.
Graphics processing units, or GPUs, deliver vector-based parallel processing to accelerate workloads like graphics rendering for gaming. Because they excel at parallel computing, GPUs are also a good option for deep learning training.
Intel’s integrated GPUs bring excellent visuals to PCs. Now, we’ve announced the expansion of our portfolio to include discrete GPUs for client and data center applications starting in 2020, providing increased functionality in fast-growing areas including rich media, graphics, and analytics. By scaling our GPU IP from client to data center, we can take parallel processing performance from gigaflops to teraflops to petaflops to exaflops.
From the data center to edge devices, AI continues to permeate all aspects of the compute spectrum. To that end, we’ve developed purpose-built accelerators and added microarchitectural enhancements to our CPUs with new instructions to accelerate AI workloads.
Built from the ground up for a specific use, an application-specific integrated circuit (ASIC) is a type of processor that in most cases delivers best-in-class performance for the matrix compute workloads it was designed to support.
Intel is extending platforms with purpose-built ASICs that offer dramatic leaps in performance. These include Habana AI processors and Intel® Movidius™ Vision Processing Units (VPUs) for training and inference, which address the unique needs of the entire deep learning workflow. In addition, Intel® Deep Learning Boost (Intel® DL Boost), available on 3rd Gen Intel® Xeon® Scalable processors and 10th Gen Intel® Core™ processors, adds Vector Neural Network Instructions (VNNI), architectural extensions that increase matrix computing performance for AI applications.
Field programmable gate arrays, or FPGAs, are integrated circuits whose logic blocks and interconnects can be reconfigured after manufacturing. The circuitry inside an FPGA chip is not hard-etched; it can be reprogrammed as needed.
Intel® FPGAs provide completely customizable hardware acceleration while retaining the flexibility to evolve with rapidly changing computing needs. As blank, modifiable canvases, their purpose and power can be easily adapted again and again.
At Intel, we’re planning for the architectures of the future with research and development in next-generation computing. Among these are quantum and neuromorphic architectures.
Intel is innovating across six pillars of technology development to unleash the power of data for the industry and our customers.