Optimize Networks for the Intel® Neural Compute Stick 2 (Intel® NCS 2)

ID 688978
Updated 11/14/2018
Version Latest
Public

This document pertains to the Intel® Distribution of OpenVINO™ toolkit and neural compute devices based on Intel® Movidius™ Myriad™ X such as the Intel® Neural Compute Stick 2 (Intel® NCS 2).

Overview

The Neural Compute Engine (NCE) is an on-chip hardware block available in neural compute devices based on Intel® Movidius™ Myriad™ X. It is designed to run deep neural networks in hardware at much higher speeds than previous generations of the Myriad VPU, while maintaining low power consumption and without compromising accuracy. With two NCEs, the Intel® Movidius™ Myriad™ X architecture is capable of 1 TOPS (1 trillion operations per second) of compute performance on deep neural network inferences.

The Model Optimizer in the OpenVINO™ toolkit automatically optimizes networks so that the device can process the appropriate layers on the onboard NCEs.
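For illustration, a minimal sketch of invoking the Model Optimizer from Python to produce FP16 IR files for the MYRIAD plugin is shown below; the script and model paths are assumptions to adjust for your installation.

```python
import subprocess
from pathlib import Path

# Assumed locations -- adjust to your OpenVINO toolkit installation and model.
MO_SCRIPT = Path("/opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo.py")
MODEL = Path("models/squeezenet1.1.caffemodel")

# The MYRIAD plugin (Intel NCS 2) expects FP16 IR, so request
# --data_type FP16 when generating the .xml/.bin pair.
subprocess.run(
    ["python3", str(MO_SCRIPT),
     "--input_model", str(MODEL),
     "--data_type", "FP16",
     "--output_dir", "ir"],
    check=True,
)
```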

Supported Hardware Features

Networks that use the following supported features can be compiled to run as hardware networks on the NCEs. If your network uses other, non-hardware features, it can still run partially in hardware on the NCEs.

  • Multichannel convolution
    • Matrix-Matrix Multiply/Accumulate
    • Optional non-overlapping Max and Avg pooling
  • Pooling
    • Overlapping Max and Avg pooling
  • Fully connected
    • Vector-Matrix Multiply/Accumulate
  • Post processing
    • Bias, Scale, ReLU-x, PReLU

Supported Hardware Networks

To see the list of networks that have been validated to compile and run as hardware networks in this release, refer to the Release Notes.

Using the Intel® Distribution of OpenVINO™ toolkit Inference Engine API with Hardware Networks

No application changes are required to use OpenVINO™ toolkit with hardware networks.
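As an illustration, a minimal sketch of loading an IR model onto the device through the Inference Engine Python API is shown below; the file names, input handling, and input shape are placeholders, and exact class names can vary slightly between toolkit releases.

```python
import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin

# Placeholder IR files produced by the Model Optimizer (FP16 for MYRIAD).
net = IENetwork(model="model.xml", weights="model.bin")

# Target the Intel NCS 2 through the MYRIAD plugin; supported layers are
# scheduled onto the NCEs automatically, with no application changes.
plugin = IEPlugin(device="MYRIAD")
exec_net = plugin.load(network=net)

# Run inference on a dummy input; replace the shape with your network's input shape.
input_blob = next(iter(net.inputs))
dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)
result = exec_net.infer(inputs={input_blob: dummy})
```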

Hardware acceleration is controlled through the network configuration option HW_STAGES_OPTIMIZATION, which is on by default; it can be turned off or back on. The Inference Engine supports different layers for different hardware targets. For a list of supported devices and layers, refer to the Inference Engine Guide.
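As a sketch, the option can be passed through the plugin configuration when the network is loaded; the key string used here (VPU_HW_STAGES_OPTIMIZATION) and its YES/NO values are assumptions to verify against the configuration reference for your toolkit version.

```python
from openvino.inference_engine import IENetwork, IEPlugin

net = IENetwork(model="model.xml", weights="model.bin")
plugin = IEPlugin(device="MYRIAD")

# Assumed VPU plugin key: "YES" (the default) lets supported layers run as
# hardware stages on the NCEs; "NO" forces software execution of all layers.
exec_net = plugin.load(network=net,
                       config={"VPU_HW_STAGES_OPTIMIZATION": "NO"})
```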

"