Note: These instructions are written for the Orange Pi PC Plus (http://www.orangepi.org/orangepipcplus/), but the steps should be similar for other ARMv7 SBCs such as the ASUS* Tinker Board S, Nano Pi M4, and Raspberry Pi 3 and 4, as long as your environment is using a 32-bit operating system. The article was written using the 2019 R1.1 release of the open source distribution of the OpenVINO™ toolkit.
Note: For general instructions on building and using the open source distribution of the OpenVINO™ toolkit with the Intel® Neural Compute Stick 2 and the original Intel® Movidius™ Neural Compute Stick please take a look at the article on that topic.
Intro
The Intel® Distribution of OpenVINO™ toolkit and the Intel® Neural Compute Stick 2 (Intel® NCS 2) are the perfect solution for vision applications in low-power and development environments, but getting set up on so many different architectures can be a challenge. ARM* platforms such as ARMv7 are becoming increasingly common for developers building and porting their solutions with low-powered single-board computers (SBCs), but they can have widely varying requirements compared to traditional x86 computing environments. While the Intel® Distribution of OpenVINO™ toolkit provides a binary installation for multiple environments, including the popular Raspberry Pi* SBC, the open source version of the OpenVINO™ toolkit lets developers build the toolkit and their application for any environment they can port it to.
These steps generally follow the article about the Intel® NCS 2 and the open source OpenVINO™ toolkit, but include specific changes to get everything running on your board. If you're using an ARM64 platform such as the ODroid-C2, follow the instructions in the ARM64 article instead.
Hardware
Make sure that you satisfy the following requirements before beginning. This will make sure that the install process goes smoothly:
- ARMv7 SBC such as the Orange Pi PC Plus
- At least an 8 GB microSD card. You may use the onboard eMMC module if one is attached, but you will need a microSD card to write the operating system to the board
- Intel® Neural Compute Stick 2
- Ethernet Internet connection or compatible wireless network
- Dedicated DC Power Adapter
- Keyboard
- HDMI Monitor
- HDMI Cable
- USB Storage Device
- Separate Windows*, Ubuntu*, or macOS* computer (like the one you’re using right now) with a compatible microSD card reader, for writing the installer image to the card
Setting Up Your Build Environment
Note: This guide assumes you are using the root user and does not include sudo in its commands. If you have created another user and are logged in as that user, run these commands as root to install correctly.
Make sure your device software is up to date:
apt update && apt upgrade -y
Some of the toolkit’s dependencies do not have prebuilt ARMv7 binaries and need to be built from source – this can increase the build time significantly compared to other platforms. Preparing to build the toolkit requires the following steps:
- Installing build tools
- Installing CMake* from source
- Installing OpenCV from source
- Cloning the toolkit
These are outlined below, step by step.
Installing Build Tools
Install build-essential:
apt install build-essential
This will install and set up the GNU C and C++ compilers. If everything completes successfully, move on to installing CMake* from source.
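If you want to quickly confirm that the compilers were installed, checking their versions is enough:
gcc --version
g++ --version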
Install CMake* from Source
The open-source version of the Intel® OpenVINO™ toolkit (and OpenCV, below) uses CMake* as its build system. The version of CMake in the package repositories for both Ubuntu 16.04 (LTS) and Ubuntu 18.04 (LTS) is too out of date for our purposes, and no official binary exists for the platform, so we must build the tool from source. As of this writing, the most recent stable supported version of CMake is 3.14.4.
To begin, fetch CMake from the Kitware* GitHub* release page, extract it, and enter the extracted folder:
wget https://github.com/Kitware/CMake/releases/download/v3.14.4/cmake-3.14.4.tar.gz
tar xvzf cmake-3.14.4.tar.gz
cd ~/cmake-3.14.4
Run the bootstrap script to install additional dependencies, then begin the build:
./bootstrap
make -j4
make install
Note: The install step is optional, but recommended. Without it, CMake will run from the build directory.
Note: The number of jobs the make command uses can be adjusted with the -j flag; it is recommended to set the number of jobs to the number of cores on your platform.
You can check the number of cores on your system with the command grep -c ^processor /proc/cpuinfo. Be aware that setting the number too high can lead to memory overruns, failing the build. If time permits, running 1 or 2 jobs is recommended.
CMake is now fully installed.
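If you ran the optional install step, a quick version check should report 3.14.4:
cmake --version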
Install OpenCV from Source
The Intel® OpenVINO™ toolkit uses the power of OpenCV to accelerate vision-based inferencing. While the CMake process for the Intel® OpenVINO™ toolkit downloads OpenCV on supported platforms if no version is installed, no prebuilt version exists for ARMv7 platforms. As such, we must build OpenCV from source.
OpenCV requires some additional dependencies. Install the following from your package manager (in this case, apt); a single-command example is shown after the list:
- git
- libgtk2.0-dev
- pkg-config
- libavcodec-dev
- libavformat-dev
- libswscale-dev
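For example, on an apt-based image all of the packages listed above can be installed with one command:
apt install git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev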
Clone the repository from the OpenCV GitHub* page, prepare the build environment, and build:
git clone https://github.com/opencv/opencv.git
cd opencv && mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local ..
make -j4
make install
OpenCV is now fully installed.
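As a rough check, assuming the default OpenCV applications were built and installed to /usr/local/bin, the opencv_version utility should report the version you just built:
opencv_version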
Download Source Code and Install Dependencies
The open-source version of the Intel® OpenVINO™ toolkit is available through GitHub. The repository folder is titled dldt, for Deep Learning Deployment Toolkit.
git clone https://github.com/opencv/dldt.git
The repository also has submodules that must be fetched:
cd ~/dldt/inference-engine
git submodule init
git submodule update --recursive
The Intel® OpenVINO™ toolkit has a number of build dependencies. The install_dependencies.sh script fetches these for you. Some changes must be made to the script for it to run properly on ARM* platforms. If any issues arise when trying to run the script, you must install each dependency individually.
Note: For images that ship with a non-Bash POSIX-compliant shell, this script (as of 2019 R1.1) uses the function keyword and a set of double brackets, which do not work in non-Bash shells.
Using your favorite text editor, make the following changes.
Original Line 8 :
function yes_or_no {
Line 8 Edit:
yes_or_no() {
Original Line 23:
if [[ -f /etc/lsb-release ]]; then
Line 23 Edit:
if [ -f /etc/lsb-release ]; then
The script also tries to install two packages that are not needed for ARM: gcc-multilib and g++-multilib. They should be removed from the script (one possible approach is sketched below); otherwise, all other packages will need to be installed individually.
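One way to remove them, assuming each of these package names sits on its own line of the script, is to delete those lines with sed; review the script afterwards to confirm nothing else was changed:
# Assumes gcc-multilib and g++-multilib each occupy their own line in the script
sed -i '/gcc-multilib/d;/g++-multilib/d' install_dependencies.sh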
Run the script to install:
sh ./install_dependencies.sh
If the script finishes successfully, you are ready to build the toolkit. If something has failed at this point, make sure that you install any listed dependencies and try again.
Building
The first step of the build is telling the system where the installation of OpenCV is located. Use the following command:
export OpenCV_DIR=/usr/local/lib/cmake/opencv4
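To confirm the path is correct, check that it actually contains the OpenCV CMake configuration file; if nothing is listed, point OpenCV_DIR at the directory where OpenCVConfig.cmake was installed:
ls "${OpenCV_DIR}"/OpenCVConfig.cmake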
The toolkit uses a CMake build system to guide and simplify the process. To build both the Inference Engine and the MYRIAD plugin for the Intel® NCS 2, use the following commands:
cd ~/dldt/inference-engine
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release \
-DENABLE_MKL_DNN=OFF \
-DENABLE_CLDNN=OFF \
-DENABLE_GNA=OFF \
-DENABLE_SSE42=OFF \
-DTHREADING=SEQ \
..
make
If the make command fails because of an issue with an OpenCV library, make sure that you’ve told the system where your installation of OpenCV is. If the build completes at this point, Intel® OpenVINO™ toolkit is ready to run. The builds are placed in <dldt>/inference-engine/bin/armv7/Release/.
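To see what was produced, you can list the output directory; it typically contains the sample applications, with the Inference Engine libraries and the MYRIAD plugin under a lib subfolder:
ls ~/dldt/inference-engine/bin/armv7/Release/
ls ~/dldt/inference-engine/bin/armv7/Release/lib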
Verifying Installation
After successfully completing the inference engine build, you should verify that everything is set up correctly. To verify that the toolkit and Intel® NCS 2 work on your device, complete the following steps:
- Run the sample program benchmark_app to confirm that all libraries load correctly
- Download a trained model
- Select an input for the neural network
- Configure the Intel® NCS 2 Linux* USB driver
- Run benchmark_app with the selected model and input
Sample Programs: benchmark_app
The Intel® OpenVINO™ toolkit includes some sample programs that utilize the Inference Engine and the Intel® NCS 2. One of the programs is benchmark_app, a tool for estimating deep learning inference performance. It can be found in ~/dldt/inference-engine/bin/armv7/Release.
Run the following command in the folder to test benchmark_app:
./benchmark_app -h
It should print a help dialog, describing the available options for the program.
Downloading a Model
The program needs a model to pass the input through. Models for Intel® OpenVINO™ toolkit in IR format can be obtained by:
- Using the Model Optimizer to convert an existing model from one of the supported frameworks into IR format for the Inference Engine
- Using the Model Downloader tool to download from the Open Model Zoo
- Downloading the IR files directly from download.01.org
For our purposes, downloading directly is easiest. Use the following commands to grab an age and gender recognition model:
cd ~
mkdir models
cd models
wget https://download.01.org/opencv/2019/open_model_zoo/R1/models_bin/age-gender-recognition-retail-0013/FP16/age-gender-recognition-retail-0013.xml
wget https://download.01.org/opencv/2019/open_model_zoo/R1/models_bin/age-gender-recognition-retail-0013/FP16/age-gender-recognition-retail-0013.bin
Note: The Intel® NCS 2 requires models that are optimized for the 16-bit floating point format known as FP16. If your model differs from the example, it may require conversion to FP16 using the Model Optimizer.
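If you do need to convert a model of your own, a minimal sketch of an FP16 conversion with the Model Optimizer bundled in the dldt repository might look like the following (my_model.caffemodel is a hypothetical Caffe* model, and the Model Optimizer prerequisites must be installed first):
cd ~/dldt/model-optimizer
# my_model.caffemodel is a placeholder; substitute your own model file
python3 mo.py --input_model ~/models/my_model.caffemodel --data_type FP16 --output_dir ~/models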
Input for the Neural Network
The last item needed is input for the neural network. For the model we’ve downloaded, you need a 62x62 image with 3 channels of color. This article includes an archive that contains an image you can use; it is used in the example below. Copy the archive to a USB storage device, connect the device to your board, and use the following commands to mount the drive and copy its contents to a folder called OpenVINO in your home directory.
Use the lsblk command to list the available block devices, and make a note of your connected USB drive. Use its name in place of sdX in the commands below:
lsblk
mkdir /media/usb
mount /dev/sdX /media/usb
mkdir ~/OpenVINO
cp /media/usb/archive_openvino.tar.gz ~/OpenVINO
tar xvzf ~/OpenVINO/archive_openvino.tar.gz -C ~/OpenVINO
The OpenVINO folder should contain two images, a text file, and a folder named squeezenet. Note that the name of the archive may differ; it should match what you downloaded from this article.
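A quick listing should show roughly those contents; the exact file names depend on the archive you downloaded:
ls ~/OpenVINO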
Configure the Intel® NCS 2 Linux* USB Driver
Some udev rules need to be added so that the system recognizes Intel® NCS 2 USB devices. The extracted archive contains a file called 97-myriad-usbboot.rules_.txt, which holds those rules. Follow the commands below to add the rules to your device:
Note: If the current user is not a member of the users group, run the following command and reboot your device:
sudo usermod -a -G users "$(whoami)"
While logged in as a user in the users group:
cd ~/OpenVINO
cp 97-myriad-usbboot.rules_.txt /etc/udev/rules.d/97-myriad-usbboot.rules
udevadm control --reload-rules
udevadm trigger
ldconfig
The USB driver rules should now be installed. If the Intel® NCS 2 is not detected when running demos, restart your device and try again.
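If you want to confirm that the stick is visible at the USB level before running anything, lsusb (from the usbutils package) can help; the Intel® NCS 2 typically enumerates under the Movidius vendor ID 03e7:
# 03e7 is the Movidius vendor ID the device typically reports
lsusb | grep -i 03e7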
Running benchmark_app
With the model downloaded, an input image available, and the Intel® NCS 2 plugged into a USB port, use the following commands to run benchmark_app:
cd ~/dldt/inference-engine/bin/armv7/Release
./benchmark_app -i ~/OpenVINO/president_reagan-62x62.png -m ~/models/age-gender-recognition-retail-0013.xml -pp ./lib -api async -d MYRIAD
This will run the application with the selected options. The -d flag tells the program which device to use for inferencing; MYRIAD activates the MYRIAD plugin, utilizing the Intel® NCS 2. After the command successfully executes, the terminal will display statistics for inferencing.
If the application ran successfully on your Intel® NCS 2, then Intel® OpenVINO™ toolkit and Intel® NCS 2 are set up correctly for use on your device.
Inferencing at the Edge
Now that you’ve confirmed that your ARMv7 device is set up and working with the Intel® NCS 2, you can start building and deploying your AI applications, or use one of the prebuilt sample applications to test your use case. Next, we will perform a simple image classification using SqueezeNet v1.1 and an image downloaded to the board. To simplify things, the attached archive contains both the image and the network. The SqueezeNet v1.1 network has already been converted to IR format for use by the Inference Engine.
The following command takes the cat.jpg image included in the archive, uses the squeezenet1.1 network model, uses the MYRIAD plugin to load the model onto the connected Intel® NCS 2, and infers the output. As before, the sample application is located in <dldt>/inference-engine/bin/armv7/Release/:
./classification_sample -i ~/OpenVINO/cat.jpg -m ~/OpenVINO/squeezenet/squeezenet1.1.xml -d MYRIAD
The program will output a list of the top 10 inferencing results and the average image throughput.
If you’ve come this far, then your device is set up, verified, and ready for you to begin prototyping and deploying your own AI applications using the power of the Intel® OpenVINO™ toolkit.
"