OpenVINO Sample Deep Dive - Hello Classification C++

ID: 689041
Updated: 9/4/2019
Version: Latest
Public

The Intel® Distribution of OpenVINO™ toolkit and its open source component, the OpenVINO™ Deep Learning Deployment Toolkit (DLDT), include a variety of samples that explore the Inference Engine API at the core of OpenVINO™. Perhaps the most basic and most important is the Hello Classification sample. This sample walks through loading a classification network and an input image into the Inference Engine API (IE API), running the image through the network, and processing the resulting output. This simple workflow is at the center of computer vision applications and is a great introduction to the Inference Engine API.

The article below walks through the code in the sample itself, breaking down the usage of the IE API to better teach developers how to integrate the Inference Engine into their code.

This article targets the 2019 R2 version of OpenVINO™ and uses the 2019 R2 compatible models from the Open Model Zoo. The sample should remain compatible with future versions of OpenVINO™.

Hello Classification C++

You can find the Hello Classification C++ sample inside the OpenVINO installation directory or in the DLDT repository. Default locations for Microsoft Windows® 10 and Ubuntu* are below:

Windows: C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\inference_engine\samples\hello_classification

Ubuntu: /opt/intel/openvino/deployment_tools/inference_engine/samples/hello_classification

These samples are licensed under the Apache-2.0 License, giving you the freedom to modify and redistribute the code under the terms of the license. More information on the Apache License, Version 2.0 is available at https://www.apache.org/licenses/LICENSE-2.0. Do note that other parts of the Intel® Distribution of OpenVINO™ toolkit are covered under different licenses. You can find more license information inside the directory of your OpenVINO™ installation.

main.cpp

The Hello Classification application is contained in a single C++ source file titled main.cpp. This file includes all of the program dependencies and the main entry point for the program. Alongside it is CMakeLists.txt, a file that helps the CMake* build system add the program to the build solution, as well as a README file that includes additional information about the sample. Let’s dive into main.cpp.

Developing on Ubuntu: You must set up the proper environment variables before building this sample. Source the setupvars.sh script located in your OpenVINO installation at /opt/intel/openvino/bin/ so your system linker can find the proper libraries. You can also source this script from your shell's configuration file, such as .bashrc for bash* shells, so it loads whenever you open a new terminal.

Developing on Windows: You must set up the proper environment variables before building this sample. Run the setupvars.bat script located in your OpenVINO installation at C:\Program Files (x86)\IntelSWTools\openvino\bin\ to set up the current Command Prompt session. You can also launch Visual Studio* with these variables by running the setupvars.bat script and then using the devenv /UseEnv command so the Visual Studio session uses the current Command Prompt's environment variables. You can adjust the Project Settings in Visual Studio to add additional libraries and include directories for the OpenVINO toolkit. See Microsoft Visual Studio documentation for more information.

Headers and main()

#include <vector>
#include <memory>
#include <string>
#include <samples/common.hpp>

#ifdef UNICODE
#include <tchar.h>
#endif

#include <inference_engine.hpp>
#include <samples/ocv_common.hpp>
#include <samples/classification_results.h>

using namespace InferenceEngine;

#ifndef UNICODE
#define tcout std::cout
#define _T(STR) STR
#else
#define tcout std::wcout
#endif

#ifndef UNICODE
int main(int argc, char *argv[]) {
#else
int wmain(int argc, wchar_t *argv[]) {

OpenVINO™ handles the computationally complex tasks. The application pulls in only a few dependencies to run inference; everything else is handled by the Inference Engine API and by a header common to all of the samples. There are a few headers from the C++ standard library:

  • vector: enables the use of std::vector to store input and output data dynamically
  • memory: provides smart pointers used to manage and quickly access object data
  • string: provides std::string for more readable information such as file paths and layer names

The sample then includes a header shared by all of the samples. This common header can be found at the following locations for Ubuntu and Windows:

Windows: C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\inference_engine\samples\common\samples\common.hpp

Ubuntu: /opt/intel/openvino/deployment_tools/inference_engine/samples/common/samples/common.hpp

This header pulls in additional standard headers and defines multiple helper functions and structs that support the samples: error listeners, functions for writing inference results to files, functions for drawing rectangles over images and video, utilities for displaying performance statistics, structures for describing detected objects in code, and more. It is good code to refer to when you want your own applications to behave like those shipped with the toolkit.

Next are the OpenVINO™ specific declarations:

  • inference_engine: provides primitives for creating and accessing the Inference Engine
  • ocv_common: provides a similar function to common.hpp, but for use with OpenCV specific code, including the wrapMat2Blob() helper used later in the sample
  • classification_results: a set of helper functions for processing the output of the neural network

Namespaces: this sample adds a using-directive for the InferenceEngine namespace so its classes and functions can be called without the InferenceEngine:: prefix. You should use the namespace scoping that is appropriate for your application.
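
The snippet below is a minimal illustration of the difference between the two styles. It is not part of the sample and assumes only that the Inference Engine headers and libraries are on your build paths.

// A minimal illustration of explicit scoping versus the sample's using-directive.
#include <inference_engine.hpp>

int main() {
    // Explicitly qualified; no using-directive needed:
    InferenceEngine::Core ie_scoped;

    // With the sample's using-directive, the prefix can be dropped:
    using namespace InferenceEngine;
    Core ie_short;

    return 0;
}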

main() is the entry point for the program. Here it accepts command line arguments that specify the model, the input image, and the inference device.

Parsing Arguments

// ------------------------------ Parsing and validation of input args ---------------------------------
        if (argc != 4) {
            tcout << _T("Usage : ./hello_classification <path_to_model> <path_to_image> <device_name>") << std::endl;
            return EXIT_FAILURE;
        }

        const file_name_t input_model{argv[1]};
        const file_name_t input_image_path{argv[2]};
        const std::string device_name{argv[3]};
        // -----------------------------------------------------------------------------------------------------

This helpfully labeled section of code processes the arguments passed to the application on the command line through the standard argument vector. First, it checks that exactly four arguments were supplied (the executable name plus three parameters); if not, the program prints a usage message and returns the EXIT_FAILURE macro. It then stores the three parameters in variables holding the path to the model, the path to the input image, and the name of the inference device.

Inference Engine Instantiation

// --------------------------- 1. Load inference engine instance -------------------------------------
        Core ie;
// -----------------------------------------------------------------------------------------------------

The first step is creating the main Inference Engine instance, named ie.

This version of the sample targets OpenVINO™ 2019 R2, which introduces the new Core API that simplifies and speeds up access to the Inference Engine. Earlier versions instantiate and access the Inference Engine in different ways; refer to the version-specific documentation at docs.openvinotoolkit.org for more information.
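
Because the Core object manages the available device plugins, you can also query it before picking a device_name. The sketch below is not part of the sample and assumes Core::GetAvailableDevices() is present in your release.

// Sketch: list the inference devices the Core instance can see.
// Assumes InferenceEngine::Core::GetAvailableDevices() exists in your
// OpenVINO version; the names printed are what you would pass on the command line.
#include <inference_engine.hpp>
#include <iostream>

int main() {
    InferenceEngine::Core ie;
    for (const std::string &device : ie.GetAvailableDevices()) {
        std::cout << "Available device: " << device << std::endl;  // e.g. CPU, GPU, MYRIAD
    }
    return 0;
}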

Load Network

// --------------------------- 2. Read IR Generated by ModelOptimizer (.xml and .bin files) ------------
        CNNNetReader network_reader;
        network_reader.ReadNetwork(fileNameToString(input_model));
        network_reader.ReadWeights(fileNameToString(input_model).substr(0, input_model.size() - 4) + ".bin");
        network_reader.getNetwork().setBatchSize(1);
        CNNNetwork network = network_reader.getNetwork();
// -----------------------------------------------------------------------------------------------------

This is where OpenVINO™ loads your pretrained neural network. It does the following:

  • Creates a CNNNetReader, an object that maps the topology of a network for the Inference Engine
  • Fetches the network's topological description (.xml) and weights (.bin) and assigns them to the reader object
  • Assigns a batch size of one to the network held by network_reader to match the single input image
  • Creates a network object, assigning the data gathered by the network_reader object

The network object IS the network – it contains all of the information needed to infer on the input image.
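
If you want to confirm what the network expects before configuring it, you can inspect its inputs and outputs directly. The helper below is a sketch that only uses CNNNetwork accessors shown in this article; the function name is illustrative and not part of the sample.

// Sketch: print each input's dimensions and each output's precision for a loaded network.
#include <inference_engine.hpp>
#include <iostream>

void describeNetwork(InferenceEngine::CNNNetwork &network) {
    for (const auto &input : network.getInputsInfo()) {
        const InferenceEngine::SizeVector dims = input.second->getTensorDesc().getDims();
        std::cout << "Input \"" << input.first << "\" dims:";
        for (size_t d : dims) std::cout << " " << d;   // typically N, C, H, W for image models
        std::cout << std::endl;
    }
    for (const auto &output : network.getOutputsInfo()) {
        std::cout << "Output \"" << output.first << "\" precision: "
                  << output.second->getPrecision().name() << std::endl;
    }
}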

Load Inputs

// --------------------------- 3. Configure input & output ---------------------------------------------
        // --------------------------- Prepare input blobs -----------------------------------------------------
        InputInfo::Ptr input_info = network.getInputsInfo().begin()->second;
        std::string input_name = network.getInputsInfo().begin()->first;

        /* Mark input as resizable by setting of a resize algorithm.
         * In this case we will be able to set an input blob of any shape to an infer request.
         * Resize and layout conversions are executed automatically during inference */
        input_info->getPreProcess().setResizeAlgorithm(RESIZE_BILINEAR);
        input_info->setLayout(Layout::NHWC);
        input_info->setPrecision(Precision::U8);

        // --------------------------- Prepare output blobs ----------------------------------------------------
        DataPtr output_info = network.getOutputsInfo().begin()->second;
        std::string output_name = network.getOutputsInfo().begin()->first;

        output_info->setPrecision(Precision::FP32);
// -----------------------------------------------------------------------------------------------------

The next step prepares the input blob for the network. First, the sample grabs a pointer to the input info from the network object so it can configure how the eventual input is handled. Three lines then specify the behavior of the input blob: a resize algorithm is chosen so the image is preprocessed into a size usable by the network; the NHWC layout is assigned to the blob, which matches the interleaved layout of the OpenCV image wrapped later in the sample; and the precision of the input blob is set, in this case to an unsigned 8-bit integer (U8).

After readying the input blob, the application sets up the output blob. The output only needs the information from the output layer of the network, which it fetches from the network object. It also sets the precision of the output, in this case a 32-bit floating-point value (FP32).
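
The sample configures only the first input and output because a typical classification model has exactly one of each. For a model with several inputs or outputs, you could iterate the maps instead. The sketch below assumes the same network object and is illustrative only.

// Sketch: apply the same configuration to every input and output of the network.
InferenceEngine::InputsDataMap inputs = network.getInputsInfo();
for (auto &item : inputs) {
    item.second->getPreProcess().setResizeAlgorithm(InferenceEngine::RESIZE_BILINEAR);
    item.second->setLayout(InferenceEngine::Layout::NHWC);
    item.second->setPrecision(InferenceEngine::Precision::U8);
}

InferenceEngine::OutputsDataMap outputs = network.getOutputsInfo();
for (auto &item : outputs) {
    item.second->setPrecision(InferenceEngine::Precision::FP32);
}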

Once the blobs are prepared, the model is ready to be loaded to the inference device.

Inference Device

// --------------------------- 4. Loading model to the device ------------------------------------------
        ExecutableNetwork executable_network = ie.LoadNetwork(network, device_name);
// -----------------------------------------------------------------------------------------------------

The ExecutableNetwork class combines the prepared network with the inference device of choice, creating an object that can spawn inference requests to do the actual computation on the input. The device_name is taken from the command line; it should match the name of a plugin available in the Intel® Distribution of OpenVINO™ toolkit (or the open source OpenVINO™ DLDT, depending on your requirements). For example, you would use the MYRIAD plugin to run on the Intel® Neural Compute Stick 2.
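
Because the device is selected by a plain string, switching hardware usually means changing nothing but that argument. The sketch below is illustrative and not part of the sample; which devices actually work depends on your hardware, and the configuration key shown is a CPU plugin option that you should verify against the plugin documentation for your release.

// Sketch: load the same prepared network onto different plugins.
InferenceEngine::ExecutableNetwork on_cpu    = ie.LoadNetwork(network, "CPU");
InferenceEngine::ExecutableNetwork on_myriad = ie.LoadNetwork(network, "MYRIAD");  // Intel® Neural Compute Stick 2

// LoadNetwork also accepts a configuration map. This example limits the CPU
// plugin's thread count; the key is assumed from the CPU plugin documentation.
InferenceEngine::ExecutableNetwork tuned =
    ie.LoadNetwork(network, "CPU", {{"CPU_THREADS_NUM", "2"}});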

Inference Request

// --------------------------- 5. Create infer request -------------------------------------------------
        InferRequest infer_request = executable_network.CreateInferRequest();
// -----------------------------------------------------------------------------------------------------

To infer on a network, you must create an inference request and fill it with your input data. The inference request takes the numerical representation of your input and runs it through the network to generate an output.
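
The sample runs its request synchronously, but the same object also supports asynchronous execution. A minimal sketch of that alternative, using the request created above:

// Sketch: the asynchronous alternative to infer_request.Infer(). StartAsync()
// returns immediately and Wait() blocks until the result is ready, letting the
// application overlap other work (for example, reading the next image).
infer_request.StartAsync();
// ... other useful work could happen here ...
infer_request.Wait(InferenceEngine::IInferRequest::WaitMode::RESULT_READY);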

Prepare Input

// --------------------------- 6. Prepare input --------------------------------------------------------
        /* Read input image to a blob and set it to an infer request without resize and layout conversions. */
        cv::Mat image = cv::imread(input_image_path);
        Blob::Ptr imgBlob = wrapMat2Blob(image);  // just wrap Mat data by Blob::Ptr without allocating of new memory
        infer_request.SetBlob(input_name, imgBlob);  // infer_request accepts input blob of any size
// -----------------------------------------------------------------------------------------------------

Now we prepare our input image. OpenCV reads it into a cv::Mat, an n-dimensional dense numerical array that contains all of the image's data. The Mat is then wrapped in a blob, without copying, and passed to the request. During inference this blob is resized and converted to match the input blob on the model.
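
wrapMat2Blob() avoids a copy by pointing the blob at the cv::Mat's memory. If you prefer to fill the request's own pre-allocated input blob, a rough alternative looks like the sketch below. It assumes the U8/NHWC configuration from step 3, assumes the blob dimensions follow the Inference Engine's {N, C, H, W} ordering convention, reuses the image variable from the sample, and needs <cstring> for std::memcpy.

// Sketch: fill the request's own input blob instead of wrapping the cv::Mat.
InferenceEngine::Blob::Ptr input_blob = infer_request.GetBlob(input_name);
const InferenceEngine::SizeVector dims = input_blob->getTensorDesc().getDims();
const size_t channels = dims[1], height = dims[2], width = dims[3];

// Resize the BGR image to the network's spatial size; a fresh cv::resize
// result is stored contiguously, so its interleaved bytes already match NHWC order.
cv::Mat resized;
cv::resize(image, resized, cv::Size(static_cast<int>(width), static_cast<int>(height)));

uint8_t *blob_data = input_blob->buffer().as<uint8_t *>();
std::memcpy(blob_data, resized.data, channels * height * width);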

Do Inference

// --------------------------- 7. Do inference --------------------------------------------------------
        /* Running the request synchronously */
        infer_request.Infer();
// -----------------------------------------------------------------------------------------------------

Finally, we do inference. The inference request takes our input blob, sends it to the model's input blob for processing, runs it through the network loaded on your inference device, and creates an output blob that contains the data produced by the neural network.
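
Because Infer() blocks until the result is ready, wrapping the call with a timer gives a rough single-image latency figure. This is a sketch for experimentation, not part of the sample, and it needs <chrono>.

// Sketch: time the synchronous inference call.
auto start = std::chrono::steady_clock::now();
infer_request.Infer();
auto stop = std::chrono::steady_clock::now();
double ms = std::chrono::duration<double, std::milli>(stop - start).count();
std::cout << "Inference took " << ms << " ms" << std::endl;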

Process Output

// --------------------------- 8. Process output ------------------------------------------------------
        Blob::Ptr output = infer_request.GetBlob(output_name);
        // Print classification results
        ClassificationResult classificationResult(output, {fileNameToString(input_image_path)});
        classificationResult.print();
// -----------------------------------------------------------------------------------------------------

The inference request now contains a blob generated by the output layer of the network. ClassificationResult, a class provided by one of the program's header files (classification_results.h), takes a pointer to the output blob and displays the information it contains, printing the results to the shell the program was run in.
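
ClassificationResult hides the details, but reading the raw FP32 blob yourself shows what it is doing underneath. The sketch below, which assumes the output blob from step 8 and <algorithm>, simply takes the arg-max over the class scores.

// Sketch: find the highest-scoring class index by hand from the FP32 output.
const float *scores = output->buffer().as<float *>();
const size_t num_classes = output->size();              // with batch size 1, one score per class
const float *best = std::max_element(scores, scores + num_classes);
std::cout << "Top class id: " << (best - scores)
          << " with confidence " << *best << std::endl;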

You’ve now done image classification, an extraordinarily complex computing task simplified by the use of the OpenVINO™ toolkit. Your eventual use case may or may not include image classification, but this programmatic flow is at the center of any AI application, especially those built with the Intel® Distribution of OpenVINO™ toolkit.

To summarize, your AI application will:

  1. Prepare network
  2. Prepare inputs
  3. Infer
  4. Process output

The sample above and the other samples included with the toolkit follow this flow. The Inference Engine API and the OpenVINO™ toolkit provide tools to simplify these steps and optimize them for use on Intel® architectures.
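
As a recap, the entire flow condenses to a few dozen lines. The sketch below mirrors the sample's 2019 R2 API calls but is not the sample itself: error handling and argument parsing are omitted, and the model path, image path, and device name are illustrative placeholders.

// Condensed sketch of the four steps using the same 2019 R2 API as the sample.
#include <inference_engine.hpp>
#include <samples/ocv_common.hpp>
#include <opencv2/opencv.hpp>
#include <iostream>
#include <algorithm>

int main() {
    using namespace InferenceEngine;

    // 1. Prepare the network and load it onto a device (placeholder paths and device)
    Core ie;
    CNNNetReader reader;
    reader.ReadNetwork("model.xml");
    reader.ReadWeights("model.bin");
    reader.getNetwork().setBatchSize(1);
    CNNNetwork network = reader.getNetwork();

    InputInfo::Ptr input_info = network.getInputsInfo().begin()->second;
    const std::string input_name = network.getInputsInfo().begin()->first;
    input_info->getPreProcess().setResizeAlgorithm(RESIZE_BILINEAR);
    input_info->setLayout(Layout::NHWC);
    input_info->setPrecision(Precision::U8);
    const std::string output_name = network.getOutputsInfo().begin()->first;
    network.getOutputsInfo().begin()->second->setPrecision(Precision::FP32);

    ExecutableNetwork exec = ie.LoadNetwork(network, "CPU");
    InferRequest request = exec.CreateInferRequest();

    // 2. Prepare the input
    cv::Mat image = cv::imread("input.jpg");
    request.SetBlob(input_name, wrapMat2Blob(image));

    // 3. Infer
    request.Infer();

    // 4. Process the output
    Blob::Ptr output = request.GetBlob(output_name);
    const float *scores = output->buffer().as<float *>();
    const float *best = std::max_element(scores, scores + output->size());
    std::cout << "Top class id: " << (best - scores) << std::endl;
    return 0;
}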

"