Text Detection Demo - Microsoft Windows*

ID 689324
Updated 9/4/2019
Version Latest
Public


The Intel® Distribution of OpenVINO™ toolkit includes many different demo vision applications intended to teach developers about how to design and integrate their own applications with the toolkit. The demos span from simple image classification to human emotion detection – whatever your use-case, you can find valuable information from these demos.

The Inference Engine demos are covered under the Apache* 2.0 license, giving you the freedom to modify them for your purposes. Be aware that other parts of the Intel® Distribution of OpenVINO™ toolkit are covered by different licenses. More information can be found in C:\Program Files (x86)\IntelSWTools\openvino\licensing\readme.txt.

The Text Detection demo showcases detection and recognition of printed text in various environments, regardless of the text's angle.

More information about the demo can be found in the README distributed with the demo in the demo folder.

This demo uses a text detection network to locate text in an image and a text recognition network to read the detected text. It can also recognize handwritten digits, though at lower accuracy.

This article will walk through setting up and running the demo on Windows, using both your already available Intel® Core™ Processor and the Intel® Neural Compute Stick 2 (Intel® NCS 2). Before we begin, make sure that you meet the prerequisites.

Prerequisites

Make sure you have completed the following steps. Many of these components may already have been installed with the Intel® Distribution of OpenVINO™ Toolkit, but verify that everything is in place.

  • Microsoft Visual Studio* 2015/2017/2019 with C++, MSBuild, and the Build Tools for Visual Studio
    • For Visual Studio Installer 2017 and 2019, select the “Desktop development with C++” workload
  • CMake* 3.4 or higher
  • At least Python* 3.6.5 64-bit with the Python libraries
    • The most recent Python3 installer from https://python.org contains all needed components. Make sure you use the 64-bit version of the installer.
  • Intel® Distribution of OpenVINO™ Toolkit 2019 R2
    • Make sure that the Inference Engine Runtime for Intel® CPU and the Inference Engine Runtime for Intel® Movidius™ VPU are installed if a custom installation is desired. Otherwise, install the full package.
    • The default installation directory is C:\Program Files (x86)\IntelSWTools\openvino.

This article targets the 2019 R2 version of OpenVINO™ and uses the 2019 R2 compatible models from the Open Model Zoo. This demo should be compatible with future versions of OpenVINO™.

  • Computer with Microsoft Windows® 10 64-bit OS and an Intel® Core™ processor
    • Note that the Inference Engine works with AMD* processors in x86_64 environments that support AVX2 extensions, but without the processor-specific optimizations available on Intel® Core™ processors. An Intel® Core™ processor is recommended for use with the Intel® Distribution of OpenVINO™ Toolkit.
  • Intel® Neural Compute Stick 2 (Intel® NCS 2)
    • If you’re running a current version of Windows 10, the Intel® NCS 2 works as soon as it is plugged in. If you’re using an earlier version of Windows, or the Intel® NCS 2 is not detected when running demos, install the Movidius USB driver located in C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\inference-engine\external\MovidiusDriver\. Right-click on Movidius_VSC_Device.inf and select Install. You may need to restart your machine for the changes to take effect. A command-line alternative is sketched just after this list.
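
If you prefer to verify the prerequisites from an elevated command prompt, the following sketch checks the CMake and Python versions and, as an assumed command-line equivalent of the right-click method, installs the Movidius driver with the Windows pnputil utility. The paths assume the default installation directory.

cmake --version
python --version
REM The pnputil step is an assumed alternative to right-clicking the .inf file and selecting Install
pnputil /add-driver "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\inference-engine\external\MovidiusDriver\Movidius_VSC_Device.inf" /install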

Building the Demos

Once all of the prerequisites are met, you can build the demos. The demos ship as source code, giving you the freedom to study and modify them for your own uses. To build the demos and their Visual Studio solutions, a script named build_demos_msvc.bat is provided in C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\open_model_zoo\demos\. Run it in an elevated command prompt to build the demos using the command-line Build Tools for Visual Studio*. If you’ve already built the demos, you can skip this step.
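
For example, from an elevated command prompt:

cd "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\open_model_zoo\demos\"
build_demos_msvc.bat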

This article assumes you’ve installed Intel® Distribution of OpenVINO™ toolkit into the default install directory, located at C:\Program Files (x86)\IntelSWTools\openvino\. If you’ve changed the installation directory, make sure to change your paths to match your current system.

After the script completes, the built demos and their solutions are placed in %USERPROFILE%\Documents\Intel\OpenVINO\omz_demos_build\. The primary Visual Studio solution (.sln) is located in this folder, and the individual project files are located in their respective subfolders. The application binaries are in intel64\Release\.
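
You can confirm that the text detection demo built successfully by checking for its executable:

dir %USERPROFILE%\Documents\Intel\OpenVINO\omz_demos_build\intel64\Release\text_detection_demo.exe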

Setup

You’ll need to follow some steps to set up the proper environment variables and ensure that you have the right network models.

To begin, open an elevated command prompt and navigate to the OpenVINO installation directory. Run the setupvars.bat script in the bin\ subdirectory to set the environment variables for your current session.

cd "C:\Program Files (x86)\IntelSWTools\openvino\bin\"
setupvars.bat

You need to run this script in every new command prompt session in which you work with the toolkit. Alternatively, you can add the environment variables to your system so that they are set every time a new command prompt is opened.
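
If you prefer not to modify your system environment variables, one lightweight option is a small wrapper batch file that configures the environment and then leaves the prompt open. The file name openvino_prompt.bat below is only an example and is not part of the toolkit:

REM openvino_prompt.bat - example helper, not distributed with the toolkit
call "C:\Program Files (x86)\IntelSWTools\openvino\bin\setupvars.bat"
cmd /k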

Fetching a Model

The models that we will be using for this demo are the text-detection-0003 and text-recognition-0012 networks available in the Open Model Zoo. You can fetch these models using the Model Downloader, a script distributed with the Intel® Distribution of OpenVINO™ toolkit. The Model Downloader is a Python script located at C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\. The following commands are an example; they fetch the models and place them in a subfolder of the Model Downloader folder:

python downloader.py --name text-detection-0003
python downloader.py --name text-recognition-0012

You can use the -o flag to change the output directory if desired.
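
For example, to place the models in a folder of your choosing (the output path below is arbitrary):

python downloader.py --name text-detection-0003 -o %USERPROFILE%\Documents\openvino_models
python downloader.py --name text-recognition-0012 -o %USERPROFILE%\Documents\openvino_models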

If you also have Python 2.7 installed on your system, the python command may point to that version. In that case, use the py -3 launcher (or the full path to your Python 3 executable) to run the script with Python 3.

You can also fetch the models directly from the Open Model Zoo at:

https://download.01.org/opencv/2019/open_model_zoo/R2

Download the FP16 models for inferencing on the Intel® NCS 2. Make sure you download both the .bin file and the .xml file and place them in the same folder.

The Intel® Neural Compute Stick 2 requires an FP16 model, that is, a model with a floating-point precision of 16 bits. FP16 models allow inferencing with nearly the same accuracy as classical FP32 models, but with less computational overhead. OpenVINO 2019 R2 supports the use of FP16 models with every plugin, including the MYRIAD plugin that supports the Intel® NCS 2.

Running the Demo Using Intel® NCS 2

With your models downloaded, you’re ready to run the demo. If you’ve closed your command prompt before this point, you’ll need to rerun setupvars.bat in the OpenVINO installation directory to set the proper environment variables as above. Then navigate to the folder that contains the demo binaries:

cd %USERPROFILE%\Documents\Intel\OpenVINO\omz_demos_build\intel64\Release

The demos are command-line programs that use flags as options for running. The full list of options for the demo can be seen by running a demo with the -h flag:

text_detection_demo.exe -h

The demo requires an image or video to infer on. An example image is attached to this article:

text_detection_demo.exe -i %USERPROFILE%\Downloads\intelNCS2.jpg -m_tr "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\Retail\text_recognition\bilstm_crnn_bilstm_decoder\0012\dldt\FP16\text-recognition-0012.xml" -m_td "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\Retail\object_detection\text\pixel_link_mobilenet_v2\0003\dldt\FP16\text-detection-0003.xml" -d_tr MYRIAD -d_td MYRIAD -dt image

The demo can also use an attached webcam:

text_detection_demo.exe -i 0 -m_tr "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\Retail\text_recognition\bilstm_crnn_bilstm_decoder\0012\dldt\FP16\text-recognition-0012.xml" -m_td "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\Retail\object_detection\text\pixel_link_mobilenet_v2\0003\dldt\FP16\text-detection-0003.xml" -d_tr MYRIAD -d_td MYRIAD -dt webcam

The webcam input type tells OpenCV to look for a connected camera, and the input value 0 selects the first available camera device. For simplicity, make sure your desired camera is the only one connected.

The demo takes the input image, then recognizes and labels the text it finds. The -dt selector tells the demo what type of input is being passed in. The MYRIAD device selector activates the MYRIAD plugin, which loads the networks onto the Intel® NCS 2 and manages inference. An FP16 model is required for use with the MYRIAD plugin. The Open Model Zoo provides FP32 and FP16 versions of compatible networks, such as the ones we are using here.
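
If your build of the demo also supports a video input type (run text_detection_demo.exe -h to confirm the accepted values for -dt), the same pattern works for a video file. The input path below is only a placeholder:

text_detection_demo.exe -i %USERPROFILE%\Videos\sample.mp4 -m_tr "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\Retail\text_recognition\bilstm_crnn_bilstm_decoder\0012\dldt\FP16\text-recognition-0012.xml" -m_td "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\Retail\object_detection\text\pixel_link_mobilenet_v2\0003\dldt\FP16\text-detection-0003.xml" -d_tr MYRIAD -d_td MYRIAD -dt video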

Inferencing Using an Intel® CPU

These demos can also be run on any computer with at least a 6th generation Intel® Core™ processor.

Navigate to the location of the demo:

cd %USERPROFILE%\Documents\Intel\OpenVINO\omz_demos_build\intel64\Release

Finally, use the following command to run the demo using the CPU and the example image.

text_detection_demo.exe -i %USERPROFILE%\Downloads\intelNCS2.jpg -m_tr "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\Retail\text_recognition\bilstm_crnn_bilstm_decoder\0012\dldt\FP16\text-recognition-0012.xml" -m_td "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\tools\model_downloader\Retail\object_detection\text\pixel_link_mobilenet_v2\0003\dldt\FP16\text-detection-0003.xml" -d_tr CPU -d_td CPU -dt image

We encourage you to explore the text_detection_demo project to see how the code interacts with the networks and the Inference Engine, and to learn the best ways to integrate your own application with the Intel® Distribution of OpenVINO™ toolkit.

"