Intel factories have been using computer vision for over a decade to automate defect detection and classification. The factories use TensorFlow* as the core open source library to help develop and train deep-learning models. However, the interface between the computer vision systems and TensorFlow is cumbersome and requires days of custom programming from data scientists.
The Intel® Distribution of OpenVINO™ toolkit significantly streamlines this interface. As a result, Intel IT has found it to be the most convenient and fastest way to deploy deep-learning models (in particular, deep neural networks) in the Microsoft* Windows environment.
- The OpenVINO™ toolkit helps data scientists interface more easily with powerful back-end deep-learning engines such as TensorFlow, freeing them to use their time more productively.
- There is no unique hardware to deploy—the OpenVINO™ toolkit runs on existing Intel® Xeon® processor-based servers.
- Because it is optimized for Intel® hardware, the OpenVINO™ toolkit boosted model inference performance by 10x, according to internal Intel IT measurements.
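To illustrate the kind of integration work the toolkit replaces, a typical deployment converts a trained TensorFlow model to OpenVINO's Intermediate Representation (IR) and then measures inference performance on an Intel® Xeon® CPU. The commands below are a minimal sketch using the toolkit's standard `mo` (Model Optimizer) and `benchmark_app` tools; the model directory and output paths are hypothetical, and the exact flags may vary by toolkit version.

```shell
# Convert a trained TensorFlow SavedModel (hypothetical path) to OpenVINO IR.
mo --saved_model_dir ./defect_classifier --output_dir ./ir

# Benchmark the converted model on the CPU of an existing Xeon server.
benchmark_app -m ./ir/saved_model.xml -d CPU
```

Because inference runs on the CPU device, no special accelerator hardware is needed; the same converted IR can also be loaded from application code through the OpenVINO runtime API.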
When Intel IT began using the OpenVINO™ toolkit, inference speed was not their primary concern. However, the 10x performance increase they experienced is an added benefit that opens up additional use cases. For example, they are now exploring the use of the toolkit for real-time process control, which requires millisecond response times, and they are working with the OpenVINO™ development team to add the necessary temporal convolutional network model to the Model Zoo.
Intel IT is committed to making Intel’s manufacturing processes as accurate and efficient as possible, and computer vision was an important step toward those goals. Now, the OpenVINO™ toolkit saves time so that highly qualified engineers can focus on more productive tasks rather than coding a cumbersome interface to TensorFlow. The toolkit helped Intel IT simplify development and get top inference performance from its TensorFlow models.