Inception V3 Deep Convolutional Architecture For Classifying Acute Myeloid/Lymphoblastic Leukemia
Inception V3 by Google is the third version in a series of Deep Learning Convolutional Architectures. Inception V3 was trained on the original ImageNet dataset of over 1 million training images spanning 1,000 classes (see the list of classes here); the TensorFlow version has 1,001 classes because of an additional "background" class that was not part of the original ImageNet. Inception V3 was entered in the ImageNet Large Scale Visual Recognition Challenge, where it finished as first runner-up.
This article will take you through some information about Inception V3, transfer learning, and how we use these tools in the Acute Myeloid/Lymphoblastic Leukemia AI Research Project.
Convolutional Neural Networks
Convolutional neural networks are a type of deep learning neural network. These networks are widely used in computer vision and have pushed the capabilities of the field forward over the last few years, performing significantly better than older, more traditional neural networks; however, studies show that there are trade-offs between training time and accuracy.
Transfer learning allows you to retrain the final layer of an existing model, resulting in a significant decrease in not only training time, but also the size of the dataset required. One of the most famous models available for transfer learning is Inception V3. As mentioned above, this model was originally trained on over a million images from 1,000 classes on some very powerful machines. Retraining only the final layer means that you keep the knowledge the model learned during its original training and apply it to your smaller dataset, resulting in highly accurate classifications without the need for extensive training and computational power.
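The idea above can be sketched in plain NumPy: a "pretrained" feature extractor stays frozen while only a final linear layer is trained on a small new dataset. The names and data here are purely illustrative stand-ins, not the project's actual model or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the frozen pretrained base: a fixed projection that is
# never updated (in the real project this is Inception V3 minus its
# final layer).
W_frozen = rng.standard_normal((8, 4))

def extract_features(x):
    return np.tanh(x @ W_frozen)

# A small "new" dataset with two classes.
X = rng.standard_normal((64, 8))
y = (X[:, 0] > 0).astype(int)

# Only the final layer's weights are trained.
W_head = np.zeros((4, 2))
feats = extract_features(X)  # computed once; the base never changes

for _ in range(200):
    logits = feats @ W_head
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    onehot = np.eye(2)[y]
    # Softmax cross-entropy gradient w.r.t. the head weights only.
    grad = feats.T @ (probs - onehot) / len(X)
    W_head -= 0.5 * grad

accuracy = ((feats @ W_head).argmax(axis=1) == y).mean()
```

Because the frozen base is computed once and only the small head is updated, training is fast and needs far less data than training the whole network from scratch.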
TensorFlow*-Slim image classification model library
TF-Slim is a high-level API for TensorFlow* that allows you to define, train and evaluate Convolutional Neural Networks. Because TF-Slim is a lightweight API, it is well suited to lower-powered devices.
Github: TensorFlow-Slim image classification model library
The Acute Myeloid/Lymphoblastic Leukemia AI Research Project Movidius NCS Classifier uses the following classes from the TensorFlow-Slim image classification model library:
In the project you will find these files in the AML-ALL-Classifiers/Python/_Movidius/NCS/Classes directory.
The inception_preprocessing file provides the tools required to preprocess both training and evaluation images allowing them to be used with Inception Networks.
Project Location: Python/_Movidius/NCS/Classes/inception_preprocessing.py
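To give a feel for what this preprocessing involves, here is an illustrative NumPy sketch of two core steps the evaluation path applies: a central crop (the real code uses a 0.875 central fraction) and rescaling pixel values to the [-1, 1] range Inception networks expect. The real file also resizes with bilinear interpolation, which is omitted here for brevity; the function names are my own, not the project's.

```python
import numpy as np

def central_crop(image, fraction=0.875):
    # Keep the central `fraction` of the image in each dimension.
    h, w = image.shape[:2]
    ch, cw = int(h * fraction), int(w * fraction)
    top, left = (h - ch) // 2, (w - cw) // 2
    return image[top:top + ch, left:left + cw]

def rescale(image):
    # Map uint8 pixels [0, 255] to float32 values in [-1, 1].
    return image.astype(np.float32) / 127.5 - 1.0

# Example: a random 256x256 RGB image; the crop yields 224x224.
image = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
processed = rescale(central_crop(image))
```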
The inception_utils file provides utility code that is common across all Inception versions.
Project Location: Python/_Movidius/NCS/Classes/inception_utils.py
The inception_v3 file provides the code required to create an Inception V3 network.
Project Location: Python/_Movidius/NCS/Classes/inception_v3.py
In this file you will find the inception_v3 function provided by TensorFlow; this function constructs the Inception model exactly as described in Rethinking the Inception Architecture for Computer Vision by Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens and Zbigniew Wojna.
Projects that use the Intel® Neural Compute Stick (NCS/NCS2) require the model to be frozen, a technique most often used when deploying TensorFlow models to mobile devices. Freezing a model converts its variables to constants and removes unrequired/unused nodes, such as those only needed during training. To find out more about model freezing, you can visit the Preparing models for mobile deployment TensorFlow tutorial; for the related project code, check out the NCS training program. The training program uses TF-Slim to produce a graph, calls graph_util.convert_variables_to_constants to create a TensorFlow GraphDef, and saves it as a .pb file in the model directory.
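A minimal sketch of that freezing step is shown below, using a tiny stand-in graph rather than the full Inception V3 network (the node names here are illustrative; the real project freezes the graph TF-Slim produces and uses the Inception output node names instead):

```python
# Sketch of freezing a graph with the TF 1.x-style API, via the
# compatibility module so it also runs under TensorFlow 2.
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

graph = tf.Graph()
with graph.as_default():
    # A toy stand-in for the trained network: one variable, one op.
    x = tf.placeholder(tf.float32, shape=[None, 4], name="input")
    w = tf.Variable(tf.ones([4, 2]), name="weights")
    y = tf.matmul(x, w, name="output")

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Replace every variable with a constant holding its current
        # value and strip nodes not needed to compute "output".
        frozen_graph_def = tf.graph_util.convert_variables_to_constants(
            sess, graph.as_graph_def(), ["output"])

# The frozen GraphDef is self-contained, so it can be serialized to a
# single .pb file for deployment.
with open("frozen.pb", "wb") as f:
    f.write(frozen_graph_def.SerializeToString())
```

After freezing, the GraphDef contains only constants and the ops needed for inference, which is why a single .pb file is all the NCS toolchain needs.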
Adam is founder of Peter Moss Leukemia AI Research and an Intel Software Innovator in the fields of Internet of Things, Artificial Intelligence and Virtual Reality.