Train a custom object for object recognition with TensorFlow, Part 2
This is the follow-up to Part 1: Train a custom object with TensorFlow. In this section we will complete the custom object training program using TensorFlow.
Next step: in this section we will train our model to detect a custom object. To do this, we need the images and the combined TFRecords for the training and testing data, and then we need to set up the configuration of the model before we can begin training. That means we need to set up a configuration file.
Here we have two options. We can use a pre-trained model and apply transfer learning to learn a new object, or we can learn everything completely from scratch. The benefit of transfer learning is that training can be much faster and requires much less data. For this reason, we will use transfer learning today.
TensorFlow has quite a few pre-trained models with checkpoint files available, along with configuration files. You can do all of this yourself if you want by looking at their configuration documentation. The Object Detection API also provides some sample configurations to choose from.
We will work with MobileNet, using the following checkpoint and configuration files:
Place the configuration file in the training directory, and extract ssd_mobilenet_v1 in the models/object_detection directory. In the configuration file, you need to search for all PATH_TO_BE_CONFIGURED entries and change them. You may also want to change the batch size; currently it is set to 24 in my configuration file.
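To make sure no placeholder is left behind after editing, a quick check like this can scan the config text; the sample string is illustrative, not taken from the actual file:

```python
# Count leftover PATH_TO_BE_CONFIGURED placeholders in a pipeline config.
def count_placeholders(config_text):
    return config_text.count("PATH_TO_BE_CONFIGURED")

# Illustrative line still containing a placeholder to edit.
sample = 'fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"'
print(count_placeholders(sample))  # -> 1
```

If the count is not zero, the config still has paths that need to be filled in.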
Other models may have different batch sizes. If you run into a memory error, you can try reducing the batch size so the model fits in your VRAM. Finally, you also need to change the checkpoint name/path, set num_classes to 1, num_examples to 12, and label_map_path: "training/object-detect.pbtxt".
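The edits above touch config entries like the ones sketched here; the values come from this tutorial, but the fine_tune_checkpoint folder name is my assumption about where the downloaded model was unpacked:

```
# Excerpt (not the full file) of ssd_mobilenet_v1_pets.config after the edits.
model {
  ssd {
    num_classes: 1            # our single custom class
  }
}
train_config: {
  batch_size: 24              # lower this if you hit out-of-memory errors
  fine_tune_checkpoint: "ssd_mobilenet_v1_coco_11_06_2017/model.ckpt"
}
train_input_reader: {
  label_map_path: "training/object-detect.pbtxt"
}
eval_config: {
  num_examples: 12
}
```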
There are a few edits; this is my full configuration file:

Next, install the Python support libraries and set PYTHONPATH again (from models/research). Then, inside the training folder, create the label map file object-detection.pbtxt, and start the training script from models/object_detection.
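The label map file is tiny, with one entry per class; a sketch for our single class, where the display name is my assumption based on the object detected later in this article:

```
item {
  id: 1
  name: 'macncheese'
}
```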
We got output:
Depending on your GPU and the amount of training data you have, this process will take a varying amount of time; if you have a lot of training data, it may take longer. You want to aim for a loss of about 1 (or lower), and we will not stop training until we are sure the loss is below 2. You can check how the model is doing through TensorBoard: from models/research/object_detection, open a terminal and run the TensorBoard command (e.g. `tensorboard --logdir=training`), then open the link it generates; on my PC this was jacky:6006.

Loss graph (from TensorBoard)
Final step: export the trained model and create the project that uses it.
After running for about 3 hours, when the training loss is around 1, we can stop training and export the trained model. We can see that many model checkpoints have been created in the training folder; I will take the latest one, which has the lowest loss, to export from.
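Picking the newest checkpoint can be done by sorting the step numbers out of the checkpoint filenames; a small sketch, where the filenames are illustrative except for the final one used in this tutorial:

```python
import re

def latest_checkpoint_prefix(filenames):
    """Return the model.ckpt-N prefix with the highest step number N."""
    steps = []
    for name in filenames:
        # Checkpoints are saved as model.ckpt-N.index / .meta / .data-...
        m = re.match(r"model\.ckpt-(\d+)\.index$", name)
        if m:
            steps.append(int(m.group(1)))
    return "model.ckpt-%d" % max(steps)

files = ["model.ckpt-1043.index", "model.ckpt-1523.index", "model.ckpt-1939.index"]
print(latest_checkpoint_prefix(files))  # -> model.ckpt-1939
```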
To export the trained graph, TensorFlow provides an open-source script, export_inference_graph.py, in the models/research/object_detection directory. To run this script we need to execute the following commands:
1) From folder models/research:
2) From folder models/research/object_detection:
"training/model.ckpt-1939" is the path to the trained model checkpoint;
"ssd_mobilenet_v1_pets.config" is the path to the configuration file;
"macNchees_graph" is the output folder the exported graph is written to.
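Putting those three paths together, the export call can also be assembled from Python; this sketch only builds the command line (the flag names are those of export_inference_graph.py in the TF1 Object Detection API, and the checkpoint number should be adjusted to your own latest one):

```python
import subprocess  # only needed if you uncomment the launch line below

# Arguments taken from this tutorial.
export_cmd = [
    "python", "export_inference_graph.py",
    "--input_type", "image_tensor",
    "--pipeline_config_path", "training/ssd_mobilenet_v1_pets.config",
    "--trained_checkpoint_prefix", "training/model.ckpt-1939",
    "--output_directory", "macNchees_graph",
]

# To actually run it, execute from models/research/object_detection:
# subprocess.run(export_cmd, check=True)
```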
After it finishes running, we get a folder containing the necessary inference files. Next we will open an object detection program available in the TensorFlow repository and use our exported model to try to identify the object.

Here, in the original file, we need to change the path to point to our trained graph folder:
Of course, the paths to the test images will change a little depending on how you named your test images; here I set the image range from 3 to 5.
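In the detection program, those paths typically look like the following; the folder names come from this tutorial, and the test_images directory is the one used by the stock example:

```python
import os

# Point the program at our exported graph (folder name from this tutorial).
MODEL_NAME = 'macNchees_graph'
PATH_TO_CKPT = os.path.join(MODEL_NAME, 'frozen_inference_graph.pb')

# Label map created earlier, with our single class.
PATH_TO_LABELS = os.path.join('training', 'object-detect.pbtxt')
NUM_CLASSES = 1

# Test images image3.jpg through image5.jpg, as set in this article.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i))
                    for i in range(3, 6)]
```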
Finally, when running the program we get the results we expected: the program has recognized the mac and cheese.
Video

Reviewed by Jacky on December 18, 2017

Hi,
This is such a great tutorial for newbies like me. Thanks so much for writing this tutorial.
I want to ask one question: in all the SSD MobileNet versions here, decay_steps is set to 800720. Could you please help me understand what decay_steps is and why it is set to 800720 when num_steps is only 200000? Does decay_steps have any impact on model performance? It would be great if you could share a source to read more about decay_steps.
Thanks