## Previous Post
In the last article, we covered everything from setting up TensorFlow to collecting and preparing the data needed for training. From here, we start the actual training.
※ I am a complete novice in machine learning.
## Process
- Prepare learning resources
  - Set up TensorFlow
  - Collect images
  - Labeling
  - Create TFRecord
- Training
  - Prepare a config file for model training
  - Train
  - Check and export the trained model
- iOS
## Prepare a config file for model training
The machine learns the collected images by fine-tuning a pre-trained model. Recognition accuracy and speed seem to vary depending on which pre-trained model you use; the site provides a table comparing the recognition speed and accuracy of each model. Note, however, that those numbers were measured on an Nvidia GeForce GTX TITAN X GPU.
Download ssd-mobilenet-v1 from the site above and move the following files to tensorflow-stamp-model/:
- model.ckpt.meta
- model.ckpt.index
- model.ckpt.data-00000-of-00001
- pipeline.config
Then edit pipeline.config. The parts to change are basically the ones marked PATH_TO_BE_CONFIGURED.
```
# After modification
num_classes: 3  # number of label classes
fine_tune_checkpoint: "model.ckpt"
label_map_path: "annotations/label_map.pbtxt"
input_path: "train.record"
label_map_path: "annotations/label_map.pbtxt"
input_path: "val.record"
```
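`num_classes` must match the number of `item` entries in `annotations/label_map.pbtxt`. Here is a minimal Python sketch (stdlib only; the sample label map contents below are illustrative, not from the original project) that counts them:

```python
import re

# Illustrative label map contents; the real file lives at
# annotations/label_map.pbtxt in the project.
label_map_text = """
item {
  id: 1
  name: 'stamp_a'
}
item {
  id: 2
  name: 'stamp_b'
}
item {
  id: 3
  name: 'stamp_c'
}
"""

# Each top-level "item {" block defines one class.
num_classes = len(re.findall(r"^item\s*\{", label_map_text, re.MULTILINE))
print(num_classes)  # should match num_classes in pipeline.config
```

If the two numbers disagree, training will either ignore some labels or fail outright, so this is worth double-checking.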
## Training
Now that everything is ready, it's time to start training. Training progress is saved as checkpoints in the model folder, so if you interrupt training, you can resume it by running the same command again.
Also, if you comment out fine_tune_checkpoint, training starts from scratch instead of from the pre-trained model.
```bash
python object_detection/model_main.py \
    --logtostderr \
    --model_dir=model \
    --pipeline_config_path=pipeline.config
```
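As mentioned above, a sketch of the relevant pipeline.config fragment with the checkpoint line commented out (training then starts from randomly initialized weights instead of the downloaded model):

```
# fine_tune_checkpoint: "model.ckpt"
```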
In my environment, without a GPU, it took about two days of repeated stopping and resuming to reach 15,638 steps.
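When resuming, training picks up from the newest checkpoint in the model folder. A small stdlib sketch (the filenames below are illustrative; in practice you would get the real list with `os.listdir("model")`) that finds the highest step number among checkpoint files:

```python
import re

# Illustrative checkpoint filenames as they appear in the model/ folder.
filenames = [
    "model.ckpt-12000.index",
    "model.ckpt-15638.index",
    "model.ckpt-15638.meta",
    "model.ckpt-9500.index",
]

# Extract the step number from each "model.ckpt-<step>.*" name.
steps = {int(m.group(1)) for f in filenames
         if (m := re.match(r"model\.ckpt-(\d+)\.", f))}
latest_step = max(steps)
print(latest_step)  # the checkpoint training will resume from
```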
## Check the learning process
You can monitor training with a tool called TensorBoard.
```bash
tensorboard --logdir model
```
Then open http://0.0.0.0:6006/ in your browser.
In the gif above, the same image appears twice side by side: the left shows the objects detected by inference, and the right shows the positions labeled by hand. You can see that more objects are detected as the number of steps increases. If nothing appears on the left, the model may not have trained long enough, or the features may be insufficient.
## Write out the learning model
Once you have achieved a certain level of recognition accuracy, export the data in the model folder as a trained model.
Copy the following checkpoint files from the model folder into tensorflow-stamp-model/:
- model.ckpt-15752.index
- model.ckpt-15752.meta
- model.ckpt-15752.data-00000-of-00001
```bash:/tensorflow-stamp-model
python object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path pipeline.config \
    --trained_checkpoint_prefix model.ckpt-15752 \
    --output_directory output_inference_graph
```
We now have a training model to use on iOS.
In the next article, we’ll document how to use this learning model and integrate it into iOS.
The TensorFlow project, and a project that incorporates the trained model into iOS, are both available on GitHub. I won't guarantee that they work, but feel free to use them as you wish.