# [Preparing for Learning] How to use the TensorFlow Object Detection API to create a learning model and build it into iOS

TensorFlow is an open-source machine learning library developed by Google. It lets you infer, for example, whether a single image shows a cat or a dog, along with a confidence score. It can of course be used for inference on things other than images, but the hard part is deciding what to apply the technology to, and I hadn't really come up with a good idea (in my case). Meanwhile, TensorFlow released a feature called the Object Detection API. It lets you say things like, "there is an object at these coordinates in this image, with this level of confidence."

It doesn't just classify a whole image; it can also tell you where each object was detected, which makes it widely applicable. In this article, I build a demo that uses this Object Detection API to train a model to detect an original stamp on iOS, and I describe the long road to completion as a reminder to myself. By the way, I'm a complete novice at machine learning, so some of the information here may be inaccurate.

## Development Environment

#### Versions

- Python 3.6.0
- TensorFlow 1.9.0

#### Machine spec

- PC: MacBook Pro (Retina, 15-inch, Mid 2015)
- Processor: 2.2 GHz Intel Core i7
- Memory: 16 GB
- Graphics: Intel Iris Pro 1536 MB

#### The main folder structure after training

```
|-labelImg
|-models (the upstream tensorflow/models)
|-tensorflow-stamp-model
  |-annotations
    |-label_map.pbtxt
    |-trainval.txt
    |-xmls
      |-IMG_2703.xml
      |-IMG_2704.xml
      ...
  |-bin
    |-protoc
  |-cocoapi
  |-images
    |-IMG_2703.jpg
    |-IMG_2704.jpg
    ...
  |-model
    |-model.ckpt.data-00000-of-00001
    |-model.ckpt.index
    |-model.ckpt.meta
  |-pycocotools
  |-slim
  |-object_detection
    |-model_main.py
  |-train.record
  |-val.record
  |-create_tf_record.py
```


#### Process

1. Prepare learning resources
   - Set up TensorFlow
   - Collect images
   - Labeling
   - Create a TFRecord
2. Training
   - Prepare a config file for model training
   - Train
   - Check and export the trained model
3. iOS

## Setup TensorFlow

First, clone the original repository.
Then, follow the install instructions in it so that object_detection can be used. This article was also very helpful.

Once setup is complete, create the main working folder tensorflow-stamp-model, and copy only the slim and object_detection folders from the upstream models/research/ folder into it.

Run the following file in the main working folder to confirm that the environment is ready.

```
python object_detection/builders/model_builder_test.py

.....
----------------------------------------------------------------------
Ran 22 tests in 0.085s

OK
```


If you get tripped up here, double-check that the necessary paths are on your PYTHONPATH:

```
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
```


## Collect Images

When collecting images, you need to photograph the single stamp at a variety of sizes and angles, and it seems you need at least 100 images per object even just for a trial. This also affects recognition accuracy. I stamped the stamp on a piece of paper at various angles and took 100 photos of it from various angles.

Save the images to tensorflow-stamp-model/images/. In this case, you can reduce training time by resizing the images to 600×600.
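As a convenience, the resizing step can be automated. The sketch below is a hypothetical helper (not part of the article's original setup) that resizes every `.jpg` in a source folder to 600×600; it assumes the Pillow library is installed (`pip install Pillow`), and the folder names are just examples.

```python
import os
from PIL import Image

def resize_images(src_dir, dst_dir, size=(600, 600)):
    """Resize every .jpg in src_dir to `size` and save it into dst_dir."""
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        if not name.lower().endswith(".jpg"):
            continue
        img = Image.open(os.path.join(src_dir, name))
        # LANCZOS gives good quality when downscaling photos.
        img.resize(size, Image.LANCZOS).save(os.path.join(dst_dir, name))
```

You would then call something like `resize_images("images_raw", "images")` before moving on to labeling.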

## Labeling

Next, in order to train the model, we label each captured image to identify which object appears at which coordinates. The labeling data is saved in XML format and, in a later step, converted into a format called TFRecord for use with TensorFlow. With the tool [labelImg](https://github.com/tzutalin/labelImg), you can easily create XML data suitable for conversion. Note, however, that while on Windows you can download the binary and use it as-is, on Mac you need to download the source code and build it yourself.

Create a tensorflow-stamp-model/annotations/xmls folder and move the resulting XML files into it.
One thing to note: in each XML file, the filename node must include the extension, e.g. "IMG_2712.jpg". In my environment, an error occurred when converting to TFRecord at a later stage otherwise. If you need to fix this node across a large number of files, I recommend automating the process, as doing it by hand is tedious. Please refer to this article.

## Create TFRecord

The XML files you just created are now converted into the TensorFlow-compatible TFRecord format. To run the conversion code, two more pieces of data are needed:

- trainval.txt
  - A list of the XML file names (base name only, no extension)

```
IMG_2703
IMG_2704
...
```


I wrote a script to automate extracting just the base file names from the files in the directory and saving them to a text file.
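A minimal version of such a script might look like this (a sketch, assuming the XML files live in annotations/xmls; the function name is my own):

```python
import os

def write_trainval(xml_dir, out_path):
    """Write the sorted base names of all .xml files to a text file."""
    names = sorted(os.path.splitext(f)[0]
                   for f in os.listdir(xml_dir) if f.endswith(".xml"))
    with open(out_path, "w") as fh:
        fh.write("\n".join(names) + "\n")
```

For example, `write_trainval("annotations/xmls", "annotations/trainval.txt")` produces the list shown above.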

- label_map.pbtxt
  - A list of the objects to be identified (the names given to the objects when labeling)

```
item {
  id: 1
  name: 'tent'
}

item {
  id: 2
  name: 'build'
}

item {
  id: 3
  name: 'house'
}
```


Save both files in the tensorflow-stamp-model/annotations/ folder.
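Before training, it can be worth sanity-checking that the label map parses the way you expect. The snippet below is a quick hypothetical check (not part of the official tooling) that pulls out the id/name pairs with a regex:

```python
import re

def read_label_map(path):
    """Return (id, name) pairs from a simple label_map.pbtxt file."""
    with open(path) as fh:
        text = fh.read()
    # Matches: id: <number> ... name: '<label>'
    return [(int(i), n) for i, n in
            re.findall(r"id:\s*(\d+)\s*name:\s*'([^']+)'", text)]
```

Checking that the ids start at 1 and match the names you used in labelImg can save a confusing debugging session later, since the TFRecord conversion relies on these names.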

Next, save the conversion code, create_tf_record.py, into tensorflow-stamp-model/. This conversion code can be found in the upstream models/research/object_detection/dataset_tools, but I used [this source](https://github.com/bourdakos1/Custom-Object-Detection/blob/master/object_detection/create_tf_record.py) and changed only the folder paths.

```
python create_tf_record.py
```


`train.record` and `val.record` are created.

I won't guarantee that it works, but this TensorFlow project is available on GitHub, as is a project that incorporates the learning model into iOS. Feel free to use them as you wish.