[Build on iOS] How to use TensorFlow-Object-Detection to create a learning model and build it on iOS


In the previous article, we used TensorFlow to train a model. In this article, I will show you how to embed the trained model in an iOS app and recognize the stamps.

※ I am a complete novice when it comes to machine learning.


  1. Prepare Learning Resources
    • Setup TensorFlow
    • Collect images
    • Labeling
    • Create TFRecord
  2. Training
    • Prepare a config file for model training.
    • Learning
    • Check and write out the learning model.
  3. Embed in iOS
    • Prep for the build
    • Install

The main folder structure for the iOS build (figure omitted).


Prep for the build

A clever person has already published a sample (the tensorflowiOS repository mentioned below), and I’ll use it.
Download the source code from that link and follow the README to set it up. Here is what the README walks through:

brew install automake libtool

git clone https://github.com/tensorflow/tensorflow

cd tensorflow

In the tensorflow folder you just cloned, check which versions support the ANDROID_TYPES used in the later steps. The owner of the repository above has verified up to v1.12.0. Run git tag to confirm the tag exists, then check out v1.12.0.

git checkout -b v1.12.0 refs/tags/v1.12.0

Alternatively, check out the corresponding release branch:

git checkout origin/r1.12

If you encounter the following error

thread-local storage is not supported for the current target

Execute the following command

brew install gnu-sed

gsed '/ifeq[^,]*,I386)/!b;n;n;n;n;n;s/thread_local//' < ./tensorflow/contrib/makefile/Makefile > foo; mv foo ./tensorflow/contrib/makefile/Makefile
gsed 's/thread_local int per_thread_max_parallism/__thread int per_thread_max_parallism/' < tensorflow/core/util/work_sharder.cc > foo; mv foo ./tensorflow/core/util/work_sharder.cc
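To see what the second substitution does in isolation, here is the same replacement applied to a single line (using plain `sed` syntax here; the article installs GNU sed as `gsed` because macOS’s BSD sed behaves differently on some patterns):

```shell
# Replace C++11 thread_local (unsupported by the old iOS toolchain) with
# the GCC/Clang __thread extension, exactly as the gsed one-liner does.
echo 'thread_local int per_thread_max_parallism = 0;' \
  | sed 's/thread_local int per_thread_max_parallism/__thread int per_thread_max_parallism/'
# prints: __thread int per_thread_max_parallism = 0;
```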

Build tensorflow


Move up one level, to the directory that contains tensorflow, and build the libraries needed to use TensorFlow on iOS. The build takes about two hours.
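The build command itself is not shown above. In the TensorFlow 1.x tree, the contrib Makefile ships a `build_all_ios.sh` script, so the invocation is likely something like the following sketch (the exact `ANDROID_TYPES` value is an assumption based on the sample’s README, which asks you to check the supported ANDROID_TYPES versions):

```shell
# Run from the directory that contains the tensorflow checkout.
cd tensorflow

# Assumption: compile in the full set of protobuf types so the Object
# Detection graph's ops are available on iOS.
export ANDROID_TYPES="-D__ANDROID_TYPES_FULL__"

# Build the static libraries needed to link TensorFlow into an iOS app.
# This compiles for every iOS architecture and takes roughly two hours.
tensorflow/contrib/makefile/build_all_ios.sh
```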



Once the libraries are built, we can actually install the app that uses the trained model.
The tensorflowiOS repository, where the sample code is located, has both Objective-C and Swift versions; I chose Swift.

First, set the path to your tensorflow checkout in tensorflow.xcconfig in this folder.

TENSORFLOW_ROOT = /path/to/tensorflow

Next, add the trained model to your project.
Copy frozen_inference_graph.pb and tensorflow-stamp-model/annotations/label_map.pbtxt from the folder where you trained and exported the model last time into /tensorflowiOS/Models/, and rename label_map.pbtxt to label_map.txt.


To get the label name of a recognized object, add the display_name parameter to label_map.txt. Detection works without it, but then you can’t tell what the recognized object is. I realized that later.

item {
  id: 1
  name: 'tent'
  display_name: 'tent'
}
item {
  id: 2
  name: 'build'
  display_name: 'build'
}
item {
  id: 3
  name: 'house'
  display_name: 'house'
}
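As a quick sanity check that every entry has a display_name, you can count them. This is a throwaway check, not part of the sample; the heredoc below just recreates the three-item label map (with each item block properly closed):

```shell
# Recreate the label map in a temp file and count display_name entries;
# the count should equal the number of items (3 here).
cat > /tmp/label_map.txt <<'EOF'
item {
  id: 1
  name: 'tent'
  display_name: 'tent'
}
item {
  id: 2
  name: 'build'
  display_name: 'build'
}
item {
  id: 3
  name: 'house'
  display_name: 'house'
}
EOF
grep -c 'display_name' /tmp/label_map.txt
# prints: 3
```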

Also drag and drop the copied files into the Xcode project to add them.
Then edit the model file names used by TensorflowGraph.mm.

NSString *model = @"frozen_inference_graph";
NSString *label = @"label_map";

That’s it; you’re now ready to build on iOS. It should install when you Run it, but I ran into the following errors, so I’ll note how to fix them.

xcrun: error: SDK “iphoneos” cannot be located

# If you get this error, do the following
sudo xcode-select --switch /Applications/Xcode.app/Contents/Developer/

failed to load model

# The path to the model files may be wrong; check that both files were copied and added to the project.
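Before digging into the code, it is worth confirming that the two model files actually exist where the article placed them (the paths below match the earlier copy step; the files must also be in the Xcode target’s Copy Bundle Resources phase, or the app cannot find them at runtime):

```shell
# Print a status line for each expected model file.
for f in tensorflowiOS/Models/frozen_inference_graph.pb \
         tensorflowiOS/Models/label_map.txt; do
  [ -f "$f" ] && echo "found: $f" || echo "MISSING: $f"
done
```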


Training a sushi detection model with the Tensorflow Object Detection API
Millennium Falcon detection with the Tensorflow ObjectDetection API

I won’t guarantee that it works, but the project that embeds this TensorFlow model in iOS is available on GitHub. Feel free to use it as you wish.
