The brand new TensorFlow 2.0 brings a lot of features that clean up redundancies inherited from TF 1.x. Here’s my walkthrough for converting old TensorFlow checkpoints all the way to a ready-to-serve, mobile-friendly FP16 TensorFlow Lite model in *.tflite format.

  1. Check your model

First off, make sure you have the following files before dealing with your model:

checkpoint
model-78099.data-00000-of-00001
model-78099.index
model-78099.meta

These are typical checkpoint files generated by the tf.train.Saver() helper during training; they store the current variable values of the graph so that training can later be resumed from them. Since we no longer need further training, let’s freeze the checkpoint into a single frozen graph file with freeze_model.py under Windows PowerShell:

py .\freeze_model.py --model_dir "./your/model/directory" --output_node_names "output_node_1,output_node_2"

where --model_dir is your model directory, and --output_node_names lists the names of your output nodes. The output node names can easily be found by reviewing the original code, or by running the following snippet in a TensorFlow session after your model’s graph has been loaded:

print([n.name for n in tf.get_default_graph().as_graph_def().node])
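
If the graph isn’t loaded yet, here is a minimal sketch of how to do that from the checkpoint’s .meta file with tf.train.import_meta_graph (the model-78099 prefix below is just the example checkpoint from above, so substitute your own):

import tensorflow as tf

# A minimal sketch: import the graph structure from the example .meta file,
# then print every node name so the output nodes can be spotted.
tf.train.import_meta_graph("./your/model/directory/model-78099.meta")
print([n.name for n in tf.get_default_graph().as_graph_def().node])
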
  2. Add tags to MetaGraphDef

After freezing your model, you will obtain a file called frozen_model.pb, but it still can’t be served because it lacks a MetaGraphDef tagged with serve. To get a properly tagged SavedModel, you can use tf.saved_model.simple_save to add these meta tags:

import tensorflow as tf

def load_graph(model_file, returnElements=None):
    # Load a frozen GraphDef from disk and import it into a fresh graph,
    # optionally returning the requested tensors.
    graph = tf.Graph()
    graph_def = tf.GraphDef()
    with open(model_file, "rb") as f:
        graph_def.ParseFromString(f.read())

    returns = None
    with graph.as_default():
        returns = tf.import_graph_def(graph_def, return_elements=returnElements)
    if returnElements is None:
        return graph
    return graph, returns

exportDir = "./simple_save/"
exportName = exportDir + "saved_model.pb"
inputLayerName = "input_image:0"  # your input node and index

# frozen_model.pb is previously converted from tf checkpoint
my_graph, r = load_graph("frozen_model.pb", returnElements=[inputLayerName])
inputTensor = r[0]

with tf.Session(graph=my_graph) as sess:
    from tensorflow.saved_model import simple_save

    # get your nodes' names and indices
    with my_graph.as_default():
        output_node_1 = my_graph.get_tensor_by_name('import/output_node_1:0')
        output_node_2 = my_graph.get_tensor_by_name('import/output_node_2:0')

    inputs = {
        inputLayerName: inputTensor
    }
    outputs = {
        "import/output_node_1:0": output_node_1,
        "import/output_node_2:0": output_node_2,
        # ...:0 : ...
    }

    simple_save(sess, exportDir, inputs, outputs)
...
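
Before moving on, you can sanity-check the export; here is a minimal sketch that re-loads the SavedModel and confirms the serve tag and signatures are there (the same check can also be done from the command line with TensorFlow’s saved_model_cli tool):

import tensorflow as tf

# A minimal sketch: re-load ./simple_save/ and confirm the "serve" tag is present.
# tf.saved_model.loader.load raises an error if no MetaGraphDef carries that tag.
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], "./simple_save/")
    print(meta_graph.meta_info_def.tags)          # should include "serve"
    print(list(meta_graph.signature_def.keys()))  # exported signatures
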
  3. Conversion

Now your SavedModel contains a MetaGraphDef tagged with serve. The next step is to convert it into *.tflite for simple and fast model serving on almost any platform:

...
converter = tf.lite.TFLiteConverter.from_saved_model(exportDir)
tflite_model = converter.convert()
# write as an unoptimized model
# open("converted_model.tflite", "wb").write(tflite_model)

# or convert it to a fp16 model
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_fp16_model = converter.convert()
open("converted_model_fp16.tflite", "wb").write(tflite_fp16_model)

If you encounter an error complaining that tf.lite.Optimize is not found, and you are using TensorFlow 1.13 or lower, try the solution here.
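
As a quick check of the converted model, here is a minimal sketch of running one inference with tf.lite.Interpreter; the random dummy input is only a placeholder, so feed real data matching your model’s input shape in practice:

import numpy as np
import tensorflow as tf

# A minimal sketch: load the FP16 TFLite model and run a single inference.
interpreter = tf.lite.Interpreter(model_path="converted_model_fp16.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input with the expected shape; FP16-quantized models still take float32 inputs.
dummy_input = np.random.random_sample(input_details[0]["shape"]).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()

for detail in output_details:
    print(detail["name"], interpreter.get_tensor(detail["index"]).shape)
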