python - Using Tensorflow checkpoint to restore model in C++

I've trained a network that I implemented with TensorFlow in Python. At the end of training, I saved the model with tf.train.Saver(). Now I would like to use C++ to make predictions with this pre-trained network.

How can I do that? Is there a way to convert the checkpoint so I can use it with tiny-dnn or the TensorFlow C++ API?

Any idea is welcome :) Thank you!


1 Reply


You should probably export the model in the SavedModel format, which encapsulates both the computational graph and the saved variables (tf.train.Saver only saves the variables, so you'd have to save the graph definition anyway).

You can then load the saved model in C++ using LoadSavedModel.

The exact invocation depends on what the inputs and outputs of your model are, but the Python code would look something like this:

import tensorflow as tf

# You'd adjust the arguments here according to your model.
# `sess`, `input_tensor`, and `output_tensor` are the session and tensors
# from the graph you already built and trained.
signature = tf.saved_model.signature_def_utils.predict_signature_def(
    inputs={'image': input_tensor}, outputs={'scores': output_tensor})

builder = tf.saved_model.builder.SavedModelBuilder('/tmp/my_saved_model')

builder.add_meta_graph_and_variables(
    sess=sess,
    tags=[tf.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            signature
    })

builder.save()
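
If all you have on disk is the checkpoint written by tf.train.Saver, you can restore it into a session first and then run the export above. A minimal sketch, assuming a hypothetical checkpoint prefix /tmp/my_checkpoint and hypothetical tensor names 'input:0' and 'scores:0':

import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # Recreate the graph from the .meta file and restore the trained variables.
    # '/tmp/my_checkpoint' is a hypothetical checkpoint prefix.
    saver = tf.train.import_meta_graph('/tmp/my_checkpoint.meta')
    saver.restore(sess, '/tmp/my_checkpoint')

    # Look up the input/output tensors by the names you gave them when
    # building the graph (these names are assumptions; adjust to your model).
    input_tensor = sess.graph.get_tensor_by_name('input:0')
    output_tensor = sess.graph.get_tensor_by_name('scores:0')

    # Now run the SavedModelBuilder code shown above with this sess.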

And then in C++ you'd do something like this:

#include <iostream>
#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/cc/saved_model/tag_constants.h"

tensorflow::SessionOptions session_options;
tensorflow::RunOptions run_options;
tensorflow::SavedModelBundle model;
auto status = tensorflow::LoadSavedModel(session_options, run_options,
    "/tmp/my_saved_model", {tensorflow::kSavedModelTagServe}, &model);
if (!status.ok()) {
    std::cerr << "Failed to load SavedModel: " << status;
    return;
}
// At this point you can run inference through model.session
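
Once the bundle is loaded, running inference is a plain Session::Run call. A rough sketch, assuming the signature exported above; the batch shape {1, 224, 224, 3} is only a placeholder, use whatever your model expects:

#include <vector>
#include "tensorflow/core/framework/tensor.h"

// Read the tensor names recorded in the serving signature instead of hard-coding them.
const auto& sig = model.meta_graph_def.signature_def().at("serving_default");
const std::string input_name  = sig.inputs().at("image").name();
const std::string output_name = sig.outputs().at("scores").name();

// Build an input tensor; the shape here is just an example.
tensorflow::Tensor input(tensorflow::DT_FLOAT,
                         tensorflow::TensorShape({1, 224, 224, 3}));
// ... fill input.flat<float>() with your data ...

std::vector<tensorflow::Tensor> outputs;
auto run_status = model.session->Run({{input_name, input}},
                                     {output_name}, {}, &outputs);
if (!run_status.ok()) {
    std::cerr << "Inference failed: " << run_status;
    return;
}
// outputs[0] now holds the 'scores' tensor.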

(Note that using the SavedModel format will also allow you to serve models using TensorFlow Serving, if that makes sense for your application)

Hope that helps.

