Hi Abraham, and first of all, thank you for your amazing job! I have a couple of questions:
-
What is the purpose of the backup?
Why do you perform a backup only in some particular cases? Is it necessary for resuming training, or is it just a precaution?
Why would someone need to use the backup later?
-
In order to convert the model to TFLite format (to experiment a bit in a mobile environment), I need to freeze the model. Most of the guides I read say that, starting from a checkpoint file, I need to pass the output nodes to the 'convert_variables_to_constants' function...
So I used 'print_tensors_in_checkpoint_file' to inspect your model's checkpoint, and I obtained the list below...
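For reference, this is roughly how I called it. The checkpoint path here is a placeholder; to keep the snippet self-contained I save a throwaway checkpoint first, with a demo variable name that only mimics your model's naming:

```python
import os
import tempfile

import tensorflow as tf
from tensorflow.python.tools.inspect_checkpoint import print_tensors_in_checkpoint_file

tf1 = tf.compat.v1  # compatibility layer; on a TF1 install plain tf works too
tf1.disable_eager_execution()

# Save a throwaway checkpoint so the call below has something to read.
# "model/encoder/demo_weight" is a made-up name, not from the real model.
ckpt_prefix = os.path.join(tempfile.mkdtemp(), "model.ckpt")
with tf1.Graph().as_default():
    tf1.get_variable("model/encoder/demo_weight", [2, 2], use_resource=False)
    saver = tf1.train.Saver()
    with tf1.Session() as sess:
        sess.run(tf1.global_variables_initializer())
        saver.save(sess, ckpt_prefix)

# Prints one "tensor_name: ..." line per *saved variable* -- which, as far
# as I understand, are the weights, not the graph's output ops.
print_tensors_in_checkpoint_file(
    ckpt_prefix, tensor_name="", all_tensors=False, all_tensor_names=True)
```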
Which are the output nodes? Do you think I need to pass all the tensors with 'decoder' in their path?
Thank you!
tensor_name: model/decoder/attention_decoder_cell/attention_layer/kernel
tensor_name: model/decoder/attention_decoder_cell/bahdanau_attention/attention_b
tensor_name: model/decoder/attention_decoder_cell/bahdanau_attention/attention_g
tensor_name: model/decoder/attention_decoder_cell/bahdanau_attention/attention_v
tensor_name: model/decoder/attention_decoder_cell/bahdanau_attention/query_layer/kernel
tensor_name: model/decoder/attention_decoder_cell/multi_rnn_cell/cell_0/basic_lstm_cell/bias
tensor_name: model/decoder/attention_decoder_cell/multi_rnn_cell/cell_0/basic_lstm_cell/kernel
tensor_name: model/decoder/attention_decoder_cell/multi_rnn_cell/cell_1/basic_lstm_cell/bias
tensor_name: model/decoder/attention_decoder_cell/multi_rnn_cell/cell_1/basic_lstm_cell/kernel
tensor_name: model/decoder/attention_decoder_cell/multi_rnn_cell/cell_2/basic_lstm_cell/bias
tensor_name: model/decoder/attention_decoder_cell/multi_rnn_cell/cell_2/basic_lstm_cell/kernel
tensor_name: model/decoder/attention_decoder_cell/multi_rnn_cell/cell_3/basic_lstm_cell/bias
tensor_name: model/decoder/attention_decoder_cell/multi_rnn_cell/cell_3/basic_lstm_cell/kernel
tensor_name: model/decoder/memory_layer/kernel
tensor_name: model/decoder/output_dense/bias
tensor_name: model/decoder/output_dense/kernel
tensor_name: model/encoder/bidirectional_rnn/bw/multi_rnn_cell/cell_0/basic_lstm_cell/bias
tensor_name: model/encoder/bidirectional_rnn/bw/multi_rnn_cell/cell_0/basic_lstm_cell/kernel
tensor_name: model/encoder/bidirectional_rnn/bw/multi_rnn_cell/cell_1/basic_lstm_cell/bias
tensor_name: model/encoder/bidirectional_rnn/bw/multi_rnn_cell/cell_1/basic_lstm_cell/kernel
tensor_name: model/encoder/bidirectional_rnn/fw/multi_rnn_cell/cell_0/basic_lstm_cell/bias
tensor_name: model/encoder/bidirectional_rnn/fw/multi_rnn_cell/cell_0/basic_lstm_cell/kernel
tensor_name: model/encoder/bidirectional_rnn/fw/multi_rnn_cell/cell_1/basic_lstm_cell/bias
tensor_name: model/encoder/bidirectional_rnn/fw/multi_rnn_cell/cell_1/basic_lstm_cell/kernel
tensor_name: model/encoder/shared_embeddings_matrix