MXNet2Caffe2trt: convert an MXNet model to a Caffe model, then convert the Caffe model to a TensorRT (TRT) model
First: convert the MXNet model to a Caffe model.
Then: convert the Caffe model to a TensorRT model.
json2prototxt.py, prototxt_basic.py: read the MXNet .json symbol file and convert it to a Caffe .prototxt.
mxnet2caffe.py: read the MXNet params dict and convert it to a .caffemodel.
mxnet_caffe_model_test.py: compare the outputs of the Caffe model and the TRT model.
mxnet_t2t.py: convert a training model to an inference model. Note: you should edit this file for your own model.
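The json-to-prototxt step can be sketched in a few lines: an MXNet symbol .json file is a flat node list, and each operator node becomes one prototxt layer. A minimal sketch of the idea, assuming a tiny made-up symbol graph (the real prototxt_basic.py handles many more operator types and attributes):

```python
import json

# A minimal MXNet-style symbol graph (hypothetical example; real graphs
# come from the .json file saved alongside an MXNet checkpoint).
symbol_json = json.dumps({
    "nodes": [
        {"op": "null", "name": "data", "inputs": []},
        {"op": "Convolution", "name": "conv0",
         "attrs": {"num_filter": "8", "kernel": "(3, 3)"},
         "inputs": [[0, 0, 0]]},
    ]
})

def mxnet_json_to_prototxt(symbol_str):
    """Walk the node list and emit one Caffe layer per MXNet operator
    (only Convolution is handled in this sketch)."""
    nodes = json.loads(symbol_str)["nodes"]
    layers = []
    for node in nodes:
        if node["op"] == "null":
            continue  # inputs and parameters produce no layer of their own
        bottom = nodes[node["inputs"][0][0]]["name"]
        if node["op"] == "Convolution":
            attrs = node["attrs"]
            kernel = attrs["kernel"].strip("() ").split(",")[0]
            layers.append(
                'layer {\n'
                '  name: "%s"\n  type: "Convolution"\n'
                '  bottom: "%s"\n  top: "%s"\n'
                '  convolution_param { num_output: %s kernel_size: %s }\n'
                '}' % (node["name"], bottom, node["name"],
                       attrs["num_filter"], kernel))
    return "\n".join(layers)

print(mxnet_json_to_prototxt(symbol_json))
```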
Usage
First: run json2prototxt.py to convert the .json file to a .prototxt. Run json2prototxt.py -h to see the arguments.
Then: run mxnet2caffe.py to convert the params file to a .caffemodel. Run mxnet2caffe.py -h to see the arguments.
Finally: run mxnet_caffe_model_test.py to compare the model outputs. Run mxnet_caffe_model_test.py -h to see the arguments.
Note: use Netron (https://github.com/lutzroeder/Netron) to inspect the model structure.
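The final comparison step boils down to running the same input through both models and looking at the worst element-wise gap. A minimal sketch with hypothetical output vectors (the real script's input handling and tolerances may differ):

```python
def max_abs_diff(a, b):
    """Largest absolute element-wise difference between two flat outputs."""
    assert len(a) == len(b), "outputs must have the same shape"
    return max(abs(x - y) for x, y in zip(a, b))

caffe_out = [0.10, 0.70, 0.20]   # hypothetical Caffe model output
trt_out   = [0.11, 0.69, 0.20]   # hypothetical TRT model output

# A small worst-case gap suggests the conversion preserved the model.
print("max abs diff: %.4f" % max_abs_diff(caffe_out, trt_out))
```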
caffe2trt_int8.h: convert a Caffe model to a TRT INT8 model.
calibrator.h: provide the calibrator for the INT8 model.
CopyPlugin.h: example of adding a plugin.
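A calibrator's job is to choose the scale that maps float activations onto the int8 range. TensorRT's entropy calibrator picks a clipping threshold from an activation histogram; the sketch below shows the simpler max-abs variant of the same idea, with made-up activation values:

```python
def max_abs_scale(activations):
    """Simplified calibration: map the largest observed magnitude to 127.
    (TensorRT's entropy calibrator is smarter, choosing a clipping
    threshold from a histogram, but the scale concept is the same.)"""
    return max(abs(x) for x in activations) / 127.0

def quantize(x, scale):
    """Quantize one float to int8 using the calibrated scale."""
    q = int(round(x / scale))
    return max(-128, min(127, q))  # clamp to the int8 range

acts = [-2.0, 0.3, 1.0]           # hypothetical calibration activations
scale = max_abs_scale(acts)
print(quantize(2.0, scale))   # the largest magnitude maps to the int8 max
print(quantize(-3.0, scale))  # out-of-range values are clamped
```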
caffe2trt_half.h: convert a Caffe model to a TRT model. Note: half (FP16) mode is used if the device supports it (e.g. the NVIDIA Turing GPU architecture).
trt2engine.h: load the TRT model and run inference. Note: only one input and one output are supported at present.
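Half mode trades precision for speed and memory: FP16 keeps roughly three decimal digits and its largest finite value is 65504. The tradeoff can be illustrated in pure Python with struct's IEEE 754 half-precision format; this is only an illustration of the precision loss, not the TRT API:

```python
import struct

def to_fp16(x):
    """Round a Python float to the nearest IEEE 754 half-precision value
    (the 'e' format), the number format used by FP16 inference."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# FP16 cannot represent 0.1 exactly; the rounded value is slightly off,
# which is why the converted model's outputs should still be compared
# against the FP32 ones.
print(to_fp16(0.1))       # close to 0.1, but not exactly 0.1
print(to_fp16(65504.0))   # 65504 is the largest finite FP16 value
```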