
MXNet2Caffe2trt: Convert an MXNet model to a Caffe model, then convert the Caffe model to a TRT model

  • First: convert the MXNet model to a Caffe model
  • Then: convert the Caffe model to a TensorRT model

Mxnet2Caffe

  • json2prototxt.py / prototxt_basic.py Read the MXNet symbol JSON file and convert it to a prototxt.
  • mxnet2caffe.py Read the MXNet model's params dict and convert it to a .caffemodel.
  • mxnet_caffe_model_test.py Compare the outputs of the Caffe model and the TRT model.
  • mxnet_t2t.py Convert a training model to an inference model. Note: you should adapt this file to your model.
  • Usage
    • First: use json2prototxt.py to convert the JSON file to a prototxt. Run json2prototxt.py -h to see the arguments.
    • Then: use mxnet2caffe.py to convert the params file to a caffemodel. Run mxnet2caffe.py -h to see the arguments.
    • Finally: use mxnet_caffe_model_test.py to compare the model outputs. Run mxnet_caffe_model_test.py -h to see the arguments.
    • Note: use Netron (https://github.com/lutzroeder/Netron) to inspect the model structure.
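To make the first step concrete, the sketch below shows the shape of the data json2prototxt.py works with: an MXNet symbol JSON is a flat list of nodes, each carrying an op type, a name, and indices of its input nodes. The tiny `OP_MAP` and the hand-written `symbol` dict here are illustrative stand-ins, not the repository's actual mapping in prototxt_basic.py.

```python
import json

# Illustrative subset of an MXNet-op -> Caffe-layer-type mapping.
OP_MAP = {"Convolution": "Convolution", "Activation": "ReLU",
          "Pooling": "Pooling", "FullyConnected": "InnerProduct"}

# Stand-in for the contents of an MXNet *-symbol.json file.
# Parameters and the data input both have op "null".
symbol = {
    "nodes": [
        {"op": "null", "name": "data", "inputs": []},
        {"op": "Convolution", "name": "conv0", "inputs": [[0, 0, 0]]},
        {"op": "Activation", "name": "relu0", "inputs": [[1, 0, 0]]},
    ]
}

def to_prototxt(sym):
    nodes = sym["nodes"]
    layers = []
    for node in nodes:
        if node["op"] == "null":  # skip parameter/data placeholder nodes
            continue
        # Bottom blobs: real predecessor layers, plus the "data" input
        # (weight/bias inputs are "null" nodes and are dropped here).
        bottoms = [nodes[i[0]]["name"] for i in node["inputs"]
                   if nodes[i[0]]["op"] != "null"
                   or nodes[i[0]]["name"] == "data"]
        layers.append(
            'layer {\n'
            '  name: "%s"\n  type: "%s"\n' % (node["name"], OP_MAP[node["op"]])
            + "".join('  bottom: "%s"\n' % b for b in bottoms)
            + '  top: "%s"\n}\n' % node["name"])
    return "".join(layers)

print(to_prototxt(symbol))
```

A real converter also has to translate per-op attributes (kernel size, stride, padding) into the matching prototxt fields; this sketch only covers the graph topology.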

caffe2trt_int8

  • caffe2trt_int8.h Convert a Caffe model to a TRT int8 model.
  • calibrator.h Get the calibrator for the int8 model.
  • CopyPlugin.h Example of adding a plugin.
  • Note
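The idea behind calibrator.h can be sketched without TensorRT: int8 mode needs a per-tensor scale that maps fp32 activations onto the [-127, 127] int8 range. The simplest scheme ("max calibration") takes the largest absolute value seen over representative inputs; TensorRT's own calibrator uses an entropy-based method instead. This pure-Python version only illustrates the quantize/dequantize round trip.

```python
def compute_scale(activations):
    # Max calibration: map the largest observed magnitude to 127.
    return max(abs(a) for a in activations) / 127.0

def quantize(x, scale):
    q = int(round(x / scale))
    return max(-127, min(127, q))  # clamp to the int8 range

def dequantize(q, scale):
    return q * scale

acts = [0.5, -2.0, 1.25, 3.9, -3.3]
scale = compute_scale(acts)
recovered = [dequantize(quantize(a, scale), scale) for a in acts]
# The round trip is lossy (error at most scale/2 per value), which is
# why the calibration batch must be representative of real inputs.
```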

caffe2trt

  • caffe2trt_half.h Convert a Caffe model to a TRT model. Note: half (fp16) mode is used if the device supports it (e.g. the NVIDIA Turing GPU architecture).
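What fp16 mode costs in precision can be demonstrated with the standard library alone: `struct`'s `"e"` format packs an IEEE 754 half, which has a 10-bit mantissa (about 3 significant decimal digits) and a maximum finite value of 65504. This sketch is independent of TensorRT; it just shows the rounding that fp16 weights undergo.

```python
import struct

def to_fp16(x):
    # Round-trip a Python float through an IEEE 754 half-precision value.
    return struct.unpack("<e", struct.pack("<e", x))[0]

print(to_fp16(1.0))      # powers of two survive exactly
print(to_fp16(0.1))      # rounded: fp16 keeps ~3 significant digits
print(to_fp16(65504.0))  # the largest finite fp16 value
```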

trt_inference

  • trt2engine.h Load a TRT model and run inference. Note: only a single input and a single output are supported for now.
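With one input and one output binding, the host-side work reduces to flattening the input into a contiguous float32 buffer and reserving a second buffer for the output. The sketch below uses only the stdlib `array` module; the shapes (3x4x4 in, 10 out) are made up for illustration, and in the real C++ code these buffers would be copied to and from the GPU with cudaMemcpy around the execution call.

```python
from array import array

C, H, W = 3, 4, 4   # hypothetical input binding shape (CHW)
OUT_SIZE = 10       # hypothetical output binding size

# A fake CHW "image" with values 0..C*H*W-1.
image = [[[float(c * H * W + h * W + w) for w in range(W)]
          for h in range(H)] for c in range(C)]

# Flatten the nested CHW lists into the contiguous float32 layout
# a single-binding engine expects.
input_buf = array("f", (v for ch in image for row in ch for v in row))
output_buf = array("f", [0.0] * OUT_SIZE)

# input_buf.tobytes() is what would be uploaded to the device;
# output_buf would be overwritten by the download after inference.
print(len(input_buf), len(output_buf))
```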

Build
