Recently I read a post on loading TensorFlow models into C++ here.
After a quick search, it seems that current ML4NP applications prefer TensorFlow over PyTorch. We have already implemented loading TorchScript models (*.pt) into PHASM via libtorch. If we can get another path working for TensorFlow models, we can demonstrate some real examples even without the libtorch APIs.
This might involve:
- ONNX by Microsoft
- TensorRT by NVIDIA (for faster inference)
- MIG (multi-instance GPU)