This repo describes the steps needed to deploy a pod running an inference server on OpenShift. Based on the instructions here, I have conducted experiments setting up the inference server in three ways:
- As a deployment.
- As a pod (server).
- As a service.
More information can be found under the relevant directories.
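
As a rough reference, the pod variant can be sketched with a minimal manifest like the one below. The image name, port, and resource values are placeholders, not the actual values used in this repo; see the relevant directory for the real manifests.

```yaml
# Minimal sketch of a Pod running an inference server on OpenShift.
# All names, the image, the port, and the resource limits below are
# placeholders — substitute the values used in this repo.
apiVersion: v1
kind: Pod
metadata:
  name: inference-server        # placeholder name
  labels:
    app: inference-server
spec:
  containers:
    - name: server
      image: example.com/inference-server:latest  # placeholder image
      ports:
        - containerPort: 8080   # placeholder serving port
      resources:
        limits:
          memory: "2Gi"
          cpu: "1"
```

Such a manifest can be applied with `oc apply -f <file>.yaml`; the deployment and service variants follow the same pattern with `kind: Deployment` and `kind: Service` objects.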