We provide instructions on how to evaluate Mobile-VideoGPT models on MVBench, PerceptionTest, NExT-QA, MLVU, EgoSchema, and ActivityNet-QA. Please follow the steps below.
Mobile-VideoGPT models are available on HuggingFace. Please follow the instructions below to download them and save them under the `Checkpoints` directory:
```shell
mkdir Checkpoints
cd Checkpoints
git lfs install
git clone https://huggingface.co/Amshaker/Mobile-VideoGPT-0.5B
git clone https://huggingface.co/Amshaker/Mobile-VideoGPT-1.5B
```

First, clone the lmms-eval repository as follows:
```shell
git clone https://github.com/EvolvingLMMs-Lab/lmms-eval
```

Second, integrate the Mobile-VideoGPT model into the lmms-eval framework as follows:
- Copy `eval/mobile_videogpt.py` to `lmms-eval/lmms_eval/models/`.
- Register MobileVideoGPT among the available models in `lmms_eval/models/__init__.py` as follows:
```python
"mobile_videogpt": "MobileVideoGPT",
```

Third, copy the evaluation scripts to the lmms-eval repository to run the evaluation on all benchmarks as follows:
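For context, the registration step amounts to a one-line addition to lmms-eval's model registry. A minimal sketch, assuming the registry is the `AVAILABLE_MODELS` dict in `lmms_eval/models/__init__.py` (the exact variable name may differ across lmms-eval versions):

```python
# Sketch of the model registry in lmms_eval/models/__init__.py.
# The dict maps the value passed to --model on the CLI to the class
# name exported by the corresponding module under lmms_eval/models/.
AVAILABLE_MODELS = {
    # ... existing entries ...
    "mobile_videogpt": "MobileVideoGPT",  # resolves to lmms_eval/models/mobile_videogpt.py
}
```

At load time, lmms-eval uses this mapping to import the module named by the key and fetch the class named by the value, so the key must match the filename you copied in and the value must match the class defined inside it.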
We provide the Mobile-VideoGPT-evaluation.sh script to run inference on multiple GPUs for Mobile-VideoGPT-0.5B or Mobile-VideoGPT-1.5B:
```shell
bash Mobile-VideoGPT-evaluation.sh Checkpoints/Mobile-VideoGPT-0.5B
bash Mobile-VideoGPT-evaluation.sh Checkpoints/Mobile-VideoGPT-1.5B
```

Here, `Checkpoints/Mobile-VideoGPT-0.5B` and `Checkpoints/Mobile-VideoGPT-1.5B` are the paths to the downloaded models.
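Under the hood, such wrapper scripts drive lmms-eval's standard CLI. As a rough sketch of what a single-benchmark run could look like (the task name `mvbench` and the flags follow lmms-eval's usual CLI conventions; the options accepted by your installed version may differ, so treat this as illustrative rather than the exact contents of the provided script):

```shell
# Hypothetical single-benchmark run; assumes lmms-eval is installed
# (e.g. pip install -e lmms-eval) and the model was registered as above.
accelerate launch --num_processes 4 -m lmms_eval \
    --model mobile_videogpt \
    --model_args pretrained=Checkpoints/Mobile-VideoGPT-0.5B \
    --tasks mvbench \
    --batch_size 1 \
    --log_samples \
    --output_path ./logs/
```

Swapping `--tasks mvbench` for the other benchmark names lets you reproduce each result individually instead of running the full sweep.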