
StereoWalker: Empowering Dynamic Urban Navigation with Stereo and Mid-Level Vision

This repository contains the official implementation of the paper StereoWalker: Empowering Dynamic Urban Navigation with Stereo and Mid-Level Vision.

Authors: Wentao Zhou, Xuweiyi Chen, Vignesh Rajagopal, Jeffrey Chen, Rohan Chandra, Zezhou Cheng

If you find this code useful, please consider citing:

@article{zhou2025empowering,
  title={Empowering Dynamic Urban Navigation with Stereo and Mid-Level Vision},
  author={Zhou, Wentao and Chen, Xuweiyi and Rajagopal, Vignesh and Chen, Jeffrey and Chandra, Rohan and Cheng, Zezhou},
  journal={arXiv preprint arXiv:2512.10956},
  year={2025}
}

Updates

  • Inference code released
  • Training code (to be released)
  • Training data and benchmark (to be released)

Installation

The project is tested with Python 3.11, PyTorch 2.5.0, and CUDA 12.1. Install dependencies with:

conda env create -f environment.yml
conda activate stereowalker
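After activating the environment, a quick sanity check can confirm the key dependencies are importable. This is a minimal stdlib sketch; the package names checked (torch, yaml) are assumptions based on the versions listed above, not a list taken from environment.yml:

```python
# Minimal sanity check that key packages are importable.
# The package names here are assumptions; adjust to match environment.yml.
import importlib.util

def missing_packages(packages):
    """Return the subset of packages that cannot be found."""
    return [p for p in packages if importlib.util.find_spec(p) is None]

if __name__ == "__main__":
    missing = missing_packages(["torch", "yaml"])
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("Environment looks ready.")
```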

Inference

Pretrained StereoWalker weights: Download here

python test.py --config config/teleop_eval.yaml --checkpoint [path to ckpt]
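The command above takes a config path and a checkpoint path. For reference, a hypothetical sketch of how such a CLI parses these flags is shown below; the flag names mirror the command, but this is not the repository's actual test.py:

```python
# Hypothetical sketch of the command-line interface used above;
# not the repository's actual test.py.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="StereoWalker inference (sketch)")
    parser.add_argument("--config", default="config/teleop_eval.yaml",
                        help="Path to the evaluation config")
    parser.add_argument("--checkpoint", required=True,
                        help="Path to the pretrained StereoWalker weights")
    return parser

if __name__ == "__main__":
    # Example invocation with a placeholder checkpoint path.
    args = build_parser().parse_args(
        ["--checkpoint", "checkpoints/stereowalker.pth"])
    print(args.config, args.checkpoint)
```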

Acknowledgments

We greatly appreciate CityWalker for open-sourcing its code.
