Rokuto Nagata1 Ozora Sako1 Zihao Ding1 Takahiro Kado2 Ibuki Fujioka2 Taro Beppu2
Mariko Isogawa1 Kentaro Yoshioka1
1 Keio University
2 Sony Semiconductor Solutions
* Equal contribution
CVPR 2026
This is the official implementation of Ghost-FWL.
- requirements
- install uv
- create a Weights and Biases account
- clone the repository
git clone git@github.com:Keio-CSG/Ghost-FWL.git
- init (only after cloning the repository)
uv sync

# (optional)
uv run pre-commit install
uv run pre-commit autoupdate
See README_dataset.md for more details.
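Ghost-FWL stores full-waveform returns as temporal histograms. As an illustrative sketch (not the repository's actual code or data format), the snippet below reduces a synthetic `(H, W, T)` histogram voxel to one 3D point per pixel by taking the peak bin along the time axis, then writes the result as a minimal ASCII `.pcd` file. The array shape, the bin-to-meter scale, and the use of pixel indices as x/y coordinates are all assumptions for demonstration only.

```python
import numpy as np

def peaks_to_points(voxel: np.ndarray, meters_per_bin: float = 0.15) -> np.ndarray:
    """Reduce a (H, W, T) waveform histogram to one 3D point per pixel.

    Depth is taken as the argmax bin along the time axis, scaled by an
    assumed bin-to-meter factor. Pixel indices stand in for real sensor
    angles, which the actual dataset would provide.
    """
    h, w, _ = voxel.shape
    depth = voxel.argmax(axis=2) * meters_per_bin  # (H, W) depth map
    ys, xs = np.mgrid[0:h, 0:w]                    # pixel coordinate grid
    return np.stack([xs.ravel(), ys.ravel(), depth.ravel()], axis=1).astype(np.float32)

def write_ascii_pcd(path: str, points: np.ndarray) -> None:
    """Write an Nx3 float array as a minimal ASCII PCD v0.7 file."""
    n = len(points)
    header = "\n".join([
        "# .PCD v0.7 - Point Cloud Data file format",
        "VERSION 0.7",
        "FIELDS x y z",
        "SIZE 4 4 4",
        "TYPE F F F",
        "COUNT 1 1 1",
        f"WIDTH {n}",
        "HEIGHT 1",
        "VIEWPOINT 0 0 0 1 0 0 0",
        f"POINTS {n}",
        "DATA ascii",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

# Tiny synthetic example: a 2x2 image with 8 time bins, peak at bin 4.
voxel = np.zeros((2, 2, 8))
voxel[..., 4] = 1.0
pts = peaks_to_points(voxel)
write_ascii_pcd("example.pcd", pts)
```

A file written this way opens in standard PCD viewers; the repository's `vis_pcd.py` tools below produce `.pcd` output through their own (config-driven) pipeline.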
uv run python scripts/run_train.py --config configs/config_pretrain.yaml
uv run python scripts/run_train.py --config configs/config_train.yaml
uv run python scripts/run_estimate.py --config configs/config_estimate.yaml
uv run python scripts/run_test.py --config configs/config_test.yaml
uv run python src/visualize/evaluate_pcd_batch.py --config src/visualize/configs/evaluate_pcd_batch.yaml
- vis_pred.py - visualize prediction results, ground-truth annotations, and the temporal histogram at peak locations (matplotlib)
uv run python src/visualize/vis_pred.py --config configs/config_test.yaml
- vis_pcd.py - save a .pcd file from estimated results or ground-truth annotations
uv run python src/visualize/vis_pcd.py --config src/visualize/configs/vis_pcd.yaml
- vis_pcd_batch.py - batch-save .pcd files from estimated results or ground-truth annotations
uv run python src/visualize/vis_pcd_batch.py --config src/visualize/configs/vis_pcd_batch.yaml
- interactive_histogram_viewer.py - visualize the intensity map and the histogram at the clicked location
uv run python src/visualize/interactive_histogram_viewer.py /path/to/voxel.b2 /path/to/{prediction,annotation}.b2
@inproceedings{ikeda2026ghostfwl,
title = {Ghost-FWL: A Large-Scale Full-Waveform LiDAR Dataset for Ghost Detection and Removal},
author = {Ikeda, Kazuma and Hara, Ryosei and Nagata, Rokuto and Sako, Ozora and Ding, Zihao and Kado, Takahiro and Fujioka, Ibuki and Beppu, Taro and Isogawa, Mariko and Yoshioka, Kentaro},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2026},
}
The primary references used for the implementation are listed below. Please refer to the original papers for all citations.
- Lidar Waveforms are Worth 40x128x33 Words
- MARMOT: Masked Autoencoder for Modeling Transient Imaging
- VideoMAE
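The MAE-style references above (MARMOT, VideoMAE) build on tube masking, where one random set of spatial patches is masked identically across every frame so a masked patch stays hidden for the whole clip. A minimal, illustrative sketch of that idea follows; the patch count, frame count, and mask ratio are arbitrary, and this is not this repository's implementation.

```python
import numpy as np

def tube_mask(num_patches: int, num_frames: int, mask_ratio: float = 0.9,
              rng=None) -> np.ndarray:
    """Return a boolean (num_frames, num_patches) mask, True = masked.

    Tube masking (as in VideoMAE) samples a single spatial mask and
    repeats it over all frames, preventing the model from recovering a
    masked patch by copying it from a neighboring frame.
    """
    rng = rng or np.random.default_rng()
    num_masked = int(round(num_patches * mask_ratio))
    spatial = np.zeros(num_patches, dtype=bool)
    # Choose which spatial patches to hide, without replacement.
    spatial[rng.choice(num_patches, size=num_masked, replace=False)] = True
    # Repeat the same spatial mask across every frame (the "tube").
    return np.broadcast_to(spatial, (num_frames, num_patches)).copy()

# Example: 196 patches (a 14x14 grid), 8 frames, 90% masking.
mask = tube_mask(num_patches=196, num_frames=8, mask_ratio=0.9,
                 rng=np.random.default_rng(0))
```

Every row of `mask` is identical, so each masked patch forms a tube through time; the visible-patch ratio (here 10%) is what the encoder actually processes during pretraining.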
If you have any questions, please open an issue and mention @ike-kazu and @ryhara.
