MICCAI'2025 | Paper | Code | Data
Official implementation of VoxelOpt: Voxel-Adaptive Message Passing for Discrete Optimization in Deformable Abdominal CT Registration, MICCAI'2025.
Hang Zhang,
Yuxi Zhang,
Jiazheng Wang,
Xiang Chen,
Renjiu Hu,
Xin Tian,
Gaolei Li, and
Min Liu.
VoxelOpt is a training-free deformable registration method that combines foundation-model features, local 3D cost volumes, voxel-wise displacement entropy, and adaptive message passing. It delivers competitive abdominal CT registration accuracy without training a registration network on segmentation labels.
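The displacement-entropy idea can be sketched in a few lines. This is an illustrative numpy-only sketch, not the repo's implementation; the shapes and variable names are assumptions (27 candidates = a 3x3x3 local search, a toy 4x4x4 grid):

```python
import numpy as np

# Hypothetical local cost volume: 27 candidate displacements per voxel.
rng = np.random.default_rng(0)
cost = rng.standard_normal((27, 4, 4, 4))

# Softmax over the displacement axis turns negative costs into a
# per-voxel probability distribution over candidate displacements.
s = -cost
p = np.exp(s - s.max(axis=0))   # subtract max for numerical stability
p /= p.sum(axis=0)

# Shannon entropy per voxel: high entropy = ambiguous match (flat tissue),
# low entropy = confident match (e.g. an organ boundary).
entropy = -(p * np.log(p + 1e-12)).sum(axis=0)
print(entropy.shape)  # (4, 4, 4)
```

Voxels with high entropy are the ones that benefit most from neighbor information during message passing; low-entropy voxels keep their own local evidence.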
This repository is intentionally small, script-first, and terminal-agent
friendly for OpenAI Codex
and Anthropic Claude Code.
The main workflow has two entrypoints, src/get_unet_features.py and
src/test_abdomen.py, with a conservative .gitignore that keeps datasets,
feature maps, and logs out of commits.
If you are using Codex or Claude Code, start with the agent runbook:
INSTALL.md. It tells the agent exactly how to set up the
environment, where to place abdomenreg/, which commands to run, and what files
must not be committed.
- Training-free registration: no registration-network training loop, no supervision labels during optimization.
- Foundation features, discrete optimizer: pre-softmax segmentation features make the local displacement search sharply informative.
- Voxel-adaptive message passing: uncertain voxels receive more neighbor information, while confident boundary voxels preserve strong local signals.
- Fast 5-level pyramid solver: the Table 1 setting uses k=1, a 27-neighbor local search, 6 optimization steps, and 7-step scaling-and-squaring.
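Scaling-and-squaring integrates a stationary velocity field into a (diffeomorphic) deformation. A minimal 1D sketch with 7 squaring steps, matching the step count above; the grid, field, and composition-by-interpolation are illustrative, not the repo's 3D implementation:

```python
import numpy as np

n, steps = 64, 7
x = np.linspace(0.0, 1.0, n)              # identity grid
v = 0.05 * np.sin(2 * np.pi * x)          # toy stationary velocity field

# Scale the velocity down by 2^steps, then self-compose ("square") 7 times:
# phi <- phi o phi, so the final phi integrates v over unit time.
u = v / 2 ** steps                        # small initial displacement
for _ in range(steps):
    # compose: u(x) <- u(x) + u(x + u(x)), via linear interpolation
    u = u + np.interp(x + u, x, u)

phi = x + u                               # final deformation, monotone in 1D
```

For a small, smooth velocity field the resulting map stays invertible (here: strictly increasing), which is what keeps the Jacobian statistics like SDLogJ well behaved.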
Abdominal CT registration, averaged over the 42 ordered test pairs:
| Method | Dice (%) ↑ | HD95 ↓ | SDLogJ ↓ | Runtime |
|---|---|---|---|---|
| Initial | 30.86 | 29.77 | - | - |
| Deeds | 53.57 | 20.08 | 0.12 | 110.1 s |
| RDP, semi-supervised | 58.77 | 20.07 | 0.22 | <1 s |
| VoxelOpt | 58.51 | 18.54 | 0.21 | <1 s |
This repository was verified on the released test split with:
Dice: 58.46%
HD95: 18.62
SDLogJ: 0.218
Small differences are expected from GPU, PyTorch, and half-precision kernel behavior.
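For reference, the Dice score in the table is the usual mean over foreground labels. A plain-numpy sketch (the repo's version lives in utils/functions.py; this helper is only illustrative):

```python
import numpy as np

def mean_dice(fixed_seg: np.ndarray, warped_seg: np.ndarray) -> float:
    """Mean Dice over all foreground labels present in either segmentation."""
    labels = np.union1d(np.unique(fixed_seg), np.unique(warped_seg))
    labels = labels[labels != 0]          # skip background (label 0)
    dices = []
    for c in labels:
        a, b = fixed_seg == c, warped_seg == c
        # Dice(c) = 2 |A ∩ B| / (|A| + |B|)
        dices.append(2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum()))
    return float(np.mean(dices))

a = np.array([0, 1, 1, 2, 2, 2])
b = np.array([0, 1, 2, 2, 2, 2])
print(mean_dice(a, b))  # label 1: 2/3, label 2: 6/7 -> mean ~0.762
```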
Create an environment with Python 3.9 or newer:
```bash
conda create -n voxelopt python=3.9 -y
conda activate voxelopt
```

Install PyTorch for your CUDA version from the official PyTorch selector, then install the remaining dependencies:

```bash
pip install numpy scipy pandas nibabel
```

The code has been verified with PyTorch 2.7 and CUDA GPUs. CPU execution is possible for feature extraction but is slow for full 3D volumes.
Download the preprocessed abdominal CT registration data from Dropbox:
https://www.dropbox.com/scl/fo/1ri37zp2awc1e218p0zjx/AHw9tXM-wowNqT8WzG6Uq5c?rlkey=ppgyoll7vzzg6hgdz8uzt9h7q&st=drein7eg&dl=0
Place the extracted folder in the repository root and name it exactly:
```
abdomenreg/
  img/
    img0001.nii.gz
    ...
    img0030.nii.gz
  label/
    label0001.nii.gz
    ...
    label0030.nii.gz
```
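A quick layout sanity check can save a failed run later. This small stdlib-only helper is a convenience sketch, not part of the repo:

```python
from pathlib import Path

def check_layout(root: str) -> list:
    """Return the list of expected files missing under abdomenreg/."""
    root_path = Path(root)
    missing = []
    for i in range(1, 31):                      # subjects 0001..0030
        for sub in ("img", "label"):
            f = root_path / sub / f"{sub}{i:04d}.nii.gz"
            if not f.exists():
                missing.append(str(f))
    return missing

# Example: check_layout("./abdomenreg") returns [] when the layout is complete.
```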
The released Table 1 evaluation uses subjects 0024 to 0030, producing
7 x 6 = 42 ordered test pairs. The feature extraction step below creates:
abdomenreg/fea/img0024.npy ... img0030.npy
The pretrained feature extractor checkpoint is expected at:
src/unet.pth
Run all commands from the repository root.
```bash
python src/get_unet_features.py --data_path ./abdomenreg --split test --gpu_id 0 --overwrite
```

The script clips CT intensities to [-500, 800], normalizes to [0, 1], runs the pretrained segmentation backbone, and saves feature maps under abdomenreg/fea/.
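The intensity preprocessing in isolation, as described above (a numpy sketch; the function name is hypothetical):

```python
import numpy as np

def preprocess_ct(vol: np.ndarray) -> np.ndarray:
    """Clip HU values to [-500, 800], then rescale linearly to [0, 1]."""
    vol = np.clip(vol.astype(np.float32), -500.0, 800.0)
    return (vol + 500.0) / 1300.0

hu = np.array([-1000.0, -500.0, 0.0, 800.0, 2000.0])
print(preprocess_ct(hu))  # air and bone saturate at 0 and 1
```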
```bash
python src/test_abdomen.py --data_path ./abdomenreg --gpu_id 0
```

The output CSV is saved to:
logs_abct/results_ks1_half1_ada1_foundation.csv
Expected final line:
Avg val dice 0.58..., Avg hd95 18..., Avg std dev 0.21...
To check the environment before launching the full 42-pair run:
```bash
python src/test_abdomen.py --data_path ./abdomenreg --gpu_id 0 --max_pairs 1
```

This writes logs_abct/results_ks1_half1_ada1_foundation_n1.csv and does not
overwrite the full evaluation CSV.
Feature extraction:
```bash
python src/get_unet_features.py --help
```

Registration:

```bash
python src/test_abdomen.py --help
```

Common overrides:

```bash
# Raw CT features
python src/test_abdomen.py --fea_type raw --gpu_id 0

# MIND features
python src/test_abdomen.py --fea_type mind --gpu_id 0

# Disable voxel-adaptive message passing
python src/test_abdomen.py --gpu_id 0 --is_adaptive 0

# Larger local cost-volume kernel
python src/test_abdomen.py --gpu_id 0 --ks 2
```

Repository layout:

```
src/
  get_unet_features.py           # feature-map extraction
  test_abdomen.py                # VoxelOpt evaluation on abdomen CT
  loaders/abdomenreg_loader.py
  models/costVolComplex.py       # VoxelOpt cost volume + adaptive message passing
  models/preUnetComplex.py       # pretrained feature extractor wrapper
  models/universalmodel/unet.py
  utils/functions.py             # warping, Dice, Jacobian, HD95
  utils/surface_distance/        # HD95 utilities
figs/
  voxelopt_framework.png
  entropy_distribution.png
```
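The SDLogJ metric reported above is the standard deviation of the log Jacobian determinant of the deformation. An illustrative numpy version (the repo's implementation is in utils/functions.py; this helper name and the voxel-unit convention are assumptions):

```python
import numpy as np

def sdlogj(disp: np.ndarray) -> float:
    """disp: displacement field of shape (3, D, H, W) in voxel units."""
    # Spatial gradients of each displacement component: shape (3, 3, D, H, W),
    # grads[i, j] = d(disp_i)/d(axis_j) via central finite differences.
    grads = np.stack([np.gradient(disp[i], axis=(0, 1, 2)) for i in range(3)])
    # Jacobian of phi = identity + disp:  J = I + d(disp)/dx
    jac = grads.transpose(2, 3, 4, 0, 1) + np.eye(3)
    det = np.linalg.det(jac)
    det = np.clip(det, 1e-9, None)        # guard against folded voxels
    return float(np.log(det).std())

# A zero displacement field has J = I everywhere, so SDLogJ = 0.
print(sdlogj(np.zeros((3, 8, 8, 8))))  # 0.0
```

Lower SDLogJ means a smoother, more uniform deformation; values near 0.2 (as in the table) are typical for large abdominal deformations.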
If this repository helps your research, please cite:
```bibtex
@InProceedings{ZhaHan_VoxelOpt_MICCAI2025,
  author    = {Zhang, Hang and Zhang, Yuxi and Wang, Jiazheng and Chen, Xiang and Hu, Renjiu and Tian, Xin and Li, Gaolei and Liu, Min},
  title     = {VoxelOpt: Voxel-Adaptive Message Passing for Discrete Optimization in Deformable Abdominal CT Registration},
  booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2025},
  year      = {2025},
  publisher = {Springer Nature Switzerland},
  volume    = {LNCS 15963},
  pages     = {672--683}
}
```

VoxelOpt, MICCAI 2025, deformable image registration, medical image registration, abdominal CT registration, 3D CT registration, discrete optimization, cost volume, voxel-adaptive message passing, mean-field inference, foundation model features, foundation segmentation model, diffeomorphic registration, scaling and squaring, Learn2Reg abdominal CT, PyTorch medical imaging.
Relevant links: Springer paper, DOI, Learn2Reg, VoxelOpt GitHub, OpenAI Codex, Claude Code.

