
Geometry-Aware Implicit Neural Reconstruction of Oblique Micro-Ultrasound Scans

Official PyTorch Implementation of:

Link to published paper here


Abstract

Micro-ultrasound is a new modality for accurate, low-cost prostate cancer imaging, but its acquisition produces oblique slices that do not align with axial MRI or histopathology. This geometric mismatch complicates interpretation and prevents direct registration to histopathology, which is necessary to map ground-truth cancer outlines onto micro-ultrasound for training machine learning models for automated cancer detection. We address this challenge with a geometry-aware reconstruction framework that converts oblique micro-ultrasound slices into axial 3D volumes. Our method includes: (i) a coordinate-based sampling scheme that uses cylindrical geometry to accurately map each voxel into Cartesian space, and (ii) a generalized implicit neural representation that models the continuous intensity field between slices, preserving high-frequency speckle texture that traditional interpolation blurs. The reconstructed volumes achieve a 9% relative SSIM improvement over a coordinate-matched trilinear baseline while maintaining ultrasound-specific texture and boundary detail. This framework produces high-quality axial micro-ultrasound volumes suitable for reliable histopathology registration and for creating pathology-informed datasets to train cancer detection models.
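The coordinate-based sampling scheme amounts to a cylindrical-to-Cartesian change of coordinates for each voxel. A minimal NumPy sketch of that mapping (the paper's exact angle convention and pivot geometry are assumptions here, and the function name is hypothetical):

```python
import numpy as np

def cylindrical_to_cartesian(r, theta, z):
    """Map cylindrical sample coordinates (radius, angle, height)
    to Cartesian (x, y, z).

    Hypothetical sketch: the actual slice geometry (pivot location,
    angle convention, units) used by the method may differ.
    """
    x = r * np.cos(theta)
    y = r * np.sin(theta)
    # Broadcast z so scalar heights work with array-valued r/theta
    return np.stack([x, y, np.broadcast_to(z, np.shape(x))], axis=-1)
```

With such a mapping, every voxel of the target axial grid can be assigned a continuous Cartesian coordinate before querying the implicit neural representation.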

Network Architecture

Our model utilizes a Dual-Path Hybrid Attention Transformer (HAT) augmented with Arc Length Embeddings to encode physical acquisition geometry.
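One common way to encode a continuous scalar such as arc length is a sinusoidal embedding, as used in standard Transformers. A minimal NumPy sketch for illustration (the actual Arc Length Embedding formulation in our model may differ):

```python
import math
import numpy as np

def arc_length_embedding(s, dim=16):
    """Sinusoidal embedding of arc-length values.

    Hypothetical sketch: maps each scalar arc length to a `dim`-dimensional
    vector using sin/cos features at geometrically spaced frequencies.
    """
    s = np.asarray(s, dtype=np.float64)
    half = dim // 2
    # Frequencies spaced geometrically from 1 down to ~1/10000
    freqs = np.exp(-math.log(10000.0) * np.arange(half) / half)
    args = s[..., None] * freqs
    return np.concatenate([np.sin(args), np.cos(args)], axis=-1)
```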

Data Preparation

Our dataset is not publicly available.

We include data/data_build.py so you can build your own dataset from micro-ultrasound scans.

Usage

All configurations for training, evaluation, and inference are managed through Hydra configs.
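Hydra also lets you override any config value directly from the command line. A hypothetical example (the actual config keys depend on the files in this repo's config directory):

```shell
# Override hypothetical config values at launch time
python train.py optimizer.lr=1e-4 trainer.max_epochs=200
```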


1. Training

Training supports PyTorch DistributedDataParallel (DDP).

# Train on 2 GPUs
torchrun --standalone --nproc_per_node=2 train.py

2. Sampling (Volumetric Reconstruction)

Reconstruct a full 3D NIfTI volume from raw DICOM data.

python inference.py

References


Code adapted from HAT: Hybrid Attention Transformer.

Citation


To be added once the paper is published.