This repository contains the official code and resources for the study on deep learning–based spinal cord segmentation and positional shift prediction from CT scans, designed to enhance adaptive radiotherapy workflows for head and neck cancer patients. The project investigates the integration of U-Net–based segmentation with a CNN–LSTM prediction model to automate and refine spinal cord tracking during treatment.
Accurate spinal cord segmentation and monitoring are critical for safe and effective adaptive radiotherapy. Manual contouring is time-consuming and subject to inter-observer variability, which can compromise treatment precision. This project presents an automated pipeline:
- A U-Net architecture performs pixel-level segmentation of the spinal cord from CBCT scans.
- The segmented spinal cord regions are analyzed by a CNN–LSTM prediction model to estimate positional shifts over the treatment course.
The approach achieved Dice scores exceeding 0.85 for segmentation and mean absolute errors of 1–2 mm for shift prediction, demonstrating strong potential for clinical integration.
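The Dice score referenced above measures overlap between predicted and ground-truth masks. A minimal NumPy sketch of the metric (function name and smoothing constant are our own, not from the repository):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), with eps guarding empty masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: prediction recovers 2 of 3 foreground pixels
pred = np.zeros((4, 4)); pred[1, 1] = pred[1, 2] = 1
gt = np.zeros((4, 4));   gt[1, 1] = gt[1, 2] = gt[1, 3] = 1
print(round(dice_score(pred, gt), 3))  # 2*2 / (2+3) = 0.8
```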
- Encoder: Multi-level convolutional layers with ReLU activations and max pooling to capture contextual features.
- Decoder: Transposed convolutions with skip connections to restore spatial resolution and preserve fine-grained details.
- Output: Sigmoid-activated pixel-wise classification mask for spinal cord delineation.
- Loss: Dice loss combined with binary cross-entropy for robust optimization under class imbalance.
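The combined Dice + BCE loss described above can be sketched in PyTorch as follows (class name and `smooth` term are assumptions for illustration, not the repository's exact implementation):

```python
import torch
import torch.nn as nn

class DiceBCELoss(nn.Module):
    """Binary cross-entropy plus soft-Dice loss; expects raw logits.
    The smooth term keeps the Dice ratio defined for empty masks."""
    def __init__(self, smooth=1.0):
        super().__init__()
        self.smooth = smooth
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, logits, target):
        bce = self.bce(logits, target)
        probs = torch.sigmoid(logits)
        intersection = (probs * target).sum()
        dice = (2.0 * intersection + self.smooth) / (
            probs.sum() + target.sum() + self.smooth)
        return bce + (1.0 - dice)

# Sanity check on a random batch of single-channel masks
logits = torch.randn(2, 1, 64, 64)
target = torch.randint(0, 2, (2, 1, 64, 64)).float()
loss = DiceBCELoss()(logits, target)
```

Summing the two terms lets BCE drive per-pixel calibration while the Dice term directly counters foreground/background class imbalance.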
- CNN Feature Extractor: Convolutional blocks (Conv2D, ReLU, MaxPool) encode spatial features of segmented spinal cord slices.
- LSTM Layers: Capture temporal dependencies across sequential CBCT scans to track positional shifts.
- Fully Connected Layer: Predicts 2D displacement vectors (x, y) for spinal cord movement.
- Calibration: Outputs validated against expert landmarks to ensure clinically relevant measurements.
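The CNN–LSTM pipeline above can be sketched as a single PyTorch module (layer sizes, names, and the single-LSTM-layer choice are assumptions for illustration):

```python
import torch
import torch.nn as nn

class CNNLSTMShiftPredictor(nn.Module):
    """Sketch: a small CNN encodes each segmented slice, an LSTM models
    the sequence of CBCT scans, and a linear head outputs an (x, y) shift."""
    def __init__(self, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),          # -> (N, 32, 1, 1)
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)      # displacement (dx, dy)

    def forward(self, x):                     # x: (batch, time, 1, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).flatten(1)  # (b*t, 32)
        out, _ = self.lstm(feats.view(b, t, -1))
        return self.head(out[:, -1])          # predict from final time step

model = CNNLSTMShiftPredictor()
seq = torch.randn(2, 5, 1, 64, 64)  # 2 patients, 5 sequential scans
shift = model(seq)                  # -> shape (2, 2)
```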
SpinalCord-Shift-Prediction/
├── Model.ipynb # Training pipeline for U-Net and CNN–LSTM
├── metrics/               # Loss, accuracy, Dice/IoU curves (PNG)
│   ├── all_training_metrics.png
│   ├── loss_accuracy_curve.png
│   └── loss_curve.png
├── requirements.txt       # Python dependencies
├── .gitignore
└── README.md
- Python 3.8+
- PyTorch (with CUDA for GPU acceleration)
- NumPy, OpenCV, Matplotlib, and related libraries
Clone the repository:
git clone https://github.com/Rusheel86/spinal-cord-shift.git
cd SpinalCord-Shift-Prediction

Create and activate a virtual environment:
python -m venv venv
source venv/bin/activate   # On Windows: venv\Scripts\activate

Install dependencies:

pip install -r requirements.txt

Due to privacy constraints, patient CT scans are not included. Organize your dataset as follows:
data/
├── patient_01/
│ ├── CT001.dcm
│ ├── ...
│ └── RS.dcm # Ground truth contours
├── patient_02/
│ └── ...
└── patient_N/
- Model.ipynb – Preprocesses DICOM files, then trains the U-Net segmentation model and the CNN–LSTM shift predictor.
- Segmentation: Dice coefficient > 0.85 on validation, stable IoU trends across epochs.
- Prediction: Mean absolute error ~1–2 mm against expert measurements.
- Cross-validation: Confirmed robustness across multiple patients.
- Training Curves: Included in metrics/ for reproducibility.
If you use this repository or its methods, please cite the following works:
(A full citation will be provided once the related manuscript is published.)
This project is a collaboration between SVKM’s NMIMS University, Mukesh Patel School of Technology, Management & Engineering, Mumbai, and Nanavati Max Super Speciality Hospital, Mumbai. Special thanks to the clinical experts for providing guidance and validation for model evaluation.