
# MI-DQA: Medical Imaging — Deep Quality Assessment


Production-grade automated quality control for brain MRI scans combining DNN-based IQM classification with ResNet-18 visual feature extraction.

Replaces costly manual expert inspection at scale — transferable, extensible, and validated across multiple neuroimaging datasets.


## 🧠 Overview

MI-DQA is a dual-pathway deep learning system for automated quality control (QC) of structural brain MRI. It addresses the practical limitations of existing QA approaches — expensive re-training, limited transferability, and reliance on fixed hand-crafted features — by combining two complementary deep learning modules:

  1. DNN-based IQM Classifier — A multi-layer fully connected network trained on Image Quality Metrics (IQMs) derived from the PCP QAP protocol. Extensible to new metrics without re-engineering.

  2. ResNet-18 Visual Artifact Extractor — Learns low-level artifact representations directly from raw MRI slices, bypassing the need for hand-crafted features entirely.

The two pathways are fused at the decision layer, yielding robust QC predictions that are validated on ABIDE-1 and DS030 and shown to generalize to unseen acquisition sites.
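As a rough sketch of how such a decision-layer fusion might look in PyTorch (the layer sizes and block layout below are illustrative assumptions, not the repository's exact architecture):

```python
import torch
import torch.nn as nn

class FusionQC(nn.Module):
    """Minimal dual-pathway fusion sketch: an IQM MLP branch plus a
    visual-feature branch, concatenated into a shared FC decision head."""

    def __init__(self, iqm_dim=60, visual_dim=512, hidden_dim=256):
        super().__init__()
        # IQM pathway: FC -> BN -> ReLU -> Dropout (one block shown here)
        self.iqm_branch = nn.Sequential(
            nn.Linear(iqm_dim, 128), nn.BatchNorm1d(128),
            nn.ReLU(), nn.Dropout(0.3),
        )
        # Fusion head over concatenated IQM (128-d) + visual (512-d) features
        self.head = nn.Sequential(
            nn.Linear(128 + visual_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # logits for FAIL / PASS
        )

    def forward(self, iqm_feats, visual_feats):
        fused = torch.cat([self.iqm_branch(iqm_feats), visual_feats], dim=1)
        return self.head(fused)
```

The visual features would come from the ResNet-18 pathway's 512-dim embedding; here they are simply assumed to be precomputed tensors.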


## 🏗️ Dual-Pathway Architecture

```mermaid
flowchart TD
    subgraph Input
        A[3D MRI Volume\nNIfTI]
        B[Image Quality Metrics\nIQMs from QAP]
    end

    A --> C[2D Slice Extraction\nAxial / Coronal / Sagittal]
    C --> D[ResNet-18 Backbone\nPretrained on ImageNet\nFine-tuned on MRI]
    D --> E[Visual Feature Vector\n512-dim]

    B --> F[DNN Classifier\nFC → BN → ReLU → Dropout\n×3 blocks]
    F --> G[IQM Feature Vector\n128-dim]

    E --> H[Feature Fusion\nConcatenation + FC]
    G --> H

    H --> I{QC Decision}
    I --> J[✅ PASS\nClinically usable]
    I --> K[❌ FAIL\nArtifact detected]

    D --> L[GradCAM Visualization\nArtifact localization heatmap]
```

## 🔁 Experimental Workflow

```mermaid
flowchart LR
    subgraph Data Preparation
        A1[ABIDE-1\n15 training sites\n~700 volumes] --> A2[Preprocessing\nN4 bias field correction\nIntensity normalization]
        A2 --> A3[IQM Feature Extraction\nMRIQC / QAP pipeline\n~60 metrics per scan]
        A2 --> A4[Slice Extraction\n3 orthogonal planes\nCenter crop 224×224]
    end

    subgraph Model Training
        A3 --> B1[DNN Training\nBCE + class balance\nAdam lr=1e-3]
        A4 --> B2[ResNet-18 Fine-tuning\nTransfer from ImageNet\nAdam lr=1e-4]
        B1 --> B3[Fusion Layer Training\nFrozen backbones\nFC fusion head]
        B2 --> B3
    end

    subgraph Evaluation
        B3 --> C1[ABIDE-1 held-out\n2 novel sites]
        B3 --> C2[DS030 dataset\nExternal validation]
        C1 --> C3[AUC / F1 / BAcc]
        C2 --> C3
    end
```
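The slice-extraction step above (intensity normalization, then a 224×224 center crop of the middle slice in each orthogonal plane) can be sketched in NumPy roughly as follows; the z-score normalization and padding behavior here are illustrative assumptions, not the repository's exact routines:

```python
import numpy as np

def zscore_normalize(volume: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance intensity normalization over the volume."""
    return (volume - volume.mean()) / (volume.std() + 1e-8)

def center_crop(slice_2d: np.ndarray, size: int = 224) -> np.ndarray:
    """Center-crop a 2D slice to size x size, zero-padding smaller axes."""
    h, w = slice_2d.shape
    out = np.zeros((size, size), dtype=slice_2d.dtype)
    ch, cw = min(h, size), min(w, size)
    top, left = (h - ch) // 2, (w - cw) // 2
    out[(size - ch) // 2:(size - ch) // 2 + ch,
        (size - cw) // 2:(size - cw) // 2 + cw] = \
        slice_2d[top:top + ch, left:left + cw]
    return out

def middle_slices(volume: np.ndarray) -> np.ndarray:
    """Middle slice along each of the three orthogonal axes -> (3, 224, 224)."""
    x, y, z = (s // 2 for s in volume.shape)
    planes = [volume[x, :, :], volume[:, y, :], volume[:, :, z]]
    return np.stack([center_crop(p) for p in planes])
```

N4 bias field correction is a separate step that would typically be done with a dedicated tool (e.g. via SimpleITK or ANTs) before this stage.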

## 📊 Performance Results

### Comparison against Prior Art (ABIDE-1, held-out sites)

| System | Feature Type | Classifier | AUC | Balanced Acc | F1 |
|---|---|---|---|---|---|
| Esteban 2017 (MRIQC) | Hand-crafted IQMs | Random Forest | 0.81 | 0.76 | 0.74 |
| Adhikari 2019 | Hand-crafted IQMs | SVM | 0.79 | 0.73 | 0.71 |
| MI-DQA DNN pathway | IQMs (DNN) | FC Network | 0.87 | 0.82 | 0.80 |
| MI-DQA ResNet pathway | Learned (ResNet-18) | FC Head | 0.89 | 0.84 | 0.83 |
| MI-DQA Fusion (ours) | IQMs + Learned | Fusion FC | 0.93 | 0.89 | 0.88 |

### External Validation (DS030)

| System | AUC | Notes |
|---|---|---|
| RF + IQM (retrained on DS030) | 0.75 | Requires retraining |
| MI-DQA Fusion (zero-shot) | 0.87 | No retraining needed |
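The AUC, F1, and balanced-accuracy figures reported here follow their standard definitions; with scikit-learn (already listed in the requirements) they can be computed from model scores as shown below. The labels and scores are toy values for illustration only:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score, balanced_accuracy_score

# Toy ground truth (1 = PASS, 0 = FAIL) and model P(PASS) scores
y_true = np.array([1, 1, 1, 0, 0, 1, 0, 1])
y_score = np.array([0.9, 0.8, 0.45, 0.3, 0.55, 0.6, 0.2, 0.85])
y_pred = (y_score > 0.5).astype(int)  # same 0.5 threshold as the QC decision

auc = roc_auc_score(y_true, y_score)        # ranking quality of P(PASS)
f1 = f1_score(y_true, y_pred)               # harmonic mean of precision/recall
bacc = balanced_accuracy_score(y_true, y_pred)  # mean of per-class recalls
print(f"AUC={auc:.3f}  F1={f1:.3f}  BAcc={bacc:.3f}")
```

Balanced accuracy is the natural headline metric here because QC datasets are heavily skewed toward PASS scans.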

## 🔬 GradCAM Artifact Localization

A key interpretability feature of MI-DQA is GradCAM visualization: the ResNet-18 pathway produces gradient-weighted class activation maps that highlight which regions of the MRI slice contributed to the FAIL decision.

*Figure: on a PASS slice the heatmap is diffuse with low activation; on FAIL slices it concentrates on the artifact region (e.g. motion or aliasing artifacts).*

This allows radiologists to quickly verify automated QC decisions and understand the nature of detected artifacts — turning a black-box classifier into an interpretable QC tool.
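The repository's `gradcam.py` builds on the `grad-cam` package; purely to illustrate the underlying mechanism, a minimal Grad-CAM can be written with plain PyTorch hooks. The tiny CNN below is a stand-in for the ResNet-18 pathway, not the actual model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def grad_cam(model, target_layer, x, class_idx):
    """Gradient-weighted class activation map for one input image."""
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(
        lambda m, i, o: acts.update(a=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))
    logits = model(x)
    model.zero_grad()
    logits[0, class_idx].backward()  # gradients of the chosen class score
    h1.remove(); h2.remove()
    # Weight each channel by its mean gradient, sum, then ReLU
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * acts["a"]).sum(dim=1))
    return (cam / (cam.max() + 1e-8)).detach()  # normalized [0, 1] heatmap

# Stand-in CNN: conv features -> global pool -> 2-class head
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
x = torch.randn(1, 1, 224, 224, requires_grad=True)
heatmap = grad_cam(model, model[0], x, class_idx=0)  # map for the FAIL class
```

In practice the hook would target the last convolutional block of ResNet-18 and the resulting map would be upsampled and overlaid on the input slice.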


## 🚀 Installation

```bash
git clone https://github.com/ashish-code/MI-DQA.git
cd MI-DQA
pip install -r requirements.txt
```

Key requirements:

```text
torch>=1.8.0
torchvision>=0.9
nibabel>=3.0
nilearn>=0.8
mriqc>=0.16              # for IQM extraction
numpy>=1.19
scikit-learn>=0.24
matplotlib>=3.4
grad-cam>=1.3            # for GradCAM visualization
```

## 💻 Usage

### Extract IQMs (requires MRIQC/QAP)

```bash
# Using MRIQC to extract IQMs for your dataset
docker run -it --rm \
  -v /path/to/bids_dataset:/data:ro \
  -v /path/to/output:/out \
  nipreps/mriqc:latest /data /out participant \
  --participant-label sub-001 sub-002 sub-003
```

### Run MI-DQA Inference

```python
import torch
import nibabel as nib
import pandas as pd
from midqa.model import MIDQAFusion
from midqa.iqm_loader import load_iqm_features
from midqa.preprocess import extract_slices, normalize_volume

# Load trained fusion model (map to CPU so no GPU is required)
model = MIDQAFusion(iqm_dim=60, visual_dim=512, hidden_dim=256)
ckpt = torch.load("checkpoints/midqa_fusion_best.pth", map_location="cpu")
model.load_state_dict(ckpt["state_dict"])
model.eval()

# Load IQM features (from MRIQC output CSV)
iqm_df = pd.read_csv("mriqc_output/sub-001_T1w.csv")
iqm_tensor = load_iqm_features(iqm_df)  # → torch.Tensor [1, 60]

# Load and preprocess MRI volume
nii = nib.load("sub-001_T1w.nii.gz")
slices = extract_slices(nii, planes=["axial", "coronal", "sagittal"])
# → torch.Tensor [3, 1, 224, 224]

# Run inference
with torch.no_grad():
    logits = model(iqm_tensor, slices)
    prob_pass = torch.softmax(logits, dim=1)[0, 1].item()
    decision = "PASS ✅" if prob_pass > 0.5 else "FAIL ❌"
    print(f"QC Decision: {decision} (P(PASS) = {prob_pass:.3f})")
```

### Batch QC Pipeline

```python
import pandas as pd
from midqa.batch import run_batch_qc

results = run_batch_qc(
    data_dir="data/BIDS_dataset/",
    iqm_csv="mriqc_output/group_T1w.tsv",
    model_path="checkpoints/midqa_fusion_best.pth",
    output_csv="qc_results.csv",
    threshold=0.5,
)

# QC summary
df = pd.read_csv("qc_results.csv")
print(f"Total scans: {len(df)}")
print(f"PASS: {(df['qc_decision'] == 'PASS').sum()} ({(df['qc_decision'] == 'PASS').mean()*100:.1f}%)")
print(f"FAIL: {(df['qc_decision'] == 'FAIL').sum()} ({(df['qc_decision'] == 'FAIL').mean()*100:.1f}%)")
```

## 📁 Repository Structure

```text
MI-DQA/
├── midqa/
│   ├── model.py              # DNN, ResNet-18, Fusion architectures
│   ├── iqm_loader.py         # IQM feature loading and normalization
│   ├── preprocess.py         # Slice extraction, volume preprocessing
│   ├── dataset.py            # ABIDE1Dataset, DS030Dataset
│   ├── train.py              # Training loop + logging
│   ├── evaluate.py           # Metrics: AUC, F1, confusion matrix
│   ├── gradcam.py            # GradCAM artifact visualization
│   └── batch.py              # Batch QC pipeline
├── notebooks/
│   ├── training_analysis.ipynb     # Training curves, ablation study
│   └── gradcam_visualization.ipynb # Artifact heatmap visualization
├── configs/
│   └── default.yaml          # Hyperparameters
├── checkpoints/              # Pretrained model weights
├── requirements.txt
└── README.md
```

## 🧪 Ablation Study

| Configuration | AUC | Notes |
|---|---|---|
| IQM only (60 features, RF) | 0.81 | Prior art baseline |
| IQM only (DNN) | 0.87 | DNN vs RF on same features |
| ResNet-18 only | 0.89 | Learned features, no IQMs |
| Fusion (DNN + ResNet-18) | 0.93 | Complementary information |
| Fusion (frozen ResNet) | 0.90 | Fine-tuning adds +0.03 AUC |
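The frozen-ResNet configuration corresponds to the standard transfer-learning pattern of freezing backbone parameters and training only the head. A minimal sketch in PyTorch, using a small stand-in module rather than the actual ResNet-18:

```python
import torch
import torch.nn as nn

# Stand-in modules (the real pipeline would use the ResNet-18 backbone)
backbone = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(8, 2)

# Freeze the backbone: exclude it from gradient computation and updates
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()  # also freezes BatchNorm running statistics, if present

# Optimize only the decision head
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
```

Fine-tuning (the 0.93 row) would instead leave `requires_grad` enabled on the backbone, typically with the smaller learning rate noted in the workflow diagram.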

## 📚 References

  1. Esteban, O. et al. (2017). MRIQC: Advancing the Automatic Prediction of Image Quality in MRI from Unseen Sites. PLOS ONE.
  2. He, K. et al. (2016). Deep Residual Learning for Image Recognition. CVPR.
  3. Selvaraju, R. et al. (2017). Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. ICCV.
  4. Woodard, J.P. & Carley-Spencer, M.P. (2006). No-Reference Image Quality Metrics for Structural MRI. Neuroinformatics.

## 📄 License

MIT License — see LICENSE for details.


Built by Ashish Gupta · Senior Data Scientist, BrightAI
