Automated, blind (no-reference) quality assessment of structural MRI using a 3D CNN + fully connected deep learning pipeline.
Achieves state-of-the-art performance on unseen acquisition sites and generalizes to Glioblastoma MRI data from TCIA.
Structural MRI is the standard imaging modality for neurological screening and diagnosis, but MRI scans frequently contain acquisition artifacts (motion blur, field inhomogeneity, aliasing, ringing) that can severely bias downstream analyses and lead to erroneous diagnoses.
The challenge:
- Manual expert QA is impractical at large scale (thousands of scans)
- Inter-rater variability introduces systematic bias
- Multi-site studies amplify artifact diversity
- No universally accepted quantitative quality definition exists
This project introduces a 3D CNN-based Blind Image Quality Assessment (BIQA) system for MRI that:
- Operates in the No-Reference (NR) regime: no clean reference scan needed
- Processes full 3D volumetric data, capturing cross-slice artifact patterns
- Transfers to novel sites and different imaging studies without retraining
```mermaid
flowchart TD
    A[Raw 3D MRI Volume\nNIfTI Format] --> B[Preprocessing\nN4 bias correction\nIntensity normalization\nResampling to 1mm isotropic]
    B --> C[3D CNN Feature Extractor\nConv3D layers\nBatch Normalization\nMax Pooling\nDropout]
    C --> D[Global Average Pooling\nFlatten volumetric features]
    D --> E[Fully Connected Classifier\nLinear → BN → ReLU → Dropout\n× 3 blocks]
    E --> F{Binary QC Decision}
    F --> G[✅ Pass\nAcceptable quality]
    F --> H[❌ Fail\nArtifacts detected]
    subgraph Training Data
        I[ABIDE-1\n15 sites\nAutism study] --> C
    end
    subgraph Validation
        J[ABIDE-1\n2 held-out sites] --> K[In-distribution test]
        L[TCIA Glioblastoma\nDifferent disease + scanner] --> M[Out-of-distribution transfer test]
    end
```
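The preprocessing stage can be sketched in a few lines. This is a dependency-free illustration of the intensity normalization and resampling steps, not the repository's `preprocess.py`; the function name and defaults are assumptions, and N4 bias correction (typically done with SimpleITK or ANTs) is omitted to keep the sketch minimal.

```python
import numpy as np
from scipy.ndimage import zoom

def preprocess_sketch(volume, spacing, target_spacing=(1.0, 1.0, 1.0),
                      out_shape=(64, 64, 64)):
    """Resample toward isotropic spacing, z-score normalize, resize to net input.

    N4 bias correction would precede this step; it is omitted here.
    """
    # Resample to ~1 mm isotropic voxels
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    vol = zoom(volume.astype(np.float32), factors, order=1)
    # Intensity normalization: zero mean, unit variance over the volume
    vol = (vol - vol.mean()) / (vol.std() + 1e-8)
    # Resize to the fixed network input shape
    factors = [o / s for o, s in zip(out_shape, vol.shape)]
    return zoom(vol, factors, order=1)

vol = preprocess_sketch(np.random.rand(40, 48, 44), spacing=(1.2, 1.0, 1.0))
print(vol.shape)  # (64, 64, 64)
```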
| Component | Details |
|---|---|
| Input | 3D NIfTI volume, resampled to 64×64×64 or full resolution |
| Feature extractor | 3× Conv3D blocks (32→64→128 filters), BN + ReLU + MaxPool3D |
| Classifier head | 3× FC blocks (512→256→128→2), BN + ReLU + Dropout(0.5) |
| Output | Binary: PASS / FAIL quality label |
| Loss | Cross-entropy over the two logits, with class-balanced sampling |
| Optimizer | Adam, lr=1e-4, weight decay=1e-5 |
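The table above can be translated into a PyTorch module. The sketch below is an illustrative reimplementation matching those specifications, not the repository's actual `MRIQualityNet`; layer choices such as kernel size 3 and padding 1 are assumptions.

```python
import torch
import torch.nn as nn

class MRIQualityNetSketch(nn.Module):
    """Illustrative 3D CNN + FC classifier matching the component table."""

    def __init__(self, in_channels=1, base_filters=32,
                 fc_dims=(512, 256, 128), dropout=0.5):
        super().__init__()
        layers, c = [], in_channels
        for f in (base_filters, base_filters * 2, base_filters * 4):  # 32→64→128
            layers += [
                nn.Conv3d(c, f, kernel_size=3, padding=1),
                nn.BatchNorm3d(f),
                nn.ReLU(inplace=True),
                nn.MaxPool3d(2),
            ]
            c = f
        self.features = nn.Sequential(*layers)
        self.gap = nn.AdaptiveAvgPool3d(1)  # global average pooling
        head, d = [], c
        for h in fc_dims:  # 512→256→128, each Linear + BN + ReLU + Dropout
            head += [nn.Linear(d, h), nn.BatchNorm1d(h),
                     nn.ReLU(inplace=True), nn.Dropout(dropout)]
            d = h
        head.append(nn.Linear(d, 2))  # PASS / FAIL logits
        self.classifier = nn.Sequential(*head)

    def forward(self, x):
        x = self.gap(self.features(x)).flatten(1)
        return self.classifier(x)

model = MRIQualityNetSketch()
model.eval()
logits = model(torch.zeros(1, 1, 64, 64, 64))
print(logits.shape)  # torch.Size([1, 2])
```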
| Method | AUC | Balanced Accuracy | F1 |
|---|---|---|---|
| Random Forest + IQM (Esteban 2017) | 0.81 | 0.76 | 0.74 |
| SVM + IQM | 0.79 | 0.74 | 0.72 |
| 3D CNN (ours) | 0.91 | 0.87 | 0.86 |
| Method | AUC | Balanced Accuracy |
|---|---|---|
| RF + IQM (retrained) | 0.73 | 0.68 |
| 3D CNN (ours, zero-shot transfer) | 0.85 | 0.81 |
The 3D CNN achieves strong zero-shot transfer, with no retraining on the target domain.
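The metrics reported in the tables above can be computed with scikit-learn (already a listed dependency). A minimal sketch, assuming the model outputs a P(pass) score per scan:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, balanced_accuracy_score, f1_score

def qc_metrics(y_true, prob_pass, threshold=0.5):
    """AUC, balanced accuracy, and F1 from per-scan P(pass) scores."""
    y_pred = (np.asarray(prob_pass) >= threshold).astype(int)
    return {
        "auc": roc_auc_score(y_true, prob_pass),
        "balanced_acc": balanced_accuracy_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
    }

# Toy example: four scans with perfect separation
m = qc_metrics([1, 1, 0, 0], [0.9, 0.8, 0.2, 0.1])
print(m)  # {'auc': 1.0, 'balanced_acc': 1.0, 'f1': 1.0}
```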
```mermaid
sequenceDiagram
    participant Data as Data Pipeline
    participant Model as 3D CNN Model
    participant Val as Validation Loop
    participant Log as Logger
    Data->>Data: Load ABIDE-1 (15 sites)
    Data->>Data: Stratified split by site
    Data->>Data: Augmentation: random flip, rotation ±15°
    Data->>Model: Batch of 3D volumes (B, 1, D, H, W)
    Model->>Model: Forward pass → logits
    Model->>Model: Compute cross-entropy loss
    Model->>Model: Backward + Adam step
    Model->>Val: Evaluate on held-out sites
    Val->>Log: AUC, Balanced Acc, F1
    Log->>Log: Save best checkpoint
```
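The augmentation step in the diagram (random flip, rotation ±15°) can be sketched with SciPy. This is an illustration, not the repository's augmentation code; the choice of flip axis, rotation plane, interpolation order, and fill mode are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)

def augment_volume(vol, max_angle=15.0):
    """Random flip plus random rotation within ±max_angle degrees."""
    # Random left-right flip with probability 0.5
    if rng.random() < 0.5:
        vol = vol[:, :, ::-1]
    # Random in-plane rotation; reshape=False keeps the volume shape fixed
    angle = rng.uniform(-max_angle, max_angle)
    return rotate(vol, angle, axes=(0, 1), reshape=False,
                  order=1, mode="nearest")

aug = augment_volume(np.random.rand(32, 32, 32))
print(aug.shape)  # (32, 32, 32)
```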
```bash
git clone https://github.com/ashish-code/MRI-QA.git
cd MRI-QA
pip install -r requirements.txt
```

Key dependencies:

```text
torch>=1.8.0
nibabel>=3.0
scikit-learn>=0.24
numpy>=1.19
scipy>=1.6
nilearn>=0.8
```
Download ABIDE-1 preprocessed data (Preprocessed Connectome Project, C-PAC pipeline). Organize as:

```text
data/
  ABIDE1/
    sub-001_T1w.nii.gz   # with QC label in accompanying CSV
    sub-002_T1w.nii.gz
    ...
  labels/
    abide1_qc_labels.csv # columns: subject_id, site, qc_label (1=pass, 0=fail)
```
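With that CSV layout, a site-held-out split (the basis of the cross-site evaluation) is a one-liner in pandas. The toy DataFrame below stands in for the real labels file; site IDs 16 and 17 are illustrative:

```python
import pandas as pd

# Toy stand-in for data/labels/abide1_qc_labels.csv (same columns)
labels = pd.DataFrame({
    "subject_id": ["sub-001", "sub-002", "sub-003", "sub-004"],
    "site":       [1, 2, 16, 17],
    "qc_label":   [1, 0, 1, 1],
})

# Hold out two sites entirely for the cross-site generalization test
heldout_sites = [16, 17]
train_df = labels[~labels["site"].isin(heldout_sites)]
val_df = labels[labels["site"].isin(heldout_sites)]
print(len(train_df), len(val_df))  # 2 2
```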
```python
import torch
from mri_qa.model import MRIQualityNet
from mri_qa.dataset import ABIDEDataset
from mri_qa.train import train_model

# Initialize model
model = MRIQualityNet(
    in_channels=1,
    base_filters=32,
    fc_dims=[512, 256, 128],
    dropout=0.5,
)

# Datasets: train on 15 sites, validate on 2 held-out sites
train_ds = ABIDEDataset(
    data_dir="data/ABIDE1",
    labels_csv="data/labels/abide1_qc_labels.csv",
    sites_include=list(range(1, 16)),  # sites 1-15
    augment=True,
)
val_ds = ABIDEDataset(
    data_dir="data/ABIDE1",
    labels_csv="data/labels/abide1_qc_labels.csv",
    sites_include=[16, 17],  # held-out sites
    augment=False,
)

# Train
train_model(
    model=model,
    train_dataset=train_ds,
    val_dataset=val_ds,
    epochs=50,
    batch_size=4,
    lr=1e-4,
    output_dir="checkpoints/",
)
```

```python
import nibabel as nib
import torch
from mri_qa.model import MRIQualityNet
from mri_qa.preprocess import preprocess_volume

# Load trained model
model = MRIQualityNet(in_channels=1, base_filters=32, fc_dims=[512, 256, 128])
checkpoint = torch.load("checkpoints/best_model.pth")
model.load_state_dict(checkpoint["model_state_dict"])
model.eval()

# Load and preprocess an MRI volume
nii = nib.load("subject_T1w.nii.gz")
volume = preprocess_volume(nii)  # normalize, resample, crop → torch.Tensor

# Run inference
with torch.no_grad():
    logits = model(volume.unsqueeze(0))  # add batch dim
    prob_pass = torch.softmax(logits, dim=1)[0, 1].item()

decision = "PASS ✅" if prob_pass > 0.5 else "FAIL ❌"
print(f"QC Decision: {decision} (P(pass) = {prob_pass:.3f})")
```

- 3D volumes over 2D slices: Most prior work assesses individual axial slices. Our 3D approach captures inter-slice artifacts (e.g., motion in k-space, aliasing in the z-direction) that are invisible in 2D.
- Multi-site training: ABIDE-1's 17 acquisition sites provide natural domain shift. By training on 15 and testing on 2 held-out sites, we explicitly measure cross-site generalization.
- Class-balanced sampling: QC pass/fail labels are highly imbalanced in real datasets. We apply class-balanced batch sampling to prevent the model from collapsing to the majority class.
- Transfer to TCIA: Without any fine-tuning, the model achieves strong AUC (0.85) on Glioblastoma MRI: a different disease, scanner protocol, and patient population. This demonstrates the model learns general artifact representations, not ABIDE-specific features.
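Class-balanced sampling can be implemented with PyTorch's `WeightedRandomSampler`. A minimal sketch (the repository's actual sampling code may differ):

```python
import torch
from torch.utils.data import WeightedRandomSampler

# Imbalanced toy labels: 90 pass (1), 10 fail (0)
labels = torch.tensor([1] * 90 + [0] * 10)

# Weight each sample by the inverse frequency of its class
class_counts = torch.bincount(labels).float()  # tensor([10., 90.])
weights = 1.0 / class_counts[labels]

# The sampler draws fail scans ~9x more often, so batches are
# balanced in expectation; pass it via DataLoader(..., sampler=sampler)
sampler = WeightedRandomSampler(weights, num_samples=len(labels),
                                replacement=True)

torch.manual_seed(0)
drawn = labels[torch.tensor(list(sampler))]
print(f"fraction of fail scans drawn: {(drawn == 0).float().mean():.2f}")
```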
```text
MRI-QA/
├── mri_qa/
│   ├── model.py        # 3D CNN + FC architecture
│   ├── dataset.py      # ABIDEDataset, TCIADataset
│   ├── preprocess.py   # N4 bias correction, normalization, resampling
│   ├── train.py        # Training loop with logging
│   └── evaluate.py     # AUC, F1, confusion matrix
├── notebooks/
│   └── visualize_gradcam.ipynb  # Grad-CAM artifact localization
├── scripts/
│   ├── download_abide.sh
│   └── run_inference.py
├── requirements.txt
└── README.md
```
- Esteban, O. et al. (2017). MRIQC: Advancing the Automatic Prediction of Image Quality in MRI from Unseen Sites. PLOS ONE.
- He, K. et al. (2016). Deep Residual Learning for Image Recognition. CVPR.
- Di Martino, A. et al. (2014). The Autism Brain Imaging Data Exchange. Molecular Psychiatry. (ABIDE-1)
- Clark, K. et al. (2013). The Cancer Imaging Archive (TCIA). J. Digit. Imaging.
MIT License. See LICENSE for details.