
# 🧠 Explainable AI for Image Classification

Seeing Through the Black Box: Explainability Techniques Applied to a Custom CNN

**Authors:** Matheus Braga (mbb4) · Philippe Menge (pmal)

## 📌 About

This project explores Explainable AI (XAI) methods applied to a Convolutional Neural Network (CNN) built with PyTorch. The goal is not only to achieve good classification performance, but also to understand why the model makes each decision.
The model's hyperparameters were tuned with Optuna, and the trained model was evaluated with multiple explainability techniques.

## 🔍 Explainability Techniques

- Saliency Maps
- Grad-CAM
- LIME
- RISE
- Occlusion Sensitivity
- Rejection (uncertainty-based)
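Of these, a vanilla saliency map is the simplest: backpropagate the top-class score to the input and take the gradient magnitude per pixel. A minimal sketch (the tiny network here is a placeholder, not the project's custom CNN):

```python
import torch
import torch.nn as nn

# Placeholder classifier; the project uses a custom residual CNN instead.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

x = torch.rand(1, 3, 32, 32, requires_grad=True)  # one RGB image
scores = model(x)[0]
scores[scores.argmax()].backward()                # d(top-class score) / d(input)
saliency = x.grad.abs().max(dim=1).values         # per-pixel max over channels
```

High values in `saliency` mark pixels whose small changes most affect the predicted class's score.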

## 🏗️ Model Architecture

- Custom CNN with residual blocks
- Batch normalization + dropout
- Optimized with Optuna (automated hyperparameter search)
- Training with early stopping
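A residual block combining the ingredients above (convolutions, batch normalization, dropout, and a skip connection) might look roughly like this; the project's actual channel counts and dropout rate come from the Optuna search:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Illustrative residual block: conv -> BN -> ReLU -> conv -> BN -> dropout, plus skip."""

    def __init__(self, channels: int, p_drop: float = 0.2):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.drop = nn.Dropout2d(p_drop)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.drop(self.bn2(self.conv2(out)))
        return self.relu(out + x)  # skip connection preserves the input signal

x = torch.rand(2, 16, 8, 8)
y = ResidualBlock(16)(x)  # same shape in, same shape out
```

The skip connection lets gradients flow directly through the block, which is what makes deeper CNNs trainable.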

## 📊 Evaluation

- Confusion matrix (before and after rejection)
- Per-class metrics (precision, recall, F1)
- Precision-recall curves
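The rejection step can be sketched as simple confidence thresholding on the model's output probabilities; the threshold value here is illustrative:

```python
import numpy as np

def reject_low_confidence(probs: np.ndarray, threshold: float = 0.7) -> np.ndarray:
    """Return predicted classes, with -1 marking rejected (low-confidence) samples."""
    conf = probs.max(axis=1)          # top-class probability per sample
    preds = probs.argmax(axis=1)      # predicted class per sample
    preds[conf < threshold] = -1      # reject anything below the threshold
    return preds

probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.2, 0.8]])
reject_low_confidence(probs)  # → [0, -1, 1]: the middle sample is rejected
```

Recomputing the confusion matrix only over the accepted samples shows how much accuracy the model gains by abstaining on uncertain inputs.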
