
# 🧠 Explainable AI for Image Classification

*Seeing Through the Black Box: Explainability Techniques Applied to a Custom CNN*

**Authors:** Matheus Braga (mbb4) · Philippe Menge (pmal)

## 📌 About

This project explores Explainable AI (XAI) methods applied to a Convolutional Neural Network (CNN) built with PyTorch. The goal is not only to achieve good classification performance, but also to understand why the model makes each decision.
The model's hyperparameters were tuned with Optuna, and the trained model was evaluated with multiple explainability techniques.

## 🔍 Explainability Techniques

- Saliency Maps
- Grad-CAM
- LIME
- RISE
- Occlusion Sensitivity
- Rejection

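The first technique in the list, a vanilla saliency map, can be sketched in PyTorch as follows; the tiny model and random input below are illustrative stand-ins, not the project's tuned CNN:

```python
# Sketch of a vanilla saliency map: gradient of the predicted class score
# with respect to the input pixels. Model and input are dummy placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(  # stand-in for the project's custom CNN
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # dummy input image
logits = model(image)
score = logits[0, logits.argmax()]  # score of the predicted class
score.backward()                    # backprop the class score to the pixels

# Saliency = maximum absolute gradient over the colour channels
saliency = image.grad.abs().max(dim=1).values  # shape (1, 32, 32)
```

Bright regions of the resulting map mark pixels whose perturbation most changes the predicted class score.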
## 🏗️ Model Architecture

- Custom CNN with residual blocks
- Batch Normalization + Dropout
- Hyperparameters optimized with Optuna (automated search)
- Training with early stopping
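The residual-block pattern listed above might look like this minimal sketch; channel count and dropout rate are illustrative, not the Optuna-tuned values:

```python
# Minimal residual block combining Conv + BatchNorm + Dropout with a
# skip connection, as described in the architecture list.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int, p_drop: float = 0.1):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.drop = nn.Dropout2d(p_drop)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.drop(self.bn2(self.conv2(out)))
        return torch.relu(out + x)  # skip connection adds the input back

x = torch.rand(2, 16, 32, 32)
y = ResidualBlock(16)(x)  # output keeps the input's shape
```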

## 📊 Evaluation

- Confusion matrix (before and after rejection)
- Per-class metrics (Precision, Recall, F1)
- Precision-Recall curves
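Confidence-based rejection, the "after rejection" setting above, can be sketched as below; the softmax-confidence criterion and threshold value are assumptions for illustration, and the project's actual rejection rule may differ:

```python
# Sketch of confidence-based rejection: predictions whose softmax
# confidence falls below a threshold are withheld rather than scored,
# which typically raises precision on the remaining (accepted) samples.
import torch

torch.manual_seed(0)
logits = torch.randn(8, 10)        # dummy logits for 8 samples, 10 classes
probs = torch.softmax(logits, dim=1)
conf, preds = probs.max(dim=1)     # confidence and predicted class

threshold = 0.2                    # illustrative rejection threshold
accepted = conf >= threshold       # boolean mask of kept predictions
kept_preds = preds[accepted]       # compute metrics only on these
coverage = accepted.float().mean() # fraction of samples actually answered
```

Sweeping the threshold trades coverage against per-class precision, which is what the before/after confusion matrices visualize.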