Explainable, Adaptive and Ethical AI Framework for Domain-Agnostic Personalization
Updated Jan 18, 2026
AI-powered skin analysis prototype with Power BI dashboard highlighting model performance, bias, and generalisation challenges in melanin-rich datasets.
This repository explores Explainable and Responsible AI concepts with projects on bias detection & mitigation, model interpretability, and machine unlearning. It demonstrates techniques to make AI systems fairer, transparent, and accountable.
Fairness auditing in dermoscopic AI: quantifies a 55% false-negative-rate (FNR) disparity linked to anatomical localization (a spurious-bias audit), with a focus on disentangled representation learning for equitable AI.
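A minimal sketch (not taken from the repository) of how a per-subgroup FNR disparity like this might be computed, using toy predictions grouped by a hypothetical anatomical-site label:

```python
import numpy as np

def false_negative_rate(y_true, y_pred):
    """FNR = FN / (FN + TP), computed over positive ground-truth cases."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    positives = y_true == 1
    if positives.sum() == 0:
        return float("nan")
    return float(np.mean(y_pred[positives] == 0))

def fnr_disparity(y_true, y_pred, groups):
    """Per-group FNRs and the max-min gap across subgroups (e.g. anatomical sites)."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    rates = {g: false_negative_rate(y_true[groups == g], y_pred[groups == g])
             for g in np.unique(groups)}
    vals = list(rates.values())
    return rates, max(vals) - min(vals)

# Toy example: lesions from two anatomical sites (hypothetical labels)
y_true = np.array([1, 1, 1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 1])
site   = np.array(["face", "face", "face", "trunk", "trunk", "trunk", "face", "trunk"])
rates, gap = fnr_disparity(y_true, y_pred, site)
```

In this toy data the model misses every positive "trunk" lesion, so the FNR gap between sites is maximal; an audit like the one described would surface such a localization-dependent disparity.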
Machine learning project analyzing bias and fairness in loan approval predictions using metrics like disparate impact and error disparity.
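A minimal sketch (an illustration, not the project's code) of the disparate-impact metric mentioned above, applied to toy loan-approval decisions with hypothetical group labels:

```python
import numpy as np

def disparate_impact(y_pred, groups, protected, reference):
    """Ratio of positive-outcome (approval) rates: protected group / reference group.
    Values below 0.8 fail the common four-fifths rule of thumb."""
    y_pred, groups = np.asarray(y_pred), np.asarray(groups)
    rate = lambda g: np.mean(y_pred[groups == g] == 1)
    return rate(protected) / rate(reference)

# Toy loan decisions: 1 = approved
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
group  = np.array(["B", "B", "B", "B", "A", "A", "A", "A"])
di = disparate_impact(y_pred, group, protected="B", reference="A")
```

Here group B is approved at 0.25 versus 0.75 for group A, giving a ratio of about 0.33, well below the 0.8 threshold.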
Mitigating algorithmic bias in facial detection systems using Debiasing Variational Autoencoders (DB-VAE). An implementation focusing on AI Fairness and latent space re-sampling.
A generative defense mechanism using High-Res CTGANs to expose and break adversarial attacks on XAI models (LIME & SHAP).
An intelligent auditing platform designed to detect and mitigate hidden biases in automated recruitment systems.
A comprehensive fairness-aware music recommendation system that detects and mitigates bias in collaborative filtering algorithms. Features interactive Streamlit demo, bias detection metrics, and multiple fairness-aware re-ranking approaches including MMR and constrained optimization.
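A minimal sketch of the MMR (Maximal Marginal Relevance) re-ranking idea referenced above, assuming toy track IDs, relevance scores, and an artist-based similarity function (all hypothetical, not the repository's API):

```python
def mmr_rerank(candidates, relevance, similarity, k, lam=0.7):
    """Maximal Marginal Relevance: greedily pick items, trading off relevance
    against similarity to items already selected (higher lam favors relevance)."""
    selected = []
    pool = list(candidates)
    while pool and len(selected) < k:
        def score(item):
            max_sim = max((similarity(item, s) for s in selected), default=0.0)
            return lam * relevance[item] - (1 - lam) * max_sim
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected

# Toy example: tracks with artist metadata; same-artist tracks count as similar
relevance = {"t1": 0.9, "t2": 0.85, "t3": 0.8}
artist = {"t1": "x", "t2": "x", "t3": "y"}
sim = lambda a, b: 1.0 if artist[a] == artist[b] else 0.0
ranking = mmr_rerank(["t1", "t2", "t3"], relevance, sim, k=2)
```

With these weights the re-ranker picks "t3" over the slightly more relevant "t2" because "t2" shares an artist with the already-selected "t1", which is exactly the diversity-promoting behavior MMR is used for.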
Fairness-aware predictive modeling using Random Forest, Equalized Odds constraints, and fairness–performance tradeoff analysis on the UCI Adult Income dataset.
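A minimal sketch of the equalized-odds criterion used above: the metric requires true-positive and false-positive rates to match across groups, so a natural audit statistic is the largest gap in either rate (toy data, hypothetical group labels):

```python
import numpy as np

def equalized_odds_difference(y_true, y_pred, groups):
    """Max gap across groups in TPR and FPR; zero means equalized odds holds."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    def rates(g):
        m = groups == g
        tpr = np.mean(y_pred[m & (y_true == 1)] == 1)
        fpr = np.mean(y_pred[m & (y_true == 0)] == 1)
        return tpr, fpr
    tprs, fprs = zip(*(rates(g) for g in np.unique(groups)))
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))

# Toy predictions split by a hypothetical sensitive attribute
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 0])
sex    = np.array(["f", "f", "f", "f", "m", "m", "m", "m"])
gap = equalized_odds_difference(y_true, y_pred, sex)
```

Libraries such as fairlearn expose a comparable metric and reduction-based constraints for training; the sketch above only shows the audit side of the tradeoff analysis.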
A research toolkit for systematically analyzing gender bias in Large Language Model (LLM) responses to job description generation tasks.
A flexible framework for Multi-Objective Neural Architecture Search (NAS) in PyTorch. It implements and compares Quantum-Inspired (MO-QNAS) and classic Evolutionary Algorithms (GA, NSGA-II, NSGA-III) to optimize CNNs for multiple objectives like accuracy, model size, and inference time. Includes a module for post-hoc fairness evaluation.
🎵 Build a fairness-aware music recommender that balances accuracy against bias, improving diversity and equity in recommendations on the Last.fm dataset.