
I found these 10 Python libraries for AI explainability:

📚 SHAP (SHapley Additive exPlanations)

SHAP is a model-agnostic method that works by breaking a prediction down into the contribution of each feature and attributing a score to each one.
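The Shapley values behind SHAP can be computed exactly for small feature counts by averaging a feature's marginal contribution over all coalitions. A minimal pure-Python sketch of the idea (not the shap library's API), where absent features are masked with baseline values:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, baseline, x):
    """Exact Shapley values by enumerating all feature coalitions.

    model: callable taking a full feature list; features outside the
    coalition are replaced by their baseline values (a simple masking
    scheme, one of several used in practice).
    """
    n = len(x)

    def value(subset):
        masked = [x[i] if i in subset else baseline[i] for i in range(n)]
        return model(masked)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                # Shapley kernel weight: |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += w * (value(set(S) | {i}) - value(set(S)))
        phi.append(total)
    return phi

# Hypothetical additive model: for additive models the Shapley value
# of each feature recovers that feature's own contribution.
model = lambda f: 2 * f[0] + 3 * f[1] - f[2]
phi = shapley_values(model, baseline=[0, 0, 0], x=[1.0, 1.0, 1.0])
print(phi)  # approximately [2.0, 3.0, -1.0]
```

The enumeration is exponential in the number of features, which is why the shap library relies on approximations such as sampling and model-specific shortcuts.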

📚 LIME (Local Interpretable Model-agnostic Explanations)

LIME is another model-agnostic method. It works by approximating the model's behavior locally around a specific prediction with a simple, interpretable surrogate model.
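The core idea can be sketched in a few lines (this is the concept, not the lime library's API): sample perturbations around the instance, weight them by proximity, and fit a weighted linear surrogate whose coefficients act as local explanations.

```python
import numpy as np

def lime_style_surrogate(predict, x, n_samples=500, width=0.5, seed=0):
    """Fit a proximity-weighted linear surrogate to `predict` near x."""
    rng = np.random.default_rng(seed)
    X = x + rng.normal(scale=width, size=(n_samples, len(x)))
    y = np.array([predict(row) for row in X])
    # Proximity kernel: closer perturbations get more weight.
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * width ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([X, np.ones((n_samples, 1))]) * np.sqrt(w)[:, None]
    b = y * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[:-1]  # per-feature local slopes

# Hypothetical nonlinear model: around x = (1, 2) its gradient is
# (2 * x0, 3) = (2, 3), which the surrogate should roughly recover.
f = lambda v: v[0] ** 2 + 3 * v[1]
coef = lime_style_surrogate(f, np.array([1.0, 2.0]))
print(coef)
```

The real library adds pieces this sketch omits, such as interpretable binary representations for text and images and feature selection before the fit.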

📚 Eli5

Eli5 is a library for debugging and explaining classifiers. It provides feature importance scores, as well as "reason codes", for scikit-learn, Keras, XGBoost, LightGBM, and CatBoost models.

📚 Shapash

Shapash is a Python library that aims to make machine learning interpretable and understandable to everyone. It provides several types of visualization with explicit, human-readable labels.

📚 Anchors

Anchors is a method for generating human-interpretable if-then rules that explain the predictions of a machine learning model: an anchor is a rule that, when it holds, almost always yields the same prediction.
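The "almost always" part is measured as precision: among sampled instances that satisfy the rule, how often does the model agree with the prediction being explained? A small sketch of that estimate (the setup, model, and rules below are hypothetical, and this is not the anchor library's API):

```python
import random

def anchor_precision(model, rule, sample, target, n=2000, seed=0):
    """Estimate a candidate anchor's precision by rejection sampling:
    draw random instances, keep those satisfying the rule, and count
    how often the model reproduces the target prediction."""
    rng = random.Random(seed)
    hits = total = attempts = 0
    while total < n and attempts < 100_000:
        attempts += 1
        z = sample(rng)
        if rule(z):
            total += 1
            hits += (model(z) == target)
    return hits / total

# Hypothetical loan model: approve iff income > 50 and debt < 20.
approve = lambda z: z["income"] > 50 and z["debt"] < 20
sample = lambda rng: {"income": rng.uniform(0, 100),
                      "debt": rng.uniform(0, 40)}

# A partial rule is imprecise; adding the debt condition makes it an anchor.
p1 = anchor_precision(approve, lambda z: z["income"] > 50, sample, target=True)
p2 = anchor_precision(approve, lambda z: z["income"] > 50 and z["debt"] < 20,
                      sample, target=True)
print(p1, p2)  # roughly 0.5, exactly 1.0
```

The actual algorithm searches the space of candidate rules with a bandit-style procedure rather than evaluating hand-written ones.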

📚 XAI (eXplainable AI)

XAI is a library for explaining and visualizing the predictions of machine learning models, offering feature importance scores, decision trees, and rule-based explanations.

📚 BreakDown

BreakDown is a tool that can be used to explain the predictions of linear models. It works by decomposing the model's output into the contributions of its input features.
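The decomposition can be sketched as a sequential attribution: switch features from baseline values to observed values one at a time and record how the prediction changes at each step. A minimal illustration of the idea (not the breakDown package's API); for linear models the result does not depend on the order of the switches:

```python
def break_down(model, baseline, x):
    """Sequential additive decomposition of a single prediction."""
    current = list(baseline)
    prev = model(current)
    contributions = []
    for i, value in enumerate(x):
        current[i] = value          # switch feature i to its observed value
        now = model(current)
        contributions.append(now - prev)  # the step change is feature i's share
        prev = now
    return contributions

# Hypothetical linear model with intercept 1.
linear = lambda f: 1 + 2 * f[0] - 0.5 * f[1]
contrib = break_down(linear, baseline=[0, 0], x=[2, 4])
print(contrib)  # [4.0, -2.0]
```

By construction the contributions sum to the difference between the explained prediction and the baseline prediction, here (3.0 - 1.0) = 2.0.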

📚 interpret-text

interpret-text is a library for explaining the predictions of natural language processing models.

📚 iml (Interpretable Machine Learning)

iml currently contains the interface and I/O code from the SHAP project, and it may eventually do the same for the LIME project.

📚 aix360 (AI Explainability 360)

aix360 includes a comprehensive set of algorithms that cover different dimensions of explainability, along with metrics for evaluating explanations.

📚 OmniXAI

OmniXAI (short for Omni eXplainable AI) addresses several practical pain points in interpreting the decisions made by machine learning models.