DrWhy
is a collection of tools for Explainable AI (XAI). It is based on shared principles and a simple grammar for the exploration, explanation, and visualisation of predictive models. The unified grammar behind the DrWhy universe is described in the book Predictive Models: Visual Exploration, Explanation and Debugging.
Please note that DrWhy is under rapid development and is still maturing. If you are looking for a stable solution, use the mature DALEX package.
Tools that are useful across the model lifecycle.
Data screening is an important first step of any statistical analysis. Here are tools that can help with this process.
- dataMaid A suite of checks for identification of potential errors in a data frame as part of the data screening process
- ggplot2 System for declaratively creating graphics, based on The Grammar of Graphics.
- Model-agnostic variable importance scores. Surrogate learning: train an elastic model and measure feature importance in that model. See DALEX and Model Class Reliance (MCR)
- vip Variable importance plots
- SAFE Surrogate learning: train an elastic model and extract feature transformations
- xspliner Uses surrogate black boxes to train interpretable spline-based additive models
- factorMerger A set of tools for merging factor levels (paper)
- ingredients A set of tools for model-level feature effects and feature importance
- auditor Model verification, validation, and error analysis (vignette)
- DALEX Descriptive mAchine Learning EXplanations
- iml Interpretable machine learning R package
- randomForestExplainer A set of tools to understand what is happening inside a Random Forest
- survxai Explanations for survival models (paper)
- breakDown, pyBreakDown and breakDown2 Model-agnostic explainers for individual predictions (with interactions)
- ceterisParibus, pyCeterisParibus, ceterisParibusD3 and ceterisParibus2 Ceteris Paribus Plots (What-If plots) for explanations of a single observation
- localModel and live LIME-like explanations with interpretable features based on Ceteris Paribus curves.
- lime Local Interpretable Model-Agnostic Explanations (an R port of the original Python package)
- shapper An R wrapper for the SHAP Python library
- modelDown Generates a static website with HTML summaries for predictive models
- drifter Concept Drift and Concept Shift Detection for Predictive Models
- archivist A set of tools for archiving datasets and plots (paper)
DrWhy works on fully trained predictive models; the models themselves can be created with any tool.
DrWhy uses the DALEX2 package to wrap a model with the additional metadata required for explanations, such as validation data and a predict function.
Explainers for predictive models can be created with model-agnostic or model-specific functions implemented in various packages.
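The wrapping step can be sketched as follows. This is a minimal illustration using the stable DALEX package's `explain()` function (DALEX2 exposes a similar interface); the random forest model and the `apartments` data shipped with DALEX are illustrative choices, not requirements:

```r
# Minimal sketch: wrapping a fitted model in an explainer object.
library(DALEX)
library(randomForest)

# Train a model with any tool, e.g. a random forest on the apartments data
model <- randomForest(m2.price ~ ., data = apartments)

# Wrap the model together with validation data, the true target values,
# and a label; DALEX supplies a default predict function for common models
explainer <- explain(model,
                     data  = apartments_test,
                     y     = apartments_test$m2.price,
                     label = "rf")

# The explainer can now be passed to model-agnostic tools,
# e.g. feature importance from the ingredients package:
# imp <- ingredients::feature_importance(explainer)
```

Because every tool in the collection consumes the same explainer object, the model-training step stays fully decoupled from the explanation step.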