Labels: enhancement (Improvements to existing functionality)
Description
Where we are today
To improve interaction with the library and adoption by users and operators in the process industries, it is important that models are interpretable. At present, the interpretability functionality provided by BibMon is limited to the sklearnRegressor class and relies solely on feature importances.
Proposed enhancement
We propose the implementation of advanced interpretability techniques such as LIME (Local Interpretable Model-agnostic Explanations) (Ribeiro et al., 2016) and SHAP (SHapley Additive exPlanations) (Lundberg and Lee, 2017).
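As a rough illustration of the LIME side, the sketch below explains a single prediction with lime's `LimeTabularExplainer`. It assumes a fitted regression-style model exposing a `predict` method that returns a 1-D NumPy array, plus access to the training data and feature names; the helper name `explain_prediction_lime` is hypothetical and not part of BibMon's current API.

```python
# Minimal LIME sketch (assumption: `model.predict(X)` accepts a 2-D NumPy
# array and returns a 1-D array of predictions; helper name is hypothetical).
import numpy as np
from lime.lime_tabular import LimeTabularExplainer

def explain_prediction_lime(model, X_train, x_row, feature_names, num_features=5):
    """Return LIME feature weights for a single observation."""
    explainer = LimeTabularExplainer(
        training_data=np.asarray(X_train),
        feature_names=feature_names,
        mode="regression",
    )
    explanation = explainer.explain_instance(
        np.asarray(x_row),
        model.predict,            # model-agnostic: only the predict function is used
        num_features=num_features,
    )
    # List of (feature condition, local weight) pairs, e.g. [("T1 <= 35.2", 0.8), ...]
    return explanation.as_list()
```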
Implementation
Ideally, these functionalities should be implemented in files such as _generic_model.py or _bibmon_tools.py, so that the new interpretability techniques are accessible to all models within the library.
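For example, a model-agnostic SHAP helper along these lines could live in _bibmon_tools.py. This is only a sketch under assumptions: the model is assumed to expose a `predict` method accepting NumPy arrays, and the helper name `explain_model_shap` is hypothetical, not an existing BibMon function.

```python
# Minimal SHAP sketch using the model-agnostic KernelExplainer
# (assumption: `model.predict(X)` works on NumPy arrays; helper name is hypothetical).
import numpy as np
import shap

def explain_model_shap(model, X_background, X_explain, n_background=100):
    """Return model-agnostic SHAP values for the observations in X_explain."""
    # Subsample the background data to keep KernelExplainer tractable.
    background = shap.sample(np.asarray(X_background), n_background)
    explainer = shap.KernelExplainer(model.predict, background)
    # One row of SHAP values per observation, one column per feature.
    return explainer.shap_values(np.asarray(X_explain))
```

Because both sketches depend only on a generic predict function, placing them in a shared module (rather than inside sklearnRegressor) would let any model in the library reuse them.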