Hi all, I recently came across this paper: https://arxiv.org/abs/2305.19187
It introduces two interesting metrics for uncertainty evaluation.
The two business questions addressed are:
- Is my uncertainty predictive of my errors? Larger uncertainties are expected to correlate with higher error rates.
- How many errors do I avoid if I reject predictions above an uncertainty cut-off? In relation to selective regression/classification/generation, the error rate is expected to decrease if high-uncertainty cases are delegated to humans (or to the dustbin).
The corresponding two metrics are quite easy to implement:
1. The AUROC(y_wrong, y_uncertainty), where y_wrong is 1 if the prediction is wrong and y_uncertainty is simply the prediction uncertainty. This directly measures the ability of the uncertainties to rank the wrong responses highest (in expectation).
2. The AUARC (Area Under the Accuracy-Rejection Curve), i.e. the area under the curve of accuracy as a function of the rejection rate, or uncertainty cut-off (a minimal sketch of both metrics is given after this list).
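Here is a minimal sketch of how the two metrics could be computed with numpy and scikit-learn. The function names are illustrative (not an existing API), and the AUARC is approximated by averaging the retained accuracy over all rejection rates:

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def uncertainty_auroc(y_wrong, y_uncertainty):
    """AUROC of the uncertainty score as a predictor of errors.

    y_wrong: 1 if the prediction is wrong, 0 otherwise.
    y_uncertainty: uncertainty score (higher = less confident).
    """
    return roc_auc_score(y_wrong, y_uncertainty)


def auarc(y_wrong, y_uncertainty):
    """Area Under the Accuracy-Rejection Curve (approximation).

    Rejects predictions from the most to the least uncertain and averages
    the accuracy of the retained predictions over all rejection rates.
    """
    y_wrong = np.asarray(y_wrong)
    order = np.argsort(-np.asarray(y_uncertainty))  # most uncertain first
    y_wrong_sorted = y_wrong[order]
    n = len(y_wrong_sorted)
    accuracies = [
        1.0 - y_wrong_sorted[n_rejected:].mean()  # accuracy on retained samples
        for n_rejected in range(n)                # rejection rate = n_rejected / n
    ]
    return float(np.mean(accuracies))
```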
Beyond these two basic metrics, the concept could be pushed further to:
• A precision-recall curve
• "Mondrianized" metrics, with an additional groups parameter allowing the analysis to be stratified by group (see the sketch after this list)
• Extensive utilities to plot diagnostic curves with plotly (much as sklearn does), enriched with additional information (e.g. the curve of a perfect/random model)
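For the "Mondrianized" variant, here is a minimal sketch of the stratification idea, assuming a hypothetical groups array aligned with the samples (the helper name is illustrative):

```python
import numpy as np


def mondrian_scores(metric, y_wrong, y_uncertainty, groups):
    """Apply an uncertainty metric separately within each group (hypothetical helper)."""
    y_wrong = np.asarray(y_wrong)
    y_uncertainty = np.asarray(y_uncertainty)
    groups = np.asarray(groups)
    return {
        group: metric(y_wrong[groups == group], y_uncertainty[groups == group])
        for group in np.unique(groups)
    }


# Example (reusing the auarc sketch above):
# scores_by_group = mondrian_scores(auarc, y_wrong, y_uncertainty, groups)
```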
I think this would elegantly complement the existing coverage_scores metrics with metrics closer to business considerations. Moreover, these metrics are almost use-case agnostic, since the user can quite easily compute y_wrong as a function of y_true and y_pred, and y_uncertainty as a function of y_pis (e.g. y_uncertainty = y_pis.sum(axis=1) for multiclass classification, which is the size of the prediction set).
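For instance, assuming y_pis is a boolean prediction-set array of shape (n_samples, n_classes) (the exact shape returned by the library may differ), the inputs to the metrics could be derived as follows, here with purely synthetic stand-ins for the model outputs:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_classes = 200, 4

# Toy stand-ins for model outputs (purely illustrative):
y_true = rng.integers(0, n_classes, size=n_samples)
y_pred = rng.integers(0, n_classes, size=n_samples)
y_pis = rng.random((n_samples, n_classes)) < 0.5   # boolean prediction sets

y_wrong = (y_pred != y_true).astype(int)  # 1 where the point prediction is wrong
y_uncertainty = y_pis.sum(axis=1)         # prediction-set size used as uncertainty

# Reusing the sketched functions above:
print(uncertainty_auroc(y_wrong, y_uncertainty), auarc(y_wrong, y_uncertainty))
```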
Happy to discuss this further!