
Additional Metrics for uncertainty evaluation #551

Open

Description

@gmartinonQM (Collaborator)

Hi all, I recently came across this paper: https://arxiv.org/abs/2305.19187

It introduces two interesting metrics for uncertainty evaluation.

The two business questions addressed are:

  1. Is my uncertainty predictive of my errors? Larger uncertainties are expected to correlate with higher error rates.
  2. How many errors do I avoid if I reject predictions above an uncertainty cut-off? In the spirit of selective regression/classification/generation, the error rate is expected to decrease if I delegate high-uncertainty cases to humans (or simply discard them).

The corresponding two metrics are quite easy to implement (a minimal sketch follows the list):
1. The AUROC(y_wrong, y_uncertainty), where y_wrong is 1 if the prediction is wrong, and y_uncertainty is simply the predicted uncertainty. This directly measures the ability of uncertainties to rank the wrong predictions highest (in expectation).
2. The AUARC (Area Under the Accuracy-Rejection Curve), i.e. the area under the curve of accuracy as a function of the rejection rate (or, equivalently, the uncertainty cut-off).
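
Here is a minimal sketch of both metrics, assuming numpy arrays as inputs; the function names are placeholders, not an existing MAPIE or sklearn API:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def uncertainty_auroc(y_wrong, y_uncertainty):
    # AUROC of the uncertainty used as a score for ranking wrong predictions:
    # 1.0 means uncertainty perfectly separates errors from correct cases.
    return roc_auc_score(y_wrong, y_uncertainty)

def auarc(y_wrong, y_uncertainty):
    # Area Under the Accuracy-Rejection Curve.
    order = np.argsort(y_uncertainty)          # most certain first
    correct = 1 - np.asarray(y_wrong)[order]   # 1 if the prediction is right
    n = len(correct)
    # Accuracy of the kept subset when keeping the k most certain
    # predictions, for k = 1..n (i.e. rejection rates from (n-1)/n down to 0).
    accuracies = np.cumsum(correct) / np.arange(1, n + 1)
    # Averaging over all cut-offs approximates the area under the curve.
    return accuracies.mean()
```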

Beyond these two basic metrics, we could push the concept further with:
• A precision-recall curve
• "Mondrianized" metrics, with an additional groups parameter allowing the analysis to be stratified by group (a sketch follows this list)
• Extensive utilities to plot diagnostic curves within plotly (in the same spirit as sklearn's plotting utilities), with additional information (e.g. the curve of a perfect/random model)
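
For the Mondrianized variant, a possible sketch; the helper name grouped_metric is hypothetical:

```python
import numpy as np

def grouped_metric(metric, y_wrong, y_uncertainty, groups):
    # Compute any of the above metrics separately within each group,
    # e.g. grouped_metric(auarc, y_wrong, y_uncertainty, groups).
    y_wrong = np.asarray(y_wrong)
    y_uncertainty = np.asarray(y_uncertainty)
    groups = np.asarray(groups)
    return {
        g: metric(y_wrong[groups == g], y_uncertainty[groups == g])
        for g in np.unique(groups)
    }
```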

I think this would elegantly complement the existing coverage_scores metrics with metrics closer to business considerations. Moreover, these metrics are almost use-case agnostic, since the user can quite easily compute y_wrong as a function of y_true and y_pred, and y_uncertainty as a function of y_pis (e.g. y_uncertainty = y_pis.sum(axis=1) for multiclass classification, i.e. the size of the prediction set), as in the toy example below.
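
To illustrate, assuming a single confidence level so that y_pis is a boolean array of shape (n_samples, n_classes) marking prediction-set membership (the data here is made up):

```python
import numpy as np

# Toy example: 3 samples, 4 classes; a True entry in y_pis means
# the class belongs to the prediction set.
y_true = np.array([0, 2, 1])
y_pred = np.array([0, 1, 1])
y_pis = np.array([[1, 0, 0, 0],
                  [1, 1, 1, 0],
                  [0, 1, 0, 0]], dtype=bool)

y_wrong = (y_pred != y_true).astype(int)  # [0, 1, 0]
y_uncertainty = y_pis.sum(axis=1)         # prediction-set sizes: [1, 3, 1]
```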

Happy to discuss this further!

Activity

Label "Backlog" (this is in the MAPIE team development backlog, yet to be prioritised) added, and label "Needs decision" (the MAPIE team is deciding what to do next) removed, on Mar 14, 2025.