Notifications
Metrics available
We evaluate our models using pytrec_eval, and in the future we can extend this to include more retrieval-based metrics (see the sketch after the list below):
- NDCG (NDCG@k)
- MAP (MAP@k)
- Recall (Recall@k)
- Precision (P@k)
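
A minimal sketch of computing these metrics with pytrec_eval; the qrels and run dictionaries below are hypothetical toy data, with k = 10:

```python
import pytrec_eval

# Ground-truth relevance judgments: query_id -> {doc_id: relevance}
qrels = {"q1": {"doc1": 1, "doc3": 2}}

# Retrieval scores produced by a model: query_id -> {doc_id: score}
run = {"q1": {"doc1": 1.2, "doc2": 0.7, "doc3": 0.3}}

# Request NDCG@10, MAP@10, Recall@10 and P@10
evaluator = pytrec_eval.RelevanceEvaluator(
    qrels, {"ndcg_cut.10", "map_cut.10", "recall.10", "P.10"}
)
scores = evaluator.evaluate(run)
print(scores["q1"]["ndcg_cut_10"], scores["q1"]["P_10"])
```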
We also now include custom metrics which can be used for evaluation (a usage sketch follows the list); please refer to evaluate_custom_metrics.py:
- MRR (MRR@k)
- Capped Recall (R_cap@k)
- Hole (Hole@k): % of the top-k retrieved docs unseen by annotators
- Top-K Accuracy (Accuracy@k): % of relevant docs present in the top-k results
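
A minimal sketch of computing the custom metrics through EvaluateRetrieval.evaluate_custom; the toy qrels and results are hypothetical, and the exact metric strings accepted are an assumption, so check evaluate_custom_metrics.py for the canonical names:

```python
from beir.retrieval.evaluation import EvaluateRetrieval

# Toy annotated qrels and retrieval results (hypothetical query/doc IDs)
qrels = {"q1": {"doc1": 1, "doc3": 1}}
results = {"q1": {"doc1": 1.2, "doc2": 0.7, "doc3": 0.3}}
k_values = [1, 3]

# Metric names below ("mrr", "r_cap", "hole", "top_k_accuracy") are assumed
mrr = EvaluateRetrieval.evaluate_custom(qrels, results, k_values, metric="mrr")
r_cap = EvaluateRetrieval.evaluate_custom(qrels, results, k_values, metric="r_cap")
hole = EvaluateRetrieval.evaluate_custom(qrels, results, k_values, metric="hole")
acc = EvaluateRetrieval.evaluate_custom(qrels, results, k_values, metric="top_k_accuracy")
```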
If you use the BEIR benchmark in your research, please cite the BEIR paper: https://openreview.net/forum?id=wCu6T5xFjeJ.