Integrated various German tasks #97
Conversation
DATASET_NAME = None

def __init__(self, subject):
    self.DATASET_NAME = subject
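For context, a minimal sketch of how such subject-parameterized task classes are typically generated in lm-evaluation-harness-style code; the base-class and factory names below are illustrative stand-ins, not taken from this PR:

```python
class GeneralHendrycksTestDE:
    """Illustrative stand-in for the harness base task class."""

    DATASET_NAME = None  # selects the HF dataset config (the MMLU subject)

    def __init__(self, subject):
        self.DATASET_NAME = subject


def create_task(subject):
    """Build one task class per subject, mirroring the usual harness pattern."""

    class HendrycksTestDE(GeneralHendrycksTestDE):
        def __init__(self):
            super().__init__(subject)

    return HendrycksTestDE


# Usage: one generated task class per MMLU subject.
AbstractAlgebraDE = create_task("abstract_algebra")
print(AbstractAlgebraDE().DATASET_NAME)  # -> "abstract_algebra"
```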
There seems to be a problem with the dataset: even when a dataset config is specified, documents from many different subjects are returned, e.g. for ogx_hendrycksTest_de-abstract_algebra:
The following are multiple-choice questions (with answers) about the topic of abstract algebra.
Question: Which of the following lists the electromagnetic spectral regions in descending order of wavelength?
Options:
A. Ultraviolet, visible, infrared, X-ray
B. X-ray, visible, ultraviolet, infrared
C. X-ray, ultraviolet, visible, infrared
D. Infrared, visible, ultraviolet, X-ray
Answer:
This question is actually a physics question, not an abstract algebra question.
If this cannot be worked around, it probably doesn't make sense to include this benchmark before the bug is fixed in the HF dataset.
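For reference, a minimal reproduction sketch; the HF dataset path and the subject column name are assumptions, not confirmed here:

```python
from collections import Counter

from datasets import load_dataset

# "openGPT-X/hendrycks_test_de" is a hypothetical path -- substitute the
# actual HF repo behind ogx_hendrycksTest_de; "subject" is an assumed column.
ds = load_dataset("openGPT-X/hendrycks_test_de", "abstract_algebra", split="test")

# If the config were honoured, exactly one subject should appear here.
print(Counter(ds["subject"]))
```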
It indeed appears that the HF dataset builder script is faulty and supplies the entire test set across all subjects, regardless of which subject is passed to datasets.load_dataset().
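Until the builder is fixed, a possible client-side workaround (just a sketch, assuming each example carries a subject column and reusing the hypothetical path from above) would be to filter the over-inclusive split manually:

```python
from datasets import load_dataset

ds = load_dataset("openGPT-X/hendrycks_test_de", "abstract_algebra", split="test")

# Keep only rows matching the requested subject, discarding the rest of
# the over-inclusive test set ("subject" is an assumed column name).
ds = ds.filter(lambda row: row["subject"] == "abstract_algebra")
```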
I'd advocate in favour of closing this, since #99 is a superset of the tasks implemented here.
Are you sure about this? They are technically the same tasks, but I think the translations were created differently. I'd recommend implementing both variations so results are comparable to the literature.
Cherry-picked some German task implementations from https://github.com/bjoernpl/lm-evaluation-harness-de/tree/mmlu_de.
Contains Hellaswag_de, TruthfulQA_de, ARC-Challenge_de, and HendrycksTest_de.