Issue in evaluate.EvaluationModule.add #670

Open
@gabriel-gubert

Description

Unlike evaluate.EvaluationModule.add_batch, evaluate.EvaluationModule.add does not work with metrics (evaluate.Metric, a subclass of evaluate.EvaluationModule) whose features (evaluate.MetricInfo.features) are a List[datasets.Features] instead of a single datasets.Features. A plain list has no encode_example method, which line 533 of evaluate/src/evaluate/module.py calls unconditionally, so the call fails with a ValueError whose message does not reflect the actual cause:

ValueError: Predictions and/or references don't match the expected format.
Expected format:
Feature option 0: {'predictions': Value(dtype='string', id='sequence'), 'references': Sequence(feature=Value(dtype='string', id='sequence'), length=-1, id='references')}
Feature option 1: {'predictions': Value(dtype='string', id='sequence'), 'references': Value(dtype='string', id='sequence')},
Input predictions: ['void hello_world()'], Input references: ['void hello_world()']

The evaluate.EvaluationModule.add method should instead use the selected_feature_format field of evaluate.EvaluationModule, as evaluate.EvaluationModule.add_batch already does at line 486 of evaluate/src/evaluate/module.py.
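To illustrate the failure mode without depending on the evaluate library, here is a minimal, self-contained sketch. The Features stand-in, add_buggy, and add_fixed are hypothetical names that only mimic the relevant logic: the buggy path calls encode_example directly on the declared features (which may be a list of options), while the fixed path uses a single, already-selected format, as add_batch does.

```python
class Features(dict):
    """Hypothetical stand-in for datasets.Features: a single-option
    feature spec that can encode one example."""

    def encode_example(self, example):
        # Keep only the declared keys, mimicking schema-based encoding.
        return {key: example[key] for key in self}


def add_buggy(features, example):
    # Mirrors the problematic call: assumes `features` is a single
    # Features object, but modules with several feature options store
    # a plain list here, and a list has no encode_example method.
    return features.encode_example(example)


def add_fixed(selected_feature_format, example):
    # Mirrors the add_batch approach: encode against the feature
    # format that was already selected for this module instance.
    return selected_feature_format.encode_example(example)


# A module declaring two feature options, as in the error message above.
multi_option = [
    Features(predictions=str, references=str),
    Features(predictions=str),
]

example = {"predictions": "void hello_world()",
           "references": "void hello_world()"}

try:
    add_buggy(multi_option, example)
except AttributeError as err:
    print("buggy add fails:", err)

print("fixed add:", add_fixed(multi_option[0], example))
```

In the real module, the buggy path surfaces as the misleading ValueError quoted above rather than an AttributeError, because the AttributeError is caught and re-raised as a format mismatch.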
