models Relevance Evaluator
| | |
| --- | --- |
| Score range | Integer [1-5]: 1 is the lowest quality and 5 is the highest quality. |
| What is this metric? | Relevance measures how effectively a response addresses a query. It assesses the accuracy, completeness, and direct relevance of the response based solely on the given information. |
| How does it work? | The relevance metric is calculated by instructing a language model to follow the definition (in the description) and a set of grading rubrics, evaluate the user inputs, and output a score on a 5-point scale (higher means better quality). Learn more about our definition and grading rubrics. |
| When to use it? | The recommended scenario is quality assessment of question-and-answering tasks without a reference answer. |
| What does it need as input? | Query, Response |
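Since the evaluator takes a query and a response and returns an integer score from 1 to 5, a minimal sketch of invoking it through the azure-ai-evaluation Python SDK might look like the following; the endpoint, key, and deployment values are placeholders, and the exact keys in the returned dictionary can vary by SDK version:

```python
import os

from azure.ai.evaluation import RelevanceEvaluator

# Configuration for the judge model; all values here are placeholders
# read from environment variables you would set yourself.
model_config = {
    "azure_endpoint": os.environ["AZURE_OPENAI_ENDPOINT"],
    "api_key": os.environ["AZURE_OPENAI_API_KEY"],
    "azure_deployment": os.environ["AZURE_OPENAI_DEPLOYMENT"],
}

relevance_eval = RelevanceEvaluator(model_config)

# The evaluator takes a query and a response and returns a 1-5 score.
result = relevance_eval(
    query="What is the capital of France?",
    response="Paris is the capital of France.",
)
print(result)
```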
Version: 5
hiddenlayerscanned
View in Studio: https://ml.azure.com/registries/azureml/models/Relevance-Evaluator/version/5
is-promptflow: True
is-evaluator: True
show-artifact: True
_default-display-file: ./RelevanceEvaluator/relevance.prompty
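Because the asset is flagged as a promptflow evaluator, it can also be run over a whole dataset with the `evaluate()` entry point in azure-ai-evaluation. The sketch below assumes a hypothetical `data.jsonl` file with `query` and `response` columns; the column-mapping syntax follows the SDK's `${data.<column>}` convention:

```python
import os

from azure.ai.evaluation import RelevanceEvaluator, evaluate

model_config = {  # placeholder judge-model configuration
    "azure_endpoint": os.environ["AZURE_OPENAI_ENDPOINT"],
    "api_key": os.environ["AZURE_OPENAI_API_KEY"],
    "azure_deployment": os.environ["AZURE_OPENAI_DEPLOYMENT"],
}

# "data.jsonl" is a hypothetical dataset: one JSON object per line
# with "query" and "response" fields.
result = evaluate(
    data="data.jsonl",
    evaluators={"relevance": RelevanceEvaluator(model_config)},
    evaluator_config={
        "relevance": {
            "column_mapping": {
                "query": "${data.query}",
                "response": "${data.response}",
            }
        }
    },
)
print(result["metrics"])  # aggregate relevance scores across the dataset
```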