
add evaluation structure output compliance #2554


Open
wants to merge 2 commits into base: main

Conversation

Vikaspal8923
Contributor

Added a new evaluation metric, "Structured Output Compliance", to the frontend, enabling LLM-as-a-judge scoring from the UI (Online Evaluation tab) as well as from the Python SDK.

Resolves #2528
/claim #2528
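
For context, a minimal sketch of how this metric might be invoked through the Python SDK; the class name `StructuredOutputCompliance` and its exact signature are assumptions inferred from the PR description, not the confirmed API:

```python
# Hypothetical usage sketch -- class name and arguments are assumptions
# based on the PR description, not the final merged API.
from opik.evaluation.metrics import StructuredOutputCompliance

metric = StructuredOutputCompliance()

# Score whether the model output conforms to the expected structure
# (e.g. valid JSON matching the requested format).
result = metric.score(
    output='{"name": "Alice", "age": 30}',
)
print(result.value)   # e.g. 1.0 for compliant, 0.0 for non-compliant
print(result.reason)  # judge-model explanation for the score
```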

Video:

Screen.Recording.2025-06-23.184814.mp4

@Vikaspal8923 Vikaspal8923 requested a review from a team as a code owner June 23, 2025 14:40
@aadereiko aadereiko self-assigned this Jun 25, 2025
@Vikaspal8923
Contributor Author

@vincentkoc Could I get a review on this if there is no update on the first PR raised by the other contributor?

@andrescrz
Collaborator

Hi @Vikaspal8923

Apologies for the delay in reviewing this PR. Could you please resolve the current conflicts? I’ll make sure someone reviews it as soon as possible once the conflicts are addressed.

Thank you for your patience!

Development

Successfully merging this pull request may close these issues.

[FR]: New Evaluation Metric "Structured Output Compliance"