chore(llmobs): implement non skeleton code for ragas faithfulness #10795
Conversation
…py into evan.li/ragas-skeleton
Benchmarks
Benchmark execution time: 2024-10-30 15:46:17
Comparing candidate commit 23753ec in PR branch
Found 0 performance improvements and 0 performance regressions! Performance is the same for 328 metrics, 2 unstable metrics.
I wasn't able to fully review the entire PR, but some initial thoughts:
- Since we're actively making calls to OpenAI and using third-party libraries on behalf of customers here, we should make sure we have really great documentation (in the form of both docstrings and public corp docs).
- Do we require OpenAI API keys?
- Not sure if it's because I'm running on fumes, but the evaluation code is surprisingly difficult to follow 😢 Would appreciate either more context about RAGAS evals (in the PR description), more readability, or more docstrings. Thanks!
…11121) This PR fixes behavior where the last span generated right before process exit was not being evaluated by the ragas faithfulness evaluator.

Previous behavior:
1. On process exit, `LLMObs.disable()` is called
2. `LLMObs.disabled` is set to `True`
3. Eval metric writers and span writers are stopped
4. Evaluator runner is stopped

The problem here is that the span & eval metric writers are stopped and `LLMObs.disabled` is set to `True` while the eval runner has not finished tracing & evaluating the span events in its buffer.

### Fix attempt 1
1. Make sure the eval runner is stopped **_BEFORE_** the eval metric writer and span writers are stopped
2. Make sure `LLMObs.disabled` is set to `True` only after the eval runner is stopped
3. Within `_stop_service`, call `self.periodic` and `self.executor.shutdown(wait=True)`

The assumption here is that `periodic` will schedule all the evaluation job threads, and calling `shutdown` with `wait=True` will block until the threads are finished. This didn't work, though; after `self.executor.map` was called to create the job threads, the app exited without the `run_and_submit_evaluation` faithfulness function even being called. Could there be some issue with scheduling these threads while the process is exiting?

### Fix attempt 2 (this PR)
Same steps as 1) and 2). For step 3, we add a `_wait_synchronously` argument to `periodic`. Setting `_wait_synchronously=True` turns `periodic` into a blocking function that synchronously runs each evaluation metric over every span. This ensures the last span events in the evaluator runner's buffer are evaluated, metrics are enqueued to the metric writer, and the eval metric writer & span writers are flushed before we disable LLM Obs.
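The blocking-drain pattern described in fix attempt 2 can be sketched as follows. This is a minimal illustration, not the actual dd-trace-py internals: the class shape, the writer interface, and everything except the `periodic`/`_wait_synchronously` names are assumptions made for the example.

```python
from concurrent import futures


class EvaluatorRunner:
    """Illustrative sketch of the shutdown ordering (not the real implementation)."""

    def __init__(self, evaluators, writer):
        self._buffer = []            # span events waiting to be evaluated
        self._evaluators = evaluators
        self._writer = writer        # hypothetical eval metric writer
        self._executor = futures.ThreadPoolExecutor(max_workers=2)

    def periodic(self, _wait_synchronously=False):
        spans, self._buffer = self._buffer, []
        if _wait_synchronously:
            # Shutdown path: evaluate every buffered span inline so the
            # last spans are not lost while the process is exiting.
            for span in spans:
                for evaluator in self._evaluators:
                    self._writer.enqueue(evaluator.evaluate(span))
            return
        # Normal path: fan evaluations out to worker threads.
        for span in spans:
            for evaluator in self._evaluators:
                self._executor.submit(
                    lambda e=evaluator, s=span: self._writer.enqueue(e.evaluate(s))
                )

    def stop(self):
        # Drain synchronously BEFORE the writers are stopped and the
        # disabled flag is flipped.
        self.periodic(_wait_synchronously=True)
        self._executor.shutdown(wait=True)
```

The key design point is that `stop()` never relies on worker threads being schedulable during interpreter shutdown; the final drain happens on the calling thread.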
## Checklist
- [x] PR author has checked that all the criteria below are met
  - The PR description includes an overview of the change
  - The PR description articulates the motivation for the change
  - The change includes tests OR the PR description describes a testing strategy
  - The PR description notes risks associated with the change, if any
  - Newly-added code is easy to change
  - The change follows the [library release note guidelines](https://ddtrace.readthedocs.io/en/stable/releasenotes.html)
  - The change includes or references documentation updates if necessary
  - Backport labels are set (if [applicable](https://ddtrace.readthedocs.io/en/latest/contributing.html#backporting))

## Reviewer Checklist
- [ ] Reviewer has checked that all the criteria below are met
  - Title is accurate
  - All changes are related to the pull request's stated goal
  - Avoids breaking [API](https://ddtrace.readthedocs.io/en/stable/versioning.html#interfaces) changes
  - Testing strategy adequately addresses listed risks
  - Newly-added code is easy to change
  - Release note makes sense to a user of the library
  - If necessary, author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
  - Backport labels are set in a manner that is consistent with the [release branch maintenance policy](https://ddtrace.readthedocs.io/en/latest/contributing.html#backporting)

---------

Co-authored-by: lievan <[email protected]>
Datadog Report
Branch report: ✅ 0 Failed, 1286 Passed, 0 Skipped, 33m 34.08s Total duration (6m 45.38s time saved)
…ace-py into evan.li/ragas-faithfulness
Co-authored-by: Sam Brenner <[email protected]>
Add the following telemetry for ragas faithfulness. I leaned toward using simple count metrics since I'm not sure if `ragas_faithfulness` fits under the concept of a tracing integration.

## Metrics
- Number of init attempts: `dd.instrumentation_telemetry_data.llmobs.evaluators.init`, tagged with `state=success/failure`, `evaluator_label`
- Number of run attempts: `dd.instrumentation_telemetry_data.llmobs.evaluators.run`, tagged with `state=success/failure reason`, `evaluator_label`
- Number of sampling rule parsing failures: `dd.instrumentation_telemetry_data.llmobs.evaluators.errors`, tagged with `reason`

## Configurations
- Configuration telemetry for `_DD_LLMOBS_EVALUATOR_SAMPLING_RULES`

## Logging
- Logging ragas dependency failures
- Logging sampling rule parsing failures

Testing...

## Checklist
- [x] PR author has checked that all the criteria below are met
  - The PR description includes an overview of the change
  - The PR description articulates the motivation for the change
  - The change includes tests OR the PR description describes a testing strategy
  - The PR description notes risks associated with the change, if any
  - Newly-added code is easy to change
  - The change follows the [library release note guidelines](https://ddtrace.readthedocs.io/en/stable/releasenotes.html)
  - The change includes or references documentation updates if necessary
  - Backport labels are set (if [applicable](https://ddtrace.readthedocs.io/en/latest/contributing.html#backporting))

## Reviewer Checklist
- [ ] Reviewer has checked that all the criteria below are met
  - Title is accurate
  - All changes are related to the pull request's stated goal
  - Avoids breaking [API](https://ddtrace.readthedocs.io/en/stable/versioning.html#interfaces) changes
  - Testing strategy adequately addresses listed risks
  - Newly-added code is easy to change
  - Release note makes sense to a user of the library
  - If necessary, author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
  - Backport labels are set in a manner that is consistent with the [release branch maintenance policy](https://ddtrace.readthedocs.io/en/latest/contributing.html#backporting)

---------

Co-authored-by: lievan <[email protected]>
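The count metrics above could be emitted roughly like this. This is a hedged sketch only: the `CountMetricWriter` class and the `record_evaluator_*` helpers are stand-ins invented for illustration, not the real ddtrace telemetry API; only the metric names and tags come from the description above.

```python
# Illustrative sketch of the evaluator telemetry counters described above.
# The writer interface is a stand-in, NOT the real ddtrace telemetry writer.
class CountMetricWriter:
    def __init__(self):
        self.counts = {}

    def add_count_metric(self, namespace, name, value, tags):
        key = (namespace, name, tags)
        self.counts[key] = self.counts.get(key, 0) + value


def record_evaluator_init(writer, evaluator_label, success):
    # Maps to dd.instrumentation_telemetry_data.llmobs.evaluators.init
    writer.add_count_metric(
        namespace="llmobs",
        name="evaluators.init",
        value=1,
        tags=(("state", "success" if success else "failure"),
              ("evaluator_label", evaluator_label)),
    )


def record_evaluator_run(writer, evaluator_label, state):
    # Maps to dd.instrumentation_telemetry_data.llmobs.evaluators.run
    writer.add_count_metric(
        namespace="llmobs",
        name="evaluators.run",
        value=1,
        tags=(("state", state), ("evaluator_label", evaluator_label)),
    )
```

Counts are keyed by (namespace, name, tags), so repeated init or run attempts with identical tags aggregate into a single counter rather than producing distinct data points.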
🚢 let's get it in some customer hands!
…0795) This PR adds in the non-boilerplate code for the ragas faithfulness evaluator. The majority of LOC changes are from cassettes/requirements. The main logic is in `ddtrace/llmobs/_evaluators/ragas/faithfulness.py`. There are four important features of this PR:

### 1. Extracting the inputs to a ragas faithfulness eval from a span
A span event must contain the data necessary for ragas evaluations: question, context, and answer. The evaluator tries to extract this data from the span event using the following logic:
```
question = input.prompt.variables.question OR input.messages[-1].content
context = input.prompt.variables.context
answer = output.messages[-1].content
```
**Relevant tests:**
- `test_ragas_faithfulness_submits_evaluation...`
- `test_ragas_faithfulness_returns_none_if_inputs_extraction_fails`

### 2. Ragas faithfulness implementation
See the [evaluate](https://github.com/DataDog/dd-trace-py/blob/a26b4ab514db35d6135bef510be13dc3d8dc4d52/ddtrace/llmobs/_evaluators/ragas/faithfulness.py#L110) function for the underlying Ragas faithfulness implementation. It roughly follows the [original](https://github.com/explodinggradients/ragas/blob/89115bfc9250d5335d8f8ab328f065021740d931/src/ragas/metrics/_faithfulness.py#L278C1-L279C1) source implementation in the ragas framework.

**Relevant tests:**
- `test_ragas_faithfulness_submits_evaluation...`

### 3. Tracing RAGAS
Tracing RAGAS is a requirement for faithfulness; otherwise a user's ml app will be polluted by a bunch of auto-instrumented langchain spans.
- The `ml_app` of ragas traces should be `dd-ragas-{original ml app name}`.
- All ragas traces are marked with a `runner.integration:ragas` tag. This tells us that these traces are evaluation traces from the ragas integration.

We can tell it's a ragas span by looking at the `ml_app` of the span at trace processing time. We also use this to safeguard against infinite eval loops (enqueuing an llm span generated from an evaluation to the evaluator runner).

**Relevant tests:**
- `test_ragas_faithfulness_emits_traces`
- `test_llmobs_with_evaluator_runner_does_not_enqueue_evaluation_spans`

### 4. RAGAS Evaluator Setup
- Ragas dependencies (ragas, langchain) are only required if the ragas faithfulness evaluator is configured.
- The ragas evaluator should also always use the most up-to-date faithfulness instance from the ragas library itself, to allow a user to customize the LLMs & prompts for faithfulness.
- If an LLM is not set by the user, we use the default LLM given to us by ragas's `llm_factory` method.

**Relevant tests:**
- `test_ragas_faithfulness_disabled_if_dependencies_not_present`
- `test_ragas_evaluator_init`
- `test_ragas_faithfulness_has_modified_faithfulness_instance`

## Checklist
- [x] PR author has checked that all the criteria below are met
  - The PR description includes an overview of the change
  - The PR description articulates the motivation for the change
  - The change includes tests OR the PR description describes a testing strategy
  - The PR description notes risks associated with the change, if any
  - Newly-added code is easy to change
  - The change follows the [library release note guidelines](https://ddtrace.readthedocs.io/en/stable/releasenotes.html)
  - The change includes or references documentation updates if necessary
  - Backport labels are set (if [applicable](https://ddtrace.readthedocs.io/en/latest/contributing.html#backporting))

## Reviewer Checklist
- [x] Reviewer has checked that all the criteria below are met
  - Title is accurate
  - All changes are related to the pull request's stated goal
  - Avoids breaking [API](https://ddtrace.readthedocs.io/en/stable/versioning.html#interfaces) changes
  - Testing strategy adequately addresses listed risks
  - Newly-added code is easy to change
  - Release note makes sense to a user of the library
  - If necessary, author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
  - Backport labels are set in a manner that is consistent with the [release branch maintenance policy](https://ddtrace.readthedocs.io/en/latest/contributing.html#backporting)

---------

Co-authored-by: lievan <[email protected]>
Co-authored-by: Sam Brenner <[email protected]>
Co-authored-by: kyle <[email protected]>

(cherry picked from commit 2378740)
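The extraction logic from section 1 can be sketched as a small helper. This is an illustration only: the function name and the exact span-event dictionary shape are assumptions for the example, not the real dd-trace-py code; the fallback order (prompt variables first, then the last input message) follows the pseudocode in the description.

```python
def extract_faithfulness_inputs(span_event):
    """Sketch of extracting question/context/answer from an LLM Obs span event.

    Returns None if any required input is missing, mirroring the
    "returns none if inputs extraction fails" behavior described above.
    """
    meta = span_event.get("meta", {})
    input_ = meta.get("input", {})
    output = meta.get("output", {})
    variables = (input_.get("prompt") or {}).get("variables", {})

    # question = input.prompt.variables.question OR input.messages[-1].content
    question = variables.get("question")
    if question is None:
        in_messages = input_.get("messages") or []
        if in_messages:
            question = in_messages[-1].get("content")

    # context = input.prompt.variables.context
    context = variables.get("context")

    # answer = output.messages[-1].content
    answer = None
    out_messages = output.get("messages") or []
    if out_messages:
        answer = out_messages[-1].get("content")

    if not (question and context and answer):
        return None
    return {"question": question, "context": context, "answer": answer}
```

A usage note: because the question falls back to the last input message, spans instrumented without a structured prompt can still be evaluated, but the context has no fallback, so spans without prompt variables are skipped.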