update eval docs to use poetry #486

Open: wants to merge 1 commit into master

Conversation

@aantn (Contributor) commented Jun 6, 2025

No description provided.

@aantn aantn requested a review from nherment June 6, 2025 07:26
coderabbitai bot (Contributor) commented Jun 6, 2025

Walkthrough

The documentation was updated across multiple files to standardize on `poetry run pytest` instead of invoking `pytest` directly when running tests. Instructions now also include installing dependencies with `poetry install`. No changes were made to evaluation logic or the underlying process descriptions.

Changes

| File(s) | Change Summary |
| --- | --- |
| docs/evals-introduction.md | Updated all test-running instructions to use `poetry run pytest`; added a `poetry install` step. |
| docs/evals-reporting.md, docs/evals-writing.md | Replaced all instances of `pytest` with `poetry run pytest` in command examples and instructions. |
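The change itself is mechanical. As an illustration only (this helper is hypothetical and not part of the PR), the substitution across the docs could be sketched as:

```python
import re

def poetryize(markdown: str) -> str:
    """Prefix bare `pytest` invocations with `poetry run`.

    The negative lookbehind skips commands that are already
    prefixed, so the transformation is idempotent.
    """
    return re.sub(r"(?<!poetry run )\bpytest\b", "poetry run pytest", markdown)

doc = "Run all evaluations:\n\n```bash\npytest ./tests/llm/test_*.py\n```\n"
print(poetryize(doc))  # the command line becomes: poetry run pytest ./tests/llm/test_*.py
```

Running the function twice yields the same result, since already-prefixed commands are left untouched.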

Suggested reviewers

  • Sheeproid
  • moshemorad

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 446e2fe and 84d9e50.

📒 Files selected for processing (3)
  • docs/evals-introduction.md (5 hunks)
  • docs/evals-reporting.md (1 hunks)
  • docs/evals-writing.md (4 hunks)
🧰 Additional context used
🪛 LanguageTool
docs/evals-introduction.md

[uncategorized] ~118-~118: Did you mean: “By default,”?
Context: ...y run pytest ./tests/llm/test_*.py ``` By default the tests load and present mock files t...

(BY_DEFAULT_COMMA)


[style] ~119-~119: Consider using a synonym to be more concise.
Context: ...sed through to the live tool itself. In a lot of cases this can cause the eval to fail u...

(A_LOT_OF)


[uncategorized] ~119-~119: A comma might be missing here.
Context: ...gh to the live tool itself. In a lot of cases this can cause the eval to fail unless ...

(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)

⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: build (3.10)
  • GitHub Check: build (3.12)
  • GitHub Check: build (3.11)
  • GitHub Check: build (3.12)
🔇 Additional comments (14)
docs/evals-introduction.md (7)

109-112: Add Poetry dependency installation step
The new “Install dependencies” section correctly introduces poetry install at the start of the workflow, ensuring users set up the environment before running tests.


116-116: Prefix all test commands with Poetry
Updating the “Run all evaluations” command to poetry run pytest ensures tests run within the Poetry-managed environment.


123-124: Consistently use poetry run pytest for suites
Both test suite invocations are now correctly prefixed, aligning with the new standard.


129-129: Use Poetry for single test cases
The specific test-case command is updated as expected to maintain consistency.


156-156: Apply Poetry prefix to parallel execution
Replacing pytest -n 10 with poetry run pytest -n 10 keeps the parallel run inside the Poetry context.


165-165: Ensure live-test command uses Poetry
The live testing example now correctly uses poetry run pytest, matching the rest of the documentation.


174-174: Model comparison commands updated
The “Create Baseline” and “Test New Model” commands are now properly prefixed with Poetry for consistency.

docs/evals-writing.md (5)

71-71: Prefix mock-generation command with Poetry
Switching to ITERATIONS=100 poetry run pytest integrates mock generation into the Poetry-managed test environment.


77-77: Use Poetry for example test run
The example for running the Ask Holmes test is now correctly prefixed.


131-131: Update mock-generation snippet
The “Automatic Generation” example now runs under Poetry, ensuring consistency across instructions.


275-275: Use Poetry for verbose debug run
The debug command has been updated to poetry run pytest -v -s as expected.


279-279: Consistent Poetry usage for fresh mocks
The “Generate fresh mocks” command now executes within the Poetry context.

docs/evals-reporting.md (2)

45-45: Prefix basic Braintrust run with Poetry
Switching the Braintrust evaluation command to poetry run pytest maintains the Poetry-managed environment for reporting steps.


53-53: Update parallel Braintrust run to Poetry
The named experiment example now correctly uses poetry run pytest -n 10.



@@ -166,12 +171,12 @@ Live testing requires a Kubernetes cluster and will execute `before-test` and `a

1. **Create Baseline**: Run evaluations with a reference model
```bash
EXPERIMENT_ID=baseline_gpt4o MODEL=gpt-4o pytest -n 10 ./tests/llm/test_*
EXPERIMENT_ID=baseline_gpt4o MODEL=gpt-4o poetry run pytest -n 10 ./tests/llm/test_*
aantn (Contributor, Author) commented:
Need to verify that environment variables like EXPERIMENT_ID are still propagated to pytest this way
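For reference, an inline `VAR=value command` assignment is exported into the child process environment, and `poetry run` spawns pytest as a child process that inherits that environment. A minimal sketch of the inheritance mechanism (using a plain Python subprocess rather than Poetry itself, so this does not verify Poetry's behavior directly):

```python
import os
import subprocess
import sys

# Simulate `EXPERIMENT_ID=baseline_gpt4o poetry run pytest ...`:
# the inline assignment lands in the wrapper's environment, and the
# wrapper passes its environment on to the process it spawns.
env = dict(os.environ, EXPERIMENT_ID="baseline_gpt4o")
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['EXPERIMENT_ID'])"],
    env=env, capture_output=True, text=True,
)
print(out.stdout.strip())  # → baseline_gpt4o
```

The child process sees the variable because `subprocess.run` (like `poetry run`) forwards the parent environment unless told otherwise.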

github-actions bot commented Jun 6, 2025

Results of HolmesGPT evals

| Test suite | Test case | Status |
| --- | --- | --- |
| ask_holmes | 01_how_many_pods | ⚠️ |
| ask_holmes | 02_what_is_wrong_with_pod | |
| ask_holmes | 02_what_is_wrong_with_pod_LOKI | |
| ask_holmes | 03_what_is_the_command_to_port_forward | |
| ask_holmes | 04_related_k8s_events | |
| ask_holmes | 05_image_version | |
| ask_holmes | 06_explain_issue | |
| ask_holmes | 07_high_latency | |
| ask_holmes | 07_high_latency_LOKI | |
| ask_holmes | 08_sock_shop_frontend | |
| ask_holmes | 09_crashpod | |
| ask_holmes | 10_image_pull_backoff | |
| ask_holmes | 11_init_containers | |
| ask_holmes | 12_job_crashing | |
| ask_holmes | 12_job_crashing_CORALOGIX | |
| ask_holmes | 12_job_crashing_LOKI | |
| ask_holmes | 13_pending_node_selector | |
| ask_holmes | 14_pending_resources | |
| ask_holmes | 15_failed_readiness_probe | |
| ask_holmes | 16_failed_no_toolset_found | |
| ask_holmes | 17_oom_kill | |
| ask_holmes | 18_crash_looping_v2 | |
| ask_holmes | 19_detect_missing_app_details | |
| ask_holmes | 20_long_log_file_search | |
| ask_holmes | 20_long_log_file_search_LOKI | |
| ask_holmes | 21_job_fail_curl_no_svc_account | ⚠️ |
| ask_holmes | 22_high_latency_dbi_down | |
| ask_holmes | 23_app_error_in_current_logs | |
| ask_holmes | 23_app_error_in_current_logs_LOKI | |
| ask_holmes | 24_misconfigured_pvc | |
| ask_holmes | 25_misconfigured_ingress_class | ⚠️ |
| ask_holmes | 26_multi_container_logs | |
| ask_holmes | 27_permissions_error_no_helm_tools | |
| ask_holmes | 28_permissions_error_helm_tools_enabled | |
| ask_holmes | 29_events_from_alert_manager | |
| ask_holmes | 30_basic_promql_graph_cluster_memory | |
| ask_holmes | 31_basic_promql_graph_pod_memory | |
| ask_holmes | 32_basic_promql_graph_pod_cpu | |
| ask_holmes | 33_http_latency_graph | |
| ask_holmes | 34_memory_graph | |
| ask_holmes | 35_tempo | |
| ask_holmes | 36_argocd_find_resource | |
| ask_holmes | 37_argocd_wrong_namespace | ⚠️ |
| ask_holmes | 38_rabbitmq_split_head | |
| ask_holmes | 39_failed_toolset | |
| ask_holmes | 40_disabled_toolset | |
| ask_holmes | 41_setup_argo | |
| investigate | 01_oom_kill | |
| investigate | 02_crashloop_backoff | |
| investigate | 03_cpu_throttling | |
| investigate | 04_image_pull_backoff | |
| investigate | 05_crashpod | |
| investigate | 05_crashpod_LOKI | |
| investigate | 06_job_failure | |
| investigate | 07_job_syntax_error | |
| investigate | 08_memory_pressure | |
| investigate | 09_high_latency | |
| investigate | 10_KubeDeploymentReplicasMismatch | |
| investigate | 11_KubePodCrashLooping | |
| investigate | 12_KubePodNotReady | |
| investigate | 13_Watchdog | |
| investigate | 14_tempo | |

Legend

  • ✅ the test was successful
  • ⚠️ the test failed but is known to be flaky or expected to fail
  • ❌ the test failed and should be fixed before merging the PR
