fix(ops): add streaming metrics and LLM span for agent-chat traces #28320
Conversation
Summary of Changes (gemini-code-assist)

Hello @minimAluminiumalism, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the observability and tracing capabilities for agent-chat interactions, particularly for streaming LLM responses. It introduces mechanisms to track and persist key streaming performance metrics such as time-to-first-token and time-to-generate, while also providing more granular LLM-specific tracing data. These improvements allow for better performance analysis and debugging of LLM-powered agent conversations.
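To make the streaming metrics concrete, here is a minimal, self-contained sketch of how time-to-first-token and time-to-generate can be measured while consuming a streamed response. This is illustrative only, assuming a plain iterator of text chunks; the function and field names are not taken from the Dify codebase.

```python
# Illustrative sketch only -- names and structure are assumptions, not Dify's code.
import time
from collections.abc import Iterable


def consume_stream(chunks: Iterable[str]) -> dict:
    """Consume a streamed LLM response and record basic streaming metrics."""
    start = time.perf_counter()
    first_token_time: float | None = None
    text = ""

    for chunk in chunks:
        if first_token_time is None:
            # Time-to-first-token: delay between sending the request
            # and receiving the first streamed chunk.
            first_token_time = time.perf_counter() - start
        text += chunk

    total_latency = time.perf_counter() - start
    # Time-to-generate: time spent producing the rest of the output
    # after the first token arrived.
    time_to_generate = total_latency - (first_token_time or 0.0)
    return {
        "text": text,
        "first_token_time": first_token_time,
        "time_to_generate": time_to_generate,
        "latency": total_latency,
    }
```

In a tracing backend, values like these are typically attached to the LLM span alongside token usage so that a slow first token can be distinguished from slow overall generation.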
Code Review
This pull request effectively adds streaming metrics and a dedicated LLM span for agent-chat traces, which will improve observability. The changes are well-implemented across the different files. I've provided a couple of suggestions to improve code readability and simplify logic. Overall, great work on this enhancement.
The first inline comment is on this block:

```python
self._task_state.llm_result.usage.latency = message.provider_response_latency

# Add streaming metrics to usage if available
if self._task_state.is_streaming_response and self._task_state.first_token_time:
```
The condition self._task_state.is_streaming_response and self._task_state.first_token_time is redundant. Based on the logic in _process_stream_response, is_streaming_response is always True when first_token_time is set. You can simplify this condition to make the code more concise.
Suggested change:

```diff
- if self._task_state.is_streaming_response and self._task_state.first_token_time:
+ if self._task_state.first_token_time:
```
@minimAluminiumalism replied: I prefer to keep the explicit check because the dual condition provides better protection.
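For context, the invariant the reviewer relies on, and that the explicit check guards, can be pictured roughly as below. This is a hypothetical sketch, not the actual _process_stream_response; only the attribute names come from the diff.

```python
# Hypothetical sketch of the invariant described above: first_token_time is
# only ever assigned on the streaming code path, so a set first_token_time
# implies is_streaming_response is True.
import time


class TaskState:
    def __init__(self) -> None:
        self.is_streaming_response = False
        self.first_token_time: float | None = None


def process_stream_response(state: TaskState, request_start: float) -> None:
    # Entering the streaming path marks the response as streaming...
    state.is_streaming_response = True
    # ...and only then is the first-token timestamp recorded.
    if state.first_token_time is None:
        state.first_token_time = time.perf_counter() - request_start
```

Under that invariant the simplified condition is equivalent today; the dual check only matters if a future code path ever sets first_token_time outside of streaming.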
The second inline comment is on the model metadata extraction:

```python
model_provider = trace_metadata.get("ls_provider") or (
    message_data.get("model_provider", "") if isinstance(message_data, dict) else ""
)
model_name = trace_metadata.get("ls_model_name") or (
    message_data.get("model_id", "") if isinstance(message_data, dict) else ""
)
```
The logic for extracting model_provider and model_name is a bit dense and could be refactored for better readability. Also, there's a small typo in the comment on line 234 (a stray backtick after metadata should be removed). Consider breaking the logic down into more explicit steps.
Suggested change:

```diff
- model_provider = trace_metadata.get("ls_provider") or (
-     message_data.get("model_provider", "") if isinstance(message_data, dict) else ""
- )
- model_name = trace_metadata.get("ls_model_name") or (
-     message_data.get("model_id", "") if isinstance(message_data, dict) else ""
- )
+ model_provider = trace_metadata.get("ls_provider", "")
+ if not model_provider and isinstance(message_data, dict):
+     model_provider = message_data.get("model_provider", "")
+ model_name = trace_metadata.get("ls_model_name", "")
+ if not model_name and isinstance(message_data, dict):
+     model_name = message_data.get("model_id", "")
```
@minimAluminiumalism replied: Since this is purely a style preference, I'd prefer to keep it as-is unless there's a functional issue.
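For what it's worth, the two styles produce the same result for the usual inputs. A quick check with made-up data (hypothetical values, not real trace metadata) illustrates the fallback order, trace metadata first, then message data:

```python
# Hypothetical example data; not taken from a real trace.
trace_metadata = {"ls_provider": "", "ls_model_name": "gpt-4o"}
message_data = {"model_provider": "openai", "model_id": "gpt-4o-mini"}

# Current style: short-circuit with `or`.
model_provider = trace_metadata.get("ls_provider") or (
    message_data.get("model_provider", "") if isinstance(message_data, dict) else ""
)

# Suggested style: explicit fallback steps.
model_name = trace_metadata.get("ls_model_name", "")
if not model_name and isinstance(message_data, dict):
    model_name = message_data.get("model_id", "")

assert model_provider == "openai"  # empty ls_provider falls back to message_data
assert model_name == "gpt-4o"      # ls_model_name present, so no fallback
```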
force-pushed from 54ff0cd to 6e87f22
force-pushed from b768471 to 3d27bdc
Important
Fixes #<issue number>.

Summary

Fix #28319

This PR contains the following changes.

Screenshots

Checklist

dev/reformat (backend) and cd web && npx lint-staged (frontend) to appease the lint gods