Switched langchain llm callbacks to use genai utils handler #3889
Conversation
instrumentation-genai/opentelemetry-instrumentation-langchain/pyproject.toml

```toml
  "Programming Language :: Python :: 3.13",
]
dependencies = [
  "opentelemetry-api >= 1.31.0",
```
I haven't checked, but are we sure we don't need the API or semantic conventions? The previous two deps I was referring to for removal were Splunk dependencies; are we sure we don't need these?
Yes, I checked; it did not complain. And the semantic conventions are definitely not required, because they come in through utils.
Description
PR #3768 (merged) added LLM span support to the GenAI utils package.
This PR removes telemetry creation from LangChain's LLM callbacks and delegates to the GenAI utils handler instead.
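As a rough illustration of the change, here is a minimal sketch of a callback that hands the invocation to the utils handler rather than building spans and attributes itself. The util-genai names used here (get_telemetry_handler, LLMInvocation, InputMessage/OutputMessage/Text and their fields) are assumptions for illustration; see the utils package for the exact API.

```python
# Sketch of the new callback flow: the callback no longer creates spans or
# sets gen_ai.* attributes by hand; it builds an LLMInvocation and lets the
# shared GenAI utils handler emit the telemetry. Names below are assumptions
# based on this PR's description, not verified signatures.
from opentelemetry.util.genai.handler import get_telemetry_handler
from opentelemetry.util.genai.types import InputMessage, LLMInvocation, OutputMessage, Text


class _SketchCallbackHandler:
    """Hypothetical LangChain callback that defers telemetry to genai utils."""

    def __init__(self):
        self._handler = get_telemetry_handler()
        self._invocations = {}  # run_id -> LLMInvocation

    def on_chat_model_start(self, serialized, messages, *, run_id, **kwargs):
        invocation = LLMInvocation(
            request_model="gpt-3.5-turbo",  # taken from invocation params in practice
            provider="openai",              # derived from callback metadata in practice
            input_messages=[
                InputMessage(role="human", parts=[Text(content="What is the capital of France?")])
            ],
        )
        self._handler.start_llm(invocation)
        self._invocations[run_id] = invocation

    def on_llm_end(self, response, *, run_id, **kwargs):
        invocation = self._invocations.pop(run_id)
        invocation.output_messages = [
            OutputMessage(role="ai", parts=[Text(content="...")], finish_reason="stop")
        ]
        # stop_llm ends the span; the handler emits the gen_ai.* attributes
        # shown below, so the callback no longer assembles them itself.
        self._handler.stop_llm(invocation)
```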
New LLM span attributes using utils:

```
Attributes:
 -> gen_ai.operation.name: Str(chat)
 -> gen_ai.request.model: Str(gpt-3.5-turbo)
 -> gen_ai.provider.name: Str(openai)
 -> gen_ai.input.messages: Str([{"role":"system","parts":[{"content":"You are a helpful assistant!","type":"text"}]},{"role":"human","parts":[{"content":"What is the capital of France?","type":"text"}]}])
 -> gen_ai.output.messages: Str([{"role":"ai","parts":[{"content":"The capital of France is Paris.","type":"text"}],"finish_reason":"stop"}])
 -> gen_ai.request.temperature: Double(0.1)
 -> gen_ai.request.top_p: Double(0.9)
 -> gen_ai.request.frequency_penalty: Double(0.5)
 -> gen_ai.request.presence_penalty: Double(0.5)
 -> gen_ai.request.max_tokens: Int(100)
 -> gen_ai.request.stop_sequences: Slice(["\n","Human:","AI:"])
 -> gen_ai.request.seed: Int(100)
 -> gen_ai.response.finish_reasons: Slice(["stop"])
 -> gen_ai.response.model: Str(gpt-3.5-turbo-0125)
 -> gen_ai.response.id: Str(chatcmpl-Cvqdva0q8IamQKHpMRTtcjK9ixvm0)
 -> gen_ai.usage.input_tokens: Int(24)
 -> gen_ai.usage.output_tokens: Int(7)
```

Old LLM span attributes before using utils:

```
Attributes:
 -> gen_ai.operation.name: Str(chat)
 -> gen_ai.request.model: Str(gpt-3.5-turbo)
 -> gen_ai.request.top_p: Double(0.9)
 -> gen_ai.request.frequency_penalty: Double(0.5)
 -> gen_ai.request.presence_penalty: Double(0.5)
 -> gen_ai.request.stop_sequences: Slice(["\n","Human:","AI:"])
 -> gen_ai.request.seed: Int(100)
 -> gen_ai.provider.name: Str(openai)
 -> gen_ai.request.temperature: Double(0.1)
 -> gen_ai.request.max_tokens: Int(100)
 -> gen_ai.usage.input_tokens: Int(24)
 -> gen_ai.usage.output_tokens: Int(7)
 -> gen_ai.response.finish_reasons: Slice(["stop"])
 -> gen_ai.response.model: Str(gpt-3.5-turbo-0125)
 -> gen_ai.response.id: Str(chatcmpl-CvqieNzCqFnyKWXXmVGrbiORoNhh6)
```

Type of change
Please delete options that are not relevant.
How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration.
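For reference, a minimal script along these lines can reproduce the span dump above. The instrumentor class name and a valid OPENAI_API_KEY are assumed here; this is a sketch, not the project's documented test setup.

```python
# Hypothetical reproduction: instrument LangChain, run one chat call, and
# print the finished LLM span (with its gen_ai.* attributes) to stdout.
# Assumes the instrumentor is exposed as LangchainInstrumentor and that
# langchain-openai is installed. Capturing message content may additionally
# require OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true (assumption).
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

from opentelemetry.instrumentation.langchain import LangchainInstrumentor
from langchain_openai import ChatOpenAI

# Export every finished span to the console so the attributes are visible.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

LangchainInstrumentor().instrument()

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.1, top_p=0.9, max_tokens=100)
print(llm.invoke("What is the capital of France?"))
```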
Does This PR Require a Core Repo Change?
Checklist:
See contributing.md for the style guide, changelog guidelines, and more.