People use the Portkey SDK for making calls to various inference endpoints as well as Portkey's own endpoints.
There are cases in which users want to use the Portkey SDK for logging in general, not merely to log inference calls.
While this is entirely possible even today with manual log insertions, some of the common workflows that users have can be auto-instrumented.
This is a feature request to enable auto-instrumentation of the following libraries (not exhaustive):
anthropic
autogen
aws_bedrock
cerebras
chroma
cohere
crewai
dspy
embedchain
gemini
google_genai
groq
langchain
langchain_community
langchain_core
langgraph
litellm
llamaindex
milvus
mistral
ollama
openai
pinecone
pymongo
qdrant
vertexai
weaviate
Approach:
Here are the approaches I considered (rough sketches of the first two follow the comparison):
1. Monkey Patching With OpenTelemetry
   ✅ Helps in building hierarchy with span ids
2. Callback Handlers
   ➖ Have to build hierarchy manually, depending on the callback handler implementation
   ✅ Code is much simpler than monkey patching
   ❌ Need to install the library as a dependency to inherit from the CallbackHandler class
3. Just Monkey Patching
   ❌ Have to build hierarchy manually
   ✅ No need to install any external libraries
   ✅ More ownership and customizability
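To make the trade-offs concrete, here is a minimal sketch of the monkey-patching-with-OpenTelemetry approach. It assumes `opentelemetry-sdk` and `openai>=1.0` are installed; names like `patch_openai_chat` and the attribute keys are illustrative only, not part of any existing SDK.

```python
import functools

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# One-time tracer setup; a real exporter would ship spans to Portkey instead of stdout.
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
tracer = trace.get_tracer("portkey.instrumentation")


def patch_openai_chat() -> None:
    """Wrap openai's chat completion call with an OpenTelemetry span."""
    from openai.resources.chat import Completions

    original = Completions.create

    @functools.wraps(original)
    def traced_create(self, *args, **kwargs):
        # start_as_current_span maintains the parent/child hierarchy via span ids
        with tracer.start_as_current_span("openai.chat.completions.create") as span:
            span.set_attribute("llm.request.model", str(kwargs.get("model", "unknown")))
            response = original(self, *args, **kwargs)
            usage = getattr(response, "usage", None)
            if usage is not None:
                span.set_attribute("llm.usage.total_tokens", usage.total_tokens)
            return response

    Completions.create = traced_create
```

And a sketch of the callback-handler approach for LangChain, which shows the dependency and manual-hierarchy points from the comparison. It assumes `langchain-core` is installed; `PortkeyLoggingHandler` is an illustrative name.

```python
from langchain_core.callbacks import BaseCallbackHandler


class PortkeyLoggingHandler(BaseCallbackHandler):
    """Logs LLM calls; the hierarchy has to be rebuilt from run_id/parent_run_id."""

    def on_llm_start(self, serialized, prompts, *, run_id, parent_run_id=None, **kwargs):
        print(f"llm start run={run_id} parent={parent_run_id} prompts={len(prompts)}")

    def on_llm_end(self, response, *, run_id, parent_run_id=None, **kwargs):
        print(f"llm end run={run_id} parent={parent_run_id}")


# Usage: pass the handler as a callback, e.g.
#   llm.invoke("hello", config={"callbacks": [PortkeyLoggingHandler()]})
```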
Here's a comparison of the existing open-source OpenTelemetry-for-AI solutions that I considered:
Langtrace
Uses OpenTelemetry along with monkey patching. This is the most comprehensive implementation; they're capturing useful information for debugging.
OpenLit
Similar to Langtrace, but with fewer integrations.
OpenLLMetry
A pretty good implementation: they have config objects and parse them for instrumentation. I like it.
AgentOps
They don't use OpenTelemetry; they use monkey patching, plus partnerships where the instrumentation lives inside the partner libraries.
Langfuse
Trash implementation: it criss-crosses between different levels of abstraction. For example, the OpenAI tracing module is a wrapper and users are expected to import langfuse.openai.OpenAI, but when someone uses Bedrock they have to use a decorator (see the snippet below). They also have partnership implementations like Haystack, where code inside the library itself emits the instrumentation traces.
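The inconsistency looks roughly like this; the import paths are from my reading of the Langfuse (v2) docs and should be treated as an assumption:

```python
# OpenAI: drop-in wrapper client imported from the langfuse package
from langfuse.openai import OpenAI

# Bedrock (and most other providers): wrap your own call site with a decorator
from langfuse.decorators import observe

client = OpenAI()  # calls through this client are traced automatically


@observe()  # traced only because of the decorator
def call_bedrock(prompt: str):
    import boto3

    bedrock = boto3.client("bedrock-runtime")
    ...  # invoke_model call elided
```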
Things to keep in mind for implementation:
logging should be asynchronous
If a user is using the library both for instrumentation and as a client, there should be no duplicate logs (a rough sketch of both points follows)
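A minimal sketch of both points, assuming the OpenTelemetry route: `BatchSpanProcessor` exports spans from a background thread so logging never blocks the caller, and a module-level registry guards against patching (and therefore logging) the same library twice. `instrument` and `_already_patched` are illustrative names, not existing Portkey SDK APIs; deduplicating against logs that the Portkey client itself already emits would need an additional check at log time.

```python
from typing import Optional, Set

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

_already_patched: Set[str] = set()
_provider: Optional[TracerProvider] = None


def _get_provider() -> TracerProvider:
    """Create one provider whose exporter runs off the calling thread (asynchronous)."""
    global _provider
    if _provider is None:
        _provider = TracerProvider()
        # BatchSpanProcessor queues spans and flushes them on a background thread
        _provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
        trace.set_tracer_provider(_provider)
    return _provider


def instrument(library: str) -> None:
    """Apply the monkey patches for `library` exactly once to avoid duplicate logs."""
    if library in _already_patched:
        return
    _already_patched.add(library)
    _get_provider()
    # ... apply the patches for `library` here (e.g. patch_openai_chat() for "openai") ...
```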