Releases: Scale3-Labs/langtrace
3.0.15
What's Changed
- Added emojis to README.md for better readability by @NishantRana07 in #312
- Added a contributing.md by @heysagnik in #315
- Error Handling: Always check if the response is OK (i.e., response.st… by @jatin9823 in #314
- Support for OTLP gRPC protocol (see the sketch after this list)
- Bugfixes
- Release 3.0.15 by @karthikscale3 in #317
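
For the OTLP gRPC support above, a minimal sketch of pointing a standard OpenTelemetry gRPC span exporter at a Langtrace-style collector. The endpoint URL and the x-api-key header name are placeholders assumed for illustration; check the Langtrace docs for the actual ingest address and authentication header.

```python
# Hedged sketch: export spans over OTLP gRPC using the stock OpenTelemetry exporter.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    endpoint="https://your-langtrace-host:4317",   # placeholder gRPC endpoint
    headers=(("x-api-key", "YOUR_API_KEY"),),      # placeholder auth header
)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
```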
 
New Contributors
- @NishantRana07 made their first contribution in #312
- @heysagnik made their first contribution in #315
- @jatin9823 made their first contribution in #314
 
Full Changelog: 3.0.14...3.0.15
3.0.14
What's Changed
- Support for tracing xAI models (https://x.ai/) (see the sketch after this list)
- Updates to README.md
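
Since xAI exposes an OpenAI-compatible API, a traced call can look like the sketch below. This assumes Langtrace's OpenAI instrumentation picks up calls made through the stock OpenAI client pointed at xAI's base URL; the base URL, model name, and keys are illustrative.

```python
# Hedged sketch: trace an xAI chat completion via the OpenAI-compatible endpoint.
from langtrace_python_sdk import langtrace

langtrace.init(api_key="YOUR_LANGTRACE_API_KEY")  # initialize before using the LLM client

from openai import OpenAI

client = OpenAI(
    base_url="https://api.x.ai/v1",   # xAI's OpenAI-compatible endpoint
    api_key="YOUR_XAI_API_KEY",
)
response = client.chat.completions.create(
    model="grok-beta",                # illustrative model name
    messages=[{"role": "user", "content": "Hello from Langtrace!"}],
)
print(response.choices[0].message.content)
```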
 
Full Changelog: 3.0.13...3.0.14
3.0.13
3.0.12
What's Changed
- OTLP Protobuf exporter support (see the sketch after this list)
- CSV uploads for datasets
- Docs: Typo Fix by @Dnaynu in #294
- Release 3.0.12 by @karthikscale3 in #300
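
For the Protobuf exporter support above, one way to select OTLP over http/protobuf is through the standard OpenTelemetry environment variables, assuming the exporter is configured from the environment (for example via OpenTelemetry's auto-configuration). The endpoint and header values are placeholders.

```python
# Hedged sketch: pick the http/protobuf OTLP flavor via standard OTel env vars,
# set before the exporter/SDK reads its configuration.
import os

os.environ["OTEL_EXPORTER_OTLP_PROTOCOL"] = "http/protobuf"
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://your-langtrace-host"   # placeholder
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "x-api-key=YOUR_API_KEY"         # placeholder
```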
 
New Contributors
- @Dnaynu made their first contribution in #294
Full Changelog: 3.0.11...3.0.12
3.0.11
What's Changed
- Release 3.0.11 by @karthikscale3 in #293
- Added support for LiteLLM tracking (see the sketch after this list)
- Added cost tracking for the Google Gemini family of models
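
For the LiteLLM and Gemini items above, a minimal sketch of a traced LiteLLM call, assuming Langtrace's automatic instrumentation hooks into litellm.completion once the SDK is initialized; the model string and keys are illustrative.

```python
# Hedged sketch: initialize Langtrace, then make a LiteLLM call that should be traced
# (and, for Gemini-family models, have its cost tracked).
from langtrace_python_sdk import langtrace

langtrace.init(api_key="YOUR_LANGTRACE_API_KEY")  # initialize before using the LLM client

import litellm

response = litellm.completion(
    model="gemini/gemini-1.5-flash",  # illustrative Gemini model routed through LiteLLM
    messages=[{"role": "user", "content": "Summarize OpenTelemetry in one sentence."}],
)
print(response.choices[0].message.content)
```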
 
Full Changelog: 3.0.10...3.0.11
3.0.10
3.0.9
3.0.8
What's Changed
This release includes the following changes:
- DSPy project type
- Experiment tracking for DSPy experiments. Note that, for experiments to show up, pass the following additional attributes using inject_additional_attributes so that Langtrace knows you are running an experiment (see the example below):
  - (Required) experiment - Experiment name. Ex: experiment 1.
  - (Optional) description - Some useful description about the experiment.
  - (Optional) run_id - When you want to associate traces with a specific run, pass a unique run ID. This is useful when you are running Evaluate() as part of your experiment, where the traces specific to the Evaluate() call will appear as an individual entry.
- The Eval Chart will appear when you run Evaluate(). Note: currently the supported score range is 0 to 100, so scores outside this range could cause some UI issues.
- By default, checkpoints are traced for DSPy pipelines. If you would like to disable this, set the env var TRACE_DSPY_CHECKPOINT=false in your application code (a short sketch appears after the example below).

```python
from langtrace_python_sdk import inject_additional_attributes

predictor = inject_additional_attributes(lambda: compiled_rag(my_question), {'experiment': 'experiment 1', 'description': 'some useful description', 'run_id': 'run_1'})
```

- Bug fixes and query performance improvements
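
As mentioned above, checkpoint tracing can be turned off with the TRACE_DSPY_CHECKPOINT environment variable. A minimal sketch, assuming the SDK reads the variable from the process environment when it is imported and initialized:

```python
# Hedged sketch: disable DSPy checkpoint tracing before the Langtrace SDK loads.
import os

os.environ["TRACE_DSPY_CHECKPOINT"] = "false"

from langtrace_python_sdk import langtrace  # imported after the env var is set

langtrace.init(api_key="YOUR_LANGTRACE_API_KEY")
```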
 
Full Changelog: 3.0.7...3.0.8
3.0.7
What's Changed
- Minor bugfix for live prompts by @dylanzuber-scale3 in #278
 
Full Changelog: 3.0.6...3.0.7
3.0.6
What's Changed
- Cost tracking and playground support for OpenAI's latest models, o1-preview and o1-mini (see the sketch after this list)
- Release 3.0.6 by @karthikscale3 in #276
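
For the o1 cost tracking above, a minimal sketch of a traced o1-mini call whose token usage feeds cost tracking, assuming standard Langtrace initialization; the keys and prompt are placeholders.

```python
# Hedged sketch: a traced chat completion against o1-mini.
from langtrace_python_sdk import langtrace

langtrace.init(api_key="YOUR_LANGTRACE_API_KEY")  # initialize before using the LLM client

from openai import OpenAI

client = OpenAI(api_key="YOUR_OPENAI_API_KEY")
response = client.chat.completions.create(
    model="o1-mini",
    messages=[{"role": "user", "content": "Explain tracing in one sentence."}],
)
print(response.usage)  # prompt/completion token counts that cost tracking relies on
```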
 
Full Changelog: 3.0.5...3.0.6