Add continual learning to any LLM agent with one line of code. This SDK enables agents to learn from every conversation and recall context across sessions, making your agents truly stateful.
```python
from openai import OpenAI
from agentic_learning import learning

client = OpenAI()

with learning(agent="my_agent"):
    response = client.chat.completions.create(...)  # LLM is now stateful!
```

Install the SDK:

```bash
# Python
pip install agentic-learning

# TypeScript
npm install @letta-ai/agentic-learning
```

Set your API keys:

```bash
export OPENAI_API_KEY="your-openai-key"
export LETTA_API_KEY="your-letta-key"
```

Python quickstart:

```python
from openai import OpenAI
from agentic_learning import learning
client = OpenAI()
# Add continual learning with one line
with learning(agent="my_assistant"):
    # All LLM calls inside this block have learning enabled
    response = client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": "My name is Alice"}]
    )

    # Agent remembers prior context
    response = client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": "What's my name?"}]
    )
    # Returns: "Your name is Alice"
```

That's it - this SDK automatically:
- ✅ Learns from every conversation
- ✅ Recalls relevant context when needed
- ✅ Remembers across sessions
- ✅ Works with your existing LLM code
For TypeScript, set your API keys the same way:

```bash
export OPENAI_API_KEY="your-openai-key"
export LETTA_API_KEY="your-letta-key"
```

TypeScript quickstart:

```typescript
import { learning } from '@letta-ai/agentic-learning';
import OpenAI from 'openai';
const client = new OpenAI();
// Add continual learning with one line
await learning({ agent: "my_assistant" }, async () => {
  // All LLM calls inside this block have learning enabled
  const response = await client.chat.completions.create({
    model: "gpt-5",
    messages: [{ role: "user", content: "My name is Alice" }]
  });

  // Agent remembers prior context
  const response2 = await client.chat.completions.create({
    model: "gpt-5",
    messages: [{ role: "user", content: "What's my name?" }]
  });
  // Returns: "Your name is Alice"
});
```

Supported providers:

| Provider | Package | Status | Py Example | TS Example |
|---|---|---|---|---|
| Anthropic | `anthropic` | ✅ Stable | anthropic_example.py | anthropic_example.ts |
| Claude Agent SDK | `@anthropic-ai/claude-agent-sdk` | ✅ Stable | claude_example.py | claude_example.ts |
| OpenAI Chat Completions | `openai` | ✅ Stable | openai_example.py | openai_example.ts |
| OpenAI Responses API | `openai` | ✅ Stable | openai_responses_example.py | openai_responses_example.ts |
| Gemini | `google-generativeai` | ✅ Stable | gemini_example.py | gemini_example.ts |
| Vercel AI SDK | `ai` | ✅ Stable | N/A (TS only) | vercel_example.ts |
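The wrapping pattern is the same for every provider in the table. For example, with the Anthropic SDK (a minimal sketch; see anthropic_example.py in the examples directory for the maintained version):

```python
import anthropic
from agentic_learning import learning

client = anthropic.Anthropic()

# Illustrative sketch: wrap an Anthropic call the same way as the OpenAI
# quickstart above; the model name is just an example
with learning(agent="my_assistant"):
    response = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=[{"role": "user", "content": "My name is Alice"}],
    )
```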
Create an issue to request support for another provider, or contribute a PR.
Wrap your LLM calls in a `learning()` context to enable continual learning:
```python
with learning(agent="agent_name"):
    # All LLM calls inside this block have learning enabled
    response = llm_client.generate(...)
```

Note: Learning is scoped by agent name. Each agent learns independently, so `agent="sales_bot"` and `agent="support_bot"` maintain separate memories.
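To illustrate the scoping (a sketch reusing the OpenAI client from the quickstart; the agent names are arbitrary):

```python
# Each agent name maps to its own independent memory
with learning(agent="sales_bot"):
    client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": "Our discount code is SAVE20"}]
    )

with learning(agent="support_bot"):
    # support_bot has no access to sales_bot's memory, so it cannot
    # recall the discount code learned above
    client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": "What's our discount code?"}]
    )
```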
The SDK automatically retrieves relevant context from past conversations:
```python
# First session
with learning(agent="sales_bot", memory=["customer"]):
    response = client.chat.completions.create(
        messages=[{"role": "user", "content": "I'm interested in Product X"}]
    )

# Later session - agent remembers any information related to "customer"
with learning(agent="sales_bot", memory=["customer"]):
    response = client.chat.completions.create(
        messages=[{"role": "user", "content": "Tell me more about that product"}]
    )
    # Agent knows you're asking about Product X
```

Store conversations without injecting context (useful for logging or background processing):
```python
with learning(agent="agent_name", capture_only=True):
    # Conversations saved for learning but not injected into prompts
    response = client.chat.completions.create(...)

# Later, list the entire conversation history
learning_client = AgenticLearning()
messages = learning_client.messages.list("agent_name")
```

Query what your agent has learned with semantic search:
```python
# Search for relevant conversations
messages = learning_client.memory.search(
    agent="agent_name",
    query="What are my project requirements?"
)
```

This SDK adds stateful memory to your existing LLM code with zero architectural changes.
Benefits:
- Drop-in integration - Works with your existing LLM provider SDK code
- Automatic memory - Relevant context retrieved and injected into prompts
- Persistent across sessions - Conversations remembered even after restarts
- Cost-effective - Only relevant context injected, reducing token usage
- Fast retrieval - Semantic search powered by Letta's optimized infrastructure
- Production-ready - Built on Letta's proven memory management platform
Architecture:
1. Wrap your code in `learning()`
2. Capture conversations automatically
3. Retrieve relevant memories
4. Respond with full context
```
┌─────────────┐
│  Your Code  │
│  learning() │
└──────┬──────┘
       │
       ▼
┌─────────────┐      ┌──────────────┐
│ Interceptor │─────▶│ Letta Server │  (Stores conversations,
│  (Inject)   │◀─────│   (Memory)   │   retrieves context)
└──────┬──────┘      └──────────────┘
       │
       ▼
┌─────────────┐
│   LLM API   │  (Sees enriched prompts)
│ OpenAI/etc  │
└─────────────┘
```
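Conceptually, the interceptor enriches the outgoing request before the provider sees it. The actual injected format is internal to the SDK; this is purely illustrative:

```python
# Hypothetical illustration of memory injection - not the SDK's actual format
original_request = {
    "model": "gpt-5",
    "messages": [{"role": "user", "content": "What's my name?"}],
}

# The interceptor prepends retrieved memories as added context, so the
# LLM API receives something shaped like:
enriched_request = {
    "model": "gpt-5",
    "messages": [
        {"role": "system", "content": "Relevant memory: the user's name is Alice."},
        {"role": "user", "content": "What's my name?"},
    ],
}
```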
The SDK provides interceptors for different integration patterns:
- API-Level Interceptors (OpenAI, Anthropic, Gemini) - Patch HTTP API methods
- Transport-Level Interceptors (Claude Agent SDK) - Patch subprocess transport layer
All interceptors share common logic through BaseAPIInterceptor, making it easy to add new providers.
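As an illustration of the API-level pattern, here is a simplified sketch of patching a client method to inject context. This is not the SDK's actual implementation, and `patch_chat_completions`, `inject_context`, and `capture_turn` are hypothetical names:

```python
import functools

def patch_chat_completions(client, inject_context, capture_turn):
    """Wrap an OpenAI-style `chat.completions.create` so every call is
    enriched with memories and recorded afterward. Illustrative only."""
    original_create = client.chat.completions.create

    @functools.wraps(original_create)
    def patched_create(*args, **kwargs):
        # 1. Retrieve relevant memories and inject them into the prompt
        kwargs["messages"] = inject_context(kwargs.get("messages", []))
        # 2. Call the real provider API
        response = original_create(*args, **kwargs)
        # 3. Capture the conversation turn for learning
        capture_turn(kwargs["messages"], response)
        return response

    client.chat.completions.create = patched_create
    return original_create  # kept so the interceptor can restore it on exit
```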
```
AgenticLearning()
├── agents                # Agent management
│   ├── create()
│   ├── update()
│   ├── retrieve()
│   ├── list()
│   ├── delete()
│   └── sleeptime         # Background memory processing
├── memory                # Memory block management
│   ├── create()
│   ├── upsert()
│   ├── retrieve()
│   ├── list()
│   ├── search()          # Semantic search
│   ├── remember()        # Store memories
│   └── context           # Memory context retrieval
└── messages              # Message history
    ├── capture()         # Save conversation turn
    ├── list()
    └── create()          # Send message to LLM
```
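Putting a few of these together (a sketch that reuses only the call signatures shown elsewhere in this README):

```python
from agentic_learning import AgenticLearning

learning_client = AgenticLearning()

# Create an agent with custom memory blocks
agent = learning_client.agents.create(
    agent="my_agent",
    memory=["human", "persona"],
    model="anthropic/claude-sonnet-4-20250514"
)

# Review everything the agent has captured
messages = learning_client.messages.list("my_agent")

# Semantic search over what the agent has learned
results = learning_client.memory.search(
    agent="my_agent",
    query="project requirements"
)
```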
Python requirements:
- Python 3.9+
- Letta API key (sign up at letta.com)
- At least one LLM SDK:
  - `openai>=1.0.0`
  - `anthropic>=0.18.0`
  - `google-generativeai>=0.3.0`
  - `@anthropic-ai/claude-agent-sdk>=0.1.0`
TypeScript requirements:
- Node.js 18+
- Letta API key (sign up at letta.com)
- At least one LLM SDK:
  - `openai>=4.0.0`
  - `@anthropic-ai/sdk>=0.30.0`
  - `@google/generative-ai>=0.21.0`
  - `@anthropic-ai/claude-agent-sdk>=0.1.0`
  - `ai>=3.0.0` (Vercel AI SDK)
For local development, you can run a Letta server locally and point the SDK at it:

```python
from agentic_learning import AgenticLearning, learning

# Connect to local server
learning_client = AgenticLearning(base_url="http://localhost:8283")

with learning(agent="my_agent", client=learning_client):
    response = client.chat.completions.create(...)
```

Run Letta locally with Docker:
```bash
docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  -e OPENAI_API_KEY="your_key" \
  letta/letta:latest
```

See the self-hosting guide for more options.
Python development setup:

```bash
# Clone repository
git clone https://github.com/letta-ai/agentic_learning_sdk.git
cd agentic_learning_sdk

# Install in development mode
pip install -e python/

# Run tests
cd python
.venv/bin/python3 -m pytest tests/ -v

# Run examples
cd ../examples
python3 openai_example.py
```

TypeScript development setup:

```bash
# Clone repository
git clone https://github.com/letta-ai/agentic_learning_sdk.git
cd agentic_learning_sdk/typescript

# Install dependencies
npm install

# Build
npm run build

# Run tests
npm test

# Run examples
cd ../examples
npx tsx openai_example.ts
```

To point the SDK at a custom Letta server, pass a base URL:

```python
learning_client = AgenticLearning(base_url="http://custom-host:8283")
```

```python
# Create agent with custom memory blocks
agent = learning_client.agents.create(
    agent="my_agent",
    memory=["human", "persona", "project_context"],
    model="anthropic/claude-sonnet-4-20250514"
)

# Create custom memory block
learning_client.memory.create(
    agent="my_agent",
    label="user_preferences",
    value="Prefers concise technical responses"
)
```

Async usage:

```python
from agentic_learning import learning_async, AsyncAgenticLearning
async_client = AsyncAgenticLearning()
async with learning_async(agent="my_agent", client=async_client):
    response = await async_llm_client.generate(...)
```

This SDK includes comprehensive test suites for both Python and TypeScript:
Python:
- 36/36 tests passing (100%)
- Unit tests with mocked LLM HTTP calls
- Integration tests with real API calls
- See python/tests/README.md for details

TypeScript:
- 40/40 tests passing (100%)
- Unit tests with Jest mocks
- Integration tests with real API calls
- See typescript/tests/README.md for details
Both test suites cover all supported providers and validate:
- ✅ Conversation capture and storage
- ✅ Memory injection into prompts
- ✅ Capture-only mode
- ✅ Interceptor cleanup
Contributions are welcome! Please feel free to submit a Pull Request.
To add support for a new provider:

- Create a new interceptor in `python/src/agentic_learning/interceptors/`
- Extend `BaseAPIInterceptor` (for API-level) or `BaseInterceptor` (for transport-level)
- Implement SDK-specific methods: `extract_user_messages()`, `extract_assistant_message()`, `inject_memory_context()`, `_build_response_from_chunks()`
- Register in `__init__.py`
- Add example to `examples/`
See existing interceptors for reference implementations.
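A skeleton of what such an interceptor might look like (the method bodies are hypothetical, and the real base-class contract may differ; check `BaseAPIInterceptor` in the source):

```python
# Hypothetical skeleton for a new provider interceptor; the actual
# BaseAPIInterceptor contract may differ from what is sketched here.
from agentic_learning.interceptors import BaseAPIInterceptor  # assumed import path

class MyProviderInterceptor(BaseAPIInterceptor):
    def extract_user_messages(self, request):
        # Pull user-authored messages out of the provider's request format
        return [m for m in request.get("messages", []) if m.get("role") == "user"]

    def extract_assistant_message(self, response):
        # Pull the assistant's reply out of the provider's response format
        return response["choices"][0]["message"]["content"]

    def inject_memory_context(self, request, context):
        # Prepend retrieved memories so the provider sees enriched prompts
        request["messages"] = [{"role": "system", "content": context}] + request["messages"]
        return request

    def _build_response_from_chunks(self, chunks):
        # Reassemble a streamed response into a single message
        return "".join(chunks)
```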
Apache 2.0 - See LICENSE for details.
- Homepage
- Examples
- Issue Tracker
- Letta Discord
- Letta Documentation
Built with Letta - the leading platform for building stateful AI agents with long-term memory.