Agentic Learning SDK - AI Memory Layer for Any Application

Add continual learning to any LLM agent with one line of code. This SDK enables agents to learn from every conversation and recall context across sessions, making your agents truly stateful.

from openai import OpenAI
from agentic_learning import learning

client = OpenAI()

with learning(agent="my_agent"):
    response = client.chat.completions.create(...)  # LLM is now stateful!


Quick Start

Python Installation

pip install agentic-learning

TypeScript Installation

npm install @letta-ai/agentic-learning

Basic Usage (Python)

# Set your API keys
export OPENAI_API_KEY="your-openai-key"
export LETTA_API_KEY="your-letta-key"

from openai import OpenAI
from agentic_learning import learning

client = OpenAI()

# Add continual learning with one line
with learning(agent="my_assistant"):
    # All LLM calls inside this block have learning enabled
    response = client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": "My name is Alice"}]
    )

    # Agent remembers prior context
    response = client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": "What's my name?"}]
    )
    # Returns: "Your name is Alice"

That's it - this SDK automatically:

  • ✅ Learns from every conversation
  • ✅ Recalls relevant context when needed
  • ✅ Remembers across sessions
  • ✅ Works with your existing LLM code

Basic Usage (TypeScript)

# Set your API keys
export OPENAI_API_KEY="your-openai-key"
export LETTA_API_KEY="your-letta-key"

import { learning } from '@letta-ai/agentic-learning';
import OpenAI from 'openai';

const client = new OpenAI();

// Add continual learning with one line
await learning({ agent: "my_assistant" }, async () => {
    // All LLM calls inside this block have learning enabled
    const response = await client.chat.completions.create({
        model: "gpt-5",
        messages: [{ role: "user", content: "My name is Alice" }]
    });

    // Agent remembers prior context
    const response2 = await client.chat.completions.create({
        model: "gpt-5",
        messages: [{ role: "user", content: "What's my name?" }]
    });
    // Returns: "Your name is Alice"
});

Supported Providers

Provider                  Package                          Status     Py Example                    TS Example
Anthropic                 anthropic                        ✅ Stable  anthropic_example.py          anthropic_example.ts
Claude Agent SDK          @anthropic-ai/claude-agent-sdk   ✅ Stable  claude_example.py             claude_example.ts
OpenAI Chat Completions   openai                           ✅ Stable  openai_example.py             openai_example.ts
OpenAI Responses API      openai                           ✅ Stable  openai_responses_example.py   openai_responses_example.ts
Gemini                    google-generativeai              ✅ Stable  gemini_example.py             gemini_example.ts
Vercel AI SDK             ai                               ✅ Stable  N/A (TS only)                 vercel_example.ts

Create an issue to request support for another provider, or contribute a PR.

Core Concepts

Learning Context

Wrap any LLM calls in a learning() context to enable continual learning:

with learning(agent="agent_name"):
    # All LLM calls inside this block have learning enabled
    response = llm_client.generate(...)

Note: Learning is scoped by agent name. Each agent learns independently, so agent="sales_bot" and agent="support_bot" maintain separate memories.
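
For example, building on the snippet above (a minimal sketch), two differently named agents never share what they learn:

with learning(agent="sales_bot"):
    # This conversation is stored under sales_bot only
    client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": "Our top account is Acme Corp"}]
    )

with learning(agent="support_bot"):
    # support_bot has no memory of the conversation above
    client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": "Who is our top account?"}]
    )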

Context Injection

The SDK automatically retrieves relevant context from past conversations:

# First session
with learning(agent="sales_bot", memory=["customer"]):
    response = client.chat.completions.create(
        messages=[{"role": "user", "content": "I'm interested in Product X"}]
    )

# Later session - agent remembers any information related to "customer"
with learning(agent="sales_bot", memory=["customer"]):
    response = client.chat.completions.create(
        messages=[{"role": "user", "content": "Tell me more about that product"}]
    )
    # Agent knows you're asking about Product X

Capture-Only Mode

Store conversations without injecting context (useful for logging or background processing):

with learning(agent="agent_name", capture_only=True):
    # Conversations saved for learning but not injected into prompts
    response = client.chat.completions.create(...)

# Later, list the entire conversation history
from agentic_learning import AgenticLearning

learning_client = AgenticLearning()
messages = learning_client.messages.list("agent_name")

Knowledge Search

Query what your agent has learned with semantic search:

# Search for relevant conversations
messages = learning_client.memory.search(
    agent="agent_name",
    query="What are my project requirements?"
)

How It Works

This SDK adds stateful memory to your existing LLM code with zero architectural changes:

Benefits:

  • 🔌 Drop-in integration - Works with your existing LLM provider SDK code
  • 🧠 Automatic memory - Relevant context retrieved and injected into prompts
  • 💾 Persistent across sessions - Conversations remembered even after restarts
  • 💰 Cost-effective - Only relevant context injected, reducing token usage
  • ⚡ Fast retrieval - Semantic search powered by Letta's optimized infrastructure
  • 🏢 Production-ready - Built on Letta's proven memory management platform

Architecture:

1. 🎯 Wrap      2. 📝 Capture       3. 🔍 Retrieve   4. 🤖 Respond
   your code       conversations      relevant         with full
   in learning     automatically      memories         context

┌─────────────┐
│  Your Code  │
│  learning() │
└──────┬──────┘
       │
       ▼
┌─────────────┐    ┌──────────────┐
│ Interceptor │───▶│ Letta Server │  (Stores conversations,
│  (Inject)   │◀───│  (Memory)    │   retrieves context)
└──────┬──────┘    └──────────────┘
       │
       ▼
┌─────────────┐
│  LLM API    │  (Sees enriched prompts)
│ OpenAI/etc  │
└─────────────┘

Architecture

Interceptors

The SDK provides interceptors for different integration patterns:

  • API-Level Interceptors (OpenAI, Anthropic, Gemini) - Patch HTTP API methods
  • Transport-Level Interceptors (Claude Agent SDK) - Patch subprocess transport layer

All interceptors share common logic through BaseAPIInterceptor, making it easy to add new providers.
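
Conceptually, an API-level interceptor wraps the provider's request method: it injects retrieved memory before the call and captures the exchange afterward. A minimal sketch of that pattern (illustrative only; the hook names here are not the SDK's real internals):

import functools

def patch_create(client, inject_context, capture_turn):
    """Illustrative monkey-patch of an OpenAI-style client (not the SDK's actual code)."""
    original = client.chat.completions.create

    @functools.wraps(original)
    def wrapped(*args, **kwargs):
        kwargs = inject_context(kwargs)      # add retrieved memories to the prompt
        response = original(*args, **kwargs)
        capture_turn(kwargs, response)       # store the conversation turn for learning
        return response

    client.chat.completions.create = wrapped
    # Return an undo handle so the interceptor can clean up when the context exits
    return lambda: setattr(client.chat.completions, "create", original)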

Client Architecture

AgenticLearning()
├── agents          # Agent management
│   ├── create()
│   ├── update()
│   ├── retrieve()
│   ├── list()
│   ├── delete()
│   └── sleeptime   # Background memory processing
├── memory          # Memory block management
│   ├── create()
│   ├── upsert()
│   ├── retrieve()
│   ├── list()
│   ├── search()    # Semantic search
│   ├── remember()  # Store memories
│   └── context     # Memory context retrieval
└── messages        # Message history
    ├── capture()   # Save conversation turn
    ├── list()
    └── create()    # Send message to LLM
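
A quick tour of that surface (a sketch assembled from calls shown elsewhere in this README):

from agentic_learning import AgenticLearning

learning_client = AgenticLearning()

# Manage agents
learning_client.agents.create(agent="demo_agent", memory=["human", "persona"])
print(learning_client.agents.list())

# Query what the agent has learned
results = learning_client.memory.search(agent="demo_agent", query="user preferences")

# Inspect raw conversation history
messages = learning_client.messages.list("demo_agent")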

Requirements

Python

  • Python 3.9+
  • Letta API key (sign up at letta.com)
  • At least one LLM SDK:
    • openai>=1.0.0
    • anthropic>=0.18.0
    • google-generativeai>=0.3.0
    • claude-agent-sdk>=0.1.0

TypeScript/JavaScript

  • Node.js 18+
  • Letta API key (sign up at letta.com)
  • At least one LLM SDK:
    • openai>=4.0.0
    • @anthropic-ai/sdk>=0.30.0
    • @google/generative-ai>=0.21.0
    • @anthropic-ai/claude-agent-sdk>=0.1.0
    • ai>=3.0.0 (Vercel AI SDK)

Local Development (Optional)

For local development, you can run a Letta server on your own machine:

from agentic_learning import AgenticLearning, learning

# Connect to local server
learning_client = AgenticLearning(base_url="http://localhost:8283")

with learning(agent="my_agent", client=learning_client):
    response = client.chat.completions.create(...)

Run Letta locally with Docker:

docker run \
  -v ~/.letta/.persist/pgdata:/var/lib/postgresql/data \
  -p 8283:8283 \
  -e OPENAI_API_KEY="your_key" \
  letta/letta:latest

See the self-hosting guide for more options.

Development Setup

Python Development

# Clone repository
git clone https://github.com/letta-ai/agentic-learning-sdk.git
cd agentic-learning-sdk

# Install in development mode
pip install -e python/

# Run tests
cd python
python3 -m pytest tests/ -v

# Run examples
cd ../examples
python3 openai_example.py

TypeScript Development

# Clone repository
git clone https://github.com/letta-ai/agentic-learning-sdk.git
cd agentic-learning-sdk/typescript

# Install dependencies
npm install

# Build
npm run build

# Run tests
npm test

# Run examples
cd ../examples
npx tsx openai_example.ts

Advanced Usage

Custom Letta Server URL

learning_client = AgenticLearning(base_url="http://custom-host:8283")

Agent Configuration

# Create agent with custom memory blocks
agent = learning_client.agents.create(
    agent="my_agent",
    memory=["human", "persona", "project_context"],
    model="anthropic/claude-sonnet-4-20250514"
)

# Create custom memory block
learning_client.memory.create(
    agent="my_agent",
    label="user_preferences",
    value="Prefers concise technical responses"
)
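
Custom blocks then take part in the same learning flow. For example (a sketch reusing the memory parameter shown earlier):

with learning(agent="my_agent", memory=["user_preferences", "project_context"]):
    response = client.chat.completions.create(
        model="gpt-5",
        messages=[{"role": "user", "content": "Summarize our project status"}]
    )
    # Context from the listed blocks is injected into the prompt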

Async Support

from agentic_learning import learning_async, AsyncAgenticLearning

async_client = AsyncAgenticLearning()

async with learning_async(agent="my_agent", client=async_client):
    response = await async_llm_client.generate(...)
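
For instance, paired with OpenAI's async client (a sketch; it assumes the async context manager wraps calls the same way the sync one does):

import asyncio
from openai import AsyncOpenAI
from agentic_learning import learning_async, AsyncAgenticLearning

async def main():
    llm = AsyncOpenAI()
    async_client = AsyncAgenticLearning()

    async with learning_async(agent="my_agent", client=async_client):
        response = await llm.chat.completions.create(
            model="gpt-5",
            messages=[{"role": "user", "content": "I prefer short answers."}],
        )
        print(response.choices[0].message.content)

asyncio.run(main())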

Testing

This SDK includes comprehensive test suites for both Python and TypeScript:

Python Tests

  • 36/36 tests passing (100%)
  • Unit tests with mocked LLM HTTP calls
  • Integration tests with real API calls
  • See python/tests/README.md for details

TypeScript Tests

  • 40/40 tests passing (100%)
  • Unit tests with Jest mocks
  • Integration tests with real API calls
  • See typescript/tests/README.md for details

Both test suites cover all supported providers and validate:

  • ✅ Conversation capture and storage
  • ✅ Memory injection into prompts
  • ✅ Capture-only mode
  • ✅ Interceptor cleanup

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Adding a New Provider

  1. Create a new interceptor in python/src/agentic_learning/interceptors/
  2. Extend BaseAPIInterceptor (for API-level) or BaseInterceptor (for transport-level)
  3. Implement SDK-specific methods:
    • extract_user_messages()
    • extract_assistant_message()
    • inject_memory_context()
    • _build_response_from_chunks()
  4. Register in __init__.py
  5. Add example to examples/

See existing interceptors for reference implementations.
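
As a rough illustration, a new API-level interceptor might look like the skeleton below (a sketch only; the actual base-class method signatures may differ from what is assumed here):

from agentic_learning.interceptors import BaseAPIInterceptor  # path per step 1

class MyProviderInterceptor(BaseAPIInterceptor):
    def extract_user_messages(self, request_kwargs):
        # Pull the user-authored turns out of the provider's request format
        return [m for m in request_kwargs.get("messages", []) if m["role"] == "user"]

    def extract_assistant_message(self, response):
        # Pull the assistant's reply out of the provider's response format
        return response.choices[0].message.content

    def inject_memory_context(self, request_kwargs, context):
        # Prepend retrieved memories as a system message
        messages = request_kwargs.get("messages", [])
        request_kwargs["messages"] = [{"role": "system", "content": context}] + messages
        return request_kwargs

    def _build_response_from_chunks(self, chunks):
        # Reassemble a streamed response into a single message
        return "".join(chunk.delta for chunk in chunks)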

License

Apache 2.0 - See LICENSE for details.

Acknowledgments

Built with Letta - the leading platform for building stateful AI agents with long-term memory.
