Letta Evals

Letta Evals provides a framework for evaluating AI agents built with Letta. It offers a flexible evaluation system for testing different dimensions of agent behavior, along with the ability to write your own custom evals for the use cases you care about. You can use your own datasets to build private evals that represent common patterns in your agentic workflows.

Letta Evals running an evaluation suite with real-time progress tracking

If you are building agentic systems, creating high-quality evals is one of the most impactful things you can do. Without evals, it can be very difficult and time-intensive to understand how agent configurations, model versions, or prompt changes affect your use case. In the words of OpenAI's President Greg Brockman:

https://x.com/gdb/status/1733553161884127435?s=20

Setup

To run evals against Letta agents, you will need a running Letta server. You can either:

  • Self-hosted: Follow the Letta installation guide to get started with self-hosting your server.
  • Letta Cloud: Create an account at app.letta.com and configure your environment:
    export LETTA_API_KEY=your-api-key        # Get from Letta Cloud dashboard
    export LETTA_PROJECT_ID=your-project-id  # Get from Letta Cloud dashboard
    
    Then set `base_url: https://api.letta.com/` in your suite YAML.
    

If you plan to use LLM-based grading (rubric graders), you'll also need to configure API keys for your chosen provider (e.g., OPENAI_API_KEY).
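
For example, if your rubric grader is backed by OpenAI models (the key value below is a placeholder):

export OPENAI_API_KEY=your-openai-api-key  # only needed for LLM-based rubric grading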

Minimum Required Version: Python 3.9

Installing Letta Evals

If you plan to create custom evals or contribute to this repository, clone the repo directly from GitHub and install it with:

# we recommend uv
git clone https://github.com/letta-ai/letta-evals.git
cd letta-evals
uv sync --extra dev

With an editable install, changes you make to your evals are reflected immediately, without reinstalling.

Running Evals Only

If you simply want to run existing evals locally, you can install the package via pip:

pip install letta-evals

Quick Start

  1. Create a test dataset (dataset.jsonl):
{"input": "What's the capital of France?", "ground_truth": "Paris"}
{"input": "Calculate 2+2", "ground_truth": "4"}
  2. Write a suite configuration (suite.yaml):
name: my-eval-suite
dataset: dataset.jsonl
target:
  kind: letta_agent
  agent_file: my_agent.af  # or use agent_id for existing agents
  base_url: http://localhost:8283
graders:
  quality:
    kind: tool
    function: contains  # or exact_match
    extractor: last_assistant
gate:
  kind: simple
  metric_key: quality
  aggregation: avg_score
  op: gte
  value: 0.75  # require average score >= 0.75
  3. Run the evaluation:
letta-evals run suite.yaml

Running Evals

The core evaluation flow is:

Dataset → Target (Agent) → Extractor → Grader → Gate → Result

# run an evaluation suite with real-time progress
letta-evals run suite.yaml

# save results to a directory (header.json, summary.json, results.jsonl)
letta-evals run suite.yaml --output results

# run multiple times for statistical analysis
letta-evals run suite.yaml --num-runs 5

# validate suite configuration before running
letta-evals validate suite.yaml

# list available components
letta-evals list-extractors
letta-evals list-graders

See the examples/ directory for complete working examples of different eval types.

Writing Evals

Letta Evals supports multiple approaches for creating evaluations, from simple YAML-based configs to fully custom Python implementations.

Getting Started

We suggest getting started with the complete examples in the examples/ directory; each one pairs a dataset and suite config with any custom components it needs.

Writing Custom Components

Letta Evals provides Python decorators for extending the framework:

  • @grader: Register custom scoring functions for domain-specific evaluation logic
  • @extractor: Create custom extractors to parse agent responses in specialized ways
  • @agent_factory: Define programmatic agent creation for dynamic instantiation per sample
  • @suite_setup: Run initialization code before evaluation starts. Supports three signatures:
    • () -> None - Run once at the start with no parameters
    • (client: AsyncLetta) -> None - Run once at the start with client access
    • (client: AsyncLetta, model_name: str) -> None - Run once per model when evaluating multiple models (useful for model-specific setup like creating isolated working directories)

See examples/custom-tool-grader-and-extractor/ for implementation examples.
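
As a rough illustration of the shape these components can take, here is a sketch of a custom extractor and grader. The import path and callback signatures below are assumptions for illustration, not the documented API; treat the linked example directory as the source of truth.

# NOTE: the import path and signatures here are assumptions; see
# examples/custom-tool-grader-and-extractor/ for the real interfaces.
from letta_evals import extractor, grader  # assumed import location


@extractor
def first_sentence(text):
    """Keep only the first sentence of the extracted agent output."""
    return str(text).split(".")[0].strip()


@grader
def mentions_ground_truth(extracted, ground_truth):
    """Return 1.0 if the ground truth appears in the extracted text, else 0.0."""
    return 1.0 if str(ground_truth).lower() in str(extracted).lower() else 0.0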

FAQ

Do you have examples of different eval types?

  • Yes! See the examples/ directory. Each subdirectory contains a complete working example with dataset, suite config, and any custom components.

Can I use this without writing any Python code?

  • Yes! Simple evals can be defined entirely with a JSONL dataset and a YAML suite config using the built-in extractors and graders (as in the Quick Start above); Python is only needed for custom components.

How do I evaluate multi-turn agent interactions?

  • Letta Evals natively supports multi-turn conversations! Simply provide input as a list of strings in your dataset instead of a single string (see the example below). The framework will send each message sequentially and capture the full trajectory. Use extractors like last_turn, all_assistant, or memory_block to evaluate different aspects of the multi-turn interaction. See examples/multiturn-memory-block-extractor/ for a complete example testing memory updates across conversation turns.
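
    For example, a multi-turn sample in dataset.jsonl might look like this (the conversation and ground truth are illustrative, not taken from the examples directory):

    {"input": ["My name is Ada.", "What's my name?"], "ground_truth": "Ada"}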

Can I test the same agent with different LLM models?

  • Yes! Multi-model evaluation is supported: the Letta Code target accepts a list of model_handles (see the FAQ entry below), and @suite_setup functions can run once per model for model-specific setup.

Can I run evaluations multiple times to measure consistency?

  • Yes! Run evaluations multiple times to measure consistency and variance. See examples/simple-tool-grader/multi_run_tool_output_suite.yaml for an example.

    # run 5 times and get mean/std dev statistics
    letta-evals run suite.yaml --num-runs 5 --output results/

    Results include aggregate statistics across runs with mean and standard deviation for all metrics.

Can I monitor long-running evaluations in real-time?

  • Yes! Results are written incrementally as JSONL, allowing you to monitor evaluations in real-time and resume interrupted runs.
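
    For example, you can follow a run's incremental output as it is written (the path assumes the run was launched with --output results):

    tail -f results/results.jsonl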

Can I reuse agent trajectories when testing different graders?

  • Yes! Use --cached-results to reuse agent trajectories across evaluations, avoiding redundant agent runs when testing different graders.
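
    A hypothetical invocation; the flag name comes from the CLI above, but the argument shown here is an assumption, so check letta-evals run --help for the exact form:

    # reuse trajectories from a previous run (argument format is assumed)
    letta-evals run suite.yaml --cached-results results/results.jsonl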

Can I evaluate Letta Code agents across different models?

  • Yes! The Letta Code target supports evaluating multiple models. In your suite YAML, specify multiple model handles:
    target:
      kind: letta_code
      model_handles:
        - anthropic/claude-sonnet-4-5-20250929
        - gpt-5-low
    The framework automatically creates isolated working directories for each model to prevent interference between concurrent evaluations. When combined with @suite_setup functions that accept model_name, you can perform model-specific initialization for each evaluation run.

Can I use this in CI/CD pipelines?

  • Absolutely! Letta Evals is designed to integrate seamlessly into continuous integration workflows. Check out our .github/workflows/e2e-tests.yml for an example of running evaluations in GitHub Actions. The workflow automatically discovers and runs all suite files, making it easy to gate releases or validate changes to your agents.
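
    A minimal sketch of such a workflow (this is not the repository's e2e-tests.yml; the suite path, secret names, and Python version are placeholders):

    name: agent-evals
    on: [pull_request]

    jobs:
      evals:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-python@v5
            with:
              python-version: "3.11"
          - run: pip install letta-evals
          # the suite's gate determines pass/fail for the run
          - run: letta-evals run suite.yaml --output results
            env:
              LETTA_API_KEY: ${{ secrets.LETTA_API_KEY }}
              LETTA_PROJECT_ID: ${{ secrets.LETTA_PROJECT_ID }}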

I don't have access to LLM provider API keys - can I still use LLM-as-judge / rubric grading?

  • Yes! Use the agent-as-judge feature instead of the standard rubric grader. With agent-as-judge, you configure a Letta agent (with its own LLM access) to act as the evaluator. This is perfect for:

    • Teams without direct LLM API access (using Letta Cloud or managed instances)
    • Scenarios where you want the judge to use tools (e.g., web search, database queries) during evaluation
    • Organizations with centralized LLM access through Letta

    See examples/letta-agent-rubric-grader/ for a complete working example. The judge agent just needs a submit_grade(score: float, rationale: str) tool, and the framework handles the rest!
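
    A minimal sketch of what that tool might look like; only the signature comes from the framework's requirement, while the docstring, score range, and return value are assumptions:

    def submit_grade(score: float, rationale: str) -> str:
        """Record the judge agent's verdict for the current sample.

        Args:
            score: The grade assigned by the judge (assumed to be 0.0-1.0).
            rationale: A short explanation of why the score was given.
        """
        return f"Recorded grade {score}: {rationale}"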

Contributing

Contributions are welcome! If you have an interesting eval or feature, please submit an issue or contact us on Discord.

License

This project is licensed under the MIT License. By contributing to evals, you agree to make your evaluation logic and data available under the same MIT license as this repository. You must have adequate rights to upload any data used in an eval. Letta reserves the right to use this data in future service improvements to its product.