@hebenon commented Nov 1, 2025

Please describe the purpose of this pull request.
This PR is a potential fix for #3057. When a tool executes client-side (e.g., via letta-code), the resulting MessageRole.tool message persisted by the server contains populated tool_returns but an empty content list. The OpenAI Responses serializer in Message.to_openai_responses_dicts still assumes a single TextContent entry and trips an assertion, so every subsequent tool step crashes before the agent can continue.

Changes in the fix include:

  • Reworked Message.to_openai_dict and Message.to_openai_responses_dicts so tool messages prefer the tool_returns payload (mirroring the Anthropic serializer) with fallback to legacy single-TextContent handling.
  • Updated create_parallel_tool_messages_from_llm_response to inject a TextContent placeholder when only tool_returns are produced, preserving older consumers.
  • Added test_message_serialization.py to validate both behaviors (tool-return-first and legacy fallback).
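The tool-return-first logic with legacy fallback can be sketched as follows. The dataclasses and field names below are illustrative stand-ins, not Letta's actual Message/TextContent/ToolReturn schema or the real serializer signature:

```python
from dataclasses import dataclass, field

# Illustrative stand-ins for Letta's content/return types (not the real schema).
@dataclass
class TextContent:
    text: str

@dataclass
class ToolReturn:
    tool_call_id: str
    func_response: str

@dataclass
class ToolMessage:
    content: list = field(default_factory=list)
    tool_returns: list = field(default_factory=list)

def to_openai_responses_dicts(msg: ToolMessage) -> list[dict]:
    # Prefer the tool_returns payload when present (mirroring the Anthropic
    # serializer's approach described above).
    if msg.tool_returns:
        return [
            {"type": "function_call_output",
             "call_id": ret.tool_call_id,
             "output": ret.func_response}
            for ret in msg.tool_returns
        ]
    # Legacy fallback: the old single-TextContent assumption, kept for
    # messages persisted before tool_returns was populated.
    assert len(msg.content) == 1 and isinstance(msg.content[0], TextContent)
    return [{"type": "function_call_output", "output": msg.content[0].text}]

# A client-side tool run with an empty content list no longer trips the assert:
msg = ToolMessage(tool_returns=[ToolReturn("call_1", '{"ok": true}')])
print(to_openai_responses_dicts(msg))
```

Pre-fix, the function consisted of only the assertion branch, which is why a client-side tool result (empty content, populated tool_returns) crashed the agent loop.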

How to test
How can we test your PR during review? What commands should we run? What outcomes should we expect?

  • Configure an agent to use an OpenAI model, with tools upserted from letta-cli.
  • Give the agent a request that relies on local tools to complete.

Have you tested this PR?
Post-fix, tools can run in letta-cli with OpenAI agents. E.g.:
(screenshot: a local tool executing successfully in letta-cli with an OpenAI agent)

Related issues or PRs
This PR addresses #3057.

Is your PR over 500 lines of code?
No.

Additional context
For transparency, this fix was vibecoded with GitHub Copilot.
