fix: OpenAI Tool Return Serialisation #3058
Please describe the purpose of this pull request.
This PR is a potential fix for #3057. When a tool executes client-side (e.g., via letta-code), the resulting MessageRole.tool object persisted by the server contains populated tool_returns but an empty content list. The OpenAI Responses serializer in Message.to_openai_responses_dicts still assumes a single TextContent entry and hits an assertion failure, crashing every subsequent tool step before the agent can continue.
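The failure mode and the fallback can be sketched as follows. This is a minimal illustration only; the class and field names below are hypothetical stand-ins, not Letta's actual API:

```python
# Hypothetical sketch of the guard; names are illustrative, not Letta's real types.
from dataclasses import dataclass, field

@dataclass
class TextContent:
    text: str

@dataclass
class ToolReturn:
    tool_call_id: str
    status: str
    stdout: str = ""

@dataclass
class ToolMessage:
    content: list = field(default_factory=list)
    tool_returns: list = field(default_factory=list)

    def to_openai_responses_dicts(self):
        # Before the fix, the serializer effectively did
        # `assert len(self.content) == 1`, which crashed when a
        # client-side tool left content empty but tool_returns populated.
        if self.content:
            output = "".join(c.text for c in self.content
                             if isinstance(c, TextContent))
        else:
            # Fall back to the tool_returns payload when content is empty.
            output = "\n".join(r.stdout or r.status for r in self.tool_returns)
        call_id = self.tool_returns[0].tool_call_id if self.tool_returns else None
        return [{"type": "function_call_output",
                 "call_id": call_id,
                 "output": output}]

# A tool message as persisted after a client-side tool run: no content,
# but a populated tool return.
msg = ToolMessage(tool_returns=[ToolReturn("call_1", "success", "42")])
print(msg.to_openai_responses_dicts())
```

With the fallback in place, the empty-content message serializes cleanly instead of asserting, so the agent loop can proceed to the next tool step.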
Changes in the fix include:
- Updating Message.to_openai_responses_dicts to fall back to serializing tool_returns when the content list is empty, rather than asserting on a single TextContent entry.
How to test
How can we test your PR during review? What commands should we run? What outcomes should we expect?
Have you tested this PR?

After the fix, tools run successfully in letta-cli with OpenAI agents.
Related issues or PRs
This PR addresses #3057.
Is your PR over 500 lines of code?
No.
Additional context
For transparency, this fix was vibecoded with GitHub Copilot.