Description
Checked other resources
- This is a bug, not a usage question.
- I added a clear and descriptive title that summarizes this issue.
- I used the GitHub search to find a similar question and didn't find it.
- I am sure that this is a bug in LangChain rather than my code.
- The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
- This is not related to the langchain-community package.
- I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.
Package (Required)
- langchain
- langchain-openai
- langchain-anthropic
- langchain-classic
- langchain-core
- langchain-cli
- langchain-model-profiles
- langchain-tests
- langchain-text-splitters
- langchain-chroma
- langchain-deepseek
- langchain-exa
- langchain-fireworks
- langchain-groq
- langchain-huggingface
- langchain-mistralai
- langchain-nomic
- langchain-ollama
- langchain-perplexity
- langchain-prompty
- langchain-qdrant
- langchain-xai
- Other / not sure / general
Example Code (Python)
from langchain_openai import AzureChatOpenAI
import warnings
from langchain.agents import create_agent
from langchain.agents.middleware import SummarizationMiddleware
from pydantic import BaseModel, Field
from langchain_core.messages import HumanMessage
from langchain_core.prompts import PromptTemplate
import json

warnings.filterwarnings('ignore')

azure_open_ai_config_gpt5 = {
    "api_key": "**obfuscated**",
    "azure_endpoint": "**obfuscated**",
    "azure_deployment": "gpt-5",
    "model": "gpt-5",
    "deployment_name": "gpt-5",
    "api_version": "2025-03-01-preview"
}

reasoning = ["minimal", "low", "medium", "high"]
verbosity = ["low", "medium", "high"]

# One AzureChatOpenAI client per (reasoning effort, verbosity) pair;
# medium verbosity gets the short key "gpt5_{r}_reasoning".
model = {
    (
        f"gpt5_{r}_reasoning" if v == "medium"
        else f"gpt5_{r}_reasoning_{v}_verbosity"
    ): AzureChatOpenAI(
        **azure_open_ai_config_gpt5,
        temperature=0,
        model_kwargs={"reasoning": {"effort": r}, "verbosity": v, "max_output_tokens": 128_000},
        timeout=60 * 10,
        max_retries=3,
        max_tokens=128_000
    )
    for r in reasoning for v in verbosity
}

def add(
    x: int,
    y: int
) -> int:
    """Add two integers"""
    return x + y

class AgentResponse(BaseModel):
    content: str = Field(..., description="Standard response")
    tool_calls: int = Field(..., description="Number of tool calls")

agent = create_agent(
    model=model['gpt5_medium_reasoning'],
    response_format=AgentResponse,
    middleware=[
        SummarizationMiddleware(
            model=model['gpt5_low_reasoning'],
            max_tokens_before_summary=2000,
            messages_to_keep=10,
            summary_prompt="Summarize the following tool-call history: {messages}"
        )
    ],
    tools=[add],
    system_prompt="You are a helpful assistant that can add numbers",
)

test = agent.invoke(
    {
        "messages": [
            HumanMessage(
                """
                Find the first 100 numbers of the fibonacci sequence using your tools
                """
            )
        ]
    }
)
Error Message and Stack Trace (if applicable)
'list' object has no attribute 'strip'
Description
The error surfaces in the message returned by the SummarizationMiddleware:
HumanMessage(content="Here is a summary of the conversation to date:\n\nError generating summary: 'list' object has no attribute 'strip'", additional_kwargs={}, response_metadata={}, id='cce7000c-af83-4b57-8f3a-1d3e901e1720'),
AIMessage(content=[{'arguments': '{"x":28657,"y":46368}', 'call_id': 'call_ChyA0TPDeU3LMnB14g2Prg1R', 'name': 'add', 'type': 'function_call', 'id': 'fc_081ee22ba7dbf85500691f33b3ea0881939f076de38f38f90c', 'status': 'completed'}], additional_kwargs={}, response_metadata={'id': 'resp_081ee22ba7dbf85500691f33b356f081938b4355be25524dcd', 'created_at': 1763652531.0, 'metadata': {}, 'model': 'gpt-5', 'object': 'response', 'service_tier': 'auto', 'status': 'completed', 'model_provider': 'openai', 'model_name': 'gpt-5'}, id='resp_081ee22ba7dbf85500691f33b356f081938b4355be25524dcd', tool_calls=[{'name': 'add', 'args': {'x': 28657, 'y': 46368}, 'id': 'call_ChyA0TPDeU3LMnB14g2Prg1R', 'type': 'tool_call'}], usage_metadata={'input_tokens': 4418, 'output_tokens': 23, 'total_tokens': 4441, 'input_token_details': {'cache_read': 4352}, 'output_token_details': {'reasoning': 0}})
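For context, note that in the AIMessage above, content is a list of content blocks (the Responses API shape), not a plain string. Any code path that assumes str content and calls .strip() on it will raise exactly this error. A minimal sketch of the suspected failure mode (assuming the middleware calls .strip() on raw message content somewhere while building the summary):

content = [{"arguments": '{"x":28657,"y":46368}', "name": "add", "type": "function_call"}]
content.strip()  # AttributeError: 'list' object has no attribute 'strip'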
I can see from the MLflow trace that the summary is being created successfully:
User asked for the first 100 Fibonacci numbers.
The assistant iteratively used a single tool named "add" to sum consecutive terms, starting with add(0, 1) = 1.
Subsequent calls summed the previous two results: add(1, 1) = 2, add(1, 2) = 3, add(2, 3) = 5, … continuing up to add(17711, 28657) = 46368.
Results produced: 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368.
The process generated 23 Fibonacci numbers and stopped at 46368, not completing the requested 100 numbers.
Clearly, somewhere a list is being processed as if it were a string. I was wondering if anyone else has hit a similar issue? I don't think I'm doing anything wrong here, but if there's a fix or workaround, that would be greatly appreciated. One sketch of a possible workaround is shown below.
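A possible workaround while this stands (a sketch only, not a LangChain API; flatten_list_content is a hypothetical helper): coerce any list-valued message content into a plain string before the history reaches the middleware, assuming the failure happens when the middleware formats the messages into its summary prompt:

from langchain_core.messages import BaseMessage

def flatten_list_content(message: BaseMessage) -> BaseMessage:
    """Hypothetical helper: collapse list-of-blocks content into one string."""
    if isinstance(message.content, list):
        parts = []
        for block in message.content:
            if isinstance(block, str):
                parts.append(block)
            elif isinstance(block, dict) and block.get("text"):
                # Tool-call blocks carry no 'text' key and are dropped here.
                parts.append(block["text"])
        return message.model_copy(update={"content": "\n".join(parts)})
    return message

Applying this to the message list before invoking the agent (or inside a small custom middleware that runs ahead of SummarizationMiddleware) should at least avoid handing list content to whatever calls .strip().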
System Info
System Information
OS: Linux
OS Version: #100-Ubuntu SMP Tue May 27 21:41:06 UTC 2025
Python Version: 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0]
Package Information
langchain_core: 1.0.7
langchain: 1.0.8
langsmith: 0.4.44
langchain_openai: 1.0.3
langchain_tavily: 0.2.12
langchain_text_splitters: 1.0.0
langgraph_sdk: 0.2.9
Optional packages not installed
langserve
Other Dependencies
aiohttp: 3.13.2
httpx: 0.27.0
jsonpatch: 1.33
langgraph: 1.0.3
openai: 2.8.1
opentelemetry-api: 1.32.1
opentelemetry-sdk: 1.32.1
orjson: 3.11.4
packaging: 24.2
pydantic: 2.10.6
pytest: 8.3.5
pyyaml: 6.0.2
requests: 2.32.3
requests-toolbelt: 1.0.0
rich: 14.1.0
tenacity: 9.0.0
tiktoken: 0.12.0
typing-extensions: 4.12.2
zstandard: 0.25.0