Description
Checked other resources
- This is a feature request, not a bug report or usage question.
- I added a clear and descriptive title that summarizes the feature request.
- I used the GitHub search to find a similar feature request and didn't find it.
- I checked the LangChain documentation and API reference to see if this feature already exists.
- This is not related to the langchain-community package.
Package (Required)
- langchain
- langchain-openai
- langchain-anthropic
- langchain-classic
- langchain-core
- langchain-cli
- langchain-model-profiles
- langchain-tests
- langchain-text-splitters
- langchain-chroma
- langchain-deepseek
- langchain-exa
- langchain-fireworks
- langchain-groq
- langchain-huggingface
- langchain-mistralai
- langchain-nomic
- langchain-ollama
- langchain-perplexity
- langchain-prompty
- langchain-qdrant
- langchain-xai
- Other / not sure / general
Feature Description
I would like to request two small additions to improve the ergonomics of LCEL pipelines when working with message-based agents:
1. A helper function `to_message_state(obj)` that converts strings, `BaseMessage` objects, lists of messages, or mixed inputs into the standard message-state format `{"messages": [BaseMessage, ...]}`.
2. A Runnable utility `as_message_state()` that allows this conversion to be inserted cleanly in LCEL pipe chains (e.g. `prompt | llm | as_message_state()`).
This avoids the need for the repeated `RunnableLambda(lambda x: {"messages": [x]})` and standardizes a pattern used by many LangGraph-based workflows.
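To make this concrete, here is the behavior the helper would have for a few input shapes. Note that `to_message_state` does not exist in LangChain today; its name and semantics are part of this proposal, and a sketch implementation is given under Proposed Solution below:

```python
from langchain_core.messages import AIMessage, HumanMessage

# All of these would normalize to {"messages": [BaseMessage, ...]}
to_message_state("hi")
# -> {"messages": [HumanMessage(content="hi")]}
to_message_state(AIMessage(content="hello"))
# -> {"messages": [AIMessage(content="hello")]}
to_message_state([HumanMessage(content="hi"), AIMessage(content="hello")])
# -> {"messages": [HumanMessage(content="hi"), AIMessage(content="hello")]}
```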
Use Case
Many LCEL pipelines ultimately need to output a message-state dictionary so they can be consumed by LangGraph or any message-history–based workflow.
Currently, users must manually wrap LLM outputs with `RunnableLambda(lambda x: {"messages": [x]})`. This pattern:
- is verbose
- is error-prone
- appears repeatedly in real-world user code
- makes LCEL pipelines harder to read
- creates friction when integrating LangChain with LangGraph
Providing `to_message_state()` and `as_message_state()` would make this workflow clean, consistent, and beginner-friendly.
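For illustration, here is what the change would look like in a typical pipeline. The prompt and model are stand-ins (any chat model works), and `as_message_state` is the proposed utility, not an existing API:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda
from langchain_openai import ChatOpenAI  # any chat model works here

prompt = ChatPromptTemplate.from_messages([("human", "{question}")])
llm = ChatOpenAI(model="gpt-4o-mini")  # example model; not part of the proposal

# Today: manual wrapping, repeated across codebases
chain = prompt | llm | RunnableLambda(lambda x: {"messages": [x]})

# Proposed: the same pipeline with the suggested utility
chain = prompt | llm | as_message_state()
```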
Proposed Solution
Helper function `to_message_state(obj)`

Behavior:
- `str` → converted into a `HumanMessage`
- `BaseMessage` → wrapped directly
- `list` → normalized into a list of messages
- `{"messages": [...]}` → returned unchanged
- `None` → returns `{"messages": []}`
- invalid types → `TypeError`

Always outputs:

`{"messages": [BaseMessage, ...]}`
Runnable utility `as_message_state()`

A thin wrapper around the helper:
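A sketch of that wrapper, assuming the `to_message_state` helper above and the existing `RunnableLambda` primitive:

```python
from langchain_core.runnables import RunnableLambda


def as_message_state():
    """Return a Runnable that normalizes its input into message state."""
    return RunnableLambda(to_message_state)
```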
Usage: `chain = prompt | llm | as_message_state()`

Implementation notes
I am happy to submit a PR with:
- the helper
- the runnable
- complete test coverage
- necessary documentation updates
- updated examples where appropriate
Alternatives Considered
1. Keep using `RunnableLambda` manually

This works:

`RunnableLambda(lambda x: {"messages": [x]})`

but it is verbose, error-prone, and inconsistent.
2. Implement this in LangGraph instead of LangChain
This is not ideal because:
- LCEL, Runnables, and message types live in LangChain
- LangGraph only consumes normalized state and does not transform LLM output
3. Modify LLM classes to output MessageState directly
Too intrusive; breaks separation of concerns.
Additional Context
- MessageState (`{"messages": [...]}`) is widely used in LangGraph and message-history–based workflows.
- There is currently no official helper or Runnable in LangChain to convert arbitrary LLM outputs into this state format.
- Several community examples show users doing this manually with `RunnableLambda`.
- This proposal aims to standardize and simplify that pattern.
- I can contribute a PR once maintainers approve the design.