Add the ability to dynamically change return_direct inside the function, or some other way dynamically make a tool return direct or not. #34066

@severeserpent

Description

Checked other resources

  • This is a feature request, not a bug report or usage question.
  • I added a clear and descriptive title that summarizes the feature request.
  • I used the GitHub search to find a similar feature request and didn't find it.
  • I checked the LangChain documentation and API reference to see if this feature already exists.
  • This is not related to the langchain-community package.

Package (Required)

  • langchain
  • langchain-openai
  • langchain-anthropic
  • langchain-classic
  • langchain-core
  • langchain-cli
  • langchain-model-profiles
  • langchain-tests
  • langchain-text-splitters
  • langchain-chroma
  • langchain-deepseek
  • langchain-exa
  • langchain-fireworks
  • langchain-groq
  • langchain-huggingface
  • langchain-mistralai
  • langchain-nomic
  • langchain-ollama
  • langchain-perplexity
  • langchain-prompty
  • langchain-qdrant
  • langchain-xai
  • Other / not sure / general

Feature Description

Feature Request: Ability to Dynamically Toggle return_direct at Runtime
Summary

Currently, tool definitions in LangChain / LangGraph allow configuring return_direct statically at tool creation time. However, there is no supported way for a tool function itself to decide—based on runtime logic—whether its output should be returned directly to the user or passed back into the agent workflow.

This limitation makes it impossible to implement advanced behaviors such as conditional branching, short-circuit responses, guardrails, or dynamic overrides without duplicating tools or injecting out-of-band hacks.
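To make the limitation concrete, here is a minimal pure-Python stand-in for the current pattern (this is not the real LangChain decorator, just an illustration): the flag is captured once, when the tool is created, and nothing the function body does at call time can change it.

```python
# Minimal stand-in for today's behavior (illustrative only, not the
# actual LangChain API): return_direct is fixed at definition time.
def tool(return_direct: bool = False):
    def wrap(fn):
        fn.return_direct = return_direct  # set once, at creation
        return fn
    return wrap

@tool(return_direct=True)
def get_weather(city: str) -> str:
    """The body runs at call time, but cannot flip return_direct."""
    return f"Weather in {city}: sunny"
```

Here `get_weather.return_direct` is `True` for every call, regardless of what happens inside the function.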


Use Case

If I am dealing with APIs and my parsing fails for some reason (I cannot convert the JSON response to text), I want the LLM to handle the raw output. But if there was no exception, the tool should return its result directly.
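The use case above could be sketched as follows. `fetch_and_parse` is a hypothetical tool body (not an existing LangChain API) that signals, per call, whether its output should be returned directly:

```python
import json

def fetch_and_parse(raw: str) -> tuple[str, bool]:
    """Hypothetical tool body for the use case above.

    Returns (output, return_direct): if the API payload parses
    cleanly, the text is safe to return straight to the user; on a
    parse failure the raw payload goes back to the LLM to interpret.
    """
    try:
        data = json.loads(raw)
        text = ", ".join(f"{k}: {v}" for k, v in data.items())
        return text, True   # parsed fine -> return directly
    except json.JSONDecodeError:
        return raw, False   # let the agent/LLM handle it
```

The second element of the tuple is the runtime decision this feature request asks LangChain to honor.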

Proposed Solution

No response

Alternatives Considered

No response

Additional Context

No response

Metadata

Assignees

No one assigned

Labels

core (Related to the package `langchain-core`)
