
how to force use tool and send the result to llm? #281

Open
moseshu opened this issue Mar 21, 2025 · 3 comments
Labels
question Question about using the SDK

Comments

moseshu commented Mar 21, 2025


I want the book agent, whenever it is called, to always invoke its tool and send the tool's result to the LLM. The code below is what I run. Sometimes the book agent does not execute the tool at all. If tool_choice is set to 'required', the run falls into an infinite loop; if it is set to None, the tool may not be executed. How can I solve this?

# book agent
book_agent = Agent(
    name="Book Agent",
    instructions="""
    You are a professional book content consultant. Your tasks are:
    1. You must use the search_book_content tool to search for relevant content
    2. Answer the user's question based on the search results
    3. If the search results are not sufficient to answer the question, say so clearly
    """,
    tools=[search_book_content],
    model=OpenAIChatCompletionsModel(
        model="gpt-4o-mini-2024-07-18",
        openai_client=openai_client,
    ),
    tool_use_behavior='run_llm_again',
    model_settings=ModelSettings(temperature=0.7, max_tokens=8192, tool_choice=None),
    hooks=CustomAgentHooks(display_name="Book Agent"),
)
#
chat_agent = Agent(
    name="Chat Agent",
    instructions="""
    You are a friendly chat assistant who is responsible for daily conversations with users.
    Please respond to users' questions in a natural and friendly tone.
    Remember to keep the conversation coherent and interesting.
    """,
    model=OpenAIChatCompletionsModel(
        model="gpt-4o-mini-2024-07-18",
        openai_client=openai_client,
    ),
    model_settings=ModelSettings(temperature=0.7),
    hooks=CustomAgentHooks(display_name="Chat Agent"),
)

to_book_agent = handoff(
    agent=book_agent,
    tool_name_override="handle_book_query",
    tool_description_override="Handling book-related inquiries",
    on_handoff=on_handoff,
    input_type=EscalationData,
)

to_chat_agent = handoff(
    agent=chat_agent,
    tool_name_override="handle_chat",
    tool_description_override="Handling general chat inquiries"
)

triga_agent = Agent(
    name="Triga Agent",
    instructions=prompt_with_handoff_instructions("""You are an intent classification agent. Your task is to route users to different agents based on their intent.
    Book Agent: handles queries related to book content (such as asking about the author, content, chapters, etc.). You can call this agent directly through the handle_book_query tool.
    Chat Agent: ordinary chat content. You can call this agent directly through handle_chat.
    """),
    handoffs=[to_book_agent, to_chat_agent],
)
async def main():
    response = Runner.run_streamed(triga_agent, input="What is the last sentence of Chapter 1?", max_turns=10, run_config=run_config)
    async for event in response.stream_events():
        if event.type == "raw_response_event" and isinstance(event.data, ResponseTextDeltaEvent):
            print(event.data.delta, end="", flush=True)
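The infinite loop with tool_choice='required' happens because every model turn is again forced to call a tool, so the run never produces a plain text answer. A workaround (sketched here with purely illustrative names, independent of the SDK's real API) is to force the tool only on the first turn and relax the setting once a tool result is in context:

```python
# Sketch of the "force once, then relax" idea for avoiding the
# tool_choice='required' loop. All names are illustrative stand-ins.

def run_agent(model_call, max_turns=10):
    """model_call(tool_choice) -> ("tool", result) or ("text", answer)."""
    tool_choice = "required"          # force a tool call on the first turn
    history = []
    for _ in range(max_turns):
        kind, payload = model_call(tool_choice)
        if kind == "tool":
            history.append(("tool_result", payload))
            tool_choice = "auto"      # relax so the next turn can answer in text
        else:
            return payload            # final text answer from the model
    raise RuntimeError("max_turns exceeded")

def fake_model(tool_choice):
    # Stand-in model: calls the tool when forced, answers in text otherwise.
    if tool_choice == "required":
        return ("tool", "search_book_content hit")
    return ("text", "The last sentence of Chapter 1 is ...")

print(run_agent(fake_model))  # the tool runs exactly once, then the model answers
```

With a fixed tool_choice="required" the else branch is never reached, which is the loop described above; resetting to "auto" after the first tool result is what breaks it.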
DanieleMorotti commented Mar 21, 2025

Hi, this new part of the documentation may help you. It explains how to deal with it.

Moreover, there's also an example.


moseshu commented Mar 21, 2025

> Hi, this new part of the documentation may help you. It explains how to deal with it.
>
> Moreover, there's also an example.

I wrote my code with reference to that example. If I set tool_choice='required' and use the custom custom_tool_use_behavior below, the final_output is the result of the tool; it is never sent to the LLM.

async def custom_tool_use_behavior(
    context: RunContextWrapper[Any], results: list[FunctionToolResult]
) -> ToolsToFinalOutputResult:
    data: str = results[0].output
    return ToolsToFinalOutputResult(
        is_final_output=True, final_output="Hello"
    )

book_agent = Agent(
    name="Book Agent",
    instructions="""
    You are a professional book content consultant. Your tasks are:
    1. You must use the search_book_content tool to search for relevant content
    2. Answer the user's question based on the search results
    3. If the search results are not sufficient to answer the question, say so clearly
    """,
    tools=[search_book_content],
    model=OpenAIChatCompletionsModel(
        model="gpt-4o-mini-2024-07-18",
        openai_client=openai_client,
    ),
    tool_use_behavior=custom_tool_use_behavior,
    model_settings=ModelSettings(temperature=0.7, max_tokens=8192, tool_choice='required'),
    hooks=CustomAgentHooks(display_name="Book Agent"),
)
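Note that returning is_final_output=True in the callback above is exactly what stops the run before the LLM sees the tool output: the callback declares the run finished with final_output as the answer. If the intent is to let the LLM process the results, the callback should return is_final_output=False instead (which, if I read the SDK's contract correctly, is equivalent to the default 'run_llm_again' behavior). A self-contained sketch using a stand-in class, not the SDK's real types:

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in for the SDK's ToolsToFinalOutputResult (illustrative only).
@dataclass
class ToolsToFinalOutputResult:
    is_final_output: bool
    final_output: Optional[str] = None

def forward_to_llm(results):
    """Inspect tool results but let the LLM produce the final answer."""
    # is_final_output=False means: do not stop here; run the model again
    # with the tool output in context.
    return ToolsToFinalOutputResult(is_final_output=False)

def stop_with_tool_output(results):
    """Short-circuit: the first tool's output becomes the final answer."""
    return ToolsToFinalOutputResult(is_final_output=True,
                                    final_output=results[0])

print(forward_to_llm(["search hit"]).is_final_output)      # False
print(stop_with_tool_output(["search hit"]).final_output)  # search hit
```

Keep in mind that with is_final_output=False and tool_choice still fixed at 'required', the loop from the original question returns, so the two settings have to be reconciled.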

rm-openai (Collaborator) commented:

#263 will enable this @moseshu
