While using one of your templates, I found a bug in the Agent component's behaviour that is quite troublesome when you use Anthropic as the provider.
I fixed it on my side with some Claude-assisted prompting; here is the code, since I don't know which file contains it. I might make a commit later if I find it.
Since I updated the agent on my side, it seems fixed.
from langchain_core.tools import StructuredTool

from langflow.base.agents.agent import LCToolsAgentComponent
from langflow.base.models.model_input_constants import ALL_PROVIDER_FIELDS, MODEL_PROVIDERS_DICT
from langflow.base.models.model_utils import get_model_name
from langflow.components.helpers import CurrentDateComponent
from langflow.components.helpers.memory import MemoryComponent
from langflow.components.langchain_utilities.tool_calling import ToolCallingAgentComponent
from langflow.io import BoolInput, DropdownInput, MultilineInput, Output
from langflow.schema.dotdict import dotdict
from langflow.schema.message import Message


def set_advanced_true(component_input):
    component_input.advanced = True
    return component_input


class AgentComponent(ToolCallingAgentComponent):
    display_name: str = "Agent"
    description: str = "Define the agent's instructions, then enter a task to complete using tools."
    icon = "bot"
    beta = False
    name = "Agent"

    memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]

    inputs = [
        DropdownInput(
            name="agent_llm",
            display_name="Model Provider",
            info="The provider of the language model that the agent will use to generate responses.",
            options=[*sorted(MODEL_PROVIDERS_DICT.keys()), "Custom"],
            value="Anthropic",
            real_time_refresh=True,
            input_types=[],
        ),
        *MODEL_PROVIDERS_DICT["Anthropic"]["inputs"],
        MultilineInput(
            name="system_prompt",
            display_name="Agent Instructions",
            info="System Prompt: Initial instructions and context provided to guide the agent's behavior.",
            value="You are a helpful assistant that can use tools to answer questions and perform tasks.",
            advanced=False,
        ),
        *LCToolsAgentComponent._base_inputs,
        *memory_inputs,
        BoolInput(
            name="add_current_date_tool",
            display_name="Add tool Current Date",
            advanced=True,
            info="If true, will add a tool to the agent that returns the current date.",
            value=True,
        ),
    ]
    outputs = [Output(name="response", display_name="Response", method="message_response")]

    async def message_response(self) -> Message:
        llm_model, display_name = self.get_llm()
        self.model_name = get_model_name(llm_model, display_name=display_name)
        if llm_model is None:
            msg = "No language model selected"
            raise ValueError(msg)
        self.chat_history = self.get_memory_data()

        if self.add_current_date_tool:
            if not isinstance(self.tools, list):  # type: ignore[has-type]
                self.tools = []
            # Convert CurrentDateComponent to a StructuredTool
            current_date_tool = CurrentDateComponent().to_toolkit()[0]
            if isinstance(current_date_tool, StructuredTool):
                self.tools.append(current_date_tool)
            else:
                msg = "CurrentDateComponent must be converted to a StructuredTool"
                raise ValueError(msg)

        if not self.tools:
            msg = "Tools are required to run the agent."
            raise ValueError(msg)
        self.set(
            llm=llm_model,
            tools=self.tools,
            chat_history=self.chat_history,
            input_value=self.input_value,
            system_prompt=self.system_prompt,
        )
        agent = self.create_agent_runnable()
        return await self.run_agent(agent)

    def get_memory_data(self):
        memory_kwargs = {
            component_input.name: getattr(self, f"{component_input.name}") for component_input in self.memory_inputs
        }
        return MemoryComponent().set(**memory_kwargs).retrieve_messages()

    def get_llm(self):
        if isinstance(self.agent_llm, str):
            try:
                provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)
                if provider_info:
                    component_class = provider_info.get("component_class")
                    display_name = component_class.display_name
                    inputs = provider_info.get("inputs")
                    prefix = provider_info.get("prefix", "")
                    return self._build_llm_model(component_class, inputs, prefix), display_name
            except Exception as e:
                msg = f"Error building {self.agent_llm} language model"
                raise ValueError(msg) from e

        return self.agent_llm, None

    def _build_llm_model(self, component, inputs, prefix=""):
        model_kwargs = {input_.name: getattr(self, f"{prefix}{input_.name}") for input_ in inputs}
        return component.set(**model_kwargs).build_model()

    def delete_fields(self, build_config: dotdict, fields: dict | list[str]) -> None:
        """Delete specified fields from build_config."""
        for field in fields:
            build_config.pop(field, None)

    def update_input_types(self, build_config: dotdict) -> dotdict:
        """Update input types for all fields in build_config."""
        for key, value in build_config.items():
            if isinstance(value, dict):
                if value.get("input_types") is None:
                    build_config[key]["input_types"] = []
            elif hasattr(value, "input_types") and value.input_types is None:
                value.input_types = []
        return build_config

    def update_build_config(self, build_config: dotdict, field_value: str, field_name: str | None = None) -> dotdict:
        # Iterate over all providers in the MODEL_PROVIDERS_DICT
        # Existing logic for updating build_config
        if field_name == "agent_llm":
            provider_info = MODEL_PROVIDERS_DICT.get(field_value)
            if provider_info:
                component_class = provider_info.get("component_class")
                if component_class and hasattr(component_class, "update_build_config"):
                    # Call the component class's update_build_config method
                    build_config = component_class.update_build_config(build_config, field_value, field_name)

            provider_configs: dict[str, tuple[dict, list[dict]]] = {
                provider: (
                    MODEL_PROVIDERS_DICT[provider]["fields"],
                    [
                        MODEL_PROVIDERS_DICT[other_provider]["fields"]
                        for other_provider in MODEL_PROVIDERS_DICT
                        if other_provider != provider
                    ],
                )
                for provider in MODEL_PROVIDERS_DICT
            }
            if field_value in provider_configs:
                fields_to_add, fields_to_delete = provider_configs[field_value]

                # Delete fields from other providers
                for fields in fields_to_delete:
                    self.delete_fields(build_config, fields)

                # Add provider-specific fields
                if field_value == "OpenAI" and not any(field in build_config for field in fields_to_add):
                    build_config.update(fields_to_add)
                else:
                    build_config.update(fields_to_add)
                # Reset input types for agent_llm
                build_config["agent_llm"]["input_types"] = []
            elif field_value == "Custom":
                # Delete all provider fields
                self.delete_fields(build_config, ALL_PROVIDER_FIELDS)
                # Update with custom component
                custom_component = DropdownInput(
                    name="agent_llm",
                    display_name="Language Model",
                    options=[*sorted(MODEL_PROVIDERS_DICT.keys()), "Custom"],
                    value="Custom",
                    real_time_refresh=True,
                    input_types=["LanguageModel"],
                )
                build_config.update({"agent_llm": custom_component.to_dict()})
            # Update input types for all fields
            build_config = self.update_input_types(build_config)

            # Validate required keys
            default_keys = [
                "code",
                "_type",
                "agent_llm",
                "tools",
                "input_value",
                "add_current_date_tool",
                "system_prompt",
                "agent_description",
                "max_iterations",
                "handle_parsing_errors",
                "verbose",
            ]
            missing_keys = [key for key in default_keys if key not in build_config]
            if missing_keys:
                msg = f"Missing required keys in build_config: {missing_keys}"
                raise ValueError(msg)
        if isinstance(self.agent_llm, str) and self.agent_llm in MODEL_PROVIDERS_DICT:
            provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)
            if provider_info:
                component_class = provider_info.get("component_class")
                prefix = provider_info.get("prefix")
                if component_class and hasattr(component_class, "update_build_config"):
                    # Call each component class's update_build_config method
                    # remove the prefix from the field_name
                    if isinstance(field_name, str) and isinstance(prefix, str):
                        field_name = field_name.replace(prefix, "")
                    build_config = component_class.update_build_config(build_config, field_value, field_name)

        return build_config
Hey, @Axle-Bucamp! How's it going? I'm here to help you out with any issues or questions you have. Let's tackle this bug together!
To address the issue where changing the agent provider to "Anthropic" does not update the model choices, ensure that the MODEL_PROVIDERS_DICT includes a properly configured entry for "Anthropic". This entry should specify the component_class, inputs, and any other necessary fields for the "Anthropic" provider. Here's a general approach to resolve this:
Check MODEL_PROVIDERS_DICT: Verify that "Anthropic" is a key in the MODEL_PROVIDERS_DICT and that it includes the necessary configuration details (a quick diagnostic snippet is sketched below this list).
Update update_build_config: Ensure that the update_build_config method correctly handles the switch to "Anthropic", similar to how it manages other providers like "OpenAI".
Provider Configuration:
Define the component_class for "Anthropic" and ensure it has an update_build_config method if required.
Specify the inputs and any prefix needed for the "Anthropic" provider.
Field Management: Ensure that fields specific to "Anthropic" are added to the build configuration and that fields from other providers are removed when "Anthropic" is selected.
If "Anthropic" is not properly configured in the MODEL_PROVIDERS_DICT, the application will not update the model choices correctly. You need to add or correct the configuration for "Anthropic" in the MODEL_PROVIDERS_DICT[1].
Bug Description
Hey GitHub community, what's up?
While using one of your templates, I found a bug in the Agent component's behaviour that is quite troublesome when you use Anthropic as the provider.
I fixed it with some Claude-assisted prompting; the corrected code is pasted above, since I don't know which file contains it. I might make a commit later if I find it.
Since I updated the agent on my side, it seems fixed.
Reproduction
Expected behavior
The model choices should update based on the selected provider.
Who can help?
@anovazzi1
@italojohnny
@NadirJ
Operating System
Windows (Atlas OS)
Langflow Version
Latest
Python Version
3.12
Screenshot
No response
Flow File
No response