fix: Add max_token param in ModelInput #11464
base: main
Conversation
Important: Review skipped. Auto incremental reviews are disabled on this repository. Please check the settings in the CodeRabbit UI.
Walkthrough: The changes add a new max_tokens input to the LanguageModelComponent and pass it through build_model() to get_llm(), where it is mapped to the provider-specific parameter name.
Sequence Diagram(s)
sequenceDiagram
participant User as User/UI
participant Component as LanguageModelComponent
participant BuildModel as build_model()
participant GetLLM as get_llm()
participant Metadata as Provider Metadata
participant Provider as Provider API<br/>(OpenAI/Google/etc)
User->>Component: Set max_tokens input
Component->>BuildModel: Call build_model()
BuildModel->>GetLLM: Call get_llm(model, ..., max_tokens=value)
GetLLM->>Metadata: Lookup max_tokens_field_name<br/>for provider
Metadata-->>GetLLM: Return field name<br/>(e.g., "max_tokens"<br/>or "max_output_tokens")
GetLLM->>Provider: Call with provider-specific<br/>field name and value
Provider-->>GetLLM: Return configured model
GetLLM-->>BuildModel: Return LanguageModel
BuildModel-->>Component: Complete model setup
Component-->>User: Ready for inference
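To make the lookup step concrete, here is a minimal, hypothetical Python sketch of how the generic max_tokens input could be translated into the provider-specific kwarg; the table name and contents are assumptions drawn from the review comments below, not the actual get_llm() implementation.
# Hypothetical sketch of the "Lookup max_tokens_field_name" step in the diagram above.
# provider_metadata stands in for the real per-provider metadata table in unified_models.py.
from typing import Any

provider_metadata: dict[str, dict[str, Any]] = {
    "OpenAI": {"max_tokens_field_name": "max_tokens"},
    "Google Generative AI": {"max_tokens_field_name": "max_output_tokens"},
}


def build_provider_kwargs(provider: str, max_tokens: int | None) -> dict[str, Any]:
    """Translate the generic max_tokens input into the provider-specific kwarg."""
    kwargs: dict[str, Any] = {}
    if max_tokens is not None:
        field = provider_metadata.get(provider, {}).get("max_tokens_field_name", "max_tokens")
        kwargs[field] = max_tokens
    return kwargs


# Example: Google models receive max_output_tokens=512; unknown providers fall back to max_tokens.
print(build_provider_kwargs("Google Generative AI", 512))  # {'max_output_tokens': 512}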
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Important: Pre-merge checks failed. Please resolve all errors before merging. Addressing warnings is optional.
❌ Failed checks (1 error, 3 warnings)
✅ Passed checks (3 passed)
Codecov Report
✅ All modified and coverable lines are covered by tests.
Additional details and impacted files:
@@ Coverage Diff @@
## main #11464 +/- ##
==========================================
- Coverage 34.84% 33.21% -1.63%
==========================================
Files 1420 1420
Lines 68188 68188
Branches 9977 9977
==========================================
- Hits 23762 22651 -1111
- Misses 43202 44313 +1111
Partials 1224 1224
Flags with carried forward coverage won't be shown.
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json (1)
2549-2558: Expose `max_tokens` in the starter project's node template.
The code string now defines `max_tokens` (Line 2699), but the LanguageModelComponent schema here still omits a `max_tokens` template entry and `field_order` doesn't include it (Lines 2549-2558, 2624-2890). Importing this starter project will hide the new parameter. Please add the template field and include it in `field_order` (or regenerate the starter project asset from the updated component definitions).
🔧 Suggested update to surface `max_tokens` in the UI ordering
   "field_order": [
     "model",
     "api_key",
     "base_url_ibm_watsonx",
     "project_id",
     "ollama_base_url",
     "input_value",
     "system_message",
     "stream",
-    "temperature"
+    "temperature",
+    "max_tokens"
   ],

Also applies to: 2624-2890
src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json (1)
3360-3716: Add `max_tokens` to the serialized template/field order so the starter project exposes it.
Line 3525 adds the IntInput in the embedded component code, but Line 3375 (field_order) and Line 3686+ (template) still omit it. That means this starter project likely loads without the new field, so users can't configure `max_tokens`.
🛠️ Suggested update to keep the starter project template in sync
@@
-  "field_order": [
-    "model",
-    "api_key",
-    "base_url_ibm_watsonx",
-    "project_id",
-    "ollama_base_url",
-    "input_value",
-    "system_message",
-    "stream",
-    "temperature"
-  ],
+  "field_order": [
+    "model",
+    "api_key",
+    "base_url_ibm_watsonx",
+    "project_id",
+    "ollama_base_url",
+    "input_value",
+    "system_message",
+    "stream",
+    "temperature",
+    "max_tokens"
+  ],
@@
   "temperature": {
     "_input_type": "SliderInput",
     "advanced": true,
     "display_name": "Temperature",
     "dynamic": false,
     "info": "Controls randomness in responses",
     "max_label": "",
     "max_label_icon": "",
     "min_label": "",
     "min_label_icon": "",
     "name": "temperature",
     "override_skip": false,
     "placeholder": "",
     "range_spec": {
       "max": 1,
       "min": 0,
       "step": 0.01,
       "step_type": "float"
     },
     "required": false,
     "show": true,
     "slider_buttons": false,
     "slider_buttons_options": [],
     "slider_input": false,
     "title_case": false,
     "tool_mode": false,
     "track_in_telemetry": false,
     "type": "slider",
     "value": 0.1
-  }
+  },
+  "max_tokens": {
+    "_input_type": "IntInput",
+    "advanced": true,
+    "display_name": "Max Tokens",
+    "dynamic": false,
+    "info": "Maximum number of tokens to generate. Field name varies by provider.",
+    "list": false,
+    "name": "max_tokens",
+    "placeholder": "",
+    "required": false,
+    "show": true,
+    "title_case": false,
+    "tool_mode": false,
+    "trace_as_metadata": true,
+    "type": "int",
+    "value": ""
+  }

src/backend/tests/unit/components/models_and_agents/test_language_model_component.py (1)
172-203: Fix provider label inconsistency: "Google" should be "Google Generative AI" on lines 260 and 385.
Lines 260 and 385 in the same test file use `"provider": "Google"`, which is inconsistent with line 176 (`"Google Generative AI"`) and with the canonical provider string used throughout the codebase (test_unified_models.py, test_models_api.py, test_models_enabled_providers.py, test_embedding_model_component.py, and the starter projects all use `"Google Generative AI"`). Update lines 260 and 385 to use the same provider label for consistency.
src/lfx/src/lfx/base/models/unified_models.py (2)
917-995: `max_tokens_field_name` isn't present in the options metadata used by the UI.
Line 991 looks only at `metadata.get("max_tokens_field_name")`, but `get_language_model_options` doesn't inject that field into each option's metadata. For models selected via the UI, this will fall back to `"max_tokens"` and ignore provider-specific names. Consider adding a fallback to provider metadata (or adding the field when building options) so the mapping is honored.
🔧 Suggested fix (fallback to provider metadata)
-    if max_tokens is not None:
-        max_tokens_param = metadata.get("max_tokens_field_name", "max_tokens")
-        kwargs[max_tokens_param] = max_tokens
+    if max_tokens is not None:
+        provider_meta = model_provider_metadata.get(provider, {})
+        max_tokens_param = (
+            metadata.get("max_tokens_field_name")
+            or provider_meta.get("max_tokens_field_name", "max_tokens")
+        )
+        kwargs[max_tokens_param] = max_tokens
63-92: Fix incorrect max_tokens field names for Ollama and IBM WatsonX.
The current field name mappings contain critical errors that will cause runtime failures:
- Ollama (line 85): should be `"num_predict"`, not `"max_tokens"` (LangChain ChatOllama reference)
- IBM WatsonX (line 91): should be `"max_completion_tokens"`, not `"max_tokens"` (ChatWatsonx chat models use the `max_completion_tokens` parameter)

The other providers are correct (OpenAI and Anthropic use `max_tokens`; Google Generative AI uses `max_output_tokens`). Passing unsupported kwargs will cause model instantiation to fail.
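For reference, a short hypothetical sketch of what the corrected mapping could look like; the dictionary name and surrounding structure are assumed, since only fragments of unified_models.py appear in this review.
# Hypothetical sketch of the corrected provider -> max-tokens field mapping (lines 63-92).
# Only the Ollama and IBM WatsonX entries change relative to the current code.
MAX_TOKENS_FIELD_NAMES: dict[str, str] = {
    "OpenAI": "max_tokens",
    "Anthropic": "max_tokens",
    "Google Generative AI": "max_output_tokens",
    "Ollama": "num_predict",                 # was "max_tokens"
    "IBM WatsonX": "max_completion_tokens",  # was "max_tokens"
}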
🤖 Fix all issues with AI agents
In `src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json`:
- Line 1571: The template is missing the serialized IntInput for max_tokens and
its entry in field_order, so the UI won't show the max_tokens input; update the
serialized template for LanguageModelComponent to include the IntInput
definition corresponding to the IntInput(name="max_tokens", display_name="Max
Tokens", ...) and add "max_tokens" to the component's field_order array (e.g.,
place it after "temperature") so build_model and update_build_config can find
and use getattr(self, "max_tokens", None) correctly.
In `src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json`:
- Line 2747: The starter project JSON adds a max_tokens input to the
LanguageModelComponent but the node template's field_order and template inputs
don't include "max_tokens", so the UI won't show it; update the JSON templates
for both LanguageModelComponent nodes to include "max_tokens" in their
field_order arrays and add a corresponding input entry in each node's template
inputs (matching the Multiline/IntInput definition used in
LanguageModelComponent) so the stored template and runtime component stay in
sync.
In `src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json`:
- Line 3209: The LanguageModelComponent now exposes a max_tokens input but the
serialized node templates are missing it; update the template and field_order
for both LanguageModelComponent-cPCaH and LanguageModelComponent-mMKmF to
include "max_tokens" so the control appears in the UI. Locate the serialized
nodes for LanguageModelComponent (ids LanguageModelComponent-cPCaH and
LanguageModelComponent-mMKmF) and add "max_tokens" to their template dict (with
corresponding default/value entry) and insert "max_tokens" into the field_order
array in the appropriate position consistent with other inputs (e.g., alongside
temperature/stream). Ensure the field name matches exactly "max_tokens" so
build_model/getattr(getter) picks it up.
In `src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json`:
- Line 1449: The template JSON is missing the max_tokens input metadata even
though LanguageModelComponent defines an IntInput named "max_tokens" and
build_model() passes it to get_llm(); add a corresponding "max_tokens" field to
the template's inputs section next to "temperature" including the same
properties (name "max_tokens", display_name "Max Tokens", info, advanced true)
so the UI exposes it and aligns with LanguageModelComponent and get_llm usage.
In `src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json`:
- Line 2255: The node template is missing a max_tokens input and therefore the
LanguageModelComponent's new max_tokens setting isn't configurable; add a new
IntInput definition named "max_tokens" (matching the
LanguageModelComponent.inputs entry) to each LanguageModelComponent node
template and insert "max_tokens" into the node's field_order array (ensure
placement matches other advanced fields like temperature/stream), updating both
LanguageModelComponent node entries so saved flows render and persist the
max_tokens value.
In `src/backend/base/langflow/initial_setup/starter_projects/Research Translation Loop.json`:
- Line 1091: The template JSON is missing the max_tokens field metadata even
though LanguageModelComponent defines an IntInput named "max_tokens"; add a
corresponding entry for max_tokens into the template's fields array and include
"max_tokens" in the template's field_order (place it near "temperature") so the
IntInput appears in the UI; update the template metadata to mirror the IntInput
properties (name="max_tokens", display_name, info, advanced flag if applicable)
to match the component's definition.
In `src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json`:
- Line 1102: The template metadata is missing the "max_tokens" field in its
field_order so the IntInput named max_tokens (declared in LanguageModelComponent
and used in build_model()) doesn't show in the UI; update the component/template
metadata's field_order array to include "max_tokens" immediately after
"temperature" so the field appears and respects the intended ordering.
🧹 Nitpick comments (3)
src/lfx/src/lfx/components/models_and_agents/language_model.py (1)
92-107: Add validation for `max_tokens` to enforce the positive-integer requirement across all providers.
Currently, any integer, including negatives and zero, can be passed through. All major LLM providers (OpenAI, Anthropic, Google Gemini, IBM watsonx) require `max_tokens` to be a positive integer (≥ 1). Add early validation to fail fast and improve UX rather than letting provider-side errors occur.
♻️ Proposed refactor
-    def build_model(self) -> LanguageModel:
-        return get_llm(
+    def build_model(self) -> LanguageModel:
+        max_tokens = getattr(self, "max_tokens", None)
+        if max_tokens is not None and max_tokens < 1:
+            raise ValueError("max_tokens must be a positive integer.")
+        return get_llm(
             model=self.model,
             user_id=self.user_id,
             api_key=self.api_key,
             temperature=self.temperature,
             stream=self.stream,
-            max_tokens=getattr(self, "max_tokens", None),
+            max_tokens=max_tokens,
             watsonx_url=getattr(self, "base_url_ibm_watsonx", None),
             watsonx_project_id=getattr(self, "project_id", None),
             ollama_base_url=getattr(self, "ollama_base_url", None),
         )

src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (1)
1508-1517: Optional: Consider adding `max_tokens` to `field_order` for consistency.
The `field_order` array doesn't include the new `max_tokens` field, while other advanced inputs like `temperature` and `stream` are listed. Since `max_tokens` is marked as `advanced=True`, it will still appear correctly in the advanced section, so this is purely a consistency suggestion.
src/lfx/src/lfx/_assets/component_index.json (1)
89759-89762: Normalize `max_tokens` before calling `get_llm`.
The default config leaves the value blank, so a defensive cast avoids passing empty strings or non-positive values downstream.
♻️ Proposed fix
     def build_model(self) -> LanguageModel:
-        return get_llm(
+        max_tokens = getattr(self, "max_tokens", None)
+        if not isinstance(max_tokens, int) or max_tokens <= 0:
+            max_tokens = None
+        return get_llm(
             model=self.model,
             user_id=self.user_id,
             api_key=self.api_key,
             temperature=self.temperature,
             stream=self.stream,
-            max_tokens=getattr(self, "max_tokens", None),
+            max_tokens=max_tokens,
             watsonx_url=getattr(self, "base_url_ibm_watsonx", None),
             watsonx_project_id=getattr(self, "project_id", None),
             ollama_base_url=getattr(self, "ollama_base_url", None),
         )
| "title_case": false, | ||
| "type": "code", | ||
| "value": "from lfx.base.models.model import LCModelComponent\nfrom lfx.base.models.unified_models import (\n get_language_model_options,\n get_llm,\n update_model_options_in_build_config,\n)\nfrom lfx.base.models.watsonx_constants import IBM_WATSONX_URLS\nfrom lfx.field_typing import LanguageModel\nfrom lfx.field_typing.range_spec import RangeSpec\nfrom lfx.inputs.inputs import BoolInput, DropdownInput, StrInput\nfrom lfx.io import MessageInput, ModelInput, MultilineInput, SecretStrInput, SliderInput\n\nDEFAULT_OLLAMA_URL = \"http://localhost:11434\"\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n ModelInput(\n name=\"model\",\n display_name=\"Language Model\",\n info=\"Select your model provider\",\n real_time_refresh=True,\n required=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n advanced=True,\n ),\n DropdownInput(\n name=\"base_url_ibm_watsonx\",\n display_name=\"watsonx API Endpoint\",\n info=\"The base URL of the API (IBM watsonx.ai only)\",\n options=IBM_WATSONX_URLS,\n value=IBM_WATSONX_URLS[0],\n show=False,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"project_id\",\n display_name=\"watsonx Project ID\",\n info=\"The project ID associated with the foundation model (IBM watsonx.ai only)\",\n show=False,\n required=False,\n ),\n MessageInput(\n name=\"ollama_base_url\",\n display_name=\"Ollama API URL\",\n info=f\"Endpoint of the Ollama API (Ollama only). 
Defaults to {DEFAULT_OLLAMA_URL}\",\n value=DEFAULT_OLLAMA_URL,\n show=False,\n real_time_refresh=True,\n load_from_db=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n return get_llm(\n model=self.model,\n user_id=self.user_id,\n api_key=self.api_key,\n temperature=self.temperature,\n stream=self.stream,\n watsonx_url=getattr(self, \"base_url_ibm_watsonx\", None),\n watsonx_project_id=getattr(self, \"project_id\", None),\n ollama_base_url=getattr(self, \"ollama_base_url\", None),\n )\n\n def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None):\n \"\"\"Dynamically update build config with user-filtered model options.\"\"\"\n # Update model options\n build_config = update_model_options_in_build_config(\n component=self,\n build_config=build_config,\n cache_key_prefix=\"language_model_options\",\n get_options_func=get_language_model_options,\n field_name=field_name,\n field_value=field_value,\n )\n\n # Show/hide provider-specific fields based on selected model\n # Get current model value - from field_value if model is being changed, otherwise from build_config\n current_model_value = field_value if field_name == \"model\" else build_config.get(\"model\", {}).get(\"value\")\n if isinstance(current_model_value, list) and len(current_model_value) > 0:\n selected_model = current_model_value[0]\n provider = selected_model.get(\"provider\", \"\")\n\n # Show/hide watsonx fields\n is_watsonx = provider == \"IBM WatsonX\"\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = is_watsonx\n build_config[\"project_id\"][\"show\"] = is_watsonx\n build_config[\"base_url_ibm_watsonx\"][\"required\"] = is_watsonx\n build_config[\"project_id\"][\"required\"] = is_watsonx\n\n # Show/hide Ollama fields\n is_ollama = provider == \"Ollama\"\n build_config[\"ollama_base_url\"][\"show\"] = is_ollama\n\n return build_config\n" | ||
| "value": "from lfx.base.models.model import LCModelComponent\nfrom lfx.base.models.unified_models import (\n get_language_model_options,\n get_llm,\n update_model_options_in_build_config,\n)\nfrom lfx.base.models.watsonx_constants import IBM_WATSONX_URLS\nfrom lfx.field_typing import LanguageModel\nfrom lfx.field_typing.range_spec import RangeSpec\nfrom lfx.inputs.inputs import BoolInput, DropdownInput, StrInput\nfrom lfx.io import IntInput, MessageInput, ModelInput, MultilineInput, SecretStrInput, SliderInput\n\nDEFAULT_OLLAMA_URL = \"http://localhost:11434\"\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n ModelInput(\n name=\"model\",\n display_name=\"Language Model\",\n info=\"Select your model provider\",\n real_time_refresh=True,\n required=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n advanced=True,\n ),\n DropdownInput(\n name=\"base_url_ibm_watsonx\",\n display_name=\"watsonx API Endpoint\",\n info=\"The base URL of the API (IBM watsonx.ai only)\",\n options=IBM_WATSONX_URLS,\n value=IBM_WATSONX_URLS[0],\n show=False,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"project_id\",\n display_name=\"watsonx Project ID\",\n info=\"The project ID associated with the foundation model (IBM watsonx.ai only)\",\n show=False,\n required=False,\n ),\n MessageInput(\n name=\"ollama_base_url\",\n display_name=\"Ollama API URL\",\n info=f\"Endpoint of the Ollama API (Ollama only). Defaults to {DEFAULT_OLLAMA_URL}\",\n value=DEFAULT_OLLAMA_URL,\n show=False,\n real_time_refresh=True,\n load_from_db=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n info=\"Maximum number of tokens to generate. 
Field name varies by provider.\",\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n return get_llm(\n model=self.model,\n user_id=self.user_id,\n api_key=self.api_key,\n temperature=self.temperature,\n stream=self.stream,\n max_tokens=getattr(self, \"max_tokens\", None),\n watsonx_url=getattr(self, \"base_url_ibm_watsonx\", None),\n watsonx_project_id=getattr(self, \"project_id\", None),\n ollama_base_url=getattr(self, \"ollama_base_url\", None),\n )\n\n def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None):\n \"\"\"Dynamically update build config with user-filtered model options.\"\"\"\n # Update model options\n build_config = update_model_options_in_build_config(\n component=self,\n build_config=build_config,\n cache_key_prefix=\"language_model_options\",\n get_options_func=get_language_model_options,\n field_name=field_name,\n field_value=field_value,\n )\n\n # Show/hide provider-specific fields based on selected model\n # Get current model value - from field_value if model is being changed, otherwise from build_config\n current_model_value = field_value if field_name == \"model\" else build_config.get(\"model\", {}).get(\"value\")\n if isinstance(current_model_value, list) and len(current_model_value) > 0:\n selected_model = current_model_value[0]\n provider = selected_model.get(\"provider\", \"\")\n\n # Show/hide watsonx fields\n is_watsonx = provider == \"IBM WatsonX\"\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = is_watsonx\n build_config[\"project_id\"][\"show\"] = is_watsonx\n build_config[\"base_url_ibm_watsonx\"][\"required\"] = is_watsonx\n build_config[\"project_id\"][\"required\"] = is_watsonx\n\n # Show/hide Ollama fields\n is_ollama = provider == \"Ollama\"\n build_config[\"ollama_base_url\"][\"show\"] = is_ollama\n\n return build_config\n" |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
python - <<'PY'
import json
from pathlib import Path
path = Path("src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json")
data = json.loads(path.read_text())
for node in data["data"]["nodes"]:
if str(node.get("id","")).startswith("LanguageModelComponent-"):
tmpl = node["data"]["node"]["template"]
print("node id:", node["id"])
print("has max_tokens key:", "max_tokens" in tmpl)
print("field_order:", node["data"]["node"].get("field_order", []))
if "max_tokens" in tmpl:
print("max_tokens config:", json.dumps(tmpl["max_tokens"], indent=2))
PY
Repository: langflow-ai/langflow
Length of output: 150
🏁 Script executed:
#!/bin/bash
# Check file size first
wc -l "src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json"
# Search for max_tokens in the file
grep -n "max_tokens" "src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json" | head -20Repository: langflow-ai/langflow
Length of output: 5863
🏁 Script executed:
#!/bin/bash
# Extract lines around the LanguageModelComponent node to see the JSON template structure
sed -n '1560,1797p' "src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json" | tail -100
Repository: langflow-ai/langflow
Length of output: 3298
🏁 Script executed:
#!/bin/bash
# Search for max_tokens and field_order specifically in the node
grep -n '"max_tokens"' "src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json"
echo "---"
grep -n '"field_order"' "src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json"Repository: langflow-ai/langflow
Length of output: 267
🏁 Script executed:
#!/bin/bash
# Extract context around line 1420 to see the field_order
sed -n '1410,1450p' "src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json"
Repository: langflow-ai/langflow
Length of output: 1383
Add max_tokens field to the template and include it in field_order.
The embedded LanguageModelComponent code defines an IntInput for max_tokens, but the serialized template is missing both the field definition in the template object and the max_tokens entry in field_order (which currently ends with "temperature"). Without these, the UI will not expose the max_tokens input to users.
🤖 Prompt for AI Agents
In `src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json` at line 1571:
The template is missing the serialized IntInput for max_tokens and
its entry in field_order, so the UI won't show the max_tokens input; update the
serialized template for LanguageModelComponent to include the IntInput
definition corresponding to the IntInput(name="max_tokens", display_name="Max
Tokens", ...) and add "max_tokens" to the component's field_order array (e.g.,
place it after "temperature") so build_model and update_build_config can find
and use getattr(self, "max_tokens", None) correctly.
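Since the same template/field_order gap recurs across the other starter projects flagged above, a hedged sketch of a one-off maintenance script (modeled on the verification scripts in this review) is shown below; the field metadata dictionary is an assumption, and regenerating the starter project assets from the component definitions remains the canonical fix.
#!/usr/bin/env python
# Hypothetical helper: add a minimal "max_tokens" entry to every LanguageModelComponent
# template in the starter projects and append it to field_order (after "temperature",
# which is currently the last entry). The field metadata below is assumed, not taken
# from the real component schema.
import json
from pathlib import Path

MAX_TOKENS_FIELD = {
    "_input_type": "IntInput",
    "advanced": True,
    "display_name": "Max Tokens",
    "info": "Maximum number of tokens to generate. Field name varies by provider.",
    "name": "max_tokens",
    "required": False,
    "show": True,
    "type": "int",
    "value": "",
}

for path in Path("src/backend/base/langflow/initial_setup/starter_projects").glob("*.json"):
    data = json.loads(path.read_text())
    touched = False
    for node in data["data"]["nodes"]:
        if not str(node.get("id", "")).startswith("LanguageModelComponent-"):
            continue
        spec = node["data"]["node"]
        if "max_tokens" not in spec["template"]:
            spec["template"]["max_tokens"] = dict(MAX_TOKENS_FIELD)
            touched = True
        order = spec.setdefault("field_order", [])
        if "max_tokens" not in order:
            order.append("max_tokens")
            touched = True
    if touched:
        path.write_text(json.dumps(data, indent=2) + "\n")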
| "title_case": false, | ||
| "type": "code", | ||
| "value": "from lfx.base.models.model import LCModelComponent\nfrom lfx.base.models.unified_models import (\n get_language_model_options,\n get_llm,\n update_model_options_in_build_config,\n)\nfrom lfx.base.models.watsonx_constants import IBM_WATSONX_URLS\nfrom lfx.field_typing import LanguageModel\nfrom lfx.field_typing.range_spec import RangeSpec\nfrom lfx.inputs.inputs import BoolInput, DropdownInput, StrInput\nfrom lfx.io import MessageInput, ModelInput, MultilineInput, SecretStrInput, SliderInput\n\nDEFAULT_OLLAMA_URL = \"http://localhost:11434\"\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n ModelInput(\n name=\"model\",\n display_name=\"Language Model\",\n info=\"Select your model provider\",\n real_time_refresh=True,\n required=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n advanced=True,\n ),\n DropdownInput(\n name=\"base_url_ibm_watsonx\",\n display_name=\"watsonx API Endpoint\",\n info=\"The base URL of the API (IBM watsonx.ai only)\",\n options=IBM_WATSONX_URLS,\n value=IBM_WATSONX_URLS[0],\n show=False,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"project_id\",\n display_name=\"watsonx Project ID\",\n info=\"The project ID associated with the foundation model (IBM watsonx.ai only)\",\n show=False,\n required=False,\n ),\n MessageInput(\n name=\"ollama_base_url\",\n display_name=\"Ollama API URL\",\n info=f\"Endpoint of the Ollama API (Ollama only). 
Defaults to {DEFAULT_OLLAMA_URL}\",\n value=DEFAULT_OLLAMA_URL,\n show=False,\n real_time_refresh=True,\n load_from_db=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n return get_llm(\n model=self.model,\n user_id=self.user_id,\n api_key=self.api_key,\n temperature=self.temperature,\n stream=self.stream,\n watsonx_url=getattr(self, \"base_url_ibm_watsonx\", None),\n watsonx_project_id=getattr(self, \"project_id\", None),\n ollama_base_url=getattr(self, \"ollama_base_url\", None),\n )\n\n def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None):\n \"\"\"Dynamically update build config with user-filtered model options.\"\"\"\n # Update model options\n build_config = update_model_options_in_build_config(\n component=self,\n build_config=build_config,\n cache_key_prefix=\"language_model_options\",\n get_options_func=get_language_model_options,\n field_name=field_name,\n field_value=field_value,\n )\n\n # Show/hide provider-specific fields based on selected model\n # Get current model value - from field_value if model is being changed, otherwise from build_config\n current_model_value = field_value if field_name == \"model\" else build_config.get(\"model\", {}).get(\"value\")\n if isinstance(current_model_value, list) and len(current_model_value) > 0:\n selected_model = current_model_value[0]\n provider = selected_model.get(\"provider\", \"\")\n\n # Show/hide watsonx fields\n is_watsonx = provider == \"IBM WatsonX\"\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = is_watsonx\n build_config[\"project_id\"][\"show\"] = is_watsonx\n build_config[\"base_url_ibm_watsonx\"][\"required\"] = is_watsonx\n build_config[\"project_id\"][\"required\"] = is_watsonx\n\n # Show/hide Ollama fields\n is_ollama = provider == \"Ollama\"\n build_config[\"ollama_base_url\"][\"show\"] = is_ollama\n\n return build_config\n" | ||
| "value": "from lfx.base.models.model import LCModelComponent\nfrom lfx.base.models.unified_models import (\n get_language_model_options,\n get_llm,\n update_model_options_in_build_config,\n)\nfrom lfx.base.models.watsonx_constants import IBM_WATSONX_URLS\nfrom lfx.field_typing import LanguageModel\nfrom lfx.field_typing.range_spec import RangeSpec\nfrom lfx.inputs.inputs import BoolInput, DropdownInput, StrInput\nfrom lfx.io import IntInput, MessageInput, ModelInput, MultilineInput, SecretStrInput, SliderInput\n\nDEFAULT_OLLAMA_URL = \"http://localhost:11434\"\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n ModelInput(\n name=\"model\",\n display_name=\"Language Model\",\n info=\"Select your model provider\",\n real_time_refresh=True,\n required=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n advanced=True,\n ),\n DropdownInput(\n name=\"base_url_ibm_watsonx\",\n display_name=\"watsonx API Endpoint\",\n info=\"The base URL of the API (IBM watsonx.ai only)\",\n options=IBM_WATSONX_URLS,\n value=IBM_WATSONX_URLS[0],\n show=False,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"project_id\",\n display_name=\"watsonx Project ID\",\n info=\"The project ID associated with the foundation model (IBM watsonx.ai only)\",\n show=False,\n required=False,\n ),\n MessageInput(\n name=\"ollama_base_url\",\n display_name=\"Ollama API URL\",\n info=f\"Endpoint of the Ollama API (Ollama only). Defaults to {DEFAULT_OLLAMA_URL}\",\n value=DEFAULT_OLLAMA_URL,\n show=False,\n real_time_refresh=True,\n load_from_db=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n info=\"Maximum number of tokens to generate. 
Field name varies by provider.\",\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n return get_llm(\n model=self.model,\n user_id=self.user_id,\n api_key=self.api_key,\n temperature=self.temperature,\n stream=self.stream,\n max_tokens=getattr(self, \"max_tokens\", None),\n watsonx_url=getattr(self, \"base_url_ibm_watsonx\", None),\n watsonx_project_id=getattr(self, \"project_id\", None),\n ollama_base_url=getattr(self, \"ollama_base_url\", None),\n )\n\n def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None):\n \"\"\"Dynamically update build config with user-filtered model options.\"\"\"\n # Update model options\n build_config = update_model_options_in_build_config(\n component=self,\n build_config=build_config,\n cache_key_prefix=\"language_model_options\",\n get_options_func=get_language_model_options,\n field_name=field_name,\n field_value=field_value,\n )\n\n # Show/hide provider-specific fields based on selected model\n # Get current model value - from field_value if model is being changed, otherwise from build_config\n current_model_value = field_value if field_name == \"model\" else build_config.get(\"model\", {}).get(\"value\")\n if isinstance(current_model_value, list) and len(current_model_value) > 0:\n selected_model = current_model_value[0]\n provider = selected_model.get(\"provider\", \"\")\n\n # Show/hide watsonx fields\n is_watsonx = provider == \"IBM WatsonX\"\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = is_watsonx\n build_config[\"project_id\"][\"show\"] = is_watsonx\n build_config[\"base_url_ibm_watsonx\"][\"required\"] = is_watsonx\n build_config[\"project_id\"][\"required\"] = is_watsonx\n\n # Show/hide Ollama fields\n is_ollama = provider == \"Ollama\"\n build_config[\"ollama_base_url\"][\"show\"] = is_ollama\n\n return build_config\n" |
Starter project template still omits max_tokens.
Line 2747 adds the max_tokens input in the embedded component code, but the node template/field_order in this JSON doesn’t include a max_tokens field. If the starter project uses the stored template (as it does for other inputs), the UI won’t expose the new control. Please add max_tokens to field_order and the template inputs for both LanguageModelComponent nodes (or confirm templates are rehydrated on load).
🤖 Prompt for AI Agents
In `src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json` at line 2747:
The starter project JSON adds a max_tokens input
to the LanguageModelComponent but the node template's field_order and template
inputs don't include "max_tokens", so the UI won't show it; update the JSON
templates for both LanguageModelComponent nodes to include "max_tokens" in their
field_order arrays and add a corresponding input entry in each node's template
inputs (matching the Multiline/IntInput definition used in
LanguageModelComponent) so the stored template and runtime component stay in
sync.
| "title_case": false, | ||
| "type": "code", | ||
| "value": "from lfx.base.models.model import LCModelComponent\nfrom lfx.base.models.unified_models import (\n get_language_model_options,\n get_llm,\n update_model_options_in_build_config,\n)\nfrom lfx.base.models.watsonx_constants import IBM_WATSONX_URLS\nfrom lfx.field_typing import LanguageModel\nfrom lfx.field_typing.range_spec import RangeSpec\nfrom lfx.inputs.inputs import BoolInput, DropdownInput, StrInput\nfrom lfx.io import MessageInput, ModelInput, MultilineInput, SecretStrInput, SliderInput\n\nDEFAULT_OLLAMA_URL = \"http://localhost:11434\"\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n ModelInput(\n name=\"model\",\n display_name=\"Language Model\",\n info=\"Select your model provider\",\n real_time_refresh=True,\n required=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n advanced=True,\n ),\n DropdownInput(\n name=\"base_url_ibm_watsonx\",\n display_name=\"watsonx API Endpoint\",\n info=\"The base URL of the API (IBM watsonx.ai only)\",\n options=IBM_WATSONX_URLS,\n value=IBM_WATSONX_URLS[0],\n show=False,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"project_id\",\n display_name=\"watsonx Project ID\",\n info=\"The project ID associated with the foundation model (IBM watsonx.ai only)\",\n show=False,\n required=False,\n ),\n MessageInput(\n name=\"ollama_base_url\",\n display_name=\"Ollama API URL\",\n info=f\"Endpoint of the Ollama API (Ollama only). 
Defaults to {DEFAULT_OLLAMA_URL}\",\n value=DEFAULT_OLLAMA_URL,\n show=False,\n real_time_refresh=True,\n load_from_db=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n return get_llm(\n model=self.model,\n user_id=self.user_id,\n api_key=self.api_key,\n temperature=self.temperature,\n stream=self.stream,\n watsonx_url=getattr(self, \"base_url_ibm_watsonx\", None),\n watsonx_project_id=getattr(self, \"project_id\", None),\n ollama_base_url=getattr(self, \"ollama_base_url\", None),\n )\n\n def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None):\n \"\"\"Dynamically update build config with user-filtered model options.\"\"\"\n # Update model options\n build_config = update_model_options_in_build_config(\n component=self,\n build_config=build_config,\n cache_key_prefix=\"language_model_options\",\n get_options_func=get_language_model_options,\n field_name=field_name,\n field_value=field_value,\n )\n\n # Show/hide provider-specific fields based on selected model\n # Get current model value - from field_value if model is being changed, otherwise from build_config\n current_model_value = field_value if field_name == \"model\" else build_config.get(\"model\", {}).get(\"value\")\n if isinstance(current_model_value, list) and len(current_model_value) > 0:\n selected_model = current_model_value[0]\n provider = selected_model.get(\"provider\", \"\")\n\n # Show/hide watsonx fields\n is_watsonx = provider == \"IBM WatsonX\"\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = is_watsonx\n build_config[\"project_id\"][\"show\"] = is_watsonx\n build_config[\"base_url_ibm_watsonx\"][\"required\"] = is_watsonx\n build_config[\"project_id\"][\"required\"] = is_watsonx\n\n # Show/hide Ollama fields\n is_ollama = provider == \"Ollama\"\n build_config[\"ollama_base_url\"][\"show\"] = is_ollama\n\n return build_config\n" | ||
| "value": "from lfx.base.models.model import LCModelComponent\nfrom lfx.base.models.unified_models import (\n get_language_model_options,\n get_llm,\n update_model_options_in_build_config,\n)\nfrom lfx.base.models.watsonx_constants import IBM_WATSONX_URLS\nfrom lfx.field_typing import LanguageModel\nfrom lfx.field_typing.range_spec import RangeSpec\nfrom lfx.inputs.inputs import BoolInput, DropdownInput, StrInput\nfrom lfx.io import IntInput, MessageInput, ModelInput, MultilineInput, SecretStrInput, SliderInput\n\nDEFAULT_OLLAMA_URL = \"http://localhost:11434\"\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n ModelInput(\n name=\"model\",\n display_name=\"Language Model\",\n info=\"Select your model provider\",\n real_time_refresh=True,\n required=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n advanced=True,\n ),\n DropdownInput(\n name=\"base_url_ibm_watsonx\",\n display_name=\"watsonx API Endpoint\",\n info=\"The base URL of the API (IBM watsonx.ai only)\",\n options=IBM_WATSONX_URLS,\n value=IBM_WATSONX_URLS[0],\n show=False,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"project_id\",\n display_name=\"watsonx Project ID\",\n info=\"The project ID associated with the foundation model (IBM watsonx.ai only)\",\n show=False,\n required=False,\n ),\n MessageInput(\n name=\"ollama_base_url\",\n display_name=\"Ollama API URL\",\n info=f\"Endpoint of the Ollama API (Ollama only). Defaults to {DEFAULT_OLLAMA_URL}\",\n value=DEFAULT_OLLAMA_URL,\n show=False,\n real_time_refresh=True,\n load_from_db=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n info=\"Maximum number of tokens to generate. 
Field name varies by provider.\",\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n return get_llm(\n model=self.model,\n user_id=self.user_id,\n api_key=self.api_key,\n temperature=self.temperature,\n stream=self.stream,\n max_tokens=getattr(self, \"max_tokens\", None),\n watsonx_url=getattr(self, \"base_url_ibm_watsonx\", None),\n watsonx_project_id=getattr(self, \"project_id\", None),\n ollama_base_url=getattr(self, \"ollama_base_url\", None),\n )\n\n def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None):\n \"\"\"Dynamically update build config with user-filtered model options.\"\"\"\n # Update model options\n build_config = update_model_options_in_build_config(\n component=self,\n build_config=build_config,\n cache_key_prefix=\"language_model_options\",\n get_options_func=get_language_model_options,\n field_name=field_name,\n field_value=field_value,\n )\n\n # Show/hide provider-specific fields based on selected model\n # Get current model value - from field_value if model is being changed, otherwise from build_config\n current_model_value = field_value if field_name == \"model\" else build_config.get(\"model\", {}).get(\"value\")\n if isinstance(current_model_value, list) and len(current_model_value) > 0:\n selected_model = current_model_value[0]\n provider = selected_model.get(\"provider\", \"\")\n\n # Show/hide watsonx fields\n is_watsonx = provider == \"IBM WatsonX\"\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = is_watsonx\n build_config[\"project_id\"][\"show\"] = is_watsonx\n build_config[\"base_url_ibm_watsonx\"][\"required\"] = is_watsonx\n build_config[\"project_id\"][\"required\"] = is_watsonx\n\n # Show/hide Ollama fields\n is_ollama = provider == \"Ollama\"\n build_config[\"ollama_base_url\"][\"show\"] = is_ollama\n\n return build_config\n" |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
python - <<'PY'
import json
from pathlib import Path
path = Path("src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json")
data = json.loads(path.read_text())
lm_nodes = [n for n in data["data"]["nodes"] if n.get("data", {}).get("type") == "LanguageModelComponent"]
for n in lm_nodes:
node = n["data"]["node"]
tmpl = node.get("template", {})
field_order = node.get("field_order", [])
print(
n["id"],
"max_tokens_in_template:", "max_tokens" in tmpl,
"max_tokens_in_field_order:", "max_tokens" in field_order,
)
PY
Repository: langflow-ai/langflow
Length of output: 247
Update template and field_order to expose max_tokens for LanguageModelComponent nodes.
The embedded code at line 3209 now defines and uses max_tokens, but both LanguageModelComponent nodes (LanguageModelComponent-cPCaH and LanguageModelComponent-mMKmF) are missing max_tokens from their serialized template and field_order. This will prevent the max_tokens control from appearing in the UI. Add max_tokens to both nodes' template and field_order to match the updated code.
🤖 Prompt for AI Agents
In `@src/backend/base/langflow/initial_setup/starter_projects/Meeting`
Summary.json at line 3209, The LanguageModelComponent now exposes a max_tokens
input but the serialized node templates are missing it; update the template and
field_order for both LanguageModelComponent-cPCaH and
LanguageModelComponent-mMKmF to include "max_tokens" so the control appears in
the UI. Locate the serialized nodes for LanguageModelComponent (ids
LanguageModelComponent-cPCaH and LanguageModelComponent-mMKmF) and add
"max_tokens" to their template dict (with corresponding default/value entry) and
insert "max_tokens" into the field_order array in the appropriate position
consistent with other inputs (e.g., alongside temperature/stream). Ensure the
field name matches exactly "max_tokens" so build_model/getattr(getter) picks it
up.
| "title_case": false, | ||
| "type": "code", | ||
| "value": "from lfx.base.models.model import LCModelComponent\nfrom lfx.base.models.unified_models import (\n get_language_model_options,\n get_llm,\n update_model_options_in_build_config,\n)\nfrom lfx.base.models.watsonx_constants import IBM_WATSONX_URLS\nfrom lfx.field_typing import LanguageModel\nfrom lfx.field_typing.range_spec import RangeSpec\nfrom lfx.inputs.inputs import BoolInput, DropdownInput, StrInput\nfrom lfx.io import MessageInput, ModelInput, MultilineInput, SecretStrInput, SliderInput\n\nDEFAULT_OLLAMA_URL = \"http://localhost:11434\"\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n ModelInput(\n name=\"model\",\n display_name=\"Language Model\",\n info=\"Select your model provider\",\n real_time_refresh=True,\n required=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n advanced=True,\n ),\n DropdownInput(\n name=\"base_url_ibm_watsonx\",\n display_name=\"watsonx API Endpoint\",\n info=\"The base URL of the API (IBM watsonx.ai only)\",\n options=IBM_WATSONX_URLS,\n value=IBM_WATSONX_URLS[0],\n show=False,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"project_id\",\n display_name=\"watsonx Project ID\",\n info=\"The project ID associated with the foundation model (IBM watsonx.ai only)\",\n show=False,\n required=False,\n ),\n MessageInput(\n name=\"ollama_base_url\",\n display_name=\"Ollama API URL\",\n info=f\"Endpoint of the Ollama API (Ollama only). 
Defaults to {DEFAULT_OLLAMA_URL}\",\n value=DEFAULT_OLLAMA_URL,\n show=False,\n real_time_refresh=True,\n load_from_db=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n return get_llm(\n model=self.model,\n user_id=self.user_id,\n api_key=self.api_key,\n temperature=self.temperature,\n stream=self.stream,\n watsonx_url=getattr(self, \"base_url_ibm_watsonx\", None),\n watsonx_project_id=getattr(self, \"project_id\", None),\n ollama_base_url=getattr(self, \"ollama_base_url\", None),\n )\n\n def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None):\n \"\"\"Dynamically update build config with user-filtered model options.\"\"\"\n # Update model options\n build_config = update_model_options_in_build_config(\n component=self,\n build_config=build_config,\n cache_key_prefix=\"language_model_options\",\n get_options_func=get_language_model_options,\n field_name=field_name,\n field_value=field_value,\n )\n\n # Show/hide provider-specific fields based on selected model\n # Get current model value - from field_value if model is being changed, otherwise from build_config\n current_model_value = field_value if field_name == \"model\" else build_config.get(\"model\", {}).get(\"value\")\n if isinstance(current_model_value, list) and len(current_model_value) > 0:\n selected_model = current_model_value[0]\n provider = selected_model.get(\"provider\", \"\")\n\n # Show/hide watsonx fields\n is_watsonx = provider == \"IBM WatsonX\"\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = is_watsonx\n build_config[\"project_id\"][\"show\"] = is_watsonx\n build_config[\"base_url_ibm_watsonx\"][\"required\"] = is_watsonx\n build_config[\"project_id\"][\"required\"] = is_watsonx\n\n # Show/hide Ollama fields\n is_ollama = provider == \"Ollama\"\n build_config[\"ollama_base_url\"][\"show\"] = is_ollama\n\n return build_config\n" | ||
| "value": "from lfx.base.models.model import LCModelComponent\nfrom lfx.base.models.unified_models import (\n get_language_model_options,\n get_llm,\n update_model_options_in_build_config,\n)\nfrom lfx.base.models.watsonx_constants import IBM_WATSONX_URLS\nfrom lfx.field_typing import LanguageModel\nfrom lfx.field_typing.range_spec import RangeSpec\nfrom lfx.inputs.inputs import BoolInput, DropdownInput, StrInput\nfrom lfx.io import IntInput, MessageInput, ModelInput, MultilineInput, SecretStrInput, SliderInput\n\nDEFAULT_OLLAMA_URL = \"http://localhost:11434\"\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n ModelInput(\n name=\"model\",\n display_name=\"Language Model\",\n info=\"Select your model provider\",\n real_time_refresh=True,\n required=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n advanced=True,\n ),\n DropdownInput(\n name=\"base_url_ibm_watsonx\",\n display_name=\"watsonx API Endpoint\",\n info=\"The base URL of the API (IBM watsonx.ai only)\",\n options=IBM_WATSONX_URLS,\n value=IBM_WATSONX_URLS[0],\n show=False,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"project_id\",\n display_name=\"watsonx Project ID\",\n info=\"The project ID associated with the foundation model (IBM watsonx.ai only)\",\n show=False,\n required=False,\n ),\n MessageInput(\n name=\"ollama_base_url\",\n display_name=\"Ollama API URL\",\n info=f\"Endpoint of the Ollama API (Ollama only). Defaults to {DEFAULT_OLLAMA_URL}\",\n value=DEFAULT_OLLAMA_URL,\n show=False,\n real_time_refresh=True,\n load_from_db=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n info=\"Maximum number of tokens to generate. 
Field name varies by provider.\",\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n return get_llm(\n model=self.model,\n user_id=self.user_id,\n api_key=self.api_key,\n temperature=self.temperature,\n stream=self.stream,\n max_tokens=getattr(self, \"max_tokens\", None),\n watsonx_url=getattr(self, \"base_url_ibm_watsonx\", None),\n watsonx_project_id=getattr(self, \"project_id\", None),\n ollama_base_url=getattr(self, \"ollama_base_url\", None),\n )\n\n def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None):\n \"\"\"Dynamically update build config with user-filtered model options.\"\"\"\n # Update model options\n build_config = update_model_options_in_build_config(\n component=self,\n build_config=build_config,\n cache_key_prefix=\"language_model_options\",\n get_options_func=get_language_model_options,\n field_name=field_name,\n field_value=field_value,\n )\n\n # Show/hide provider-specific fields based on selected model\n # Get current model value - from field_value if model is being changed, otherwise from build_config\n current_model_value = field_value if field_name == \"model\" else build_config.get(\"model\", {}).get(\"value\")\n if isinstance(current_model_value, list) and len(current_model_value) > 0:\n selected_model = current_model_value[0]\n provider = selected_model.get(\"provider\", \"\")\n\n # Show/hide watsonx fields\n is_watsonx = provider == \"IBM WatsonX\"\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = is_watsonx\n build_config[\"project_id\"][\"show\"] = is_watsonx\n build_config[\"base_url_ibm_watsonx\"][\"required\"] = is_watsonx\n build_config[\"project_id\"][\"required\"] = is_watsonx\n\n # Show/hide Ollama fields\n is_ollama = provider == \"Ollama\"\n build_config[\"ollama_base_url\"][\"show\"] = is_ollama\n\n return build_config\n" |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if the file exists and get its size
if [ -f "src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json" ]; then
wc -l "src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json"
else
echo "File not found"
fi
Repository: langflow-ai/langflow
Length of output: 145
🏁 Script executed:
#!/bin/bash
# Search for max_tokens in the JSON file
rg -n "max_tokens" "src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json"Repository: langflow-ai/langflow
Length of output: 5784
🏁 Script executed:
#!/bin/bash
# Search for "field_order" to understand template structure
rg -n "field_order" "src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json" | head -20Repository: langflow-ai/langflow
Length of output: 229
🏁 Script executed:
#!/bin/bash
# Read lines around 1449 to find the node ID and template structure
sed -n '1400,1550p' "src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json" | cat -n
Repository: langflow-ai/langflow
Length of output: 12494
🏁 Script executed:
#!/bin/bash
# Search for the node id that contains this code (likely before line 1449)
sed -n '1350,1450p' "src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json" | grep -E '"id"|"template"|"field_order"' | head -20
Repository: langflow-ai/langflow
Length of output: 89
🏁 Script executed:
#!/bin/bash
# Search for max_tokens field in the template
sed -n '1300,1700p' "src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json" | grep -A 20 '"max_tokens"' | head -30
Repository: langflow-ai/langflow
Length of output: 46
🏁 Script executed:
#!/bin/bash
# Find the field_order for this node (should be after the template definition and before the next node)
sed -n '1550,1650p' "src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json" | cat -n
Repository: langflow-ai/langflow
Length of output: 4349
🏁 Script executed:
#!/bin/bash
# Search for field_order in the context of this node
sed -n '1650,1700p' "src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json" | grep -B 5 -A 15 "field_order"
Repository: langflow-ai/langflow
Length of output: 46
🏁 Script executed:
#!/bin/bash
# Let's also verify by checking what fields are defined in the template vs what the code defines
echo "=== Fields in template (from previous output) ==="
echo "base_url_ibm_watsonx, code, input_value, model, ollama_base_url, project_id, stream, system_message, temperature"
echo ""
echo "=== Fields defined in code string ==="
rg -o '"name":"[^"]+' "src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json" | grep -A 1 'IntInput\|StrInput\|MessageInput\|ModelInput\|SliderInput\|MultilineInput\|BoolInput\|SecretStrInput\|DropdownInput' | grep 'name' | head -20Repository: langflow-ai/langflow
Length of output: 265
🏁 Script executed:
#!/bin/bash
# Search for max_tokens in the entire JSON file to see if it's defined anywhere
grep -n '"max_tokens"' "src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json"Repository: langflow-ai/langflow
Length of output: 46
🏁 Script executed:
#!/bin/bash
# Get a larger context to see the full template structure and what comes after temperature
sed -n '1600,1680p' "src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json"
Repository: langflow-ai/langflow
Length of output: 2489
Add the max_tokens field to the template metadata.
The Language Model code string defines max_tokens as an IntInput with advanced=True, and the build_model() method passes it to get_llm(), but the field is not exposed in the template section of the JSON. Add the max_tokens field definition to the template alongside temperature to make it available in the UI.
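A minimal sketch of the missing entry, mirroring the IntInput declared in the code string (the key set is an assumption modeled on the neighboring temperature field; align it with how this project serializes its other inputs):

"max_tokens": {
  "_input_type": "IntInput",
  "advanced": true,
  "display_name": "Max Tokens",
  "info": "Maximum number of tokens to generate. Field name varies by provider.",
  "name": "max_tokens",
  "required": false,
  "show": true,
  "type": "int",
  "value": null
}

If the node also lists its inputs in a field_order array, "max_tokens" should be appended after "temperature" so the new input is ordered consistently.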
🤖 Prompt for AI Agents
In `src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json` at line 1449, the template JSON is missing the max_tokens input metadata even though LanguageModelComponent defines an IntInput named "max_tokens" and build_model() passes it to get_llm(). Add a corresponding "max_tokens" field to the template's inputs section next to "temperature", with the same properties (name "max_tokens", display_name "Max Tokens", info, advanced true), so the UI exposes it and stays aligned with LanguageModelComponent and get_llm usage.
| "title_case": false, | ||
| "type": "code", | ||
| "value": "from lfx.base.models.model import LCModelComponent\nfrom lfx.base.models.unified_models import (\n get_language_model_options,\n get_llm,\n update_model_options_in_build_config,\n)\nfrom lfx.base.models.watsonx_constants import IBM_WATSONX_URLS\nfrom lfx.field_typing import LanguageModel\nfrom lfx.field_typing.range_spec import RangeSpec\nfrom lfx.inputs.inputs import BoolInput, DropdownInput, StrInput\nfrom lfx.io import MessageInput, ModelInput, MultilineInput, SecretStrInput, SliderInput\n\nDEFAULT_OLLAMA_URL = \"http://localhost:11434\"\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n ModelInput(\n name=\"model\",\n display_name=\"Language Model\",\n info=\"Select your model provider\",\n real_time_refresh=True,\n required=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n advanced=True,\n ),\n DropdownInput(\n name=\"base_url_ibm_watsonx\",\n display_name=\"watsonx API Endpoint\",\n info=\"The base URL of the API (IBM watsonx.ai only)\",\n options=IBM_WATSONX_URLS,\n value=IBM_WATSONX_URLS[0],\n show=False,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"project_id\",\n display_name=\"watsonx Project ID\",\n info=\"The project ID associated with the foundation model (IBM watsonx.ai only)\",\n show=False,\n required=False,\n ),\n MessageInput(\n name=\"ollama_base_url\",\n display_name=\"Ollama API URL\",\n info=f\"Endpoint of the Ollama API (Ollama only). 
Defaults to {DEFAULT_OLLAMA_URL}\",\n value=DEFAULT_OLLAMA_URL,\n show=False,\n real_time_refresh=True,\n load_from_db=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n return get_llm(\n model=self.model,\n user_id=self.user_id,\n api_key=self.api_key,\n temperature=self.temperature,\n stream=self.stream,\n watsonx_url=getattr(self, \"base_url_ibm_watsonx\", None),\n watsonx_project_id=getattr(self, \"project_id\", None),\n ollama_base_url=getattr(self, \"ollama_base_url\", None),\n )\n\n def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None):\n \"\"\"Dynamically update build config with user-filtered model options.\"\"\"\n # Update model options\n build_config = update_model_options_in_build_config(\n component=self,\n build_config=build_config,\n cache_key_prefix=\"language_model_options\",\n get_options_func=get_language_model_options,\n field_name=field_name,\n field_value=field_value,\n )\n\n # Show/hide provider-specific fields based on selected model\n # Get current model value - from field_value if model is being changed, otherwise from build_config\n current_model_value = field_value if field_name == \"model\" else build_config.get(\"model\", {}).get(\"value\")\n if isinstance(current_model_value, list) and len(current_model_value) > 0:\n selected_model = current_model_value[0]\n provider = selected_model.get(\"provider\", \"\")\n\n # Show/hide watsonx fields\n is_watsonx = provider == \"IBM WatsonX\"\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = is_watsonx\n build_config[\"project_id\"][\"show\"] = is_watsonx\n build_config[\"base_url_ibm_watsonx\"][\"required\"] = is_watsonx\n build_config[\"project_id\"][\"required\"] = is_watsonx\n\n # Show/hide Ollama fields\n is_ollama = provider == \"Ollama\"\n build_config[\"ollama_base_url\"][\"show\"] = is_ollama\n\n return build_config\n" | ||
| "value": "from lfx.base.models.model import LCModelComponent\nfrom lfx.base.models.unified_models import (\n get_language_model_options,\n get_llm,\n update_model_options_in_build_config,\n)\nfrom lfx.base.models.watsonx_constants import IBM_WATSONX_URLS\nfrom lfx.field_typing import LanguageModel\nfrom lfx.field_typing.range_spec import RangeSpec\nfrom lfx.inputs.inputs import BoolInput, DropdownInput, StrInput\nfrom lfx.io import IntInput, MessageInput, ModelInput, MultilineInput, SecretStrInput, SliderInput\n\nDEFAULT_OLLAMA_URL = \"http://localhost:11434\"\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n ModelInput(\n name=\"model\",\n display_name=\"Language Model\",\n info=\"Select your model provider\",\n real_time_refresh=True,\n required=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n advanced=True,\n ),\n DropdownInput(\n name=\"base_url_ibm_watsonx\",\n display_name=\"watsonx API Endpoint\",\n info=\"The base URL of the API (IBM watsonx.ai only)\",\n options=IBM_WATSONX_URLS,\n value=IBM_WATSONX_URLS[0],\n show=False,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"project_id\",\n display_name=\"watsonx Project ID\",\n info=\"The project ID associated with the foundation model (IBM watsonx.ai only)\",\n show=False,\n required=False,\n ),\n MessageInput(\n name=\"ollama_base_url\",\n display_name=\"Ollama API URL\",\n info=f\"Endpoint of the Ollama API (Ollama only). Defaults to {DEFAULT_OLLAMA_URL}\",\n value=DEFAULT_OLLAMA_URL,\n show=False,\n real_time_refresh=True,\n load_from_db=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n info=\"Maximum number of tokens to generate. 
Field name varies by provider.\",\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n return get_llm(\n model=self.model,\n user_id=self.user_id,\n api_key=self.api_key,\n temperature=self.temperature,\n stream=self.stream,\n max_tokens=getattr(self, \"max_tokens\", None),\n watsonx_url=getattr(self, \"base_url_ibm_watsonx\", None),\n watsonx_project_id=getattr(self, \"project_id\", None),\n ollama_base_url=getattr(self, \"ollama_base_url\", None),\n )\n\n def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None):\n \"\"\"Dynamically update build config with user-filtered model options.\"\"\"\n # Update model options\n build_config = update_model_options_in_build_config(\n component=self,\n build_config=build_config,\n cache_key_prefix=\"language_model_options\",\n get_options_func=get_language_model_options,\n field_name=field_name,\n field_value=field_value,\n )\n\n # Show/hide provider-specific fields based on selected model\n # Get current model value - from field_value if model is being changed, otherwise from build_config\n current_model_value = field_value if field_name == \"model\" else build_config.get(\"model\", {}).get(\"value\")\n if isinstance(current_model_value, list) and len(current_model_value) > 0:\n selected_model = current_model_value[0]\n provider = selected_model.get(\"provider\", \"\")\n\n # Show/hide watsonx fields\n is_watsonx = provider == \"IBM WatsonX\"\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = is_watsonx\n build_config[\"project_id\"][\"show\"] = is_watsonx\n build_config[\"base_url_ibm_watsonx\"][\"required\"] = is_watsonx\n build_config[\"project_id\"][\"required\"] = is_watsonx\n\n # Show/hide Ollama fields\n is_ollama = provider == \"Ollama\"\n build_config[\"ollama_base_url\"][\"show\"] = is_ollama\n\n return build_config\n" |
Expose max_tokens in the node template so the new input is actually configurable.
The embedded component code adds max_tokens, but this starter project's node template neither defines a max_tokens field nor includes it in field_order for either LanguageModelComponent node. In saved flows, inputs render from the node template, so the new setting will stay hidden.
Please add the max_tokens field definition and include it in field_order for both LanguageModelComponent nodes (or regenerate the starter project template).
🛠️ Example JSON edits (apply to both LanguageModelComponent nodes)
"field_order": [
"model",
"api_key",
"base_url_ibm_watsonx",
"project_id",
"ollama_base_url",
"input_value",
"system_message",
"stream",
- "temperature"
+ "temperature",
+ "max_tokens"
],
...
"temperature": {
"_input_type": "SliderInput",
...
"value": 0.1
},
+ "max_tokens": {
+ "_input_type": "IntInput",
+ "advanced": true,
+ "display_name": "Max Tokens",
+ "dynamic": false,
+ "info": "Maximum number of tokens to generate. Field name varies by provider.",
+ "list": false,
+ "list_add_label": "Add More",
+ "name": "max_tokens",
+ "placeholder": "",
+ "required": false,
+ "show": true,
+ "title_case": false,
+ "tool_mode": false,
+ "trace_as_metadata": true,
+ "type": "int",
+ "value": null
+ }
Also applies to: 2632-2632
🤖 Prompt for AI Agents
In `src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json` at line 2255, the node template is missing a max_tokens input, so the LanguageModelComponent's new max_tokens setting isn't configurable. Add an IntInput definition named "max_tokens" (matching the LanguageModelComponent.inputs entry) to each LanguageModelComponent node template and insert "max_tokens" into the node's field_order array, placed with the other advanced fields such as temperature and stream. Update both LanguageModelComponent node entries so saved flows render and persist the max_tokens value.
| "title_case": false, | ||
| "type": "code", | ||
| "value": "from lfx.base.models.model import LCModelComponent\nfrom lfx.base.models.unified_models import (\n get_language_model_options,\n get_llm,\n update_model_options_in_build_config,\n)\nfrom lfx.base.models.watsonx_constants import IBM_WATSONX_URLS\nfrom lfx.field_typing import LanguageModel\nfrom lfx.field_typing.range_spec import RangeSpec\nfrom lfx.inputs.inputs import BoolInput, DropdownInput, StrInput\nfrom lfx.io import MessageInput, ModelInput, MultilineInput, SecretStrInput, SliderInput\n\nDEFAULT_OLLAMA_URL = \"http://localhost:11434\"\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n ModelInput(\n name=\"model\",\n display_name=\"Language Model\",\n info=\"Select your model provider\",\n real_time_refresh=True,\n required=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n advanced=True,\n ),\n DropdownInput(\n name=\"base_url_ibm_watsonx\",\n display_name=\"watsonx API Endpoint\",\n info=\"The base URL of the API (IBM watsonx.ai only)\",\n options=IBM_WATSONX_URLS,\n value=IBM_WATSONX_URLS[0],\n show=False,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"project_id\",\n display_name=\"watsonx Project ID\",\n info=\"The project ID associated with the foundation model (IBM watsonx.ai only)\",\n show=False,\n required=False,\n ),\n MessageInput(\n name=\"ollama_base_url\",\n display_name=\"Ollama API URL\",\n info=f\"Endpoint of the Ollama API (Ollama only). 
Defaults to {DEFAULT_OLLAMA_URL}\",\n value=DEFAULT_OLLAMA_URL,\n show=False,\n real_time_refresh=True,\n load_from_db=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n return get_llm(\n model=self.model,\n user_id=self.user_id,\n api_key=self.api_key,\n temperature=self.temperature,\n stream=self.stream,\n watsonx_url=getattr(self, \"base_url_ibm_watsonx\", None),\n watsonx_project_id=getattr(self, \"project_id\", None),\n ollama_base_url=getattr(self, \"ollama_base_url\", None),\n )\n\n def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None):\n \"\"\"Dynamically update build config with user-filtered model options.\"\"\"\n # Update model options\n build_config = update_model_options_in_build_config(\n component=self,\n build_config=build_config,\n cache_key_prefix=\"language_model_options\",\n get_options_func=get_language_model_options,\n field_name=field_name,\n field_value=field_value,\n )\n\n # Show/hide provider-specific fields based on selected model\n # Get current model value - from field_value if model is being changed, otherwise from build_config\n current_model_value = field_value if field_name == \"model\" else build_config.get(\"model\", {}).get(\"value\")\n if isinstance(current_model_value, list) and len(current_model_value) > 0:\n selected_model = current_model_value[0]\n provider = selected_model.get(\"provider\", \"\")\n\n # Show/hide watsonx fields\n is_watsonx = provider == \"IBM WatsonX\"\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = is_watsonx\n build_config[\"project_id\"][\"show\"] = is_watsonx\n build_config[\"base_url_ibm_watsonx\"][\"required\"] = is_watsonx\n build_config[\"project_id\"][\"required\"] = is_watsonx\n\n # Show/hide Ollama fields\n is_ollama = provider == \"Ollama\"\n build_config[\"ollama_base_url\"][\"show\"] = is_ollama\n\n return build_config\n" | ||
| "value": "from lfx.base.models.model import LCModelComponent\nfrom lfx.base.models.unified_models import (\n get_language_model_options,\n get_llm,\n update_model_options_in_build_config,\n)\nfrom lfx.base.models.watsonx_constants import IBM_WATSONX_URLS\nfrom lfx.field_typing import LanguageModel\nfrom lfx.field_typing.range_spec import RangeSpec\nfrom lfx.inputs.inputs import BoolInput, DropdownInput, StrInput\nfrom lfx.io import IntInput, MessageInput, ModelInput, MultilineInput, SecretStrInput, SliderInput\n\nDEFAULT_OLLAMA_URL = \"http://localhost:11434\"\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n ModelInput(\n name=\"model\",\n display_name=\"Language Model\",\n info=\"Select your model provider\",\n real_time_refresh=True,\n required=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n advanced=True,\n ),\n DropdownInput(\n name=\"base_url_ibm_watsonx\",\n display_name=\"watsonx API Endpoint\",\n info=\"The base URL of the API (IBM watsonx.ai only)\",\n options=IBM_WATSONX_URLS,\n value=IBM_WATSONX_URLS[0],\n show=False,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"project_id\",\n display_name=\"watsonx Project ID\",\n info=\"The project ID associated with the foundation model (IBM watsonx.ai only)\",\n show=False,\n required=False,\n ),\n MessageInput(\n name=\"ollama_base_url\",\n display_name=\"Ollama API URL\",\n info=f\"Endpoint of the Ollama API (Ollama only). Defaults to {DEFAULT_OLLAMA_URL}\",\n value=DEFAULT_OLLAMA_URL,\n show=False,\n real_time_refresh=True,\n load_from_db=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n info=\"Maximum number of tokens to generate. 
Field name varies by provider.\",\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n return get_llm(\n model=self.model,\n user_id=self.user_id,\n api_key=self.api_key,\n temperature=self.temperature,\n stream=self.stream,\n max_tokens=getattr(self, \"max_tokens\", None),\n watsonx_url=getattr(self, \"base_url_ibm_watsonx\", None),\n watsonx_project_id=getattr(self, \"project_id\", None),\n ollama_base_url=getattr(self, \"ollama_base_url\", None),\n )\n\n def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None):\n \"\"\"Dynamically update build config with user-filtered model options.\"\"\"\n # Update model options\n build_config = update_model_options_in_build_config(\n component=self,\n build_config=build_config,\n cache_key_prefix=\"language_model_options\",\n get_options_func=get_language_model_options,\n field_name=field_name,\n field_value=field_value,\n )\n\n # Show/hide provider-specific fields based on selected model\n # Get current model value - from field_value if model is being changed, otherwise from build_config\n current_model_value = field_value if field_name == \"model\" else build_config.get(\"model\", {}).get(\"value\")\n if isinstance(current_model_value, list) and len(current_model_value) > 0:\n selected_model = current_model_value[0]\n provider = selected_model.get(\"provider\", \"\")\n\n # Show/hide watsonx fields\n is_watsonx = provider == \"IBM WatsonX\"\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = is_watsonx\n build_config[\"project_id\"][\"show\"] = is_watsonx\n build_config[\"base_url_ibm_watsonx\"][\"required\"] = is_watsonx\n build_config[\"project_id\"][\"required\"] = is_watsonx\n\n # Show/hide Ollama fields\n is_ollama = provider == \"Ollama\"\n build_config[\"ollama_base_url\"][\"show\"] = is_ollama\n\n return build_config\n" |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
python3 <<'PY'
import json
import pathlib
path = pathlib.Path("src/backend/base/langflow/initial_setup/starter_projects/Research Translation Loop.json")
try:
    data = json.loads(path.read_text())
    # Search for LanguageModelComponent node
    found = False
    for node in data.get("data", {}).get("nodes", []):
        node_data = node.get("data", {})
        node_type = node_data.get("type")
        node_obj = node_data.get("node", {})
        display_name = node_obj.get("display_name")
        if node_type == "LanguageModelComponent" or display_name == "Language Model":
            found = True
            print(f"=== Found node: {display_name} (type: {node_type}) ===")
            template = node_obj.get("template", {})
            print(f"\nTemplate fields present: {list(template.keys())}")
            print(f"\n'max_tokens' in template: {'max_tokens' in template}")
            field_order = node_obj.get("field_order", [])
            print(f"\nfield_order: {field_order}")
            if "max_tokens" in template:
                print(f"\nmax_tokens config: {json.dumps(template['max_tokens'], indent=2)}")
            break
    if not found:
        print("LanguageModelComponent node not found. Searching for 'Language Model' by display name...")
        # Try broader search
        for node in data.get("data", {}).get("nodes", []):
            if "Language Model" in str(node):
                print("Found node containing 'Language Model':", node.get("data", {}).get("node", {}).get("display_name"))
except Exception as e:
    print(f"Error: {e}")
PY
Repository: langflow-ai/langflow
Length of output: 559
Add max_tokens to the template metadata.
The LanguageModelComponent code defines max_tokens as an IntInput, but the field is missing from the JSON template metadata and from field_order, which prevents it from appearing in the UI. Add the max_tokens field configuration to the template alongside temperature.
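An abbreviated sketch of where the entry lands inside the LanguageModelComponent node (other template entries omitted; the key set is an assumption modeled on the example shown in the Research Agent comment above):

"node": {
  "template": {
    "max_tokens": {
      "_input_type": "IntInput",
      "advanced": true,
      "display_name": "Max Tokens",
      "info": "Maximum number of tokens to generate. Field name varies by provider.",
      "name": "max_tokens",
      "type": "int",
      "value": null
    }
  },
  "field_order": ["model", "api_key", "base_url_ibm_watsonx", "project_id", "ollama_base_url", "input_value", "system_message", "stream", "temperature", "max_tokens"]
}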
🤖 Prompt for AI Agents
In `src/backend/base/langflow/initial_setup/starter_projects/Research Translation Loop.json` at line 1091, the template JSON is missing the max_tokens field metadata even though LanguageModelComponent defines an IntInput named "max_tokens". Add a corresponding max_tokens entry to the template's fields and include "max_tokens" in the template's field_order (placed near "temperature") so the IntInput appears in the UI; mirror the IntInput properties (name "max_tokens", display_name, info, advanced flag if applicable) to match the component's definition.
| "title_case": false, | ||
| "type": "code", | ||
| "value": "from lfx.base.models.model import LCModelComponent\nfrom lfx.base.models.unified_models import (\n get_language_model_options,\n get_llm,\n update_model_options_in_build_config,\n)\nfrom lfx.base.models.watsonx_constants import IBM_WATSONX_URLS\nfrom lfx.field_typing import LanguageModel\nfrom lfx.field_typing.range_spec import RangeSpec\nfrom lfx.inputs.inputs import BoolInput, DropdownInput, StrInput\nfrom lfx.io import MessageInput, ModelInput, MultilineInput, SecretStrInput, SliderInput\n\nDEFAULT_OLLAMA_URL = \"http://localhost:11434\"\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n ModelInput(\n name=\"model\",\n display_name=\"Language Model\",\n info=\"Select your model provider\",\n real_time_refresh=True,\n required=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n advanced=True,\n ),\n DropdownInput(\n name=\"base_url_ibm_watsonx\",\n display_name=\"watsonx API Endpoint\",\n info=\"The base URL of the API (IBM watsonx.ai only)\",\n options=IBM_WATSONX_URLS,\n value=IBM_WATSONX_URLS[0],\n show=False,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"project_id\",\n display_name=\"watsonx Project ID\",\n info=\"The project ID associated with the foundation model (IBM watsonx.ai only)\",\n show=False,\n required=False,\n ),\n MessageInput(\n name=\"ollama_base_url\",\n display_name=\"Ollama API URL\",\n info=f\"Endpoint of the Ollama API (Ollama only). 
Defaults to {DEFAULT_OLLAMA_URL}\",\n value=DEFAULT_OLLAMA_URL,\n show=False,\n real_time_refresh=True,\n load_from_db=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n return get_llm(\n model=self.model,\n user_id=self.user_id,\n api_key=self.api_key,\n temperature=self.temperature,\n stream=self.stream,\n watsonx_url=getattr(self, \"base_url_ibm_watsonx\", None),\n watsonx_project_id=getattr(self, \"project_id\", None),\n ollama_base_url=getattr(self, \"ollama_base_url\", None),\n )\n\n def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None):\n \"\"\"Dynamically update build config with user-filtered model options.\"\"\"\n # Update model options\n build_config = update_model_options_in_build_config(\n component=self,\n build_config=build_config,\n cache_key_prefix=\"language_model_options\",\n get_options_func=get_language_model_options,\n field_name=field_name,\n field_value=field_value,\n )\n\n # Show/hide provider-specific fields based on selected model\n # Get current model value - from field_value if model is being changed, otherwise from build_config\n current_model_value = field_value if field_name == \"model\" else build_config.get(\"model\", {}).get(\"value\")\n if isinstance(current_model_value, list) and len(current_model_value) > 0:\n selected_model = current_model_value[0]\n provider = selected_model.get(\"provider\", \"\")\n\n # Show/hide watsonx fields\n is_watsonx = provider == \"IBM WatsonX\"\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = is_watsonx\n build_config[\"project_id\"][\"show\"] = is_watsonx\n build_config[\"base_url_ibm_watsonx\"][\"required\"] = is_watsonx\n build_config[\"project_id\"][\"required\"] = is_watsonx\n\n # Show/hide Ollama fields\n is_ollama = provider == \"Ollama\"\n build_config[\"ollama_base_url\"][\"show\"] = is_ollama\n\n return build_config\n" | ||
| "value": "from lfx.base.models.model import LCModelComponent\nfrom lfx.base.models.unified_models import (\n get_language_model_options,\n get_llm,\n update_model_options_in_build_config,\n)\nfrom lfx.base.models.watsonx_constants import IBM_WATSONX_URLS\nfrom lfx.field_typing import LanguageModel\nfrom lfx.field_typing.range_spec import RangeSpec\nfrom lfx.inputs.inputs import BoolInput, DropdownInput, StrInput\nfrom lfx.io import IntInput, MessageInput, ModelInput, MultilineInput, SecretStrInput, SliderInput\n\nDEFAULT_OLLAMA_URL = \"http://localhost:11434\"\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n ModelInput(\n name=\"model\",\n display_name=\"Language Model\",\n info=\"Select your model provider\",\n real_time_refresh=True,\n required=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n advanced=True,\n ),\n DropdownInput(\n name=\"base_url_ibm_watsonx\",\n display_name=\"watsonx API Endpoint\",\n info=\"The base URL of the API (IBM watsonx.ai only)\",\n options=IBM_WATSONX_URLS,\n value=IBM_WATSONX_URLS[0],\n show=False,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"project_id\",\n display_name=\"watsonx Project ID\",\n info=\"The project ID associated with the foundation model (IBM watsonx.ai only)\",\n show=False,\n required=False,\n ),\n MessageInput(\n name=\"ollama_base_url\",\n display_name=\"Ollama API URL\",\n info=f\"Endpoint of the Ollama API (Ollama only). Defaults to {DEFAULT_OLLAMA_URL}\",\n value=DEFAULT_OLLAMA_URL,\n show=False,\n real_time_refresh=True,\n load_from_db=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n info=\"Maximum number of tokens to generate. 
Field name varies by provider.\",\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n return get_llm(\n model=self.model,\n user_id=self.user_id,\n api_key=self.api_key,\n temperature=self.temperature,\n stream=self.stream,\n max_tokens=getattr(self, \"max_tokens\", None),\n watsonx_url=getattr(self, \"base_url_ibm_watsonx\", None),\n watsonx_project_id=getattr(self, \"project_id\", None),\n ollama_base_url=getattr(self, \"ollama_base_url\", None),\n )\n\n def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None):\n \"\"\"Dynamically update build config with user-filtered model options.\"\"\"\n # Update model options\n build_config = update_model_options_in_build_config(\n component=self,\n build_config=build_config,\n cache_key_prefix=\"language_model_options\",\n get_options_func=get_language_model_options,\n field_name=field_name,\n field_value=field_value,\n )\n\n # Show/hide provider-specific fields based on selected model\n # Get current model value - from field_value if model is being changed, otherwise from build_config\n current_model_value = field_value if field_name == \"model\" else build_config.get(\"model\", {}).get(\"value\")\n if isinstance(current_model_value, list) and len(current_model_value) > 0:\n selected_model = current_model_value[0]\n provider = selected_model.get(\"provider\", \"\")\n\n # Show/hide watsonx fields\n is_watsonx = provider == \"IBM WatsonX\"\n build_config[\"base_url_ibm_watsonx\"][\"show\"] = is_watsonx\n build_config[\"project_id\"][\"show\"] = is_watsonx\n build_config[\"base_url_ibm_watsonx\"][\"required\"] = is_watsonx\n build_config[\"project_id\"][\"required\"] = is_watsonx\n\n # Show/hide Ollama fields\n is_ollama = provider == \"Ollama\"\n build_config[\"ollama_base_url\"][\"show\"] = is_ollama\n\n return build_config\n" |
🧩 Analysis chain
🏁 Script executed:
# First, find the exact line and surrounding context to understand the JSON structure
wc -l src/backend/base/langflow/initial_setup/starter_projects/SEO\ Keyword\ Generator.json
Repository: langflow-ai/langflow
Length of output: 152
🏁 Script executed:
# Read the file around line 1102 to see the full context
sed -n '1090,1150p' src/backend/base/langflow/initial_setup/starter_projects/SEO\ Keyword\ Generator.json
Repository: langflow-ai/langflow
Length of output: 7879
🏁 Script executed:
# Search for max_tokens references in the file
rg "max_tokens" src/backend/base/langflow/initial_setup/starter_projects/SEO\ Keyword\ Generator.json -nRepository: langflow-ai/langflow
Length of output: 5784
🏁 Script executed:
# Look at the LanguageModelComponent section to understand template structure
# Search for the component definition containing this code
rg -A 50 '"display_name".*"Language Model"' src/backend/base/langflow/initial_setup/starter_projects/SEO\ Keyword\ Generator.json | head -100Repository: langflow-ai/langflow
Length of output: 3486
Add max_tokens to the template's field_order array.
The Language Model component code defines max_tokens as an IntInput and uses it in the build_model() method, but it's missing from the field_order array in the template metadata. This prevents the field from appearing in the UI. Add "max_tokens" to field_order after "temperature".
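For reference, the resulting array would look roughly like this (entry names taken from the component's inputs; a sketch, not the exact serialized value):

"field_order": [
  "model",
  "api_key",
  "base_url_ibm_watsonx",
  "project_id",
  "ollama_base_url",
  "input_value",
  "system_message",
  "stream",
  "temperature",
  "max_tokens"
]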
🤖 Prompt for AI Agents
In `src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json` at line 1102, the template metadata is missing "max_tokens" from its field_order, so the IntInput named max_tokens (declared in LanguageModelComponent and used in build_model()) doesn't show in the UI. Update the template's field_order array to include "max_tokens" immediately after "temperature" so the field appears and respects the intended ordering.
Summary by CodeRabbit
Release Notes