
Conversation


@gliozzo gliozzo commented Jan 27, 2026

Released a new component: https://github.com/gliozzo/langflow/tree/agentics_boundle_llm_fix/src/backend/base/langflow/components/agentics
Still need to figure out how to integrate the requested agentics-py module.

Summary by CodeRabbit

  • New Features
    • Introduced the Agentics component enabling advanced data processing and transformation workflows.
    • Supports multiple transduction modes: data mapping, reduction, and generation.
    • Compatible with language models from IBM WatsonX and Google Generative AI providers.
    • Enables schema-based data generation with optional metadata tracking for success/failure monitoring.


@github-actions github-actions bot added the community Pull Request from an external contributor label Jan 27, 2026

coderabbitai bot commented Jan 27, 2026

Walkthrough

Introduces an Agentics component enabling AI-driven data transformation via language models. Supports multiple transduction types (amap, areduce, generate) with configurable metadata tracking and multi-provider LLM support. Adds corresponding package dependency.

Changes

  • New Agentics Component (src/backend/base/langflow/components/agentics/agentics.py): Introduces the Agentics component class with an async transduce workflow for DataFrame transformation. Includes language model instantiation with support for IBM WatsonX and Google Generative AI providers, schema-based Pydantic model building, transduction execution (amap/areduce/generate), and metadata enrichment. Helper methods for row formatting, TOML conversion, and conditional metadata appending.
  • Dependency Management (src/backend/base/pyproject.toml): Added agentics-py>=0.2.2a1 to project dependencies.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Agentics
    participant LLM as Language Model
    participant DataFrame as Data Processing
    
    User->>Agentics: Call transduce() with config
    Agentics->>Agentics: Instantiate/configure LLM<br/>(IBM WatsonX or Google)
    Agentics->>DataFrame: Convert source to AG format
    Agentics->>DataFrame: Build target Pydantic model<br/>from schema
    alt Transduction Type: generate
        Agentics->>LLM: Generate prototypical instances
        LLM-->>Agentics: Generated data
    else Transduction Type: amap/areduce
        Agentics->>LLM: Execute transduction<br/>(amap/areduce)
        LLM-->>Agentics: Transformed data
        alt merge_source enabled
            Agentics->>DataFrame: Merge outputs<br/>with source
        end
    end
    Agentics->>DataFrame: Convert AG output to DataFrame
    Agentics->>Agentics: Add metadata if enabled<br/>(success/error/stats)
    Agentics-->>User: Return transformed DataFrame

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 3
❌ Failed checks (1 error, 2 warnings)
  • Test Coverage For New Implementations (❌ Error): The Agentics component was added with 253 lines of code but no test files were included to validate functionality. Resolution: add comprehensive unit and integration tests for the Agentics component covering model instantiation, DataFrame I/O, transduction types, error handling, and build config updates.
  • Test Quality And Coverage (⚠️ Warning): The pull request introduces a 253-line Agentics component with complex async functionality, multi-provider LLM instantiation, and DataFrame transformations but lacks any test coverage. Resolution: create comprehensive pytest coverage for the Agentics component, including async tests, success paths for all transduction types, error cases, provider validation, and DataFrame conversion tests.
  • Test File Naming And Structure (⚠️ Warning): The pull request introduces a new Agentics component with 253 lines of code but includes no test files, violating the project's established testing patterns. Resolution: create test files following project patterns: add test_agentics_component.py covering initialization, I/O verification, configuration methods, the transduce method with multiple providers, and edge cases.
✅ Passed checks (4 passed)
  • Description Check (✅ Passed): check skipped; CodeRabbit's high-level summary is enabled.
  • Docstring Coverage (✅ Passed): docstring coverage is 100.00%, which meets the required threshold of 80.00%.
  • Excessive Mock Usage Warning (✅ Passed): no test files exist for the new Agentics component, and the only related test file uses real objects rather than mocks.
  • Title check (✅ Passed): the title "feat: add IBM Agentics component" accurately summarizes the main change: introducing a new Agentics component with IBM/multi-provider support.



Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 6

🤖 Fix all issues with AI agents
In `@src/backend/base/langflow/components/agentics/agentics.py`:
- Around line 31-36: Fix the metadata on the Agentics component class: correct
the typo in the description attribute from "amongs" to "among" and update the
documentation string to include the protocol prefix (e.g.,
"https://github.com/IBM/agentics/"); verify the display_name and icon attributes
(Agentics, "List") remain unchanged. Ensure the changes are made on the Agentics
class attributes: display_name, description, documentation, icon.
- Around line 31-36: The Agentics component class (Agentics) lives in a
non-standard subdirectory (agentics/) and the package __init__.py is empty;
either move the Agentics class file into an existing category (preferably
processing/ or agents/) and remove the agentics/ folder, or if you intend to
keep agentics/ as a new category, populate agentics/__init__.py to export and
forward the component (import Agentics and any helpers and expose them via
__all__) and add any package-level registration logic required by the component
loader; update imports throughout the codebase to reference the new location or
package export accordingly.
- Around line 252-253: The method currently constructs df =
output.to_dataframe() and returns df.to_dict(orient="records") while its
signature declares -> DataFrame; fix by either returning the DataFrame object
(return df) to match the DataFrame return type, or change the function's return
type annotation to list[dict] (or List[Dict[str, Any]]) to reflect
df.to_dict(orient="records"); update the signature accordingly and adjust any
downstream callers/types if you choose the latter.
- Around line 146-178: Remove the three dead helper methods _format_row_as_toml,
_create_base_row, and _add_metadata from the Agentics class: these methods are
never invoked (not used by transduce) and reference undefined attributes
(self.output_column_name, self.enable_metadata), so delete their definitions to
avoid dead code and invalid attribute references; ensure no other part of the
class or tests call these method names before removing.
- Around line 184-220: The model-instantiation block in Agentics must validate
and exhaustively handle providers and return typed errors: first check that
self.model is a non-empty list (avoid IndexError on self.model[0]) and raise a
clear ValueError if empty; use get_model_classes() to resolve
metadata.get("model_class") as you already do, then add explicit branches for
the other supported providers (OpenAI, Anthropic, Ollama) and instantiate the
correct LLM (or equivalent client) with the proper parameters (use
get_api_key_for_provider(self.user_id, provider, self.api_key) for API keys and
self.get_project_name() where needed); never return a string on failure—raise a
descriptive exception (fix the typo “paramters”) for unsupported/invalid
provider or missing API key so downstream consumers get a consistent error type.
- Around line 180-248: In transduce(), llm can remain None when self.model is
not a list; before creating target = AG(..., llm=llm) detect and handle
already-instantiated models by checking self.model's type/interface (e.g.,
isinstance(self.model, LLM) or a known Agentic wrapper) and set llm =
self.model, or if self.model is a dict/object that needs instantiation,
construct the LLM similarly to the list branch; if no valid model is available
raise a clear ValueError. Update the logic around the existing llm variable in
transduce (refer to the llm local, the isinstance(self.model, list) branch, and
the AG(...) instantiation) so llm is always a valid model or a clear error is
thrown before calling AG.
🧹 Nitpick comments (3)
src/backend/base/langflow/components/agentics/agentics.py (2)

77-77: Address TODO comment before merge.

The TODO comment suggests the default value should be removed. Please either remove the default or remove the TODO if intentionally keeping it.

Do you want me to open an issue to track this task?


222-225: Remove commented-out debug code.

These commented lines appear to be debug artifacts and should be removed before merge.

Suggested fix
-        # print("AAAAAA" , type(self.source))
-
-        # if isinstance(self.source, list):
-
         source = AG.from_dataframe(DataFrame(self.source))
src/backend/base/pyproject.toml (1)

20-20: Consider moving agentics-py to optional dependencies.

The alpha version (0.2.2a1) is a pre-release; the latest stable version is 0.2.1. Since agentics-py is only used by the Agentics component (isolated to src/backend/base/langflow/components/agentics/), moving it to [project.optional-dependencies] would prevent all users from installing an unstable version unnecessarily. The >=0.2.2a1 constraint is also too permissive for a pre-release—consider pinning to a tighter range if keeping it in core dependencies.
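If the dependency is moved to extras as suggested, the pyproject.toml entry might look like the following sketch (the extra name "agentics" and the <0.3 upper bound are assumptions, not taken from the PR):

```toml
[project.optional-dependencies]
agentics = [
    # pre-release pin; tighten or drop the alpha once a stable release lands
    "agentics-py>=0.2.2a1,<0.3",
]
```

Users would then opt in with `pip install langflow-base[agentics]` rather than pulling the alpha package unconditionally.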

Comment on lines +31 to +36
class Agentics(Component):
display_name = "Agentics"
description = "Enables Map Reduce Style Agentic data transformations amongs dataframes"
documentation: str = "github.com/IBM/agentics/"
icon = "List"

⚠️ Potential issue | 🟡 Minor

Minor issues in class metadata.

  1. Typo in description: "amongs" → "among"
  2. Documentation URL missing protocol prefix
Suggested fix
 class Agentics(Component):
     display_name = "Agentics"
-    description = "Enables Map Reduce Style Agentic data transformations amongs dataframes"
-    documentation: str = "github.com/IBM/agentics/"
+    description = "Enables Map Reduce Style Agentic data transformations among dataframes"
+    documentation: str = "https://github.com/IBM/agentics/"
     icon = "List"

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if __init__.py exists in agentics subdirectory
echo "=== Checking agentics/__init__.py ==="
if [ -f "src/backend/base/langflow/components/agentics/__init__.py" ]; then
  cat -n src/backend/base/langflow/components/agentics/__init__.py
else
  echo "No __init__.py found in agentics/"
fi

echo ""
echo "=== Checking parent components/__init__.py for agentics import ==="
rg "agentics" src/backend/base/langflow/components/__init__.py || echo "No agentics import found"

echo ""
echo "=== Listing all subdirectories in components ==="
fd -t d -d 1 . src/backend/base/langflow/components/ | sort

Repository: langflow-ai/langflow

Length of output: 394


🏁 Script executed:

#!/bin/bash
# Verify agentics.py file exists
echo "=== Checking if agentics.py exists ==="
ls -la src/backend/base/langflow/components/agentics/

echo ""
echo "=== Checking knowledge_bases/__init__.py ==="
if [ -f "src/backend/base/langflow/components/knowledge_bases/__init__.py" ]; then
  cat -n src/backend/base/langflow/components/knowledge_bases/__init__.py
else
  echo "No __init__.py found"
fi

echo ""
echo "=== Checking if knowledge_bases is in parent __init__.py ==="
rg "knowledge_bases" src/backend/base/langflow/components/__init__.py || echo "No knowledge_bases import found"

echo ""
echo "=== Checking processing/__init__.py for structure reference ==="
head -20 src/backend/base/langflow/components/processing/__init__.py

Repository: langflow-ai/langflow

Length of output: 3872


🏁 Script executed:

#!/bin/bash
# Check the Agentics class definition
echo "=== Agentics component structure ==="
head -50 src/backend/base/langflow/components/agentics/agentics.py | tail -40

echo ""
echo "=== Example: processing/__init__.py pattern ==="
cat src/backend/base/langflow/components/processing/__init__.py

echo ""
echo "=== Check data/ subdirectory __init__.py if exists ==="
if [ -f "src/backend/base/langflow/components/data/__init__.py" ]; then
  cat src/backend/base/langflow/components/data/__init__.py
else
  echo "data/ directory does not exist"
fi

Repository: langflow-ai/langflow

Length of output: 1294


🏁 Script executed:

#!/bin/bash
# Check the main components __init__.py
echo "=== Main components/__init__.py ==="
cat src/backend/base/langflow/components/__init__.py | head -100

echo ""
echo "=== Check if there's dynamic discovery pattern ==="
rg "import_module|importlib" src/backend/base/langflow/components/__init__.py -A 3 | head -30

Repository: langflow-ai/langflow

Length of output: 598


Component directory placement and module configuration incomplete.

The agentics/ subdirectory is placed outside the documented structure. Coding guidelines specify new components should use: agents/, data/, embeddings/, input_output/, models/, processing/, prompts/, tools/, or vectorstores/.

Additionally, agentics/__init__.py is empty. If agentics/ is intentional as a new category (like knowledge_bases/), the __init__.py must be configured with component imports and forwarding logic. Otherwise, move the component to processing/ or agents/ and remove the agentics/ subdirectory.
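
As a sketch of the second option, agentics/__init__.py could re-export the component the way other category packages do (whether the loader needs anything beyond the re-export and __all__ is an assumption here; check processing/__init__.py for the authoritative pattern):

```python
# Hypothetical contents of agentics/__init__.py, mirroring the
# re-export pattern used by the other component category packages.
from .agentics import Agentics

__all__ = ["Agentics"]
```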


Comment on lines +146 to +178
def _format_row_as_toml(self, row: dict[str, Any]) -> str:
"""Convert a dictionary (row) into a TOML-formatted string."""
formatted_dict = {str(col): {"value": str(val)} for col, val in row.items()}
return toml.dumps(formatted_dict)

def _create_base_row(
self, original_row: dict[str, Any], model_response: str = "", batch_index: int = -1
) -> dict[str, Any]:
"""Create a base row with original columns and additional metadata."""
row = original_row.copy()
row[self.output_column_name] = model_response
row["batch_index"] = batch_index
return row

def _add_metadata(
self, row: dict[str, Any], *, success: bool = True, system_msg: str = "", error: str | None = None
) -> None:
"""Add metadata to a row if enabled."""
if not self.enable_metadata:
return

if success:
row["metadata"] = {
"has_system_message": bool(system_msg),
"input_length": len(row.get("text_input", "")),
"response_length": len(row[self.output_column_name]),
"processing_status": "success",
}
else:
row["metadata"] = {
"error": error,
"processing_status": "failed",
}
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# Get file info and read the agentics.py file
wc -l src/backend/base/langflow/components/agentics/agentics.py

Repository: langflow-ai/langflow

Length of output: 125


🏁 Script executed:

# Read the file to understand the class structure and these methods
cat -n src/backend/base/langflow/components/agentics/agentics.py

Repository: langflow-ai/langflow

Length of output: 11154


🏁 Script executed:

# Search for usages of these helper methods across the entire codebase
rg -n '_format_row_as_toml|_create_base_row|_add_metadata' --type=py

Repository: langflow-ai/langflow

Length of output: 2740


Remove unused helper methods that reference undefined attributes.

These methods are dead code that should not exist in this component:

  • _format_row_as_toml (line 146): Never called in transduce()
  • _create_base_row (line 151): Never called; references undefined self.output_column_name
  • _add_metadata (line 160): Never called; references undefined self.enable_metadata and self.output_column_name

The Agentics class has no attributes for output_column_name or enable_metadata in its inputs. Remove these methods unless they will be actively used in the component's logic.


Comment on lines +180 to +248
async def transduce(self) -> DataFrame:
"""Process each row in df[column_name] with the language model asynchronously."""
# Check if model is already an instance (for testing) or needs to be instantiated
llm = None
if isinstance(self.model, list):
# Extract model configuration
model_selection = self.model[0]
model_name = model_selection.get("name")
provider = model_selection.get("provider")
metadata = model_selection.get("metadata", {})

# Get model class and parameters from metadata
model_class = get_model_classes().get(metadata.get("model_class"))
if model_class is None:
msg = f"No model class defined for {model_name}"
raise ValueError(msg)

# Get API key from global variables
from lfx.base.models.unified_models import get_api_key_for_provider

api_key = get_api_key_for_provider(self.user_id, provider, self.api_key)

if not api_key and provider != "Ollama":
msg = f"{provider} API key is required. Please configure it globally."
raise ValueError(msg)

if provider == "IBM WatsonX":
llm = LLM(
model="watsonx/" + model_name,
base_url="https://us-south.ml.cloud.ibm.com",
project_id=self.get_project_name(),
api_key=api_key,
temperature=0,
max_tokens=4000,
max_input_tokens=100000,
)
elif provider == "Google Generative AI":
llm = LLM(model="gemini/" + model_name, api_key=api_key)

else:
return "Please fix model paramters"

# print("AAAAAA" , type(self.source))

# if isinstance(self.source, list):

source = AG.from_dataframe(DataFrame(self.source))
schema_fields = [
(
field["name"],
field["description"],
field["type"] if not field["multiple"] else f"list[{field['type']}]",
False,
)
for field in self.schema
]
atype = create_pydantic_model(schema_fields, name=self.atype_name)
if self.transduction_type == "generate":
output_states = await generate_prototypical_instances(atype, n_instances=self.batch_size)
output = AG(states=output_states)
else:
target = AG(
atype=atype,
instructions=self.instructions,
transduction_type=self.transduction_type,
amap_batch_size=self.batch_size,
llm=llm,
)
output = await (target << source)
⚠️ Potential issue | 🟠 Major

llm remains None when self.model is not a list.

If the model is provided via a connected handle (not a list selection), the if isinstance(self.model, list) block is skipped, leaving llm = None. This None value is then passed to AG() at line 246, which may cause failures depending on how AG handles a missing LLM.

Consider handling the case where self.model is an already-instantiated model object:

Suggested approach
     async def transduce(self) -> DataFrame:
         """Process each row in df[column_name] with the language model asynchronously."""
-        # Check if model is already an instance (for testing) or needs to be instantiated
         llm = None
         if isinstance(self.model, list):
             # ... existing list handling ...
+        elif self.model is not None:
+            # Model provided via handle connection - use directly or wrap appropriately
+            llm = self.model  # Or wrap: LLM(model=self.model) if needed
+        else:
+            msg = "No model provided"
+            raise ValueError(msg)

Comment on lines +184 to +220
if isinstance(self.model, list):
# Extract model configuration
model_selection = self.model[0]
model_name = model_selection.get("name")
provider = model_selection.get("provider")
metadata = model_selection.get("metadata", {})

# Get model class and parameters from metadata
model_class = get_model_classes().get(metadata.get("model_class"))
if model_class is None:
msg = f"No model class defined for {model_name}"
raise ValueError(msg)

# Get API key from global variables
from lfx.base.models.unified_models import get_api_key_for_provider

api_key = get_api_key_for_provider(self.user_id, provider, self.api_key)

if not api_key and provider != "Ollama":
msg = f"{provider} API key is required. Please configure it globally."
raise ValueError(msg)

if provider == "IBM WatsonX":
llm = LLM(
model="watsonx/" + model_name,
base_url="https://us-south.ml.cloud.ibm.com",
project_id=self.get_project_name(),
api_key=api_key,
temperature=0,
max_tokens=4000,
max_input_tokens=100000,
)
elif provider == "Google Generative AI":
llm = LLM(model="gemini/" + model_name, api_key=api_key)

else:
return "Please fix model paramters"
⚠️ Potential issue | 🔴 Critical

Critical: Incomplete provider handling and type-unsafe error return.

Several issues in the model instantiation logic:

  1. Line 186: self.model[0] can raise IndexError if the list is empty
  2. Lines 206-220: Only handles "IBM WatsonX" and "Google Generative AI", but get_model_classes() supports OpenAI, Anthropic, and Ollama as well
  3. Line 220: Returns a string "Please fix model paramters" (typo: "paramters") instead of raising an exception—this violates the DataFrame return type and causes silent failures downstream
Suggested fix
     async def transduce(self) -> DataFrame:
         """Process each row in df[column_name] with the language model asynchronously."""
         llm = None
         if isinstance(self.model, list):
+            if not self.model:
+                msg = "No model selected"
+                raise ValueError(msg)
             # Extract model configuration
             model_selection = self.model[0]
             model_name = model_selection.get("name")
             provider = model_selection.get("provider")
             metadata = model_selection.get("metadata", {})

             # Get model class and parameters from metadata
             model_class = get_model_classes().get(metadata.get("model_class"))
             if model_class is None:
                 msg = f"No model class defined for {model_name}"
                 raise ValueError(msg)

             # Get API key from global variables
             from lfx.base.models.unified_models import get_api_key_for_provider

             api_key = get_api_key_for_provider(self.user_id, provider, self.api_key)

             if not api_key and provider != "Ollama":
                 msg = f"{provider} API key is required. Please configure it globally."
                 raise ValueError(msg)

             if provider == "IBM WatsonX":
                 llm = LLM(
                     model="watsonx/" + model_name,
                     base_url="https://us-south.ml.cloud.ibm.com",
                     project_id=self.get_project_name(),
                     api_key=api_key,
                     temperature=0,
                     max_tokens=4000,
                     max_input_tokens=100000,
                 )
             elif provider == "Google Generative AI":
                 llm = LLM(model="gemini/" + model_name, api_key=api_key)
-
+            elif provider == "OpenAI":
+                llm = LLM(model="openai/" + model_name, api_key=api_key)
+            elif provider == "Anthropic":
+                llm = LLM(model="anthropic/" + model_name, api_key=api_key)
+            elif provider == "Ollama":
+                llm = LLM(model="ollama/" + model_name)
             else:
-                return "Please fix model paramters"
+                msg = f"Unsupported provider: {provider}"
+                raise ValueError(msg)
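
One way to keep the provider handling exhaustive is a table-driven lookup instead of an if/elif chain. In this sketch the names PROVIDER_PREFIXES and resolve_model_string are illustrative, and the litellm-style prefixes for OpenAI, Anthropic, and Ollama are assumptions mirroring the "watsonx/" and "gemini/" strings already in the component:

```python
# Hypothetical table-driven dispatch; unsupported providers raise
# instead of returning an error string, so callers get a typed failure.
PROVIDER_PREFIXES = {
    "IBM WatsonX": "watsonx/",
    "Google Generative AI": "gemini/",
    "OpenAI": "openai/",
    "Anthropic": "anthropic/",
    "Ollama": "ollama/",
}


def resolve_model_string(provider: str, model_name: str) -> str:
    """Map a provider display name to a prefixed model string, or raise."""
    prefix = PROVIDER_PREFIXES.get(provider)
    if prefix is None:
        msg = f"Unsupported provider: {provider}"
        raise ValueError(msg)
    return prefix + model_name
```

Adding a provider then becomes a one-line dictionary entry, and any provider-specific keyword arguments (base_url, project_id) can live in a second table keyed the same way.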

Comment on lines +252 to +253
df = output.to_dataframe()
return df.to_dict(orient="records")
⚠️ Potential issue | 🟠 Major

Return type mismatch: returns list[dict] instead of DataFrame.

The method signature declares -> DataFrame, but df.to_dict(orient="records") returns list[dict]. Either update the return type annotation or return the DataFrame directly.

Option 1: Return DataFrame directly
         df = output.to_dataframe()
-        return df.to_dict(orient="records")
+        return df
Option 2: Update return type annotation
-    async def transduce(self) -> DataFrame:
+    async def transduce(self) -> list[dict[str, Any]]:
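
For reference, to_dict(orient="records") returns a plain list of per-row dicts, not a DataFrame, which is why the annotation and the value disagree (shown here with plain pandas; langflow's DataFrame wrapper is assumed to behave the same way):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": ["x", "y"]})
records = df.to_dict(orient="records")  # list of dicts, one per row
# records == [{"a": 1, "b": "x"}, {"a": 2, "b": "y"}]
```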

@Cristhianzl Cristhianzl changed the title Agentics boundle feat: add IBM Agentics Jan 27, 2026
@github-actions github-actions bot added the enhancement New feature or request label Jan 27, 2026
@Cristhianzl Cristhianzl changed the title feat: add IBM Agentics feat: add IBM Agentics component Jan 27, 2026
@github-actions github-actions bot added enhancement New feature or request and removed enhancement New feature or request labels Jan 27, 2026