Conversation
@viktoravelino (Collaborator) commented Jan 23, 2026

ticket: LE-145

Root Cause

The Smart Router component was making redundant LLM calls during execution. Each connected output triggers its associated method independently, and both the route-matching logic and the fallback (Else) logic performed their own LLM categorization call.

This meant that with multiple routes connected plus the Else output, the same categorization prompt was sent to the LLM two or more times per execution, at least doubling latency.

Fix

Introduced a caching mechanism that ensures the LLM categorization is performed only once per component execution. All output methods now share the cached result, eliminating redundant API calls and significantly reducing response time.

LLM Categorization Caching and Routing Logic Improvements:

  • Added a _categorization_result attribute and a _get_categorization() method to cache the LLM categorization result, preventing multiple LLM calls during a single component execution.
  • Refactored process_case() to use the cached categorization result and simplified the category-matching logic, ensuring the match state is cleared only on the first call.
  • Updated default_response() to use the cached categorization result and removed the duplicate prompt generation and LLM invocation logic, streamlining the else-case handling.
  • Cleaned up status messaging and error handling to provide clearer diagnostics during routing and categorization.
  • Removed redundant code and improved formatting for custom prompts and route category comparisons, reducing complexity and improving maintainability.
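As a rough illustration of the fix, the per-execution memoization can be sketched as follows. This is a hypothetical sketch: `SmartRouter`, the `llm` callable, and the prompt text are illustrative stand-ins, not the component's actual API.

```python
class SmartRouter:
    """Minimal sketch of the caching pattern described above (names are illustrative)."""

    def __init__(self, llm):
        self.llm = llm  # any callable: prompt -> str
        self._categorization_result = None  # cache, reset per component execution

    def _get_categorization(self, input_text, routes):
        # Invoke the LLM only on the first call of an execution; later calls
        # (other route outputs, the Else branch) reuse the cached result.
        if self._categorization_result is None:
            prompt = f"Categorize {input_text!r} into one of: {', '.join(routes)}"
            try:
                self._categorization_result = self.llm(prompt).strip()
            except Exception:
                # Any LLM failure falls back to "NONE" so the Else route fires.
                self._categorization_result = "NONE"
        return self._categorization_result
```

With this shape, every output method calls `_get_categorization()` and only the first call pays for an LLM round trip, which matches the roughly 2-3x latency reduction in the table below.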
| Model      | Before | After |
|------------|--------|-------|
| GPT-5      | 6.14s  | 2.70s |
|            | 6.36s  | 2.27s |
|            | 7.21s  | 2.06s |
| GPT-5-mini | 4.01s  | 2.23s |
|            | 5.89s  | 2.38s |
|            | 9.63s  | 2.04s |
| GPT-5-nano | 3.51s  | 1.89s |
|            | 3.93s  | 1.69s |
|            | 7.10s  | 2.15s |
Screen.Recording.2026-01-23.at.10.49.05.AM.mov

Summary by CodeRabbit

Release Notes

  • Performance & Reliability
    • Optimized LLM-based categorization routing through intelligent result caching, reducing redundant LLM calls during component execution.
    • Enhanced error handling for categorization failures with improved fallback mechanisms.
    • Streamlined routing logic while maintaining full compatibility with existing configurations.


@viktoravelino self-assigned this Jan 23, 2026
coderabbitai bot (Contributor) commented Jan 23, 2026

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.


Walkthrough

This PR introduces caching for LLM categorization results in the SmartRouterComponent. A new internal state _categorization_result and helper method _get_categorization are added to memoize LLM invocations per component execution. The routing logic in process_case and default_response is refactored to use the cached categorization result, reducing redundant LLM calls while maintaining existing output control behavior.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **SmartRouterComponent Implementation**<br>`src/lfx/src/lfx/components/llm_operations/llm_conditional_router.py` | Introduced `_categorization_result` internal state and a `_get_categorization()` helper method to centralize and cache LLM categorization. Refactored `process_case` and `default_response` to use the cached categorization. Enhanced prompt construction with conditional `custom_prompt` support and a "NONE" fallback. Added error handling around LLM invocation. |
| **Component Asset Registry**<br>`src/lfx/src/lfx/_assets/component_index.json` | Updated the embedded SmartRouterComponent code string with the caching mechanism and refactored routing logic to match the implementation file. Updated asset `sha256` and `code_hash` metadata. |
| **Hash History**<br>`src/lfx/src/lfx/_assets/stable_hash_history.json` | Updated the stable hash for SmartRouter version 0.3.0 from `9c6736e784f6` to `a80ce86c8ebc`. |

Sequence Diagram

```mermaid
sequenceDiagram
    participant Input as Input/Execution
    participant SRC as SmartRouterComponent
    participant Cache as Categorization Cache
    participant LLM as LLM Service
    participant Router as Routing Logic
    participant Output as Output Handlers

    Input->>SRC: Trigger component execution
    SRC->>Cache: Check _categorization_result
    alt Cache miss
        Cache->>LLM: Invoke _get_categorization()
        LLM->>Cache: Return categorization result
        Cache->>Cache: Store in _categorization_result
    else Cache hit
        Cache->>Cache: Return cached result
    end
    Cache->>Router: Pass cached categorization
    Router->>Router: Match against route categories
    alt Category matched
        Router->>Output: Route to matching output
    else Category not matched
        Router->>Output: Route to default/else output
    end
    Output->>Output: Emit result with cached category
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 3

❌ Failed checks (1 error, 2 warnings)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Test Coverage For New Implementations | ❌ Error | PR modifies critical SmartRouterComponent functionality with caching and refactored logic but includes no test files. | Add comprehensive test coverage in test_llm_conditional_router.py verifying the caching mechanism, categorization logic, error handling, and edge cases. |
| Test Quality And Coverage | ⚠️ Warning | Pull request implements substantial changes to SmartRouterComponent with new caching and security fixes, but no test file exists for llm_conditional_router.py despite similar components having comprehensive tests. | Create a comprehensive test file at src/backend/tests/unit/components/llm_operations/test_llm_conditional_router.py with caching, error handling, format string safety, fallback behavior, and edge case tests. |
| Test File Naming And Structure | ⚠️ Warning | PR modifies llm_conditional_router.py with caching and error handling logic but introduces no corresponding test file, despite the repository's clear naming convention (test_<component_name>.py) for similar components. | Create the test file src/backend/tests/unit/components/llm_operations/test_llm_conditional_router.py with comprehensive test coverage for caching, format string injection, exception handling, and category matching logic. |

✅ Passed checks (4 passed)

| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | The title 'fix: improve model process logic for conditional router' is directly related to the main change in the changeset. The PR centers on refactoring the SmartRouterComponent's process logic by introducing LLM result caching to eliminate redundant categorization calls, which is precisely what the title conveys. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 80.00%, which is sufficient. The required threshold is 80.00%. |
| Excessive Mock Usage Warning | ✅ Passed | This PR does not include any test files, therefore the excessive mock usage check is not applicable. |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |



github-actions bot added and then removed the bug (Something isn't working) label Jan 23, 2026
github-actions bot (Contributor) commented Jan 23, 2026

Frontend Unit Test Coverage Report

Coverage Summary

| Lines | Statements | Branches | Functions |
|-------|------------|----------|-----------|
| 17% | 17.54% (5050/28789) | 10.96% (2432/22176) | 11.63% (733/6299) |

Unit Test Results

| Tests | Skipped | Failures | Errors | Time |
|-------|---------|----------|--------|------|
| 2036 | 0 💤 | 0 ❌ | 0 🔥 | 26.635s ⏱️ |

codecov bot commented Jan 23, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 34.88%. Comparing base (a097b68) to head (91910ae).

Additional details and impacted files

@@           Coverage Diff           @@
##             main   #11429   +/-   ##
=======================================
  Coverage   34.88%   34.88%           
=======================================
  Files        1420     1420           
  Lines       68215    68215           
  Branches     9984     9984           
=======================================
  Hits        23797    23797           
+ Misses      43184    43183    -1     
- Partials     1234     1235    +1     
| Flag | Coverage Δ |
|----------|------------|
| backend | 54.14% <ø> (+<0.01%) ⬆️ |
| frontend | 16.05% <ø> (ø) |
| lfx | 41.70% <ø> (-0.01%) ⬇️ |

Flags with carried forward coverage won't be shown.
see 3 files with indirect coverage changes


github-actions bot removed the bug (Something isn't working) label Jan 23, 2026
coderabbitai bot (Contributor) left a comment

Actionable comments posted: 4

🤖 Fix all issues with AI agents
In `@src/lfx/src/lfx/_assets/component_index.json`:
- Line 86024: The custom prompt formatting in
SmartRouterComponent._get_categorization uses
custom_prompt.format(input_text=..., routes=...) which can raise on
user-provided text containing braces; update _get_categorization to sanitize or
avoid str.format: either escape braces in input_text and simple_routes before
calling .format (e.g., replace "{"->"{{" and "}"->"}}" ) or switch to a safe
replacement like custom_prompt.replace("{input_text}",
input_text).replace("{routes}", simple_routes); ensure the change is applied
where formatted_custom is created and keep status messages intact.
- Line 86024: The except in _get_categorization currently only catches
RuntimeError so other LLM errors bubble up; update the exception handler in the
_get_categorization function (the try/except that sets
self._categorization_result = "NONE" and updates self.status) to catch a broader
Exception (e.g., except Exception as e) and keep the same error-status
assignment and fallback result so any provider/connection/timeout errors are
handled consistently.
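The brace-safe substitution the first comment suggests can be sketched as below. The `render_custom_prompt` helper and its parameter names are hypothetical; only the `{input_text}`/`{routes}` placeholders come from the review comment.

```python
def render_custom_prompt(custom_prompt: str, input_text: str, routes: str) -> str:
    """Substitute placeholders without str.format, so user text with braces is safe."""
    # str.replace never interprets braces, so input like "give me {all} of it"
    # cannot raise the KeyError/ValueError that custom_prompt.format(...) can.
    # Caveat: if input_text itself contains the literal "{routes}", the second
    # replace will expand it too; escape first if that matters.
    return (custom_prompt
            .replace("{input_text}", input_text)
            .replace("{routes}", routes))
```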

In `@src/lfx/src/lfx/components/llm_operations/llm_conditional_router.py`:
- Around line 235-249: The try/except around the LLM invocation only catches
RuntimeError, so non-Runtime exceptions (HTTP, network, timeouts,
client-specific errors) will escape and break routing; change the except to
catch Exception (or add additional specific exception types your LLM client
raises) and on any failure set self._categorization_result = "NONE" and
self.status to include the exception message so fallback routing works; update
the block around llm.invoke / llm(prompt) and response.content handling
(symbols: llm.invoke, response.content, self.status,
self._categorization_result) to use the broader exception handler and preserve
the existing behavior for successful responses.
- Around line 220-228: The custom_prompt formatting can raise
KeyError/ValueError for unknown placeholders or unmatched braces; wrap the
format call in a safe block: build a mapping with the expected keys (e.g., using
collections.defaultdict(lambda: "") filled with input_text and routes), then
attempt formatted_custom = custom_prompt.format_map(safe_map) inside a
try/except catching (KeyError, ValueError); on exception fall back to using the
raw custom_prompt (or a sanitized version) so the LlmConditionalRouter
(custom_prompt, formatted_custom) won't crash. Ensure you update the code around
the existing custom_prompt usage and status assignment to use this safe
formatting approach.
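The `format_map` variant the second comment describes might look like the sketch below. The `safe_format` name is hypothetical; note that besides the `KeyError`/`ValueError` the comment mentions, positional fields like `{}` raise `IndexError`, so that is caught as well.

```python
from collections import defaultdict

def safe_format(custom_prompt: str, input_text: str, routes: str) -> str:
    """Format a user-supplied prompt template without crashing on bad placeholders."""
    # defaultdict(str) returns "" for any unknown placeholder name instead of
    # raising KeyError; unmatched braces and positional fields still raise,
    # so fall back to the raw template in that case.
    safe_map = defaultdict(str, input_text=input_text, routes=routes)
    try:
        return custom_prompt.format_map(safe_map)
    except (KeyError, IndexError, ValueError):
        return custom_prompt
```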

github-actions bot added the bug (Something isn't working) label Jan 23, 2026
@HzaRashid (Collaborator) commented:
cool! haven't read it too deeply but i would just add some test coverage to make sure the new cache logic works correctly

@viktoravelino (Collaborator, Author) replied:
@HzaRashid will do that

@viktoravelino requested a review from Jkavia January 23, 2026 20:44
github-actions bot added the lgtm (This PR has been approved by a maintainer) label Jan 26, 2026
@viktoravelino added this pull request to the merge queue Jan 27, 2026
github-merge-queue bot pushed a commit that referenced this pull request Jan 27, 2026
* fix: improve model process logic

* [autofix.ci] apply automated fixes

* test: add unit tests for SmartRouterComponent categorization logic

* fix(tests): Update test to verify dynamic loading of options in CurrentDateComponent schema

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Himavarsha <40851462+HimavarshaVS@users.noreply.github.com>
@viktoravelino removed this pull request from the merge queue due to a manual request Jan 27, 2026
Labels: bug (Something isn't working), lgtm (This PR has been approved by a maintainer) · 4 participants