fix: improve model process logic for conditional router #11429
base: main
Conversation
Important: Review skipped. Auto incremental reviews are disabled on this repository. Please check the settings in the CodeRabbit UI.
Walkthrough: This PR introduces caching for LLM categorization results in the SmartRouterComponent. A new internal state, `_categorization_result`, stores the categorization so it is computed at most once per component execution.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Input as Input/Execution
    participant SRC as SmartRouterComponent
    participant Cache as Categorization Cache
    participant LLM as LLM Service
    participant Router as Routing Logic
    participant Output as Output Handlers

    Input->>SRC: Trigger component execution
    SRC->>Cache: Check _categorization_result
    alt Cache miss
        Cache->>LLM: Invoke _get_categorization()
        LLM->>Cache: Return categorization result
        Cache->>Cache: Store in _categorization_result
    else Cache hit
        Cache->>Cache: Return cached result
    end
    Cache->>Router: Pass cached categorization
    Router->>Router: Match against route categories
    alt Category matched
        Router->>Output: Route to matching output
    else Category not matched
        Router->>Output: Route to default/else output
    end
    Output->>Output: Emit result with cached category
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks: ✅ 4 | ❌ 3
❌ Failed checks (1 error, 2 warnings)
✅ Passed checks (4 passed)
Codecov Report: ✅ All modified and coverable lines are covered by tests.

```
@@           Coverage Diff           @@
##             main   #11429   +/-   ##
=======================================
  Coverage   34.88%   34.88%
=======================================
  Files        1420     1420
  Lines       68215    68215
  Branches     9984     9984
=======================================
  Hits        23797    23797
+ Misses      43184    43183       -1
- Partials     1234     1235       +1
```

Flags with carried forward coverage won't be shown.
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@src/lfx/src/lfx/_assets/component_index.json`:
- Line 86024: The custom prompt formatting in `SmartRouterComponent._get_categorization` uses `custom_prompt.format(input_text=..., routes=...)`, which can raise when user-provided text contains braces. Update `_get_categorization` to sanitize or avoid `str.format`: either escape braces in `input_text` and `simple_routes` before calling `.format` (e.g., replace `"{"` with `"{{"` and `"}"` with `"}}"`), or switch to a safe replacement such as `custom_prompt.replace("{input_text}", input_text).replace("{routes}", simple_routes)`. Apply the change where `formatted_custom` is created and keep the status messages intact.
- Line 86024: The `except` in `_get_categorization` currently catches only `RuntimeError`, so other LLM errors bubble up. Update the handler in the `try/except` that sets `self._categorization_result = "NONE"` and updates `self.status` to catch a broader `Exception` (e.g., `except Exception as e`), keeping the same error-status assignment and fallback result so provider, connection, and timeout errors are handled consistently. A combined sketch of both fixes follows this list.
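Taken together, the two suggestions amount to something like the method sketch below. The attribute names (`self.custom_prompt`, `self.llm`, `self.route_categories`) and the prompt shape are assumptions for illustration, since the actual component source is not shown in this view:

```python
# Hypothetical sketch of _get_categorization with both fixes applied.
# Attribute names and prompt wording are assumptions, not the real diff.
def _get_categorization(self) -> str:
    input_text = self.input_text
    simple_routes = ", ".join(self.route_categories)  # assumed route data

    if self.custom_prompt:
        # Plain .replace() instead of str.format(), so braces in
        # user-provided text cannot raise KeyError/ValueError:
        prompt = (
            self.custom_prompt
            .replace("{input_text}", input_text)
            .replace("{routes}", simple_routes)
        )
    else:
        prompt = f"Categorize this input into one of [{simple_routes}]:\n{input_text}"

    try:
        response = self.llm.invoke(prompt)
        self._categorization_result = response.content.strip()
    except Exception as e:  # broader than RuntimeError: provider/network/timeout errors too
        self.status = f"Categorization failed: {e}"
        self._categorization_result = "NONE"  # sentinel so fallback routing still runs
    return self._categorization_result
```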
In `@src/lfx/src/lfx/components/llm_operations/llm_conditional_router.py`:
- Around lines 235-249: The `try/except` around the LLM invocation only catches `RuntimeError`, so non-runtime exceptions (HTTP, network, timeout, and client-specific errors) escape and break routing. Change the `except` to catch `Exception` (or add the specific exception types your LLM client raises), and on any failure set `self._categorization_result = "NONE"` and include the exception message in `self.status` so fallback routing works. Update the block around `llm.invoke` and the `response.content` handling to use the broader handler while preserving the existing behavior for successful responses.
- Around lines 220-228: The `custom_prompt` formatting can raise `KeyError` or `ValueError` for unknown placeholders or unmatched braces. Wrap the format call in a safe block: build a mapping with the expected keys (e.g., a `collections.defaultdict(lambda: "")` filled with `input_text` and `routes`), then attempt `formatted_custom = custom_prompt.format_map(safe_map)` inside a `try/except` catching `(KeyError, ValueError)`; on exception, fall back to the raw `custom_prompt` (or a sanitized version) so the router won't crash. Update the existing `custom_prompt` usage and status assignment to use this safe formatting approach; a sketch follows.
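The formatting suggestion for this file differs slightly from the first one: `format_map` over a `defaultdict` rather than brace escaping. A self-contained sketch, with illustrative values standing in for the component's inputs:

```python
from collections import defaultdict

# Illustrative values; in the component these come from the node's inputs.
custom_prompt = "Categorize {input_text} into one of: {routes}. {unknown} is blanked."
input_text = "refund my order"
simple_routes = "billing, support"

# format_map resolves unknown placeholders to "" via the defaultdict instead
# of raising KeyError; unmatched braces in the template can still raise
# ValueError, hence the fallback below.
safe_map = defaultdict(str, {"input_text": input_text, "routes": simple_routes})
try:
    formatted_custom = custom_prompt.format_map(safe_map)
except (KeyError, ValueError):
    formatted_custom = custom_prompt  # fall back to the raw prompt rather than crash

print(formatted_custom)
```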
cool! haven't read it too deeply but i would just add some test coverage to make sure the new cache logic works correctly
@HzaRashid will do that |
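For the record, a sketch of the kind of test being asked for. The import path and constructor are assumptions inferred from this PR's file list and walkthrough, not the tests that actually landed:

```python
from unittest.mock import MagicMock

# Hypothetical: class name from the PR walkthrough, import path assumed
# from the reviewed file. Adjust both to match the real code.
from lfx.components.llm_operations.llm_conditional_router import SmartRouterComponent


def test_categorization_is_cached():
    component = SmartRouterComponent()  # assumed default constructor
    component.llm = MagicMock()
    component.llm.invoke.return_value.content = "billing"

    first = component._get_categorization()
    second = component._get_categorization()

    assert first == second == "billing"
    # The point of the PR: one LLM call per execution, not one per output.
    component.llm.invoke.assert_called_once()
```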
Commits:
* fix: improve model process logic
* [autofix.ci] apply automated fixes
* test: add unit tests for SmartRouterComponent categorization logic
* fix(tests): Update test to verify dynamic loading of options in CurrentDateComponent schema

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Himavarsha <40851462+HimavarshaVS@users.noreply.github.com>
ticket: LE-145
Root Cause
The Smart Router component was making redundant LLM calls during execution. Each connected output triggers its associated method independently, and both the route-matching logic and the fallback (Else) logic were performing their own LLM categorization calls.
This meant that with multiple routes connected plus the Else output, the same categorization prompt was being sent to the LLM multiple times per execution—resulting in 2x or more latency.
Fix
Introduced a caching mechanism that ensures the LLM categorization is performed only once per component execution. All output methods now share the cached result, eliminating redundant API calls and significantly reducing response time.
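In outline, the caching pattern reads something like the sketch below. This is a reconstruction from the description above and the change list that follows, not the verbatim diff; helper names like `_build_prompt` are assumptions:

```python
class SmartRouterComponent:
    # None means "not categorized yet this execution" (attribute name from the PR).
    _categorization_result: str | None = None

    def _get_categorization(self) -> str:
        # The first output method to run pays for the LLM round trip;
        # every later call reuses the cached result instead of re-sending
        # the same categorization prompt.
        if self._categorization_result is None:
            response = self.llm.invoke(self._build_prompt())  # assumed helper
            self._categorization_result = response.content.strip()
        return self._categorization_result

    def process_case(self):
        category = self._get_categorization()
        # ...match `category` against the connected route categories...

    def default_response(self):
        category = self._get_categorization()  # cache hit: no second LLM call
        # ...emit on the else output when nothing matched...
```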
LLM Categorization Caching and Routing Logic Improvements:
* Added a `_categorization_result` attribute and `_get_categorization()` method to cache the LLM categorization result, preventing multiple LLM calls during a single component execution. [1] [2]
* Updated `process_case()` to use the cached categorization result and simplified the category-matching logic, ensuring the match state is cleared only on the first call. [1] [2]
* Updated `default_response()` to use the cached categorization result and removed the duplicate prompt generation and LLM invocation logic, streamlining the else-case handling.

Screen.Recording.2026-01-23.at.10.49.05.AM.mov
Summary by CodeRabbit
Release Notes