⚡️ Speed up function run_response_to_workflow_response by 12% in PR #11255 (developer-api)
#11359
Conversation
- Add workflow API endpoints (POST /workflow, GET /workflow, POST /workflow/stop)
- Implement developer API protection with settings check
- Add comprehensive workflow schema models with proper validation
- Create extensive unit test suite covering all scenarios
- Apply Ruff linting standards and fix all code quality issues
- Support API key authentication for all workflow endpoints
Co-authored-by: Gabriel Luiz Freitas Almeida <gabriel@langflow.org>
Co-authored-by: codeflash-ai[bot] <148906541+codeflash-ai[bot]@users.noreply.github.com>
…omponent_index.json
The optimized code achieves an **11% speedup** (from 2.78ms to 2.49ms) through two key optimizations:
## 1. Fast-Path Message Text Extraction (Primary Speedup)
The main performance gain comes from adding a fast-path in `_simplify_output_content` for common message structures **before** falling back to the heavier `_extract_text_from_message` function:
```python
# Fast-path checks using direct dict.get() operations
msg = content.get("message")
if isinstance(msg, dict):
    nested_msg = msg.get("message")
    if isinstance(nested_msg, str):
        return nested_msg
    # ... more fast-path checks ...
if isinstance(msg, str):
    return msg  # Early return for simple case
```
**Why this is faster:**
- Direct `dict.get()` operations are much faster than `_extract_nested_value()`, which performs `hasattr()` and `getattr()` checks at each level (contrasted in the sketch after this list)
- Line profiler shows `_extract_text_from_message` time dropped from **3.28ms → 0.33ms** (90% reduction)
- Only **22 out of 232 calls** now reach the expensive fallback function (versus all 232 previously)
- Most messages in the test workload follow simple structures like `{"message": "text"}` which hit the fast-path
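As an illustration of the difference, here is a minimal, self-contained sketch contrasting the two access styles. The helper `walk_nested` is hypothetical and only mimics the attribute/key probing that `_extract_nested_value()` is described as doing; it is not the actual Langflow implementation.

```python
from typing import Any


def walk_nested(obj: Any, path: list[str]) -> Any:
    """Hypothetical slow path: probe each level with hasattr()/getattr(),
    falling back to dict lookups, as the generic extractor is described."""
    for key in path:
        if hasattr(obj, key):
            obj = getattr(obj, key)
        elif isinstance(obj, dict) and key in obj:
            obj = obj[key]
        else:
            return None
    return obj


def fast_message_text(content: dict) -> str | None:
    """Fast path: plain dict.get() calls, no attribute machinery."""
    msg = content.get("message")
    if isinstance(msg, dict):
        nested = msg.get("message")
        if isinstance(nested, str):
            return nested
    if isinstance(msg, str):
        return msg
    return None


content = {"message": {"message": "hello"}}
assert fast_message_text(content) == "hello"
assert walk_nested(content, ["message", "message"]) == "hello"
```

Both return the same text for the common `{"message": {"message": "..."}}` shape; the fast path just gets there without touching the attribute machinery.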
## 2. Set-Based Terminal Node Filtering
Converting `terminal_node_ids` to a set for O(1) membership testing:
```python
terminal_node_ids_set = set(terminal_node_ids)
terminal_vertices = [v for v in graph.vertices if v.id in terminal_node_ids_set]
```
**Why this is faster:**
- List comprehension filtering time reduced from **201μs → 117μs** (42% reduction)
- Avoids O(n²) list membership checks when filtering vertices (see the timing sketch after this list)
- Particularly beneficial when there are many vertices in the graph
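As a standalone illustration of the effect (synthetic IDs rather than the Langflow graph types), a quick `timeit` comparison of list- versus set-based membership during filtering:

```python
import timeit

# Synthetic stand-ins for graph vertex IDs and terminal node IDs
vertex_ids = [f"vertex-{i}" for i in range(5000)]
terminal_node_ids = [f"vertex-{i}" for i in range(0, 5000, 10)]

list_filter = timeit.timeit(
    "[v for v in vertex_ids if v in terminal_node_ids]",
    globals=globals(),
    number=20,
)
set_filter = timeit.timeit(
    "ids = set(terminal_node_ids); [v for v in vertex_ids if v in ids]",
    globals=globals(),
    number=20,
)
print(f"list membership: {list_filter:.3f}s  set membership: {set_filter:.3f}s")
```

The one-time cost of building the set is amortized across every vertex checked, which is why the gain grows with graph size.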
## 3. Minor: getattr() for Attribute Access
Replaced `hasattr()` + attribute access with single `getattr()` calls in `_get_raw_content`:
```python
outputs = getattr(vertex_output_data, "outputs", None)
if outputs is not None:
    return outputs
```
This eliminates redundant attribute lookups, though the impact is minor.
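For context, the pattern presumably being replaced would look roughly like the sketch below. `_VertexOutput` and both helper functions are hypothetical stand-ins used only to show the duplicated lookup; they are not the actual code in `converters.py`.

```python
class _VertexOutput:
    """Hypothetical stand-in for a vertex output object."""

    def __init__(self, outputs):
        self.outputs = outputs


def get_raw_content_old(vertex_output_data):
    # Assumed original pattern: hasattr() performs one attribute lookup,
    # and the return statement repeats the same lookup.
    if hasattr(vertex_output_data, "outputs"):
        return vertex_output_data.outputs
    return None


def get_raw_content_new(vertex_output_data):
    # Optimized pattern: a single getattr() call with a default.
    outputs = getattr(vertex_output_data, "outputs", None)
    if outputs is not None:
        return outputs
    return None


data = _VertexOutput(outputs=["raw content"])
assert get_raw_content_old(data) == get_raw_content_new(data) == ["raw content"]
```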
## Impact on Workloads
Based on test results, these optimizations are particularly effective for:
- **Message-heavy workflows**: Tests with nested message structures see the biggest gains from fast-path extraction
- **Large graphs**: The set-based filtering helps when processing many terminal nodes
- **Common message formats**: Simple `{"message": "text"}` patterns benefit most from early returns
The optimizations preserve all original logic and handle all edge cases correctly, making them safe to merge.
Codecov Report

❌ Patch coverage is …

❌ Your project check has failed because the head coverage (41.60%) is below the target coverage (60.00%). You can increase the head coverage or adjust the target coverage.

Additional details and impacted files:

```
@@             Coverage Diff              @@
##           developer-api   #11359       +/-   ##
==================================================
+ Coverage          34.72%   34.73%     +0.01%
==================================================
  Files               1415     1415
  Lines              67426    67452        +26
  Branches            9910     9910
==================================================
+ Hits               23411    23427        +16
- Misses             42800    42809         +9
- Partials            1215     1216         +1
```

Flags with carried forward coverage won't be shown.
⚡️ This pull request contains optimizations for PR #11255
If you approve this dependent PR, these changes will be merged into the original PR branch `developer-api`.

📄 12% (0.12x) speedup for `run_response_to_workflow_response` in `src/backend/base/langflow/api/v2/converters.py`

⏱️ Runtime: 2.78 milliseconds → 2.49 milliseconds (best of 45 runs)
✅ Correctness verification report (existing unit tests and generated regression tests)
To edit these changes: `git checkout codeflash/optimize-pr11255-2026-01-19T21.34.46` and push.