⚡️ Speed up function calculate_text_metrics by 66% in PR #11114 (feat/langchain-1.0)
#11355
⚡️ This pull request contains optimizations for PR #11114
If you approve this dependent PR, these changes will be merged into the original PR branch feat/langchain-1.0.

📄 66% (0.66x) speedup for calculate_text_metrics in src/backend/base/langflow/api/v1/knowledge_bases.py
⏱️ Runtime: 80.2 milliseconds → 48.3 milliseconds (best of 74 runs)

📝 Explanation and details
The optimized code achieves a 66% speedup by eliminating redundant pandas string operations in a loop. Here's why it's faster:
Key Optimization: Batch Processing Over Iteration
Original approach: Iterates through each text column, applying astype(str), fillna(""), str.len(), and str.split() separately for each column. This triggers pandas overhead (method dispatch, memory allocation, intermediate series creation) repeatedly: 174 times in the profiler results.

Optimized approach: Concatenates the present text columns into a single series with pd.concat(), then applies the string operations (str.len() and str.split()) once on the combined series. A minimal sketch of both shapes is shown below.
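The following sketch illustrates the two shapes, assuming hypothetical column names and a simple character/word-count metric; the real calculate_text_metrics in knowledge_bases.py may compute different fields.

```python
import pandas as pd

# Hypothetical column list; the real function derives its text columns from the knowledge base DataFrame.
TEXT_COLUMNS = ["title", "content", "summary"]


def metrics_per_column(df: pd.DataFrame) -> tuple[int, int]:
    """Original shape: apply the string operations column by column inside a loop."""
    total_chars, total_words = 0, 0
    for col in TEXT_COLUMNS:
        if col not in df.columns:
            continue
        text = df[col].astype(str).fillna("")          # per-column conversion
        total_chars += int(text.str.len().sum())        # per-column str.len()
        total_words += int(text.str.split().str.len().sum())  # per-column str.split()
    return total_chars, total_words


def metrics_batched(df: pd.DataFrame) -> tuple[int, int]:
    """Optimized shape: concatenate all text columns once, then call each string method once."""
    present = [c for c in TEXT_COLUMNS if c in df.columns]
    if not present:
        return 0, 0
    combined = pd.concat(
        [df[c].astype(str).fillna("") for c in present], ignore_index=True
    )
    total_chars = int(combined.str.len().sum())
    total_words = int(combined.str.split().str.len().sum())
    return total_chars, total_words
```

Both functions keep the astype(str).fillna("") order described above, so they return the same totals; only the number of pandas string-method calls differs.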
Why This Works

Pandas string methods have significant per-call overhead. The line profiler shows that the one-time pd.concat() cost (~94 ms) is more than offset by eliminating 173 redundant string method calls: batching removes the repeated method dispatch, memory allocation, and intermediate series creation. A rough way to reproduce the comparison on synthetic data is sketched below.
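This micro-benchmark sketch reuses the two hypothetical functions from the snippet above; the data sizes are illustrative and not the profiled workload.

```python
import timeit

import numpy as np
import pandas as pd

# Synthetic DataFrame with three text columns of varying lengths.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    col: ["lorem ipsum dolor sit amet " * int(n) for n in rng.integers(1, 20, size=50_000)]
    for col in ["title", "content", "summary"]
})

# metrics_per_column / metrics_batched are the sketch functions defined earlier.
per_column = timeit.timeit(lambda: metrics_per_column(df), number=10)
batched = timeit.timeit(lambda: metrics_batched(df), number=10)
print(f"per-column: {per_column:.2f}s  batched: {batched:.2f}s")
```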
Test Case Performance

Based on annotated tests, the optimization excels when:

- Multiple text columns are aggregated (test_multiple_columns_aggregate_counts, test_large_dataframe_multiple_columns): more columns mean a higher relative gain from batch processing
- Cases exercise the if col not in df.columns checks inside the loop

The optimization maintains identical behavior for all edge cases (NaN handling, type conversion, empty strings) since the order of operations (astype(str).fillna("")) is preserved, as the small check below illustrates.
Impact Assessment

Without function_references, the specific deployment context is unclear. However, this function likely processes knowledge base content where text metrics inform chunking strategies or resource allocation. The 66% speedup would significantly benefit such workflows.

✅ Correctness verification report:
🌀 Generated Regression Tests
To edit these changes, git checkout codeflash/optimize-pr11114-2026-01-19T15.29.55 and push.