⚡️ Speed up method AuthService.decrypt_api_key by 14% in PR #10702 (pluggable-auth-service)
#11404
⚡️ This pull request contains optimizations for PR #10702
If you approve this dependent PR, these changes will be merged into the original PR branch `pluggable-auth-service`.

📄 14% (0.14x) speedup for `AuthService.decrypt_api_key` in `src/backend/base/langflow/services/auth/service.py`

⏱️ Runtime: 4.99 milliseconds → 4.36 milliseconds (best of 8 runs)

📝 Explanation and details
The optimized code achieves a 14% speedup by implementing Fernet instance caching and eliminating redundant string encoding.
Key Optimizations
1. Fernet Instance Caching (Primary Optimization)
The original code reconstructed a `Fernet` object on every `decrypt_api_key` call via `_get_fernet()`. Line profiling shows this consumed 19.3% of total time in the original (3.99ms out of 20.73ms), with `Fernet(valid_key)` initialization alone taking 60.1% of `_get_fernet()`'s time.

The optimized version adds caching fields (`_cached_secret_key` and `_cached_fernet`) that store the last SECRET_KEY and its corresponding Fernet instance. When the SECRET_KEY hasn't changed (the common case in production), `_get_fernet()` returns the cached instance immediately. Line profiling confirms the cache-hit path is extremely fast: 204 out of 209 calls hit the cache, reducing `_get_fernet()` time from 3.19ms to just 0.87ms (a 73% reduction).

This optimization is particularly effective because the SECRET_KEY is essentially static after initialization, so nearly every call after the first takes the cache-hit path.
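The caching pattern described above can be sketched as follows. This is a minimal illustration, not the actual `service.py` code: `FakeFernet` is a construction-counting stand-in for `cryptography.fernet.Fernet`, and the `AuthService` shown here is reduced to just the cached `_get_fernet()` path (the `_cached_secret_key` and `_cached_fernet` attribute names come from the PR description).

```python
class FakeFernet:
    """Counting stand-in for cryptography.fernet.Fernet (assumption)."""
    constructions = 0

    def __init__(self, key):
        FakeFernet.constructions += 1
        self.key = key


class AuthService:
    def __init__(self, secret_key):
        self.secret_key = secret_key
        self._cached_secret_key = None
        self._cached_fernet = None

    def _get_fernet(self):
        # Fast path: SECRET_KEY unchanged since the last call
        # (the common case in production) -> reuse the cached instance.
        if self._cached_fernet is not None and self._cached_secret_key == self.secret_key:
            return self._cached_fernet
        # Slow path: (re)build the Fernet instance and remember the key.
        self._cached_secret_key = self.secret_key
        self._cached_fernet = FakeFernet(self.secret_key)
        return self._cached_fernet


svc = AuthService("key-1")
for _ in range(5):
    svc._get_fernet()
print(FakeFernet.constructions)  # → 1 (four of five calls hit the cache)

svc.secret_key = "key-2"  # key rotation invalidates the cache
svc._get_fernet()
print(FakeFernet.constructions)  # → 2
```

Comparing the stored key on every call (rather than caching unconditionally) keeps the optimization safe under SECRET_KEY rotation.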
2. Token Bytes Preparation (Micro-optimization)
The original code called
encrypted_api_key.encode()inside thetryblock. The optimized version hoists this totoken_bytes = encrypted_api_key.encode()before the try/except, eliminating one redundant.encode()call in the common success path. This is a minor optimization but avoids unnecessary string-to-bytes conversion when decryption succeeds (the overwhelmingly common case).Test Case Performance
Based on annotated tests, the optimizations excel when cache hits compound across calls: `test_large_scale_decrypt_many_tokens_under_limit` (200 tokens) benefits most. The optimizations maintain identical behavior for edge cases (invalid inputs, plaintext keys, corrupted tokens) while significantly reducing overhead in the hot path.
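The token-bytes hoist from optimization 2 can be sketched as below. This is a hedged illustration, not the real method: `ReverseFernet` is a toy stand-in for `Fernet`, and the plaintext-key fallback in the `except` branch is an assumption based on the edge cases the PR mentions.

```python
def decrypt_api_key(encrypted_api_key, fernet):
    # Encode once, outside the try block: the success path performs
    # no redundant string-to-bytes conversion.
    token_bytes = encrypted_api_key.encode()
    try:
        return fernet.decrypt(token_bytes).decode()
    except Exception:
        # Assumed fallback: the PR lists plaintext keys as a supported
        # edge case, so return the input unchanged on failure.
        return encrypted_api_key


class ReverseFernet:
    """Toy stand-in for cryptography.fernet.Fernet (assumption)."""
    def decrypt(self, token):
        return token[::-1]


print(decrypt_api_key("abc", ReverseFernet()))  # → cba
```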
Why This Matters
In production authentication services,
decrypt_api_keyis likely called frequently (e.g., per-request authentication). Since the SECRET_KEY is essentially static after initialization, caching the Fernet instance eliminates ~73% of the key-processing overhead on every call after the first, making this a high-impact optimization for any workload with repeated decryption operations.✅ Correctness verification report:
🌀 Generated Regression Tests
To edit these changes, `git checkout codeflash/optimize-pr10702-2026-01-21T21.21.40` and push.