Conversation


@codeflash-ai codeflash-ai bot commented Jan 21, 2026

⚡️ This pull request contains optimizations for PR #10702

If you approve this dependent PR, these changes will be merged into the original PR branch pluggable-auth-service.

This PR will be automatically closed if the original PR is merged.


📄 14% (0.14x) speedup for AuthService.decrypt_api_key in src/backend/base/langflow/services/auth/service.py

⏱️ Runtime : 4.99 milliseconds → 4.36 milliseconds (best of 8 runs)

📝 Explanation and details

The optimized code achieves a 14% speedup by implementing Fernet instance caching and eliminating redundant string encoding.

Key Optimizations

1. Fernet Instance Caching (Primary Optimization)

The original code reconstructed a Fernet object on every decrypt_api_key call via _get_fernet(). Line profiler shows this consumed 19.3% of total time in the original (3.99ms out of 20.73ms), with Fernet(valid_key) initialization taking 60.1% of _get_fernet()'s time alone.

The optimized version adds caching fields (_cached_secret_key and _cached_fernet) that store the last SECRET_KEY and its corresponding Fernet instance. When the SECRET_KEY hasn't changed (the common case in production), _get_fernet() returns the cached instance immediately. Line profiler confirms the cache hit path is extremely fast: 204 out of 209 calls hit the cache, reducing _get_fernet() time from 3.19ms to just 0.87ms (73% reduction).

This optimization is particularly effective because:

  • Fernet construction involves cryptographic key validation and internal state setup (expensive operations)
  • The SECRET_KEY rarely changes during runtime—typically set once at service startup
  • The function is called on every API key decryption in test scenarios (200+ times)
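A minimal sketch of this caching pattern follows. The field names `_cached_secret_key` and `_cached_fernet` come from the PR description; the holder class and the way it reads the secret key are simplified assumptions, not the actual `AuthService` code:

```python
from cryptography.fernet import Fernet


class CachedFernetHolder:
    """Simplified stand-in for AuthService: remembers the last-seen
    SECRET_KEY and its Fernet instance, rebuilding only on key change."""

    def __init__(self, secret_key: str):
        self.secret_key = secret_key
        self._cached_secret_key = None
        self._cached_fernet = None

    def _get_fernet(self) -> Fernet:
        # Fast path: SECRET_KEY unchanged since last call, reuse the instance
        if self._cached_fernet is not None and self._cached_secret_key == self.secret_key:
            return self._cached_fernet
        # Slow path: construct (key validation, HMAC/AES key split) and cache
        fernet = Fernet(self.secret_key)
        self._cached_secret_key = self.secret_key
        self._cached_fernet = fernet
        return fernet
```

Because the cache key is the SECRET_KEY itself, a key rotation at runtime still invalidates the cached instance automatically.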

2. Token Bytes Preparation (Micro-optimization)

The original code called encrypted_api_key.encode() inside the try block. The optimized version hoists this to token_bytes = encrypted_api_key.encode() before the try/except, eliminating one redundant .encode() call in the common success path. This is a minor optimization but avoids unnecessary string-to-bytes conversion when decryption succeeds (the overwhelmingly common case).
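The shape of the change can be sketched as below. This is illustrative only: the `gAAAAA` prefix check and the fail-closed error handling are inferred from the generated tests further down, not copied from the actual method body:

```python
from cryptography.fernet import Fernet, InvalidToken

# All Fernet tokens begin with this base64 prefix (version byte 0x80)
FERNET_TOKEN_PREFIX = "gAAAAA"


def decrypt_api_key_sketch(fernet: Fernet, encrypted_api_key) -> str:
    if not isinstance(encrypted_api_key, str) or not encrypted_api_key:
        return ""  # invalid input -> empty string
    if not encrypted_api_key.startswith(FERNET_TOKEN_PREFIX):
        return encrypted_api_key  # plaintext key returned as-is
    # Hoisted out of the try block: a single .encode() before decryption
    token_bytes = encrypted_api_key.encode()
    try:
        return fernet.decrypt(token_bytes).decode()
    except (InvalidToken, ValueError):
        return ""  # corrupted token -> empty string, no exception raised
```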

Test Case Performance

Based on annotated tests, the optimizations excel when:

  • High-volume decryption: test_large_scale_decrypt_many_tokens_under_limit (200 tokens) benefits most since cache hits compound across calls
  • Repeated decryptions with stable SECRET_KEY: All tests benefit as they use the same SECRET_KEY throughout their execution
  • Valid encrypted tokens: The token_bytes optimization helps the success path (most test cases)

The optimizations maintain identical behavior for edge cases (invalid inputs, plaintext keys, corrupted tokens) while significantly reducing overhead in the hot path.

Why This Matters

In production authentication services, decrypt_api_key is likely called frequently (e.g., per-request authentication). Since the SECRET_KEY is essentially static after initialization, caching the Fernet instance eliminates ~73% of the key-processing overhead on every call after the first, making this a high-impact optimization for any workload with repeated decryption operations.
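An illustrative micro-benchmark of the effect (my own sketch, not the PR's profiling harness) shows the construction cost that caching avoids on each call:

```python
import timeit

from cryptography.fernet import Fernet

key = Fernet.generate_key()
token = Fernet(key).encrypt(b"api-key")

# Original pattern: build a fresh Fernet for every decryption
t_rebuild = timeit.timeit(lambda: Fernet(key).decrypt(token), number=2000)

# Cached pattern: construct once, reuse across decryptions
cached = Fernet(key)
t_cached = timeit.timeit(lambda: cached.decrypt(token), number=2000)

print(f"rebuild: {t_rebuild:.4f}s  cached: {t_cached:.4f}s")
```

The absolute numbers depend on the machine, but the rebuild variant does strictly more work per call, so the cached variant should consistently come out ahead.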

Correctness verification report:

Test                           Status
⚙️ Existing Unit Tests         🔘 None Found
🌀 Generated Regression Tests  219 Passed
⏪ Replay Tests                🔘 None Found
🔎 Concolic Coverage Tests     🔘 None Found
📊 Tests Coverage              100.0%

🌀 Generated Regression Tests
from __future__ import annotations

# imports
from cryptography.fernet import Fernet
from langflow.services.auth.service import AuthService


# Minimal settings classes to provide the nested attribute access:
class SecretValue:
    """Wraps a secret string and exposes get_secret_value method as used by AuthService."""
    def __init__(self, secret: str):
        self._secret = secret

    def get_secret_value(self) -> str:
        return self._secret


class AuthSettings:
    """Container for auth-related settings. Only SECRET_KEY is needed for these tests."""
    def __init__(self, secret_key: str):
        self.SECRET_KEY = SecretValue(secret_key)


class SettingsService:
    """SettingsService-like container that exposes auth_settings attribute."""
    def __init__(self, secret_key: str):
        self.auth_settings = AuthSettings(secret_key)


def test_invalid_inputs_return_empty_for_none_and_non_string():
    # Arrange: create a settings service with a valid Fernet key
    fernet_key = Fernet.generate_key().decode()  # valid Fernet key string
    settings = SettingsService(secret_key=fernet_key)
    service = AuthService(settings_service=settings)

    # Act & Assert: None input should return empty string
    codeflash_output = service.decrypt_api_key(None)  # None is invalid type -> empty string
    assert codeflash_output == ""

    # Act & Assert: empty string input should return empty string
    codeflash_output = service.decrypt_api_key("")  # empty string treated as invalid input
    assert codeflash_output == ""

    # Act & Assert: non-string types (e.g., integer) should also return empty string
    codeflash_output = service.decrypt_api_key(12345)  # non-string -> empty string
    assert codeflash_output == ""


def test_plaintext_key_returned_as_is():
    # Arrange: plaintext API key that does not start with the Fernet token prefix
    plaintext_key = "plain_api_key_12345"
    settings = SettingsService(secret_key=Fernet.generate_key().decode())
    service = AuthService(settings_service=settings)

    # Act: pass plaintext (not starting with "gAAAAA")
    codeflash_output = service.decrypt_api_key(plaintext_key); result = codeflash_output
    assert result == plaintext_key  # plaintext without the Fernet prefix is returned unchanged


def test_decrypt_with_valid_long_secret_key_returns_original_plaintext():
    # Arrange: create a valid long secret key (generated by Fernet.generate_key)
    secret_key = Fernet.generate_key().decode()  # properly formatted base64 key string
    settings = SettingsService(secret_key=secret_key)
    service = AuthService(settings_service=settings)

    # Use the same Fernet instance that service will use to encrypt a plaintext
    f = service._get_fernet()

    plaintext = "super-secret-api-key"
    # Act: encrypt the plaintext using the service's fernet instance
    encrypted = f.encrypt(plaintext.encode()).decode()

    # Act: decrypt via the method under test
    codeflash_output = service.decrypt_api_key(encrypted); decrypted = codeflash_output
    assert decrypted == plaintext


def test_decrypt_with_short_secret_key_generates_deterministic_key_and_decrypts():
    # Arrange: use a short secret key (len < MINIMUM_KEY_LENGTH)
    raw_short_secret = "short_secret"  # length < 32 triggers the random-seeded branch
    settings = SettingsService(secret_key=raw_short_secret)
    service = AuthService(settings_service=settings)

    # The derived Fernet should be deterministic for the same short secret key.
    f = service._get_fernet()

    plaintext = "edge-case-key"
    token = f.encrypt(plaintext.encode()).decode()

    # Act: decrypt using the service method
    codeflash_output = service.decrypt_api_key(token); decrypted = codeflash_output
    assert decrypted == plaintext


def test_corrupted_token_returns_empty_string_and_does_not_raise():
    # Arrange: valid settings and valid token to start with
    secret_key = Fernet.generate_key().decode()
    settings = SettingsService(secret_key=secret_key)
    service = AuthService(settings_service=settings)
    f = service._get_fernet()

    plaintext = "will_be_corrupted"
    token = f.encrypt(plaintext.encode()).decode()

    # Corrupt the token by altering a character in the middle
    # This should cause decryption to fail and result in empty string
    corrupted = token[:10] + ("A" if token[10] != "A" else "B") + token[11:]

    # Act: attempt decryption of corrupted token
    codeflash_output = service.decrypt_api_key(corrupted); result = codeflash_output
    assert result == ""  # corrupted token -> empty string, no exception raised


def test_large_scale_decrypt_many_tokens_under_limit():
    # Arrange: use a valid, long secret key to create many tokens
    secret_key = Fernet.generate_key().decode()
    settings = SettingsService(secret_key=secret_key)
    service = AuthService(settings_service=settings)
    f = service._get_fernet()

    # Create a moderate number of tokens (kept well below 1000 per instructions)
    num_tokens = 200  # large but within limits to test scalability
    plaintexts = [f"bulk_key_{i}" for i in range(num_tokens)]

    # Encrypt all plaintexts
    encrypted_tokens = [f.encrypt(p.encode()).decode() for p in plaintexts]

    # Act & Assert: decrypt all tokens with the service and verify each original plaintext
    for original, token in zip(plaintexts, encrypted_tokens):
        codeflash_output = service.decrypt_api_key(token); decrypted = codeflash_output
        assert decrypted == original


def test_mixed_inputs_batch_processing():
    # Arrange: a settings instance and a mixture of plaintext and encrypted API keys
    secret_key = Fernet.generate_key().decode()
    settings = SettingsService(secret_key=secret_key)
    service = AuthService(settings_service=settings)
    f = service._get_fernet()

    items = [
        "plain_text_no_prefix",  # plain text -> returned as-is
        f.encrypt(b"enc1").decode(),  # encrypted -> decrypted
        "",  # empty string -> invalid -> empty result
        None,  # None -> invalid -> empty result
        999,   # non-string -> invalid -> empty result
    ]

    # Act: process each input and collect results
    results = [service.decrypt_api_key(item) for item in items]

    # Assert: expected outcomes noted in the comments above
    assert results == ["plain_text_no_prefix", "enc1", "", "", ""]
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.
#------------------------------------------------
from unittest.mock import Mock

import pytest
from cryptography.fernet import Fernet
from langflow.services.auth.service import AuthService
from lfx.services.settings.service import SettingsService

# ============================================================================
# FIXTURES
# ============================================================================

@pytest.fixture
def mock_settings_service():
    """Create a mock SettingsService with a valid SECRET_KEY."""
    mock_settings = Mock(spec=SettingsService)
    # Create a valid Fernet key (base64-encoded 32-byte key)
    valid_key = Fernet.generate_key()
    mock_settings.auth_settings.SECRET_KEY.get_secret_value.return_value = valid_key.decode()
    return mock_settings


@pytest.fixture
def auth_service(mock_settings_service):
    """Create an AuthService instance with mocked settings."""
    service = AuthService(mock_settings_service)
    return service


@pytest.fixture
def fernet_cipher():
    """Create a Fernet cipher for encrypting test data."""
    return Fernet(Fernet.generate_key())

To edit these changes, run git checkout codeflash/optimize-pr10702-2026-01-21T21.21.40 and push.

@codeflash-ai codeflash-ai bot added the ⚡️ codeflash Optimization PR opened by Codeflash AI label Jan 21, 2026

coderabbitai bot commented Jan 21, 2026

Important

Review skipped

Bot user detected.

To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Comment @coderabbitai help to get the list of available commands and usage tips.

@github-actions github-actions bot added the community Pull Request from an external contributor label Jan 21, 2026

codecov bot commented Jan 21, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 34.37%. Comparing base (de6e7f4) to head (464d118).

❌ Your project check has failed because the head coverage (41.66%) is below the target coverage (60.00%). You can increase the head coverage or adjust the target coverage.

Additional details and impacted files


@@                    Coverage Diff                     @@
##           pluggable-auth-service   #11404      +/-   ##
==========================================================
- Coverage                   34.37%   34.37%   -0.01%     
==========================================================
  Files                        1414     1414              
  Lines                       66787    66787              
  Branches                     9896     9896              
==========================================================
- Hits                        22961    22960       -1     
  Misses                      42616    42616              
- Partials                     1210     1211       +1     
Flag Coverage Δ
lfx 41.66% <ø> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.

Files with missing lines Coverage Δ
src/backend/base/langflow/services/auth/service.py 75.00% <ø> (ø)

... and 1 file with indirect coverage changes


