perf: model manage #3226


Merged

merged 1 commit into v2 from pr@v2@perf_model_manage on Jun 9, 2025

Conversation

shaohuzhang1
Contributor

perf: model manage


f2c-ci-robot bot commented Jun 9, 2025

Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected; please follow our release note process to remove it.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.


f2c-ci-robot bot commented Jun 9, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@shaohuzhang1 shaohuzhang1 merged commit 5f19924 into v2 Jun 9, 2025
3 of 4 checks passed
@shaohuzhang1 shaohuzhang1 deleted the pr@v2@perf_model_manage branch June 9, 2025 13:09
        ModelManage.cache.clear_timeout_data()
        if time.time() - ModelManage.up_clear_time > 60 * 60:
            threading.Thread(target=lambda: ModelManage.cache.clear_timeout_data()).start()
            ModelManage.up_clear_time = time.time()

    @staticmethod
    def delete_key(_id):
shaohuzhang1
Contributor Author

The provided code includes several improvements over the previous version:

  1. Improved Locking Mechanism: A locks dictionary, together with thread-safe double-checked initialization, gives each _id its own lock object instead of serializing every caller on a single global lock. This reduces contention and improves performance when multiple models are accessed simultaneously.

  2. Optimized Timeout-Cache Clearing: The timeout-clearing logic is wrapped in a separate function and executed asynchronously via threading.Thread, so other parts of the application are not blocked while expired cache entries are removed.

  3. Simplified get_model Method: Fetching and caching the model instance directly inside the method keeps the control flow concise and easy to read.

Here's an optimized version of the code:

import threading
import time

from common.cache.mem_cache import MemCache

class ModelManage:
    cache = MemCache('model', {})
    up_clear_time = time.time()

    _lock = threading.Lock()
    locks = {}

    @staticmethod
    def _get_lock(_id):
        # Double-checked locking: take the global lock only when the per-id
        # lock has not been created yet.
        lock = ModelManage.locks.get(_id)
        if lock is None:
            with ModelManage._lock:
                lock = ModelManage.locks.get(_id)
                if lock is None:
                    lock = threading.Lock()
                    ModelManage.locks[_id] = lock
        return lock

    @staticmethod
    def get_model(_id, get_model):
        lock = ModelManage._get_lock(_id)
        try:
            with lock:
                model_instance = ModelManage.cache.get(_id)
                if model_instance is None or not model_instance.is_cache_model():
                    model_instance = get_model(_id)
                    ModelManage.cache.set(_id, model_instance, timeout=60 * 60 * 8)
                return model_instance
        except Exception as e:
            # Log and re-raise so callers never silently receive None.
            print(f"Error getting model {_id}: {e}")
            raise
        finally:
            ModelManage.clear_timeout_cache()

    @staticmethod
    def clear_timeout_cache():
        # Throttled cleanup executed in a background thread so callers are
        # never blocked while expired cache entries are removed.
        if time.time() - ModelManage.up_clear_time > 60:
            threading.Thread(target=ModelManage.cache.clear_timeout_data).start()
            ModelManage.up_clear_time = time.time()

    @staticmethod
    def delete_key(_id):
        pass

Key Changes:

  • Locks Dictionary: manages one lock per model _id.
  • Simplified Lock Handling: _get_lock encapsulates lock creation; get_model scopes its critical section with a with block.
  • Asynchronous Timeout-Cache Clearing: expired entries are cleared in a throttled background thread without blocking callers.
  • Exception Handling: errors in get_model are logged and re-raised rather than silently swallowed.
These changes should improve efficiency and robustness in managing models in your application.
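
For context, here is a rough usage sketch of the per-id locking path. DummyModel and load_model_from_db are hypothetical stand-ins for the real provider model and for whatever factory callable the application passes as get_model:

import threading

class DummyModel:
    # Hypothetical stand-in for a provider model; only is_cache_model() matters here.
    def __init__(self, _id):
        self._id = _id

    def is_cache_model(self):
        return True

def load_model_from_db(_id):
    # Hypothetical factory passed as the get_model callable.
    print(f"building model {_id}")
    return DummyModel(_id)

def worker(_id):
    model = ModelManage.get_model(_id, load_model_from_db)
    # With the per-id lock and the cache, concurrent workers for the same id
    # trigger at most one factory call; the rest are served from the cache.

threads = [threading.Thread(target=worker, args=("model-42",)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()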

            rsa_value = get_key_pair_by_sql()
            rsa_cache.set(cache_key, rsa_value)
        finally:
            lock.release()
    return rsa_value


shaohuzhang1
Contributor Author

The code provided has some minor issues that can be addressed:

  1. Use of lock.acquire() without a guaranteed lock.release():

    • The lock is acquired, but it is not released on the path where the second rsa_cache.get(cache_key) returns a non-null value.
    • The lock then stays held, which can deadlock other threads that call this function.
  2. Exception Handling Inside the Lock Block:

    • Exceptions raised while the lock is held should be handled explicitly, and the lock must be released on every exit path, whether or not an exception occurred.
      As written, the try/finally is easy to misread: it is unclear which code runs when an exception is raised inside the locked section. A with block combined with an explicit except makes the control flow unambiguous.

Suggested Corrections:

def get_key_pair():
    rsa_value = rsa_cache.get(cache_key)
    if rsa_value is None:
        with lock:
            rsa_value = rsa_cache.get(cache_key)
            if rsa_value is not None:
                return rsa_value

            try:
                rsa_value = get_key_pair_by_sql()
                rsa_cache.set(cache_key, rsa_value)
            except Exception as e:
                # Handle the exception here (e.g., log it)
                print(f"An error occurred while fetching RSA key pair: {str(e)}")

    return rsa_value

By making these changes, the code will correctly manage its locking behavior and handle potential errors more robustly.
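
As a self-contained illustration of the double-checked locking pattern above, the following sketch defines lock, rsa_cache, cache_key and get_key_pair_by_sql purely as stand-ins; the real objects live in the application code:

import threading

lock = threading.Lock()
rsa_cache = {}                 # stand-in for the real cache object
cache_key = "rsa_key_pair"

def get_key_pair_by_sql():
    # Stand-in for the slow path (e.g. a database read); called at most once here.
    print("loading key pair from the database")
    return ("public-key", "private-key")

def get_key_pair():
    rsa_value = rsa_cache.get(cache_key)
    if rsa_value is None:
        with lock:
            # Re-check after acquiring the lock: another thread may have
            # populated the cache while we were waiting.
            rsa_value = rsa_cache.get(cache_key)
            if rsa_value is None:
                rsa_value = get_key_pair_by_sql()
                rsa_cache[cache_key] = rsa_value
    return rsa_value

threads = [threading.Thread(target=get_key_pair) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

Because the with statement releases the lock on every exit path, the early-return and exception cases flagged in the comment above can no longer leave the lock held.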
