perf: model manage #3226
Conversation
Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected. Please follow our release note process to remove it. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
```python
ModelManage.cache.clear_timeout_data()
if time.time() - ModelManage.up_clear_time > 60 * 60:
    threading.Thread(target=lambda: ModelManage.cache.clear_timeout_data()).start()
    ModelManage.up_clear_time = time.time()

@staticmethod
def delete_key(_id):
```
The provided code has several improvements compared to its previous version:

1. Improved Locking Mechanism: a `locks` dictionary, maintained with thread-safe operations, gives each `_id` its own lock object instead of acquiring the global lock. This reduces contention and improves performance when multiple models are accessed simultaneously.
2. Optimized Clear Timeout Cache Logic: the timeout-clearing logic is wrapped in a separate function and executed asynchronously via `threading.Thread`, so other parts of the application are not blocked while the cache data is being cleared.
3. Simplified Get Model Method: fetching and caching the model instance directly inside the method simplifies the control flow, making it more concise and easier to read.
Here's an optimized version of the code:
```python
import threading
import time

from common.cache.mem_cache import MemCache


class ModelManage:
    cache = MemCache('model', {})
    up_clear_time = time.time()
    _lock = threading.Lock()
    locks = {}

    @staticmethod
    def _get_lock(_id):
        # Double-checked locking: take the global lock only when the
        # per-id lock does not exist yet.
        lock = ModelManage.locks.get(_id)
        if lock is None:
            with ModelManage._lock:
                lock = ModelManage.locks.get(_id)
                if lock is None:
                    lock = threading.Lock()
                    ModelManage.locks[_id] = lock
        return lock

    @staticmethod
    def get_model(_id, get_model):
        model_instance = None
        lock = ModelManage._get_lock(_id)
        try:
            with lock:
                model_instance = ModelManage.cache.get(_id)
                if model_instance is None or not model_instance.is_cache_model():
                    model_instance = get_model(_id)
                    ModelManage.cache.set(_id, model_instance, timeout=60 * 60 * 8)
        except Exception as e:
            print(f"Error getting model {_id}: {e}")
        finally:
            ModelManage.clear_timeout_cache_async()
        return model_instance

    @staticmethod
    def clear_timeout_cache_async():
        # Clear expired entries at most once per hour, in a background
        # thread so the caller is never blocked.
        if time.time() - ModelManage.up_clear_time > 60 * 60:
            threading.Thread(target=ModelManage.cache.clear_timeout_data).start()
            ModelManage.up_clear_time = time.time()

    @staticmethod
    def delete_key(_id):
        pass
```
Key Changes:

- Locks Dictionary: used for managing per-model locks.
- Synchronous Lock Handling: simplified lock handling within `get_model`.
- Asynchronous Timeout Cache Clearing: ensures clearing is done in the background without blocking.
- Exception Handling: added basic exception handling in `get_model`.

These changes should improve efficiency and robustness in managing models in your application.
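As a quick, self-contained illustration of the per-id double-checked locking described above (the `get_id_lock` helper and module-level `_global_lock`/`_locks` names here are stand-ins for illustration, not part of the PR):

```python
import threading

_global_lock = threading.Lock()
_locks = {}


def get_id_lock(_id):
    """Return the per-id lock, creating it at most once per id."""
    lock = _locks.get(_id)
    if lock is None:
        with _global_lock:
            # Re-check inside the global lock so two threads racing on
            # the same id still end up sharing one lock object.
            lock = _locks.get(_id)
            if lock is None:
                lock = threading.Lock()
                _locks[_id] = lock
    return lock
```

Repeated calls with the same id return the identical lock object, while distinct ids get independent locks, so contention is limited to callers working on the same model.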
```python
    rsa_value = get_key_pair_by_sql()
    rsa_cache.set(cache_key, rsa_value)
finally:
    lock.release()
return rsa_value
```
The code provided has some issues that can be addressed:

1. `lock.acquire()` without a corresponding `lock.release()`: the lock is acquired, but it is not released when the second `rsa_cache.get(cache_key)` returns a non-null value and the function returns early. The lock then remains held, potentially deadlocking other threads that call this function.
2. Exception handling inside the locked block: exceptions raised inside the locked section should be handled explicitly, and the lock must be released whether or not an exception occurred. As written, the try/finally block does not cover the early-return path, which makes it unclear which code runs when an exception occurs.
Suggested Corrections:

```python
def get_key_pair():
    rsa_value = rsa_cache.get(cache_key)
    if rsa_value is None:
        with lock:
            # Re-check after acquiring the lock: another thread may have
            # populated the cache while this one waited.
            rsa_value = rsa_cache.get(cache_key)
            if rsa_value is not None:
                return rsa_value
            try:
                rsa_value = get_key_pair_by_sql()
                rsa_cache.set(cache_key, rsa_value)
            except Exception as e:
                # Handle the exception here (e.g., log it)
                print(f"An error occurred while fetching RSA key pair: {str(e)}")
    return rsa_value
```
By making these changes, the code will correctly manage its locking behavior and handle potential errors more robustly.
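A self-contained sketch of this pattern, with a dict-backed class standing in for `rsa_cache` and a stub `get_key_pair_by_sql` (both stand-ins are assumptions for illustration, not the project's real implementations):

```python
import threading

lock = threading.Lock()
cache_key = "rsa_key_pair"


class rsa_cache:
    """Dict-backed stand-in for the project's real cache object."""
    _store = {}

    @classmethod
    def get(cls, key):
        return cls._store.get(key)

    @classmethod
    def set(cls, key, value):
        cls._store[key] = value


def get_key_pair_by_sql():
    # Stub: a real implementation would load the key pair from the database.
    return ("public-key", "private-key")


def get_key_pair():
    rsa_value = rsa_cache.get(cache_key)
    if rsa_value is None:
        # `with lock:` releases the lock on every exit path (early return,
        # exception, or normal fall-through), so it can never stay held.
        with lock:
            rsa_value = rsa_cache.get(cache_key)
            if rsa_value is not None:
                return rsa_value
            rsa_value = get_key_pair_by_sql()
            rsa_cache.set(cache_key, rsa_value)
    return rsa_value
```

The first call populates the cache under the lock; every later call hits the cache and returns without ever touching the lock.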