ChatCompletionClient to support request caching #4752

Open
ekzhu opened this issue Dec 18, 2024 · 0 comments
ekzhu (Collaborator) commented Dec 18, 2024

Support client-side caching for any ChatCompletionClient type.

The simplest way to do this is to create a ChatCompletionCache type that implements the ChatCompletionClient protocol but wraps an existing client.

An example of how this might work:

```python
from autogen_ext.stores.diskcache import DiskCacheStore
from autogen_ext.models.cache import ChatCompletionCache
from autogen_ext.models.openai import OpenAIChatCompletionClient

# Wrap an existing client with a cache backed by a disk store.
cached_client = ChatCompletionCache(
    OpenAIChatCompletionClient(model="gpt-4o"),
    store=DiskCacheStore(),
)
```
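
For context, here is a minimal sketch of what the wrapper itself could look like. The `CacheStore` protocol, the key derivation, and the `create` signature below are assumptions for illustration, not a proposed final API:

```python
import hashlib
import json
from typing import Any, Optional, Protocol, Sequence


class CacheStore(Protocol):
    """Assumed minimal store interface; the real store API may differ."""

    def get(self, key: str) -> Optional[Any]: ...
    def set(self, key: str, value: Any) -> None: ...


class ChatCompletionCache:
    """Illustrative sketch: wraps a client and serves repeated requests
    from the store. Only `create` is intercepted here; the cache key is
    a hash of the serialized request arguments."""

    def __init__(self, client: Any, store: CacheStore) -> None:
        self._client = client
        self._store = store

    def _key(self, messages: Sequence[Any], **kwargs: Any) -> str:
        payload = json.dumps([str(m) for m in messages] + [repr(kwargs)])
        return hashlib.sha256(payload.encode()).hexdigest()

    async def create(self, messages: Sequence[Any], **kwargs: Any) -> Any:
        key = self._key(messages, **kwargs)
        cached = self._store.get(key)
        if cached is not None:
            return cached  # Cache hit: skip the underlying client entirely.
        result = await self._client.create(messages, **kwargs)
        self._store.set(key, result)
        return result

    # Other ChatCompletionClient methods (e.g. create_stream, count_tokens)
    # would delegate to self._client in the same way.
```

Because the cache would itself be a ChatCompletionClient, it could be dropped into any agent that accepts a model client, and the store backend (in-memory, diskcache, Redis, etc.) stays pluggable.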