Support client-side caching for any ChatCompletionClient type.
The simplest way to do this is to create a ChatCompletionCache type that implements the ChatCompletionClient protocol but wraps an existing client.
An example of how this could work:
from autogen_ext.models.cache import ChatCompletionCache
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.stores.diskcache import DiskCacheStore

# Wrap an existing client; responses are cached in a disk-backed store.
cached_client = ChatCompletionCache(
    OpenAIChatCompletionClient(model="gpt-4o"),
    store=DiskCacheStore(),
)
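A rough sketch of intended usage, assuming the wrapper simply forwards the standard ChatCompletionClient.create call and returns the stored result when it sees a repeated request (the autogen_core import path may differ depending on version):

import asyncio

from autogen_core.models import UserMessage


async def main() -> None:
    # `cached_client` is the wrapped client constructed above.
    # First call goes to the OpenAI API and the result is written to the store.
    response = await cached_client.create(
        [UserMessage(content="What is 2 + 2?", source="user")]
    )
    print(response.content)

    # Identical request: served from the cache instead of a second API call.
    cached_response = await cached_client.create(
        [UserMessage(content="What is 2 + 2?", source="user")]
    )
    print(cached_response.content)


asyncio.run(main())

Because the wrapper implements the same protocol, it could be dropped into any agent or team that currently takes a ChatCompletionClient, with no other code changes.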