
Semantic Cache is not working #651

Open
awais-nayyar opened this issue Sep 16, 2024 · 2 comments
@awais-nayyar

I am using GPTCache as a semantic cache, as outlined in the LangChain documentation, together with the Groq API and the llama3-70b-8192 model. However, I'm running into an issue where the semantic cache returns the same response repeatedly, even when different questions are asked.
Does anybody know what the possible cause could be?
Here is the code:

import hashlib

from gptcache import Cache
from gptcache.adapter.api import init_similar_cache
from langchain.cache import GPTCache
from langchain_core.globals import set_llm_cache

def get_hashed_name(name):
    return hashlib.sha256(name.encode()).hexdigest()

def init_gptcache(cache_obj: Cache, llm: str):
    # One on-disk cache directory per model, keyed by a hash of the model name.
    hashed_llm = get_hashed_name(llm)
    init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{hashed_llm}")

set_llm_cache(GPTCache(init_gptcache))
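
For reference, here is a minimal sketch (not part of the original code) of how the cache gets exercised once set_llm_cache is active. The ChatGroq class from the langchain-groq package is an assumption based on the Groq/Llama3 setup described above:

from langchain_groq import ChatGroq  # assumes `pip install langchain-groq` and GROQ_API_KEY set

llm = ChatGroq(model="llama3-70b-8192")

# Two semantically unrelated prompts: with a correctly working semantic cache,
# the second call should be a cache miss and return a different answer.
print(llm.invoke("What is the capital of France?").content)
print(llm.invoke("Briefly explain how a TCP handshake works.").content)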

@dipanjannC

I wanted to know which vector store you're using. Are you having any issues with cloud vector stores?

@awais-nayyar
Author

According to the LangChain documentation for the semantic GPTCache, I did not see any vector store being used.
Can you please guide me or share a link where I can find the correct setup?
Here is the link to the LangChain docs for the semantic GPTCache: https://python.langchain.com/docs/integrations/llm_caching/#gptcache
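
For what it's worth, init_similar_cache does build a vector store under the hood (a local Faiss index alongside an SQLite store, if I read the GPTCache defaults correctly), which is probably why the LangChain docs don't mention one explicitly. Below is a hedged sketch of tightening the match criterion, assuming GPTCache's Config accepts similarity_threshold; the 0.95 value is illustrative, not a tested fix:

import hashlib

from gptcache import Cache, Config
from gptcache.adapter.api import init_similar_cache

def init_gptcache_strict(cache_obj: Cache, llm: str):
    # Same setup as in the issue, but with a stricter similarity threshold so
    # that only near-identical questions count as cache hits (the GPTCache
    # default is lower, which can make loosely related questions collide).
    hashed_llm = hashlib.sha256(llm.encode()).hexdigest()
    init_similar_cache(
        cache_obj=cache_obj,
        data_dir=f"similar_cache_{hashed_llm}",
        config=Config(similarity_threshold=0.95),
    )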
