Passage NER exception #42
Hello. Could you show how you added support for Qwen models? I think you may need to use langchain to add support for those models in the current HippoRAG framework: https://github.com/OSU-NLP-Group/HippoRAG/blob/main/src/langchain_util.py
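To make the suggestion above concrete, here is a minimal sketch of how a backend dispatcher in the spirit of `langchain_util.py` could route an `LLM_API` choice and model name to a client factory. All names here (`resolve_llm`, `BACKENDS`, the factory functions) are illustrative assumptions, not the repository's actual API:

```python
# Hypothetical sketch of an LLM backend dispatcher. In a real
# implementation the factories would return langchain chat models
# (e.g. ChatOllama(model=model_name)); here they return descriptive
# strings so the routing logic itself is easy to follow.
from typing import Callable, Dict


def make_ollama_client(model_name: str) -> str:
    # Would return e.g. langchain_community.chat_models.ChatOllama(model=model_name)
    return f"ollama:{model_name}"


def make_openai_client(model_name: str) -> str:
    # Would return e.g. langchain_openai.ChatOpenAI(model=model_name)
    return f"openai:{model_name}"


BACKENDS: Dict[str, Callable[[str], str]] = {
    "ollama": make_ollama_client,
    "openai": make_openai_client,
}


def resolve_llm(llm_api: str, model_name: str) -> str:
    """Route (llm_api, model_name) to the matching client factory."""
    try:
        factory = BACKENDS[llm_api]
    except KeyError:
        raise ValueError(
            f"Unsupported LLM_API: {llm_api!r}; expected one of {sorted(BACKENDS)}"
        )
    return factory(model_name)
```

With a dispatcher like this, adding a new provider is a one-line entry in `BACKENDS`, e.g. `resolve_llm("ollama", "qwen2:7b")` would construct the ollama-backed client for Qwen2.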
I pulled the qwen2:7b model by running "ollama run qwen2:7b". It looks like I only need to select ollama and enter the model name. How do I use langchain to add support?
Please reply in English to ensure all our maintainers and users understand your issue.
Are contents in file
Hi, I also submitted a PR for supporting llama.cpp, which supports Qwen2 models as well. You can try it out after it's merged.
I have the same issue. Could you please share how you solved the error?
When I run this:
DATA=sample
LLM=qwen2:7b
SYNONYM_THRESH=0.8
GPUS=0
LLM_API=ollama
bash src/setup_hipporag_colbert.sh $DATA $LLM $GPUS $SYNONYM_THRESH $LLM_API
an error occurs:
Why is this happening, and how should I solve it?
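Since the error message itself was not included, one thing worth checking first is that the model the script requests is actually present in the local ollama server before the pipeline tries to use it. A minimal pre-flight sketch, assuming ollama's default endpoint (`http://localhost:11434`) and its `/api/tags` model listing; the helper names are illustrative:

```python
# Hypothetical pre-flight check that a model (e.g. "qwen2:7b") exists
# in a local ollama instance before launching the HippoRAG script.
import json
from urllib.request import urlopen


def model_available(tags: dict, name: str) -> bool:
    """Check whether `name` appears in an ollama /api/tags response,
    which has the shape {"models": [{"name": "qwen2:7b", ...}, ...]}."""
    return any(m.get("name") == name for m in tags.get("models", []))


def check_local_ollama(name: str, base_url: str = "http://localhost:11434") -> bool:
    # Queries the running ollama server for its pulled models.
    with urlopen(f"{base_url}/api/tags") as resp:
        return model_available(json.load(resp), name)
```

If `check_local_ollama("qwen2:7b")` returns False, running `ollama pull qwen2:7b` first may resolve the failure; if the model is present, the error likely lies in the framework's model dispatch instead.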