Send embeddings to GPT4All's Local API Server #12114
JabRef sends "embeddings" in the user message, so hypothesis 1 couldn't apply. However, thank you for mentioning it; it's strange that GPT4All has those issues with system message customizations... It could (and will) affect the output. Regarding hypothesis 3: every LLM API is stateless (REST, representational state transfer); JabRef itself handles the conversation state. @ThiloteE, could you run JabRef in debug mode and look at the logs? There should be a log from If I remember correctly, you just need to pass an argument
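To illustrate the statelessness point above: with an OpenAI-compatible chat API, the client has to re-send the entire conversation (including the system message with any retrieved context) on every request, because the server remembers nothing between calls. A minimal sketch, assuming a hypothetical local endpoint and model name (these are not JabRef's actual configuration):

```python
# Sketch: statelessness of OpenAI-compatible chat APIs.
# The endpoint URL and model name are assumptions for illustration only.
import json
import urllib.request

def build_payload(history, model="local-model"):
    # The WHOLE history, including the system message with retrieved context,
    # goes into every single request; the server keeps no state in between.
    return {"model": model, "messages": list(history)}

def chat(history, base_url="http://localhost:4891/v1"):
    req = urllib.request.Request(
        base_url + "/chat/completions",
        data=json.dumps(build_payload(history)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

history = [
    {"role": "system", "content": "Answer using this information: <retrieved chunks>"},
    {"role": "user", "content": "Summarize the attached paper."},
]
payload = build_payload(history)
# For a follow-up question, the client appends the assistant reply and the new
# user turn to `history` and re-sends all of it; the server sees each request fresh.
```

This is why the client side (here, JabRef) is the one responsible for assembling the system message and the message history on each turn.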
@InAnYan Here are some logs: logs for embeddings GPT4All.txt. We can see that the entry metadata is added to the system message, not the user message.
@ThiloteE, thanks. I see the system message is there. Because it says "answer using this information", there probably was a search in the documents. Does chatting work with other models?
No, other models exhibit the same behaviour with GPT4All. Results of testing today:

✅ OpenAI (ChatGPT-4o-mini)
✅ Ollama (via OpenAI API)
✅ llama.cpp (via OpenAI API)
❌ GPT4All
Real Problem:
The embeddings are not created because of issue #12169 (melting-pot issue: https://github.com/JabRef/jabref-issue-melting-pot/issues/537)

Solutions:
Explanation about what happened to me:
My hypothesis: since I had multiple CUDA versions installed on my system and had not set the PATH and system environment variables, the embedding model was not functioning while the LLM was fully functional. This made it seem like embeddings were not sent to GPT4All, while in reality no embeddings had ever been created in the first place. I confirmed the LLMs were functional while testing local API servers like GPT4All, Ollama, and llama.cpp, as reported in issue #12114. I also had lots of x86 Microsoft Visual C++ Redistributables installed, which are not needed on my x64 system and might also have caused conflicts, but the main problem was the PATH issue, which caused the embedding model to not function.

Links and comments that helped me find the answers:
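As an aside, the "multiple CUDA versions on PATH" hypothesis above can be sanity-checked with a short script that lists CUDA-related PATH entries. A sketch; the directory names shown are assumptions and vary by system:

```python
# Sketch: spot potentially conflicting CUDA installations on PATH.
# Directory naming conventions are assumptions; adapt to your system.
import os

def cuda_path_entries():
    # Return every PATH entry that mentions CUDA, in search order.
    entries = os.environ.get("PATH", "").split(os.pathsep)
    return [p for p in entries if "cuda" in p.lower()]

hits = cuda_path_entries()
if len(hits) > 1:
    # The first matching entry wins when a library resolves CUDA DLLs/SOs,
    # so later entries may silently shadow the version you expect.
    print("Multiple CUDA-related PATH entries found (first one wins):")
    for p in hits:
        print("  ", p)
```

If more than one entry shows up, pinning PATH to the single intended CUDA version is one way to rule out the conflict described above.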
Follow up to #11870 and #12078.
Sub-issue of #11872
Setup:
Description of problem:
Chatting already works with GPT4All, but the embeddings of the PDF documents attached to JabRef's entries are seemingly not sent to GPT4All.
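One way to verify whether the embeddings context actually reaches the local server is to put a tiny logging proxy between JabRef and the server and print the roles and sizes of the messages in each request. A sketch, assuming hypothetical port numbers (point the client at the proxy port instead of the server):

```python
# Sketch: minimal logging proxy to inspect what a client sends to a local
# OpenAI-compatible server. Ports and upstream URL are assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

UPSTREAM = "http://localhost:4891"  # assumed local API server address

def summarize_request(body: bytes) -> str:
    """Return the roles and content sizes of the messages in a chat request."""
    msgs = json.loads(body).get("messages", [])
    return ", ".join(f"{m['role']}({len(m.get('content', ''))} chars)" for m in msgs)

class LoggingProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        # A user/system message with only a few dozen chars suggests no
        # retrieved document context was attached to the request.
        print("->", self.path, summarize_request(body))
        upstream = urllib.request.urlopen(urllib.request.Request(
            UPSTREAM + self.path, data=body,
            headers={"Content-Type": "application/json"}))
        data = upstream.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(data)

# To run the proxy (blocks until interrupted):
# HTTPServer(("localhost", 8130), LoggingProxy).serve_forever()
```

If the logged system message never grows beyond its static template, the retrieved document chunks were never attached, which distinguishes "embeddings not sent" from "embeddings never created".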
Hypotheses about the root of the problem:
Additional info: