WIP: Replace llama.cpp with ollama #3542
base: main
Conversation
Signed-off-by: Jared Van Bortel <[email protected]>
2025 is too soon to use C++ features from 2020 without running into bugs in every build tool that touches the project.
Is this simply going to use an Ollama API endpoint, or is Ollama actually integrated inside of gpt4all in this PR?
I have the same question as @Titaniumtown. I recently catalogued Ollama's recurring issues with non-standard installation processes, and wouldn't like to see GPT4all jump into that quagmire.
Thank you @iwr-redmond; issues such as those were going to be my follow-up. If Ollama can somehow be integrated inside of gpt4all so that it is seamless to the user, I would be in favor, as long as it is used simply as an abstraction layer over llama.cpp and not an external server you need to connect to.
The Ollama devs have decided to shoot a hole in the screen door and abandon llama.cpp in favor of a custom inference engine. I reckon that pushes this PR into wet shoe territory.
Oh yikes. Ollama is really going down the drain.
Summary of changes as of 3/19
new directories:
removed directories:
moved directories:
new files:
removed files:
changed files:
new deps (see the sketch after this list):
__cpp_lib_chrono >= 201907L
__cpp_lib_generator >= 202207L
moved deps:
changed deps:
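
For context on the two feature-test macros listed under "new deps": `__cpp_lib_chrono >= 201907L` corresponds to the C++20 calendar/time-zone additions to `std::chrono`, and `__cpp_lib_generator >= 202207L` to C++23's `std::generator`. Below is a minimal sketch, not code from this PR, of how a translation unit can check these macros via `<version>` and fail early on a toolchain that lacks them; the `countdown` coroutine is a hypothetical example used only to exercise `std::generator`.

```cpp
#include <version>

// Hard-fail at preprocessing time if the standard library is too old,
// mirroring the macro requirements listed in the summary above.
#if !defined(__cpp_lib_chrono) || __cpp_lib_chrono < 201907L
#  error "requires C++20 std::chrono calendar/time-zone support"
#endif
#if !defined(__cpp_lib_generator) || __cpp_lib_generator < 202207L
#  error "requires C++23 std::generator"
#endif

#include <chrono>
#include <generator>
#include <iostream>

// Hypothetical coroutine (not GPT4All code): lazily yields a countdown,
// exercising the facility guarded by __cpp_lib_generator.
std::generator<int> countdown(int from) {
    for (int i = from; i > 0; --i)
        co_yield i;
}

int main() {
    for (int i : countdown(3))
        std::cout << i << '\n';

    // C++20 calendar types, guaranteed present by __cpp_lib_chrono >= 201907L.
    constexpr std::chrono::year_month_day d{
        std::chrono::year{2025}, std::chrono::March, std::chrono::day{19}};
    std::cout << static_cast<int>(d.year()) << '\n';
}
```

Checking the macros up front, rather than letting template errors surface deep inside headers, gives a one-line diagnostic on the older compilers and build tools the thread above is worried about.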