Description
When attempting to load the precompiled build 5866 release files (either the CPU or the Vulkan variant) on Linux Mint 21.3 (Ubuntu Jammy, Python 3.10), the following error occurs:
```
[2025-07-31 Thu 09:16:21.876] INFO: easy_llama v0.2.14 targeting llama.cpp@0b885577 (2025-07-10)
[2025-07-31 Thu 09:16:21.877] INFO: loaded libllama from /home/redmond/Applications/llamacpp/libllama.so
llama_model_load_from_file_impl: no backends are loaded. hint: use ggml_backend_load() or ggml_backend_load_all() to load a backend before calling this function
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/redmond/.local/lib/python3.10/site-packages/easy_llama/llama.py", line 872, in __init__
    self._model = _LlamaModel(
  File "/home/redmond/.local/lib/python3.10/site-packages/easy_llama/llama.py", line 604, in __init__
    null_ptr_check(self.model, "self.model", "_LlamaModel.__init__")
  File "/home/redmond/.local/lib/python3.10/site-packages/easy_llama/utils.py", line 265, in null_ptr_check
    raise LlamaNullException(f"{loc_hint}: pointer`{ptr_name}`is null")
easy_llama.utils.LlamaNullException: _LlamaModel.__init__: pointer`self.model`is null
```
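This looks like fallout from llama.cpp's dynamic backend loading: as far as I can tell, the precompiled releases build each compute backend (CPU, Vulkan, ...) as its own shared library, and ggml only registers a backend once ggml_backend_load() or ggml_backend_load_all() has been called. Loading libllama.so by itself leaves the backend registry empty, so llama_model_load_from_file_impl() returns NULL and easy_llama's null-pointer check fires.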
Repro:
```
pip install easy-llama
export LIBLLAMA=/home/redmond/Applications/llamacpp/libllama.so
python
```
Then, in the interpreter:
```python
import easy_llama as ez
model = "/home/redmond/.cache/gpt4all/Nous-Hermes-2-Mistral-7B-DPO.Q4_0.gguf"
MyLlama = ez.Llama(model)
```
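For anyone else hitting this: a possible workaround, following the hint in the error message, is to register the backends manually before constructing the model. The sketch below is untested and assumes the release archive ships a libggml.so next to libllama.so that exports ggml_backend_load_all(); the exact library name and path may differ in your build.

```python
import ctypes

# Assumption: the backend registry lives in libggml.so, shipped alongside
# libllama.so in the precompiled release. ggml_backend_load_all() takes no
# arguments; it scans for backend libraries (libggml-cpu.so,
# libggml-vulkan.so, ...) and registers them with ggml.
ggml = ctypes.CDLL(
    "/home/redmond/Applications/llamacpp/libggml.so",
    mode=ctypes.RTLD_GLOBAL,
)
ggml.ggml_backend_load_all.restype = None
ggml.ggml_backend_load_all.argtypes = []
ggml.ggml_backend_load_all()

# Only once the backends are registered, load the model as before.
import easy_llama as ez
model = "/home/redmond/.cache/gpt4all/Nous-Hermes-2-Mistral-7B-DPO.Q4_0.gguf"
MyLlama = ez.Llama(model)
```

If that works, the real fix is probably for easy_llama to call ggml_backend_load_all() itself when the libllama build it is pointed at uses dynamically loaded backends.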