Sorry for the dumb question, but I did search for answers and tried a few things before asking. Using llama.cpp's `./convert-hf-to-gguf.py` on deepseek-vl-7b-chat returned:

```
Loading model: deepseek-vl-7b-chat
Traceback (most recent call last):
  File "/home/zhangyuanfeng/software/ollama/llm/llama.cpp/./convert-hf-to-gguf.py", line 2099, in <module>
    main()
  File "/home/zhangyuanfeng/software/ollama/llm/llama.cpp/./convert-hf-to-gguf.py", line 2079, in main
    model_class = Model.from_model_architecture(hparams["architectures"][0])
  File "/home/zhangyuanfeng/software/ollama/llm/llama.cpp/./convert-hf-to-gguf.py", line 215, in from_model_architecture
    raise NotImplementedError(f'Architecture {arch!r} not supported!') from None
NotImplementedError: Architecture 'MultiModalityCausalLM' not supported!
```
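For what it's worth, the failure seems to come from the converter's architecture lookup. Below is a minimal sketch of that dispatch, assuming the registry pattern the traceback implies; it is an illustration only, not the actual llama.cpp source:

```python
# Sketch of the dispatch that fails above (illustrative, not llama.cpp's code).
# The converter reads "architectures" from the model's config.json and looks
# the name up in a table of supported Model subclasses; deepseek-vl's
# 'MultiModalityCausalLM' has no entry, hence the NotImplementedError.

class Model:
    _registry = {}  # maps architecture name -> Model subclass

    @classmethod
    def register(cls, name):
        def wrap(subclass):
            cls._registry[name] = subclass
            return subclass
        return wrap

    @classmethod
    def from_model_architecture(cls, arch):
        try:
            return cls._registry[arch]
        except KeyError:
            raise NotImplementedError(f"Architecture {arch!r} not supported!") from None

@Model.register("LlamaForCausalLM")
class LlamaModel(Model):
    pass

print(Model.from_model_architecture("LlamaForCausalLM"))   # <class '__main__.LlamaModel'>
try:
    Model.from_model_architecture("MultiModalityCausalLM")  # deepseek-vl's architecture
except NotImplementedError as e:
    print(e)  # Architecture 'MultiModalityCausalLM' not supported!
```

In other words, the script bails out before doing any conversion unless a Model subclass is registered for 'MultiModalityCausalLM'.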
So is there any feasible method? Thx.