I found that the model you gave me can't be loaded successfully through llama-cpp-python — or it may be a version problem on my end. I upgraded the VLM-GGUF loader node so you can now choose the chat format, but none of the options load the CLIP model successfully, so I strongly suspect llama-cpp-python does not support this model yet. All of the models listed at https://github.com/abetlen/llama-cpp-python/tree/7ecdd944624cbd49e4af0a5ce1aa402607d58dcc?tab=readme-ov-file#multi-modal-models can be called by Party, because Party relies on llama-cpp-python to load GGUF models.
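For context, this is roughly how llama-cpp-python loads a multimodal GGUF pair (main model plus a separate CLIP/mmproj file) via a chat handler — a minimal sketch, assuming a LLaVA-style model; the file paths are hypothetical placeholders, and Llama 3.2 Vision would need its own handler support in llama-cpp-python:

```python
from pathlib import Path

# Hypothetical paths: a multimodal GGUF always ships as two files,
# the language model and a separate CLIP projector (mmproj).
MODEL_PATH = "Llama-3.2-11B-Vision-Instruct.Q4_K_M.gguf"
CLIP_PATH = "mmproj-model-f16.gguf"


def load_vision_model(model_path: str, clip_path: str):
    """Load a multimodal GGUF the way llama-cpp-python expects:
    a chat handler wraps the CLIP model, then the Llama constructor
    takes it via the chat_handler argument."""
    from llama_cpp import Llama
    from llama_cpp.llama_chat_format import Llava15ChatHandler

    handler = Llava15ChatHandler(clip_model_path=clip_path)
    return Llama(
        model_path=model_path,
        chat_handler=handler,
        n_ctx=4096,  # vision prompts embed image tokens, so leave headroom
    )


# Only attempt the load if both files are actually present.
if Path(MODEL_PATH).exists() and Path(CLIP_PATH).exists():
    llm = load_vision_model(MODEL_PATH, CLIP_PATH)
```

If the chat-format dropdown fails for every option, the likely cause is that no handler in `llama_cpp.llama_chat_format` matches this model's projector architecture, which matches the suspicion above that llama-cpp-python simply doesn't support it yet.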
Is this GGUF Llama 3.2 Vision model supported?
https://huggingface.co/leafspark/Llama-3.2-11B-Vision-Instruct-GGUF/tree/main
I tried running the 20 GB 11B model from Meta, but it takes extremely long and never finishes on a setup with 32 GB RAM and 16 GB VRAM.