-
@micoraweb it should be supported by LlamaSharp if llama.cpp supports the format. Have you tried the model with llama.cpp directly before testing it with LlamaSharp? After a little research, it seems that ollama (which also depends on llama.cpp) supports this model. This is the prompt template they are using:
This is a llama3-based template; it's very different from the one we are using in the LlamaSharp llava example (which is based on Mistral 7B). Could you share the prompt you are using?
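To make the difference between the two template families concrete, here is a rough sketch. This is an illustration, not the exact ollama template; the function names and the `<image>` placeholder handling are assumptions for the example (how the image embedding is actually injected is model- and loader-specific):

```python
def llama3_llava_prompt(user_text: str) -> str:
    # Llama 3 style: special header/eot tokens delimit each role turn.
    return (
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"<image>\n{user_text}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

def mistral_llava_prompt(user_text: str) -> str:
    # Mistral style: a single [INST] ... [/INST] wrapper around the turn.
    return f"[INST] <image>\n{user_text} [/INST]"

print(llama3_llava_prompt("What is in this picture?"))
print(mistral_llava_prompt("What is in this picture?"))
```

Feeding a llama3-based llava model a Mistral-style `[INST]` prompt (or vice versa) is a common reason the model appears not to "see" the image at all, which is why the prompt you used matters here.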
-
Hello, it looks like the latest LlamaSharp is able to run LLaVA 1.5 and 1.6 models, but LLaVA-Next is not supported?
Support for many newer vision models (e.g. MiniCPM) is unfortunately not implemented...
Are there any plans to support those in the near future?
This model, for example: https://huggingface.co/KBlueLeaf/llama3-llava-next-8b-gguf - doesn't 'see' anything,
even though it's supposed to provide improved understanding and higher resolution than the 1.5 and 1.6 LLaVA models...