How are you running AnythingLLM?
AnythingLLM desktop app
What happened?
When I upload a file, I can use a vision model like llama3.2-vision:11b to describe it, but subsequent prompts have no memory of the image.
I would expect to be able to ask repeated questions about the image, and for it to remain in the current context until the context window is exhausted.
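A likely explanation (an assumption on my part, not confirmed from the AnythingLLM source) is that the image is only attached to the first request. With a chat-style completion API such as Ollama's `/api/chat`, the full message history is sent on every call, and a vision model only "remembers" an image if the image-bearing message is re-sent in that history each time. A minimal sketch of what a retained-image payload could look like (payload construction only, no network call; the base64 string is a placeholder):

```python
import json

def build_chat_payload(model, history):
    # Chat APIs like Ollama's /api/chat are stateless: the FULL message
    # history goes up on every call, so the image must stay embedded in
    # its original message for follow-up questions to "see" it.
    return {"model": model, "messages": history, "stream": False}

# First turn: the user message carrying the image.
history = [{
    "role": "user",
    "content": "Describe this image.",
    "images": ["<base64-encoded image>"],  # placeholder, not real data
}]
# The model's reply (illustrative).
history.append({"role": "assistant", "content": "It shows a cat."})

# Follow-up turn: the earlier image-bearing message is kept in the
# history, so the vision model still has the image in context.
history.append({"role": "user", "content": "What color is the cat?"})

payload = build_chat_payload("llama3.2-vision:11b", history)
print(json.dumps(payload, indent=2))
```

If AnythingLLM instead drops the `images` field (or the whole first message) when building follow-up requests, the model would behave exactly as described: it answers the first prompt about the image, then loses it.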
Are there known steps to reproduce?
No response