
[BUG]: Vision models don't retain memory of images past one prompt #2585

Open
sheneman opened this issue Nov 5, 2024 · 0 comments
Labels
possible bug Bug was reported but is not confirmed or is unable to be replicated.

Comments


sheneman commented Nov 5, 2024

How are you running AnythingLLM?

AnythingLLM desktop app

What happened?

When I upload a file, I can use a vision model like llama3.2-vision:11b to describe it, but subsequent prompts have no memory of the image.

[screenshot attached]

I would expect to be able to ask repeated questions about the image, with the image remaining in the current context until the context window is exhausted.
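For illustration, one way a chat client can achieve the expected behavior is to attach the image to the first user turn and resend the full history (image included) with every subsequent request. This is a minimal sketch assuming an Ollama-style message format with an `images` field; the function names here (`attach_image`, `follow_up`) are hypothetical and not AnythingLLM internals:

```python
import base64

def attach_image(history, prompt, image_bytes):
    """Append a user turn carrying the image as base64 (Ollama-style 'images' field)."""
    history.append({
        "role": "user",
        "content": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
    })
    return history

def follow_up(history, prompt):
    """Later turns append to the same history; the image from turn 1 stays in it."""
    history.append({"role": "user", "content": prompt})
    return history

# Hypothetical conversation: the image is sent once, then referenced again.
history = attach_image([], "Describe this image.", b"\x89PNG...")
history = follow_up(history, "What color is the sky in it?")

# Because the full history (including the image) is resent each turn,
# the model can still see the image on the follow-up question.
assert "images" in history[0]
```

If instead the client strips attachments from history after the first response, the model receives only text on later turns, which would produce exactly the behavior reported here.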

Are there known steps to reproduce?

No response

@sheneman sheneman added the possible bug Bug was reported but is not confirmed or is unable to be replicated. label Nov 5, 2024