Added ability for vision-capable models to chat (#146)
🚀 Feature: Vision Model Integration & Image Handling Enhancements
✨ Summary
This pull request introduces major enhancements to support interaction with vision-capable models. The following key features have been added:
✅ What's New
Vision Model Support:
Integrated support to chat with models capable of processing images (e.g., llava, "gemma:4b").
Image Uploading:
Users can now upload one or multiple images to send as part of their message payload.
Image Deletion:
Added the ability to remove selected images before sending a message to the model.
Persistent Storage:
Images are now stored persistently so that they're available across app sessions.
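For concreteness, attaching uploaded images to a chat message might look roughly like the sketch below. This is a hedged illustration only: the type and field names are assumptions, and the "array of base64-encoded images" payload shape follows the style of Ollama-like chat APIs (which serve models such as llava), not necessarily this PR's actual types.

```swift
import Foundation

// Hypothetical message type; names are illustrative, not the PR's real API.
struct ChatMessage: Codable {
    let role: String
    let content: String
    let images: [String]?   // base64-encoded image data, nil when text-only
}

// Builds a user message, attaching any selected images as base64 strings.
func makeVisionMessage(text: String, imageData: [Data]) -> ChatMessage {
    ChatMessage(
        role: "user",
        content: text,
        images: imageData.isEmpty ? nil : imageData.map { $0.base64EncodedString() }
    )
}
```

A text-only message simply omits the `images` field, so the same payload works for both vision and non-vision models.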
📦 Implementation Details
Selected images are held in view state via @State/@StateObject (if applicable).
Images are persisted to disk using FileManager or a similar API.
📝 Notes
I hope you consider integrating this into the main codebase. Let me know if any changes are needed or if you'd like additional enhancements! Below are some screenshots captured from the application to give you an idea of how it works.
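The FileManager-based persistence mentioned under Implementation Details could be sketched as follows. This is a minimal illustration under assumed names (`ImageStore`, the `ChatImages` folder); the PR's actual storage layer may differ.

```swift
import Foundation

// Hypothetical helper showing how uploaded images could survive app
// restarts by writing them into the Application Support directory.
struct ImageStore {
    let directory: URL

    init(folderName: String = "ChatImages") throws {
        let base = FileManager.default.urls(for: .applicationSupportDirectory,
                                            in: .userDomainMask)[0]
        directory = base.appendingPathComponent(folderName, isDirectory: true)
        try FileManager.default.createDirectory(at: directory,
                                                withIntermediateDirectories: true)
    }

    // Saves raw image data and returns the identifier used to load it back.
    func save(_ data: Data, id: String = UUID().uuidString) throws -> String {
        try data.write(to: directory.appendingPathComponent(id))
        return id
    }

    func load(id: String) throws -> Data {
        try Data(contentsOf: directory.appendingPathComponent(id))
    }

    // Supports the image-deletion feature: removes a stored image by id.
    func delete(id: String) throws {
        try FileManager.default.removeItem(at: directory.appendingPathComponent(id))
    }
}
```

Storing images on disk and keeping only their identifiers in the chat history keeps message payloads small while still making images available across app sessions.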