DeepSeek-VL 7B seems to be an awesome vision model that nobody is talking about. It is better than LLaVA 1.6 34B at VQA and OCR, consistently returns JSON when asked, and just seems to have great textual capability. Any recommendation on how to wrap DeepSeek-VL in a fast inference server and take it for a spin at scale?
There are some issues opened here:
InternLM/lmdeploy#1321
sgl-project/sglang#297
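While the linked issues track proper support in lmdeploy and sglang, the core trick those servers use for throughput is dynamic batching: queue incoming requests and run a single batched forward pass over as many as have arrived. Here is a minimal stdlib-only sketch of that idea; `run_batch` is a hypothetical stand-in for a batched DeepSeek-VL generation call, not a real API.

```python
import queue
import threading

def run_batch(prompts):
    # Hypothetical stand-in for a batched DeepSeek-VL forward pass;
    # a real server would call the model's generate() here.
    return [f"response to: {p}" for p in prompts]

class BatchingServer:
    """Toy dynamic-batching server: one worker drains the queue in chunks."""

    def __init__(self, max_batch=8):
        self.q = queue.Queue()
        self.max_batch = max_batch
        threading.Thread(target=self._worker, daemon=True).start()

    def submit(self, prompt):
        # Each request gets a slot with an Event signalling completion.
        slot = {"done": threading.Event(), "out": None}
        self.q.put((prompt, slot))
        return slot

    def _worker(self):
        while True:
            # Block for the first request, then greedily drain more
            # (up to max_batch) so concurrent requests share one pass.
            batch = [self.q.get()]
            while len(batch) < self.max_batch:
                try:
                    batch.append(self.q.get_nowait())
                except queue.Empty:
                    break
            outputs = run_batch([p for p, _ in batch])
            for (_, slot), out in zip(batch, outputs):
                slot["out"] = out
                slot["done"].set()

server = BatchingServer()
slots = [server.submit(f"prompt {i}") for i in range(3)]
for s in slots:
    s["done"].wait(timeout=5)
results = [s["out"] for s in slots]
print(results)
```

In a real deployment the worker loop is where continuous batching and paged KV-cache management (as in vLLM/lmdeploy) would live; the queue-and-event plumbing stays essentially the same.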