Release v1.1.0
🔥 LLaVA-1.5 is out! This release supports LLaVA-1.5 model inference and serving.
We will release the training scripts, data, and benchmark evaluation scripts in the coming week.
LLaVA-1.5 achieves SoTA on 11 benchmarks with just simple modifications to the original LLaVA: it uses only publicly available data, completes training in ~1 day on a single 8-A100 node, and surpasses methods like Qwen-VL-Chat that rely on billion-scale data. Check out the technical report and explore the demo! Models are available in the Model Zoo.
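
If you want to try inference programmatically before the full docs land, the sketch below shows roughly how a single-query run could look. It assumes the `liuhaotian/llava-v1.5-7b` checkpoint name and the repo's `get_model_name_from_path` / `eval_model` helpers; please verify these names against the README before relying on them.

```python
# Minimal LLaVA-1.5 inference sketch (identifiers are assumptions; check the repo README).
from llava.mm_utils import get_model_name_from_path
from llava.eval.run_llava import eval_model  # assumed helper that runs a single image+text query

model_path = "liuhaotian/llava-v1.5-7b"  # assumed Hugging Face checkpoint name

# Bundle the CLI-style arguments that eval_model expects into a simple namespace-like object.
args = type("Args", (), {
    "model_path": model_path,
    "model_base": None,
    "model_name": get_model_name_from_path(model_path),
    "query": "What is unusual about this image?",
    "conv_mode": None,
    "image_file": "https://llava-vl.github.io/static/images/view.jpg",
    "sep": ",",
    "temperature": 0,
    "top_p": None,
    "num_beams": 1,
    "max_new_tokens": 512,
})()

eval_model(args)
```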