diff --git a/README.md b/README.md
index c8c7d145..cf66011e 100644
--- a/README.md
+++ b/README.md
@@ -16,6 +16,7 @@ VILA is a visual language model (VLM) pretrained with interleaved image-text dat
 ## 💡 News
+- [2024/05] We move our repo to NVlabs (https://github.com/NVlabs/VILA). All future developments will be updated there!
 - [2024/05] We release VILA-1.5, which offers **video understanding capability**. VILA-1.5 comes with four model sizes: 3B/8B/13B/40B.
 - [2024/05] We release [AWQ](https://arxiv.org/pdf/2306.00978.pdf)-quantized 4bit VILA-1.5 models. VILA-1.5 is efficiently deployable on diverse NVIDIA GPUs (A100, 4090, 4070 Laptop, Orin, Orin Nano) by [TinyChat](https://github.com/mit-han-lab/llm-awq/tree/main/tinychat) and [TensorRT-LLM](demo_trt_llm) backends.
 - [2024/03] VILA has been accepted by CVPR 2024!