Visual language models (VLMs) have made significant advances in accuracy in recent years, but their efficiency has received much less attention. This paper introduces <strong>Cosmos Nemotron</strong>, a family of open VLMs designed to optimize both efficiency and accuracy. Building on prior NVIDIA research, including <strong>NVILA and VILA</strong>, we improve the model architecture by first scaling up the spatial and temporal resolutions and then compressing visual tokens. This <strong>"scale-then-compress" approach</strong> enables these VLMs to efficiently process high-resolution images and long videos. We also conduct a systematic investigation to enhance the efficiency of VLMs throughout their entire lifecycle, from training and fine-tuning to deployment.
In this paper, we’ll look at the latest NVILA research that serves as a foundation for Cosmos Nemotron and show how it matches or surpasses the accuracy of many leading open and proprietary VLMs across a wide range of image and video benchmarks. At the same time, it reduces training costs by 4.5×, fine-tuning memory usage by 3.4×, pre-filling latency by 1.6-2.2×, and decoding latency by 1.2-2.8×. We make our code and models available to facilitate reproducibility.
In this paper, we introduce Cosmos Nemotron, a family of open VLMs designed to optimize both efficiency and accuracy. Building on NVILA and VILA, we improve the model architecture by first scaling up the spatial and temporal resolution, then compressing visual tokens. "Scaling" preserves more detail from the visual inputs, raising the accuracy upper bound, while "compression" squeezes the visual information into fewer tokens, improving computational efficiency. This "scale-then-compress" strategy allows VLMs to process high-resolution images and long videos both effectively and efficiently. In addition, we conduct a systematic study to optimize the efficiency of VLMs throughout their entire lifecycle, including training, fine-tuning, and deployment.
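As a minimal sketch of the compression half of this strategy (not the released implementation), neighboring visual tokens can be merged with 2×2 spatial-to-channel pooling: each 2×2 patch of tokens is concatenated into a single token, cutting the token count by 4× while keeping the information available to the projector. The grid sizes and the `compress_tokens` helper below are illustrative assumptions.

```python
import numpy as np

def compress_tokens(tokens, grid, pool=2):
    """Merge each pool x pool neighborhood of visual tokens into one
    token by concatenating their features along the channel dimension,
    reducing the token count by a factor of pool**2."""
    h, w = grid
    d = tokens.shape[-1]
    t = tokens.reshape(h, w, d)
    # Group tokens into (h/pool, w/pool) blocks of pool x pool each.
    t = t.reshape(h // pool, pool, w // pool, pool, d)
    # Bring the pool x pool neighbors next to the feature axis, then flatten.
    t = t.transpose(0, 2, 1, 3, 4)
    return t.reshape((h // pool) * (w // pool), pool * pool * d)

# "Scale" first: a higher-resolution input yields a larger token grid,
# e.g. a 32x32 grid of 256-d tokens instead of 16x16.
high_res = np.random.randn(32 * 32, 256)

# Then "compress": 1024 tokens -> 256 tokens, each 4x wider.
compressed = compress_tokens(high_res, grid=(32, 32), pool=2)
print(high_res.shape, "->", compressed.shape)  # (1024, 256) -> (256, 1024)
```

The wider merged tokens would then be mapped back to the language model's embedding size by the multimodal projector, so the LLM sees 4× fewer visual tokens per frame.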