
Awesome Large Vision-Language Model (VLM)

Awesome Large Vision-Language Model: A Curated List of Large Vision-Language Models (VLMs)


This repository, Awesome Large Vision-Language Model, collects resources and papers on Large Vision-Language Models (VLMs) and Medical Foundation Models (FMs).

Welcome to share your papers, thoughts, and ideas by submitting an issue!

Contents

Presentations
Books
Benchmarks
Papers
  Survey
  Multimodal Large Language Models
  Contrastive Language-Image Pre-Training
  Preference Alignment
  Universal Embedding Space
  Training Recipes
Acknowledgement

Presentations

Recent Advances in Vision Foundation Models
Chunyuan Li, Zhe Gan, Haotian Zhang, Jianwei Yang, Linjie Li, Zhengyuan Yang, Kevin Lin, Jianfeng Gao, Lijuan Wang
CVPR 2024 Tutorial, [Presentation]
18 Sep 2024

From Multimodal LLM to Human-level AI: Modality, Instruction, Reasoning, Efficiency and Beyond
Hao Fei, Yuan Yao, Zhuosheng Zhang, Fuxiao Liu, Ao Zhang, Tat-Seng Chua
LREC-COLING 2024, [Paper] [Presentation]
20 May 2024

Recent Advances in Vision Foundation Models
Chunyuan Li, Zhe Gan, Zhengyuan Yang, Jianwei Yang, Linjie Li, Lijuan Wang, Jianfeng Gao
CVPR 2023 Tutorial, [Paper] [Presentation]
18 Sep 2023

A Vision-and-Language Approach to Computer Vision in the Wild: Building a General-Purpose Assistant in the Visual World Towards Building and Surpassing Multimodal GPT-4
Chunyuan Li
Deep Learning Team, Microsoft Research, Redmond, [Presentation]
1 May 2023

Flamingo 🦩: A Visual Language Model for Few-Shot Learning
Andrea Wynn, Xindi Wu
Princeton University, [Presentation]
21 November 2022

Books

Foundation Models for Natural Language Processing: Pre-trained Language Models Integrating Media
Gerhard Paaß, Sven Giesselbach
Artificial Intelligence: Foundations, Theory, and Algorithms (Springer Nature), [Link]
16 Feb 2023

Benchmarks

IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning
Pan Lu, Liang Qiu, Jiaqi Chen, Tony Xia, Yizhou Zhao, Wei Zhang, Zhou Yu, Xiaodan Liang, Song-Chun Zhu
NeurIPS 2021, [Paper] [Webpage]
Note: abstract diagram understanding and holistic cognitive reasoning on real-world diagram-based word problems, requiring both perceptual acumen and versatile reasoning skills
25 Jul 2022

OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge
Kenneth Marino, Mohammad Rastegari, Ali Farhadi, Roozbeh Mottaghi
CVPR 2019, [Paper] [Webpage]
Note: questions requiring reasoning with a variety of knowledge types such as commonsense, world knowledge, and visual knowledge
4 Sep 2019

GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering
Drew A. Hudson, Christopher D. Manning
CVPR 2019, [Paper] [Webpage]
Note: reasoning over image scene graphs, offering unbiased compositional questions derived from real-world images.
10 May 2019

VQA v2 & Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering
Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, Devi Parikh
CVPR 2017, [Paper] [Webpage]
Note: every question is associated with a pair of similar images that result in two different answers to the question.
15 May 2017

Papers

Survey

The Revolution of Multimodal Large Language Models: A Survey
Davide Caffagni, Federico Cocchi, Luca Barsellotti, Nicholas Moratelli, Sara Sarto, Lorenzo Baraldi, Lorenzo Baraldi, Marcella Cornia, Rita Cucchiara
ACL 2024, [Paper]
6 Jun 2024

MM-LLMs: Recent Advances in MultiModal Large Language Models
Duzhen Zhang, Yahan Yu, Jiahua Dong, Chenxing Li, Dan Su, Chenhui Chu, Dong Yu
ACL 2024, [Paper] [Webpage]
Note: Categorizes MM-LLMs by Modality Encoder, Input Projector, LLM Backbone, Output Projector, Modality Generator, Training Pipeline, SoTA MM-LLMs, and Benchmarks & Performance
28 May 2024

A Survey on Multimodal Large Language Models
Shukang Yin, Chaoyou Fu, Sirui Zhao, Ke Li, Xing Sun, Tong Xu, Enhong Chen
arXiv, [Paper] [GitHub]
Note: Section 3 (Training Strategy and Data) is a good reference.
1 Apr 2024

Exploring the Reasoning Abilities of Multimodal Large Language Models (MLLMs): A Comprehensive Survey on Emerging Trends in Multimodal Reasoning
Yiqi Wang, Wentao Chen, Xiaotian Han, Xudong Lin, Haiteng Zhao, Yongfei Liu, Bohan Zhai, Jianbo Yuan, Quanzeng You, Hongxia Yang
arXiv, [Paper]
18 Jan 2024

Multimodal Foundation Models: From Specialists to General-Purpose Assistants
Chunyuan Li, Zhe Gan, Zhengyuan Yang, Jianwei Yang, Linjie Li, Lijuan Wang, Jianfeng Gao
Foundations and Trends in Computer Graphics and Vision, [Paper] [Webpage]
18 Sep 2023

Multimodal Large Language Models

Alignment Before Projection

Video-LLaVA: Learning United Visual Representation by Alignment Before Projection
Bin Lin, Yang Ye, Bin Zhu, Jiaxi Cui, Munan Ning, Peng Jin, Li Yuan
arXiv, [Paper] [Codes]
21 Nov 2023
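
The core idea in Video-LLaVA is to align image and video features into a shared, language-aligned space (using pre-aligned encoders such as LanguageBind) before projection, so that a single projector can serve both modalities. The sketch below is a hypothetical illustration of that ordering; the dimensions, the two-layer MLP, and the random stand-in features are assumptions, not the released implementation.

```python
# Hypothetical sketch of "alignment before projection" (Video-LLaVA-style):
# the image and video encoders already share a language-aligned feature space,
# so one shared projector can map both modalities into the LLM embedding space.
import torch
import torch.nn as nn

ALIGNED_DIM, LLM_DIM = 1024, 4096

# Stand-ins for pre-aligned encoder outputs (e.g., LanguageBind image/video towers);
# here they are random tensors with the shared dimensionality.
image_feats = torch.randn(2, 256, ALIGNED_DIM)      # (B, patches, D)
video_feats = torch.randn(2, 8 * 256, ALIGNED_DIM)  # (B, frames * patches, D)

# Because alignment happened first, a single shared projector suffices.
shared_projector = nn.Sequential(
    nn.Linear(ALIGNED_DIM, LLM_DIM), nn.GELU(), nn.Linear(LLM_DIM, LLM_DIM)
)

image_tokens = shared_projector(image_feats)        # (2, 256, 4096)
video_tokens = shared_projector(video_feats)        # (2, 2048, 4096)
print(image_tokens.shape, video_tokens.shape)
```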

Intermediate Networks

Note: includes designs that pair a Q-Former with linear projection layer(s); a minimal sketch of this connector pattern follows the entries below.

NExT-GPT: Any-to-Any Multimodal LLM
Shengqiong Wu, Hao Fei, Leigang Qu, Wei Ji, Tat-Seng Chua
ICML 2024, [Paper] [Codes and Dataset] [Webpage]
25 Jun 2024

mPLUG-Owl: Modularization Empowers Large Language Models with Multimodality
Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, Chenliang Li, Yuanhong Xu, Hehong Chen, Junfeng Tian, Qi Qian, Ji Zhang, Fei Huang, Jingren Zhou
arXiv, [Paper] [Codes] [Webpage]
29 Mar 2024

MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens
Kaizhi Zheng, Xuehai He, Xin Eric Wang
arXiv, [Paper] [Codes] [Webpage]
29 Mar 2024

Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding
Hang Zhang, Xin Li, Lidong Bing
EMNLP 2023, [Paper] [Codes] [Video]
25 Oct 2023

MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models
Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, Mohamed Elhoseiny
arXiv, [Paper] [Codes] [Dataset] [Webpage]
25 Oct 2023

ImageBind-LLM: Multi-modality Instruction Tuning
Jiaming Han, Renrui Zhang, Wenqi Shao, Peng Gao, Peng Xu, Han Xiao, Kaipeng Zhang, Chris Liu, Song Wen, Ziyu Guo, Xudong Lu, Shuai Ren, Yafei Wen, Xiaoxin Chen, Xiangyu Yue, Hongsheng Li, Yu Qiao
arXiv, [Paper] [Codes]
11 Sep 2023

BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs
Yang Zhao, Zhijie Lin, Daquan Zhou, Zilong Huang, Jiashi Feng, Bingyi Kang
arXiv, [Paper] [Codes] [Webpage]
17 Jul 2023

InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning
Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, Steven Hoi
arXiv, [Paper] [Codes]
15 Jun 2023

X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages
Feilong Chen, Minglun Han, Haozhi Zhao, Qingyang Zhang, Jing Shi, Shuang Xu, Bo Xu
arXiv, [Paper] [Codes] [Webpage]
22 May 2023
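
The common thread in the entries above is a small trainable module placed between a frozen vision encoder and the LLM. Below is a minimal, hypothetical sketch of a BLIP-2-style Q-Former connector: a fixed set of learnable query tokens cross-attends to frozen image features, and the resulting query outputs are linearly projected into the LLM embedding space. The single block, head count, and dimensions are illustrative assumptions; released models use deeper, BERT-style Q-Formers.

```python
# Hypothetical sketch of a Q-Former-style connector (BLIP-2 / MiniGPT-4 family).
# Dimensions and depth are illustrative, not the released architectures.
import torch
import torch.nn as nn


class QFormerConnector(nn.Module):
    def __init__(self, num_queries=32, query_dim=768, image_dim=1024, llm_dim=4096):
        super().__init__()
        # Learnable query tokens: the only new input the module introduces.
        self.queries = nn.Parameter(torch.randn(1, num_queries, query_dim) * 0.02)
        # Queries attend to the frozen image features (keys/values).
        self.cross_attn = nn.MultiheadAttention(
            embed_dim=query_dim, num_heads=8, kdim=image_dim, vdim=image_dim,
            batch_first=True,
        )
        self.norm = nn.LayerNorm(query_dim)
        self.ffn = nn.Sequential(
            nn.Linear(query_dim, 4 * query_dim), nn.GELU(),
            nn.Linear(4 * query_dim, query_dim),
        )
        # Final linear projection into the LLM's token-embedding space.
        self.to_llm = nn.Linear(query_dim, llm_dim)

    def forward(self, image_feats):              # image_feats: (B, N_patches, image_dim)
        q = self.queries.expand(image_feats.size(0), -1, -1)
        q = q + self.cross_attn(q, image_feats, image_feats)[0]
        q = q + self.ffn(self.norm(q))
        return self.to_llm(q)                    # (B, num_queries, llm_dim) visual tokens


if __name__ == "__main__":
    frozen_image_feats = torch.randn(2, 257, 1024)   # e.g., ViT patch features
    visual_tokens = QFormerConnector()(frozen_image_feats)
    print(visual_tokens.shape)                        # torch.Size([2, 32, 4096])
```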

Feature-level Fusion

Note: These methods could also be grouped under other categories; a sketch of Flamingo-style gated cross-attention, one representative of this family, follows the entries below.

The Llama 3 Herd of Models
Llama Team, AI @ Meta
arXiv, [Paper]
15 Aug 2024

CogVLM: Visual Expert for Pretrained Language Models
Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang, Lei Zhao, Xixuan Song, Jiazheng Xu, Bin Xu, Juanzi Li, Yuxiao Dong, Ming Ding, Jie Tang
arXiv, [Paper] [Codes]
4 Feb 2024

LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
Renrui Zhang, Jiaming Han, Chris Liu, Peng Gao, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Yu Qiao
ICLR 2024, [Paper] [Codes]
14 Jun 2023

Flamingo 🦩 & Cross-attention (Perceiver Resampler): A Visual Language Model for Few-Shot Learning
Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob Menick, Sebastian Borgeaud, Andrew Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, Karen Simonyan
NeurIPS 2022, [Paper] [Codes] [Video]
15 Nov 2022
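
As a reference point for this family, here is a hypothetical sketch of Flamingo-style gated cross-attention: layers interleaved with the frozen LLM let text hidden states attend to visual features, and a tanh gate initialized at zero keeps the pretrained LLM's behaviour unchanged at the start of training. Class names and sizes are assumptions for illustration only.

```python
# Hypothetical sketch of a Flamingo-style gated cross-attention block.
# The zero-initialized tanh gate makes the block an identity mapping at init,
# so the frozen LLM's behaviour is preserved when training starts.
import torch
import torch.nn as nn


class GatedCrossAttentionBlock(nn.Module):
    def __init__(self, text_dim=4096, vision_dim=1024, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(
            embed_dim=text_dim, num_heads=num_heads,
            kdim=vision_dim, vdim=vision_dim, batch_first=True,
        )
        self.norm = nn.LayerNorm(text_dim)
        # tanh(0) == 0, hence the gated residual contributes nothing at init.
        self.attn_gate = nn.Parameter(torch.zeros(1))

    def forward(self, text_hidden, vision_feats):
        # text_hidden: (B, T, text_dim); vision_feats: (B, V, vision_dim)
        attended, _ = self.attn(self.norm(text_hidden), vision_feats, vision_feats)
        return text_hidden + torch.tanh(self.attn_gate) * attended


if __name__ == "__main__":
    text = torch.randn(2, 16, 4096)
    vision = torch.randn(2, 64, 1024)
    print(GatedCrossAttentionBlock()(text, vision).shape)  # torch.Size([2, 16, 4096])
```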

Linear Layers Projection

LLaVA-OneVision: Easy Visual Task Transfer
Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Yanwei Li, Ziwei Liu, Chunyuan Li
arXiv, [Paper] [Codes] [Webpage]
6 Aug 2024

LLaVA-NeXT-Interleave: Tackling Multi-image, Video, and 3D in Large Multimodal Models
Feng Li, Renrui Zhang, Hao Zhang, Yuanhan Zhang, Bo Li, Wei Li, Zejun Ma, Chunyuan Li
arXiv, [Paper] [Codes] [Webpage]
28 Jul 2024

MoE-LLaVA: Mixture of Experts for Large Vision-Language Models
Bin Lin, Zhenyu Tang, Yang Ye, Jiaxi Cui, Bin Zhu, Peng Jin, Jinfa Huang, Junwu Zhang, Yatian Pang, Munan Ning, Li Yuan
arXiv, [Paper] [Codes]
6 Jul 2024

Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models
Muhammad Maaz, Hanoona Rasheed, Salman Khan, Fahad Shahbaz Khan
ACL 2024, [Paper] [Codes] [Webpage]
10 Jun 2024

LLaVA-1.5: Improved Baselines with Visual Instruction Tuning
Haotian Liu, Chunyuan Li, Yuheng Li, Yong Jae Lee
CVPR 2024, [Paper] [Codes] [Webpage]
15 May 2024

LLaVA: Visual Instruction Tuning
Haotian Liu, Chunyuan Li, Qingyang Wu, Yong Jae Lee
NeurIPS 2023, [Paper] [Codes] [Webpage]
11 Dec 2023

CoDi-2: In-Context, Interleaved, and Interactive Any-to-Any Generation
Zineng Tang, Ziyi Yang, Mahmoud Khademi, Yang Liu, Chenguang Zhu, Mohit Bansal
arXiv, [Paper] [Codes] [Webpage]
30 Nov 2023

X-InstructBLIP: A Framework for aligning X-Modal instruction-aware representations to LLMs and Emergent Cross-modal Reasoning
Artemis Panagopoulou, Le Xue, Ning Yu, Junnan Li, Dongxu Li, Shafiq Joty, Ran Xu, Silvio Savarese, Caiming Xiong, Juan Carlos Niebles
arXiv, [Paper] [Codes] [Webpage]
30 Nov 2023

GILL: Generating Images with Multimodal Language Models
Jing Yu Koh, Daniel Fried, Ruslan Salakhutdinov
NeurIPS 2023, [Paper] [Codes] [Webpage]
13 Oct 2023

MiniGPT-v2: large language model as a unified interface for vision-language multi-task learning
Jun Chen, Deyao Zhu, Xiaoqian Shen, Xiang Li, Zechun Liu, Pengchuan Zhang, Raghuraman Krishnamoorthi, Vikas Chandra, Yunyang Xiong, Mohamed Elhoseiny
arXiv, [Paper] [Codes] [Webpage]
7 Nov 2023

Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou
arXiv, [Paper] [Codes] [Webpage]
13 Oct 2023

ChatSpot: Bootstrapping Multimodal LLMs via Precise Referring Instruction Tuning
Liang Zhao, En Yu, Zheng Ge, Jinrong Yang, Haoran Wei, Hongyu Zhou, Jianjian Sun, Yuang Peng, Runpei Dong, Chunrui Han, Xiangyu Zhang
arXiv, [Paper]
18 Jul 2023

Shikra: Unleashing Multimodal LLM's Referential Dialogue Magic
Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, Rui Zhao
arXiv, [Paper] [Codes]
3 Jul 2023

FROMAGe: Grounding Language Models to Images for Multimodal Inputs and Outputs
Jing Yu Koh, Ruslan Salakhutdinov, Daniel Fried
ICML 2023, [Paper] [Codes] [Webpage]
13 Jun 2023

PaLI-X: On Scaling up a Multilingual Vision and Language Model
Xi Chen, Josip Djolonga, Piotr Padlewski, Basil Mustafa, Soravit Changpinyo, Jialin Wu, Carlos Riquelme Ruiz, Sebastian Goodman, Xiao Wang, Yi Tay, Siamak Shakeri, Mostafa Dehghani, Daniel Salz, Mario Lucic, Michael Tschannen, Arsha Nagrani, Hexiang Hu, Mandar Joshi, Bo Pang, Ceslee Montgomery, Paulina Pietrzyk, Marvin Ritter, AJ Piergiovanni, Matthias Minderer, Filip Pavetic, Austin Waters, Gang Li, Ibrahim Alabdulmohsin, Lucas Beyer, Julien Amelot, Kenton Lee, Andreas Peter Steiner, Yang Li, Daniel Keysers, Anurag Arnab, Yuanzhong Xu, Keran Rong, Alexander Kolesnikov, Mojtaba Seyedhosseini, Anelia Angelova, Xiaohua Zhai, Neil Houlsby, Radu Soricut
arXiv, [Paper]
29 May 2023

PandaGPT: One Model To Instruction-Follow Them All
Yixuan Su, Tian Lan, Huayang Li, Jialu Xu, Yan Wang, Deng Cai
TLLM 2023, [Paper] [Codes] [Webpage]
25 May 2023

VideoLLM: Modeling Video Sequence with Large Language Models
Guo Chen, Yin-Dong Zheng, Jiahao Wang, Jilan Xu, Yifei Huang, Junting Pan, Yi Wang, Yali Wang, Yu Qiao, Tong Lu, Limin Wang
arXiv, [Paper] [Codes]
23 May 2023
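
The models above share the simplest connector design: one or two linear layers map frozen vision-encoder features directly into the LLM embedding space, and the resulting visual tokens are concatenated with the text embeddings. A minimal sketch follows; the two-layer GELU MLP mirrors the LLaVA-1.5 design, while the exact dimensions and patch counts are illustrative assumptions.

```python
# Hypothetical sketch of a LLaVA-style linear/MLP projector.
# Visual patch features are mapped into the LLM embedding space and
# prepended to the embedded text prompt.
import torch
import torch.nn as nn


class VisualProjector(nn.Module):
    def __init__(self, vision_dim=1024, llm_dim=4096):
        super().__init__()
        # LLaVA uses a single linear layer; LLaVA-1.5 uses a two-layer MLP with GELU.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim)
        )

    def forward(self, patch_feats):              # (B, N_patches, vision_dim)
        return self.proj(patch_feats)            # (B, N_patches, llm_dim)


if __name__ == "__main__":
    patch_feats = torch.randn(2, 576, 1024)      # e.g., CLIP ViT-L/14 @ 336px patches
    text_embeds = torch.randn(2, 32, 4096)       # embedded text prompt
    visual_tokens = VisualProjector()(patch_feats)
    llm_inputs = torch.cat([visual_tokens, text_embeds], dim=1)
    print(llm_inputs.shape)                      # torch.Size([2, 608, 4096])
```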

Prompt Tuning

Prompt-Transformer (P-Former) & DLP: Bootstrapping Vision-Language Learning with Decoupled Language Pre-training
Yiren Jian, Chongyang Gao, Soroush Vosoughi
NeurIPS 2023, [Paper] [Codes]
19 Dec 2023

LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
Renrui Zhang, Jiaming Han, Chris Liu, Peng Gao, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Yu Qiao
ICLR 2024, [Paper] [Codes]
14 Jun 2023
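
Both entries keep the backbone largely frozen and learn a small set of prompt vectors, with a zero-initialized gate so the added prompts cannot disturb the pretrained model early in training. The sketch below is a simplified, hypothetical rendering of that idea, not the exact LLaMA-Adapter formulation (which prepends adaption prompts inside the topmost attention layers with a zero-initialized gate).

```python
# Hypothetical, simplified sketch of prompt tuning with a zero-initialized gate
# (in the spirit of LLaMA-Adapter; not the paper's exact method).
import torch
import torch.nn as nn


class GatedSoftPrompt(nn.Module):
    def __init__(self, num_prompts=10, dim=4096):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(1, num_prompts, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        # Zero gate: the frozen model's hidden states are untouched at initialization.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, hidden):                   # hidden: (B, T, dim) from a frozen layer
        p = self.prompts.expand(hidden.size(0), -1, -1)
        # Hidden states attend to the learnable prompts; gated residual added back.
        attended, _ = self.attn(hidden, p, p)
        return hidden + torch.tanh(self.gate) * attended


if __name__ == "__main__":
    hidden = torch.randn(2, 16, 4096)
    print(GatedSoftPrompt()(hidden).shape)       # torch.Size([2, 16, 4096])
```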

Contrastive Language-Image Pre-Training

Intermediate Networks

Lyrics & Multi-scale Querying Transformer (MQ-Former): Boosting Fine-grained Language-Vision Alignment and Comprehension via Semantic-aware Visual Objects
Junyu Lu, Dixiang Zhang, Songxin Zhang, Zejian Xie, Zhuoyang Song, Cong Lin, Jiaxing Zhang, Bingyi Jing, Pingjian Zhang
arXiv, [Paper]
12 Apr 2024

BLIP-2 & Query-Transformer (Q-Former): Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models
Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi
ICML 2023, [Paper] [Codes]
15 Jun 2023

Flamingo 🦩 & Cross-attention (Perceiver Resampler): A Visual Language Model for Few-Shot Learning
Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob Menick, Sebastian Borgeaud, Andrew Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, Karen Simonyan
NeurIPS 2022, [Paper] [Codes] [Video]
15 Nov 2022

BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi
ICML 2022, [Paper] [Codes and Dataset]
15 Feb 2022

Simple Contrastive Learning Paradigms

CLIP: Learning Transferable Visual Models From Natural Language Supervision
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever
ICML 2021, [Paper] [Codes] [Webpage]
26 Feb 2021
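
The core of this paradigm is a symmetric contrastive (InfoNCE) objective over matched image-text pairs within a batch. Below is a minimal sketch of the CLIP-style loss, assuming the image and text embeddings have already been produced by the two encoders:

```python
# Minimal sketch of the symmetric CLIP-style contrastive loss.
# Assumes image_embeds and text_embeds come from the two encoders,
# with row i of each batch forming a matched image-text pair.
import torch
import torch.nn.functional as F


def clip_contrastive_loss(image_embeds, text_embeds, temperature=0.07):
    # L2-normalize so dot products are cosine similarities.
    image_embeds = F.normalize(image_embeds, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)

    logits = image_embeds @ text_embeds.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions: image-to-text and text-to-image.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2


if __name__ == "__main__":
    img = torch.randn(8, 512)
    txt = torch.randn(8, 512)
    print(clip_contrastive_loss(img, txt).item())
```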

Preference Alignment

Aligning Large Multimodal Models with Factually Augmented RLHF
Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, Kurt Keutzer, Trevor Darrell
arXiv, [Paper] [Codes] [Webpage]
25 Sep 2023

Universal Embedding Space

Point-Bind & Point-LLM: Aligning Point Cloud with Multi-modality for 3D Understanding, Generation, and Instruction Following
Ziyu Guo, Renrui Zhang, Xiangyang Zhu, Yiwen Tang, Xianzheng Ma, Jiaming Han, Kexin Chen, Peng Gao, Xianzhi Li, Hongsheng Li, Pheng-Ann Heng
arXiv, [Paper] [Codes]
1 Sep 2023

ImageBind: One Embedding Space To Bind Them All
Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, Ishan Misra
CVPR 2023, [Paper] [Codes] [Webpage]
31 May 2023
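
The shared idea here is to bind every modality into one embedding space, typically by contrasting each modality's encoder output against image (or image-aligned) embeddings. The sketch below is a hypothetical illustration; the projection heads, dimensions, and choice of modalities are assumptions for demonstration.

```python
# Hypothetical sketch of "bind everything to images": each extra modality is
# trained with a contrastive loss against image embeddings in one shared space,
# so modalities that never co-occur become indirectly aligned via images.
import torch
import torch.nn as nn
import torch.nn.functional as F


def info_nce(a, b, temperature=0.07):
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2


# Illustrative per-modality projection heads into a shared 512-dim space.
audio_head = nn.Linear(768, 512)
depth_head = nn.Linear(256, 512)

image_embeds = torch.randn(8, 512)             # from a frozen, CLIP-style image encoder
audio_embeds = audio_head(torch.randn(8, 768))
depth_embeds = depth_head(torch.randn(8, 256))

# Each modality is bound to the image space; image-audio and image-depth pairs
# indirectly align audio and depth with each other as well.
loss = info_nce(image_embeds, audio_embeds) + info_nce(image_embeds, depth_embeds)
print(loss.item())
```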

Training Recipes

VILA: On Pre-training for Visual Language Models
Ji Lin, Hongxu Yin, Wei Ping, Yao Lu, Pavlo Molchanov, Andrew Tao, Huizi Mao, Jan Kautz, Mohammad Shoeybi, Song Han
CVPR 2024, [Paper] [Codes] [Webpage]
16 May 2024

Acknowledgement

This project is sponsored by the PodGPT group, Kolachalama Laboratory at Boston University.