
Commit 1780b58

Author: 肖鑫 (Xin Xiao)

Fix typos

1 parent ad7c499

File tree: 1 file changed (+17, −7 lines)


README.md

Lines changed: 17 additions & 7 deletions
````diff
@@ -1,25 +1,35 @@
-# 👁️ Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment
-[![License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)
+<div align="center">
+<h1>👁️ Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment</h1>
+<a href='https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE'><img src='https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg'></a>
 <a href='https://arxiv.org/pdf/2405.17871'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a>
 
+[Xin Xiao](https://scholar.google.com/citations?user=CL-ZEdwAAAAJ&hl=zh-CN)<sup>1,2*</sup>,
+[Bohong Wu](https://scholar.google.com/citations?user=N6vypvkAAAAJ&hl=en)<sup>2*</sup>,
+Jiacong Wang<sup>2,3</sup>,
+[Chunyuan Li](https://chunyuan.li/)<sup>2</sup>,
+Xun Zhou<sup>2</sup>,
+[Haoyuan Guo](https://scholar.google.com/citations?hl=en&user=hql67boAAAAJ&view_op=list_works&sortby=pubdate) <sup>2</sup>
 
-This is an official PyTorch Implementation of [**Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment**](https://arxiv.org/pdf/2405.17871)
+<sup>1</sup>School of Computer Science, Wuhan University, <sup>2</sup>ByteDance Inc
+
+<sup>3</sup>School of Artificial Intelligence, University of Chinese Academy of Sciences
 
-[Xin Xiao*](https://scholar.google.com/citations?user=CL-ZEdwAAAAJ&hl=zh-CN), [Bohong Wu*](https://scholar.google.com/citations?user=N6vypvkAAAAJ&hl=en), Jiacong Wang, [Chunyuan Li](https://chunyuan.li/), Xun Zhou, [Haoyuan Guo](https://scholar.google.com/citations?hl=en&user=hql67boAAAAJ&view_op=list_works&sortby=pubdate) (*Equal Contribution)
 
 >**abstract:**
 >Existing image-text modality alignment in Vision Language Models (VLMs) treats each text token equally in an autoregressive manner. Despite being simple and effective, this method results in sub-optimal cross-modal alignment by over-emphasizing the text tokens that are less correlated with or even contradictory with the input images. In this paper, we advocate for assigning distinct contributions for each text token based on its visual correlation. Specifically, we present by contrasting image inputs, the difference in prediction logits on each text token provides strong guidance of visual correlation. We therefore introduce **C**ontrastive **AL**ignment (CAL), a simple yet effective re-weighting strategy that prioritizes training visually correlated tokens. Our experimental results demonstrate that CAL consistently improves different types of VLMs across different resolutions and model sizes on various benchmark datasets. Importantly, our method incurs minimal additional computational overhead, rendering it highly efficient compared to alternative data scaling strategies.
 
+</div>
+
 <p align="center"><img width="100%" src="./images/motivation.jpg"></p>
 <p align="center"><img width="100%" src="./images/method.jpg"></p>
 
 
 
 ## News and Updates
-* ```2024.06``` 🔥🔥🔥 Code released.
+* ```2024.06``` 🔥🔥🔥 The code is released.
 
-## Selected Examples
-<p align="center"><img width="100%" src="./images/cases.jpg"></p>
+<!-- ## Selected Examples
+<p align="center"><img width="100%" src="./images/cases.jpg"></p> -->
 
 ## Results
 We provide results comparision for LLaVA-NEXT here.
````
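The abstract in the diff above describes CAL only at a high level: each text token's loss is re-weighted by how strongly the token correlates with the image, as measured by the change in its prediction logit when the image input is contrasted. The following is a minimal illustrative PyTorch sketch of that idea, not the authors' released implementation; the function name `cal_weighted_loss`, the clamping, and the normalization are all assumptions made for the example.

```python
# Hypothetical sketch of CAL-style token re-weighting, based only on the
# abstract: per-token weights come from the difference in prediction logits
# between the real image input and a contrastive image input.
import torch
import torch.nn.functional as F


def cal_weighted_loss(logits_with_image: torch.Tensor,
                      logits_contrast: torch.Tensor,
                      labels: torch.Tensor) -> torch.Tensor:
    """Re-weight the autoregressive LM loss toward visually correlated tokens.

    logits_with_image / logits_contrast: (T, V) logits for the same T text
    tokens, computed with the real image vs. a contrastive image input.
    labels: (T,) ground-truth token ids.
    """
    # Log-probability of each ground-truth token under both conditions.
    lp_img = F.log_softmax(logits_with_image, dim=-1)
    lp_con = F.log_softmax(logits_contrast, dim=-1)
    gt = labels.unsqueeze(-1)
    delta = (lp_img.gather(-1, gt) - lp_con.gather(-1, gt)).squeeze(-1)  # (T,)

    # Tokens whose prediction improves with the real image are treated as
    # visually correlated; negatives are clamped so weakly correlated or
    # contradictory tokens contribute little (an assumption of this sketch).
    weights = delta.clamp(min=0.0)
    weights = weights / (weights.sum() + 1e-8) * weights.numel()

    token_loss = F.cross_entropy(logits_with_image, labels, reduction="none")
    return (weights * token_loss).mean()
```

Under these assumptions, a token such as a color or object word whose logit rises sharply once the image is visible gets a large weight, while a purely linguistic token (e.g. "the") whose logit is unchanged is down-weighted toward zero.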
