Commit ad3a387

Merge branch 'main' into fix-deprec-get_autocast_gpu_dtype
2 parents 80bfb43 + 16c7afd commit ad3a387

120 files changed (+452, -3162 lines)

CONTRIBUTING.md

Lines changed: 2 additions & 1 deletion

@@ -125,8 +125,9 @@ If you're contributing a **vision-language model** (or any multimodal model that
 All new models should use the modular architecture pattern. Create a `modular_<model_name>.py` file using the modular model converter:
 
 - Use the CLI, [`transformers add-new-model-like`](https://github.com/huggingface/transformers/blob/main/src/transformers/cli/add_new_model_like.py) to generate a modular skeleton and get started
-- All code should be in the modular file if possible. Modeling must be in it, it's better if configuration is in it as well.
+- All code should be in the modular file if possible. Modeling must be in it, it's better if configuration is in it as well. [Modular guide](./modular_transformers#implementing-a-modular-file) shows a quick way to set up a modular file.
 - Reuse existing patterns from similar models as much as possible
+- You can make the model compatible with inference engines such as vLLM or SGLang, and enable zero-effort integration. See specific requirements for model implementation in ["Transformers modeling backend"](./transformers_as_backend#multimodal-models)
 
 To verify your modular file is correct, run:
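
For reference, the modular pattern the updated bullet points to looks roughly like the sketch below. It assumes a hypothetical `MyModel` that can reuse Llama components; the actual skeleton generated by `transformers add-new-model-like` will differ per model.

```python
# modular_my_model.py -- hypothetical model that reuses Llama components;
# the real skeleton comes from `transformers add-new-model-like`.
from transformers.models.llama.configuration_llama import LlamaConfig
from transformers.models.llama.modeling_llama import LlamaForCausalLM, LlamaModel


class MyModelConfig(LlamaConfig):
    # Configuration can live in the modular file as well; only overrides go here.
    model_type = "my_model"


class MyModelModel(LlamaModel):
    # Inherit the architecture and override only the parts that actually differ.
    pass


class MyModelForCausalLM(LlamaForCausalLM):
    pass
```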

docker/transformers-pytorch-amd-gpu/Dockerfile

Lines changed: 1 addition & 1 deletion

@@ -1,4 +1,4 @@
-FROM rocm/pytorch:rocm7.0.2_ubuntu24.04_py3.12_pytorch_release_2.7.1
+FROM rocm/pytorch:rocm7.1_ubuntu22.04_py3.10_pytorch_release_2.8.0
 LABEL maintainer="Hugging Face"
 
 ARG DEBIAN_FRONTEND=noninteractive

docs/source/en/_toctree.yml

Lines changed: 1 addition & 1 deletion

@@ -118,7 +118,7 @@
 - local: tools
   title: Tools
 - local: transformers_as_backend
-  title: Inference server backends
+  title: Transformers as modeling backend
 - local: continuous_batching
   title: Continuous Batching
 title: Inference

docs/source/en/model_doc/qwen2_5_omni.md

Lines changed: 2 additions & 2 deletions

@@ -136,7 +136,7 @@ inputs = processor.apply_chat_template(
     tokenize=True,
     return_dict=True,
     return_tensors="pt",
-    video_fps=1,
+    fps=1,
 
     # kwargs to be passed to `Qwen2-5-OmniProcessor`
     padding=True,
@@ -245,7 +245,7 @@ inputs = processor.apply_chat_template(
     tokenize=True,
     return_dict=True,
     return_tensors="pt",
-    video_fps=1,
+    fps=1,
 
     # kwargs to be passed to `Qwen2-5-OmniProcessor`
     padding=True,
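
The renamed keyword in context, as a minimal sketch: the checkpoint name, the placeholder video URL, and the reduced set of kwargs are assumptions for illustration and are not pinned down by this diff.

```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-Omni-7B")  # checkpoint name assumed

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "https://example.com/clip.mp4"},  # placeholder URL
            {"type": "text", "text": "Describe the video."},
        ],
    }
]

inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
    fps=1,         # previously passed as `video_fps=1`
    padding=True,  # kwarg forwarded to the processor
)
```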

docs/source/en/model_doc/qwen2_audio.md

Lines changed: 5 additions & 5 deletions

@@ -54,7 +54,7 @@ processor = AutoProcessor.from_pretrained("Qwen/Qwen2-Audio-7B", trust_remote_co
 prompt = "<|audio_bos|><|AUDIO|><|audio_eos|>Generate the caption in English:"
 url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Audio/glass-breaking-151256.mp3"
 audio, sr = librosa.load(BytesIO(urlopen(url).read()), sr=processor.feature_extractor.sampling_rate)
-inputs = processor(text=prompt, audios=audio, return_tensors="pt").to(model.device)
+inputs = processor(text=prompt, audio=audio, return_tensors="pt").to(model.device)
 
 generate_ids = model.generate(**inputs, max_length=256)
 generate_ids = generate_ids[:, inputs.input_ids.size(1):]
@@ -63,7 +63,7 @@ response = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_
 
 # We can also omit the audio_bos and audio_eos tokens
 prompt = "<|AUDIO|>Generate the caption in English:"
-inputs = processor(text=prompt, audios=audio, return_tensors="pt").to(model.device)
+inputs = processor(text=prompt, audio=audio, return_tensors="pt").to(model.device)
 
 generate_ids = model.generate(**inputs, max_length=256)
 generate_ids = generate_ids[:, inputs.input_ids.size(1):]
@@ -106,7 +106,7 @@ for message in conversation:
             sr=processor.feature_extractor.sampling_rate)[0]
         )
 
-inputs = processor(text=text, audios=audios, return_tensors="pt", padding=True)
+inputs = processor(text=text, audio=audios, return_tensors="pt", padding=True)
 inputs.input_ids = inputs.input_ids.to(model.device)
 
 generate_ids = model.generate(**inputs, max_length=256)
@@ -156,7 +156,7 @@ for message in conversation:
             sr=processor.feature_extractor.sampling_rate)[0]
         )
 
-inputs = processor(text=text, audios=audios, return_tensors="pt", padding=True)
+inputs = processor(text=text, audio=audios, return_tensors="pt", padding=True)
 inputs.input_ids = inputs.input_ids.to(model.device)
 
 generate_ids = model.generate(**inputs, max_length=256)
@@ -213,7 +213,7 @@ for conversation in conversations:
             sr=processor.feature_extractor.sampling_rate)[0]
         )
 
-inputs = processor(text=text, audios=audios, return_tensors="pt", padding=True)
+inputs = processor(text=text, audio=audios, return_tensors="pt", padding=True)
 inputs['input_ids'] = inputs['input_ids'].to(model.device)
 inputs.input_ids = inputs.input_ids.to(model.device)
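
The renamed `audio` keyword in context, consolidated from the surrounding doc example into a minimal sketch (model loading and generation are omitted; the checkpoint, prompt, and clip URL are the ones already used in the doc):

```python
from io import BytesIO
from urllib.request import urlopen

import librosa
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("Qwen/Qwen2-Audio-7B", trust_remote_code=True)

url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Audio/glass-breaking-151256.mp3"
audio, sr = librosa.load(BytesIO(urlopen(url).read()), sr=processor.feature_extractor.sampling_rate)

prompt = "<|audio_bos|><|AUDIO|><|audio_eos|>Generate the caption in English:"
# `audio=` replaces the deprecated `audios=` keyword.
inputs = processor(text=prompt, audio=audio, return_tensors="pt")
```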

docs/source/en/model_doc/qwen3_omni_moe.md

Lines changed: 3 additions & 3 deletions

@@ -80,7 +80,7 @@ inputs = processor.apply_chat_template(
     tokenize=True,
     return_dict=True,
     return_tensors="pt",
-    video_fps=1,
+    fps=1,
 
     # kwargs to be passed to `Qwen3OmniMoeProcessor`
     padding=True,
@@ -136,7 +136,7 @@ inputs = processor.apply_chat_template(
     tokenize=True,
     return_dict=True,
     return_tensors="pt",
-    video_fps=1,
+    fps=1,
 
     # kwargs to be passed to `Qwen3OmniMoeProcessor`
     padding=True,
@@ -245,7 +245,7 @@ inputs = processor.apply_chat_template(
     tokenize=True,
     return_dict=True,
     return_tensors="pt",
-    video_fps=1,
+    fps=1,
 
     # kwargs to be passed to `Qwen3OmniMoeProcessor`
     padding=True,

docs/source/en/model_doc/seamless_m4t.md

Lines changed: 1 addition & 1 deletion

@@ -61,7 +61,7 @@ Here is how to use the processor to process text and audio:
 >>> audio_sample = next(iter(dataset))["audio"]
 
 >>> # now, process it
->>> audio_inputs = processor(audios=audio_sample["array"], return_tensors="pt")
+>>> audio_inputs = processor(audio=audio_sample["array"], return_tensors="pt")
 
 >>> # now, process some English test as well
 >>> text_inputs = processor(text = "Hello, my dog is cute", src_lang="eng", return_tensors="pt")

docs/source/en/model_doc/seamless_m4t_v2.md

Lines changed: 1 addition & 1 deletion

@@ -61,7 +61,7 @@ Here is how to use the processor to process text and audio:
 >>> audio_sample = next(iter(dataset))["audio"]
 
 >>> # now, process it
->>> audio_inputs = processor(audios=audio_sample["array"], return_tensors="pt")
+>>> audio_inputs = processor(audio=audio_sample["array"], return_tensors="pt")
 
 >>> # now, process some English text as well
 >>> text_inputs = processor(text = "Hello, my dog is cute", src_lang="eng", return_tensors="pt")

docs/source/en/modular_transformers.md

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 # Contributing a new model to Transformers
 
-Modular Transformers lowers the bar for contributing models and significantly reduces the code required to add a model by allowing imports and inheritance.
+Modular Transformers lowers the bar for contributing models and significantly reduces the code required to add a model by allowing imports and inheritance. We recommend to go through [general contribution guidelines for new models](./contributing#do-you-want-to-implement-a-new-model) before diving into the details here.
 
 One of Transformers' core design feature is the [single model, single file](https://huggingface.co/blog/transformers-design-philosophy) policy. Model components - such as attention layers - are repeated across many files and any independent implementations tend to diverge as fixes and changes are applied to specific parts of the code.
docs/source/en/transformers_as_backend.md

Lines changed: 8 additions & 52 deletions

@@ -14,9 +14,9 @@ rendered properly in your Markdown viewer.
 
 -->
 
-# Inference server backends
+# Transformers as modeling backend
 
-Transformers' models are compatible with different inference servers like vLLM and SGLang. Instead of implementing a model for each inference server, you only need one model, which can be plugged into any inference server. It simplifies maintenance and makes it easy for users to use different inference servers for different use cases.
+Transformers' models are compatible with different inference servers like vLLM and SGLang. Instead of implementing a new model architecture from scratch for each inference server, you only need a model definition in `transformers`, which can be plugged into any inference server. It simplifies maintenance and makes it easy for users to use different inference servers for different use cases.
 
 With Transformers as a backend, you can also serve any model - including custom and Hub-hosted models - without waiting for native support.
 
@@ -157,57 +157,13 @@ class MyConfig(PreTrainedConfig):
 
 ### Multimodal models
 
-For multimodal models, you need to include a few more changes on top of the general recommendations. These rules ensure that your model integrates properly with multimodal data.
+For multimodal models, you need to include a few more changes on top of the general recommendations outlined in ["contribuiting a model"](./contributing#vision-language-model-contribution-checklist). These rules ensure that your model integrates properly and enables processing multimodal data.
 
-1. A multimodal model requires a base `MyMultiModalModel` class to handle multimodal fusion without a language modeling head and a separate generative class that adds a head.
+1. A multimodal model's processing class must have the `self.image_token` and `self.image_token_ids` attributes. These are placeholder tokens used to indicate image positions in the input. This placeholder token is the same token used in the input prompt to denote images and used in model code to scatter image features.
 
-   The base model needs to implement the `get_image_features()` method to accept image pixel values and return encoded outputs. These are later merged with the language embeddings and don't require any postprocessing. The shape of the returned features must match the number of input images. If a vision encoder returns variable-length outputs (patch-based), return a list of 2D tensors of size `(image_seq_len, image_dim)` for each image.
+2. The processing class needs `self._get_num_multimodal_tokens` method to compute the number of placeholder tokens needed for multimodal inputs with given sizes and to return a [`MultiModalData`] object. The placeholders between `<image>` tokens such as row or column tokens don't count as image placeholders. Only tokens that are actually replaced by image features later in modeling should be counted!
 
-   Expand the code below for an example.
-
-   <details>
-   <summary>modeling_my_multimodal_model.py</summary>
-
-   ```python
-   from transformers.generation import GenerationMixin
-
-   class MyMultimodalModel(MyMultimodalPreTrainedModel):
-       def __init__(self, config):
-           super().__init__(config)
-           self.language_model = AutoModel.from_config(config.text_config)
-           self.vision_tower = AutoModel.from_config(config.vision_config)
-           self.multimodal_projection = nn.Linear(vision_dim, text_dim)
-
-       def get_image_features(self, pixel_values):
-           return self.vision_tower(pixel_values).last_hidden_states
-
-       def forward(self, input_ids, pixel_values, **kwargs):
-           # process your inputs
-           return MyModelOutputWithPast(
-               last_hidden_state=last_hidden_state,
-               image_hidden_states=image_features,
-               [...]
-           )
-
-   class MyMultimodalModelForConditionalGeneration(MyMultimodalPreTrainedModel, GenerationMixin):
-       def __init__(self, config):
-           super().__init__(config)
-           self.model = MyMultimodalModel(config)
-           self.lm_head = nn.Linear(hidden_dim, vocab_size)
-   ```
-
-   </details>
-
-2. A multimodal model config must be nested with the following fields.
-   * text_config: decoder language model config
-   * vision_config: vision encoder config
-   * image_token_id: ID of the image placeholder token used in the input to indicate image position
-
-3. A multimodal model's processing class must have the `self.image_token` and `self.image_token_ids` attributes. These are placeholder tokens used to indicate image positions in the input. The placeholder token is the same token used in the input prompt and to mask scatter image features.
-
-   The processing class also needs `self._get_num_multimodal_tokens` method to compute the number of placeholder tokens needed for multimodal inputs with given sizes and to return a [`MultiModalData`] object. The placeholder for row and column tokens don't count as image placeholders. Only the tokens that are actually replaced by image features are computed.
-
-   Finally, when `return_mm_token_type_ids=True`, the class has to return `mm_token_type_ids` to indicate whether each position is a text token (`0`) or image placeholder token (`1`). Each image's token type IDs must be contiguous with no breaks between consecutive ones.
+3. The processor needs to check the value of `return_mm_token_type_ids` and return `mm_token_type_ids` to indicate whether each position is a text token (`0`), image placeholder token (`1`) or video placeholder token (`2`). Each multimodal token type ID sequence must be contiguous without breaks between consecutive tokens, therefore special tokens for begin/end/row/column must be treated as placeholders.
 
 Expand the code below for an example.
 
@@ -246,5 +202,5 @@ class MyMultimodalProcessor(ProcessorMixin):
 
 ## Resources
 
-* Read the [Transformers backend integration in vLLM](https://blog.vllm.ai/2025/04/11/transformers-backend.html) blog post for more details about the Transformers backend in vLLM.
-* Read the [Transformers backend integration in SGLang](https://huggingface.co/blog/transformers-backend-sglang) blog post for more details about the Transformers backend in SGLang.
+* Read the [Transformers modeling backend integration in vLLM](https://blog.vllm.ai/2025/04/11/transformers-backend.html) blog post for more details about the Transformers modeling backend in vLLM.
+* Read the [Transformers modeling backend integration in SGLang](https://huggingface.co/blog/transformers-backend-sglang) blog post for more details about the Transformers modeling backend in SGLang.
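
The doc's own processor example is collapsed in this view; below is a rough sketch of the three requirements listed above. The class and attribute names follow the doc's placeholders, but the import path for `MultiModalData`, the fixed 64-token count per image, and the token-type bookkeeping are assumptions for illustration only.

```python
from transformers.processing_utils import MultiModalData, ProcessorMixin  # import path assumed


class MyMultimodalProcessor(ProcessorMixin):
    attributes = ["image_processor", "tokenizer"]
    image_processor_class = "AutoImageProcessor"
    tokenizer_class = "AutoTokenizer"

    def __init__(self, image_processor, tokenizer, **kwargs):
        super().__init__(image_processor, tokenizer, **kwargs)
        # 1. Placeholder token used in the prompt and for scattering image features.
        self.image_token = "<image>"
        self.image_token_ids = tokenizer.convert_tokens_to_ids([self.image_token])

    def _get_num_multimodal_tokens(self, image_sizes=None, **kwargs):
        # 2. Count only tokens that are later replaced by image features
        #    (a fixed 64 per image here, purely for illustration).
        return MultiModalData(num_image_tokens=[64 for _ in image_sizes])

    def __call__(self, text=None, images=None, return_mm_token_type_ids=False, **kwargs):
        # Assumes batched `text` (a list of prompts) for simplicity.
        inputs = self.tokenizer(text, **kwargs)
        if images is not None:
            inputs.update(self.image_processor(images, **kwargs))
        if return_mm_token_type_ids:
            # 3. 0 = text, 1 = image placeholder; each image's run of 1s must be contiguous.
            inputs["mm_token_type_ids"] = [
                [1 if tok in self.image_token_ids else 0 for tok in ids]
                for ids in inputs["input_ids"]
            ]
        return inputs
```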
