Description
Reminder
- I have read the above rules and searched the existing issues.
System Info
ValueError: Processor was not found, please check and update your model file.
Running the code from the FAQ gives the following output:
from transformers import AutoProcessor
processor = AutoProcessor.from_pretrained("/root/autodl-tmp/Qwen3-VL-30B-A3B-Instruct")
print(type(processor))  # <class 'transformers.models.qwen2.tokenization_qwen2_fast.Qwen2TokenizerFast'>
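The output above shows that AutoProcessor falls back to the plain Qwen2 tokenizer instead of a multimodal processor, which is exactly the condition LLaMA-Factory later rejects. A minimal diagnostic sketch (the ProcessorMixin check and the version print are my own additions, not part of the FAQ):

import transformers
from transformers import AutoProcessor, ProcessorMixin

print(transformers.__version__)  # Qwen3-VL needs a transformers release that includes its processor class

processor = AutoProcessor.from_pretrained("/root/autodl-tmp/Qwen3-VL-30B-A3B-Instruct")
# A real multimodal processor inherits from ProcessorMixin; a tokenizer fallback does not,
# so printing False here reproduces the "Processor was not found" condition.
print(isinstance(processor, ProcessorMixin))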
The configuration file is as follows:
### model
model_name_or_path: /root/autodl-tmp/Qwen3-VL-30B-A3B-Instruct
image_max_pixels: 262144
video_max_pixels: 16384
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
deepspeed: examples/deepspeed/ds_z3_config.json # choices: [ds_z0_config.json, ds_z2_config.json, ds_z3_config.json]
### dataset
dataset: alpaca_en_demo # video: mllm_video_demo
template: qwen3_vl
cutoff_len: 2048
max_samples: 1000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 4
### output
output_dir: saves/qwen3vl-30b/lora/sft
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: none # choices: [none, wandb, tensorboard, swanlab, mlflow]
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
resume_from_checkpoint: null
### eval
val_size: 0.1
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
Reproduction
(llamafc) root@autodl-container-a5d042a72c-eebc4213:~/LLaMA-Factory-main# llamafactory-cli train examples/train_lora/qwen3vl_lora_sft.yaml
[INFO|2025-10-09 16:42:45] llamafactory.hparams.parser:423 >> Process rank: 0, world size: 1, device: cuda:0, distributed training: False, compute dtype: torch.bfloat16
[INFO|tokenization_utils_base.py:2066] 2025-10-09 16:42:45,908 >> loading file vocab.json
[INFO|tokenization_utils_base.py:2066] 2025-10-09 16:42:45,908 >> loading file merges.txt
[INFO|tokenization_utils_base.py:2066] 2025-10-09 16:42:45,908 >> loading file tokenizer.json
[INFO|tokenization_utils_base.py:2066] 2025-10-09 16:42:45,908 >> loading file added_tokens.json
[INFO|tokenization_utils_base.py:2066] 2025-10-09 16:42:45,908 >> loading file special_tokens_map.json
[INFO|tokenization_utils_base.py:2066] 2025-10-09 16:42:45,908 >> loading file tokenizer_config.json
[INFO|tokenization_utils_base.py:2066] 2025-10-09 16:42:45,908 >> loading file chat_template.jinja
[INFO|tokenization_utils_base.py:2337] 2025-10-09 16:42:46,110 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
[INFO|image_processing_base.py:374] 2025-10-09 16:42:46,111 >> loading configuration file /root/autodl-tmp/Qwen3-VL-30B-A3B-Instruct/preprocessor_config.json
[INFO|tokenization_utils_base.py:2066] 2025-10-09 16:42:46,112 >> loading file vocab.json
[INFO|tokenization_utils_base.py:2066] 2025-10-09 16:42:46,112 >> loading file merges.txt
[INFO|tokenization_utils_base.py:2066] 2025-10-09 16:42:46,112 >> loading file tokenizer.json
[INFO|tokenization_utils_base.py:2066] 2025-10-09 16:42:46,112 >> loading file added_tokens.json
[INFO|tokenization_utils_base.py:2066] 2025-10-09 16:42:46,112 >> loading file special_tokens_map.json
[INFO|tokenization_utils_base.py:2066] 2025-10-09 16:42:46,112 >> loading file tokenizer_config.json
[INFO|tokenization_utils_base.py:2066] 2025-10-09 16:42:46,112 >> loading file chat_template.jinja
[INFO|tokenization_utils_base.py:2337] 2025-10-09 16:42:46,305 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
[INFO|2025-10-09 16:42:46] llamafactory.data.loader:143 >> Loading dataset /root/autodl-tmp/flowchart_understanding_real_train.json...
trust_remote_code is not supported anymore.
Please check that the Hugging Face dataset 'json' isn't based on a loading script and remove trust_remote_code.
If the dataset is based on a loading script, please ask the dataset author to remove it and convert it to a standard format like Parquet.
Converting format of dataset (num_proc=16): 100%|█████████████████████████████████████████████████████████████████| 1000/1000 [00:00<00:00, 2631.37 examples/s]
[INFO|2025-10-09 16:42:47] llamafactory.data.loader:143 >> Loading dataset /root/autodl-tmp/flowchart_understanding_syn.json...
trust_remote_code is not supported anymore.
Please check that the Hugging Face dataset 'json' isn't based on a loading script and remove trust_remote_code.
If the dataset is based on a loading script, please ask the dataset author to remove it and convert it to a standard format like Parquet.
Setting num_proc from 16 back to 1 for the train split to disable multiprocessing as it only contains one shard.
Generating train split: 17994 examples [00:00, 25567.76 examples/s]
Converting format of dataset (num_proc=16): 100%|█████████████████████████████████████████████████████████████████| 1000/1000 [00:00<00:00, 2915.50 examples/s]
Running tokenizer on dataset (num_proc=16): 0%| | 0/2000 [00:01<?, ? examples/s]
multiprocess.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/root/miniconda3/envs/llamafc/lib/python3.10/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/root/miniconda3/envs/llamafc/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 586, in _write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
File "/root/miniconda3/envs/llamafc/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3674, in _map_single
for i, batch in iter_outputs(shard_iterable):
File "/root/miniconda3/envs/llamafc/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3624, in iter_outputs
yield i, apply_function(example, i, offset=offset)
File "/root/miniconda3/envs/llamafc/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3547, in apply_function
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "/root/LLaMA-Factory-main/src/llamafactory/data/processor/supervised.py", line 99, in preprocess_dataset
input_ids, labels = self._encode_data_example(
File "/root/LLaMA-Factory-main/src/llamafactory/data/processor/supervised.py", line 43, in _encode_data_example
messages = self.template.mm_plugin.process_messages(prompt + response, images, videos, audios, self.processor)
File "/root/LLaMA-Factory-main/src/llamafactory/data/mm_plugin.py", line 1589, in process_messages
self._validate_input(processor, images, videos, audios)
File "/root/LLaMA-Factory-main/src/llamafactory/data/mm_plugin.py", line 176, in _validate_input
raise ValueError("Processor was not found, please check and update your model file.")
ValueError: Processor was not found, please check and update your model file.
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/root/miniconda3/envs/llamafc/bin/llamafactory-cli", line 7, in <module>
sys.exit(main())
File "/root/LLaMA-Factory-main/src/llamafactory/cli.py", line 24, in main
launcher.launch()
File "/root/LLaMA-Factory-main/src/llamafactory/launcher.py", line 152, in launch
run_exp()
File "/root/LLaMA-Factory-main/src/llamafactory/train/tuner.py", line 110, in run_exp
_training_function(config={"args": args, "callbacks": callbacks})
File "/root/LLaMA-Factory-main/src/llamafactory/train/tuner.py", line 72, in _training_function
run_sft(model_args, data_args, training_args, finetuning_args, generating_args, callbacks)
File "/root/LLaMA-Factory-main/src/llamafactory/train/sft/workflow.py", line 51, in run_sft
dataset_module = get_dataset(template, model_args, data_args, training_args, stage="sft", **tokenizer_module)
File "/root/LLaMA-Factory-main/src/llamafactory/data/loader.py", line 315, in get_dataset
dataset = _get_preprocessed_dataset(
File "/root/LLaMA-Factory-main/src/llamafactory/data/loader.py", line 256, in _get_preprocessed_dataset
dataset = dataset.map(
File "/root/miniconda3/envs/llamafc/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 560, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/root/miniconda3/envs/llamafc/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3309, in map
for rank, done, content in iflatmap_unordered(
File "/root/miniconda3/envs/llamafc/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 626, in iflatmap_unordered
[async_result.get(timeout=0.05) for async_result in async_results]
File "/root/miniconda3/envs/llamafc/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 626, in <listcomp>
[async_result.get(timeout=0.05) for async_result in async_results]
File "/root/miniconda3/envs/llamafc/lib/python3.10/site-packages/multiprocess/pool.py", line 774, in get
raise self._value
ValueError: Processor was not found, please check and update your model file.
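The traceback shows the error is raised by mm_plugin._validate_input because no multimodal processor was passed in. A quick way to check whether the installed transformers maps this checkpoint's model_type to a processor class (PROCESSOR_MAPPING_NAMES is an internal mapping, so treat this as a hypothetical check rather than a supported API):

import json
from transformers.models.auto.processing_auto import PROCESSOR_MAPPING_NAMES

# model_type declared by the local checkpoint
with open("/root/autodl-tmp/Qwen3-VL-30B-A3B-Instruct/config.json") as f:
    model_type = json.load(f)["model_type"]

# False likely means the installed transformers predates Qwen3-VL support,
# so AutoProcessor falls back to the tokenizer and LLaMA-Factory raises the error above.
print(model_type, model_type in PROCESSOR_MAPPING_NAMES)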
Others
No response