
Full fine-tuning of Qwen3-VL-30B-A3B-Instruct hangs at the first step #9245


Description

@antoinegg1

Reminder

  • I have read the above rules and searched the existing issues.

System Info

  • llamafactory version: 0.9.4.dev0
  • Platform: Linux-5.10.134-19.100.al8.x86_64-x86_64-with-glibc2.35
  • Python version: 3.11.11
  • PyTorch version: 2.6.0+cu124 (GPU)
  • Transformers version: 4.57.0
  • Datasets version: 4.0.0
  • Accelerate version: 1.10.1
  • PEFT version: 0.17.1
  • TRL version: 0.9.6
  • GPU type: NVIDIA L20X
  • GPU number: 8
  • GPU memory: 139.81GB
  • DeepSpeed version: 0.16.9
  • Default data directory: detected

Reproduction

Training arguments:

### model

model_name_or_path: Qwen/Qwen3-VL-30B-A3B-Instruct
image_max_pixels: 2000000
image_min_pixels: 40000
video_max_pixels: 16384
trust_remote_code: true

### method

stage: sft
do_train: true
finetuning_type: full
freeze_vision_tower: true
freeze_multi_modal_projector: true
freeze_language_model: false
flash_attn: fa2
deepspeed: examples/deepspeed/ds_z3_config.json

### dataset

dataset: minio3_coldstart
template: qwen3_vl
cutoff_len: 131072 #2048
max_samples: 100000
overwrite_cache: true
preprocessing_num_workers: 16
dataloader_num_workers: 0

### output

output_dir: ./model/mini-o3-qwen3vl-SFT/
logging_steps: 10
save_steps: 500
plot_loss: true
overwrite_output_dir: true
save_only_model: false
report_to: wandb #none # choices: [none, wandb, tensorboard, swanlab, mlflow]

### train

per_device_train_batch_size: 1
gradient_accumulation_steps: 4
learning_rate: 1.0e-5
num_train_epochs: 3.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
resume_from_checkpoint: null
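
Note that `ddp_timeout: 180000000` (seconds, roughly 5.7 years) effectively disables the distributed watchdog, so a desynchronized collective hangs silently instead of raising. As a minimal sketch for making the hang fail loudly, assuming standard PyTorch/NCCL environment variables (nothing LLaMA-Factory specific) and a hypothetical config path:

```python
import os
import subprocess

env = dict(os.environ)
env["NCCL_DEBUG"] = "INFO"              # log NCCL communicator setup and collectives
env["TORCH_NCCL_BLOCKING_WAIT"] = "1"   # stuck collectives raise after the process-group timeout

# "qwen3vl_full_sft.yaml" is a hypothetical path to the config above; also
# lower ddp_timeout in the YAML (e.g. to 1800) so the timeout can actually fire.
subprocess.run(["llamafactory-cli", "train", "qwen3vl_full_sft.yaml"], env=env)
```

With these set, a rank that stalls in a collective should eventually surface a timeout error naming the operation, instead of hanging indefinitely.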

Error

The training output stops at: [screenshot]

GPU status: [screenshot]

py-spy shows the first process is stuck at: [screenshot]

All other processes are in get_lst_from_rank0.
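
For context, get_lst_from_rank0 appears to be a DeepSpeed ZeRO-3 helper that broadcasts a list from rank 0, so ranks 1-7 waiting there while rank 0 sits elsewhere suggests the ranks have desynchronized on a collective. A minimal sketch for capturing all eight stacks at once with py-spy (matching on "llamafactory" in the cmdline is an assumption about how the job was launched; adjust to whatever `ps` shows):

```python
import pathlib
import subprocess

# Rough sketch: dump the Python stack of every training process via py-spy.
# May require root (ptrace permission) depending on the system configuration.
for proc in pathlib.Path("/proc").iterdir():
    if not proc.name.isdigit():
        continue
    try:
        cmdline = (proc / "cmdline").read_bytes().replace(b"\0", b" ")
    except OSError:
        continue  # process exited or is not readable
    if b"llamafactory" in cmdline:
        print(f"=== pid {proc.name} ===")
        subprocess.run(["py-spy", "dump", "--pid", proc.name])
```

Comparing the eight dumps side by side shows which collective each rank last entered and which rank took a different path.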

Others

Full fine-tuning of Qwen3-VL-30B-A3B-Instruct currently hangs at the first step; it is unclear whether this is a llamafactory problem or a transformers problem.
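
One way to narrow this down is a probe that checks whether all ranks even reach the training step. Below is a hypothetical sketch (not anything LLaMA-Factory ships; wiring it in would require patching the trainer setup): a transformers TrainerCallback that runs a monitored barrier on a gloo side channel before each step, so the straggler rank is named instead of every rank hanging in NCCL.

```python
import datetime

import torch.distributed as dist
from transformers import TrainerCallback


class BarrierProbe(TrainerCallback):
    """Hypothetical probe: barrier on a gloo side channel before each step.

    If one rank diverges (e.g. takes a different branch while batching
    multimodal inputs), the others time out here with an error naming the
    missing rank, instead of hanging forever inside an NCCL collective.
    """

    def __init__(self) -> None:
        # Must run on every rank after init_process_group; gloo is required
        # because monitored_barrier does not support the NCCL backend.
        self.group = dist.new_group(backend="gloo") if dist.is_initialized() else None

    def on_step_begin(self, args, state, control, **kwargs):
        if self.group is not None:
            dist.monitored_barrier(group=self.group,
                                   timeout=datetime.timedelta(minutes=5))
```

If the barrier passes but the step still hangs, the desync is inside forward/backward (pointing at transformers or DeepSpeed); if the barrier itself times out, some rank never reached the step (pointing at the data pipeline).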

Metadata

Labels: solved (This problem has been already solved)