😊 Hi everyone! We are very pleased to announce that align-anything now supports fine-tuning for Qwen2.5-Omni. The code is here 👉 PKU-Alignment/align-anything#169.
Compared to the community's implementation, we believe our solution is more user-friendly: after installation, you just need to run the scripts below to start training, without modifying anything!
- Installation:

```bash
# We tested on an H800 computing cluster, where this CUDA version works well.
# You can adjust the version according to your cluster's actual setup.
conda install nvidia/label/cuda-12.2.0::cuda
export CUDA_HOME=$CONDA_PREFIX

cd align-anything
pip install -e .[train]

# for Qwen2.5-Omni
pip uninstall transformers
pip install git+https://github.com/huggingface/transformers@3a1ead0aabed473eafe527915eea8c197d424356
pip install -U flash-attn --no-build-isolation
```
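Since the pinned transformers commit predates an official release, it can help to confirm the environment before launching a run. The following is an optional sanity check of our own (not part of the steps above); it only verifies that the key packages import and that CUDA is visible:

```bash
# Optional sanity check (our addition, not from the official instructions):
# confirm torch sees the GPU and the pinned packages import cleanly.
python - <<'EOF'
import torch, transformers, flash_attn
print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("transformers:", transformers.__version__)
print("flash-attn:", flash_attn.__version__)
EOF
```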
- Training:

```bash
cd scripts
bash qwen_omni_sft.sh
```
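If you want to fine-tune a different checkpoint or change where outputs land, the usual pattern is to edit the variables at the top of the script rather than the training code. The names below are a hypothetical sketch of what such a script typically exposes; check scripts/qwen_omni_sft.sh in your checkout for the actual variable names:

```bash
# Hypothetical variable names for illustration only; the real ones are
# defined at the top of scripts/qwen_omni_sft.sh.
MODEL_NAME_OR_PATH="Qwen/Qwen2.5-Omni-7B"  # base model to fine-tune (assumption)
OUTPUT_DIR="../outputs/qwen_omni_sft"      # checkpoint/log directory (assumption)
```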