
[Fine-tuning Code] Here is an implementation 👋 ! #12

@Gaiejj

😊 Hi everyone! We are very pleased to announce that align-anything now supports fine-tuning for Qwen2.5-Omni. The code is here 👉 PKU-Alignment/align-anything#169.


Compared to the community's implementation, we believe our solution is more user-friendly: after installation, you only need to run the scripts below to start training, without modifying anything. (A quick environment check is also sketched after the steps below.)

  • Installation:
# We tested on an H800 computing cluster, where this CUDA version works well.
# Adjust the CUDA version to match your own cluster.

conda install nvidia/label/cuda-12.2.0::cuda
export CUDA_HOME=$CONDA_PREFIX

# Install align-anything with its training dependencies
# (assumes the repository has already been cloned).
cd align-anything
pip install -e .[train]

# For Qwen2.5-Omni, replace the released transformers with the pinned development commit.
pip uninstall transformers
pip install git+https://github.com/huggingface/transformers@3a1ead0aabed473eafe527915eea8c197d424356
pip install -U flash-attn --no-build-isolation
  • Train:
cd scripts
bash qwen_omni_sft.sh
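
Before launching training, a quick environment check can save a failed run. The commands below are a minimal sketch, assuming the installation above finished in the same conda environment; the pinned transformers commit and flash-attn are the pieces that most commonly end up missing or mismatched.

# Confirm the CUDA toolkit installed via conda is the one being picked up.
nvcc --version
echo $CUDA_HOME

# Confirm PyTorch sees the GPUs.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

# Confirm the pinned transformers development build and flash-attn import cleanly.
python -c "import transformers; print(transformers.__version__)"
python -c "import flash_attn; print(flash_attn.__version__)"

If any of these commands fail, re-run the corresponding installation step before executing qwen_omni_sft.sh.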
