Description
Fine-tuning multiple LoRA adapters on a single GPU can run into OOM issues. Parameters such as batch_size and cutoff_len need to be adjusted carefully, but even then OOM cannot be completely avoided. Would it be possible to run a tool beforehand that suggests a reference (or optimal) configuration for users based on their data? A rough sketch of what I have in mind is shown below.
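
For illustration only, something like the following heuristic could work: estimate peak memory from model size, batch_size, and cutoff_len, then pick the largest batch that fits. The function names (`estimate_lora_memory_gb`, `suggest_config`) and the constants are purely hypothetical, not from this repo's code.

```python
def estimate_lora_memory_gb(
    num_params_b: float,       # base model size in billions of parameters
    batch_size: int,
    cutoff_len: int,
    hidden_size: int = 4096,   # assumed architecture dimensions
    num_layers: int = 32,
    bytes_per_param: int = 2,  # bf16/fp16 weights
) -> float:
    """Very rough upper-bound estimate of peak GPU memory for LoRA fine-tuning."""
    weights_gb = num_params_b * bytes_per_param  # frozen base weights
    # Activations scale roughly with batch_size * cutoff_len * hidden_size * num_layers.
    activations_gb = (
        batch_size * cutoff_len * hidden_size * num_layers * bytes_per_param
    ) / 1024**3
    overhead_gb = 2.0  # CUDA context, LoRA optimizer states, fragmentation, etc.
    return weights_gb + activations_gb + overhead_gb


def suggest_config(num_params_b: float, gpu_memory_gb: float, cutoff_len: int = 1024):
    """Return the largest batch_size whose estimate fits within ~90% of GPU memory."""
    for batch_size in (32, 16, 8, 4, 2, 1):
        if estimate_lora_memory_gb(num_params_b, batch_size, cutoff_len) <= gpu_memory_gb * 0.9:
            return {"per_device_train_batch_size": batch_size, "cutoff_len": cutoff_len}
    # Nothing fits: fall back to the smallest batch and a shorter sequence length.
    return {"per_device_train_batch_size": 1, "cutoff_len": cutoff_len // 2}


if __name__ == "__main__":
    # Example: a 7B model on a 24 GB GPU.
    print(suggest_config(num_params_b=7, gpu_memory_gb=24))
```

In practice such a tool would also need to look at the user's dataset (e.g. actual token-length distribution rather than a fixed cutoff_len), but even a coarse estimate like this would give users a safe starting configuration instead of trial-and-error with OOM.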