
Support automatic parameter configuration #241

Open
@LianxinGao

Description

Fine-tuning multiple LoRA adapters on a single GPU can run into OOM errors. Parameters such as batch_size and cutoff_len must be adjusted carefully, but even then OOM cannot be completely avoided. Would it be possible to run a tool first that suggests a reference (or best) configuration for users based on their data?
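One possible shape for such a tool, sketched below as a minimal heuristic: given the GPU memory budget, the model's resident size, and cutoff_len, estimate the largest batch_size that should fit. The function name `suggest_batch_size`, the per-token activation cost, and the reserve headroom are all illustrative assumptions, not measured values from this project.

```python
# Hypothetical sketch of an automatic-configuration helper.
# The activation cost per token and the reserved headroom are
# assumed constants; a real tool would calibrate them per model.

def suggest_batch_size(
    gpu_mem_gb: float,
    model_mem_gb: float,
    cutoff_len: int,
    act_bytes_per_token: int = 2_000_000,  # assumed activation + optimizer cost per token
    reserve_gb: float = 1.0,               # headroom for CUDA context and fragmentation
) -> int:
    """Return the largest batch_size whose estimated footprint fits the budget.

    Returns 0 when even batch_size=1 is predicted to OOM, signaling that
    cutoff_len (or the model footprint) must be reduced instead.
    """
    budget_bytes = (gpu_mem_gb - model_mem_gb - reserve_gb) * 1024**3
    per_sample_bytes = cutoff_len * act_bytes_per_token
    if budget_bytes < per_sample_bytes:
        return 0
    return int(budget_bytes // per_sample_bytes)


# Example: a 24 GB GPU holding a ~14 GB model, sequences of 512 tokens.
print(suggest_batch_size(24, 14, 512))    # a single-digit batch fits
print(suggest_batch_size(16, 14, 4096))   # 0: shrink cutoff_len first
```

In practice the tool would refine the estimate by running one probe step and reading `torch.cuda.max_memory_allocated()`, then bisecting batch_size; the closed-form heuristic above is only a starting point.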

Metadata

Assignees: No one assigned

Labels: enhancement (New feature or request)
