
How do I run on multiple GPUs? It defaults to one card and reports insufficient GPU memory #208

@sf9ehf9fe

Description


System Info

Linux

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts and tasks

Reproduction

Loading the model with

```python
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=TORCH_TYPE,
    trust_remote_code=True,
    low_cpu_mem_usage=True,
).eval()
```

raises:

```
OutOfMemoryError: CUDA out of memory. Tried to allocate 1002.00 MiB (GPU 0; 23.69 GiB total capacity; 10.78 GiB already allocated; 10.94 MiB free; 10.79 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
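Separately from the multi-GPU question, the error message itself suggests setting `max_split_size_mb` to reduce allocator fragmentation. A minimal sketch of that environment variable, set before launching the script; the value 128 is an arbitrary example, not a recommendation from this issue:

```shell
# Allocator hint named in the CUDA OOM message; tune the value for your workload.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
echo "$PYTORCH_CUDA_ALLOC_CONF"
```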

Expected behavior

How can I make it run across multiple GPU cards?
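A common way to shard a checkpoint across every visible GPU, instead of loading it all onto GPU 0, is to pass `device_map="auto"` to `from_pretrained`. This is a sketch, not confirmed by the maintainers of this repo: it assumes the `accelerate` package is installed alongside `transformers`, and `load_sharded` is a hypothetical helper name; `MODEL_PATH` is the same placeholder as in the issue.

```python
import torch
from transformers import AutoModelForCausalLM


def load_sharded(model_path: str, torch_dtype=torch.bfloat16):
    """Load a causal LM split across all visible GPUs."""
    return AutoModelForCausalLM.from_pretrained(
        model_path,
        torch_dtype=torch_dtype,
        trust_remote_code=True,
        low_cpu_mem_usage=True,
        # device_map="auto" asks Accelerate to place layers on GPU 0,
        # GPU 1, ... (spilling to CPU/disk if they still don't fit).
        device_map="auto",
    ).eval()
```

Usage would be `model = load_sharded(MODEL_PATH)`. You can additionally restrict which cards are used by setting `CUDA_VISIBLE_DEVICES=0,1` (for example) before launching the script.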
