
[Bug]: Specifying a huggingface id to "path" in lora module only partially works #20612

Open
@guicho271828


Your current environment

The mistake is obvious from the source code.

🐛 Describe the bug

When you provide a Hugging Face ID of a LoRA adapter in the `path` field of the `vllm serve --lora-modules` argument, the adapter is successfully downloaded and loaded. This is consistent with Hugging Face behavior (https://huggingface.co/docs/peft/v0.16.0/en/package_reference/peft_model#peft.PeftModel.from_pretrained.model_id): the path and the model ID are interchangeable.

However, upon query, vLLM tries to find `[path]/adapter_config.json` and fails, because `[path]` is a Hugging Face ID rather than a local directory. Worse, this `FileNotFoundError` is completely hidden and reported as a 500 Internal Server Error! This bug was discovered thanks to #20610, which should be merged separately as a general quality-of-life improvement.
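
For concreteness, a query along these lines is what surfaces the hidden error as a 500 response (a minimal sketch; the adapter name `my-lora` and the server URL are placeholders):

```python
# Hypothetical reproduction: query the served LoRA adapter by name.
# "my-lora" is whatever name was given in --lora-modules; URL is the default.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Selecting the LoRA module as the model triggers the adapter load,
# which then fails with the hidden FileNotFoundError.
response = client.completions.create(model="my-lora", prompt="Hello")
```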

The correct fix is to detect this case and convert the ID via `snapshot_path = huggingface_hub.snapshot_download(path)`.
Several files reference such a path directly, including https://github.com/vllm-project/vllm/blob/main/vllm/lora/peft_helper.py#L99; see the sketch below.
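
A minimal sketch of that conversion, assuming a hypothetical helper `resolve_adapter_path` is called before `adapter_config.json` is read (e.g. in `peft_helper.py`):

```python
import os

import huggingface_hub


def resolve_adapter_path(path: str) -> str:
    """Return a local directory containing the adapter.

    If `path` is not an existing local directory, treat it as a
    Hugging Face Hub repo ID and download a local snapshot of it.
    (Hypothetical helper; the name and call site are assumptions.)
    """
    if os.path.isdir(path):
        return path  # already a local checkout; use as-is
    return huggingface_hub.snapshot_download(path)
```

Callers would then open `os.path.join(resolve_adapter_path(path), "adapter_config.json")` as before, so a Hub ID resolves to the downloaded snapshot instead of raising `FileNotFoundError`.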

Before submitting a new issue...

  • Make sure you have already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
