Fixes a typo LoraAdapterRequest -> LoRAAdapterRequest and reorganize vllm docker image to actually build local src #13125


Closed
wants to merge 12 commits

Conversation


@kirel kirel commented Apr 30, 2025

Fixes a typo LoraAdapterRequest -> LoRAAdapterRequest

@kirel kirel changed the title Kirel patch 1 Fixes a typo LoraAdapterRequest -> LoRAAdapterRequest and reorganize vllm docker image to actually build local src May 1, 2025
docker build \
  --build-arg http_proxy=.. \
  --build-arg https_proxy=.. \
  --build-arg no_proxy=.. \
  --rm --no-cache \
  -t intelanalytics/ipex-llm-serving-xpu:latest .

The no-cache flag becomes less relevant when building from local source: if the copied files change, the affected layers are rebuilt anyway.

COPY python/llm /llm/llm
# Install ipex-llm from the copied local source directory, using the extra index for dependencies
RUN set -eux && \
pip install --pre /llm/llm[xpu_2.6] --extra-index-url https://download.pytorch.org/whl/xpu

Most important line: install from the local source tree instead of the package registry.
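For contrast, a sketch of the two install forms (the registry package name `ipex-llm` is an assumption; the local path `/llm/llm` comes from the Dockerfile snippet above):

```dockerfile
# Before: install the published package from the registry
# (package name ipex-llm assumed; not taken from this PR's diff)
# RUN pip install --pre ipex-llm[xpu_2.6] --extra-index-url https://download.pytorch.org/whl/xpu

# After (this PR): install from the source tree copied into the image,
# so local changes under python/llm end up in the built image
RUN pip install --pre /llm/llm[xpu_2.6] --extra-index-url https://download.pytorch.org/whl/xpu
```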

kirel commented May 7, 2025

Closing as per discussion in #13124

@kirel kirel closed this May 7, 2025