This repository was archived by the owner on May 10, 2024. It is now read-only.

Conversation

@mohdwasim

Fix typo in Docker run command

Corrected the Docker run command to remove the typo in the --rm flag placement. The corrected command now properly starts the huggingface-embedding-server container with the specified image and parameters.

@vercel

vercel bot commented Apr 6, 2024



```diff
-docker run -p 8001:80 -d -rm --name huggingface-embedding-server ghcr.io/huggingface/text-embeddings-inference:cpu-0.3.0 --model-id BAAI/bge-small-en-v1.5 --revision -main
+docker run -p 8001:80 -d --rm --name huggingface-embedding-server ghcr.io/huggingface/text-embeddings-inference:cpu-0.3.0 --model-id BAAI/bge-small-en-v1.5 --revision -main
```
Contributor

@mohdwasim, thanks for this. Can we instead use the following (from the latest version of the hf repo)?

docker run --platform linux/amd64 -d --rm -p 8080:80 -v ./data:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.2 --model-id "BAAI/bge-large-en-v1.5" --revision "refs/pr/5"

The reason we add --platform is so that the image works on macOS. We also add a volume mount to cache the model locally so it doesn't have to be re-downloaded on every run.

