
Commit 1e98da0

docs: placeholder for model downloads folder (#446)
1 parent: d094cc3

File tree

server/storage/models/.gitignore
server/storage/models/README.md
server/storage/models/downloaded/.placeholder

3 files changed: +8 -2 lines changed

server/storage/models/.gitignore

+2 -1

@@ -1,2 +1,3 @@
 Xenova
-downloaded/*
+downloaded/*
+!downloaded/.placeholder
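
The negation rule above is what keeps the otherwise-empty `downloaded/` folder in the repository: everything inside it is ignored, but the tracked `.placeholder` file survives a fresh clone. A minimal sketch (not part of this commit; the path is taken from the diff) of recreating that layout locally:

```ts
// Sketch only: recreate the ignored downloads folder and its tracked
// placeholder, e.g. after wiping the local storage directory.
import * as fs from "node:fs";
import * as path from "node:path";

const downloaded = path.resolve("server/storage/models/downloaded");
fs.mkdirSync(downloaded, { recursive: true }); // contents are ignored via `downloaded/*`
fs.writeFileSync(
  path.join(downloaded, ".placeholder"),
  "",             // append nothing so an existing placeholder is left untouched
  { flag: "a" }   // the file itself stays tracked thanks to `!downloaded/.placeholder`
);
```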

server/storage/models/README.md

+5 -1

@@ -30,4 +30,8 @@ If you would like to use a local Llama compatible LLM model for chatting you can
 > If running in Docker you should be running the container to a mounted storage location on the host machine so you
 > can update the storage files directly without having to re-download or re-build your docker container. [See suggested Docker config](../../../README.md#recommended-usage-with-docker-easy)
 
-All local models you want to have available for LLM selection should be placed in the `storage/models/downloaded` folder. Only `.gguf` files will be allowed to be selected from the UI.
+> [!NOTE]
+> `/server/storage/models/downloaded` is the default location that your model files should be at.
+> Your storage directory may differ if you changed the STORAGE_DIR environment variable.
+
+All local models you want to have available for LLM selection should be placed in the `server/storage/models/downloaded` folder. Only `.gguf` files will be allowed to be selected from the UI.
server/storage/models/downloaded/.placeholder

+1 -0

@@ -0,0 +1 @@
+All your .GGUF model file downloads you want to use for chatting should go into this folder.
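
Taken together, the README note and the placeholder describe where the server looks for local models. A hedged sketch of resolving that folder and listing the selectable `.gguf` files (illustrative only; the `STORAGE_DIR` handling and filename filter here are assumptions based on the wording above, not the project's actual code):

```ts
// Sketch only: resolve the models folder the way the note above describes
// (STORAGE_DIR overrides the default server/storage location), then list
// the .gguf files that the UI selection would be limited to.
import * as fs from "node:fs";
import * as path from "node:path";

const storageDir = process.env.STORAGE_DIR ?? path.resolve("server/storage");
const downloadedDir = path.join(storageDir, "models", "downloaded");

const selectableModels = fs
  .readdirSync(downloadedDir)
  .filter((file) => file.toLowerCase().endsWith(".gguf"));

console.log(selectableModels); // e.g. [ "llama-2-7b.Q4_K_M.gguf" ]
```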
