server/storage/models/README.md
If you would like to use a local Llama compatible LLM model for chatting, you can …
> If running in Docker, you should mount the container's storage location to a directory on the host machine so you
> can update the storage files directly without having to re-download or rebuild your Docker container. [See suggested Docker config](../../../README.md#recommended-usage-with-docker-easy)
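The note above can be satisfied with a bind mount. A minimal sketch follows; the image name and the container-side storage path are placeholders, not the project's confirmed values — see the linked Docker config for the actual recommendation:

```
# Bind-mount a host directory over the container's storage path so model
# files survive container rebuilds. <anythingllm-image> and /app/server/storage
# are placeholders -- substitute the values from the suggested Docker config.
docker run -d \
  -v /path/on/host/storage:/app/server/storage \
  <anythingllm-image>
```

With the mount in place, dropping a new `.gguf` file into the host directory makes it visible inside the running container without a rebuild.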
> [!NOTE]
> `/server/storage/models/downloaded` is the default location where your model files should be placed.
> Your storage directory may differ if you changed the `STORAGE_DIR` environment variable.
All local models you want to make available for LLM selection should be placed in the `server/storage/models/downloaded` folder. Only `.gguf` files can be selected from the UI.
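The selection rule above can be sketched as a simple extension filter. This is an illustrative helper, not the project's actual code; the function name and the demo model filename are assumptions:

```python
from pathlib import Path
import tempfile

def list_selectable_models(models_dir):
    """Return the .gguf files in a directory, mirroring the UI rule
    that only .gguf model files are offered for selection.
    (Illustrative helper, not AnythingLLM's actual implementation.)"""
    return sorted(p.name for p in Path(models_dir).glob("*.gguf"))

# Example: a throwaway directory standing in for server/storage/models/downloaded
demo = Path(tempfile.mkdtemp())
(demo / "llama-2-7b.Q4_K_M.gguf").touch()  # hypothetical model file
(demo / "notes.txt").touch()               # ignored: wrong extension
print(list_selectable_models(demo))        # only the .gguf file is listed
```

Non-`.gguf` files in the folder are harmless but simply never appear in the picker.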