Commit aaa09b8

Merge pull request #268 from tokk-nv/dev-openwebui
Add Open WebUI basic and RAG demo instruction

2 parents 8531c52 + 39c9c9d

20 files changed: +240 −2 lines

docs/css/extra.css

Lines changed: 5 additions & 1 deletion

```diff
@@ -8,7 +8,7 @@

 .md-icon {
   color: black;
-}
+}

 .md-header__title {
   color: #000000;
@@ -158,4 +158,8 @@
 .md-typeset .tabbed-content {
   padding-left: 1rem;
   padding-right: 1rem;
+}
+
+.shadow {
+  box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19);
 }
```
Binary image files added under docs/images/, including docs/images/openwebui_models.png (26 KB) and docs/images/openwebui_q_jon_power.png (103 KB); the remaining image previews and file names were not captured in this view.
docs/tutorial_openwebui.md

Lines changed: 235 additions & 1 deletion

````diff
@@ -46,4 +46,238 @@ sudo docker run -d --network=host \
     --name open-webui \
     --restart always \
     ghcr.io/open-webui/open-webui:main
-```
+```
````

The rest of the hunk adds the following new content.
### Case 2: Ollama container

If you have not installed Ollama natively, you can also run Ollama in a container using `jetson-containers`, after executing the above command to run the Open WebUI container.

```bash
jetson-containers run --name ollama $(autotag ollama)
```

> You need to have `jetson-containers` installed:
>
> ```bash
> git clone https://github.com/dusty-nv/jetson-containers
> bash jetson-containers/install.sh
> ```
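Open WebUI reaches Ollama through its REST API on port `11434` (the `OLLAMA_BASE_URL` used above). If the two containers cannot see each other, a quick reachability check helps; here is a minimal sketch using only the Python standard library (the host and port reflect the default local setup assumed in this tutorial):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ollama's default API port on the same host
print(port_open("127.0.0.1", 11434))
```

If this prints `False`, the Ollama container is not up yet, or it is bound to a different interface.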
### Case 3: Docker Compose to launch both at the same time

You can save the following YAML file and issue `docker compose up`.

=== "docker-compose.yml"

    ```yaml
    services:
      open-webui:
        image: ghcr.io/open-webui/open-webui:main
        container_name: open-webui
        network_mode: "host"
        environment:
          - OLLAMA_BASE_URL=http://127.0.0.1:11434
        volumes:
          - "${HOME}/open-webui:/app/backend/data"

      ollama:
        image: dustynv/ollama:r36.4.0
        container_name: ollama
        runtime: nvidia
        network_mode: "host"
        shm_size: "8g"
        volumes:
          - "/var/run/docker.sock:/var/run/docker.sock"
          - "${HOME}/data:/data"
          - "/etc/localtime:/etc/localtime:ro"
          - "/etc/timezone:/etc/timezone:ro"
          - "/run/jtop.sock:/run/jtop.sock"
    ```

Once you save the above YAML file in a directory, issue the following.

```bash
docker compose up
```
![](./images/docker-compose_openwebui.png){.shadow}

## Basic Usage

### Step 1. Access through Web browser

Open a Web browser on a PC connected to the same network as your Jetson, and access the following address.

```
http://<IP_ADDR>:8080
```
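`<IP_ADDR>` here is the Jetson's address on your local network (on the Jetson itself, `hostname -I` prints it). If you would rather find it programmatically, a small stdlib-only sketch (the `8.8.8.8` address is only used to pick a routable interface; a UDP connect sends no packets):

```python
import socket

def local_ip() -> str:
    """Best-effort guess of this machine's LAN IP address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # Connecting a UDP socket sends no traffic; it only asks the
        # kernel which local address would be used to route there.
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"
    finally:
        s.close()

print(f"http://{local_ip()}:8080")
```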
You will see an initial web page like this.

![](./images/openwebui_initial_screen.png){.shadow}

Click "**Get started :material-arrow-right-circle:**".
### Step 2. Complete the Account Creation Process

Follow the on-screen prompts to create an account.

![](./images/openwebui_get-started.png){.shadow}

!!! note

    Note that **all account information stays local**, so privacy is maintained. You can use a random email address and password for this step, as it is not verified or stored externally.

    However, make sure to remember the credentials, as you'll need them to log in for future sessions.

    For more details, refer to the provided [information link](https://docs.openwebui.com/faq/#q-why-am-i-asked-to-sign-up-where-are-my-data-being-sent-to) or the instructions on the screen.

Once you click "**Create Admin Account**", you will be presented with the release notes for the latest version of Open WebUI. To proceed, click the "**Okay, Let's Go!**" button.

![](./images/openwebui_release-note.png){.shadow}

Once everything is set up, you should see the following UI appear.

![](./images/openwebui_select-a-model.png){.shadow}
### Step 3. Download an SLM model

To download an SLM model, click the dropdown :material-chevron-down: next to the "Select a model" section. Type the name of the model you want to try in the "🔎 **Search a model**" field.

![](./images/openwebui_search-llama3-2.png){.shadow}

Once selected, you'll be prompted to download the model directly from Ollama.

![Downloading "llama3.2"](images/openwebui_download_llama3.2.png){.shadow}

After the download, select the newly downloaded model from the list. In our case, it was the Llama 3.2 3B model (`llama3.2:3b`).

![](./images/openwebui_select_llama3.2.png){.shadow}
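Aside from the UI, you can ask the local Ollama server directly which models it has downloaded: its REST API serves the installed model list at `GET /api/tags`. A stdlib-only sketch (the base URL matches the `OLLAMA_BASE_URL` used earlier):

```python
import json
from urllib.request import urlopen

def list_models(base_url: str = "http://127.0.0.1:11434") -> list[str]:
    """Return the names of models installed on an Ollama server."""
    with urlopen(f"{base_url}/api/tags") as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

# After the download in this step, the list should include "llama3.2:3b".
```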
!!! tip

    After downloading your SLM model, you have the option to disconnect your Jetson unit from the internet. This lets you validate that all subsequent interactions are powered exclusively by the local generative AI model, all running on this edge AI computer.

### Step 4. Start interacting with the model

You can now start interacting with the model, just like you would with any other LLM chatbot.

![](./images/openwebui_question-la-to-sf.png){.shadow}
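Each chat turn in the UI ultimately becomes a request to the Ollama backend; Ollama's `POST /api/generate` endpoint lets you script the same one-shot interaction outside the browser. A minimal stdlib-only sketch (the model name is the one downloaded in Step 3; the prompt is just an example):

```python
import json
from urllib.request import Request, urlopen

def generate(prompt: str, model: str = "llama3.2:3b",
             base_url: str = "http://127.0.0.1:11434") -> str:
    """One-shot, non-streaming completion from a local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt,
                          "stream": False}).encode()
    req = Request(f"{base_url}/api/generate", data=payload,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)["response"]

# Example: generate("How far is it from LA to San Francisco?")
```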
## Usage - RAG

### Step 1. Create a knowledge base

To enable RAG, you need to create a knowledge base that the LLM can reference to answer queries.

Follow these steps to create a knowledge base:

- Open the Open WebUI interface.
- Click on Workspace.
- Select Knowledge from the top menu.
- Click the "➕" icon to create a new knowledge base.

![](./images/openwebui_create-jetson-knowledge.gif){.shadow}
### Step 2. Add files to the knowledge base

After providing the necessary details and clicking "**Create Knowledge**", you will be redirected to the following page.

Here, you can add files or entire directories to build your knowledge base.

![](./images/openwebui_upload-files.png){.shadow}

Select a local PDF file or other document files to upload.

![](./images/openwebui_upload-select-file.png){.shadow}

![](./images/openwebui_uploaded-file-being-processed.png){.shadow}

![](./images/openwebui_file-added-successfully.png){.shadow}
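Behind the scenes, uploaded documents are split into overlapping chunks before being indexed (chunk size and overlap are typically adjustable in Open WebUI's document settings). A sketch of the idea, with illustrative sizes rather than Open WebUI's defaults:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks that overlap by `overlap` chars."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

# Overlap keeps sentences that straddle a boundary visible in both chunks.
print(chunk_text("abcdefghij", size=4, overlap=2))
# → ['abcd', 'cdef', 'efgh', 'ghij']
```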
### Step 3. Create a custom model with knowledge base access

To leverage the knowledge base for more accurate and context-aware responses, you need to create a model that can access this information. By linking the model to the knowledge base, the LLM can reference the stored data to provide more precise and relevant answers to user queries.
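Open WebUI performs this retrieval automatically, typically by embedding document chunks into a vector store and searching it at question time. Conceptually, "referencing the knowledge base" means scoring stored chunks against the query and prepending the best matches to the prompt; a toy sketch of that idea (plain word overlap stands in for real embeddings, and the chunks are hypothetical):

```python
def score(query: str, chunk: str) -> float:
    """Crude relevance: fraction of query words that appear in the chunk."""
    q, c = set(query.lower().split()), set(chunk.lower().split())
    return len(q & c) / len(q) if q else 0.0

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k chunks that best match the query."""
    return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:k]

# Hypothetical chunks from an uploaded Jetson document
knowledge = [
    "The Jetson Orin Nano Developer Kit supports multiple power modes.",
    "Open WebUI stores its data under /app/backend/data in the container.",
]
best = retrieve("which power modes does the jetson support", knowledge)[0]
prompt = f"Context: {best}\n\nQuestion: which power modes does the jetson support"
```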
Follow these steps to create a model with knowledge base access:

- Open the Open WebUI interface.
- Click on Workspace.
- Select Model from the top menu.
- Click the "➕" icon to create a new model.

![](./images/openwebui_create-model-with-kb.gif){.shadow}
### Step 4. Configure the model

After clicking the "➕" icon, the following screen will appear.

![](./images/openwebui_configure-model-with-kb.png){.shadow}

Here, you can configure the model with the following info:

| Field | What to do | Example |
| ----- | ---------- | ------- |
| **Model name** | Enter a name for your new model. | "**Jetson Expert**" |
| **Select a base model** :material-chevron-down: | Choose a base model from the list of models you've downloaded on your local Ollama server. | "`llama3.2:3b`" |
| **Select Knowledge** | Click the button and select the newly created knowledge base from the list. | "**Jetson Documents**" |

> ![](./images/openwebui_select-knowledge.png){.shadow}

Once you enter the necessary information, click the "**Save & Create**" button.

![](./images/openwebui_jetson-expert-configured.png){.shadow}

You will be taken back to the **Models** tab in the Workspace.

![](./images/openwebui_models.png){.shadow}
### Step 5. Chat with your custom model

You can now navigate to the chat window and select the newly created model.

Use this model to ask questions related to the uploaded documents. This will help verify whether the model can successfully retrieve information from the knowledge base.

![](./images/openwebui_q_jon_power.png){.shadow}
## Troubleshooting

### Open WebUI is not responding

Reload the page in your web browser.

### How to Create a New Account for Open WebUI (If You Forgot Your Password)

If you need to create a new account in Open WebUI (for example, if you forgot your password), follow these steps to reset the account:
#### Delete the existing Open WebUI data folder

This will remove all existing user account data and settings. Run the following command:

```bash
sudo rm -rf ${HOME}/open-webui
```

#### Re-run the Docker container

This will recreate a fresh instance of Open WebUI, allowing you to create a new account.

```bash
sudo docker run -d --network=host \
  -v ${HOME}/open-webui:/app/backend/data \
  -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```
#### How to Shut Down Open WebUI

To gracefully stop and remove the Open WebUI container, run the following commands:

```bash
sudo docker stop open-webui
sudo docker rm open-webui
```

## Optional Setup: MLC backend
