This is an app that combines Elasticsearch, Langchain, and a number of different LLMs (e.g., OpenAI, Ollama) to create a chatbot experience powered by ELSER (Elastic Learned Sparse EncodeR) over Vietnam National University data.
- Framework: Langchain
- Search engine + database: Elasticsearch
- Frontend:
  - React with TypeScript and JSX
  - HTML
  - CSS
  - JavaScript
  - SVG
- Backend:
- Flask
- Python
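
At a high level, a question is answered by running an ELSER semantic search in Elasticsearch and handing the retrieved passages to the selected LLM. Below is a minimal sketch of that flow, assuming an index named `vnu-data` with an ELSER-populated `ml.tokens` field; the names and wiring are illustrative, not the repo's actual code:

```python
# Illustrative RAG flow: ELSER retrieval + LLM answer.
# Index name "vnu-data" and field "ml.tokens" are assumptions.
from elasticsearch import Elasticsearch
from langchain_community.llms import Ollama

es = Elasticsearch("http://localhost:9200")
llm = Ollama(model="gemma3", base_url="http://localhost:11434")

def answer(question: str) -> str:
    # ELSER semantic search: a text_expansion query against the tokens
    # field produced by the ELSER ingest pipeline.
    hits = es.search(
        index="vnu-data",
        query={
            "text_expansion": {
                "ml.tokens": {
                    "model_id": ".elser_model_2",
                    "model_text": question,
                }
            }
        },
        size=3,
    )["hits"]["hits"]
    context = "\n".join(h["_source"].get("text", "") for h in hits)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return llm.invoke(prompt)
```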
- Copy `env.example` to `.env` and fill in the values noted inside.
- Install all libraries in `requirements.txt` using `pip install -r requirements.txt`.
- Install Node.js and yarn: https://nodejs.org/en
- Create an empty folder named `data_json`, then `cd data` and run `python craw_data.py` to crawl data into the `data_json` folder (a sketch of what the crawler produces follows this list).
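
For context, here is a minimal sketch of the kind of output `craw_data.py` is expected to produce: one JSON file per crawled page in `data_json/`. The seed URL, JSON schema, and use of `requests`/`beautifulsoup4` are assumptions, not the repo's actual crawler:

```python
# Illustrative only: fetch pages and write one JSON file per page
# into data_json/, mirroring what craw_data.py is expected to produce.
import json
import pathlib
import requests
from bs4 import BeautifulSoup

OUT_DIR = pathlib.Path("data_json")
OUT_DIR.mkdir(exist_ok=True)

urls = ["https://vnu.edu.vn/home/"]  # hypothetical seed URL

for i, url in enumerate(urls):
    html = requests.get(url, timeout=30).text
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)
    (OUT_DIR / f"page_{i}.json").write_text(
        json.dumps({"url": url, "text": text}, ensure_ascii=False),
        encoding="utf-8",
    )
```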
1. Install and Run Ollama: Ensure you have Ollama installed and the service is running on your machine. You can download it from ollama.com.
2. Pull the Gemma3 Model: Open your terminal and run the following command to download the `gemma3` model: `ollama pull gemma3`
3. Set LLM Type in `.env` (for local non-Docker execution): In your `.env` file (copied from `env.example`), ensure `LLM_TYPE` is set to `ollama`. If you are running Ollama on a different host or port, also set `OLLAMA_BASE_URL`:

   ```
   LLM_TYPE=ollama
   OLLAMA_BASE_URL=http://localhost:11434  # Default if Ollama runs locally
   # Other environment variables...
   ```
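
   For orientation, the backend presumably switches on `LLM_TYPE` when constructing its LLM client. A minimal sketch of such a dispatch, using Langchain's integrations (the function name and fallback branch are illustrative, not the repo's actual code):

   ```python
   # Illustrative dispatch on LLM_TYPE; the repo's actual wiring may differ.
   import os

   def get_llm_client():
       llm_type = os.environ.get("LLM_TYPE", "openai")
       if llm_type == "ollama":
           from langchain_community.llms import Ollama
           return Ollama(
               model="gemma3",
               base_url=os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434"),
           )
       from langchain_openai import ChatOpenAI  # needs OPENAI_API_KEY set
       return ChatOpenAI()
   ```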
4. Update Model in Code: Modify the file `api/llm_ollama.py` to use `gemma3` as the default model. Change the `model` parameter in the `stream` and `ask_ollama` functions:

   ```python
   # In api/llm_ollama.py
   # ...
   class OllamaResponse:
       # ...
       @staticmethod
       def stream(prompt, model="gemma3"):  # Changed from "llama3" or other model
           # ... existing code ...

   def ask_ollama(question, model="gemma3"):  # Changed from "llama3" or other model
       # ... existing code ...
   ```
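
Outside the app, you can sanity-check the model directly: Ollama serves a plain HTTP API on port 11434, and `/api/generate` accepts a one-shot, non-streaming request. This check is independent of the repo's code:

```python
# Standalone check against Ollama's /api/generate endpoint.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "gemma3", "prompt": "Say hello in Vietnamese.", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```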
Note for Docker Users:

If you are running the application using `docker-compose up --build`:

- The `LLM_TYPE` is already set to `ollama` in the `docker-compose.yml` and `Dockerfile`.
- Ensure Ollama is running on your host machine.
- Pull the `gemma3` model on your host machine using `ollama pull gemma3`.
- The application inside Docker will connect to Ollama on your host via `http://host.docker.internal:11434` (as configured in `docker-compose.yml`; a quick connectivity check follows this list).
- You still need to make the code change in `api/llm_ollama.py` to use `gemma3` as shown in step 4 above. After making the change, rebuild your Docker image by running `docker-compose up --build`.
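
To verify that the container can actually reach Ollama on the host, you can query Ollama's `/api/tags` endpoint (which lists pulled models) from inside the container. A minimal check:

```python
# Run inside the app container to verify Ollama is reachable on the host.
import requests

resp = requests.get("http://host.docker.internal:11434/api/tags", timeout=10)
resp.raise_for_status()
models = [m["name"] for m in resp.json().get("models", [])]
print("Ollama reachable; pulled models:", models)
```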
1. Install frontend dependencies:
   - `cd frontend`
   - `npm install -g yarn`
   - `yarn install`
2. Run:
   - Terminal 1:
     - `flask create-index` (if you have already run this command, you do not need to run it again; a sketch of what such a command might do follows below)
     - `flask run --port=3001`
   - Terminal 2:
     - `cd frontend`
     - `yarn start`
The app will be available at http://localhost:3001.
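
`flask create-index` is a custom CLI command defined by the repo. As a rough illustration of how a Flask CLI command can create an Elasticsearch index (the index name and mapping below are assumptions, not the repo's actual schema):

```python
# Illustrative Flask CLI command; the repo's real create-index logic,
# index name, and mapping will differ.
from elasticsearch import Elasticsearch
from flask import Flask

app = Flask(__name__)
es = Elasticsearch("http://localhost:9200")

@app.cli.command("create-index")
def create_index():
    """Create the search index if it does not already exist."""
    if not es.indices.exists(index="vnu-data"):
        es.indices.create(
            index="vnu-data",
            mappings={"properties": {"text": {"type": "text"}}},
        )
        print("Index 'vnu-data' created.")
    else:
        print("Index 'vnu-data' already exists.")
```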
