Description:
A Retrieval-Augmented Generation (RAG) system that uses Ollama to enable intelligent document understanding. It employs two separate models — one for embedding documents and another for semantic question answering — to provide contextual responses based on the ingested content.
Features:
- Document embedding for efficient vector search
- Semantic question answering grounded in document context
- Node.js-based frontend for interaction
- Python backend to orchestrate retrieval and generation
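The retrieval step at the heart of this design can be sketched in a few lines of Python: each document chunk is stored alongside its embedding vector, and a query is answered by ranking chunks by similarity to the query's embedding (cosine similarity here). The vectors below are toy values standing in for real nomic-embed-text output, and the chunk texts are illustrative.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, index, k=2):
    """Return the k chunks most similar to the query vector.

    `index` is a list of (chunk_text, embedding) pairs."""
    ranked = sorted(index,
                    key=lambda item: cosine_similarity(query_vec, item[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy 3-dimensional embeddings; a real index would hold
# nomic-embed-text vectors (hundreds of dimensions).
index = [
    ("Ollama runs models locally.", [0.9, 0.1, 0.0]),
    ("RAG combines retrieval and generation.", [0.1, 0.9, 0.1]),
    ("The frontend is built with Node.js.", [0.0, 0.2, 0.9]),
]
query_vec = [0.85, 0.15, 0.05]  # pretend embedding of "What does Ollama do?"
print(top_k(query_vec, index, k=1))  # the chunk about Ollama ranks first
```

The retrieved chunks are then passed to the answering model as context, which is what makes the responses "contextual" rather than generic.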
Installation:
Follow the installation instructions for your operating system:
https://ollama.com/download
Open a terminal and pull the necessary models:
ollama pull nomic-embed-text # For document embedding
ollama pull llama3 # For question answering
Start the Node.js frontend:
cd app
npm install
npm run dev
Set up and run the Python backend:
python3 -m venv .venv
.venv\Scripts\activate # On Windows; on macOS/Linux use: source .venv/bin/activate
pip install -r requirements.txt
python main.py
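With both models pulled and the Ollama server running, the backend's orchestration can be sketched as below. The endpoints and payload shapes follow Ollama's HTTP API (`/api/embeddings` and `/api/generate` on the default port 11434); the `build_prompt` helper and its prompt format are illustrative assumptions, not part of the actual codebase.

```python
import json
import urllib.request

OLLAMA = "http://localhost:11434"  # default Ollama server address

def _post(path, payload):
    """POST JSON to the local Ollama server and decode the JSON reply."""
    req = urllib.request.Request(
        OLLAMA + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def embed(text):
    """Embed text with nomic-embed-text via /api/embeddings."""
    return _post("/api/embeddings",
                 {"model": "nomic-embed-text", "prompt": text})["embedding"]

def build_prompt(chunks, question):
    """Assemble a grounded prompt from retrieved chunks (illustrative format)."""
    context = "\n".join(f"- {c}" for c in chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

def answer(chunks, question):
    """Generate an answer with llama3 via /api/generate (non-streaming)."""
    payload = {"model": "llama3",
               "prompt": build_prompt(chunks, question),
               "stream": False}
    return _post("/api/generate", payload)["response"]

if __name__ == "__main__":
    chunks = ["Ollama runs models locally."]
    print(answer(chunks, "What does Ollama do?"))
```

Keeping embedding and generation behind two small functions mirrors the two-model split described above: `nomic-embed-text` only ever sees raw text to vectorize, while `llama3` only ever sees the assembled prompt.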