Whether for research, customer support, or personal projects, this repository offers a starting point for building a scalable and practical document Q&A solution using LLMs.
Setup
- Clone the repository:
git clone https://github.com/Al04ni/LLM-Files-QA.git
cd LLM-Files-QA
- Run the setup script:
./setup.sh
- Activate the virtual environment:
- Windows:
.\venv\Scripts\activate
- macOS/Linux:
source venv/bin/activate
- Run the application:
streamlit run app.py
Feel free to contribute, suggest improvements, or share your experiences with this repository as we continue to make knowledge more accessible and intuitive.
Caution
This project is still a work in progress: we are looking for a good way to let the LLM read the content of uploaded files so it can give useful insights. Stay tuned 😄, we are still working on it behind the scenes.
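The core idea being worked toward is simple: read the uploaded file's text and pass it to the LLM as context alongside the user's question. As a rough illustration only (not the repository's actual app.py), a minimal Streamlit sketch might look like this; it assumes plain-text uploads and an OpenAI-style client, and the model name is a placeholder:

```python
# Hypothetical sketch, NOT this repository's implementation: feed uploaded
# file text to an LLM as context for a question. Assumes plain-text files
# and an OpenAI-style client; swap in whatever provider the project uses.
import streamlit as st
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

st.title("Document Q&A (sketch)")

uploaded = st.file_uploader("Upload a text file", type=["txt", "md"])
question = st.text_input("Ask a question about the document")

if uploaded and question:
    document = uploaded.read().decode("utf-8", errors="ignore")

    # Keep the prompt small; a real implementation would chunk and retrieve.
    prompt = (
        "Answer the question using only the document below.\n\n"
        f"Document:\n{document[:8000]}\n\nQuestion: {question}"
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    st.write(response.choices[0].message.content)
```

For larger documents such as PDFs, the usual next step is to split the text into chunks, embed them, and retrieve only the most relevant chunks for each question instead of sending the whole file to the model.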
Happy coding!