This repository contains a full Q&A pipeline built with the LangChain framework, Qdrant as the vector database, and AWS Lambda with API Gateway. The data consists of research papers that are loaded into the vector database, and the AWS Lambda function processes each request using the retrieval and generation logic. The pipeline can therefore be used with any research paper from arXiv.
This Medium article contains the complete instructions for the project setup.
The main steps taken to build the RAG pipeline can be summarized as follows (a short code sketch follows the list):

- Data Ingestion: load the data from https://arxiv.org
- Indexing: RecursiveCharacterTextSplitter to split the documents into chunks
- Vector Store: Qdrant, inserting the chunks together with their metadata
- QA Chain Retrieval: RetrievalQA
- AWS Lambda and API Gateway: process the requests
- Streamlit: UI
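Putting these steps together, a minimal sketch of the pipeline could look like the following. It assumes the classic LangChain API with OpenAI embeddings/LLM and a hosted Qdrant instance; the arXiv ID, collection name, chunking parameters, and environment variables are placeholders, not the repository's exact code.

```python
import os

from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import ArxivLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Qdrant

# 1. Data ingestion: load a paper from arXiv by its ID (placeholder ID)
docs = ArxivLoader(query="1706.03762", load_max_docs=1).load()

# 2. Indexing: split the documents into overlapping chunks
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

# 3. Vector store: embed the chunks and insert them (with their metadata) into Qdrant
vector_store = Qdrant.from_documents(
    chunks,
    OpenAIEmbeddings(),
    url=os.environ["QDRANT_URL"],
    api_key=os.environ["QDRANT_API_KEY"],
    collection_name="arxiv-papers",
)

# 4. QA chain retrieval: answer questions over the indexed chunks
qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0),
    chain_type="stuff",
    retriever=vector_store.as_retriever(search_kwargs={"k": 3}),
)

print(qa_chain.run("What is the attention mechanism?"))
```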
Feel free to ⭐ and clone this repo 😉
The project has been structured with the following files:
- `Dockerfile`: Dockerfile for the Lambda image
- `.env_sample`: sample environment variables
- `.gitattributes`: gitattributes
- `Makefile`: install requirements, formatting, linting, and clean up
- `pyproject.toml`: linting and formatting using ruff
- `requirements.txt`: project requirements
- `create_vector_store.py`: script to create the collection in Qdrant
- `rag_app.py`: RAG logic
- `lambda_function.py`: lambda function (see the sketch below)
- `test_lambda.py`: script to test the lambda function
- `build_and_deploy.sh`: script to create and push the Docker image and deploy it as an AWS Lambda function
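For orientation, a Lambda handler for this kind of retrieval pipeline typically parses the incoming API Gateway event, runs the QA chain, and returns a JSON response. The sketch below is only an assumption about that shape: the `get_qa_chain` helper and the `query` field are hypothetical, not the repository's exact code.

```python
import json

# Hypothetical helper assumed to build the RetrievalQA chain in rag_app.py;
# the real module may expose a different function name.
from rag_app import get_qa_chain

# Build the chain once per container so warm invocations can reuse it.
qa_chain = get_qa_chain()


def lambda_handler(event, context):
    # API Gateway proxy integrations deliver the request body as a JSON string.
    body = json.loads(event.get("body") or "{}")
    query = body.get("query", "")

    answer = qa_chain.run(query)

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"answer": answer}),
    }
```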
The Python version used for this project is 3.10. You can follow along with the Medium article.
- Clone the repo (or download it as a zip file):

  ```bash
  git clone https://github.com/benitomartin/rag-aws-qdrant.git
  ```
- Create the virtual environment named `main-env` using Conda with Python version 3.10:

  ```bash
  conda create -n main-env python=3.10
  conda activate main-env
  ```
- Install the project dependencies included in `requirements.txt`, either directly or through the `Makefile`:

  ```bash
  pip install -r requirements.txt
  # or
  make install
  ```
- Create an AWS account, credentials, and the proper policies with full access to ECR and Lambda so the project works correctly. Make sure to configure the appropriate credentials to interact with AWS services.
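  To verify that the credentials are picked up (assuming `boto3` is installed), you can check which identity they resolve to:

  ```python
  import boto3

  # Prints the account ID and IAM identity behind the configured credentials.
  identity = boto3.client("sts").get_caller_identity()
  print(identity["Account"], identity["Arn"])
  ```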
- Make sure the `.env` file is complete and run the `build_and_deploy.sh` script:

  ```bash
  chmod +x build_and_deploy.sh
  ./build_and_deploy.sh
  ```
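  Once the image is deployed, you can invoke the function directly to check that it responds (similar in spirit to `test_lambda.py`; the payload shape and region below are assumptions):

  ```python
  import json

  import boto3

  # The region is a placeholder; use the one the function was deployed to.
  client = boto3.client("lambda", region_name="eu-central-1")

  response = client.invoke(
      FunctionName="rag-llm-lambda",
      Payload=json.dumps({"body": json.dumps({"query": "What is the attention mechanism?"})}),
  )
  print(json.loads(response["Payload"].read()))
  ```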
- If you get a timeout and/or memory error, you can increase the limits:

  ```bash
  aws lambda update-function-configuration \
      --function-name rag-llm-lambda \
      --timeout 300 \
      --memory-size 1024
  ```
- Create an API endpoint as per the Medium article description.
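  Once the endpoint exists, you can sanity-check it from Python; the invoke URL and the `query` field below are placeholders for your own values:

  ```python
  import requests

  # Replace with the invoke URL shown by API Gateway for your stage and route.
  API_URL = "https://<api-id>.execute-api.<region>.amazonaws.com/<stage>"

  response = requests.post(API_URL, json={"query": "What is the attention mechanism?"}, timeout=300)
  response.raise_for_status()
  print(response.json())
  ```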
- Run the Streamlit app:

  ```bash
  streamlit run streamlit_app.py
  ```
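  For orientation, a minimal Streamlit UI for this API could look like the sketch below; the endpoint URL and the `answer` response field are assumptions, not the repository's exact code.

  ```python
  import requests
  import streamlit as st

  # Placeholder for your API Gateway invoke URL.
  API_URL = "https://<api-id>.execute-api.<region>.amazonaws.com/<stage>"

  st.title("RAG over arXiv papers")

  query = st.text_input("Ask a question about the indexed papers")

  if st.button("Ask") and query:
      with st.spinner("Retrieving..."):
          response = requests.post(API_URL, json={"query": query}, timeout=300)
      if response.ok:
          st.write(response.json().get("answer", response.text))
      else:
          st.error(f"Request failed with status {response.status_code}")
  ```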