This repo contains the code for setting up and running the backend for the AI chat feature. It contains two pieces:

- `scraper`: Fetches and vectorizes documentation pages (and any other configured content).
- `server`: Server process that runs users' queries and has an LLM answer them.
Please see their respective READMEs for the specifics of setting up and running them. Both services require a connection to a PostgreSQL database with pgvector: `scraper` requires write access to an embeddings table, while `server` requires read access to that table and write access to an analytics table. `server` also requires an OpenAI API key.

Both services read their configuration constants from an `.env` file, loaded from either their own folder or the root of the repo, so a single `.env` file at the root can be shared by both.
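Before either service can use the database, the pgvector extension must be enabled once per database. This README does not spell out that step, so the snippet below is a sketch; `$DATABASE_URL` is a placeholder for your actual connection string:

```shell
# One-time setup: enable pgvector on the target database.
# Requires sufficient privileges; $DATABASE_URL is an illustrative placeholder.
psql "$DATABASE_URL" -c 'CREATE EXTENSION IF NOT EXISTS vector;'
```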
Create the env file:

```shell
cp docker/.env.docker .env
```

Then open the `.env` file and fill in the `OPENAI_API_KEY` value.
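For reference, a minimal `.env` might look like the following. `OPENAI_API_KEY` is the only variable this README names; the commented-out entries are illustrative assumptions, so check `docker/.env.docker` for the real variable names:

```shell
# Required by server (named in this README).
OPENAI_API_KEY=sk-...
# Illustrative only -- the actual names come from docker/.env.docker:
# DATABASE_URL=postgres://user:password@localhost:5432/chat
```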
Afterwards, to run the app, run:

```shell
docker compose up
```

The web server is then accessible at http://localhost:4567/. See the `server` README for information on available API endpoints.
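Once the stack is up, a quick smoke test can confirm the server is reachable. This assumes the root path responds to plain GET requests; the actual endpoints are documented in the `server` README:

```shell
# Print the response status line and headers from the running server.
curl -i http://localhost:4567/
```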
To populate the document store (or update it to fetch new docs), run:

```shell
docker compose run scraper
```