This is a simple, personal-use LLM aggregator: it sends the same prompt to multiple LLMs, displays their outputs in a UI, and lets you decide which response is better.
Supported LLMs:
- ChatGPT
- Gemini
- Claude
- Llama
- Install Docker.
- Start the Postgres container with the command below, run from the infra directory:
  docker compose up --build -d
- On the first run this command automatically creates the database schemas.
- Postgres data is stored in the pg_data folder, so it is not lost when the container stops.
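- To confirm the database container is up, the standard Docker Compose commands can be used from the infra directory (optional, shown only as an example):
  docker compose ps
  docker compose logs -f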
- Use Node 21.
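  For example, if you use nvm (not required; any way of installing Node 21 works):
  nvm install 21
  nvm use 21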
- Get API keys for ChatGPT, Gemini and Claude and set them in the environment (a .env file in the repository root can also be used):
  GEMINI_API_KEY=<your_key>
  CHATGPT_API_KEY=<your_key>
  CLAUDE_API_KEY=<your_key>
- Run Llama locally (Ollama can be used for this; example commands below).
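  For example, with Ollama installed (the model name below is only an illustration; use whichever Llama variant the app is configured for):
  ollama pull llama3
  ollama run llama3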
- Install dependencies and start the backend:
  npm install
  npm run execute
- Install dependencies and start the UI (development mode):
  npm install
  npm start
- To serve a production build of the UI instead:
  npm i -g serve (needs to be done only the first time)
  npm run build
  serve -s build
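  If the default port is already taken, serve accepts a different one via its -l flag, for example:
  serve -s build -l 4000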