This project is a real-time group chat application with AI assistance.
Multiple users can join the same chatroom, send messages, and see each other’s responses in real time.
Any message ending with a `?` automatically triggers a response from an AI assistant using Mistral’s models.
Each AI response can be liked or disliked by users, and the collected feedback is used in a preference fine-tuning loop with GRPO to continuously improve model behavior.
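The trigger rule above is simple to state precisely: a message is routed to the assistant when its text, ignoring trailing whitespace, ends with a question mark. A minimal sketch (the function name is illustrative, not the project's actual API):

```python
def should_trigger_ai(message: str) -> bool:
    """Return True when a chat message should be routed to the AI assistant.

    Trailing whitespace is ignored, so "What time is it? " still triggers.
    """
    return message.rstrip().endswith("?")


print(should_trigger_ai("What time is it?"))  # True
print(should_trigger_ai("Hello everyone"))    # False
```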
- 🔐 User authentication (JWT-based)
- 👥 Shared group chatroom — all users see each other’s messages
- 💬 Real-time chat via WebSocket
- 🤖 AI bot replies (triggered by `?`) powered by Mistral
- 👍👎 Feedback system — like/dislike buttons on each AI response
- 🔄 Feedback-to-fine-tuning loop using GRPO for preference adaptation
- 📦 Clear, modular structure with `.env` configuration
- 🛠 Easy to set up and test locally or in deployment
- OS: Linux, macOS, or Windows
- MySQL 8+
- Python 3.10+ and `pip`
```sql
CREATE DATABASE groupchat CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER 'chatuser'@'localhost' IDENTIFIED BY 'chatpass';
GRANT ALL PRIVILEGES ON groupchat.* TO 'chatuser'@'localhost';
FLUSH PRIVILEGES;
```

Then set up the Python environment:

```bash
cd backend
python -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install -r requirements.txt
```

Create `.env` and paste the following:
```ini
DATABASE_URL=mysql+asyncmy://chatuser:chatpass@localhost:3306/groupchat
JWT_SECRET=<replace_with_long_random_string>
JWT_EXPIRE_MINUTES=43200
LLM_API_BASE=https://api.mistral.ai/v1
LLM_MODEL=mistral-medium
LLM_API_KEY=<your_mistral_api_key>
APP_HOST=0.0.0.0
APP_PORT=8000
```

Run the server:

```bash
uvicorn app:app --host 0.0.0.0 --port 8000
```

Access the app at: http://localhost:8000
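The `JWT_SECRET` and `JWT_EXPIRE_MINUTES` settings drive token issuance. As an illustration of what an HS256-signed token looks like, here is a standard-library sketch; the real backend would more likely use a library such as PyJWT, and the function name is an assumption:

```python
import base64
import hashlib
import hmac
import json
import time


def _b64url(data: bytes) -> str:
    # JWT uses URL-safe base64 with the padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def create_token(username: str, secret: str, expire_minutes: int = 43200) -> str:
    """Sign a minimal HS256 JWT carrying the username and an expiry claim."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps({
        "sub": username,
        "exp": int(time.time()) + expire_minutes * 60,
    }).encode())
    signature = _b64url(hmac.new(
        secret.encode(), f"{header}.{payload}".encode(), hashlib.sha256
    ).digest())
    return f"{header}.{payload}.{signature}"


token = create_token("alice", "dev-secret")
print(token.count("."))  # 2 -- header.payload.signature
```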
This setup lets you run the FastAPI + MySQL + Frontend stack using Docker Compose — no manual environment setup required.
From the project root:
```bash
docker compose up --build
```

This command will:

- Build the FastAPI app image from your Dockerfile
- Launch MySQL 8.0 and the FastAPI backend
- Automatically initialize the database from `sql/schema.sql`
Once complete, you’ll see:
```
Uvicorn running on http://0.0.0.0:8000
```
The frontend is HTML/CSS/JS served by FastAPI:
- REST API → login, signup, and posting messages
- WebSocket → broadcasting chat messages in real time
- Feedback UI → like/dislike buttons on AI messages
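The WebSocket broadcasting described above typically follows the connection-manager pattern common in FastAPI apps. A self-contained sketch (class and method names are illustrative, not taken from this codebase):

```python
import asyncio


class ConnectionManager:
    """Tracks open sockets and fans each message out to every participant."""

    def __init__(self):
        self.active: list = []  # live WebSocket-like objects

    async def connect(self, ws):
        self.active.append(ws)

    def disconnect(self, ws):
        self.active.remove(ws)

    async def broadcast(self, message: str):
        # Every connected user sees every message, including their own.
        for ws in list(self.active):
            await ws.send_text(message)
```

In the real app, each WebSocket endpoint would call `connect` when a client is accepted, `broadcast` for each incoming chat message, and `disconnect` when the socket closes.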
- Start the FastAPI server (`uvicorn ...`).
- Open http://localhost:8000 in a browser.
- Sign up for an account and log in.
- Share the same URL with other users — each can sign up and join.
- All logged-in users connect to the same group chatroom.
- Messages are broadcast in real time to all participants.
- Any message ending with `?` will trigger an AI response from Mistral.
- Users can like or dislike AI responses.
- Feedback is stored in the DB and used in a GRPO fine-tuning pipeline to improve model responses over time.
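One way the stored like/dislike feedback can feed a GRPO-style pipeline is by pairing liked and disliked responses to the same prompt into preference triples. A sketch under that assumption (the row shape and field names are hypothetical, not the project's actual schema):

```python
from collections import defaultdict


def build_preference_pairs(feedback_rows):
    """Group rated AI responses by prompt and pair liked vs. disliked ones.

    feedback_rows: iterable of (prompt, response, rating) tuples, where
    rating is "like" or "dislike". Returns (prompt, chosen, rejected)
    triples usable as preference data for fine-tuning.
    """
    by_prompt = defaultdict(lambda: {"like": [], "dislike": []})
    for prompt, response, rating in feedback_rows:
        by_prompt[prompt][rating].append(response)

    pairs = []
    for prompt, buckets in by_prompt.items():
        for chosen in buckets["like"]:
            for rejected in buckets["dislike"]:
                pairs.append((prompt, chosen, rejected))
    return pairs


rows = [
    ("What is FastAPI?", "A modern Python web framework.", "like"),
    ("What is FastAPI?", "No idea.", "dislike"),
]
print(build_preference_pairs(rows))
# [('What is FastAPI?', 'A modern Python web framework.', 'No idea.')]
```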
```
backend/
├── app/                  # FastAPI app
├── requirements.txt      # Python dependencies
└── .env.example          # Example environment configuration
frontend/
└── static/               # HTML, CSS, JS files
sql/
└── schema.sql            # DB schema setup
README.md
```
