An AI-powered detective mystery-solving platform that generates unique murder cases and lets users investigate, gather clues, and solve mysteries through interactive gameplay.
- Generate murder mysteries in the preferred time period and setting
- Guess the correct murderer out of three suspects
- Ask specific questions about suspects' backgrounds and alibis using natural language (type the suspect's full name in the question)
- A toast notification pops up whenever a prompt retrieves a document from the vector store.
- Clone and install dependencies:

  ```bash
  git clone <repository-url>
  cd detective-ai
  npm install --legacy-peer-deps
  ```
- Set up environment variables:

  ```bash
  cp .env.example .env
  ```

  Configure the following required variables:

  ```bash
  DATABASE_URL=your_postgresql_url
  OPENAI_API_KEY=your_openai_api_key
  PINECONE_API_KEY=your_pinecone_api_key
  PINECONE_INDEX=your_pinecone_index_name
  ```
- Set up the database:

  ```bash
  ./start-database.sh   # starts a Docker container with a PostgreSQL database
  npm run db:generate
  npm run db:push
  ```
- Start the development server:

  ```bash
  npm run dev
  ```
- Next.js 15 - React framework with App Router
- TypeScript - Type-safe development
- Tailwind CSS - Utility-first styling framework
- tRPC - End-to-end typesafe APIs
- Prisma - Database ORM with PostgreSQL
- LangChain - AI framework for LLM integration
- OpenAI GPT-4 - Case generation and chat interactions
- Pinecone - Vector database for semantic search
- OpenAI Embeddings - Text embedding for similarity scoring
- React Query - Server state management via tRPC
- AI-Powered Creation: Uses GPT-4 to generate unique murder mystery cases based on user inputs (time period, setting, requirements)
- Structured Output: Each case includes:
  - A compelling storyline with the victim's backstory
  - 3 suspects (1 culprit, 2 innocents with truthful alibis)
  - 5 clues (2 leading to the culprit, 3 red herrings)
  - The complete solution (who, when, where, why, how)
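The structured output above can be modeled with TypeScript types along these lines (the field names are illustrative assumptions, not the project's actual schema):

```typescript
// Illustrative shape of a generated case; names are assumptions,
// not the project's actual types.
interface Suspect {
  name: string;
  background: string;
  alibi: string;
  isCulprit: boolean;
}

interface Clue {
  description: string;
  leadsToCulprit: boolean; // false for the three red herrings
}

interface GeneratedCase {
  name: string;
  timePeriod: string;
  setting: string;
  storyline: string; // includes the victim's backstory
  suspects: Suspect[]; // exactly 3: 1 culprit, 2 innocents
  clues: Clue[]; // exactly 5: 2 real leads, 3 red herrings
  solution: { who: string; when: string; where: string; why: string; how: string };
}

// Sanity check that a case matches the advertised structure.
function isWellFormed(c: GeneratedCase): boolean {
  const culprits = c.suspects.filter((s) => s.isCulprit).length;
  const realLeads = c.clues.filter((cl) => cl.leadsToCulprit).length;
  return c.suspects.length === 3 && culprits === 1 &&
         c.clues.length === 5 && realLeads === 2;
}
```

A guard like `isWellFormed` is useful after generation, since LLM output does not always honor the requested counts.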
The PostgreSQL database (managed with Prisma) stores structured case data:
- Case metadata (name, time period, setting)
- Solution details (culprit, motive, method)
- User guesses and game progress
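The relational data above might be modeled with Prisma along these lines (hypothetical models and fields, not the project's actual schema):

```prisma
// Hypothetical Prisma models sketching the stored case data;
// the project's real schema may differ.
model Case {
  id         String  @id @default(cuid())
  name       String
  timePeriod String
  setting    String
  culprit    String  // solution details
  motive     String
  method     String
  guesses    Guess[]
}

model Guess {
  id     String @id @default(cuid())
  caseId String
  case   Case   @relation(fields: [caseId], references: [id])
  who    String
  why    String
  how    String
  score  Float  // similarity-based accuracy score
}
```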
The Pinecone vector store holds semantic content for intelligent search:
- Suspect alibis and backgrounds: enables natural language queries about suspects
- Case solution embeddings: used for similarity scoring of user guesses
- Semantic search capabilities: allows contextual information retrieval
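A minimal sketch of how suspect content could be prepared for the vector store (plain objects here; the actual project presumably wraps content in LangChain documents before embedding and upserting to Pinecone, and its record shape may differ):

```typescript
// Hypothetical shape of a record destined for the vector store.
interface VectorRecord {
  id: string;
  text: string; // content to be embedded
  metadata: {
    caseId: string;
    kind: "alibi" | "background";
    suspect: string;
  };
}

// Build one searchable record per suspect alibi and background,
// embedding the suspect's name in the text so natural language
// queries that mention the name retrieve the right documents.
function buildSuspectRecords(
  caseId: string,
  suspects: { name: string; alibi: string; background: string }[],
): VectorRecord[] {
  return suspects.flatMap((s, i) => [
    {
      id: `${caseId}-alibi-${i}`,
      text: `${s.name}'s alibi: ${s.alibi}`,
      metadata: { caseId, kind: "alibi" as const, suspect: s.name },
    },
    {
      id: `${caseId}-background-${i}`,
      text: `${s.name}'s background: ${s.background}`,
      metadata: { caseId, kind: "background" as const, suspect: s.name },
    },
  ]);
}
```

Tagging each record with `caseId` and `suspect` metadata allows retrieval to be filtered to the current case.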
- Case Presentation: Users receive case introduction with basic information
- Interactive Investigation: Players can:
  - Ask questions about suspects using natural language
  - Search for specific information about alibis and backgrounds
  - Gather clues through targeted queries
- AI Chat Assistant: Provides investigation guidance and answers questions using vector search
- Guess Submission: Users submit their theories (who, why, how)
- Similarity Scoring: Vector embeddings compare user guesses against actual solutions
- Results: Players receive percentage-based accuracy scores for their deductions
- Embedding Model: OpenAI text-embedding-3-small
- Similarity Threshold: Configurable threshold (default 0.7) for relevant results
- Contextual Search: Suspect information includes name variations for better matching
- Semantic Scoring: User guesses are scored against actual solutions using cosine similarity
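The scoring step reduces to cosine similarity over embedding vectors. A self-contained sketch (the real vectors would come from OpenAI's `text-embedding-3-small`; the percentage mapping and `null`-below-threshold behavior are assumptions, not the project's exact logic):

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// One plausible mapping from similarity to the percentage accuracy
// shown to players, honoring the configurable relevance threshold
// (default 0.7). Results below the threshold are treated as no match.
function accuracyPercent(similarity: number, threshold = 0.7): number | null {
  if (similarity < threshold) return null;
  return Math.round(similarity * 100);
}
```

In practice the guess text and the stored solution would each be embedded once, then compared with `cosineSimilarity` to produce the score.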
```
src/
├── app/                       # Next.js App Router
│   ├── _components/           # React components
│   │   ├── atoms/             # Basic UI components
│   │   ├── molecules/         # Composite components
│   │   └── organisms/         # Complex page sections
│   └── api/                   # API routes
├── server/
│   ├── api/                   # tRPC routers
│   │   └── routers/           # API endpoint definitions
│   └── services/              # Business logic
│       ├── case-generation.ts # AI case creation
│       ├── vector-store.ts    # Pinecone integration
│       └── detective-chat.ts  # Investigation chat
├── trpc/                      # tRPC client setup
└── types/                     # TypeScript type definitions
```
- Infinite Case Variety: AI generates unique mysteries based on user preferences
- Semantic Scoring: Advanced similarity algorithms evaluate solution accuracy
Built using the T3 Stack and modern AI technologies.