Agent Prompt Train is a Claude Code management server for teams that provides comprehensive monitoring, conversation tracking, and dashboard visualizations. Agent Prompt Train helps you understand, manage, and improve your team's Claude Code usage. (Individual Claude Max plans are supported.)
- Getting Started - Set up Agent Prompt Train in seconds
- Features - Explore capabilities and functionality
- Development - Build and contribute
- Documentation - Complete guides and references
- Deployment - Production setup guides
Moonsong Labs is a leading protocol + AI/ML engineering company that operates through two distinct strategies: building long-term engineering services partnerships and launching high-conviction venture studio projects.
👉 Check out our engineering services work
👉 Discover our venture studio
👉 Contact Us
Unofficial project. This community-maintained tool interoperates with Anthropic's Claude Code. It is not affiliated with, sponsored by, or endorsed by Anthropic. Claude and Claude Code are trademarks of Anthropic.
Agent Prompt Train empowers development teams to maximize their Claude AI usage through:
- 🔍 Complete Visibility: Real-time access to conversations, tool invocations, and prompts for effective troubleshooting and debugging
- 📈 Historical Analytics: Comprehensive activity history enabling usage monitoring, pattern identification, and continuous improvement
- 🤖 Intelligent Insights: AI-powered conversation analysis providing actionable prompt optimization suggestions and best practice recommendations
Experience Agent Prompt Train in action with our live demo:
👉 https://prompttrain-demo.moonsonglabs.dev
Note: This is a read-only demo showcasing real usage data from our development team.
- 🚀 High-Performance Proxy - Built with Bun and Hono for minimal latency
- 🔀 Conversation Tracking - Automatic message threading with branch, sub-agent & compact support
- 📊 Real-time Dashboard - Monitor usage, view conversations, and analyze patterns
- 🔐 Multi-Auth Support - API keys and OAuth with auto-refresh
- 📈 Token Tracking - Detailed usage statistics per project and account
- 🔄 Streaming Support - Full SSE streaming with chunk storage
- 🐳 Docker Ready - Separate optimized images for each service
- 🤖 Claude CLI Integration - Run Claude CLI connected to the proxy
- 🧠 AI-Powered Analysis - Automated conversation insights using Gemini Pro
Understanding these terms will help you navigate Agent Prompt Train effectively:
- 🗣️ Conversation: A complete interaction session between a user and Claude, consisting of multiple message exchanges. Each conversation has a unique ID and can span multiple requests.
- 🌳 Branch: When you edit an earlier message in a conversation and continue from there, it creates a new branch - similar to Git branches. This allows exploring alternative conversation paths without losing the original.
- 📦 Compact: When a conversation exceeds Claude's context window, it's automatically summarized and continued as a "compact" conversation, preserving the essential context while staying within token limits.
- 🤖 Sub-task: When Claude spawns another AI agent using the Task tool, it creates a sub-task. These are tracked separately but linked to their parent conversation for complete visibility.
- 🔤 Token: The basic unit of text that Claude processes. Monitoring token usage helps track costs and stay within API limits.
- 📊 Request: A single API call to Claude, which may contain multiple messages. Conversations are built from multiple requests.
- 🔧 Tool Use: Claude's ability to use external tools (like file reading, web search, or spawning sub-tasks). Each tool invocation is tracked and displayed.
- 📝 MCP (Model Context Protocol): A protocol for managing and sharing prompt templates across teams, with GitHub integration for version control.
- Timeline View: Shows the chronological flow of messages within a conversation
- Tree View: Visualizes conversation branches and sub-tasks as an interactive tree
- Message Hash: Unique identifier for each message, used to track conversation flow and detect branches
Visualize entire conversation flows as interactive trees, making it easy to understand complex interactions, debug issues, and track conversation branches.
Examine individual API requests and responses with syntax highlighting, tool result visualization, and comprehensive metadata including token counts and timing information.
Leverage Gemini Pro to automatically analyze conversations for sentiment, quality, outcomes, and actionable insights. Get intelligent recommendations for improving your AI interactions.
Manage and sync Model Context Protocol prompts from GitHub repositories. Create reusable prompt templates that can be shared across your team and integrated with Claude Desktop.
For developers who need complete visibility, access the raw JSON view of any request or response with syntax highlighting and expandable tree structure.
Administrators and heavy users can follow token usage and see when accounts are approaching rate limits.
Get Agent Prompt Train running locally in seconds.
Prerequisites:
- Docker
- Claude Code (already installed and set up)
Start Agent Prompt Train (a single Docker image bundling Postgres, the proxy, and the dashboard):
docker run -d -p 3000:3000 -p 3001:3001 --name agent-prompttrain moonsonglabs/agent-prompttrain-all-in:latest

Start using it from any project; you can run multiple Claude Code sessions at the same time:
ANTHROPIC_BASE_URL=http://localhost:3000 claude

You're all set!
Access the dashboard at http://localhost:3001 to watch conversations as you use Claude Code.
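If you use the proxy regularly, you can export the base URL once per shell session (or in your shell profile) instead of prefixing every command; this is plain shell convenience, nothing specific to Agent Prompt Train:

export ANTHROPIC_BASE_URL=http://localhost:3000
claude   # every Claude Code session started from this shell now routes through the proxy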
Note: For local development, the all-in-one image includes DASHBOARD_DEV_USER_EMAIL=dev@localhost for easy access.
Looking to develop or contribute? Jump to Development Setup.
For developers who want to modify the proxy or dashboard code with hot reload capabilities.
- Bun runtime (v1.0+)
- Docker and Docker Compose
- Claude API Key or Claude Max subscription for each developer using Agent Prompt Train
1. Initial Setup
# Clone and install dependencies
git clone https://github.com/Moonsong-Labs/agent-prompttrain.git
cd agent-prompttrain
bun run setup
# Configure environment
cp .env.example .env
# Edit .env with your settings
2. Start Infrastructure Services
# Start ONLY PostgreSQL and the Claude CLI (optimized for development)
bun run docker:dev:up
3. Start Application Services Locally
# Start proxy and dashboard with hot reload
bun run dev
# Infrastructure management
bun run docker:dev:up # Start development infrastructure (postgres, claude-cli)
bun run docker:dev:down # Stop development infrastructure
bun run docker:dev:logs # View infrastructure logs
# Development workflow
bun run dev # Start local services with hot reload
bun run typecheck # Type checking
bun run test # Run tests
bun run format # Format code
# Database operations
bun run db:backup
bun run db:analyze-conversations
- ✅ Hot reload for code changes (proxy & dashboard run locally)
- ✅ Direct debugging access with breakpoints
- ✅ Fast iteration cycle (no container rebuilds)
- ✅ Production-like database environment (PostgreSQL in Docker)
- ✅ Separate concerns (infrastructure vs application services)
Development Mode:
Local Machine: Docker Containers:
├── Proxy (port 3000) ├── PostgreSQL (port 5432)
├── Dashboard (port 3001) └── Claude CLI
└── Hot Reload ⚡
vs. Full Docker Mode:
Docker Containers:
├── PostgreSQL (port 5432)
├── Proxy (port 3000)
├── Dashboard (port 3001)
└── Claude CLI
For deploying Agent Prompt Train in production environments.
Important Considerations:
To comply with Anthropic's Terms of Service, each user of Agent Prompt Train needs their own Claude Max subscription.
Choose your deployment method:
- AWS Infrastructure - Complete AWS deployment with RDS, ECS, and load balancing
- Docker Compose Production - Production Docker Compose setup
- Docker Deployment - Container-based deployment options
- Security Guide - Authentication, authorization, and security best practices
- Monitoring - Observability and alerting setup
- Database Management - Database administration and maintenance
- Backup & Recovery - Data protection strategies
- MANDATORY: Deploy oauth2-proxy for production dashboard access (a rough sketch follows this checklist)
- Configure proper SSL/TLS certificates
- Set up monitoring and alerting
- Implement proper backup strategies
- Review security documentation thoroughly
- Use DASHBOARD_DEV_USER_EMAIL only for local development
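For the oauth2-proxy requirement above, one possible shape of the setup is sketched below. This is only an illustration: the provider, client credentials, hostnames, and cookie secret are placeholders, the flag set varies between oauth2-proxy versions and topologies (plain reverse proxy vs. nginx auth_request), and the identity header that ultimately reaches the dashboard must match DASHBOARD_SSO_HEADERS. Follow the Security Guide and deployment docs for the authoritative configuration.

# Placeholder values throughout; adapt the provider, domains, networking, and secrets.
docker run -d --name prompttrain-oauth2-proxy -p 4180:4180 \
  quay.io/oauth2-proxy/oauth2-proxy:latest \
  --provider=google \
  --client-id="$OAUTH_CLIENT_ID" \
  --client-secret="$OAUTH_CLIENT_SECRET" \
  --cookie-secret="$COOKIE_SECRET" \
  --email-domain=your-company.com \
  --http-address=0.0.0.0:4180 \
  --upstream=http://dashboard:3001 \
  --pass-user-headers \
  --set-xauthrequest
# Whichever identity header your proxy forwards to the dashboard (e.g. X-Auth-Request-Email
# or X-Forwarded-Email, depending on mode and version) must be listed in DASHBOARD_SSO_HEADERS.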
Essential configuration:
# Database
DATABASE_URL=postgresql://user:password@localhost:5432/agent_prompttrain
# Dashboard Authentication
# Production: oauth2-proxy is MANDATORY - see deployment docs
DASHBOARD_SSO_HEADERS=X-Auth-Request-Email
DASHBOARD_SSO_ALLOWED_DOMAINS=your-company.com
# Development: Use dev bypass (never in production!)
DASHBOARD_DEV_USER_EMAIL=dev@localhost
# Service-to-Service Authentication
INTERNAL_API_KEY=your-internal-service-key
# Optional Features
STORAGE_ENABLED=true
DEBUG=false

See the Documentation for complete configuration options.
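The INTERNAL_API_KEY above can be any sufficiently long random value; one common way to generate one (assuming openssl is available) is:

openssl rand -hex 32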
- Create an account credential under credentials/accounts/:

mkdir -p credentials/accounts
cat > credentials/accounts/account-primary.credentials.json <<'JSON'
{
  "type": "api_key",
  "accountId": "acc_team_alpha",
  "api_key": "sk-ant-your-claude-api-key"
}
JSON
- Allow proxy clients by listing Bearer tokens per project in credentials/project-client-keys/:

mkdir -p credentials/project-client-keys
cat > credentials/project-client-keys/project-alpha.client-keys.json <<'JSON'
{
  "keys": ["cnp_live_team_alpha"]
}
JSON
- Tag outgoing Anthropic calls so responses stay mapped to the right project (a combined Claude Code example follows these steps):
export ANTHROPIC_CUSTOM_HEADERS="MSL-Project-Id:project-alpha"
- Select a specific account at runtime (optional):

curl -X POST http://localhost:3000/v1/messages \
  -H "MSL-Project-Id: project-alpha" \
  -H "MSL-Account: acc_abc123xyz" \
  -H "Authorization: Bearer cnp_live_team_alpha" \
  -H "Content-Type: application/json" \
  -d '{"model":"claude-3-opus-20240229","messages":[{"role":"user","content":"Hello"}]}'
If MSL-Account is omitted, the proxy uses the project's configured default account. Each project must have a default account set via the dashboard.
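These pieces can also be combined when launching Claude Code itself, pointing the CLI at the proxy and tagging the project in a single command. A sketch reusing the example project ID from above and assuming the proxy runs on localhost:3000:

ANTHROPIC_BASE_URL=http://localhost:3000 \
ANTHROPIC_CUSTOM_HEADERS="MSL-Project-Id:project-alpha" \
claude
# Depending on how client keys are enforced in your setup, the CLI may also need to
# supply the project's client key as its API token.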
Use the proxy exactly like Claude's API:
curl -X POST http://localhost:3000/v1/messages \
-H "Authorization: Bearer YOUR_CLIENT_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "claude-3-opus-20240229",
"messages": [{"role": "user", "content": "Hello!"}]
}'
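Streaming requests pass through the proxy in the same way. A sketch that mirrors the example above with streaming enabled (the client key and model are the same placeholders; -N simply disables curl's output buffering so SSE chunks print as they arrive):

curl -N -X POST http://localhost:3000/v1/messages \
  -H "Authorization: Bearer YOUR_CLIENT_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-3-opus-20240229",
    "stream": true,
    "messages": [{"role": "user", "content": "Hello!"}]
  }'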
Access the dashboard at http://localhost:3001. The dashboard requires user authentication:
- Production: oauth2-proxy is MANDATORY for user authentication (see Deployment Guide)
- Development: Set DASHBOARD_DEV_USER_EMAIL=dev@localhost for a local development bypass
Features:
- Real-time request monitoring
- Conversation visualization with branching
- Token usage analytics
- Request history browsing
agent-prompttrain/
├── packages/shared/ # Shared types and utilities
├── services/
│ ├── proxy/ # Proxy API service
│ └── dashboard/ # Dashboard web service
└── scripts/ # Management utilities
See Architecture Overview for detailed architecture documentation.
# Run type checking
bun run typecheck
# Run tests
bun test
# Format code
bun run format
# Database operations
bun run db:backup # Backup database
bun run db:analyze-conversations # Analyze conversation structure
bun run db:rebuild-conversations # Rebuild conversation data
# AI Analysis management
bun run ai:check-jobs # Check analysis job statuses
bun run ai:check-content # Inspect analysis content
bun run ai:reset-stuck # Reset jobs with high retry counts

See the Development Guide for development guidelines.
Agent Prompt Train supports deployment to multiple environments:
- Production (prod) - Live production services
- Staging (staging) - Pre-production testing environment
For AWS EC2 deployments, use the manage-agent-prompttrain-proxies.sh script with environment filtering:
# Deploy to production servers only
./scripts/ops/manage-agent-prompttrain-proxies.sh --env prod up
# Check staging server status
./scripts/ops/manage-agent-prompttrain-proxies.sh --env staging status

See the AWS Infrastructure Guide for detailed multi-environment setup.
# Run with docker-compose using images from registry
./docker-up.sh up -d

# Build and run with locally built images
docker compose -f docker/docker-compose.yml up -d --build

Production deployments must use oauth2-proxy for user authentication. See Docker Compose Deployment for configuration.
# Build images individually
docker build -f docker/proxy/Dockerfile -t moonsonglabs/agent-prompttrain-proxy:local .
docker build -f docker/dashboard/Dockerfile -t agent-prompttrain-dashboard:local .
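To try the individually built images, something like the following can work. This is a sketch only: the environment variables mirror the configuration section above, the ports match the defaults, and depending on your setup you will likely also need to provide credential files and the other settings described earlier.

docker run -d --name prompttrain-proxy -p 3000:3000 \
  -e DATABASE_URL=postgresql://user:password@db-host:5432/agent_prompttrain \
  moonsonglabs/agent-prompttrain-proxy:local

docker run -d --name prompttrain-dashboard -p 3001:3001 \
  -e DATABASE_URL=postgresql://user:password@db-host:5432/agent_prompttrain \
  -e DASHBOARD_DEV_USER_EMAIL=dev@localhost \
  agent-prompttrain-dashboard:local
# DASHBOARD_DEV_USER_EMAIL is for local use only; never set it in production.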
See the Deployment Guide for production deployment options.

Comprehensive documentation is available in the docs directory:
- Quick Start Guide - Get up and running in 5 minutes
- Installation - Detailed installation instructions
- Configuration - All configuration options
- API Reference - Complete API documentation
- Authentication - Auth setup and troubleshooting
- Dashboard Guide - Using the monitoring dashboard
- Claude CLI - CLI integration guide
- Deployment - Docker and production deployment
- Security - Security best practices
- Monitoring - Metrics and observability
- Backup & Recovery - Data protection
- System Architecture - High-level design
- Internals - Deep implementation details
- ADRs - Architecture decision records
- Common Issues - FAQ and solutions
- Performance - Performance optimization
- Debugging - Debug techniques
Contributions are welcome! Please read our Contributing Guidelines first.