A modern Python project template for building AI-powered APIs with PydanticAI, FastAPI, and Docker.
- Prerequisites:
- VS Code or Cursor
- Dev Containers extension
- Docker
- A Nerd Font for optimal terminal experience (optional)
The development environment uses modern CLI tools like eza with icons enabled (--icons). For these icons to display correctly in the integrated terminal (VS Code / Cursor):
- Install a Nerd Font on your host machine (not in the container):
  - The `.devcontainer/devcontainer.json` is configured to use "MesloLGM Nerd Font Mono" by default.
  - You can download this specific font or another Nerd Font variant (like Meslo, Fira Code, Hack) from the Nerd Fonts website. Make sure to get a "Nerd Font" version (often suffixed with `NF` or `Nerd Font`).
  - Install the downloaded font on your host operating system (e.g., through Font Book on macOS).
- Verify VS Code/Cursor configuration:
  - The Dev Container setting `terminal.integrated.fontFamily` in `.devcontainer/devcontainer.json` is set to `"MesloLGM Nerd Font Mono"`.
  - If you installed a different Nerd Font on your host, update this setting in `.devcontainer/devcontainer.json` to match the exact name of the font you installed before rebuilding the container. You can find the exact name in your OS's font manager (e.g., Font Book on macOS).
- Configure external terminals (if applicable):
  - If you use a terminal outside of VS Code/Cursor to interact with the container, ensure that terminal is also configured to use the Nerd Font you installed on your host.
Note: If you see boxes (□) or missing icons in the terminal after rebuilding the container, it likely means the font name specified in `.devcontainer/devcontainer.json` doesn't exactly match a Nerd Font installed and recognized on your host system. Double-check the font name in your OS font manager and the `devcontainer.json` setting.
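For reference, the font setting lives under the VS Code customizations block of the dev container config. A minimal excerpt (assuming the standard Dev Containers layout; your file will contain other settings as well):

```jsonc
// .devcontainer/devcontainer.json (excerpt)
{
  "customizations": {
    "vscode": {
      "settings": {
        "terminal.integrated.fontFamily": "MesloLGM Nerd Font Mono"
      }
    }
  }
}
```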
- Open in Dev Container:
  - Clone this repository
  - Open in VS Code/Cursor
  - Click "Reopen in Container" when prompted
- Start development:
  - Inside the container, run `start` or press `Cmd+Shift+B` (macOS) / `Ctrl+Shift+B` (Windows/Linux)
  - Visit http://localhost:8000/docs for the API documentation
  - For the MCP server: run `pat run-mcp` (accessible at http://localhost:3001)
This README provides a high-level overview. For detailed information, refer to:
| Documentation | Purpose |
|---|---|
| Documentation Overview | Comprehensive guide to all documentation |
| Developer Guide | Setup instructions and development workflows |
| Architecture | System design, patterns, and component relationships |
| Models | Pydantic models, validation, and PydanticAI integration |
| API Reference | API endpoints, parameters, and response formats |
| Testing Guide | Testing strategies and examples |
| Maintenance | Configuration management and project maintenance |
| Observability | Logging, tracing, and monitoring with Logfire |
| Cursor Rules | AI-assisted development with Cursor |
| Wishlist | Future improvements and feature ideas |
- PydanticAI: Structured interactions with LLMs using Pydantic models
- FastAPI: High-performance API framework with automatic docs
- MCP Server: Model Context Protocol server for AI agent access
- Type Safety: End-to-end type checking with mypy and Pydantic
- Docker: Containerization for consistent development and deployment
- Modern Tooling: Ruff, MyPy, UV package manager, and more
- Prompt Testing: Automated testing for LLM prompts with CI/CD integration
- Observability: Complete visibility with Logfire integration
- Cursor Rules: Smart AI-assisted development with contextual reminders
- Repomix Runner: Easily bundle project files for providing context to AI assistants (VS Code Extension)
Maintaining this project involves keeping dependencies up-to-date, synchronizing configuration files, and managing Docker environments. For detailed instructions on managing dependencies, CLI commands, Docker configurations, and more, please refer to the Project Maintenance Guide.
This project includes custom Cursor Rules to enhance your development experience when using Cursor, an AI-powered code editor:
- Documentation Reminders: Get contextual reminders to update documentation when changing code
- Type Safety Enforcement: Maintain type safety throughout the codebase
- Director Pattern Detection: Identify opportunities for implementing autonomous AI workflows
- Repomix Integration: Use the Repomix Runner extension (automatically installed in the dev container) to easily bundle files or directories and copy them to the clipboard for pasting into AI chat prompts.
To get started with the Cursor Rules:
- Open the project in Cursor
- The rules will be automatically loaded from `.cursor/rules.yml`
- Start coding and benefit from smart, contextual assistance
- Use `@` symbol references (e.g., `@docs/MODELS.md`) to bring relevant context into chats
For detailed information, see the Cursor Rules Guide.
Due to potential editor interference when directly modifying files in the .cursor/rules/ directory, a helper script is provided for a safer workflow:
- Create/Edit Rules: Make your changes to rule files (or create new ones) with the `.md` extension inside the staging directory `.cursor/rules_staging/` at the project root. Ensure each file starts with the correct YAML frontmatter (see `docs/CURSOR_RULES.md` for structure).
- Run the Script: Execute `make rules` or `./scripts/tasks/move_rules.sh` (make sure it's executable: `chmod +x scripts/tasks/move_rules.sh`).
- Result: The script will move all `.md` files from `.cursor/rules_staging/` to `.cursor/rules/`, rename them with the `.mdc` extension, and remove the (now empty) `.cursor/rules_staging/` directory.
This ensures the files are correctly formatted and placed without potential conflicts during the editing process.
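As an illustration, a Cursor rule file typically begins with frontmatter along these lines (field names follow Cursor's `.mdc` format; the description and glob below are hypothetical, so check `docs/CURSOR_RULES.md` for the structure this project expects):

```markdown
---
description: Remind to update documentation when Pydantic models change
globs: src/**/*.py
alwaysApply: false
---
Rule body in Markdown goes here.
```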
```
├── .cursor            # Cursor AI rules and configuration
├── .devcontainer      # Dev container configuration
├── .vscode            # VS Code settings and tasks
├── docs/              # Detailed documentation
├── promptfoo/         # Prompt testing configuration
├── src/               # Source code
│   └── pydanticai_api_template/
│       ├── api/       # FastAPI routes and endpoints
│       ├── agents/    # PydanticAI agent definitions
│       ├── models/    # Pydantic data models
│       ├── mcp/       # MCP server implementation
│       └── cli.py     # Command-line interface
├── tests/             # Test suite
├── wishlist/          # Future improvements and feature ideas
├── pyproject.toml     # Project dependencies and config
└── Makefile           # Common development commands
```
```python
from pydantic import BaseModel
from pydantic_ai import Agent

class StoryIdea(BaseModel):
    title: str
    premise: str

story_agent = Agent("openai:gpt-4o", result_type=StoryIdea)
result = await story_agent.run("Give me a sci-fi story idea")  # inside an async function
```

You can generate story ideas using the /story endpoint:
```bash
curl -X POST "http://localhost:8000/story" \
  -H "Content-Type: application/json" \
  -d '{"message":"Give me a sci-fi story about time travel"}'
```

Response:
```json
{
  "title": "Echoes of Tomorrow",
  "premise": "A physicist discovers that time isn't linear but layered, with each moment existing simultaneously. When she builds a device to view these layers, she witnesses a future catastrophe and must find a way to reach across time to prevent it."
}
```

Connect any MCP-compatible client to access tools:
```python
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerHTTP

server = MCPServerHTTP(url='http://localhost:3001/sse')
agent = Agent('openai:gpt-4o', mcp_servers=[server])
```

We bundle the OpenAI Codex CLI into our dev container for AI-powered, in-terminal coding assistance.
Install (on your host, if you ever need it locally):

```bash
npm install -g @openai/codex
```

Usage inside the container:
- `codex` → opens an interactive shell
- `codex "explain this codebase to me"` → one-shot prompt
- You can also pipe in a file path: `codex src/pydanticai_api_template/api/endpoints.py`
VS Code Task:
- Open Command Palette → "Run Task" → "Codex: Interactive"
API Key:
- The `OPENAI_API_KEY` environment variable is automatically passed into the dev container for Codex CLI usage. Add your key to your `.env` file as shown in the Environment Setup section.
The project includes a wishlist/ directory for capturing future improvements and feature ideas. This serves as:
- Feature Backlog: A place to document desired enhancements while focusing on current priorities
- AI-Driven Implementation: Actionable items for AI to implement during coding sessions
- Collaborative Planning: A way to track ideas from the entire team for future sprints
Current wishlist items:
- Templateizing and CookieCutter integration for project scaffolding
To contribute to the wishlist, add markdown files to the wishlist/ directory with detailed descriptions of proposed features or improvements.
This template comes with built-in observability powered by Logfire. Key features include:
- Automatic Instrumentation for FastAPI, PydanticAI, and HTTP requests
- Live Debugging with real-time trace visualization
- LLM Call Monitoring including prompts, tokens, and costs
- Performance Metrics to identify bottlenecks
```bash
# From inside the dev container
auth-logfire   # Authenticate with Logfire
use-logfire    # Set the current project
```

Set these environment variables:

```bash
LOGFIRE_TOKEN="your-write-token"
LOGFIRE_ENABLED="true"
```

For detailed instructions, see Observability.
Create a `.env` file in the project root with your API keys:
```bash
OPENAI_API_KEY=your_openai_api_key_here
ANTHROPIC_API_KEY=your_claude_api_key_here
PYDANTICAI_API_KEY=your_api_key_here  # For API authentication
```

The API endpoints are protected with API key authentication. To generate a secure API key:

```bash
# Run the API key generation command
pat generate-api-key
```

This will generate a secure random key that you can add to your `.env` file as `PYDANTICAI_API_KEY`.
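If you're curious what such a key-generation step amounts to, Python's standard `secrets` module produces a URL-safe token of comparable strength (a sketch only; the exact format `pat generate-api-key` emits may differ):

```python
import secrets

# 32 random bytes, URL-safe base64-encoded (43 characters, no padding)
api_key = secrets.token_urlsafe(32)
print(api_key)
```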
When making requests to protected endpoints like `/chat` or `/story`, include the API key in the `X-API-Key` header:
```bash
curl -X POST "http://localhost:8000/chat" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your_api_key_here" \
  -d '{"message":"Tell me a joke"}'
```

If no API key is set in the environment, authentication will be skipped (with a warning in the logs).