A modular, production-ready template for building multiagent systems using LangGraph and Google's Agent2Agent (A2A) protocol. This template demonstrates industry best practices for creating scalable, interoperable AI agents.
- Multiple Agent Patterns: Both LangGraph (create_agent and custom class) and LangChain implementations
- A2A Protocol Integration: Full Google A2A protocol support for agent interoperability
- Production-Ready: Includes logging, persistence, memory management, and error handling
- Modular Architecture: Easy to extend and customize for specific use cases
- Type-Safe: Fully typed with Pydantic models
- Client and Server: Complete A2A client and server implementations
- Checkpointing: Built-in support for conversation persistence
```
LanggraphA2ATemplate/
├── config/                         # Configuration management
│   ├── settings.py                 # Pydantic settings with env support
│   └── logging_config.py           # Structured logging setup
├── core/                           # Core utilities
│   ├── a2a/                        # A2A protocol implementation
│   │   ├── wrapper.py              # A2A agent wrapper
│   │   ├── agent_card.py           # Agent card management
│   │   └── skills.py               # Skills registry
│   ├── memory/                     # Memory and persistence
│   │   ├── persistence.py          # Checkpoint management
│   │   └── memory_manager.py       # Conversation memory
│   └── logging/                    # Logging utilities
├── agents/                         # Agent implementations
│   ├── langgraph/                  # LangGraph agents
│   │   ├── create_agent_example.py # create_react_agent approach
│   │   └── custom_class_example.py # Custom supervisor approach
│   └── langchain/                  # LangChain agents
│       └── langchain_agent_example.py  # Traditional AgentExecutor
├── server/                         # A2A server
│   └── a2a_server.py               # FastAPI server implementation
├── client/                         # A2A client
│   └── a2a_client.py               # HTTP client for A2A agents
└── examples/                       # Example scripts
```
- Python 3.10+
- OpenAI API key (or Anthropic API key)
- Clone the repository:

```bash
git clone <repository-url>
cd LanggraphA2ATemplate
```

- Create a virtual environment:

```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

- Configure environment:

```bash
cp .env.example .env
# Edit .env and add your API keys
```

```python
from agents.langgraph import create_simple_agent
from server import create_a2a_server

# Create agent
agent = create_simple_agent(
    model_name="gpt-4",
    use_checkpointing=True,
)

# Create and run A2A server
server = create_a2a_server(
    agent,
    agent_name="Research Agent",
    agent_description="Agent with search and calculation capabilities",
)
server.run(port=8000)
```

Run the example:
```bash
python examples/run_langgraph_create_agent.py
```

```python
from agents.langgraph import create_multiagent_system
from server import create_a2a_server

# Create supervisor-based multiagent system
system = create_multiagent_system(use_checkpointing=True)

# Expose via A2A server
server = create_a2a_server(
    system,
    agent_name="Supervisor System",
    agent_description="Multiagent system with specialized workers",
)
server.run(port=8001)
```

Run the example:
```bash
python examples/run_langgraph_custom.py
```

```python
from agents.langchain import create_langchain_agent, LangChainAgentWrapper
from server import create_a2a_server

# Create traditional LangChain agent
agent = create_langchain_agent(model_name="gpt-4")
wrapped = LangChainAgentWrapper(agent)

# Expose via A2A server
server = create_a2a_server(
    wrapped,
    agent_name="LangChain Agent",
    agent_description="Traditional LangChain agent with tools",
)
server.run(port=8002)
```

Run the example:
```bash
python examples/run_langchain_agent.py
```

```python
from client import A2AClient

# Initialize client
client = A2AClient("http://localhost:8000")

# Discover agent capabilities
card = await client.get_agent_card()
print(f"Agent: {card.name}")
print(f"Skills: {len(card.skills)}")

# Invoke agent
result = await client.invoke(
    "What is 25 * 16?",
    thread_id="my-conversation",
)
print(result["response"])

# Stream responses
async for chunk in client.stream("Search for LangGraph info"):
    print(chunk)
```

Best for: Single-purpose agents with straightforward tool usage
```python
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

agent = create_react_agent(
    ChatOpenAI(model="gpt-4"),
    tools=[tool1, tool2],
    checkpointer=get_checkpointer(),
)
```

Best for: Complex multiagent systems with custom routing logic
```python
class WorkerAgent:
    def __init__(self, name, tools):
        self.name = name
        self.agent = create_react_agent(llm, tools)

    async def execute(self, state):
        return await self.agent.ainvoke(state)

# Create supervisor-based system
workflow = StateGraph(AgentState)
workflow.add_node("worker1", worker1.execute)
workflow.add_node("supervisor", supervisor.route)
```

Best for: Backward compatibility or simple single-agent scenarios
```python
from langchain.agents import create_openai_functions_agent, AgentExecutor

agent = create_openai_functions_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
```

This template implements the Google A2A protocol for agent interoperability:
Agents expose their capabilities at /.well-known/agent-card.json:
```json
{
  "name": "Research Agent",
  "description": "Agent with search capabilities",
  "version": "1.0.0",
  "skills": [
    {
      "id": "search_web",
      "name": "Web Search",
      "description": "Search the web for information"
    }
  ],
  "endpoints": {
    "invoke": "/api/v1/invoke",
    "stream": "/api/v1/stream"
  }
}
```

- `GET /.well-known/agent-card.json` - Agent capabilities
- `POST /api/v1/invoke` - Synchronous invocation
- `POST /api/v1/stream` - Streaming responses
- `POST /api/v1/skills/{skill_id}` - Direct skill invocation
- `GET /health` - Health check
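These endpoints can also be exercised with plain HTTP, independent of the bundled A2AClient. A minimal stdlib sketch — note the invoke payload shape here is an assumption, so check server/a2a_server.py for the actual schema:

```python
import json
from urllib import request

AGENT_CARD_PATH = "/.well-known/agent-card.json"

def endpoint(base: str, path: str) -> str:
    """Join a server base URL with an A2A endpoint path."""
    return base.rstrip("/") + path

def fetch_agent_card(base: str) -> dict:
    """Discover agent capabilities from the well-known agent card."""
    with request.urlopen(endpoint(base, AGENT_CARD_PATH)) as resp:
        return json.load(resp)

def invoke(base: str, text: str, thread_id: str = "default") -> dict:
    """Call the synchronous invoke endpoint (payload shape is assumed)."""
    req = request.Request(
        endpoint(base, "/api/v1/invoke"),
        data=json.dumps({"input": text, "thread_id": thread_id}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```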
Configuration is managed via environment variables and config/settings.py:
```bash
# API Keys
OPENAI_API_KEY=your_key
ANTHROPIC_API_KEY=your_key

# A2A Configuration
A2A_AGENT_NAME=My Agent
A2A_SERVER_PORT=8000

# Persistence
LANGGRAPH_CHECKPOINT_BACKEND=sqlite  # or redis, memory
SQLITE_DB_PATH=./data/checkpoints.db

# Logging
LOG_LEVEL=INFO
LOG_FORMAT=json  # or text
```

Structured logging with deployment-level features:
```python
from core.logging import get_logger
from core.logging.logger import LoggerContext, log_agent_event

logger = get_logger(__name__)

# Use trace context for request tracking
with LoggerContext() as trace_id:
    log_agent_event(logger, "agent_start", "my_agent", trace_id=trace_id)
    # ... agent execution
    log_agent_event(logger, "agent_complete", "my_agent", trace_id=trace_id)
```

Logs include:
- Trace IDs for request tracking
- Structured JSON output (production)
- Agent-specific events
- A2A protocol events
- Performance metrics
LangGraph checkpointing enables:
- Conversation state persistence
- Resume interrupted conversations
- Human-in-the-loop workflows
- Time-travel debugging
```python
from core.memory import get_checkpointer

checkpointer = get_checkpointer()  # Auto-configured from settings
agent = create_react_agent(llm, tools, checkpointer=checkpointer)

# Use with thread IDs
config = {"configurable": {"thread_id": "user-123"}}
result = await agent.ainvoke(input_data, config=config)
```

```python
from core.memory import MemoryManager

memory = MemoryManager(max_history_length=50)

# Add messages
messages = memory.add_message(messages, new_message)

# Get context window
context = memory.create_context_window(messages, window_size=20)
```

- Environment Variables: Set all required API keys and configuration
- Logging: Use JSON format for production (LOG_FORMAT=json)
- Persistence: Use SQLite or Redis for checkpointing
- Security: Implement authentication/authorization as needed
- Monitoring: Set up log aggregation and monitoring
- Scaling: Use a load balancer for multiple instances
```dockerfile
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

CMD ["python", "examples/run_langgraph_create_agent.py"]
```

```bash
# Development
ENVIRONMENT=development
LOG_LEVEL=DEBUG
LOG_FORMAT=text

# Production
ENVIRONMENT=production
LOG_LEVEL=INFO
LOG_FORMAT=json
```

```bash
# Install dev dependencies
pip install pytest pytest-asyncio

# Run tests
pytest tests/
```

- A2A Protocol Documentation
- LangGraph Documentation
- LangChain Documentation
- Building Multi-Agent Systems with LangGraph
- Benchmarking Multi-Agent Architectures
MIT License
Contributions welcome! Please follow the modular structure and include:
- Type hints
- Docstrings
- Logging
- Tests
- Documentation updates
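As an illustration only, a hypothetical helper written in this style (type hints, docstring, logging) might look like:

```python
import logging

logger = logging.getLogger(__name__)

def summarize_skills(skills: list[dict]) -> str:
    """Return a comma-separated list of skill names.

    Args:
        skills: Skill entries as they appear in an agent card.
    """
    names = [s.get("name", s.get("id", "?")) for s in skills]
    logger.debug("summarized %d skills", len(names))
    return ", ".join(names)
```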