SS12dev/LanggraphA2ATemplate
LangGraph A2A Multiagent Template

A modular, production-ready template for building multiagent systems using LangGraph and Google's Agent-to-Agent (A2A) protocol. This template demonstrates industry best practices for creating scalable, interoperable AI agents.

Features

  • Multiple Agent Patterns: Both LangGraph (create_agent and custom class) and LangChain implementations
  • A2A Protocol Integration: Full Google A2A protocol support for agent interoperability
  • Production-Ready: Includes logging, persistence, memory management, and error handling
  • Modular Architecture: Easy to extend and customize for specific use cases
  • Type-Safe: Fully typed with Pydantic models
  • Client and Server: Complete A2A client and server implementations
  • Checkpointing: Built-in support for conversation persistence

Architecture

Core Components

LanggraphA2ATemplate/
├── config/                 # Configuration management
│   ├── settings.py        # Pydantic settings with env support
│   └── logging_config.py  # Structured logging setup
├── core/                  # Core utilities
│   ├── a2a/              # A2A protocol implementation
│   │   ├── wrapper.py    # A2A agent wrapper
│   │   ├── agent_card.py # Agent card management
│   │   └── skills.py     # Skills registry
│   ├── memory/           # Memory and persistence
│   │   ├── persistence.py      # Checkpoint management
│   │   └── memory_manager.py   # Conversation memory
│   └── logging/          # Logging utilities
├── agents/               # Agent implementations
│   ├── langgraph/       # LangGraph agents
│   │   ├── create_agent_example.py    # create_react_agent approach
│   │   └── custom_class_example.py    # Custom supervisor approach
│   └── langchain/       # LangChain agents
│       └── langchain_agent_example.py # Traditional AgentExecutor
├── server/              # A2A server
│   └── a2a_server.py   # FastAPI server implementation
├── client/              # A2A client
│   └── a2a_client.py   # HTTP client for A2A agents
└── examples/            # Example scripts

Installation

Prerequisites

  • Python 3.10+
  • OpenAI API key (or Anthropic API key)

Setup

  1. Clone the repository:
git clone <repository-url>
cd LanggraphA2ATemplate
  2. Create a virtual environment:
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
  3. Install dependencies:
pip install -r requirements.txt
  4. Configure environment:
cp .env.example .env
# Edit .env and add your API keys

Quick Start

Example 1: LangGraph with create_react_agent

from agents.langgraph import create_simple_agent
from server import create_a2a_server

# Create agent
agent = create_simple_agent(
    model_name="gpt-4",
    use_checkpointing=True
)

# Create and run A2A server
server = create_a2a_server(
    agent,
    agent_name="Research Agent",
    agent_description="Agent with search and calculation capabilities"
)
server.run(port=8000)

Run the example:

python examples/run_langgraph_create_agent.py

Example 2: LangGraph Custom Multiagent System

from agents.langgraph import create_multiagent_system
from server import create_a2a_server

# Create supervisor-based multiagent system
system = create_multiagent_system(use_checkpointing=True)

# Expose via A2A server
server = create_a2a_server(
    system,
    agent_name="Supervisor System",
    agent_description="Multiagent system with specialized workers"
)
server.run(port=8001)

Run the example:

python examples/run_langgraph_custom.py

Example 3: LangChain Agent

from agents.langchain import create_langchain_agent, LangChainAgentWrapper
from server import create_a2a_server

# Create traditional LangChain agent
agent = create_langchain_agent(model_name="gpt-4")
wrapped = LangChainAgentWrapper(agent)

# Expose via A2A server
server = create_a2a_server(
    wrapped,
    agent_name="LangChain Agent",
    agent_description="Traditional LangChain agent with tools"
)
server.run(port=8002)

Run the example:

python examples/run_langchain_agent.py

Using the A2A Client

from client import A2AClient

# NOTE: the calls below use await, so run them inside an async function
# Initialize client
client = A2AClient("http://localhost:8000")

# Discover agent capabilities
card = await client.get_agent_card()
print(f"Agent: {card.name}")
print(f"Skills: {len(card.skills)}")

# Invoke agent
result = await client.invoke(
    "What is 25 * 16?",
    thread_id="my-conversation"
)
print(result["response"])

# Stream responses
async for chunk in client.stream("Search for LangGraph info"):
    print(chunk)

Agent Patterns

1. create_react_agent Approach (Simple)

Best for: Single-purpose agents with straightforward tool usage

from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

agent = create_react_agent(
    ChatOpenAI(model="gpt-4"),
    tools=[tool1, tool2],
    checkpointer=get_checkpointer()
)

2. Custom Class Approach (Advanced)

Best for: Complex multiagent systems with custom routing logic

from langgraph.graph import StateGraph
from langgraph.prebuilt import create_react_agent

class WorkerAgent:
    def __init__(self, name, tools):
        self.name = name
        self.agent = create_react_agent(llm, tools)  # llm defined elsewhere

    async def execute(self, state):
        return await self.agent.ainvoke(state)

# Create supervisor-based system
workflow = StateGraph(AgentState)  # AgentState: shared state schema
workflow.add_node("worker1", worker1.execute)
workflow.add_node("supervisor", supervisor.route)
# Add edges between supervisor and workers, then compile the graph

3. LangChain AgentExecutor (Traditional)

Best for: Backward compatibility or simple single-agent scenarios

from langchain.agents import create_openai_functions_agent, AgentExecutor

agent = create_openai_functions_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)

A2A Protocol

This template implements the Google A2A protocol for agent interoperability:

Agent Card

Agents expose their capabilities at /.well-known/agent-card.json:

{
  "name": "Research Agent",
  "description": "Agent with search capabilities",
  "version": "1.0.0",
  "skills": [
    {
      "id": "search_web",
      "name": "Web Search",
      "description": "Search the web for information"
    }
  ],
  "endpoints": {
    "invoke": "/api/v1/invoke",
    "stream": "/api/v1/stream"
  }
}
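
For illustration, the agent card can be parsed into typed objects on the client side. The sketch below uses only standard-library dataclasses (the template itself uses Pydantic models; field names match the JSON above):

```python
import json
from dataclasses import dataclass

@dataclass
class Skill:
    id: str
    name: str
    description: str

@dataclass
class AgentCard:
    name: str
    description: str
    version: str
    skills: list
    endpoints: dict

def parse_agent_card(raw: str) -> AgentCard:
    """Parse the JSON served at /.well-known/agent-card.json."""
    data = json.loads(raw)
    return AgentCard(
        name=data["name"],
        description=data["description"],
        version=data["version"],
        skills=[Skill(**s) for s in data.get("skills", [])],
        endpoints=data.get("endpoints", {}),
    )
```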

Endpoints

  • GET /.well-known/agent-card.json - Agent capabilities
  • POST /api/v1/invoke - Synchronous invocation
  • POST /api/v1/stream - Streaming responses
  • POST /api/v1/skills/{skill_id} - Direct skill invocation
  • GET /health - Health check
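
A client can centralize these paths in a small routing map. The helper below is a sketch (the names are illustrative, not part of the template's API):

```python
from urllib.parse import urljoin

# Paths from the endpoint list above
ENDPOINTS = {
    "agent_card": "/.well-known/agent-card.json",
    "invoke": "/api/v1/invoke",
    "stream": "/api/v1/stream",
    "health": "/health",
}

def skill_endpoint(skill_id: str) -> str:
    """Build the direct-skill path for a given skill id."""
    return f"/api/v1/skills/{skill_id}"

def endpoint_url(base: str, name: str) -> str:
    """Join a server base URL with one of the known endpoint paths."""
    return urljoin(base, ENDPOINTS[name])
```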

Configuration

Configuration is managed via environment variables and config/settings.py:

# API Keys
OPENAI_API_KEY=your_key
ANTHROPIC_API_KEY=your_key

# A2A Configuration
A2A_AGENT_NAME=My Agent
A2A_SERVER_PORT=8000

# Persistence
LANGGRAPH_CHECKPOINT_BACKEND=sqlite  # or redis, memory
SQLITE_DB_PATH=./data/checkpoints.db

# Logging
LOG_LEVEL=INFO
LOG_FORMAT=json  # or text
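
As a sketch of how such variables might be read into a typed settings object, here is a standard-library version (the template itself uses Pydantic settings; field names and defaults here are assumptions):

```python
import os
from dataclasses import dataclass

@dataclass
class Settings:
    # Mirrors the variables above; defaults are illustrative.
    openai_api_key: str = ""
    a2a_agent_name: str = "My Agent"
    a2a_server_port: int = 8000
    checkpoint_backend: str = "memory"
    log_level: str = "INFO"
    log_format: str = "text"

def load_settings(env=os.environ) -> Settings:
    """Read settings from an environment mapping, with defaults."""
    return Settings(
        openai_api_key=env.get("OPENAI_API_KEY", ""),
        a2a_agent_name=env.get("A2A_AGENT_NAME", "My Agent"),
        a2a_server_port=int(env.get("A2A_SERVER_PORT", "8000")),
        checkpoint_backend=env.get("LANGGRAPH_CHECKPOINT_BACKEND", "memory"),
        log_level=env.get("LOG_LEVEL", "INFO"),
        log_format=env.get("LOG_FORMAT", "text"),
    )
```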

Logging

Structured logging with production-grade features:

from core.logging import get_logger
from core.logging.logger import LoggerContext, log_agent_event

logger = get_logger(__name__)

# Use trace context for request tracking
with LoggerContext() as trace_id:
    log_agent_event(logger, "agent_start", "my_agent", trace_id=trace_id)
    # ... agent execution
    log_agent_event(logger, "agent_complete", "my_agent", trace_id=trace_id)

Logs include:

  • Trace IDs for request tracking
  • Structured JSON output (production)
  • Agent-specific events
  • A2A protocol events
  • Performance metrics
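
To illustrate the JSON output mode, here is a minimal stand-alone formatter that emits one JSON object per record and carries a trace ID passed via `extra` (a sketch, not the template's actual formatter):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object, including a
    trace_id if the caller attached one via the `extra` argument."""

    def format(self, record: logging.LogRecord) -> str:
        entry = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        trace_id = getattr(record, "trace_id", None)
        if trace_id is not None:
            entry["trace_id"] = trace_id
        return json.dumps(entry)
```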

Memory and Persistence

Checkpointing

LangGraph checkpointing enables:

  • Conversation state persistence
  • Resume interrupted conversations
  • Human-in-the-loop workflows
  • Time-travel debugging

from core.memory import get_checkpointer

checkpointer = get_checkpointer()  # Auto-configured from settings
agent = create_react_agent(llm, tools, checkpointer=checkpointer)

# Use with thread IDs
config = {"configurable": {"thread_id": "user-123"}}
result = await agent.ainvoke(input_data, config=config)

Memory Management

from core.memory import MemoryManager

memory = MemoryManager(max_history_length=50)

# Add messages
messages = memory.add_message(messages, new_message)

# Get context window
context = memory.create_context_window(messages, window_size=20)
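
Based on the calls above, the manager's behavior can be sketched as a simple sliding window over the message list (only the method names come from this README; the internals are assumptions):

```python
class SimpleMemoryManager:
    """Minimal sketch of the MemoryManager behavior: cap total history
    and expose a recent-context window over the message list."""

    def __init__(self, max_history_length: int = 50):
        self.max_history_length = max_history_length

    def add_message(self, messages: list, new_message) -> list:
        # Append, then drop the oldest messages once the cap is exceeded.
        messages = messages + [new_message]
        return messages[-self.max_history_length:]

    def create_context_window(self, messages: list, window_size: int = 20) -> list:
        # Return only the most recent `window_size` messages.
        return messages[-window_size:]
```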

Deployment

Production Checklist

  1. Environment Variables: Set all required API keys and configuration
  2. Logging: Use JSON format for production (LOG_FORMAT=json)
  3. Persistence: Use SQLite or Redis for checkpointing
  4. Security: Implement authentication/authorization as needed
  5. Monitoring: Set up log aggregation and monitoring
  6. Scaling: Use load balancer for multiple instances

Docker Deployment

FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

CMD ["python", "examples/run_langgraph_create_agent.py"]

Environment-Specific Settings

# Development
ENVIRONMENT=development
LOG_LEVEL=DEBUG
LOG_FORMAT=text

# Production
ENVIRONMENT=production
LOG_LEVEL=INFO
LOG_FORMAT=json

Testing

# Install dev dependencies
pip install pytest pytest-asyncio

# Run tests
pytest tests/
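
A test in this layout might look like the following (the file name, sample data, and assertions are illustrative, not taken from the repository):

```python
# tests/test_agent_card.py (illustrative)

SAMPLE_CARD = {
    "name": "Research Agent",
    "version": "1.0.0",
    "skills": [{"id": "search_web", "name": "Web Search", "description": "..."}],
}

def test_agent_card_has_required_fields():
    # Every agent card needs a name, a version, and skill ids.
    assert SAMPLE_CARD["name"]
    assert SAMPLE_CARD["version"]
    assert all("id" in skill for skill in SAMPLE_CARD["skills"])
```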

License

MIT License

Contributing

Contributions welcome! Please follow the modular structure and include:

  • Type hints
  • Docstrings
  • Logging
  • Tests
  • Documentation updates
