A sophisticated multi-agent system that combines the Model Context Protocol (MCP) with Agent-to-Agent (A2A) communication for intelligent text processing, web interaction, and automated workflows.
- Smart Agent Coordination: Automatic detection of user intent and routing to appropriate agents
- A2A Communication: Direct agent-to-agent communication for complex workflows
- MCP Integration: Standards-compliant Model Context Protocol implementation
- Web Search & Weather: DuckDuckGo integration with weather-specific queries
- Website Content Extraction: Headless browser-based text extraction
- Time & Date Services: NTP-synchronized accurate time information
- File Processing: Multi-format document conversion to PDF
- Data Anonymization: Intelligent PII detection and removal
- Text Optimizer: Professional email generation and tone adjustment
- Grammar Corrector (Lektor): German/English grammar and spelling correction
- Sentiment Analysis: Emotion detection and sentiment scoring
- Query Refactoring: LLM-optimized query reformulation
- User Interface Agent: Intelligent request interpretation and routing
- Gradio Web Interface: User-friendly browser-based interaction
- RESTful API: Programmatic access to all services
- CLI Integration: Command-line tool compatibility
```
┌───────────────────────────────────────────────────────────────┐
│                        A2A-MCP System                         │
├───────────────────────────────────────────────────────────────┤
│  Gradio Interface (Port 7860)                                 │
│  ├── File Upload & Processing                                 │
│  ├── Natural Language Input                                   │
│  └── Tonality Selection                                       │
├───────────────────────────────────────────────────────────────┤
│  User Interface Agent (Intelligent Router)                    │
│  ├── Intent Detection                                         │
│  ├── Agent Selection                                          │
│  └── Response Coordination                                    │
├───────────────────────────────────────────────────────────────┤
│  MCP Server (Port 8000)          A2A Registry                 │
│  ├── Web Search & Weather        ├── Optimizer                │
│  ├── Website Extraction          ├── Lektor                   │
│  ├── Time/Date Services          ├── Sentiment                │
│  ├── File Conversion             ├── Query Ref                │
│  └── Anonymization               └── UI Agent                 │
├───────────────────────────────────────────────────────────────┤
│  Backend Services                                             │
│  ├── Selenium WebDriver                                       │
│  ├── DuckDuckGo Search                                        │
│  ├── NTP Time Sync                                            │
│  ├── PDF Conversion                                           │
│  └── LLM Integration (Ollama)                                 │
└───────────────────────────────────────────────────────────────┘
```
- Python 3.11+
- Ollama with the `qwen2.5:latest` model
- Modern web browser (for the Gradio interface)
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd a2a_mcp
  ```

- Install dependencies:

  ```bash
  # Using uv (recommended)
  uv sync

  # Or using pip
  pip install -r requirements.txt
  ```

- Configure the environment:

  ```bash
  cp .env.example .env
  # Edit .env with your settings
  ```

- Start Ollama (if not running):

  ```bash
  ollama serve
  ollama pull qwen2.5:latest
  ```
Start all services with the integrated launcher:

```bash
python launcher.py
```

This will automatically start:

- MCP Server on http://localhost:8000
- Gradio Interface on http://localhost:7860
- A2A Agent Registry (embedded)

Open your browser at http://localhost:7860 and try these example requests:
- "Wie wird das Wetter morgen in Berlin?" (What will the weather be like in Berlin tomorrow?)
  → Automatic web search with weather optimization
- "Korrigiere diesen Text: Das ist ein sehr schlechte Satz mit viele Fehler." (Correct this text: a deliberately error-ridden German sentence)
  → Grammar correction via the Lektor agent
- "Optimiere diesen Text für eine professionelle E-Mail: Das Produkt ist Schrott!" (Optimize this text for a professional e-mail: The product is junk!)
  → Professional email generation via the Optimizer agent
- "Analysiere das Sentiment: Ich bin so glücklich über dieses großartige Produkt!" (Analyze the sentiment: I am so happy about this great product!)
  → Sentiment analysis with emotion detection
- "Wie spät ist es jetzt?" (What time is it now?)
  → NTP-synchronized time retrieval
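The routing behind these examples is performed by the User Interface agent's LLM-based intent detection. As a rough illustration of the idea only, a keyword-based stand-in might look like this (the keyword rules and agent names here are illustrative, not the project's actual logic):

```python
# Simplified sketch of intent detection. The real User Interface agent
# uses an LLM for this; keyword matching only illustrates the routing idea.
def detect_intent(text: str) -> str:
    rules = [
        (("korrigiere", "fix grammar", "lektor"), "lektor"),
        (("optimiere", "e-mail", "email", "freundlicher"), "optimizer"),
        (("sentiment", "analysiere"), "sentiment"),
        (("wetter", "weather", "suche", "search"), "web_search"),
        (("wie spät", "time", "uhr"), "time"),
    ]
    lowered = text.lower()
    for keywords, agent in rules:
        if any(keyword in lowered for keyword in keywords):
            return agent
    return "user_interface"  # fall back to the coordinating agent
```
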
```python
import asyncio
import httpx

async def main():
    # Direct MCP tool call
    async with httpx.AsyncClient() as client:
        response = await client.post(
            "http://localhost:8000/mcp/call-tool",
            json={
                "name": "duckduckgo_search",
                "arguments": {"query": "weather Berlin", "max_results": 5},
            },
        )
        print(response.json())

asyncio.run(main())
```
```python
import asyncio
from agent_server.user_interface import process_input

async def main():
    # Intelligent request processing
    result = await process_input("Mache diesen Text freundlicher: Ihre Anfrage wurde abgelehnt.")
    print(result.final_result)

asyncio.run(main())
```
```ini
# LLM Configuration
BASE_URL=http://localhost:11434/v1
API_KEY=ollama
USER_INTERFACE_MODEL=qwen2.5:latest
OPTIMIZER_MODEL=qwen2.5:latest
LEKTOR_MODEL=qwen2.5:latest
SENTIMENT_MODEL=qwen2.5:latest
QUERY_REF_MODEL=qwen2.5:latest

# Server Configuration
SERVER_HOST=localhost
SERVER_PORT=8000
SERVER_SCHEME=http
GRADIO_HOST=127.0.0.1
GRADIO_PORT=7860

# Debug Options
DEBUG_AGENT_RESPONSES=false
DEBUG_A2A_CALLS=false

# Service Configuration
ANONYMIZER_USE_LLM=false
ANONYMIZER_LLM_ENDPOINT=
ANONYMIZER_LLM_API_KEY=
ANONYMIZER_LLM_MODEL=
```
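For reference, these variables could be read in Python along the following lines. This is a sketch, not the project's actual config loader: the attribute names mirror the `.env` keys above, and the defaults are the example values shown.

```python
import os
from dataclasses import dataclass, field

@dataclass
class Settings:
    # Defaults mirror the .env example; the environment overrides each value.
    base_url: str = field(default_factory=lambda: os.getenv("BASE_URL", "http://localhost:11434/v1"))
    api_key: str = field(default_factory=lambda: os.getenv("API_KEY", "ollama"))
    server_host: str = field(default_factory=lambda: os.getenv("SERVER_HOST", "localhost"))
    server_port: int = field(default_factory=lambda: int(os.getenv("SERVER_PORT", "8000")))
    gradio_port: int = field(default_factory=lambda: int(os.getenv("GRADIO_PORT", "7860")))
    debug_a2a_calls: bool = field(default_factory=lambda: os.getenv("DEBUG_A2A_CALLS", "false").lower() == "true")

settings = Settings()
```
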
Ensure these models are available in Ollama:

```bash
ollama pull qwen2.5:latest  # Primary model for all agents

# Or configure different models per agent in .env
```
| Tool | Description | Parameters |
|---|---|---|
| `get_current_time` | NTP-synchronized UTC time | None |
| `duckduckgo_search` | Web search with weather optimization | `query`, `max_results` |
| `extract_website_text` | Extract main content from URLs | `url` |
| `anonymize_text` | Remove PII from text | `text` |
| `convert_to_pdf` | Convert files to PDF format | `input_filepath`, `output_directory` |
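Every tool is invoked through the same `/mcp/call-tool` endpoint shown in the API example earlier; only the `name` and `arguments` fields change. A tiny helper for building the request body (an illustration; the payload shape follows the `duckduckgo_search` example above):

```python
def build_tool_call(name: str, **arguments) -> dict:
    # Request body for POST /mcp/call-tool, matching the shape used
    # in the duckduckgo_search example earlier in this README.
    return {"name": name, "arguments": arguments}

payload = build_tool_call("duckduckgo_search", query="weather Berlin", max_results=5)
```
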
| Agent | Purpose | Input | Output |
|---|---|---|---|
| Optimizer | Professional text optimization | Raw text + tonality | Polished professional text |
| Lektor | Grammar & spelling correction | Text with errors | Corrected text |
| Sentiment | Emotion & sentiment analysis | Any text | Sentiment score + emotions |
| Query Ref | LLM query optimization | User query | Optimized query |
| User Interface | Intelligent request routing | Natural language | Coordinated response |
- Text files: `.txt`, `.md`, `.py`, `.csv`, `.log`, `.json`, `.xml`, `.html`
- Images: `.jpg`, `.png`, `.gif`, `.bmp`, `.tiff`, `.webp`
- Office docs: `.docx`, `.xlsx`, `.pptx` (requires LibreOffice)
```
User Input → Intent Detection → Agent Selection → Processing → Response
     ↓              ↓                 ↓              ↓            ↓
"Fix grammar" → Text Processing → Lektor Agent → Correction → Clean Text
```

Example multi-agent chains:

```
Complaint Text → Optimizer Agent → Professional Email → Lektor Check → Final Email
Raw Text → Query Refactor → Optimization → Grammar Check → Sentiment Analysis
```
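Because every agent takes text in and returns text out, these chains reduce to plain async composition. A sketch with stub agents (the real agents are LLM-backed; the stubs below only mimic the shape of the complaint-to-email chain):

```python
import asyncio

# Stub agents standing in for the real LLM-backed ones. Each takes text
# and returns transformed text, so chains compose as sequential awaits.
async def optimizer_agent(text: str, tonality: str = "professionell") -> str:
    return f"[{tonality}] {text}"  # placeholder for professional rewriting

async def lektor_agent(text: str) -> str:
    return text  # placeholder for grammar correction

async def complaint_to_email(raw: str) -> str:
    # Complaint Text -> Optimizer Agent -> Lektor Check -> Final Email
    drafted = await optimizer_agent(raw)
    return await lektor_agent(drafted)

print(asyncio.run(complaint_to_email("Das Produkt ist Schrott!")))
```
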
```
a2a_mcp/
├── agent_server/          # A2A agents
│   ├── user_interface.py  # Main coordination agent
│   ├── optimizer.py       # Text optimization
│   ├── lektor.py          # Grammar correction
│   ├── sentiment.py       # Sentiment analysis
│   └── query_ref.py       # Query refactoring
├── mcp_services/          # MCP service implementations
│   ├── mcp_search/        # DuckDuckGo integration
│   ├── mcp_website/       # Web scraping
│   ├── mcp_time/          # NTP time services
│   ├── mcp_anonymizer/    # Data anonymization
│   └── mcp_fileconverter/ # PDF conversion
├── mcp_main.py            # MCP server
├── launcher.py            # Service orchestrator
├── gradio_interface.py    # Web UI
├── a2a_server.py          # A2A registry
└── uploaded_files/        # File upload storage
```
- Create the agent file in `agent_server/`:

  ```python
  from pydantic_ai import Agent
  from pydantic import BaseModel

  class MyAgentResponse(BaseModel):
      result: str

  async def my_agent_a2a_function(messages: list) -> MyAgentResponse:
      # Implementation: process the incoming A2A messages
      # and return a structured response
      return MyAgentResponse(result="...")
  ```

- Register it in the A2A server:

  ```python
  # In a2a_server.py
  registry.register_a2a_agent("my_agent", my_agent_a2a_function)
  ```

- Add the agent to the User Interface agent's routing logic
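Conceptually, the registration step boils down to a name-to-callable map. A minimal in-process sketch of that pattern (the `A2ARegistry` class here is hypothetical; the actual registry in `a2a_server.py` is more involved):

```python
import asyncio

class A2ARegistry:
    """Minimal sketch of an agent registry: a name -> async callable map."""

    def __init__(self):
        self._agents = {}

    def register_a2a_agent(self, name, func):
        self._agents[name] = func

    async def call(self, name, messages):
        # Dispatch a message list to the registered agent by name
        return await self._agents[name](messages)

async def my_agent_a2a_function(messages: list) -> str:
    return f"handled {len(messages)} message(s)"

registry = A2ARegistry()
registry.register_a2a_agent("my_agent", my_agent_a2a_function)
print(asyncio.run(registry.call("my_agent", [{"role": "user", "content": "hi"}])))
```
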
- Implement the service in `mcp_services/`
- Add an endpoint in `mcp_main.py`
- Register the tool in the MCP configuration
```bash
# Test individual agents
python agent_server/sentiment.py
python agent_server/optimizer.py

# Test the MCP server
curl http://localhost:8000/health

# Test the full workflow
python a2a_server.py
```
```python
# Sentiment analysis
await sentiment_agent("I love this amazing product!")

# Text optimization
await optimizer_agent("Das Produkt ist Schrott!", tonality="professionell")

# Grammar correction
await lektor_agent("Das ist ein sehr schlechte Satz.")
```
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Commit your changes: `git commit -m 'Add amazing feature'`
- Push the branch: `git push origin feature/amazing-feature`
- Open a Pull Request
This project is licensed under the AGPL v3; see the License.md file for details.
- Pydantic AI for the agent framework
- Model Context Protocol for the standards
- Gradio for the web interface
- Ollama for local LLM support
Built with ❤️ for intelligent multi-agent workflows