A Model Context Protocol (MCP) server with Server-Sent Events (SSE) transport that connects Large Language Models to Wikidata's structured knowledge base. Features an optimized hybrid architecture that balances speed, accuracy, and verifiability by using fast basic tools for simple queries and advanced orchestration only for complex temporal/relational queries.
- Fast Basic Tools: 140-250 ms for simple entity/property searches
- Advanced Orchestration: 1-11 s for complex temporal queries (when needed)
- 50x Performance Difference: empirically measured and optimized
- Hybrid Approach: the right tool for each query type
- Graceful Degradation: works with or without a Vector DB API key
- `search_wikidata_entity`: Find entities by name (140-250 ms)
- `search_wikidata_property`: Find properties by name (~200 ms)
- `get_wikidata_metadata`: Entity labels and descriptions (~200 ms)
- `get_wikidata_properties`: All entity properties (~200 ms)
- `execute_wikidata_sparql`: Direct SPARQL queries (~200 ms)
- `query_wikidata_complex`: Temporal/relational queries (1-11 s)
  - Use for: "last 3 popes", "recent presidents of France"
  - Avoid for: simple entity searches (use the basic tools instead)
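Under the hood, SPARQL-backed tools such as `execute_wikidata_sparql` boil down to an HTTP GET against the public Wikidata Query Service. The sketch below is illustrative only (it shows the public endpoint's request format, not this server's actual code):

```python
# Sketch: assemble a GET request URL for the public Wikidata Query Service,
# the kind of request a SPARQL tool ultimately performs. Illustrative only.
import urllib.parse

WDQS_ENDPOINT = "https://query.wikidata.org/sparql"

def build_sparql_url(query: str) -> str:
    """Return a GET URL that asks the query service for JSON results."""
    params = urllib.parse.urlencode({"query": query, "format": "json"})
    return f"{WDQS_ENDPOINT}?{params}"

# English label of Douglas Adams (Q42)
url = build_sparql_url(
    'SELECT ?label WHERE { wd:Q42 rdfs:label ?label . '
    'FILTER(LANG(?label) = "en") }'
)
print(url)
```

Fetching that URL with an `Accept: application/sparql-results+json`-capable client returns the bindings as JSON.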
The server is deployed and accessible at:
- URL: https://wikidata-mcp-mirror.onrender.com
- MCP Endpoint: https://wikidata-mcp-mirror.onrender.com/mcp
- Health Check: https://wikidata-mcp-mirror.onrender.com/health
To use this server with Claude Desktop:

1. Install mcp-remote (if not already installed):

   ```bash
   npm install -g @modelcontextprotocol/mcp-remote
   ```

2. Edit the Claude Desktop configuration file located at `~/Library/Application Support/Claude/claude_desktop_config.json`.

3. Configure it to use the remote MCP server:

   ```json
   {
     "mcpServers": {
       "Wikidata MCP": {
         "command": "npx",
         "args": [
           "mcp-remote",
           "https://wikidata-mcp-mirror.onrender.com/mcp"
         ]
       }
     }
   }
   ```

4. Restart Claude Desktop.
When using Claude, you can now access Wikidata knowledge through the configured MCP server.
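The configuration edit above can also be scripted. The helper below is a hypothetical convenience, not part of this repository; it merges the server entry into an existing `claude_desktop_config.json` without clobbering other configured servers:

```python
# Sketch: merge the Wikidata MCP entry into claude_desktop_config.json.
# The JSON shape follows the Claude Desktop instructions above; this helper
# itself is illustrative, not part of the repository.
import json
import tempfile
from pathlib import Path

SERVER_ENTRY = {
    "command": "npx",
    "args": ["mcp-remote", "https://wikidata-mcp-mirror.onrender.com/mcp"],
}

def add_wikidata_server(config_path: Path) -> dict:
    """Add (or overwrite) the 'Wikidata MCP' entry, keeping other servers."""
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    config.setdefault("mcpServers", {})["Wikidata MCP"] = SERVER_ENTRY
    config_path.write_text(json.dumps(config, indent=2))
    return config

# Demonstrated against a temporary file rather than the real config path:
tmp = Path(tempfile.mkdtemp()) / "claude_desktop_config.json"
merged = add_wikidata_server(tmp)
print(merged["mcpServers"]["Wikidata MCP"]["command"])  # npx
```

Pointing `config_path` at the real file under `~/Library/Application Support/Claude/` performs the edit from step 2.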
- Create a new Web Service in your Render dashboard
- Connect your GitHub repository
- Configure the service:
  - Build Command: `pip install -e .`
  - Start Command: `python -m wikidata_mcp.api`
- Set Environment Variables:
  - Add all variables from `.env.example`
  - For production, set `DEBUG=false`
  - Make sure to set a proper `WIKIDATA_VECTORDB_API_KEY`
- Deploy
The service will be available at https://your-service-name.onrender.com
- Python 3.10+
- Virtual environment tool (venv, conda, etc.)
- Vector DB API key (optional; enables enhanced semantic search)
Create a `.env` file in the project root with the following variables:

```bash
# Required for Vector DB integration
WIKIDATA_VECTORDB_API_KEY=your_api_key_here
```
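If you want to see what loading a file in this simple `KEY=VALUE` format involves, the few lines of stdlib Python below are a minimal sketch; a real project would typically rely on a library such as python-dotenv instead:

```python
# Minimal .env loader sketch (KEY=VALUE lines, '#' comments). Shown only to
# illustrate the file format; real projects usually use python-dotenv.
import os

def load_env(text: str) -> dict:
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

sample = (
    "# Required for Vector DB integration\n"
    "WIKIDATA_VECTORDB_API_KEY=secret123\n"
    "DEBUG=false\n"
)
env = load_env(sample)
os.environ.update(env)
print(env["DEBUG"])  # false
```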
1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/wikidata-mcp-mirror.git
   cd wikidata-mcp-mirror
   ```

2. Create and activate a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: .\venv\Scripts\activate
   ```

3. Install the required dependencies:

   ```bash
   pip install -e .
   ```

4. Create a `.env` file based on `.env.example` and configure your environment variables:

   ```bash
   cp .env.example .env
   # Edit .env with your configuration
   ```

5. Run the application:

   ```bash
   # Development
   python -m wikidata_mcp.api

   # Production (with Gunicorn)
   gunicorn --bind 0.0.0.0:8000 --workers 4 --timeout 120 --keep-alive 5 --worker-class uvicorn.workers.UvicornWorker wikidata_mcp.api:app
   ```
The server will start on `http://localhost:8000` by default, with the following endpoints:

- `GET /health`: Health check
- `GET /messages/`: SSE endpoint for MCP communication
- `GET /docs`: Interactive API documentation (if enabled)
- `GET /metrics`: Prometheus metrics (if enabled)
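The `/messages/` endpoint speaks Server-Sent Events: the response body is a stream of `event:`/`data:` lines, with a blank line terminating each event. The parser below is a minimal sketch of that wire format (clients would normally use an SSE library, and the session id shown is hypothetical):

```python
# Sketch: parse the SSE wire format used by the /messages/ endpoint.
# Events are blocks of "field: value" lines separated by blank lines.
def parse_sse(stream: str):
    event = {"event": "message", "data": []}
    for line in stream.splitlines():
        if not line:  # blank line ends the current event
            if event["data"]:
                yield {"event": event["event"], "data": "\n".join(event["data"])}
            event = {"event": "message", "data": []}
        elif line.startswith("event:"):
            event["event"] = line[len("event:"):].strip()
        elif line.startswith("data:"):
            event["data"].append(line[len("data:"):].strip())

# Example stream of the kind a server sends on connect (session id is
# hypothetical):
sample = "event: endpoint\ndata: /messages/?session_id=abc123\n\n"
events = list(parse_sse(sample))
print(events[0]["data"])  # /messages/?session_id=abc123
```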
| Variable | Default | Description |
|---|---|---|
| `PORT` | `8000` | Port to run the server on |
| `WORKERS` | `4` | Number of worker processes |
| `TIMEOUT` | `120` | Worker timeout in seconds |
| `KEEPALIVE` | `5` | Keep-alive timeout in seconds |
| `DEBUG` | `false` | Enable debug mode |
| `LOG_LEVEL` | `INFO` | Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL) |
| `USE_VECTOR_DB` | `true` | Enable/disable vector DB integration |
| `USE_CACHE` | `true` | Enable/disable caching system |
| `USE_FEEDBACK` | `true` | Enable/disable feedback system |
| `CACHE_TTL_SECONDS` | `3600` | Cache time-to-live in seconds |
| `CACHE_MAX_SIZE` | `1000` | Maximum number of items in cache |
| `WIKIDATA_VECTORDB_API_KEY` | (none) | API key for the vector DB service |
1. Build the Docker image:

   ```bash
   docker build -t wikidata-mcp .
   ```

2. Run the container:

   ```bash
   docker run -p 8000:8000 --env-file .env wikidata-mcp
   ```

With Docker Compose:

1. Start the application:

   ```bash
   docker-compose up --build
   ```

2. For production, use the production compose file:

   ```bash
   docker-compose -f docker-compose.prod.yml up --build -d
   ```
The service exposes Prometheus metrics at `/metrics` when the `PROMETHEUS_METRICS` environment variable is set to `true`.
```bash
curl http://localhost:8000/health
curl http://localhost:8000/metrics
```

Run the test suite with:

```bash
# Run all tests
pytest

# Run specific test file
pytest tests/orchestration/test_query_orchestrator.py -v

# Run with coverage report
pytest --cov=wikidata_mcp tests/
```

To test the Vector DB integration, you'll need to set the `WIKIDATA_VECTORDB_API_KEY` environment variable:
```bash
WIKIDATA_VECTORDB_API_KEY=your_key_here pytest tests/orchestration/test_vectordb_integration.py -v
```

You can also test the server using the included test client:

```bash
python test_mcp_client.py
```

Or manually with curl:
```bash
# Connect to the SSE endpoint
curl -N -H "Accept: text/event-stream" https://wikidata-mcp-mirror.onrender.com/messages/

# Send a message (replace YOUR_SESSION_ID with the one received from the SSE endpoint)
curl -X POST "https://wikidata-mcp-mirror.onrender.com/messages/?session_id=YOUR_SESSION_ID" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"test-client","version":"0.1.0"}},"id":0}'
```

This server is configured for deployment on Render.com using the `render.yaml` file.
- Build Command: `pip install -r requirements.txt`
- Start Command: `gunicorn -k uvicorn.workers.UvicornWorker server_sse:app`
- Environment Variables: `PORT: 10000`
- Health Check Path: `/health`
The repository includes a Dockerfile that's used by Render.com for containerized deployment. This allows the server to run in a consistent environment with all dependencies properly installed.
- Fork or clone this repository to your GitHub account
- Create a new Web Service on Render.com
- Connect your GitHub repository
- Render will automatically detect the `render.yaml` file and configure the deployment
- Click "Create Web Service"
After deployment, you can access your server at the URL provided by Render.com.
The server is built using:
- FastAPI: For handling HTTP requests and routing
- SSE Transport: For bidirectional communication with clients
- MCP Framework: For implementing the Model Context Protocol
- Wikidata API: For accessing Wikidata's knowledge base
- `server_sse.py`: Main server implementation with SSE transport
- `wikidata_api.py`: Functions for interacting with Wikidata's API and SPARQL endpoint
- `requirements.txt`: Dependencies for the project
- `Dockerfile`: Container configuration for Docker deployment on Render
- `render.yaml`: Configuration for deployment on Render.com
- `test_mcp_client.py`: Test client for verifying server functionality
The server provides the following MCP tools:
- `search_wikidata_entity`: Search for entities by name
- `search_wikidata_property`: Search for properties by name
- `get_wikidata_metadata`: Get entity metadata (label, description)
- `get_wikidata_properties`: Get all properties for an entity
- `execute_wikidata_sparql`: Execute a SPARQL query
- `find_entity_facts`: Search for an entity and find its facts
- `get_related_entities`: Find entities related to a given entity
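Over MCP, invoking one of these tools is a JSON-RPC 2.0 request. The sketch below builds such an envelope for `search_wikidata_entity`; the `tools/call` method and params shape follow the MCP specification, while the argument name `query` is an assumption for illustration:

```python
# Sketch: build a JSON-RPC 2.0 "tools/call" request for one of the tools
# above. The envelope shape follows the MCP specification; the argument
# name "query" is assumed for illustration.
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize a tools/call request ready to POST to the messages endpoint."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

payload = make_tool_call(1, "search_wikidata_entity", {"query": "Douglas Adams"})
print(payload)
```

The resulting string is what a client would send in the body of a POST to the session's messages URL.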
This project is licensed under the MIT License - see the LICENSE file for details.
- Based on the Model Context Protocol (MCP) specification
- Uses Wikidata as the knowledge source
- Inspired by the MCP examples from the official documentation