An MCP (Model Context Protocol) server that enables AI applications to outsource tasks to various model providers through a unified interface.
Compatible with any AI tool that supports the Model Context Protocol, including Claude Desktop, Cline, and other MCP-enabled applications. Built with FastMCP for the MCP server implementation and Agno for AI agent capabilities.
- 🤖 Multi-Provider Support: Access 20+ AI providers through a single interface
- 📝 Text Generation: Generate text using models from OpenAI, Anthropic, Google, and more
- 🎨 Image Generation: Create images using DALL-E 3 and DALL-E 2
- 🔧 Simple API: Consistent interface with just three parameters: provider, model, and prompt
- 🔑 Flexible Authentication: Only configure API keys for the providers you use
Add the following configuration to your MCP client. Consult your MCP client's documentation for specific configuration details.
```json
{
  "mcpServers": {
    "outsource-mcp": {
      "command": "uvx",
      "args": ["--from", "git+https://github.com/gwbischof/outsource-mcp.git", "outsource-mcp"],
      "env": {
        "OPENAI_API_KEY": "your-openai-key",
        "ANTHROPIC_API_KEY": "your-anthropic-key",
        "GOOGLE_API_KEY": "your-google-key",
        "GROQ_API_KEY": "your-groq-key",
        "DEEPSEEK_API_KEY": "your-deepseek-key",
        "XAI_API_KEY": "your-xai-key",
        "PERPLEXITY_API_KEY": "your-perplexity-key",
        "COHERE_API_KEY": "your-cohere-key",
        "FIREWORKS_API_KEY": "your-fireworks-key",
        "HUGGINGFACE_API_KEY": "your-huggingface-key",
        "MISTRAL_API_KEY": "your-mistral-key",
        "NVIDIA_API_KEY": "your-nvidia-key",
        "OLLAMA_HOST": "http://localhost:11434",
        "OPENROUTER_API_KEY": "your-openrouter-key",
        "TOGETHER_API_KEY": "your-together-key",
        "CEREBRAS_API_KEY": "your-cerebras-key",
        "DEEPINFRA_API_KEY": "your-deepinfra-key",
        "SAMBANOVA_API_KEY": "your-sambanova-key"
      }
    }
  }
}
```

Note: The environment variables are optional. Only include the API keys for the providers you want to use.
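If you only use a single provider, the configuration can be much smaller. For example, a minimal sketch with only OpenAI configured (the key value is a placeholder):

```json
{
  "mcpServers": {
    "outsource-mcp": {
      "command": "uvx",
      "args": ["--from", "git+https://github.com/gwbischof/outsource-mcp.git", "outsource-mcp"],
      "env": {
        "OPENAI_API_KEY": "your-openai-key"
      }
    }
  }
}
```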
Once installed and configured, you can use the tools in your MCP client:
- Generate text: Use the `outsource_text` tool with provider "openai", model "gpt-4o-mini", and prompt "Write a haiku about coding"
- Generate images: Use the `outsource_image` tool with provider "openai", model "dall-e-3", and prompt "A futuristic city skyline at sunset"
Creates an Agno agent with a specified provider and model to generate text responses.
Arguments:
- `provider`: The provider name (e.g., "openai", "anthropic", "google", "groq")
- `model`: The model name (e.g., "gpt-4o", "claude-3-5-sonnet-20241022", "gemini-2.0-flash-exp")
- `prompt`: The text prompt to send to the model
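Your MCP client constructs the protocol messages for you, but for reference, a `tools/call` request for this tool looks roughly like the following (JSON-RPC shape per the MCP specification; the `id` value is arbitrary):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "outsource_text",
    "arguments": {
      "provider": "openai",
      "model": "gpt-4o-mini",
      "prompt": "Write a haiku about coding"
    }
  }
}
```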
Generates images using AI models.
Arguments:
- `provider`: The provider name (currently only "openai" is supported)
- `model`: The model name ("dall-e-3" or "dall-e-2")
- `prompt`: The image generation prompt
Returns the URL of the generated image.
Note: Image generation is currently only supported by OpenAI models (DALL-E 2 and DALL-E 3). Other providers only support text generation.
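Like any MCP tool, `outsource_image` is invoked with a `tools/call` request; a sketch of the payload, assuming the JSON-RPC shape from the MCP specification (the `id` value is arbitrary):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "outsource_image",
    "arguments": {
      "provider": "openai",
      "model": "dall-e-3",
      "prompt": "A serene Japanese garden with cherry blossoms"
    }
  }
}
```

The tool's result contains the URL of the generated image, which your client can then fetch or display.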
The following providers are supported. Use the provider name (in parentheses) as the provider argument:
- OpenAI (`openai`) - GPT-4, GPT-3.5, DALL-E, etc. | Models
- Anthropic (`anthropic`) - Claude 3.5, Claude 3, etc. | Models
- Google (`google`) - Gemini Pro, Gemini Flash, etc. | Models
- Groq (`groq`) - Llama 3, Mixtral, etc. | Models
- DeepSeek (`deepseek`) - DeepSeek Chat & Coder | Models
- xAI (`xai`) - Grok models | Models
- Perplexity (`perplexity`) - Sonar models | Models
- Cohere (`cohere`) - Command models | Models
- Mistral AI (`mistral`) - Mistral Large, Medium, Small | Models
- NVIDIA (`nvidia`) - Various models | Models
- HuggingFace (`huggingface`) - Open source models | Models
- Ollama (`ollama`) - Local models | Models
- Fireworks AI (`fireworks`) - Fast inference | Models
- OpenRouter (`openrouter`) - Multi-provider access | Models
- Together AI (`together`) - Open source models | Models
- Cerebras (`cerebras`) - Fast inference | Models
- DeepInfra (`deepinfra`) - Optimized models | Models
- SambaNova (`sambanova`) - Enterprise models | Models
- AWS Bedrock (`aws` or `bedrock`) - AWS-hosted models | Models
- Azure AI (`azure`) - Azure-hosted models | Models
- IBM WatsonX (`ibm` or `watsonx`) - IBM models | Models
- LiteLLM (`litellm`) - Universal interface | Models
- Vercel v0 (`vercel` or `v0`) - Vercel AI | Models
- Meta Llama (`meta`) - Direct Meta access | Models
Each provider requires its corresponding API key:
| Provider | Environment Variable | Example |
|---|---|---|
| OpenAI | `OPENAI_API_KEY` | `sk-...` |
| Anthropic | `ANTHROPIC_API_KEY` | `sk-ant-...` |
| Google | `GOOGLE_API_KEY` | `AIza...` |
| Groq | `GROQ_API_KEY` | `gsk_...` |
| DeepSeek | `DEEPSEEK_API_KEY` | `sk-...` |
| xAI | `XAI_API_KEY` | `xai-...` |
| Perplexity | `PERPLEXITY_API_KEY` | `pplx-...` |
| Cohere | `COHERE_API_KEY` | `...` |
| Fireworks | `FIREWORKS_API_KEY` | `...` |
| HuggingFace | `HUGGINGFACE_API_KEY` | `hf_...` |
| Mistral | `MISTRAL_API_KEY` | `...` |
| NVIDIA | `NVIDIA_API_KEY` | `nvapi-...` |
| Ollama | `OLLAMA_HOST` | `http://localhost:11434` |
| OpenRouter | `OPENROUTER_API_KEY` | `...` |
| Together | `TOGETHER_API_KEY` | `...` |
| Cerebras | `CEREBRAS_API_KEY` | `...` |
| DeepInfra | `DEEPINFRA_API_KEY` | `...` |
| SambaNova | `SAMBANOVA_API_KEY` | `...` |
| AWS Bedrock | AWS credentials | Via AWS CLI/SDK |
| Azure AI | Azure credentials | Via Azure CLI/SDK |
| IBM WatsonX | `IBM_WATSONX_API_KEY` | `...` |
| Meta Llama | `LLAMA_API_KEY` | `...` |
Note: Only configure the API keys for providers you plan to use.
```
# Using OpenAI
provider: openai
model: gpt-4o-mini
prompt: Write a haiku about coding

# Using Anthropic
provider: anthropic
model: claude-3-5-sonnet-20241022
prompt: Explain quantum computing in simple terms

# Using Google
provider: google
model: gemini-2.0-flash-exp
prompt: Create a recipe for chocolate chip cookies

# Using DALL-E 3
provider: openai
model: dall-e-3
prompt: A serene Japanese garden with cherry blossoms

# Using DALL-E 2
provider: openai
model: dall-e-2
prompt: A futuristic cityscape at sunset
```
- Python 3.11 or higher
- uv package manager
```bash
git clone https://github.com/gwbischof/outsource-mcp.git
cd outsource-mcp
uv sync
```

The MCP Inspector allows you to test the server interactively:

```bash
mcp dev server.py
```

The test suite includes integration tests that verify both text and image generation:

```bash
# Run all tests
uv run pytest
```

Note: Integration tests require API keys to be set in your environment.
- **"Error: Unknown provider"**
  - Check that you're using a supported provider name from the list above
  - Provider names are case-insensitive
- **"Error: OpenAI API error"**
  - Verify your API key is correctly set in the environment variables
  - Check that your API key has access to the requested model
  - Ensure you have sufficient credits/quota
- **"Error: No image was generated"**
  - This can happen if the image generation request fails
  - Try a simpler prompt or a different model (dall-e-2 vs. dall-e-3)
- **Environment variables not working**
  - Make sure to restart your MCP client after updating the configuration
  - Verify the configuration file location for your specific MCP client
  - Check that the environment variables are properly formatted in the configuration
Contributions are welcome! Please feel free to submit a Pull Request.