A Model Context Protocol (MCP) server that scrapes web content and converts it to Markdown.
This MCP server provides a simple tool for scraping web content and converting it to Markdown format. It uses:
- Playwright: For headless browser automation to handle modern web pages including JavaScript-heavy sites
- BeautifulSoup: For HTML parsing and cleanup
- pypandoc: For high-quality HTML to Markdown conversion
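Conceptually, these libraries are chained together: Playwright renders the page, BeautifulSoup strips non-content tags, and pypandoc converts the cleaned HTML to Markdown. The sketch below is illustrative only; names and options are assumptions, not the server's actual code.

```python
# Illustrative sketch of the Playwright -> BeautifulSoup -> pypandoc pipeline.
# Not the server's actual implementation.
from bs4 import BeautifulSoup
from playwright.sync_api import sync_playwright
import pypandoc


def scrape_to_markdown(url: str, verify_ssl: bool = True) -> str:
    # Render the page in headless Chromium so JavaScript-heavy sites work.
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        context = browser.new_context(ignore_https_errors=not verify_ssl)
        page = context.new_page()
        page.goto(url)
        html = page.content()
        browser.close()

    # Remove script/style tags so only visible content reaches the converter.
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):
        tag.decompose()

    # Pandoc (via pypandoc) performs the HTML -> Markdown conversion.
    return pypandoc.convert_text(str(soup), "gfm", format="html")
```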
The server implements a single tool:

- `scrape_to_markdown`: Scrapes content from a URL and converts it to Markdown
  - Required parameter: `url` (string) - The URL to scrape
  - Optional parameter: `verify_ssl` (boolean) - Whether to verify SSL certificates (default: true)
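MCP clients invoke the tool over JSON-RPC. A minimal sketch of a `tools/call` request for this tool (illustrative only; your MCP client normally constructs this for you):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "scrape_to_markdown",
    "arguments": {
      "url": "https://example.com",
      "verify_ssl": true
    }
  }
}
```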
When using `uv`, no specific installation is needed. We will use `uvx` to run mcp-playwright-scraper directly.
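For example, the server can be launched on demand with:

```bash
uvx mcp-playwright-scraper
```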
Alternatively, you can install mcp-playwright-scraper via pip:

```bash
pip install mcp-playwright-scraper
```

After installation, you can run it as a script using:

```bash
python -m mcp_playwright_scraper
```
Requirements:

- Python 3.11 or higher
- Playwright browser dependencies
- Pandoc (optional, will be automatically installed by pypandoc if possible)
After installation, you need to install Playwright browser dependencies:

```bash
playwright install --with-deps chromium
```
Add this to your `claude_desktop_config.json`:
Using uvx

```json
"mcpServers": {
  "mcp-playwright-scraper": {
    "command": "uvx",
    "args": ["mcp-playwright-scraper"]
  }
}
```
Using pip installation

```json
"mcpServers": {
  "mcp-playwright-scraper": {
    "command": "python",
    "args": ["-m", "mcp_playwright_scraper"]
  }
}
```
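For reference, a complete `claude_desktop_config.json` containing only this server would look roughly like the following (the snippet above is the contents of the top-level `mcpServers` object):

```json
{
  "mcpServers": {
    "mcp-playwright-scraper": {
      "command": "uvx",
      "args": ["mcp-playwright-scraper"]
    }
  }
}
```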
To register the server with the Claude Code CLI:

```bash
# Basic syntax
$ claude mcp add mcp-playwright-scraper -- uvx mcp-playwright-scraper

# Alternatively, with pip installation
$ claude mcp add mcp-playwright-scraper -- python -m mcp_playwright_scraper
```
Development/Unpublished Servers Configuration

```json
"mcpServers": {
  "mcp-playwright-scraper": {
    "command": "uv",
    "args": [
      "--directory",
      "/path/to/mcp-playwright-scraper",
      "run",
      "mcp-playwright-scraper"
    ]
  }
}
```
Usage with Zed
Add to your Zed `settings.json`:
Using uvx

```json
"context_servers": {
  "mcp-playwright-scraper": {
    "command": {
      "path": "uvx",
      "args": ["mcp-playwright-scraper"]
    }
  }
},
```
Using pip installation

```json
"context_servers": {
  "mcp-playwright-scraper": {
    "command": "python",
    "args": ["-m", "mcp_playwright_scraper"]
  }
},
```
Usage with Cursor

- Open Cursor Settings
- Navigate to Cursor Settings > Features > MCP
- Click the "+ Add New MCP Server" button
- Configure the Server:
  - Name: `mcp-playwright-scraper`
  - Type: Select `stdio`
  - Command: Enter one of the following:
    - Using uvx: `uvx mcp-playwright-scraper`
    - Using pip installation: `python -m mcp_playwright_scraper`
Once configured in Claude Desktop, you can explicitly use the scraper with a prompt like:
> Use the mcp-playwright-scraper to scrape the content from https://example.com and summarize it.
You can use the MCP inspector to debug the server:

```bash
npx @modelcontextprotocol/inspector uvx mcp-playwright-scraper
```

Or if you've installed the package in a specific directory or are developing on it:

```bash
cd path/to/mcp-playwright-scraper
npx @modelcontextprotocol/inspector uv run mcp-playwright-scraper
```
Upon launching, the Inspector will display a URL that you can access in your browser to begin debugging.
To prepare the package for distribution:
- Sync dependencies and update lockfile:
  ```bash
  uv sync
  ```
- Build package distributions:
  ```bash
  uv build
  ```
  This will create source and wheel distributions in the `dist/` directory.
- Publish to PyPI:
  ```bash
  uv publish
  ```
Note: You'll need to set PyPI credentials via environment variables or command flags:

- Token: `--token` or `UV_PUBLISH_TOKEN`
- Or username/password: `--username`/`UV_PUBLISH_USERNAME` and `--password`/`UV_PUBLISH_PASSWORD`
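For example, publishing with a token set via an environment variable (the token value shown is a placeholder):

```bash
export UV_PUBLISH_TOKEN="pypi-XXXXXXXX"  # placeholder; substitute your real PyPI token
uv publish
```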
This MCP server is licensed under the Apache License, Version 2.0. You are free to use, modify, and distribute the software, subject to the terms and conditions of the Apache License 2.0. For more details, please see the LICENSE file in the project repository or visit http://www.apache.org/licenses/LICENSE-2.0.