Automatically convert documentation websites, GitHub repositories, and PDFs into Claude AI skills in minutes.
📋 View Development Roadmap & Tasks - 134 tasks across 10 categories, pick any to contribute!
Skill Seeker is an automated tool that transforms documentation websites, GitHub repositories, and PDF files into production-ready Claude AI skills. Instead of manually reading and summarizing documentation, Skill Seeker:
- Scrapes multiple sources (docs, GitHub repos, PDFs) automatically
- Analyzes code repositories with deep AST parsing
- Detects conflicts between documentation and code implementation
- Organizes content into categorized reference files
- Enhances with AI to extract best examples and key concepts
- Packages everything into an uploadable `.zip` file for Claude
Result: Get comprehensive Claude skills for any framework, API, or tool in 20-40 minutes instead of hours of manual work.
- 🎯 For Developers: Create skills from documentation + GitHub repos with conflict detection
- 🎮 For Game Devs: Generate skills for game engines (Godot docs + GitHub, Unity, etc.)
- 🔧 For Teams: Combine internal docs + code repositories into a single source of truth
- 📚 For Learners: Build comprehensive skills from docs, code examples, and PDFs
- 🌍 For Open Source: Analyze repos to find documentation gaps and outdated examples
- ✅ llms.txt Support - Automatically detects and uses LLM-ready documentation files (10x faster)
- ✅ Universal Scraper - Works with ANY documentation website
- ✅ Smart Categorization - Automatically organizes content by topic
- ✅ Code Language Detection - Recognizes Python, JavaScript, C++, GDScript, etc.
- ✅ 8 Ready-to-Use Presets - Godot, React, Vue, Django, FastAPI, and more
- ✅ Basic PDF Extraction - Extract text, code, and images from PDF files
- ✅ OCR for Scanned PDFs - Extract text from scanned documents
- ✅ Password-Protected PDFs - Handle encrypted PDFs
- ✅ Table Extraction - Extract complex tables from PDFs
- ✅ Parallel Processing - 3x faster for large PDFs
- ✅ Intelligent Caching - 50% faster on re-runs
- ✅ Deep Code Analysis - AST parsing for Python, JavaScript, TypeScript, Java, C++, Go
- ✅ API Extraction - Functions, classes, methods with parameters and types
- ✅ Repository Metadata - README, file tree, language breakdown, stars/forks
- ✅ GitHub Issues & PRs - Fetch open/closed issues with labels and milestones
- ✅ CHANGELOG & Releases - Automatically extract version history
- ✅ Conflict Detection - Compare documented APIs vs actual code implementation
- ✅ MCP Integration - Natural language: "Scrape GitHub repo facebook/react"
- ✅ Combine Multiple Sources - Mix documentation + GitHub + PDF in one skill
- ✅ Conflict Detection - Automatically finds discrepancies between docs and code
- ✅ Intelligent Merging - Rule-based or AI-powered conflict resolution
- ✅ Transparent Reporting - Side-by-side comparison with ⚠️ warnings
- ✅ Documentation Gap Analysis - Identifies outdated docs and undocumented features
- ✅ Single Source of Truth - One skill showing both intent (docs) and reality (code)
- ✅ Backward Compatible - Legacy single-source configs still work
- ✅ AI-Powered Enhancement - Transforms basic templates into comprehensive guides
- ✅ No API Costs - FREE local enhancement using Claude Code Max
- ✅ MCP Server for Claude Code - Use directly from Claude Code with natural language
- ✅ Async Mode - 2-3x faster scraping with async/await (use the `--async` flag)
- ✅ Large Documentation Support - Handle 10K-40K+ page docs with intelligent splitting
- ✅ Router/Hub Skills - Intelligent routing to specialized sub-skills
- ✅ Parallel Scraping - Process multiple skills simultaneously
- ✅ Checkpoint/Resume - Never lose progress on long scrapes
- ✅ Caching System - Scrape once, rebuild instantly
- ✅ Fully Tested - 299 tests with 100% pass rate
```bash
# One-time setup (5 minutes)
./setup_mcp.sh

# Then in Claude Code, just ask:
"Generate a React skill from https://react.dev/"
"Scrape PDF at docs/manual.pdf and create skill"
```

Time: Automated | Quality: Production-ready | Cost: Free
```bash
# Install dependencies (2 pip packages)
pip3 install requests beautifulsoup4

# Generate a React skill in one command
python3 cli/doc_scraper.py --config configs/react.json --enhance-local

# Upload output/react.zip to Claude - Done!
```

Time: ~25 minutes | Quality: Production-ready | Cost: Free
```bash
# Install PDF support
pip3 install PyMuPDF

# Basic PDF extraction
python3 cli/pdf_scraper.py --pdf docs/manual.pdf --name myskill

# Advanced features: extract tables, parallel processing, 8 CPU cores
python3 cli/pdf_scraper.py --pdf docs/manual.pdf --name myskill \
    --extract-tables \
    --parallel \
    --workers 8

# Scanned PDFs (requires: pip install pytesseract Pillow)
python3 cli/pdf_scraper.py --pdf docs/scanned.pdf --name myskill --ocr

# Password-protected PDFs
python3 cli/pdf_scraper.py --pdf docs/encrypted.pdf --name myskill --password mypassword

# Upload output/myskill.zip to Claude - Done!
```

Time: ~5-15 minutes (or 2-5 minutes with parallel) | Quality: Production-ready | Cost: Free
Advanced Features:
- ✅ OCR for scanned PDFs (requires pytesseract)
- ✅ Password-protected PDF support
- ✅ Table extraction
- ✅ Parallel processing (3x faster)
- ✅ Intelligent caching
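Under the hood, PyMuPDF handles the page-level text extraction. A minimal sketch of that step, with a hypothetical helper name — illustrative only, not pdf_scraper.py's actual code:

```python
# Minimal sketch of PDF text extraction with PyMuPDF; illustrative,
# not pdf_scraper.py's actual implementation.
import fitz  # PyMuPDF

def extract_pdf_text(path: str, password: str | None = None) -> list[str]:
    doc = fitz.open(path)
    if doc.is_encrypted and password:
        doc.authenticate(password)  # unlock password-protected PDFs
    # One text chunk per page; the real tool also pulls code, images,
    # and (with --extract-tables) table data.
    return [page.get_text() for page in doc]

pages = extract_pdf_text("docs/manual.pdf")
print(f"Extracted {len(pages)} pages")
```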
```bash
# Install GitHub support
pip3 install PyGithub

# Basic repository scraping
python3 cli/github_scraper.py --repo facebook/react

# Using a config file
python3 cli/github_scraper.py --config configs/react_github.json

# With authentication (higher rate limits)
export GITHUB_TOKEN=ghp_your_token_here
python3 cli/github_scraper.py --repo facebook/react

# Customize what to include: GitHub Issues (capped at 100),
# CHANGELOG.md, and GitHub Releases
python3 cli/github_scraper.py --repo django/django \
    --include-issues \
    --max-issues 100 \
    --include-changelog \
    --include-releases

# MCP usage in Claude Code
"Scrape GitHub repository facebook/react"

# Upload output/react.zip to Claude - Done!
```

Time: ~5-10 minutes | Quality: Production-ready | Cost: Free
What Gets Extracted:
- ✅ README.md and documentation files
- ✅ GitHub Issues (open/closed, labels, milestones)
- ✅ CHANGELOG.md and version history
- ✅ GitHub Releases with release notes
- ✅ Repository metadata (stars, language, topics)
- ✅ File structure and language breakdown
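The scraper installs PyGithub for this work; a minimal sketch of pulling the same metadata yourself — hypothetical, not github_scraper.py's internals:

```python
# Minimal sketch of repository metadata extraction with PyGithub;
# illustrative only, not the actual github_scraper.py code.
import os
from github import Github  # pip3 install PyGithub

gh = Github(os.environ.get("GITHUB_TOKEN"))  # a token raises rate limits
repo = gh.get_repo("facebook/react")

print(repo.description, repo.stargazers_count, repo.language)

# README plus a sample of open issues - raw material for the skill
readme = repo.get_readme().decoded_content.decode("utf-8")
for issue in repo.get_issues(state="open")[:10]:
    print(issue.number, issue.title, [label.name for label in issue.labels])
```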
The Problem: Documentation and code often drift apart. Docs might be outdated, missing features that exist in code, or documenting features that were removed.
The Solution: Combine documentation + GitHub + PDF into one unified skill that shows BOTH what's documented AND what actually exists, with clear warnings about discrepancies.
```bash
# Create unified config (mix documentation + GitHub)
cat > configs/myframework_unified.json << 'EOF'
{
  "name": "myframework",
  "description": "Complete framework knowledge from docs + code",
  "merge_mode": "rule-based",
  "sources": [
    {
      "type": "documentation",
      "base_url": "https://docs.myframework.com/",
      "extract_api": true,
      "max_pages": 200
    },
    {
      "type": "github",
      "repo": "owner/myframework",
      "include_code": true,
      "code_analysis_depth": "surface"
    }
  ]
}
EOF

# Run unified scraper
python3 cli/unified_scraper.py --config configs/myframework_unified.json

# Upload output/myframework.zip to Claude - Done!
```

Time: ~30-45 minutes | Quality: Production-ready with conflict detection | Cost: Free
What Makes It Special:

✅ Conflict Detection - Automatically finds 4 types of discrepancies:
- 🔴 Missing in code (high): Documented but not implemented
- 🟡 Missing in docs (medium): Implemented but not documented
- ⚠️ Signature mismatch: Different parameters/types
- ℹ️ Description mismatch: Different explanations
✅ Transparent Reporting - Shows both versions side-by-side:

#### `move_local_x(delta: float)`

⚠️ **Conflict**: Documentation signature differs from implementation

**Documentation says:**

```python
def move_local_x(delta: float)
```

**Code implementation:**

```python
def move_local_x(delta: float, snap: bool = False) -> None
```

✅ **Advantages:**
- **Identifies documentation gaps** - Find outdated or missing docs automatically
- **Catches code changes** - Know when APIs change without docs being updated
- **Single source of truth** - One skill showing intent (docs) AND reality (code)
- **Actionable insights** - Get suggestions for fixing each conflict
- **Development aid** - See what's actually in the codebase vs what's documented
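Conceptually, the conflict detector compares the documented API surface against the implemented one. A minimal sketch under that assumption — hypothetical structure, not the unified scraper's actual code:

```python
# Minimal sketch of signature-mismatch detection between documented
# and implemented APIs; hypothetical, not the unified scraper's code.
def detect_conflicts(documented: dict[str, str], implemented: dict[str, str]) -> list[dict]:
    conflicts = []
    for name, doc_sig in documented.items():
        if name not in implemented:
            conflicts.append({"api": name, "type": "missing_in_code", "severity": "high"})
        elif implemented[name] != doc_sig:
            conflicts.append({"api": name, "type": "signature_mismatch",
                              "docs": doc_sig, "code": implemented[name]})
    for name in implemented.keys() - documented.keys():
        conflicts.append({"api": name, "type": "missing_in_docs", "severity": "medium"})
    return conflicts

docs = {"move_local_x": "def move_local_x(delta: float)"}
code = {"move_local_x": "def move_local_x(delta: float, snap: bool = False) -> None"}
print(detect_conflicts(docs, code))  # reports a signature_mismatch
```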
**Example Unified Configs:**
- `configs/react_unified.json` - React docs + GitHub repo
- `configs/django_unified.json` - Django docs + GitHub repo
- `configs/fastapi_unified.json` - FastAPI docs + GitHub repo
**Full Guide:** See [docs/UNIFIED_SCRAPING.md](docs/UNIFIED_SCRAPING.md) for complete documentation.
## How It Works
```mermaid
graph LR
A[Documentation Website] --> B[Skill Seeker]
B --> C[Scraper]
B --> D[AI Enhancement]
B --> E[Packager]
C --> F[Organized References]
D --> F
F --> E
E --> G[Claude Skill .zip]
G --> H[Upload to Claude AI]
```
1. Detect llms.txt: Checks for `llms-full.txt`, `llms.txt`, `llms-small.txt` first
2. Scrape: Extracts all pages from documentation
3. Categorize: Organizes content into topics (API, guides, tutorials, etc.)
4. Enhance: AI analyzes docs and creates a comprehensive SKILL.md with examples
5. Package: Bundles everything into a Claude-ready `.zip` file
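Step 1 is simple to picture: probe a few well-known paths before falling back to HTML scraping. A minimal sketch with a hypothetical helper, not doc_scraper.py's exact logic:

```python
# Minimal sketch of llms.txt detection; illustrative only.
import requests

def find_llms_txt(base_url: str) -> str | None:
    # Prefer the fullest LLM-ready file; fall back to smaller variants.
    for candidate in ("llms-full.txt", "llms.txt", "llms-small.txt"):
        url = base_url.rstrip("/") + "/" + candidate
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        if resp.ok and resp.text.strip():
            return url  # skip HTML scraping entirely - the "10x faster" path
    return None

print(find_llms_txt("https://docs.example.com"))
```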
Before you start, make sure you have:
- Python 3.10 or higher - Download | Check: `python3 --version`
- Git - Download | Check: `git --version`
- 15-30 minutes for first-time setup
First time user? → Start Here: Bulletproof Quick Start Guide 🎯
This guide walks you through EVERYTHING step-by-step (Python install, git clone, first skill creation).
Use Skill Seeker directly from Claude Code with natural language!
```bash
# Clone repository
git clone https://github.com/yusufkaraaslan/Skill_Seekers.git
cd Skill_Seekers

# One-time setup (5 minutes)
./setup_mcp.sh

# Restart Claude Code, then just ask:
```

In Claude Code:

```
List all available configs
Generate config for Tailwind at https://tailwindcss.com/docs
Scrape docs using configs/react.json
Package skill at output/react/
```
Benefits:
- ✅ No manual CLI commands
- ✅ Natural language interface
- ✅ Integrated with your workflow
- ✅ 9 tools available instantly (includes automatic upload!)
- ✅ Tested and working in production
Full guides:
- 📖 MCP Setup Guide - Complete installation instructions
- 🧪 MCP Testing Guide - Test all 9 tools
- 📦 Large Documentation Guide - Handle 10K-40K+ pages
- 📤 Upload Guide - How to upload skills to Claude
```bash
# Clone repository
git clone https://github.com/yusufkaraaslan/Skill_Seekers.git
cd Skill_Seekers

# Create virtual environment
python3 -m venv venv

# Activate virtual environment
source venv/bin/activate  # macOS/Linux
# OR on Windows: venv\Scripts\activate

# Install dependencies
pip install requests beautifulsoup4 pytest

# Save dependencies
pip freeze > requirements.txt

# Optional: Install anthropic for API-based enhancement (not needed for LOCAL enhancement)
# pip install anthropic
```

Always activate the virtual environment before using Skill Seeker:

```bash
source venv/bin/activate  # Run this each time you start a new terminal session
```

```bash
# Make sure venv is activated (you should see (venv) in your prompt)
source venv/bin/activate

# Optional: Estimate pages first (fast, 1-2 minutes)
python3 cli/estimate_pages.py configs/godot.json

# Use Godot preset
python3 cli/doc_scraper.py --config configs/godot.json

# Use React preset
python3 cli/doc_scraper.py --config configs/react.json

# See all presets
ls configs/
```

```bash
python3 cli/doc_scraper.py --interactive
```

```bash
python3 cli/doc_scraper.py \
    --name react \
    --url https://react.dev/ \
    --description "React framework for UIs"
```

Once your skill is packaged, you need to upload it to Claude:
```bash
# Set your API key (one-time)
export ANTHROPIC_API_KEY=sk-ant-...

# Package and upload automatically
python3 cli/package_skill.py output/react/ --upload

# OR upload existing .zip
python3 cli/upload_skill.py output/react.zip
```

Benefits:
- ✅ Fully automatic
- ✅ No manual steps
- ✅ Works from command line

Requirements:
- Anthropic API key (get one from https://console.anthropic.com/)
```bash
# Package skill
python3 cli/package_skill.py output/react/

# This will:
# 1. Create output/react.zip
# 2. Open the output/ folder automatically
# 3. Show upload instructions

# Then manually upload:
# - Go to https://claude.ai/skills
# - Click "Upload Skill"
# - Select output/react.zip
# - Done!
```

Benefits:
- ✅ No API key needed
- ✅ Works for everyone
- ✅ Folder opens automatically
In Claude Code, just ask:

```
"Package and upload the React skill"
```

```bash
# With API key set:
# - Packages the skill
# - Uploads to Claude automatically
# - Done! ✅

# Without API key:
# - Packages the skill
# - Shows where to find the .zip
# - Provides manual upload instructions
```

Benefits:
- ✅ Natural language
- ✅ Smart auto-detection (uploads if API key available)
- ✅ Works with or without API key
- ✅ No errors or failures
```
doc-to-skill/
├── cli/
│   ├── doc_scraper.py       # Main scraping tool
│   ├── package_skill.py     # Package to .zip
│   ├── upload_skill.py      # Auto-upload (API)
│   └── enhance_skill.py     # AI enhancement
├── mcp/                     # MCP server for Claude Code
│   └── server.py            # 9 MCP tools
├── configs/                 # Preset configurations
│   ├── godot.json           # Godot Engine
│   ├── react.json           # React
│   ├── vue.json             # Vue.js
│   ├── django.json          # Django
│   └── fastapi.json         # FastAPI
└── output/                  # All output (auto-created)
    ├── godot_data/          # Scraped data
    ├── godot/               # Built skill
    └── godot.zip            # Packaged skill
```
```bash
python3 cli/estimate_pages.py configs/react.json

# Output:
# 📊 ESTIMATION RESULTS
# ✅ Pages Discovered: 180
# 📈 Estimated Total: 230
# ⏱️ Time Elapsed: 1.2 minutes
# 💡 Recommended max_pages: 280
```

Benefits:
- Know page count BEFORE scraping (saves time)
- Validates URL patterns work correctly
- Estimates total scraping time
- Recommends optimal `max_pages` setting
- Fast (1-2 minutes vs 20-40 minutes full scrape)
```bash
python3 cli/doc_scraper.py --config configs/godot.json

# If data exists:
# ✅ Found existing data: 245 pages
# Use existing data? (y/n): y
# ⏭️ Skipping scrape, using existing data
```

Automatic pattern extraction:
- Extracts common code patterns from docs
- Detects programming language
- Creates quick reference with real examples
- Smarter categorization with scoring
Enhanced SKILL.md:
- Real code examples from documentation
- Language-annotated code blocks
- Common patterns section
- Quick reference from actual usage examples
Automatically infers categories from:
- URL structure
- Page titles
- Content keywords
- With scoring for better accuracy
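A minimal sketch of how such keyword scoring could work — category names and weights are hypothetical, not the scraper's actual heuristics:

```python
# Minimal sketch of keyword-scored categorization; illustrative only.
CATEGORIES = {
    "getting_started": ["intro", "quickstart", "install"],
    "api": ["api", "reference", "class"],
}

def categorize(url: str, title: str, content: str) -> str:
    scores = {}
    for category, keywords in CATEGORIES.items():
        score = 0
        for kw in keywords:
            score += 3 * (kw in url.lower())    # URL structure weighs most
            score += 2 * (kw in title.lower())  # then the page title
            score += content.lower().count(kw)  # then content keywords
        scores[category] = score
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "misc"

print(categorize("https://docs.example.com/api/button", "Button API", "class Button ..."))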
```
# Automatically detects:
- Python (def, import, from)
- JavaScript (const, let, =>)
- GDScript (func, var, extends)
- C++ (#include, int main)
- And more...
```
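A minimal sketch of this marker-based detection — the marker lists are illustrative, not the scraper's exact rules:

```python
# Minimal sketch of marker-based language detection; illustrative only.
LANGUAGE_MARKERS = {
    "python": ["def ", "import ", "from "],
    "javascript": ["const ", "let ", "=>"],
    "gdscript": ["func ", "extends ", "var "],
    "cpp": ["#include", "int main"],
}

def detect_language(code: str) -> str:
    scores = {
        lang: sum(marker in code for marker in markers)
        for lang, markers in LANGUAGE_MARKERS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(detect_language("def hello():\n    import os"))  # -> python
```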
```bash
# Scrape once
python3 cli/doc_scraper.py --config configs/react.json

# Later, just rebuild (instant)
python3 cli/doc_scraper.py --config configs/react.json --skip-scrape
```
```bash
# Enable async mode with 8 workers (recommended for large docs)
python3 cli/doc_scraper.py --config configs/react.json --async --workers 8

# Small docs (~100-500 pages)
python3 cli/doc_scraper.py --config configs/mydocs.json --async --workers 4

# Large docs (2000+ pages) with no rate limiting
python3 cli/doc_scraper.py --config configs/largedocs.json --async --workers 8 --no-rate-limit
```

Performance Comparison:
- Sync mode (threads): ~18 pages/sec, 120 MB memory
- Async mode: ~55 pages/sec, 40 MB memory
- Result: 3x faster, 66% less memory!
When to use:
- ✅ Large documentation (500+ pages)
- ✅ Network latency is high
- ✅ Memory is constrained
- ❌ Small docs (< 100 pages) - overhead not worth it
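For intuition, async mode amounts to bounded-concurrency fetching. A minimal sketch with asyncio + aiohttp, assuming `pip install aiohttp` — hypothetical, the real `--async` internals may differ:

```python
# Minimal sketch of bounded-concurrency async fetching; illustrative only.
import asyncio
import aiohttp  # assumed dependency for this sketch

async def fetch_all(urls: list[str], workers: int = 8) -> list[str]:
    semaphore = asyncio.Semaphore(workers)  # cap concurrent requests

    async def fetch(session: aiohttp.ClientSession, url: str) -> str:
        async with semaphore:
            async with session.get(url) as resp:
                return await resp.text()

    async with aiohttp.ClientSession() as session:
        return list(await asyncio.gather(*(fetch(session, u) for u in urls)))

pages = asyncio.run(fetch_all(["https://react.dev/learn", "https://react.dev/reference/react"]))
print(len(pages), "pages fetched")
```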
See full guide: ASYNC_SUPPORT.md
```bash
# Option 1: During scraping (API-based, requires API key)
pip3 install anthropic
export ANTHROPIC_API_KEY=sk-ant-...
python3 cli/doc_scraper.py --config configs/react.json --enhance

# Option 2: During scraping (LOCAL, no API key - uses Claude Code Max)
python3 cli/doc_scraper.py --config configs/react.json --enhance-local

# Option 3: After scraping (API-based, standalone)
python3 cli/enhance_skill.py output/react/

# Option 4: After scraping (LOCAL, no API key, standalone)
python3 cli/enhance_skill_local.py output/react/
```

What it does:
- Reads your reference documentation
- Uses Claude to generate an excellent SKILL.md
- Extracts best code examples (5-10 practical examples)
- Creates comprehensive quick reference
- Adds domain-specific key concepts
- Provides navigation guidance for different skill levels
- Automatically backs up original
- Quality: Transforms 75-line templates into 500+ line comprehensive guides
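For the API-based path, enhancement boils down to feeding the reference files to Claude and writing back the result. A minimal sketch with the anthropic SDK — the prompt and model name are placeholders, not enhance_skill.py's actual code:

```python
# Minimal sketch of API-based enhancement; illustrative only.
import anthropic  # pip3 install anthropic
from pathlib import Path

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

references = "\n\n".join(
    p.read_text() for p in Path("output/react/references").glob("*.md")
)
message = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder - pick a current model
    max_tokens=4096,
    messages=[{
        "role": "user",
        "content": "Write a comprehensive SKILL.md with code examples, a quick "
                   "reference, and key concepts, based on these docs:\n" + references,
    }],
)
Path("output/react/SKILL.md").write_text(message.content[0].text)
```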
LOCAL Enhancement (Recommended):
- Uses your Claude Code Max plan (no API costs)
- Opens new terminal with Claude Code
- Analyzes reference files automatically
- Takes 30-60 seconds
- Quality: 9/10 (comparable to API version)
For massive documentation sites like Godot (40K pages), AWS, or Microsoft Docs:
```bash
# 1. Estimate first (discover page count)
python3 cli/estimate_pages.py configs/godot.json

# 2. Auto-split into focused sub-skills
python3 cli/split_config.py configs/godot.json --strategy router

# Creates:
# - godot-scripting.json (5K pages)
# - godot-2d.json (8K pages)
# - godot-3d.json (10K pages)
# - godot-physics.json (6K pages)
# - godot-shaders.json (11K pages)

# 3. Scrape all in parallel (4-8 hours instead of 20-40!)
for config in configs/godot-*.json; do
    python3 cli/doc_scraper.py --config $config &
done
wait

# 4. Generate intelligent router/hub skill
python3 cli/generate_router.py configs/godot-*.json

# 5. Package all skills
python3 cli/package_multi.py output/godot*/

# 6. Upload all .zip files to Claude
# Users just ask questions naturally!
# Router automatically directs to the right sub-skill!
```

Split Strategies:
- `auto` - Intelligently detects the best strategy based on page count
- `category` - Split by documentation categories (scripting, 2d, 3d, etc.)
- `router` - Create hub skill + specialized sub-skills (RECOMMENDED)
- `size` - Split every N pages (for docs without clear categories)
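A minimal sketch of the category strategy's core idea — group scraped pages by category and emit one sub-skill config per group. The structure is hypothetical, not split_config.py's implementation:

```python
# Minimal sketch of category-based config splitting; illustrative only.
import json
from collections import defaultdict

def split_by_category(pages: list[dict], base_name: str) -> list[dict]:
    groups: dict[str, list[dict]] = defaultdict(list)
    for page in pages:
        groups[page.get("category", "misc")].append(page)
    # One focused sub-skill config per category
    return [
        {
            "name": f"{base_name}-{category}",
            "max_pages": len(group),
            "url_patterns": {"include": sorted({p["url"] for p in group})},
        }
        for category, group in groups.items()
    ]

pages = [{"url": "/2d/sprites", "category": "2d"}, {"url": "/3d/mesh", "category": "3d"}]
for cfg in split_by_category(pages, "godot"):
    print(json.dumps(cfg))
```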
Benefits:
- ✅ Faster scraping (parallel execution)
- ✅ More focused skills (better Claude performance)
- ✅ Easier maintenance (update one topic at a time)
- ✅ Natural user experience (router handles routing)
- ✅ Avoids context window limits
Configuration:

```json
{
  "name": "godot",
  "max_pages": 40000,
  "split_strategy": "router",
  "split_config": {
    "target_pages_per_skill": 5000,
    "create_router": true,
    "split_by_categories": ["scripting", "2d", "3d", "physics"]
  }
}
```

Full Guide: Large Documentation Guide
Never lose progress on long-running scrapes:
Enable in config (`interval` = save a checkpoint every 1000 pages):

```json
{
  "checkpoint": {
    "enabled": true,
    "interval": 1000
  }
}
```

```bash
# If scrape is interrupted (Ctrl+C or crash), resume from last checkpoint
python3 cli/doc_scraper.py --config configs/godot.json --resume

# Example output:
# ✅ Resuming from checkpoint (12,450 pages scraped)
# ⏭️ Skipping 12,450 already-scraped pages
# 📍 Continuing from where we left off...

# Start fresh (clear checkpoint)
python3 cli/doc_scraper.py --config configs/godot.json --fresh
```

Benefits:
- ✅ Auto-saves every 1000 pages (configurable)
- ✅ Saves on interruption (Ctrl+C)
- ✅ Resume with the `--resume` flag
- ✅ Never lose hours of scraping progress
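A minimal sketch of the checkpoint idea — persist the set of scraped URLs every N pages and skip them on resume. The file name and format are hypothetical, not the scraper's actual checkpoint:

```python
# Minimal sketch of checkpoint save/resume; illustrative only.
import json
from pathlib import Path

CHECKPOINT = Path("output/godot_data/checkpoint.json")

def save_checkpoint(scraped: set[str]) -> None:
    CHECKPOINT.parent.mkdir(parents=True, exist_ok=True)
    CHECKPOINT.write_text(json.dumps({"scraped": sorted(scraped)}))

def load_checkpoint() -> set[str]:
    if CHECKPOINT.exists():
        return set(json.loads(CHECKPOINT.read_text())["scraped"])
    return set()

done = load_checkpoint()           # --resume: start from what's on disk
for url in ["https://docs.godotengine.org/en/stable/"]:
    if url in done:
        continue                   # skip already-scraped pages
    # ... scrape the page here ...
    done.add(url)
    if len(done) % 1000 == 0:      # the configured "interval"
        save_checkpoint(done)
save_checkpoint(done)              # final save (also done on Ctrl+C)
```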
```bash
# 1. Scrape + Build + AI Enhancement (LOCAL, no API key)
python3 cli/doc_scraper.py --config configs/godot.json --enhance-local

# 2. Wait for new terminal to close (enhancement completes)
# Check the enhanced SKILL.md:
cat output/godot/SKILL.md

# 3. Package
python3 cli/package_skill.py output/godot/

# 4. Done! You have godot.zip with an excellent SKILL.md
```

Time: 20-40 minutes (scraping) + 60 seconds (enhancement) = ~21-41 minutes
```bash
# 1. Use cached data + Local Enhancement
python3 cli/doc_scraper.py --config configs/godot.json --skip-scrape
python3 cli/enhance_skill_local.py output/godot/

# 2. Package
python3 cli/package_skill.py output/godot/

# 3. Done!
```

Time: 1-3 minutes (build) + 60 seconds (enhancement) = ~2-4 minutes total
```bash
# 1. Scrape + Build (no enhancement)
python3 cli/doc_scraper.py --config configs/godot.json

# 2. Package
python3 cli/package_skill.py output/godot/

# 3. Done! (SKILL.md will be basic template)
```

Time: 20-40 minutes
Note: SKILL.md will be generic - enhancement strongly recommended!
| Config | Framework | Description |
|---|---|---|
| `godot.json` | Godot Engine | Game development |
| `react.json` | React | UI framework |
| `vue.json` | Vue.js | Progressive framework |
| `django.json` | Django | Python web framework |
| `fastapi.json` | FastAPI | Modern Python API |
| `ansible-core.json` | Ansible Core 2.19 | Automation & configuration |
```bash
# Godot
python3 cli/doc_scraper.py --config configs/godot.json

# React
python3 cli/doc_scraper.py --config configs/react.json

# Vue
python3 cli/doc_scraper.py --config configs/vue.json

# Django
python3 cli/doc_scraper.py --config configs/django.json

# FastAPI
python3 cli/doc_scraper.py --config configs/fastapi.json

# Ansible
python3 cli/doc_scraper.py --config configs/ansible-core.json
```

```bash
python3 cli/doc_scraper.py --interactive
# Follow prompts, it will create the config for you
```

```bash
# Copy a preset
cp configs/react.json configs/myframework.json

# Edit it
nano configs/myframework.json

# Use it
python3 cli/doc_scraper.py --config configs/myframework.json
```

```json
{
"name": "myframework",
"description": "When to use this skill",
"base_url": "https://docs.myframework.com/",
"selectors": {
"main_content": "article",
"title": "h1",
"code_blocks": "pre code"
},
"url_patterns": {
"include": ["/docs", "/guide"],
"exclude": ["/blog", "/about"]
},
"categories": {
"getting_started": ["intro", "quickstart"],
"api": ["api", "reference"]
},
"rate_limit": 0.5,
"max_pages": 500
}
```

```
output/
├── godot_data/          # Scraped raw data
│   ├── pages/           # JSON files (one per page)
│   └── summary.json     # Overview
│
└── godot/               # The skill
    ├── SKILL.md         # Enhanced with real examples
    ├── references/      # Categorized docs
    │   ├── index.md
    │   ├── getting_started.md
    │   ├── scripting.md
    │   └── ...
    ├── scripts/         # Empty (add your own)
    └── assets/          # Empty (add your own)
```
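The `url_patterns` block in the config above is what gates which links get scraped. A minimal sketch of that include/exclude filtering — illustrative, not doc_scraper.py's exact logic:

```python
# Minimal sketch of url_patterns include/exclude filtering; illustrative only.
import json

def should_scrape(url: str, config: dict) -> bool:
    patterns = config.get("url_patterns", {})
    if any(part in url for part in patterns.get("exclude", [])):
        return False  # excluded paths always lose
    include = patterns.get("include", [])
    return not include or any(part in url for part in include)

config = json.load(open("configs/myframework.json"))
print(should_scrape("https://docs.myframework.com/docs/intro", config))  # True
print(should_scrape("https://docs.myframework.com/blog/news", config))   # False
```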
```bash
# Interactive mode
python3 cli/doc_scraper.py --interactive

# Use config file
python3 cli/doc_scraper.py --config configs/godot.json

# Quick mode
python3 cli/doc_scraper.py --name react --url https://react.dev/

# Skip scraping (use existing data)
python3 cli/doc_scraper.py --config configs/godot.json --skip-scrape

# With description
python3 cli/doc_scraper.py \
    --name react \
    --url https://react.dev/ \
    --description "React framework for building UIs"
```

Edit `max_pages` in the config to test:
```json
{
  "max_pages": 20
}
```

```bash
# Scrape once
python3 cli/doc_scraper.py --config configs/react.json

# Rebuild multiple times (instant)
python3 cli/doc_scraper.py --config configs/react.json --skip-scrape
python3 cli/doc_scraper.py --config configs/react.json --skip-scrape
```

```python
# Test in Python
from bs4 import BeautifulSoup
import requests

url = "https://docs.example.com/page"
soup = BeautifulSoup(requests.get(url).content, 'html.parser')

# Try different selectors
print(soup.select_one('article'))
print(soup.select_one('main'))
print(soup.select_one('div[role="main"]'))
```

```bash
# After building, check:
cat output/godot/SKILL.md             # Should have real examples
cat output/godot/references/index.md  # Categories
```

- Check your `main_content` selector
- Try: `article`, `main`, `div[role="main"]`

```bash
# Force re-scrape
rm -rf output/myframework_data/
python3 cli/doc_scraper.py --config configs/myframework.json
```

Edit the config `categories` section with better keywords.

```bash
# Delete old data
rm -rf output/godot_data/

# Re-scrape
python3 cli/doc_scraper.py --config configs/godot.json
```

| Task | Time | Notes |
|---|---|---|
| Scraping (sync) | 15-45 min | First time only, thread-based |
| Scraping (async) | 5-15 min | 2-3x faster with `--async` flag |
| Building | 1-3 min | Fast! |
| Re-building | <1 min | With `--skip-scrape` |
| Packaging | 5-10 sec | Final zip |
One tool does everything:
- ✅ Scrapes documentation
- ✅ Auto-detects existing data
- ✅ Generates better knowledge
- ✅ Creates enhanced skills
- ✅ Works with presets or custom configs
- ✅ Supports skip-scraping for fast iteration
Simple structure:
- `doc_scraper.py` - The tool
- `configs/` - Presets
- `output/` - Everything else
Better output:
- Real code examples with language detection
- Common patterns extracted from docs
- Smart categorization
- Enhanced SKILL.md with actual examples
- BULLETPROOF_QUICKSTART.md - 🎯 START HERE if you're new!
- QUICKSTART.md - Quick start for experienced users
- TROUBLESHOOTING.md - Common issues and solutions
- docs/LARGE_DOCUMENTATION.md - Handle 10K-40K+ page docs
- ASYNC_SUPPORT.md - Async mode guide (2-3x faster scraping)
- docs/ENHANCEMENT.md - AI enhancement guide
- docs/UPLOAD_GUIDE.md - How to upload skills to Claude
- docs/MCP_SETUP.md - MCP integration setup
- docs/CLAUDE.md - Technical architecture
- STRUCTURE.md - Repository structure
```bash
# Try Godot
python3 cli/doc_scraper.py --config configs/godot.json

# Try React
python3 cli/doc_scraper.py --config configs/react.json

# Or go interactive
python3 cli/doc_scraper.py --interactive
```

MIT License - see LICENSE file for details
Happy skill building! 🚀