Using a proxy for the crawler #63

Open
tommaso-castellani opened this issue Apr 25, 2025 · 0 comments

Comments

@tommaso-castellani

As per the Crawl4AI documentation, I tried to add the proxy inside backend/app/config.py as a parameter of

BrowserConfig(
    proxy="http://123.123.123.123:1234", 
    headless=True...
)

Unfortunately it didn't work; as far as I can see, CrawlConfigManager is not being used.

So I added it as a parameter of the POST requests in both crawler.discover_pages and crawler.crawl_pages:

browser_config_payload = {
    "type": "BrowserConfig",
    "params": {"headless": True, "proxy": "http://123.123.123.123:1234"}
}
response = requests.post(
    f"{CRAWL4AI_URL}/crawl",
    headers=headers,
    json={"urls": url, "browser_config": browser_config_payload},
    timeout=30
)
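For reference, a minimal sketch of how the full request body could be assembled, assuming the /crawl endpoint accepts a serialized BrowserConfig in the shape shown above; the build_crawl_payload helper name and the placeholder proxy address are illustrative, not part of the Crawl4AI API.

```python
# Hypothetical helper: assembles the /crawl request body with a proxy-enabled
# BrowserConfig, mirroring the payload shape used in the snippet above.
def build_crawl_payload(urls, proxy=None, headless=True):
    params = {"headless": headless}
    if proxy:
        # Assumption: the server-side BrowserConfig accepts a "proxy" param,
        # as suggested by the Crawl4AI documentation referenced above.
        params["proxy"] = proxy
    return {
        "urls": urls,
        "browser_config": {"type": "BrowserConfig", "params": params},
    }

payload = build_crawl_payload(
    "https://example.com", proxy="http://123.123.123.123:1234"
)
```

Keeping the payload construction in one place would at least make it easy to confirm (e.g. by logging it) that the proxy actually reaches the request body.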

But it didn't work either.

What's the issue?
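One way to narrow this down might be to confirm the proxy itself works, independently of Crawl4AI. A standard-library sketch, using the same placeholder address as above and an IP-echo endpoint chosen for illustration:

```python
import json
import urllib.request

PROXY = "http://123.123.123.123:1234"  # placeholder, same address as above

# Standard-library way to route traffic through an HTTP proxy: install a
# ProxyHandler into an opener and make requests through that opener.
proxy_map = {"http": PROXY, "https": PROXY}
opener = urllib.request.build_opener(urllib.request.ProxyHandler(proxy_map))


def current_egress_ip(opener):
    """Return the address requests leave from, per an IP-echo endpoint."""
    with opener.open("https://httpbin.org/ip", timeout=30) as resp:
        return json.load(resp)["origin"]
```

If current_egress_ip(opener) returns the proxy's address, the proxy is fine and the problem is on the Crawl4AI side; if it fails, the proxy itself is the issue.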
