crawl4ai version
docker image tag: unclecode/crawl4ai:basic-amd64
Expected Behavior
When crawl4ai finishes crawling a page, the Chrome process it used should be released immediately, so that large numbers of Chrome processes do not keep running in the background.
Alternatively, is there a container configuration option that achieves this?
Current Behavior
After a crawl finishes, the Chrome processes spawned for it are not released; they keep running in the background and accumulate over time.
Is this reproducible?
Yes
Inputs Causing the Bug
curl --location 'http://127.0.0.1:11235/crawl' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer 12345678' \
--data '{
    "urls": "test url here",
    "crawler_params": {
        "headless": true,
        "page_timeout": 15000,
        "remove_overlay_elements": true,
        "semaphore_count": 1
    },
    "extra": {
        "word_count_threshold": 20,
        "bypass_cache": true,
        "only_text": true,
        "process_iframes": false
    }
}'
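To observe the leftover processes from the host after a request completes (a diagnostic sketch; docker top runs ps from the host, so nothing extra needs to be installed inside the image):

# Count Chrome processes still running in the container named in the run command below
docker top crawl4ai | grep -ci chrome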
Steps to Reproduce
docker run -p 11235:11235 --env CRAWL4AI_API_TOKEN=12345678 --env MAX_CONCURRENT_TASKS=1 --name crawl4ai -m 4G --restart unless-stopped unclecode/crawl4ai:basic-amd64
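Regarding the question under Expected Behavior: one configuration worth trying (an assumption, not a confirmed fix for this image) is Docker's --init flag, which runs a minimal init as PID 1 and reaps defunct (zombie) child processes, a common reason Chrome entries linger in containers:

# Same run command with --init added; whether it helps depends on whether the
# leftover processes are zombies or live browsers that were never closed
docker run --init -p 11235:11235 --env CRAWL4AI_API_TOKEN=12345678 --env MAX_CONCURRENT_TASKS=1 --name crawl4ai -m 4G --restart unless-stopped unclecode/crawl4ai:basic-amd64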
Code snippets
OS
Linux (Docker)
Python version
3.10.15
Browser
No response
Browser version
No response
Error logs & Screenshots (if applicable)
No response