disconnected: not connected to DevTools #689

Open
maulikprajapati opened this issue Nov 13, 2024 · 2 comments

@maulikprajapati

I am using the fitnesse-fixtures-test-jre11-chrome-with-pdf Docker image for E2E testing and am running multiple containers simultaneously to reduce build pipeline time. However, I am encountering the following issue intermittently with random test suites:

Unable to capture page source for exception: null
org.openqa.selenium.WebDriverException: disconnected: not connected to DevTools

[screenshot of the error attached]

@fhoeben (Owner) commented Nov 17, 2024

I've encountered this in the past myself, but I was never able to find a precise root cause. I believe the issue is not in my fixture code or FitNesse; it really seems to be an issue between Selenium and Chrome, or maybe even internal to Chrome.

Unless we are able to find a good reproduction path, I'm unsure how to prevent these errors in the future.

The cause might be a combination of the complexity of (JavaScript on) the pages being tested, the size of the test suite, and the resources available on the machine running the tests (either the Docker container itself or the host the containers run on).
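If constrained container resources are indeed the trigger, one mitigation worth trying (a general Chrome-in-Docker hardening, not a confirmed fix for this specific error, and possibly already covered by the image's own configuration) is to start Chrome with flags that reduce its reliance on the container's often-small /dev/shm. A minimal sketch using Selenium's Java bindings, with a hypothetical factory class:

```java
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

// Hypothetical helper, not part of the fixture code: builds a ChromeDriver
// with flags commonly used to stabilise Chrome inside Docker containers.
public class DockerChromeFactory {

    public static ChromeDriver create() {
        ChromeOptions options = new ChromeOptions();
        options.addArguments(
                "--disable-dev-shm-usage", // use /tmp instead of the (often tiny) /dev/shm for shared memory
                "--disable-gpu",           // no GPU is available in most containers
                "--no-sandbox"             // often required when Chrome runs as root inside a container
        );
        return new ChromeDriver(options);
    }
}
```

Whether this helps depends on how the fitnesse-fixtures image already configures Chrome; if equivalent flags (or a larger --shm-size on the container) are already in place, the remaining suspects are CPU starvation and the degree of parallelism.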

Are you able to see any pattern in when the errors occur? For instance: how many suites are running in parallel, whether other jobs are running on the same Docker host, whether the failure happens early or late in the test run, or which tests/suites were run before the test that shows the error?

@nobilexdev

@fhoeben I haven't noticed any specific pattern so far. There are 12 suites running in parallel in the pipeline. However, I did closely monitor the pod resources and observed that CPU usage was at its peak; at the time of failure, about 20–25% of RAM was still available.
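Until a root cause is found, a pragmatic stopgap (purely a sketch in plain Selenium Java, not part of the fixture code) would be to treat this specific WebDriverException as retryable and run the affected action once more against a freshly created browser:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebDriverException;

import java.util.function.Function;
import java.util.function.Supplier;

// Hypothetical helper, not part of the fixture code: retries an action once
// with a new browser when Chrome's DevTools connection has been lost.
public final class DevToolsRetry {

    public static <T> T withRetry(Supplier<WebDriver> driverFactory, Function<WebDriver, T> action) {
        WebDriver driver = driverFactory.get();
        try {
            return action.apply(driver);
        } catch (WebDriverException e) {
            String message = e.getMessage();
            if (message == null || !message.contains("not connected to DevTools")) {
                throw e; // unrelated failure: propagate unchanged
            }
            safeQuit(driver);             // discard the broken session
            driver = driverFactory.get(); // start a fresh browser and retry once
            return action.apply(driver);
        } finally {
            safeQuit(driver);
        }
    }

    private static void safeQuit(WebDriver driver) {
        try {
            driver.quit();
        } catch (WebDriverException ignored) {
            // quitting a crashed session may itself fail; that is fine here
        }
    }
}
```

This only papers over the symptom, of course; if CPU saturation is what kills the DevTools connection, reducing the number of parallel suites or giving the pods more CPU is the more structural fix.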
