frontend tests via Saucelabs should use pinned browser versions #3763
Afaik we get 5 concurrent tests on Saucelabs (though in practice it appears to be 4), so we could easily pin a few browsers and run the rest against latest. Nowadays I think we should always target latest stable, because that's what browser vendors are keen to deploy. TL;DR: we could target three pinned browsers, e.g. FF, Chrome and Safari. It's just a question of maintaining that blob, and tbh I feel we get most value (and faster results) from latest; for me, targeting a few older versions plus major version changes is sufficient. Our tests take a LONG time to run because we run so many in the frontend and so many are time-sensitive (think of the time spent putting revisions into pads so the timeslider has enough content to be tested). Afaik it's about 6 minutes to run the suite, so if we run the tests on more than 4 devices (say 7), that's 12 minutes. I guess I need to spend longer brewing my cups of tea in the future :D
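To make the back-of-the-envelope math explicit: with a fixed number of concurrent Sauce Labs slots, wall-clock time grows in whole batches. A tiny illustrative sketch (the 4-slot limit and ~6-minute suite time are the figures from the comment above; the function itself is made up):

```javascript
// Rough wall-clock estimate for a Sauce Labs run: browsers are processed
// in batches limited by the number of concurrent slots.
function estimateWallClockMinutes(browserCount, concurrentSlots, suiteMinutes) {
  const batches = Math.ceil(browserCount / concurrentSlots);
  return batches * suiteMinutes;
}

console.log(estimateWallClockMinutes(4, 4, 6)); // 6  -> one batch
console.log(estimateWallClockMinutes(7, 4, 6)); // 12 -> two batches, as noted above
```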
I will submit a PR for this, but I'm not keen on using pinned versions and slowing the tests down even more.
> My proposal is getting rid of …

Yes, it does not auto-update, but this is good. Otherwise you do not know against which versions you are testing; it becomes time-dependent. Periodically review the test specifications and perform a change that bumps only the browser versions. No free lunch. :)
FWIW, for now I dropped it to just testing in Firefox. We're awaiting a response from Saucelabs as to why things don't run properly.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Any news from Saucelabs? I think you both have a point here. The frontend tests should run faster and in parallel. Do we have limits on parallelism (this should be visible when logged into Saucelabs), or can we ask Saucelabs to sponsor parallel runs? If not, we might ask BrowserStack or CrossBrowserTesting to sponsor parallelism. If that's not possible, we could think about hosting our own Selenium infrastructure; at least for some OS/browser combinations that's probably not too hard.

Different environments really help to ensure everything runs everywhere as expected. We also have some browser-specific code and legacy code (mostly for IE, I think) that can sometimes be dropped, but only if we ensure everything still works with all the browsers/environments we want to support. When new browser versions are released, it takes some weeks/months until most users have updated. So I think it's useful to have an understanding of which browser versions on which platforms run our tests seamlessly, even if some are slightly outdated.

RE pinned versions: Can we add some scheduled event that notifies us when a new browser version is released? This might also be useful for Node.js versions.

RE frontend test performance: If we cannot make them faster, we could sort out the slow ones and run them on a schedule, e.g. every 12 hours (this can be configured in Travis), while our "fast" ones are triggered on every commit.

I have some ideas to make frontend tests faster:

- Introduce "page objects" to better separate assertions from all the DOM queries etc., and optimize those whenever possible. This will also remove duplicated code and introduce separated areas that can be optimized (see the sketch after this comment).
- Despite the occasional flakiness (we may need to parse the outputs from Travis/Sauce with our own logic some day to identify flaky tests), I really do like the way we do end-to-end testing, because when we can reproduce a bug in our frontend tests, it's almost certainly a bug that happens in real environments. However, I am thinking about creating more unit-like tests whenever possible (with JSDOM). They might catch bugs that are not reproducible in real environments, but they should run as fast as our API tests do right now. So in the end we could have frontend tests for UI/UX, but also unit tests whenever it's possible to mock browser behaviour.

My priority is documentation and #2911 right now, but when I am done with that I'd like to implement some of the ideas above. If you think it's stupid, just let me know ;-)

I stumbled upon this when I implemented Istanbul coverage for the client tests. I have not issued a pull request for that yet, because it makes frontend tests even slower, and it's only useful if we send the results to Coveralls/Codecov to view them afterwards, which is not implemented yet.
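To illustrate the page-object idea, here is a minimal hypothetical sketch. It assumes Etherpad's frontend test helpers (`helper.padInner$`, `helper.waitFor`) and the expect.js assertion style the suite uses; `PadPage` itself is invented for illustration:

```javascript
// Hypothetical page object wrapping pad DOM access, so specs never touch
// selectors directly and slow queries can be optimized in one place.
const PadPage = {
  // first line of the pad body (assumes Etherpad's helper.padInner$)
  firstLine() {
    return helper.padInner$('div').first();
  },
  // type text into the pad and wait until it is rendered
  // (helper.waitFor returns a thenable, so it can be awaited)
  async typeText(text) {
    this.firstLine().sendkeys(text);
    await helper.waitFor(() => this.firstLine().text().indexOf(text) !== -1);
  },
};

it('inserts text', async function () {
  await PadPage.typeText('hello world');
  expect(PadPage.firstLine().text()).to.contain('hello world');
});
```

The assertion stays one line; everything slow or DOM-specific lives behind `PadPage` and can be profiled and improved without touching the specs.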
SL allows 4 parallel runs. We can do more tests; it just gets really slow / time-consuming as we add more. Sorry for the short response, am AFK.
I guess a PR doing 8 tests [4 latest, 4 pinned] would be ideal, but it would mean removing some very slow-running tests, e.g. the timeslider_revision test. I would estimate 10% of the tests take 90% of the time; investigating that might be a worthwhile endeavor. We could run automated tests for the majority of coverage, and then for releases (or to get full test coverage) the suite can be run locally on any browser. For example, the Sauce Labs tests could just run /tests/frontend?sl=true or something, and this would exclude specific tests known to run too slowly for an automated environment (see the sketch below). FWIW I added some backend unit tests for stuff like contentcollector.js, so you can refer to those. Steps IMHO:

1. Investigate which tests are slow and why.
2. Exclude the known-slow tests from automated runs (e.g. via /tests/frontend?sl=true).
3. Extend the Sauce Labs matrix to 8 browsers [4 latest, 4 pinned].
4. Keep full coverage by running the complete suite locally on any browser before releases.
Sounds good?
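A minimal sketch of how the `?sl=true` exclusion could work, assuming a `@slow` naming convention in spec titles and mocha's browser-side `grep`/`invert` API; none of this exists in the repo today:

```javascript
// Sketch: skip tests whose title contains "@slow" when the page is loaded
// with ?sl=true (i.e. in the automated Sauce Labs environment).
mocha.setup('bdd');
if (/[?&]sl=true/.test(window.location.search)) {
  // run everything EXCEPT specs tagged @slow
  mocha.grep(/@slow/).invert();
}

// a spec opts out of automated runs by tagging itself:
describe('timeslider revisions @slow', function () {
  it('creates many revisions and checks the timeslider', function () {
    // ...
  });
});
```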
Added gitpay bounty :)
After some long digging... I crawled Travis' job logs (unfortunately I can't get beyond the last ~1000 builds) to check the output of old builds. I had the feeling that I had not seen parallel builds for a long time (we have had 4 browsers configured for some time now, but frontend tests took about 20 minutes altogether; sometimes they were faster, but in those cases there was always some browser failing). The reason is probably in 7729e5a. I don't know what's wrong there, but I changed it back to … There is another bug: sometimes tests fail while the return value is 0 (https://travis-ci.org/github/ether/etherpad-lite/jobs/706736416, grep for TypeError). Now it's 20 minutes for 23 browser/OS combinations: https://travis-ci.org/github/ether/etherpad-lite/jobs/706743580
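On the exit-code bug: whatever the actual fix in `remote_runner.js` turns out to be, the general shape is to fold every parallel runner's result into the process exit code. A hedged sketch, where `browserMatrix` and `runSuiteOn` are hypothetical placeholders:

```javascript
// Collect pass/fail per browser and only exit 0 if every runner passed.
let anyFailed = false;

async function runAllBrowsers(browsers) {
  await Promise.all(browsers.map(async (browser) => {
    const ok = await runSuiteOn(browser); // hypothetical: drives one Sauce session
    if (!ok) anyFailed = true;
  }));
  process.exit(anyFailed ? 1 : 0);
}

// an uncaught TypeError in a runner must also fail the build, not resolve silently:
runAllBrowsers(browserMatrix).catch((err) => {
  console.error(err);
  process.exit(1);
});
```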
Looks like IE tests are failing due to lack of Promise support?
Yep.
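If the failures really come down to a missing global `Promise`, the conventional remedy is a polyfill loaded before the test code runs. A sketch, assuming the test page is bundled so that `require` is available (`es6-promise` is a real package; everything else here is illustrative):

```javascript
// Load a Promise polyfill only where the global is missing (IE 11 and older).
if (typeof Promise === 'undefined') {
  require('es6-promise').polyfill(); // installs a spec-compliant global Promise
}
```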
@JohnMcLear @muxator if it is still unsolved, why is it closed?
I pinned it.
Is it still open for the bounty program?
@aryamanpuri We're open for students to jump in, fix things and send us invoices for time. Look at the open issues and start debugging them!
`/tests/frontend/travis/remote_runner.js` performs its tests against the latest version of every browser. I am not sure about this: we have two moving targets: the codebase, which evolves, and the browsers, which evolve at an even faster pace.
We obviously care that the software works with as many browsers as possible, but what we control is Etherpad, not the browsers.
At the cost of having to perform many small bumps now and then, I would prefer to have a set of fixed browser versions against which the code is tested.
Or maybe both: a fixed set AND the latest ones.
@JohnMcLear, what do you think? What would be the limitations of doing this? More time to test? Not technically possible at all due to Saucelabs limits, etc.?
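For concreteness, the "fixed set AND latest" option could live as a single capabilities list in `remote_runner.js`, something like the sketch below. The capability keys match Sauce Labs' legacy format, but the exact versions and platforms are placeholders to be bumped periodically:

```javascript
// One place to maintain the browser matrix: a pinned set that only changes
// via explicit version-bump commits, plus moving "latest" targets.
const pinnedBrowsers = [
  {browserName: 'firefox', version: '68.0', platform: 'Windows 10'},
  {browserName: 'chrome', version: '78.0', platform: 'Windows 10'},
  {browserName: 'safari', version: '12.0', platform: 'macOS 10.14'},
];

const latestBrowsers = [
  {browserName: 'firefox', version: 'latest', platform: 'Windows 10'},
  {browserName: 'chrome', version: 'latest', platform: 'Windows 10'},
];

module.exports = [...pinnedBrowsers, ...latestBrowsers];
```

A version bump then touches only `pinnedBrowsers`, which keeps the proposed "small periodic bumps" easy to review.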