
frontend tests via Saucelabs should use pinned browsers versions #3763

Closed
muxator opened this issue Mar 26, 2020 · 18 comments
Labels: tests, Waiting on Testing, wontfix

@muxator
Contributor

muxator commented Mar 26, 2020

/tests/frontend/travis/remote_runner.js performs its tests against the latest version of every browser.

I am not sure about this: we have two moving targets: the codebase, which evolves, and the browsers, which evolve at an even faster pace.

We obviously care that the software works with as many browsers as possible, but what we control is Etherpad, not the browsers.

At the cost of having to perform many small bumps now and then, I would prefer to have a set of fixed browser versions against which the code is tested.

Or maybe both: a fixed set AND the latest ones.

@JohnMcLear, what do you think? What would be the limitations of doing this? More time to test? Not technically possible at all due to Saucelabs limits, etc...?
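
(For illustration, a minimal sketch of what a pinned matrix could look like, assuming remote_runner.js keeps its browser list as an array of Selenium capabilities handed to Saucelabs; the field names and version numbers here are illustrative, not what the runner currently ships:)

```javascript
// Hypothetical browser matrix for remote_runner.js: a fixed set of pinned
// versions, optionally extended with "latest" for early-warning coverage.
const pinnedBrowsers = [
  {browserName: 'firefox', version: '68.0', platform: 'Windows 10'},
  {browserName: 'chrome',  version: '80.0', platform: 'Windows 10'},
  {browserName: 'safari',  version: '13.0', platform: 'macOS 10.15'},
];

// Optional moving targets, kept separate so a failure here clearly means
// "a new browser broke us" rather than "we broke ourselves".
const latestBrowsers = [
  {browserName: 'firefox', version: 'latest', platform: 'Windows 10'},
  {browserName: 'chrome',  version: 'latest', platform: 'Windows 10'},
];

module.exports = [...pinnedBrowsers, ...latestBrowsers];
```

With a split like this, a failure in the pinned set points at a regression in Etherpad itself, while a failure in the latest set points at a new browser release.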

@JohnMcLear
Member

JohnMcLear commented Mar 26, 2020

Afaik we get 5 concurrent tests (but this appears to be 4), so we could easily pin against a few and go latest against others. I think nowadays we should always target latest stable because that's what browser vendors are keen to deploy.

So TL;DR: we could target 3 pinned browsers (FF, Chrome, Safari, for example). It's just a question of maintaining this blob, and tbh I feel we get the most value, and quicker results, from latest; for me, targeting a few older and major version changes is sufficient.

Our tests take a LONG time to run because we do so many in the frontend and so many are time restrictive (think time points of putting revisions into pads to create a timeslider with enough content to be tested). Afaik it's like !!6 minutes to run the tests!! So if we run the tests on more than 4 devices (let's say 7), that's two waves of up to 4 concurrent sessions, i.e. 12 minutes. I guess I need to spend longer brewing my cups of tea in the future :D

@JohnMcLear
Member

I will submit a PR for this but I'm not keen on using pinned versions and slowing the tests *even* more.

@muxator
Contributor Author

muxator commented Apr 3, 2020

My proposal is getting rid of the latest browser versions.

Yes, it does not auto-update, but this is good. Otherwise you do not know against which versions you are testing. It becomes time-dependent.

Periodically, review the test specifications and perform a change that bumps only the browser versions.

No free lunch. :)

@JohnMcLear
Member

FWIW for now I dropped it to just test in Firefox. We're awaiting a response from Saucelabs as to why things don't run properly.

@stale

stale bot commented Jun 18, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the wontfix label Jun 18, 2020
stale bot closed this as completed Jun 25, 2020
@webzwo0i
Member

webzwo0i commented Jul 9, 2020

Any news from Saucelabs? I think you both have a point here. The frontend tests should run faster and in parallel. Do we have limits on parallelism (this should be visible when logged into Saucelabs) or can we ask Saucelabs to sponsor parallel runs? If not, we might ask BrowserStack or CrossBrowserTesting to sponsor parallelism. If that's not possible, we could think about hosting our own Selenium infrastructure; at least for some OS/browser combinations it's probably not too hard.
Also, I would love to have coverage for Android & iPhone...

Different environments really help to ensure everything runs everywhere as expected. Also, we have some browser-specific code:

```
$ grep -Pr 'browser\.(?=msie|safari|chrome|firefox|opera|modernIE)' src/static/ | wc -l
42
```

and legacy code (I think mostly for IE) that sometimes can be dropped, but only if we ensure everything still works with all the browsers/envs we want to support. When new browser versions are released, it takes some weeks/months until most of the users are updated. So I think it's useful if we had an understanding of which browser versions on which platform run our tests seamlessly, even if it's a slightly outdated version.

RE pinned versions: can we add some scheduled event that notifies us if a new browser version is released? This might also be useful for Node.js versions.
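
(A minimal sketch of such a check, runnable from a scheduled CI job such as a Travis cron build; Mozilla's product-details endpoint is public, but the pinned version constant is illustrative:)

```javascript
// check-browser-versions.js - run from a scheduled CI job. Exits non-zero
// when the latest stable Firefox differs from our pinned one, which makes
// the scheduled build fail and thereby "notifies" us.
const https = require('https');

const pinnedFirefox = '68.0'; // illustrative; would live next to remote_runner.js

https.get('https://product-details.mozilla.org/1.0/firefox_versions.json', (res) => {
  let body = '';
  res.on('data', (chunk) => body += chunk);
  res.on('end', () => {
    const latest = JSON.parse(body).LATEST_FIREFOX_VERSION;
    if (latest !== pinnedFirefox) {
      console.error(`Firefox ${latest} is out, tests still pin ${pinnedFirefox}`);
      process.exit(1);
    }
    console.log(`Pinned Firefox ${pinnedFirefox} is still the latest stable`);
  });
}).on('error', (err) => { console.error(err); process.exit(1); });
```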

RE frontend tests performance: if we cannot make them faster, we could sort out the slow ones and run them on a schedule, e.g. every 12 hours (this can be configured in Travis), while our "fast" ones are triggered on every commit. I have some ideas to make frontend tests faster: introduce "page objects" to better separate assertions from all the DOM queries etc., and optimize those whenever possible. This will also remove duplicated code and create separated areas that can be optimized.
All the setTimeouts need to be revised: I will try to find a way for browsers to listen for ACCEPT_COMMIT/CHAT_MESSAGE on their websocket channel, so they don't need to wait some predefined number of seconds for the server.
It's useful to have a new pad for most of the tests, because it isolates prior tests from subsequent ones. However, when done right, I think we could sometimes test without creating a new pad.

Despite the occasional flakiness (we may need to parse the outputs from Travis/Sauce with our own logic some day to identify flaky runs), I really do like the way we do end-to-end testing, because when we can reproduce a bug in our frontend tests, it's almost certainly a bug that happens in real environments. However, I am thinking about creating more unit-like tests whenever possible (with JSDOM). They might catch bugs that are not reproducible in real environments, but they should run as fast as our API tests do right now. So in the end we could have frontend tests for UI/UX, but also unit tests whenever it's possible to mock browser behaviour.
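
(A sketch of what such a unit-like test could look like with mocha + JSDOM; the markup and assertion are illustrative placeholders for a real module under test:)

```javascript
// A unit-like DOM test: no Sauce session, no pad creation, so it runs in
// milliseconds like the API tests do.
const assert = require('assert');
const {JSDOM} = require('jsdom');

describe('DOM helpers without a real browser', function () {
  it('finds the bold node inside pad-like markup', function () {
    const dom = new JSDOM('<div id="innerdocbody"><b>hi</b></div>');
    const body = dom.window.document.getElementById('innerdocbody');
    // A real test would require() e.g. contentcollector.js and feed it `body`.
    assert.strictEqual(body.querySelector('b').textContent, 'hi');
  });
});
```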

My prio is documentation and #2911 right now, but when I am done with that I'd like to implement some of the ideas above. If you think it's stupid, just let me know ;-) I stumbled upon this when I implemented Istanbul coverage for the client tests. I have not issued a pull request for this yet, because it makes frontend tests even slower and it's only useful if we send the results to Coveralls/Codecov to view them afterwards, which is not implemented yet.

@JohnMcLear
Member

JohnMcLear commented Jul 9, 2020

SL allows 4 parallel. We can do more tests, it's just really slow / time consuming as we add more. Sorry for the short response, am afk.

@JohnMcLear
Member

I guess a PR doing 8 tests [4 latest, 4 pinned] would be ideal, but it would mean removing some very slow running tests, e.g. the timeslider_revision test.

I would estimate 10% of the tests take 90% of the time. Investigating that might be a worthwhile endeavor? We could run automated tests for the majority of coverage; then, for a release or for full test coverage, the suite can be run locally on any browser. For example, the Sauce Labs tests could just run /tests/frontend?sl=true or something, and this would exclude specific tests known to run too slowly for an automated environment.

FWIW I added some backend unit tests for stuff like contentcollector.js so you can refer to those.

Steps IMHO:

  1. Put pinned versions back in, see what happens.
  2. If there are timeouts, identify slow running tests.
  3. Initially comment out slow running tests using xit.
  4. If xit-ing slow running tests makes things "work", then look to introduce a method for not running slow tests in the SL environment (one possible shape is sketched below).

Sounds good?
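
(For step 4, a sketch of how the ?sl=true idea from above could gate slow specs inside the mocha suite; the flag handling and the slowIt helper are assumptions, not existing code:)

```javascript
// Hypothetical gate in the frontend test bootstrap: when the runner page is
// opened as /tests/frontend?sl=true (the automated Sauce Labs environment),
// slow specs are registered with xit() and reported as pending instead of run.
const isSauceRun = new URLSearchParams(window.location.search).get('sl') === 'true';
const slowIt = isSauceRun ? xit : it;

describe('timeslider', function () {
  slowIt('replays a pad with hundreds of revisions', function (done) {
    // ...the expensive revision-pumping test, skipped on Sauce Labs...
    done();
  });
});
```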

@JohnMcLear
Member

JohnMcLear commented Jul 9, 2020

https://gitpay.me/#/task/327

Added gitpay bounty :)

@webzwo0i
Member

After some long digging... I crawled Travis' job logs (unfortunately I can't get beyond the last ~1000 builds) to check the output of old builds. I had the feeling that I had not seen parallel builds for a long time (we have had 4 browsers configured for some time now, but frontend tests took about 20 minutes altogether; sometimes they were faster, but in those cases there was always some browser failing). The reason is probably in 7729e5a. I don't know what's wrong there, but I changed it back to chain, added some browser/OS combinations (23 now) and increased the concurrency of queue.push. On Saucelabs I see that there are up to 8 browser sessions running in parallel.
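
(For reference, the pattern in question, assuming remote_runner.js drives its sessions with the async library's queue, v2 style; startSauceSession and browserMatrix are illustrative names:)

```javascript
const async = require('async');

// One task per browser/OS combination; the worker opens a Sauce Labs session
// and runs the frontend suite in it.
const sauceTestWorker = async.queue(function (capabilities, callback) {
  startSauceSession(capabilities, callback); // illustrative
}, 6); // second argument is the concurrency: Sauce sessions run in parallel

browserMatrix.forEach(function (caps) { sauceTestWorker.push(caps); });
sauceTestWorker.drain = function () {
  console.log('all browser/OS combinations finished');
};
```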

There is another bug, because sometimes tests fail while the return value is 0 (https://travis-ci.org/github/ether/etherpad-lite/jobs/706736416, grep for TypeError).

Now it's 20 minutes for 23 browser/OS combinations: https://travis-ci.org/github/ether/etherpad-lite/jobs/706743580
SL says it's only 15 minutes: https://app.saucelabs.com/builds/52ef8d4efacd4240b204ae37c2ddca77

@JohnMcLear
Member

Looks like IE tests are failing due to lack of promises?

@webzwo0i
Member

Yep.
RE pinned+latest: I did not find a way to detect whether our "latest" equals a pinned version prior to issuing sauceWorkers. In the end we uselessly fill up our SL resources if we run "latest" and later a pinned version that happens to be the same as "latest". However, on our release branches this is probably not a problem. We need to write some logic for that, so it pulls a different remote_runner.js with broader coverage for release testing.
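
(A sketch of that dedup logic; resolveLatestVersion is hypothetical, e.g. a lookup against a vendor release feed like the one in the notification sketch above:)

```javascript
// Drop a pinned entry when "latest" resolves to the same version, so we
// don't burn a Sauce Labs slot on the same browser twice.
async function dedupeMatrix(matrix) {
  const seen = new Set();
  const deduped = [];
  for (const caps of matrix) {
    const version = caps.version === 'latest'
        ? await resolveLatestVersion(caps.browserName) // hypothetical lookup
        : caps.version;
    const key = [caps.browserName, version, caps.platform].join('|');
    if (!seen.has(key)) {
      seen.add(key);
      deduped.push(caps);
    }
  }
  return deduped;
}
```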

@alexanmtz
Contributor

> https://gitpay.me/#/task/327
>
> Added gitpay bounty :)

https://gitpay.me/#/task/357

https://gitpay.me/#/task/357

@alexanmtz
Contributor

@JohnMcLear @muxator if it is still unsolved, why is it closed?

@JohnMcLear
Member

I pinned it.

@aryamanpuri

Is it still open for the bounty program?
I am a CS college student interested in web development.
I am interested in this project and looking for long-term opportunities.

@JohnMcLear
Member

@aryamanpuri We're open for students to jump in, fix things and send us invoices for time. Look at the open issues and start debugging them!
