Disable rescue & retry in `Ferrum::Frame::Runtime#call` on existing nodes #360
Cuprite/Ferrum is an amazing project, thank you for all the hard work! Our migration from Selenium has so far been painless. One spec gave us trouble, though. The spec tests a report page which contains a table. The spec clicks on filters, which causes the page to re-fetch the table and replace it. After each click, we test for the new table content.
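For illustration, a hypothetical sketch of the spec's shape (the path, filter labels, and table contents are invented, not taken from our actual suite):

```ruby
# Hypothetical feature spec: each click triggers an AJAX request that
# replaces the whole table, after which we assert on the new cells.
RSpec.describe "Report filters", type: :feature, js: true do
  it "updates the table when a filter is applied" do
    visit "/reports/1"

    click_on "Foo filter"                                 # table is re-fetched and swapped out
    expect(page).to have_css("td", text: "Filtered Foo")  # waits up to default_max_wait_time

    click_on "Bar filter"
    expect(page).to have_css("td", text: "Filtered Bar")
  end
end
```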
This heavy spec generally takes around 15 seconds to execute. Yet, with Cuprite, it fails half the time ("could not find td with text: 'Filtered Foo'"). Although increasing `Capybara.default_max_wait_time` to 30 seconds makes this spec more robust, it also occasionally increases the runtime to over a minute. Given that the AJAX request to fetch the new table only takes a few milliseconds, the network requests do not appear to be the cause of this sporadic slowdown.

I tracked the cause down to the retry in `Ferrum::Frame::Runtime#call`:

ferrum/lib/ferrum/frame/runtime.rb, lines 103 to 105 in ec6d9e5
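Paraphrased, those lines follow roughly this pattern (a simplified sketch, not the verbatim source; `evaluate_on_page` and `INTERVAL` are stand-ins for the actual internals):

```ruby
# Simplified sketch of the current behaviour: a NodeNotFoundError raised
# while evaluating is swallowed and the call is retried until the outer
# timeout runs out, even when the node id is known to be gone for good.
INTERVAL = 0.1

def call(expression, on: nil)
  evaluate_on_page(expression, on: on) # hypothetical stand-in for the CDP call
rescue Ferrum::NodeNotFoundError
  sleep(INTERVAL)
  retry
end
```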
After Capybara collects all the `td` nodes, it queries these nodes for a potential `text` match. If these nodes were replaced in the meantime (i.e. after the collection but before the execution of the match query), the query on each of these stale, non-existent `td`s would cause `Ferrum::Frame::Runtime#call` to retry and finally time out (roughly 0.5 s for each node). Depending on the size of the collection, the query can take multiple seconds (an accumulation of timeouts), causing the spec to fail (or heavily increasing the runtime given a high enough `Capybara.default_max_wait_time`).

In my limited understanding, the retry upon encountering `NodeNotFoundError` on a known node seems unnecessary. Chrome tells us that the node with the given id is not around anymore (code 32000, "Could not find node with given id"), so a retry will never fix this.

The PR disables the rescuing and retrying when querying on an existing node. With this change, our troublesome spec passes consistently and with a stable runtime. Please let me know if I've overlooked any potential side effects or if there are additional changes needed, such as more tests.
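For clarity, the shape of the proposed behaviour, continuing the simplified sketch from above (not the literal diff in this PR):

```ruby
# When the call targets an already-resolved node (the `on:` argument) and
# Chrome answers "Could not find node with given id" (code 32000), re-raise
# immediately instead of sleeping and retrying.
def call(expression, on: nil)
  evaluate_on_page(expression, on: on) # hypothetical stand-in for the CDP call
rescue Ferrum::NodeNotFoundError
  raise if on # the node is gone for good; retrying cannot bring it back

  sleep(INTERVAL)
  retry
end
```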