
Conversation

@valeriupredoi (Collaborator) commented Oct 28, 2025

Description

Contributes to avoiding #140 and makes #141 work, as @kmuehlbauer astutely pointed out.

Before you get started

Checklist

  • This pull request has a descriptive title and labels
  • This pull request has a minimal description (most was discussed in the issue, but a two-liner description is still desirable)
  • Unit tests have been added (if codecov test fails)
  • Any changed dependencies have been added or removed correctly (if need be)
  • If you are working on the documentation, please ensure the current build passes
  • All tests pass

codecov bot commented Oct 28, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 74.66%. Comparing base (a2fa21b) to head (07f5f57).
⚠️ Report is 4 commits behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #142   +/-   ##
=======================================
  Coverage   74.66%   74.66%           
=======================================
  Files          12       12           
  Lines        2712     2712           
  Branches      407      407           
=======================================
  Hits         2025     2025           
  Misses        576      576           
  Partials      111      111           


@valeriupredoi (Collaborator, Author)

@kmuehlbauer shall we get this one in, mate? I'm not seeing any flakes in the nightly tests at all, but better safe than sorry 🍺

@kmuehlbauer (Collaborator)

@valeriupredoi Yes! Let me do one final check locally. Hang on a minute.

@valeriupredoi (Collaborator, Author)

take yer time, no rush at all 🍺

@kmuehlbauer (Collaborator) commented Oct 29, 2025

OK, so the current setup with --reruns=3 will rerun any test case that fails, marked or not.

From what I've observed, tests marked with @pytest.mark.flaky are only rerun once.

So we could remove --reruns=3 to catch flaky tests directly, and mark each flaky test like @pytest.mark.flaky(reruns=3, only_rerun="ValueError").

Update: ValueError is the error that is raised in the flaky case.
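
A minimal sketch of that marker in action (assuming the pytest-rerunfailures plugin, which provides @pytest.mark.flaky; the test name and failure mode below are hypothetical stand-ins, not the repo's actual flaky test):

```python
# Hedged sketch: requires pytest plus the pytest-rerunfailures plugin.
import random

import pytest


@pytest.mark.flaky(reruns=3, only_rerun="ValueError")
def test_sometimes_raises_valueerror():
    # Rerun up to 3 times, but only when the failure is a ValueError;
    # any other exception or an assertion failure fails immediately.
    if random.random() < 0.5:
        raise ValueError("simulated flaky failure")
```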

@valeriupredoi (Collaborator, Author)

(reruns=3, only_rerun="ValueError")

Sounds like a good plan! See e82ec3c and 07f5f57 - I reckon that's the only flaky test we've ever had, so just rerun that one. Though I'm still quite puzzled why I'm not seeing any failures in the nightly tests (just in PRs).
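
For completeness, a sketch of the intended end state (hypothetical test names; the real change is in e82ec3c and 07f5f57): with the blanket --reruns=3 removed from the pytest options, only the marked test is retried and everything else fails fast.

```python
# Hedged sketch of the end state, assuming pytest-rerunfailures is installed
# and --reruns=3 has been dropped from the global pytest options.
import pytest


@pytest.mark.flaky(reruns=3, only_rerun="ValueError")
def test_the_known_flaky_one():
    # The one known-flaky test: retried up to 3 times, on ValueError only.
    assert True


def test_everything_else():
    # Unmarked tests are no longer rerun; a failure here fails immediately.
    assert 1 + 1 == 2
```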

@kmuehlbauer (Collaborator) left a comment

Let's see how this works!

@valeriupredoi merged commit 3cbc35d into main on Oct 29, 2025 (7 checks passed)
@valeriupredoi deleted the pytest_reruns branch on October 29, 2025 at 15:14
@valeriupredoi (Collaborator, Author)

cheers muchly, Kai! 🍺
