Pytest: one test timing out should not affect the other tests #311
This is currently the intended behaviour for the autotester. We can only report back results for tests that actually ran, so if a tester exits early we report an error that requires the instructor to check the code manually.

Consider a scenario where 10 test cases are associated with a criterion out of 100 marks. Normally, all 10 test cases are run, and a student who passes 5 of them gets 50/100. However, if the tester stops completely after running only 3 tests and the student passes 2 of those, they would get 66/100 instead. That is not correct either, so MarkUs chooses not to assign a mark to the criterion.

To achieve the behaviour you describe, we could report the number of tests that should have been run (this can be determined at the test collection step), but we still wouldn't know whether the tests that didn't run would have passed or failed. Reporting an error signals that something unexpected happened and tells the instructor that the results from the tester need to be manually validated. So, yes, it does mean that instructors have to spend time inspecting the results, but the alternative is giving the student an incorrect grade.

For this specific case, you can use timeout decorators in your tests to set per-test timeouts. Some libraries I'd recommend:
timeout_decorator has been working wonderfully for me. It allows per-test timeouts. I set the overall MarkUs timeout to a number that's greater than all of my individual timeouts.
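For anyone looking for a concrete starting point, here is a minimal sketch of per-test timeouts with the timeout_decorator package (the 5-second limit and test names are illustrative, and the overall MarkUs timeout should be set higher than any individual limit, as described above):

```python
import time

import timeout_decorator


@timeout_decorator.timeout(5)  # fails this test with a TimeoutError after 5 seconds
def test_finishes_in_time():
    time.sleep(1)
    assert True


@timeout_decorator.timeout(5)  # the per-test limit stops this test instead of hanging the whole run
def test_never_terminates():
    while True:  # simulates a student solution stuck in an infinite loop
        pass
```

With a setup like this, the second test fails on its own timeout while the rest of the suite still runs and gets graded.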
Does anyone have experience with the other one that Misha mentioned?
re-opened from: MarkUsProject/Markus#5567
I understand this will require infrastructure changes in the backend, so I'm lodging this report in the hope that it will eventually be implemented.
Currently, if you run a set of tests like this on MarkUs:
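(A minimal illustrative sketch, since the original snippet isn't shown here; the test names and bodies are hypothetical, with the third test never terminating.)

```python
import time


def test_first():
    assert 1 + 1 == 2  # passes quickly


def test_second():
    assert sum([1, 2, 3]) == 6  # passes quickly


def test_third():
    while True:  # never terminates, so it hits the MarkUs runtime limit
        time.sleep(1)
```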
Because the third test hits the runtime limit, every test in this test suite is assigned a grade of 0. The desired behaviour is that only the test that times out gets a grade of 0, while the other tests are graded as usual.
The reason for requesting this fix is that it wastes precious instructor time to run these tests manually for the students whose submissions timed out, just so they can get part marks for the tests that do pass.
This may also be a limitation of pytest, but with some tinkering I'm sure there's a fix for it.