
Type tests #2213

Draft · mplatypus wants to merge 18 commits into master
Conversation

mplatypus (Contributor)
Summary

Add typing to all tests, along with a pipeline that forces tests to stay typed.

This will make writing new tests easier and cleaner.

@mplatypus mplatypus requested a review from davfsa as a code owner March 16, 2025 13:23
@mplatypus mplatypus marked this pull request as draft March 16, 2025 13:24
davfsa (Member) left a comment

This PR is going to be insanely hard to review, so I'm mostly going to base it on tests passing + the pipeline passing.

Any questionable new additions I will comment on as I slowly review this.

Comment on lines +31 to +32
guild_id=mock.Mock(), # FIXME: Can this be pulled from the actual fixture?
parent_id=mock.Mock(), # FIXME: Can this be pulled from the actual fixture?
davfsa (Member):

Doing .id would work
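A quick sketch of the suggestion, assuming the fixture exposes an `id` attribute; the fixture stand-in and id value here are illustrative, not taken from the actual test file:

```python
import unittest.mock as mock

# Hypothetical stand-in for the real hikari_partial_guild fixture.
hikari_partial_guild = mock.Mock(id=123456789)

channel = mock.Mock(
    guild_id=hikari_partial_guild.id,  # pulled from the fixture via .id
    parent_id=hikari_partial_guild.id,
)
print(channel.guild_id)  # 123456789
```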

mplatypus (Contributor, Author):

With this, I did have it get the hikari_partial_guild object and then use the id from it, but I kind of changed how I use it internally. This is more a me problem.

@@ -0,0 +1,126 @@
from __future__ import annotations
davfsa (Member):

This file seems a bit arbitrary, what's the idea behind it?

mplatypus (Contributor, Author):

conftest.py is a pytest feature that allows any fixtures placed within it to be accessible to any tests within that directory (recursively). The goal for this is to put all the fixtures that are identical and used in more than one file here, to minimize the amount of duplicated fixtures. I simply haven't gotten around to moving them yet; it's on my list of to-dos.
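A minimal sketch of the pattern being described; the fixture name `mock_app` is illustrative and not taken from this PR:

```python
# tests/conftest.py (hypothetical)
from __future__ import annotations

import unittest.mock as mock

import pytest


@pytest.fixture
def mock_app() -> mock.Mock:
    # Every test module under this directory (recursively) can request
    # `mock_app` by name, without importing anything from conftest.py.
    return mock.Mock()
```

A test in any file below this directory can then simply declare `def test_something(mock_app): ...` and pytest injects the fixture automatically.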

davfsa (Member):

That's nice to know, I didn't know about that feature. It would be nice to document it in the file's docstring (as well as add the missing license :P)

@mplatypus (Contributor, Author)

> This PR is going to be insanely hard to review, so I'm mostly going to base it on tests passing + the pipeline passing.
>
> Any questionable new additions I will comment on as I slowly review this.

Yeah, it's all good. I know this is going to be a massive PR. I will mark things that I think you need to see with a FIXME: and some reason as to why I think it's important to view and/or whether I have questions about it.

@mplatypus (Contributor, Author)

Okay, after making some more changes, I ran nox and a few errors are occurring, but the weirdest part is that they only occur sometimes.

FAILED tests/hikari/impl/test_rest.py::TestRESTClientImplAsync::test_edit_guild - TypeError: An asyncio.Future, a coroutine or an awaitable is required
FAILED tests/hikari/impl/test_buckets.py::TestRESTBucketManager::test_start - TypeError: object MagicMock can't be used in 'await' expression
FAILED tests/hikari/impl/test_event_manager_base.py::TestEventStream::test___anext___waits_for_next_event - TypeError: An asyncio.Future, a coroutine or an awaitable is required

The above three tests are the only ones I have seen fail. Sometimes it's two, sometimes it's all three, and other times it just happily passes.
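For context, the `can't be used in 'await' expression` errors above are the typical symptom of an awaited callable being replaced with a plain `Mock` instead of `AsyncMock`. A minimal, self-contained sketch (not hikari's actual test code) that reproduces both sides:

```python
import asyncio
import unittest.mock as mock


async def call(fetch) -> object:
    # Awaits whatever fetch() returns, like the production code under test.
    return await fetch()


# A plain Mock reproduces the error: Mock() returns another Mock,
# which is not awaitable.
try:
    asyncio.run(call(mock.Mock()))
    plain_mock_error = None
except TypeError as exc:
    plain_mock_error = str(exc)

# AsyncMock's call returns a coroutine, so the same code passes.
result = asyncio.run(call(mock.AsyncMock(return_value="mocked")))

print(plain_mock_error)
print(result)  # mocked
```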

@davfsa (Member)

davfsa commented Mar 17, 2025

> they only occur sometimes.

Test order is randomized every time nox is run; it's there to catch flaky tests, or tests that behave differently based on what order they are run in.

At the top of the pytest output you should see a seed flag with a random number next to it; if you pass it to nox like so: nox -s -- <flag_here>, then you will be able to reproduce it consistently.

Here I would advise you to look at what order the tests are executed in; the most probable reason is that whatever is running before the failing test is modifying the state of whatever is running afterwards.

If you provide me the flag, I can have a look too :)

I can also give a shot at reproducing it later

@mplatypus (Contributor, Author)

I didn't know it did this. That's pretty useful.

Below are some seeds that succeed/fail when run.

Successful seeds: 4147739088, 3310411651, 3105782950
Failure seeds: 4274378829, 1168526332, 620612017

davfsa added 4 commits March 17, 2025 11:02
Also remove all uses of contextlib.ExitStack in tests where simple
'with' chained blocks work

Signed-off-by: davfsa <[email protected]>
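The ExitStack refactor described in the commit above can be sketched as follows; the `Service` class and patch targets are hypothetical, not from the hikari test suite:

```python
import contextlib
import unittest.mock as mock


class Service:
    # Hypothetical stand-in for an object patched in a test.
    def ping(self) -> str:
        return "pong"

    def close(self) -> None:
        pass


svc = Service()

# Before: contextlib.ExitStack, which shines when the number of context
# managers is only known at runtime.
with contextlib.ExitStack() as stack:
    stack.enter_context(mock.patch.object(svc, "ping", return_value="mocked"))
    stack.enter_context(mock.patch.object(svc, "close"))
    before = svc.ping()

# After: a parenthesized chained `with` (Python 3.10+) handles a fixed set
# of managers with less machinery.
with (
    mock.patch.object(svc, "ping", return_value="mocked"),
    mock.patch.object(svc, "close"),
):
    after = svc.ping()

print(before, after)  # mocked mocked
```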
@davfsa (Member)

davfsa commented Mar 17, 2025

> I didn't know it did this. That's pretty useful.
>
> Below are some seeds that succeed/fail when run.
>
> Successful seeds: 4147739088, 3310411651, 3105782950
> Failure seeds: 4274378829, 1168526332, 620612017

Fixed :)

@davfsa davfsa added the labels testing (Improvements or additions to tests) and skip-fragment-check (Skip fragment checks for this PR as it doesn't need one) on Mar 17, 2025