doc/contributions/: enhance Python test framework #1348
base: master
Changes from all commits: b1abca1, 2369ebc, 96f62e1, 80021c7
@@ -1,12 +1,96 @@

# Coding style

The Shadow project is developed in C,
with Python used for testing purposes.
Each language follows its own established conventions and style guidelines.

## C code

* For general guidance, refer to the
  [Linux kernel coding style](https://www.kernel.org/doc/html/latest/process/coding-style.html).

* Patches that change the existing coding style are not welcome, as they make
  downstream porting harder for distributions.

### Indentation

Tabs are preferred over spaces for indentation. Loading the `.editorconfig`
file in your preferred IDE may help you configure it.

## Python code

Python code in the Shadow project is primarily found
in the system test framework (`tests/system/`).
Follow these conventions for consistency:

### General conventions

* **PEP 8 compliance**: follow [PEP 8](https://pep8.org/) style guidelines.

> **Review comment:** (Took a quick look under .github/workflows and share/ansible, and didn't see anything, but maybe I missed it) Can we have CI enforce these?
>
> **Reply:** We are already doing it since #1349 was merged 😉
>
> **Reply:** It's called

* **Code quality enforcement**: all Python code must pass flake8, pycodestyle, isort, mypy, and black checks.
* **Import organization**: use absolute imports with `from __future__ import annotations`.
* **Type hints**: use modern type hints (e.g., `str | None` instead of `Optional[str]`); see the sketch after this list.
* **Line length**: maximum 119 characters per line.
* **Configuration**: all formatting and linting settings are defined in `tests/system/pyproject.toml`.
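
A minimal sketch of these conventions in practice (the helper below is hypothetical, not part of the framework):

```python
from __future__ import annotations


def find_home_directory(user: str, passwd_path: str = "/etc/passwd") -> str | None:
    """Return the home directory of `user`, or None if the user does not exist."""
    with open(passwd_path, encoding="utf-8") as f:
        for line in f:
            fields = line.rstrip("\n").split(":")
            if fields[0] == user:
                return fields[5]  # pw_dir is the sixth colon-separated field
    return None
```

Note the `str | None` return annotation: with `from __future__ import annotations`, this modern union syntax is safe in signatures even on older Python versions.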

### Test code style

**File and test naming:**
* Test files: `test_<command>.py` (e.g., `test_useradd.py`).
* Test functions: `test_<command>__<specific_behavior>` using double underscores.
* Use descriptive names that clearly indicate what is being tested.

**Test structure (AAA pattern):**
```python
@pytest.mark.topology(KnownTopology.Shadow)
def test_useradd__add_user(shadow: Shadow):
    """
    :title: Descriptive test title
    :setup:
        1. Setup steps
    :steps:
        1. Test steps
    :expectedresults:
        1. Expected outcomes
    :customerscenario: False
    """
    # Arrange
    setup_code_here()

    # Act
    result = shadow.command_to_test()

    # Assert
    assert result is not None, "Descriptive failure message"
```

> **Review comment** (on `:customerscenario:`)**:** What does customerscenario mean?
>
> **Reply:** This means that a customer or end user has found the issue. It is usually accompanied by a ticket in the marker (i.e. `gh=1348`).

**Avoiding flakiness:**
* Use deterministic test data (avoid random values); see the sketch after this list.
* Clean up test artifacts properly (handled automatically by the framework).
* Use appropriate timeouts for time-sensitive operations.
* Leverage the framework's automatic backup/restore functionality.
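
A small hypothetical contrast of deterministic vs. random test data:

```python
# Deterministic: every run creates the same account, so a failure seen in CI
# can be reproduced locally with identical inputs.
TEST_USER = "tuser"
TEST_UID = 10001

# Avoid non-deterministic data such as:
#   TEST_USER = f"user{random.randint(0, 99999)}"
```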

### Formatting and imports

**Required tools:**
* **flake8**: for style guide enforcement and error detection.
* **pycodestyle**: for PEP 8 style checking.
* **isort**: for import sorting with profiles that work well with Black.
* **Black**: for consistent code formatting.
* **mypy**: for static type checking.

**Import order** (illustrated in the sketch below):
1. Standard library imports.
2. Third-party imports (`pytest`, `pytest_mh`).
3. Local framework imports (`framework.*`).
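
A sketch of a test module header in this order (the exact `framework.*` module paths are assumptions for illustration; check `tests/system/` for the real ones):

```python
# 1. Standard library imports
from __future__ import annotations

# 2. Third-party imports
import pytest

# 3. Local framework imports (module paths are illustrative assumptions)
from framework.roles.shadow import Shadow
from framework.topology import KnownTopology
```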

### Error handling and logging

**Error handling:**
* Prefer explicit exceptions over silent failures.
* Use `ProcessError` for command execution failures (see the sketch after this list).
* Provide context in error messages.
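
A rough sketch of these points in a test, assuming the framework raises `ProcessError` when a command fails and that the `Shadow` role exposes a `useradd()` method (the import paths and method name are assumptions based on the patterns above, not confirmed API):

```python
from __future__ import annotations

import pytest
from pytest_mh.conn import ProcessError  # exact import path is an assumption

from framework.roles.shadow import Shadow  # illustrative path
from framework.topology import KnownTopology  # illustrative path


@pytest.mark.topology(KnownTopology.Shadow)
def test_useradd__duplicate_user_fails(shadow: Shadow):
    """
    :title: Adding an existing user fails explicitly
    """
    shadow.useradd("tuser")

    # Prefer an explicit, asserted exception over silently ignoring a failure.
    with pytest.raises(ProcessError):
        shadow.useradd("tuser")
```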

**Logging guidance:**
* Use structured logging for test utilities in `tests/system/framework/`.
* Include relevant context (command, parameters, expected vs actual results); see the sketch below.
* Leverage the framework's automatic artifact collection for debugging.
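
For instance, a hypothetical assertion helper in `tests/system/framework/` might log expected vs. actual values before failing (the helper name is illustrative, and plain `logging` stands in for whatever logger the framework provides):

```python
from __future__ import annotations

import logging

logger = logging.getLogger(__name__)


def assert_login_shell(passwd_fields: list[str], expected: str) -> None:
    """Hypothetical helper: log context, then fail with expected vs. actual."""
    actual = passwd_fields[6]  # pw_shell is the seventh colon-separated field
    logger.info("login shell check: expected=%r actual=%r", expected, actual)
    assert actual == expected, f"login shell: expected {expected!r}, got {actual!r}"
```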

> **Review comment:** Let's say that the groupdel test fails. Is there a command like `--shell-on-fail` that can be passed to pytest to give you a bash shell in the container after the failure, so you can look around? Or does it always leave the container running so you can launch a shell in it, in the failed state? (Either way, that would be good info in the debug documentation here :) )
>
> **Reply:** I'm not aware of any command that would give you a shell after the test failed, but the container is left running after all tests run; this way you can look around if anything fails. I've updated this section to make this point clear.