28 changes: 28 additions & 0 deletions doc/contributions/build_install.md
@@ -73,3 +73,31 @@ were running some of the CI checks locally:
```
share/container-build.sh
```

### Container troubleshooting

When working with containers for testing or development,
you may encounter issues.
Here are common troubleshooting steps:

**Post-test inspection:**
- **Container persistence**: After tests complete, containers are left running
to allow inspection of the test environment and debugging of any failures.
This enables you to examine logs, file states, and system configuration
that existed when tests ran.

**Container management:**
- **List containers**: `docker ps -a` to see all containers and their status.
- **Access container**: `docker exec -it <container-name> bash` to get shell access.
- **Container logs**: `docker logs <container-name>` to view container output.
- **Remove containers**: `docker rm <container-name>` to clean up stopped containers
  (a small script wrapping these inspection commands appears at the end of this subsection).

**Member:** Let's say that the groupdel test fails. Is there a command like `--shell-on-fail` that can be passed to pytest to give you a bash shell in the container after the failure, so you can look around? Or does it always leave the container running so you can launch a shell in it, in the failed state? (Either way, that would be good info in the debug documentation here :) )

**Collaborator Author:** I'm not aware of any command that would give you a shell after the test fails, but the container is left running after all tests run, so you can look around if anything fails. I've updated this section to make this point clear.

**Common issues:**
- **Container not found**: ensure you've run the Ansible playbook
to create the required containers.
- **Permission issues**: verify the container has proper privileges
for user/group operations.
- **Network connectivity**: check that containers can communicate
if tests involve network operations.
- **Resource constraints**: ensure sufficient disk space and memory
for container operations.
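
The inspection commands above can also be wrapped in a small helper script. The sketch below simply drives the docker CLI through Python's `subprocess` module; it is illustrative only and assumes Docker is installed and the test containers were left running.

```python
#!/usr/bin/env python3
"""Illustrative helper: show the status and recent logs of test containers."""
import subprocess
import sys


def docker(*args: str) -> str:
    # Run a docker subcommand and return its stdout; raises CalledProcessError
    # if the docker CLI exits with a non-zero status.
    return subprocess.run(
        ["docker", *args], check=True, capture_output=True, text=True
    ).stdout


if __name__ == "__main__":
    # List all containers (running and stopped) with their status.
    print(docker("ps", "-a", "--format", "{{.Names}}\t{{.Status}}"))
    # Show the last log lines of the container named on the command line.
    if len(sys.argv) > 1:
        print(docker("logs", "--tail", "50", sys.argv[1]))
```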
24 changes: 21 additions & 3 deletions doc/contributions/ci.md
@@ -12,9 +12,27 @@ be triggered locally by following the instructions specified in the

## System tests

The project runs system tests to verify functionality
across different environments
using two complementary approaches:

### Bash system tests

Legacy Bash system tests run on Ubuntu in a VM environment.
These provide coverage for Ubuntu-specific scenarios and legacy test cases.
You can run this step locally by following the instructions provided
in the [Tests](tests.md#bash-system-tests) page.

### Python system tests

The new Python system tests use pytest and pytest-mh,
running across multiple distributions (Fedora, Debian, Alpine, openSUSE)
in containerized environments.
These tests provide cross-distribution compatibility
and improved environment management compared to the Bash tests.

For local execution of Python system tests,
follow the instructions in the [Tests](tests.md#python-system-tests) page.

## Static code analysis

86 changes: 85 additions & 1 deletion doc/contributions/coding_style.md
@@ -1,12 +1,96 @@
# Coding style

The Shadow project is developed in C,
with Python used for testing purposes.
Each language follows its own established conventions and style guidelines.

## C code

* For general guidance, refer to the
[Linux kernel coding style](https://www.kernel.org/doc/html/latest/process/coding-style.html)

* Patches that change the existing coding style are not welcome, as they make
downstream porting harder for the distributions

### Indentation

Tabs are preferred over spaces for indentation. Loading the `.editorconfig`
file in your preferred IDE may help you configure it.

## Python code

Python code in the Shadow project is primarily found
in the system test framework (`tests/system/`).
Follow these conventions for consistency:

### General conventions

* **PEP 8 compliance**: follow [PEP 8](https://pep8.org/) style guidelines.
**Member:** Can we have CI enforce these? (Took a quick look under `.github/workflows` and `share/ansible`, and didn't see anything, but maybe I missed it.)

**Collaborator Author** (@ikerexxe): We are already doing it since #1349 was merged 😉 It's called python-linter and you can see it running in this PR.

* **Code quality enforcement**: all Python code must pass flake8, pycodestyle, isort, mypy, and black checks.
* **Import organization**: use absolute imports with `from __future__ import annotations`.
* **Type hints**: use modern type hints (e.g., `str | None` instead of `Optional[str]`); see the sketch below.
* **Line length**: maximum 119 characters per line.
* **Configuration**: all formatting and linting settings are defined in `tests/system/pyproject.toml`.
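
A minimal sketch of the conventions above (the `annotations` future import plus modern built-in and union type hints); the function is purely illustrative and not part of the framework:

```python
from __future__ import annotations


def find_user(name: str, users: list[str]) -> str | None:
    """Return the matching user name, or None if it is not present."""
    return name if name in users else None
```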

### Test code style

**File and test naming:**
* Test files: `test_<command>.py` (e.g., `test_useradd.py`).
* Test functions: `test_<command>__<specific_behavior>` using double underscores.
* Use descriptive names that clearly indicate what is being tested.

**Test structure (AAA pattern):**
```python
@pytest.mark.topology(KnownTopology.Shadow)
def test_useradd__add_user(shadow: Shadow):
"""
:title: Descriptive test title
:setup:
1. Setup steps
:steps:
1. Test steps
:expectedresults:
1. Expected outcomes
:customerscenario: False
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

What does customerscenario mean?

Copy link
Collaborator Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This means that a customer or end user has found the issue. It is usually accompanied by a ticket in the marker (i.e. gh=1348).

"""
# Arrange
setup_code_here()

# Act
result = shadow.command_to_test()

# Assert
assert result is not None, "Descriptive failure message"
```

**Avoiding flakiness:**
* Use deterministic test data (avoid random values).
* Clean up test artifacts properly (handled automatically by framework).
* Use appropriate timeouts for time-sensitive operations.
* Leverage the framework's automatic backup/restore functionality.
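
For instance, deterministic test data keeps failures reproducible; the names below are illustrative:

```python
# Deterministic: the same value on every run makes a failure easy to reproduce.
TEST_USER = "tuser"

# Avoid non-deterministic values such as:
#   TEST_USER = f"tuser{random.randint(0, 9999)}"
```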

### Formatting and imports

**Required tools:**
* **flake8**: for style guide enforcement and error detection.
* **pycodestyle**: for PEP 8 style checking.
* **isort**: for import sorting with profiles that work well with Black.
* **Black**: for consistent code formatting.
* **mypy**: for static type checking.

**Import order:**
1. Standard library imports.
2. Third-party imports (`pytest`, `pytest_mh`).
3. Local framework imports (`framework.*`).
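
As a sketch, an import block following this order (and compatible with isort's Black profile) could look like the following; the `framework.*` module paths are illustrative, so mirror the imports used by the existing tests in `tests/system/`:

```python
from __future__ import annotations

# 1. Standard library imports.
import re

# 2. Third-party imports.
import pytest

# 3. Local framework imports (paths shown here are illustrative).
from framework.roles.shadow import Shadow
from framework.topology import KnownTopology
```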

### Error handling and logging

**Error handling:**
* Prefer explicit exceptions over silent failures.
* Use `ProcessError` for command execution failures.
* Provide context in error messages.
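
A rough sketch of these points; everything below is illustrative (in the framework itself, command execution failures are reported via pytest-mh's `ProcessError`):

```python
from __future__ import annotations

from typing import Any, Callable


class GroupDeletionError(Exception):
    """Raised when deleting a test group fails."""


def delete_group(run_command: Callable[[str], Any], name: str) -> None:
    """Delete a group, failing loudly with context instead of silently passing."""
    result = run_command(f"groupdel {name}")
    if result.rc != 0:
        # Include the command, its parameters, and the actual outcome in the error.
        raise GroupDeletionError(
            f"'groupdel {name}' failed with rc={result.rc}: {result.stderr}"
        )
```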

**Logging guidance:**
* Use structured logging for test utilities in `tests/system/framework/`.
* Include relevant context (command, parameters, expected vs actual results).
* Leverage the framework's automatic artifact collection for debugging.