Template CI/CD action to track compatibility with CPython as per GHI 5974 (title is WIP) #6176
base: main
Conversation
- Create Check_Tests.yml (see GHI RustPython#5974)
- disabled integrating cpython lib
- disable cpython integration and increase ulimit to debug compile timing issues
- disabled multi-os to test builds in separate step
- re-enable integration logic; build before integration; use the python version to checkout the cpython version
- …Python#5974: PoC testing for GHA to automate part of GHI RustPython#5974 (THIS IS AN EXPERIMENT and WIP)
- style fixes
- Prototyping churn
- With a minimal working PoC for automation of using `scripts/fix_test.py`, this has reached the very first milestone of a single working test (still experimental WIP; see RustPython#5974 for context)
- fixes minor regression
- add logic for running tests by filtering for test-cases
- let's go from small to large
- more churn
- expand to use pattern "Lib/test/*.py Lib/test/**/*.py" (see RustPython#5974 for context)
- Cleaned up the bootstrapping of the RustPython and CPython Libs by refactoring into reusable actions; work still in progress (see RustPython#5974 for more)
- see GHI RustPython#5974 and PR RustPython#6089
- … file patching (see PR RustPython#6089 and GHI RustPython#5974)
Walkthrough
Adds four new composite GitHub Actions to fetch CPython and RustPython, integrate selected CPython files into the RustPython tree, and run per-file smoke tests with timeouts, plus a new workflow that orchestrates these actions across an OS/Python matrix. Actions export branch/sha/paths/files for downstream steps.
Changes
Sequence Diagram(s)
```mermaid
sequenceDiagram
  autonumber
  actor Runner as GitHub Runner (matrix job)
  participant Workflow as .github/workflows/Check_Tests.yml
  participant FetchRP as CI-5974-Fetch-RustPython
  participant FetchCP as CI-5974-Fetch-CPython
  participant Integrate as CI-5974-Integrate-CPython
  participant Smoke as CI-5974-Test-RustPython-Integration
  participant Cargo as cargo
  Runner->>Workflow: start job
  Workflow->>FetchRP: checkout RustPython (outputs: branch, sha, rustpython-lib-path)
  FetchRP-->>Workflow: outputs
  Workflow->>FetchCP: checkout CPython (outputs: branch, sha, files)
  FetchCP-->>Workflow: outputs
  Workflow->>Integrate: copy selected CPython files into RustPython workspace
  Integrate-->>Workflow: integration done
  Workflow->>Smoke: run per-file smoke tests (files, timeouts, env)
  Smoke->>Cargo: invoke per-file runs (cargo run / -m unittest)
  Cargo-->>Smoke: return (ok / timeout / fail)
  Smoke-->>Workflow: TEST_STEP_SUMMARY, per-file outcomes
  Workflow-->>Runner: job completes (post step)
```
```mermaid
sequenceDiagram
  autonumber
  participant Smoke as Smoke Test Action
  participant Timer as run_with_timeout
  participant Cargo as cargo run
  participant Py as python -m unittest
  Smoke->>Smoke: iterate INPUT_FILES
  alt file contains unittest.main()
    Smoke->>Timer: run_with_timeout(cargo run -- <file>)
    Timer->>Cargo: execute
    Cargo-->>Timer: exit code
  else file is unittest module
    Smoke->>Timer: run_with_timeout(cargo run -- -m unittest <test_name>)
    Timer->>Py: execute
    Py-->>Timer: exit code
  end
  Timer-->>Smoke: outcome (success / timeout / failure)
  Smoke->>Smoke: record outcome, append to TEST_STEP_SUMMARY
```
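For illustration, here is a minimal Python sketch of the per-file harness the second diagram describes. The helper name `run_with_timeout`, the two command shapes, and the 30-second default come from this thread; everything else (function signatures, the outcome strings) is an assumption, not the action's actual implementation:

```python
import subprocess
from pathlib import Path

TIMEOUT_SECONDS = 30  # assumed default, mirroring SUBSHELL_TIMEOUT mentioned below

def run_with_timeout(cmd: list[str], timeout: int = TIMEOUT_SECONDS) -> str:
    """Run a command and classify the result as success, failure, or timeout."""
    try:
        proc = subprocess.run(cmd, timeout=timeout)
    except subprocess.TimeoutExpired:
        return "timeout"
    return "success" if proc.returncode == 0 else "failure"

def smoke_test(file: str) -> str:
    """Dispatch one test file the way the diagram shows."""
    source = Path(file).read_text(errors="replace")
    if "unittest.main()" in source:
        # Self-running test file: execute it directly under RustPython.
        return run_with_timeout(["cargo", "run", "--", file])
    # Otherwise treat it as a plain unittest module and run it by name.
    test_name = Path(file).stem
    return run_with_timeout(["cargo", "run", "--", "-m", "unittest", test_name])
```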
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 6
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- .github/actions/CI-5974-Fetch-CPython/action.yaml (1 hunks)
- .github/actions/CI-5974-Fetch-RustPython/action.yaml (1 hunks)
- .github/actions/CI-5974-Integrate-CPython/action.yaml (1 hunks)
- .github/actions/CI-5974-Test-RustPython-Integration/action.yaml (1 hunks)
- .github/workflows/Check_Tests.yml (1 hunks)
🧰 Additional context used
🧠 Learnings
📚 Learning: 2025-07-10T10:08:43.330Z
Learnt from: ShaharNaveh
PR: RustPython/RustPython#5932
File: .github/workflows/comment-commands.yml:18-24
Timestamp: 2025-07-10T10:08:43.330Z
Learning: In GitHub Actions workflows for the RustPython project, the maintainer ShaharNaveh prefers to keep workflows simple and doesn't mind if steps fail when the desired state is already achieved (e.g., user already assigned to an issue). Avoid suggesting complex error handling for edge cases they don't consider problematic.
Applied to files:
- .github/actions/CI-5974-Fetch-RustPython/action.yaml
- .github/workflows/Check_Tests.yml
🪛 actionlint (1.7.7)
.github/workflows/Check_Tests.yml
3-3: unexpected key "description" for "workflow" section. expected one of "concurrency", "defaults", "env", "jobs", "name", "on", "permissions", "run-name"
(syntax-check)
134-134: missing input "override-rustpython-path" which is required by action "RustPython Smoke-Testing" defined at "./.github/actions/CI-5974-Test-RustPython-Integration". all required inputs are "override-rustpython-path"
(action)
Minor changes to defaults and fallback logic as discussed with AI reviewer. Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 2
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- .github/actions/CI-5974-Fetch-CPython/action.yaml (1 hunks)
- .github/actions/CI-5974-Fetch-RustPython/action.yaml (1 hunks)
- .github/actions/CI-5974-Test-RustPython-Integration/action.yaml (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- .github/actions/CI-5974-Test-RustPython-Integration/action.yaml
🧰 Additional context used
🧠 Learnings
📚 Learning: 2025-09-28T22:22:55.901Z
Learnt from: reactive-firewall
PR: RustPython/RustPython#6176
File: .github/workflows/Check_Tests.yml:133-141
Timestamp: 2025-09-28T22:22:55.901Z
Learning: In the RustPython project's CI-5974-Test-RustPython-Integration action, the override-rustpython-path input is marked as required but has runtime fallback logic that defaults to RUSTPYTHONPATH environment variable or "Lib" if neither is provided, making explicit specification unnecessary in most cases.
Applied to files:
- .github/actions/CI-5974-Fetch-CPython/action.yaml
- .github/actions/CI-5974-Fetch-RustPython/action.yaml
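As a quick sketch of the fallback order that learning describes (explicit input, then the RUSTPYTHONPATH environment variable, then "Lib"); the Python function and variable names here are assumptions for illustration:

```python
import os

def resolve_rustpython_path(override: str | None) -> str:
    # Explicit input wins; then the RUSTPYTHONPATH env var; then "Lib".
    return override or os.environ.get("RUSTPYTHONPATH") or "Lib"

print(resolve_rustpython_path(None))  # "Lib" unless RUSTPYTHONPATH is set
```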
* implemented fallback-on-Env logic for select inputs (override still preferred). Ref GHI RustPython#5974
Actionable comments posted: 1
📜 Review details
Configuration used: Path: .coderabbit.yml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- .github/actions/CI-5974-Fetch-CPython/action.yaml (1 hunks)
- .github/actions/CI-5974-Fetch-RustPython/action.yaml (1 hunks)
🔇 Additional comments (1)
.github/actions/CI-5974-Fetch-CPython/action.yaml (1)
22-27: Keep the CPython checkout in its own subdirectory
Using `${{ github.workspace }}` here still clones CPython straight into the caller's workspace root, so the checkout obliterates whatever the job already has there (typically the RustPython repo). We end up with an empty or mismatched tree for every downstream step. Please restore the dedicated subfolder (e.g. `cpython`) so this action can coexist with the primary checkout.
```diff
- default: ${{ github.server_url == 'https://github.com' && github.workspace || 'cpython' }}
+ default: ${{ (github.server_url == 'https://github.com') && 'cpython' || 'cpython' }}
```
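For background: GitHub Actions expressions have no ternary operator, so `cond && a || b` is commonly used to emulate one; it silently falls through to `b` whenever `a` is falsy, which is likely why the suggested default above repeats 'cpython' on both sides. A minimal Python illustration of the same short-circuit pitfall (function and values are hypothetical):

```python
def pick(cond: bool, when_true: str, when_false: str) -> str:
    # Python equivalent of the Actions idiom `cond && a || b`:
    # it yields when_false whenever when_true is falsy,
    # even though cond was true. That is the pitfall to watch for.
    return cond and when_true or when_false

print(pick(True, "cpython", "fallback"))  # cpython, as intended
print(pick(True, "", "fallback"))         # fallback, despite cond being True
```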
@reactive-firewall tysm for the PR! What is in your opinion the next step? Having an automated PR system that updates those files automatically?
```yaml
on: # yamllint disable-line rule:truthy
  push:
    branches: ["**"] # matches any branch
    tags: ["v*"]
```
Would it make sense to specify `on: pull_request` too?
@fanninpm I don't think so, as this isn't validating anything; it only shows diagnostics about what we have left, and it's not checking for potential regressions (ATM).
In a way this is complementary to https://github.com/RustPython/RustPython/blob/1aea1467da5269922feb6cdc099f2112b3ac90db/whats_left.py
> Would it make sense to specify `on: pull_request` too?
🤔 Eventually, to leverage the existing GitHub annotations from these actions (and yes, they already have those) on a per-file basis, it might make sense to add PRs; however, it is too soon. So I'm in agreement with @ShaharNaveh. These values should probably be even more selective than they are, to reduce noise from some branches.
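For reference, per-file annotations in GitHub Actions are emitted via workflow commands printed to stdout; a minimal sketch (the file path and message are hypothetical):

```python
def annotate(file_path: str, message: str, level: str = "warning") -> None:
    # GitHub Actions turns ::warning/::error lines on stdout into
    # annotations attached to the named file in the PR "Files changed" view.
    print(f"::{level} file={file_path}::{message}")

annotate("Lib/test/test_example.py", "smoke test timed out under RustPython")
```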
@ShaharNaveh Could you please be a primary reviewer of this patch?
@ShaharNaveh Currently I don't have it generating a machine-readable result in JSON or YAML; it only puts those values in the "${GITHUB_ENV}" file at the moment. The initial idea here was to provide a human-readable result for what the enormous logs report (similar to whats_left). This PR is more about the automation tooling in CI/CD than the actual test results (ATM); visibility. However, as I agree that we totally need the machine-readable content too, I'll have to hash out another helper tool to generate next.

I actually have been conflicted on this very point for a while now, and opened this PR to get feedback regarding how the project best benefits. My interest was initially the plumbing of the CI/CD. My two concerns about next steps are:

That said, IMHO, the next steps are indeed to start work on automations based on the periodic results, including things like opening PRs to update trivial changes from upstream in the reference Python implementation (e.g. CPython), and GitHub issues/file diagnostic comments (an idea previously mentioned by @arihant2math a few times in previous issues as a wishlist item); of course, that is after the short-term work of adding/integrating a helper tool to provide accessible machine-readable results, as you mentioned. Regarding the automation of PRs, the tinkering of how to separate out what to include is still a TODO (especially considering it should record provenance details from the upstream commits, like sha and commit message, across the reference git tree) and will require some care around granting write access permissions for PRs/content (for a secure implementation). Regarding the immediate next step: I'll get to work on the machine-readable part sometime this week, and update this PR with those results when I have something worth looking at on that front. 🙇 Thank you for taking the time to read this, and I hope this helps.
Context for AI: we have been discussing adding a helper tool to help generate machine-readable JSON or YAML outputs from the new GitHub Actions.
Goal for AI: please suggest a plan and implementation for me to use as a starting point.
I'll suggest a comprehensive plan and implementation for generating machine-readable outputs from your GitHub Actions workflow.
Plan Overview
1. Structured Output Schema
Create a JSON/YAML schema that captures:
2. Helper Script Implementation
A Python script (`scripts/generate_test_results.py`) that parses test outcomes and generates structured JSON/YAML output.
 3. Integration Points
Implementation
Schema Definition
```python
# scripts/generate_test_results.py
#!/usr/bin/env python3
"""
Generate machine-readable test results from GitHub Actions workflow.
This script parses test outcomes and generates structured JSON/YAML output
for CPython integration test results.
"""
import argparse
import json
import os
import sys
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from pathlib import Path
from typing import Dict, List, Optional
import yaml
@dataclass
class TestResult:
    """Represents the outcome of a single test file."""
    file_path: str
    raw_copy_outcome: str  # compatible, incompatible, failing
    fix_copy_outcome: str  # fixed, unfixed, reverted, skipped
    test_method: str  # direct, unittest, skipped
    duration_seconds: Optional[float] = None
@dataclass
class TestRunMetadata:
    """Metadata about the test run environment."""
    timestamp: str
    python_version: str
    os: str
    rustpython_sha: Optional[str] = None
    cpython_sha: Optional[str] = None
    cpython_branch: Optional[str] = None
    max_test_time: int = 30
    cargo_args: Optional[str] = None
@dataclass
class TestRunSummary:
    """Summary statistics for the test run."""
    total_tests: int
    compatible_count: int
    incompatible_count: int
    fixed_count: int
    unfixed_count: int
    reverted_count: int
    skipped_count: int
    inconclusive_count: int
@dataclass
class TestRunResults:
    """Complete test run results with metadata and individual results."""
    metadata: TestRunMetadata
    summary: TestRunSummary
    results: List[TestResult]
class TestResultsGenerator:
    """Generates machine-readable test results from various sources."""
    
    def __init__(self):
        self.results: List[TestResult] = []
        self.metadata: Optional[TestRunMetadata] = None
        
    def parse_github_env(self, env_file_path: str) -> None:
        """Parse test outcomes from GITHUB_ENV file."""
        env_vars = {}
        
        if not Path(env_file_path).exists():
            print(f"Warning: GITHUB_ENV file not found at {env_file_path}", file=sys.stderr)
            return
            
        with open(env_file_path, 'r') as f:
            for line in f:
                line = line.strip()
                if '=' in line and not line.startswith('#'):
                    key, value = line.split('=', 1)
                    env_vars[key] = value
        
        self._parse_env_vars(env_vars)
    
    def parse_environment(self) -> None:
        """Parse test outcomes from current environment variables."""
        self._parse_env_vars(dict(os.environ))
    
    def _parse_env_vars(self, env_vars: Dict[str, str]) -> None:
        """Parse environment variables to extract test results."""
        # Parse metadata
        self.metadata = TestRunMetadata(
            timestamp=datetime.now(timezone.utc).isoformat(),
            python_version=env_vars.get('PYTHON_VERSION', '3.13'),
            os=env_vars.get('OS', env_vars.get('RUNNER_OS', 'unknown')),
            rustpython_sha=env_vars.get('RUSTPYTHON_SHA'),
            cpython_sha=env_vars.get('CPYTHON_SHA'),
            cpython_branch=env_vars.get('CPYTHON_BRANCH'),
            max_test_time=int(env_vars.get('SUBSHELL_TIMEOUT', '30')),
            cargo_args=env_vars.get('CARGO_ARGS')
        )
        
        # Parse test results
        test_files = {}
        
        for key, value in env_vars.items():
            if key.startswith('RAW_COPY_') and key.endswith('_OUTCOME'):
                # Extract filename from key: RAW_COPY_<filename>_OUTCOME
                filename = key[9:-8]  # Remove prefix and suffix
                if filename not in test_files:
                    test_files[filename] = {}
                test_files[filename]['raw_copy'] = value
                test_files[filename]['file_path'] = self._reconstruct_file_path(filename)
                
            elif key.startswith('FIX_COPY_') and key.endswith('_OUTCOME'):
                filename = key[9:-8]
                if filename not in test_files:
                    test_files[filename] = {}
                test_files[filename]['fix_copy'] = value
                test_files[filename]['file_path'] = self._reconstruct_file_path(filename)
        
        # Convert to TestResult objects
        for filename, data in test_files.items():
            result = TestResult(
                file_path=data.get('file_path', filename),
                raw_copy_outcome=data.get('raw_copy', 'unknown'),
                fix_copy_outcome=data.get('fix_copy', 'unknown'),
                test_method=self._infer_test_method(data)
            )
            self.results.append(result)
    
    def _reconstruct_file_path(self, filename: str) -> str:
        """Reconstruct the original file path from the env var key."""
        # Handle cases like "test_foo.py" or "test_foo"
        if not filename.endswith('.py'):
            filename = filename + '.py'
        return f"Lib/test/{filename}"
    
    def _infer_test_method(self, data: Dict[str, str]) -> str:
        """Infer the test method used based on outcomes."""
        raw_outcome = data.get('raw_copy', '')
        if raw_outcome == 'inconclusive':
            return 'skipped'
        # Could be enhanced by parsing additional metadata
        return 'direct'  # Default assumption
    
    def calculate_summary(self) -> TestRunSummary:
        """Calculate summary statistics from test results."""
        summary = TestRunSummary(
            total_tests=len(self.results),
            compatible_count=sum(1 for r in self.results if r.raw_copy_outcome == 'compatible'),
            incompatible_count=sum(1 for r in self.results if r.raw_copy_outcome == 'incompatible'),
            fixed_count=sum(1 for r in self.results if r.fix_copy_outcome == 'fixed'),
            unfixed_count=sum(1 for r in self.results if r.fix_copy_outcome == 'unfixed'),
            reverted_count=sum(1 for r in self.results if r.fix_copy_outcome == 'reverted'),
            skipped_count=sum(1 for r in self.results if r.fix_copy_outcome == 'skipped'),
            inconclusive_count=sum(1 for r in self.results if r.raw_copy_outcome == 'inconclusive')
        )
        return summary
    
    def generate_output(self, output_format: str = 'json') -> str:
        """Generate formatted output in JSON or YAML."""
        if not self.metadata:
            raise ValueError("Metadata not initialized. Call parse_github_env() or parse_environment() first.")
        
        summary = self.calculate_summary()
        test_run = TestRunResults(
            metadata=self.metadata,
            summary=summary,
            results=self.results
        )
        
        # Convert dataclasses to dictionaries
        output_dict = asdict(test_run)
        
        if output_format.lower() == 'json':
            return json.dumps(output_dict, indent=2)
        elif output_format.lower() == 'yaml':
            return yaml.dump(output_dict, default_flow_style=False, sort_keys=False)
        else:
            raise ValueError(f"Unsupported output format: {output_format}")
    
    def write_output(self, output_path: str, output_format: str = 'json') -> None:
        """Write formatted output to a file."""
        content = self.generate_output(output_format)
        with open(output_path, 'w') as f:
            f.write(content)
        print(f"Results written to {output_path}")
def main():
    """Main entry point for the script."""
    parser = argparse.ArgumentParser(
        description='Generate machine-readable test results from GitHub Actions workflow'
    )
    parser.add_argument(
        '--env-file',
        help='Path to GITHUB_ENV file to parse (default: read from environment)',
        default=None
    )
    parser.add_argument(
        '--output',
        '-o',
        help='Output file path (default: stdout)',
        default=None
    )
    parser.add_argument(
        '--format',
        '-f',
        choices=['json', 'yaml'],
        default='json',
        help='Output format (default: json)'
    )
    parser.add_argument(
        '--summary-only',
        action='store_true',
        help='Output only summary statistics'
    )
    
    args = parser.parse_args()
    
    generator = TestResultsGenerator()
    
    # Parse input source
    if args.env_file:
        generator.parse_github_env(args.env_file)
    else:
        generator.parse_environment()
    
    # Generate output
    try:
        if args.summary_only:
            summary = generator.calculate_summary()
            output_dict = asdict(summary)
            if args.format == 'json':
                content = json.dumps(output_dict, indent=2)
            else:
                content = yaml.dump(output_dict, default_flow_style=False)
        else:
            content = generator.generate_output(args.format)
        
        # Write to file or stdout
        if args.output:
            with open(args.output, 'w') as f:
                f.write(content)
            print(f"Results written to {args.output}", file=sys.stderr)
        else:
            print(content)
            
    except Exception as e:
        print(f"Error generating output: {e}", file=sys.stderr)
        sys.exit(1)
if __name__ == '__main__':
    main()
```
Integration into GitHub Actions
Add this step to the workflow:
```yaml
      - name: "Generate Machine-Readable Results"
      id: generate_results
      if: ${{ !cancelled() }}
      shell: bash
      run: |
        # Install PyYAML if not present
        python3 -m pip install --quiet PyYAML 2>/dev/null || true
        
        # Generate JSON output
        python3 ./scripts/generate_test_results.py \
          --format json \
          --output "test-results-${{ runner.os }}-${PYTHON_VERSION}.json"
        
        # Generate YAML output
        python3 ./scripts/generate_test_results.py \
          --format yaml \
          --output "test-results-${{ runner.os }}-${PYTHON_VERSION}.yaml"
        
        # Display summary to console
        echo "::group::Test Results Summary"
        python3 ./scripts/generate_test_results.py --format json --summary-only
        echo "::endgroup::"Add Artifact Upload Step to WorkflowIn      - name: "Upload Test Results"
      if: ${{ !cancelled() }}
      uses: actions/upload-artifact@v4
      with:
        name: test-results-${{ matrix.os }}-${{ matrix.python-version }}
        path: |
          rustpython/test-results-*.json
          rustpython/test-results-*.yaml
          rustpython/CPython-Summary-Artifact-*.md
        retention-days: 30
```
Example Output
JSON format:
```json
{
  "metadata": {
    "timestamp": "2025-01-15T10:30:45.123456+00:00",
    "python_version": "3.13",
    "os": "ubuntu-latest",
    "rustpython_sha": "abc123...",
    "cpython_sha": "def456...",
    "cpython_branch": "v3.13.1",
    "max_test_time": 30,
    "cargo_args": "--release"
  },
  "summary": {
    "total_tests": 50,
    "compatible_count": 5,
    "incompatible_count": 30,
    "fixed_count": 10,
    "unfixed_count": 15,
    "reverted_count": 5,
    "skipped_count": 15,
    "inconclusive_count": 0
  },
  "results": [
    {
      "file_path": "Lib/test/test_grammar.py",
      "raw_copy_outcome": "compatible",
      "fix_copy_outcome": "skipped",
      "test_method": "direct",
      "duration_seconds": null
    }
  ]
}
```
Benefits
 Next Steps
Would you like me to help with any specific aspect of the implementation or suggest additional features for the output format?
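To illustrate consuming the proposed output, a minimal sketch (the file name follows the naming scheme suggested above and is an assumption) that loads the JSON and reports a compatibility rate:

```python
import json

# File name follows the workflow's assumed naming scheme sketched above.
with open("test-results-ubuntu-latest-3.13.json") as f:
    run = json.load(f)

summary = run["summary"]
rate = 100 * summary["compatible_count"] / max(summary["total_tests"], 1)
print(f"{summary['compatible_count']}/{summary['total_tests']} compatible ({rate:.1f}%)")
```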
Generally, you'd find it easier to first normalize data into a machine-readable format, and generate human-readable output from that. I think that the main issue is that you wrote all the "business logic" in bash. I'm fine with small bash scripts here and there, but this is not the case; it'd be easier to maintain (and review) if it were written in Rust/Python.

This is not easy. And ty for working on this! (Especially for a first PR.)

I'll explain why I think a machine-readable format is necessary as the next step: I want to see if there are "low-hanging fruits" that we can update immediately, and maybe start to narrow down the scope of the PR to handle only the simplest cases and expand it later. @reactive-firewall I suggest you take a look at https://github.com/termux/termux-packages. What I have in mind (at least for now) is a process where this runs once a week, opens PRs for the low-hanging fruits, and opens issue(s) for failing updates, similar to what bevy does. (Don't implement any of this ATM.) IMO the next steps are:
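In the spirit of that suggestion (normalize first, render second), a minimal sketch that turns the normalized JSON above into a step-summary table; the input file name and key names are assumptions from the proposed schema:

```python
import json
import os

with open("test-results.json") as f:  # hypothetical normalized results file
    results = json.load(f)["results"]

rows = ["| File | Raw copy | Fix copy |", "| --- | --- | --- |"]
rows += [
    f"| {r['file_path']} | {r['raw_copy_outcome']} | {r['fix_copy_outcome']} |"
    for r in results
]

# GITHUB_STEP_SUMMARY is rendered as markdown on the job summary page.
with open(os.environ.get("GITHUB_STEP_SUMMARY", "summary.md"), "a") as f:
    f.write("\n".join(rows) + "\n")
```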
Patch Notes
Impacted GHI/PRs
Key changes
- .github/actions/*/action.yaml
- .github/workflows/Check_Tests.yaml

cc: @ShaharNaveh
cc: @youknowone
I'd love to hear feedback and/or comments on this prototype
Summary by CodeRabbit
Tests
Chores