
Conversation

Contributor

@NicoMolinaOZ NicoMolinaOZ commented Oct 31, 2025

Summary

We have moved record_block into the sync pipeline to avoid race conditions. To keep proper tracking of missing blocks, we now have the following:

Solution

Implemented a two-stage tracking system that records blocks at different points in the processing pipeline:

  1. Stage 1 - Fetched Blocks: Track blocks immediately after fetching from RPC
  2. Stage 2 - Triggered Blocks: Track blocks after successful processing and triggering

When a gap is detected, the system checks if the missing block was fetched to determine the actual failure type.
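
As a rough sketch of that check (the enum and function names below are illustrative, not the monitor's actual API), the classification boils down to a lookup in the Stage 1 set:

use std::collections::HashSet;

/// Illustrative classification of a block that never reached Stage 2.
#[derive(Debug, PartialEq)]
enum MissingBlockKind {
    /// Stage 1 recorded the block (it was fetched from RPC), but it was
    /// never triggered: a processing failure.
    FetchedButNotTriggered,
    /// The block never appeared in Stage 1: a genuinely missed block.
    NeverFetched,
}

/// Classify a gap using the set of block numbers recorded in Stage 1.
fn classify_missing_block(block_number: u64, fetched_blocks: &HashSet<u64>) -> MissingBlockKind {
    if fetched_blocks.contains(&block_number) {
        MissingBlockKind::FetchedButNotTriggered
    } else {
        MissingBlockKind::NeverFetched
    }
}

fn main() {
    let fetched: HashSet<u64> = [100, 101, 103].into_iter().collect();
    // 102 was never fetched; 103 was fetched but (in this example) never triggered.
    assert_eq!(classify_missing_block(102, &fetched), MissingBlockKind::NeverFetched);
    assert_eq!(classify_missing_block(103, &fetched), MissingBlockKind::FetchedButNotTriggered);
}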

Checklist

  • Add a reference to related issues in the PR description.
  • Add unit tests if applicable.
  • Add integration tests if applicable.
  • Add property-based tests if applicable.
  • Update documentation if applicable.

Summary by CodeRabbit

  • Refactor

    • Enhanced block tracker implementation with improved logic for differentiating between successfully fetched blocks and missed blocks during processing.
    • Updated block tracking interfaces to support refined block status tracking.
  • Tests

    • Updated integration tests to reflect changes in block tracking behavior.

@NicoMolinaOZ NicoMolinaOZ marked this pull request as ready for review October 31, 2025 13:37
@NicoMolinaOZ NicoMolinaOZ requested a review from a team as a code owner October 31, 2025 13:37

codecov bot commented Oct 31, 2025

Codecov Report

❌ Patch coverage is 94.80519% with 4 lines in your changes missing coverage. Please review.
✅ Project coverage is 95.5%. Comparing base (3cbd91f) to head (6286220).
⚠️ Report is 1 commit behind head on main.

Files with missing lines | Patch % | Lines
src/services/blockwatcher/service.rs | 71.4% | 4 Missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##            main    #395     +/-   ##
=======================================
- Coverage   95.5%   95.5%   -0.1%     
=======================================
  Files         87      87             
  Lines      24543   24601     +58     
=======================================
+ Hits       23456   23509     +53     
- Misses      1087    1092      +5     
Flag | Coverage | Δ
integration | 59.6% <71.4%> | (-0.1%) ⬇️
properties | 29.5% <0.0%> | (-0.1%) ⬇️
unittests | 87.2% <81.8%> | (-0.1%) ⬇️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@OpenZeppelin OpenZeppelin deleted a comment from coderabbitai bot Oct 31, 2025
let (_process_result, _trigger_result) = tokio::join!(process_handle, trigger_handle);

// Clear fetched blocks to free memory for next iteration
block_tracker.clear_fetched_blocks(&network.slug).await;

Member

What happens if there's overlap in processing? E.g. the cron is set to 60 seconds, but a cycle takes 70 seconds to complete due to a slow/delayed RPC. The next cycle then overlaps with the already running cycle, so the first cycle completes and clears memory in the middle of the second cycle.

Member

I don't know if this is a good solution, so I would spend some time analysing, but perhaps we can map it with network_slug+index so that we can track overlaps in iterations (through an index) wdyt?

Contributor Author

I kept thinking the same, tbh; I am trying to find a better approach.

Contributor Author

@NicoMolinaOZ NicoMolinaOZ Oct 31, 2025

I have simplified this by creating a local variable per execution; it only lives inside each execution.
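
A minimal sketch of that shape, with hypothetical names rather than the actual service code, assuming a tokio runtime as in the surrounding snippet: each execution owns its own fetched-blocks set, so a slow cycle can never clear state out from under an overlapping newer cycle, and no explicit clear step is needed.

use std::collections::HashSet;

// Hypothetical per-execution flow: the fetched set is a local value that is
// created and dropped inside a single cycle, never shared between cycles.
async fn process_cycle(block_numbers: Vec<u64>) {
    // Lives only for this execution; overlapping cycles each get their own set.
    let mut fetched_blocks: HashSet<u64> = HashSet::new();

    // Stage 1: record every block number fetched from RPC in this cycle.
    for number in &block_numbers {
        fetched_blocks.insert(*number);
    }

    // Stage 2 would borrow the set from here, e.g.
    // tracker.record_block(&network, number, &fetched_blocks).await;

    // No clear_fetched_blocks call is needed: the set is dropped when the
    // cycle's future completes.
}

#[tokio::main]
async fn main() {
    process_cycle(vec![100, 101, 102]).await;
}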


coderabbitai bot commented Oct 31, 2025

Walkthrough

The pull request refactors block tracking to differentiate between fetched and missed blocks. A new fetched_blocks: &HashSet<u64> parameter is added to the BlockTrackerTrait::record_block method, enabling the tracker to distinguish genuinely missed blocks from blocks that were fetched but not processed. The process_new_blocks function signature is updated with stricter trait bounds (Send + Sync + 'static) and accepts generic Arc<TR> instead of concrete Arc<BlockTracker<S>>.
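
In signature terms, that change might look roughly like the sketch below. This is inferred from the walkthrough, assumes the async-trait crate for the async trait method, and simplifies the real parameters (the actual method presumably takes the project's own network and storage types):

use std::collections::HashSet;
use std::sync::Arc;

// Sketch of the revised trait: record_block now also receives the set of
// block numbers that were actually fetched in the current cycle.
#[async_trait::async_trait]
trait BlockTrackerTrait: Send + Sync {
    async fn record_block(
        &self,
        network_slug: &str,
        block_number: u64,
        fetched_blocks: &HashSet<u64>,
    );
}

struct LoggingTracker;

#[async_trait::async_trait]
impl BlockTrackerTrait for LoggingTracker {
    async fn record_block(&self, network_slug: &str, block_number: u64, fetched_blocks: &HashSet<u64>) {
        let status = if fetched_blocks.contains(&block_number) { "fetched" } else { "missed" };
        println!("{network_slug}: block {block_number} ({status})");
    }
}

// Sketch of the generic entry point: any tracker implementing the trait is
// accepted, under the tightened Send + Sync + 'static bounds.
async fn process_new_blocks<TR>(blocks: Vec<u64>, tracker: Arc<TR>)
where
    TR: BlockTrackerTrait + Send + Sync + 'static,
{
    let fetched_blocks: HashSet<u64> = blocks.iter().copied().collect();
    for number in blocks {
        tracker.record_block("example-network", number, &fetched_blocks).await;
    }
}

#[tokio::main]
async fn main() {
    process_new_blocks(vec![100, 101, 102], Arc::new(LoggingTracker)).await;
}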

Changes

Trait & Core Service Logic (src/services/blockwatcher/tracker.rs, src/services/blockwatcher/service.rs):
Trait BlockTrackerTrait::record_block now requires a fetched_blocks: &HashSet<u64> parameter. Block tracker generic bounds are tightened to Send + Sync + 'static. A local HashSet<u64> collects fetched block numbers and propagates through the trigger pipeline. Missed-block detection now filters out blocks present in the fetched set.

Test & Mock Implementations (tests/integration/blockwatcher/service.rs, tests/integration/mocks/services.rs):
Mock record_block expectations are updated to accept a _fetched_blocks parameter. The MockConfig::history_size field is removed. Block handler closures are refactored to extract block numbers from the BlockType parameter. All tests are updated to pass the fetched blocks set to tracker calls.

Sequence Diagram

sequenceDiagram
    participant PL as Pipeline
    participant SVC as Service
    participant TR as Trigger Stage
    participant TRK as BlockTracker
    
    PL->>SVC: process_new_blocks(blocks, tracker)
    Note over SVC: Initialize fetched_block_numbers: HashSet
    SVC->>TR: Process each block + fetched_blocks set
    
    loop For each fetched block
        TR->>TR: Add block to fetched_block_numbers
        TR->>TRK: record_block(network, num, &fetched_blocks)
        alt Block was fetched
            TRK->>TRK: Record as processed
        else Block not in fetched_blocks
            TRK->>TRK: Treat as missed, log error
        end
    end
    
    rect rgb(200, 220, 255)
    Note over SVC: Cleanup phase
    SVC->>TRK: record_block(remaining, &fetched_blocks)
    TRK->>TRK: Identify truly missed blocks
    end
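
Reading the alt branch above as code, the tracker's gap check in spirit filters the gap range against the fetched set; the helper below is a hypothetical simplification, not the actual logic in tracker.rs:

use std::collections::HashSet;

// Hypothetical gap check: given the last recorded block and the current one,
// only the in-between blocks that were never fetched count as truly missed.
fn truly_missed_blocks(last_recorded: u64, current: u64, fetched_blocks: &HashSet<u64>) -> Vec<u64> {
    (last_recorded + 1..current)
        .filter(|n| !fetched_blocks.contains(n))
        .collect()
}

fn main() {
    let fetched: HashSet<u64> = [101, 102, 104].into_iter().collect();
    // Jump from 100 to 104: 101 and 102 were fetched (just not yet triggered),
    // so only 103 is reported as missed.
    assert_eq!(truly_missed_blocks(100, 104, &fetched), vec![103]);
}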

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Trait signature cascade: The record_block signature change propagates through trait, implementations, mocks, and all callsites. Verify all callsites pass the correct fetched_blocks reference.
  • Missed block detection logic: Review the conditional logic that differentiates fetched vs. truly missed blocks in tracker.rs to ensure no blocks are incorrectly classified or logged.
  • Generic bounds tightening: Confirm the Send + Sync + 'static bounds are necessary and don't unnecessarily restrict valid implementations.
  • Test coverage: Verify tests adequately cover edge cases like out-of-order blocks, multi-network scenarios, and missed block storage when enabled.

Poem

🐰 Hop! Skip! Jump through blocks so neat,
Now tracking fetched from missed, complete!
HashSets dancing through the code,
Clarity blooms down data's road. ✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Description Check (⚠️ Warning): The PR description provides a clear and detailed Summary section that explains the problem, the implemented solution with a two-stage tracking system, and how the system differentiates between fetched and missed blocks. However, the description is missing the "Testing Process" section, which is a required main section in the repository's template. While the Checklist section is present, all items remain unchecked, which is typical for incomplete PRs, and the complete absence of the Testing Process section represents a significant gap in the required template structure that should be addressed before merging.
✅ Passed checks (2 passed)
Title Check (✅ Passed): The PR title "Fix block processing bottleneck" is clearly related to and directly summarizes the main change in the pull request. The changes implement a two-stage tracking system that moves the record_block operation to the sync pipeline to address race conditions and improve block processing efficiency, which precisely aligns with the title's focus on fixing the block processing bottleneck. The title is concise, specific, and uses conventional commit formatting with the "fix:" prefix, making it clear and actionable for someone reviewing the commit history.
Docstring Coverage (✅ Passed): Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3cbd91f and 6286220.

📒 Files selected for processing (4)
  • src/services/blockwatcher/service.rs (6 hunks)
  • src/services/blockwatcher/tracker.rs (12 hunks)
  • tests/integration/blockwatcher/service.rs (13 hunks)
  • tests/integration/mocks/services.rs (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
tests/integration/mocks/services.rs (1)
src/services/blockwatcher/tracker.rs (2)
  • record_block (31-36)
  • record_block (96-160)
tests/integration/blockwatcher/service.rs (1)
src/services/blockwatcher/tracker.rs (2)
  • new (30-30)
  • new (72-78)
src/services/blockwatcher/tracker.rs (1)
src/services/blockwatcher/error.rs (2)
  • block_tracker_error (77-83)
  • storage_error (68-74)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
  • GitHub Check: Redirect rules - openzeppelin-monitor
  • GitHub Check: Header rules - openzeppelin-monitor
  • GitHub Check: Pages changed - openzeppelin-monitor
  • GitHub Check: boostsecurity - boostsecurityio/semgrep-pro
  • GitHub Check: boostsecurity - boostsecurityio/osv-scanner
  • GitHub Check: boostsecurity - boostsecurityio/scanner
  • GitHub Check: clippy
  • GitHub Check: test
  • GitHub Check: msrv
  • GitHub Check: boostsecurity - boostsecurityio/boost-sca
  • GitHub Check: semgrep/ci
  • GitHub Check: Analyze (rust)



Development

Successfully merging this pull request may close these issues.

Fix block processing bottleneck issues for networks with sub 1-second block times

3 participants