feat: add tolerance for invalid peer info while busy with peer sync #7361

Open
wants to merge 8 commits into
base: development

Conversation

Contributor

@hansieodendaal hansieodendaal commented Jul 22, 2025

Description

Added tolerance for banable offences while busy with peer sync. A sync session will be interrupted after a configurable number of banable offences have been committed. When the sync session ends, whether due to this threshold or otherwise, the peer will be banned for its most severe ban offence. This scheme enables a local node to accumulate good peer info sets optimally.

Update: All sync peers currently emit invalid peer info sets due to random embedded invalid peer signature errors in the peer identity claims. This PR is an interim measure to combat large-scale banning of peers and to collect good peer info sets from peers that send up to a threshold number of bad peer identity claims. Those sync peers will not be banned during peer sync.
(See linked issue #7306.)

Note: This does not apply to the seedstrap phase, as seed peer ban offences are ignored then.
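
For illustration, a minimal sketch of the tolerance scheme described above; OffenceSeverity and the threshold semantics follow this PR, while PeerInfo, SyncError, severity_of, and sync_with_tolerance are hypothetical stand-ins:

// Sketch only: accumulate banable offences during a sync session instead of
// banning on the first one.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum OffenceSeverity {
    Low,
    Medium,
    High,
}

#[derive(Debug)]
struct PeerInfo;

#[derive(Debug)]
enum SyncError {
    InvalidPeerSignature,
    Other,
}

fn severity_of(err: &SyncError) -> OffenceSeverity {
    match err {
        SyncError::InvalidPeerSignature => OffenceSeverity::Low,
        SyncError::Other => OffenceSeverity::High,
    }
}

fn sync_with_tolerance(
    responses: Vec<Result<PeerInfo, SyncError>>,
    good_peers: &mut Vec<PeerInfo>, // already-validated peers are kept either way
    threshold: usize,               // peer_sync_immediate_ban_threshold (default 100)
) -> Result<(), SyncError> {
    let mut ban_offences: Vec<(OffenceSeverity, SyncError)> = Vec::new();

    for resp in responses {
        match resp {
            // Good peer info is kept even if some offences occur
            Ok(peer) => good_peers.push(peer),
            Err(err) => {
                ban_offences.push((severity_of(&err), err));
                // Interrupt the sync session once the threshold is reached
                if ban_offences.len() >= threshold {
                    break;
                }
            },
        }
    }

    // When the session ends, surface the most severe recorded offence so the
    // caller can ban the sync peer for it; good peers collected so far remain.
    match ban_offences.into_iter().max_by_key(|(severity, _)| *severity) {
        Some((_, worst)) => Err(worst),
        None => Ok(()),
    }
}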

Motivation and Context

Peer sync efficiency can be improved if good peer info sets are still accepted from sync peers that have committed this specific banable offence.

How Has This Been Tested?

System-level testing.

What process can a PR reviewer use to test or verify this change?

Code review.
System-level testing.

Breaking Changes

  • None
  • Requires data directory on base node to be deleted
  • Requires hard fork
  • Other - Please specify

Summary by CodeRabbit

  • New Features
    • Added a configurable threshold (default 100) for the number of offenses tolerated during peer syncing before banning a peer.
  • Bug Fixes
    • Improved timeout handling, error logging, and peer validation to accumulate multiple offenses before banning during peer sync.
  • Refactor
    • Enhanced offense severity comparison with ordering and equality traits.
    • Unified and simplified peer banning logic and error handling for better robustness and clarity.

Added tolerance for banable offences while busy with peer sync. A sync session will be
interrupted after a configurable number of banable offences have been committed. When the
sync session ends due to this threshold or otherwise, the peer will be banned for the most
severe ban offence. This scheme enables a local node to accumulate good peer info sets in
an optimal manner.
Contributor

coderabbitai bot commented Jul 22, 2025

Walkthrough

This change introduces a configurable threshold for banning peers during network discovery peer syncing. Offences are accumulated and only trigger a ban when the threshold is reached or after syncing completes. The offence severity handling is centralized, and related configuration, validation, and timeout logic are updated in both code and configuration files.

Changes

Files/Paths and change summary:

  • common/config/presets/c_base_node_c.toml, common/config/presets/d_console_wallet.toml: Added commented configuration option peer_sync_immediate_ban_threshold with documentation and default value 100.
  • comms/dht/src/actor.rs: Derived Eq, PartialEq, PartialOrd, and Ord for the OffenceSeverity enum to enable comparisons.
  • comms/dht/src/network_discovery/config.rs: Added a peer_sync_immediate_ban_threshold: usize field to NetworkDiscoveryConfig with default 100 and docs.
  • comms/dht/src/network_discovery/discovering.rs: Refactored peer syncing to accumulate offences up to the threshold, added explicit timeouts for RPC calls, enhanced logging, and added helper methods for streaming and error handling. Updated banning logic to consider the new offence variant.
  • comms/dht/src/network_discovery/error.rs: Added new error variant BanableOffencesWhileSyncing(usize) for excessive offences during peer sync; updated the PartialEq impl accordingly.
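
As a sketch of the configuration change (the field name, type, and default of 100 are stated above; the trimmed-down struct is illustrative):

// Sketch of the new config field; everything except the field name, type and
// default value is a simplified stand-in.
#[derive(Debug, Clone)]
pub struct NetworkDiscoveryConfig {
    /// Any banable offence will be punished after sync completes or when the
    /// threshold is reached; this lets a node accumulate good peer info sets.
    pub peer_sync_immediate_ban_threshold: usize,
}

impl Default for NetworkDiscoveryConfig {
    fn default() -> Self {
        Self {
            peer_sync_immediate_ban_threshold: 100,
        }
    }
}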

Sequence Diagram(s)

sequenceDiagram
    participant Discovering
    participant SyncPeer
    participant DhtClient

    Discovering->>DhtClient: request_peers()
    loop For each peer response
        DhtClient-->>Discovering: PeerInfo or Error
        Discovering->>Discovering: validate_and_add_peer()
        alt Offence detected
            Discovering->>Discovering: Accumulate offence count
            alt Threshold reached
                Discovering->>Discovering: Stop processing, ban SyncPeer
            end
        end
    end
    alt Offences recorded
        Discovering->>Discovering: Ban SyncPeer if necessary
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • fix: don't ban peers for invalid peer data #7170: Both PRs modify peer banning logic in network discovery, with this PR introducing a configurable ban threshold and offence accumulation, while the related PR changes error handling to avoid banning peers for invalid data.

Suggested reviewers

  • SWvheerden

Poem

A bunny with code on its mind,
Hopped through offences, counting each kind.
Now bans are more fair,
With thresholds to spare—
Peer syncing is robustly designed!
🐇✨

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bdb6caa and 4a10c4c.

📒 Files selected for processing (1)
  • comms/dht/src/actor.rs (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • comms/dht/src/actor.rs
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
  • GitHub Check: test (testnet, esmeralda)
  • GitHub Check: test (mainnet, stagenet)
  • GitHub Check: test (nextnet, nextnet)
  • GitHub Check: cargo check with stable
  • GitHub Check: ci
  • GitHub Check: ledger build tests
  • GitHub Check: Cucumber tests / FFI
  • GitHub Check: Cucumber tests / Base Layer

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (5)
comms/dht/src/network_discovery/config.rs (1)

105-105: Fix typo in documentation

There's a typo in the documentation: "syc" should be "sync".

-    /// Any banable offense will be punished after syc completes or when the threshold is reached; this will let a node
+    /// Any banable offense will be punished after sync completes or when the threshold is reached; this will let a node
common/config/presets/d_console_wallet.toml (1)

380-380: Fix typo in configuration comment

Same typo as in the code: "syc" should be "sync".

-# Any banable offense will be punished after syc completes or when the threshold is reached; this will let a node
+# Any banable offense will be punished after sync completes or when the threshold is reached; this will let a node
common/config/presets/c_base_node_c.toml (1)

334-334: Fix typo in configuration comment

Same typo: "syc" should be "sync".

-# Any banable offense will be punished after syc completes or when the threshold is reached; this will let a node
+# Any banable offense will be punished after sync completes or when the threshold is reached; this will let a node
comms/dht/src/actor.rs (1)

139-148: Simplify the max method using the derived Ord trait

Since OffenceSeverity now derives Ord, you can simplify the max method implementation using the standard library's max function.

impl OffenceSeverity {
    /// Compare two OffenceSeverity values and return the maximum severity
    pub fn max(self, other: OffenceSeverity) -> OffenceSeverity {
-        match (self, other) {
-            (OffenceSeverity::High, _) | (_, OffenceSeverity::High) => OffenceSeverity::High,
-            (OffenceSeverity::Medium, _) | (_, OffenceSeverity::Medium) => OffenceSeverity::Medium,
-            _ => OffenceSeverity::Low,
-        }
+        std::cmp::max(self, other)
    }
}
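
One caveat worth noting: a derived Ord compares enum variants by declaration order, so std::cmp::max is only correct if the variants are declared from least to most severe, as this sketch assumes:

// Derived Ord orders variants by declaration order, so Low < Medium < High
// holds only with this declaration order (assumed; not shown in the diff).
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum OffenceSeverity {
    Low,
    Medium,
    High,
}

fn main() {
    assert!(OffenceSeverity::Low < OffenceSeverity::High);
    assert_eq!(
        std::cmp::max(OffenceSeverity::Medium, OffenceSeverity::High),
        OffenceSeverity::High
    );
}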
comms/dht/src/network_discovery/discovering.rs (1)

256-266: Optimize error return logic

The current implementation sorts all offenses but only uses the first one. Consider using max_by for better performance.

-        // Return any error that has the highest severity offence if any were recorded
-        if !ban_offences.is_empty() {
-            // Sort the offences by severity, from highest to lowest
-            ban_offences.sort_by(|a, b| b.0.cmp(&a.0));
-            if let Some((severity, err)) = ban_offences.into_iter().next() {
-                trace!(
-                    target: LOG_TARGET,
-                    "Request peers from: `{}`, returning error: {:?}, severity: {:?}", sync_peer, err, severity
-                );
-                return Err(err);
-            }
+        // Return the error with the highest severity offence if any were recorded
+        if let Some((severity, err)) = ban_offences.into_iter().max_by(|a, b| a.0.cmp(&b.0)) {
+            trace!(
+                target: LOG_TARGET,
+                "Request peers from: `{}`, returning error: {:?}, severity: {:?}", sync_peer, err, severity
+            );
+            return Err(err);
         }
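
An equivalent, slightly terser form is max_by_key, assuming the severity key is Copy + Ord. A toy demo with u8 severities standing in for OffenceSeverity:

fn main() {
    let ban_offences = vec![(1u8, "low"), (3u8, "high"), (2u8, "medium")];
    // max_by_key picks the entry with the greatest severity key; on ties the
    // last maximal element wins, matching max_by with an explicit comparator.
    if let Some((severity, err)) = ban_offences.into_iter().max_by_key(|(severity, _)| *severity) {
        println!("worst offence: {err} (severity {severity})");
    }
}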
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ad5aac9 and 52aa149.

📒 Files selected for processing (5)
  • common/config/presets/c_base_node_c.toml (1 hunks)
  • common/config/presets/d_console_wallet.toml (1 hunks)
  • comms/dht/src/actor.rs (1 hunks)
  • comms/dht/src/network_discovery/config.rs (2 hunks)
  • comms/dht/src/network_discovery/discovering.rs (7 hunks)
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: hansieodendaal
PR: tari-project/tari#7358
File: comms/core/src/peer_manager/storage/database.rs:566-570
Timestamp: 2025-07-21T16:03:14.260Z
Learning: In the Tari peer database, there was a known issue with JSON serialization corruption in the `source` field of the `multi_addresses` table, causing approximately 0.4% of peer validation failures. The migration to Borsh serialization (2025-07-21-170500_peer_address_source) intentionally uses a destructive approach (dropping and recreating tables) because the existing JSON data contains corruption that cannot be reliably converted. This data loss is acceptable to ensure data integrity going forward.
Learnt from: hansieodendaal
PR: tari-project/tari#6963
File: comms/core/src/peer_manager/manager.rs:60-68
Timestamp: 2025-05-26T02:40:23.812Z
Learning: PeerDatabaseSql in the Tari codebase has been specifically refactored to handle concurrent access and mitigate blocking I/O concerns on async executor threads. The implementation has been tested under high load at both system level and through unit tests like test_concurrent_add_or_update_and_get_closest_peers which validates concurrent read/write operations.
common/config/presets/d_console_wallet.toml (3)

Learnt from: SolfataraEmit
PR: #6994
File: docs/src/API_GRPC_Explanation.md:236-244
Timestamp: 2025-04-28T11:41:15.722Z
Learning: The minotari_console_wallet reports a fourth field timelocked_balance in the terminal output for the getBalance method, which should be reflected in the documentation for accuracy. The wallet.proto file confirms that timelocked_balance is defined as the fourth field in the GetBalanceResponse message.

Learnt from: SolfataraEmit
PR: #6994
File: docs/src/API_GRPC_Explanation.md:236-244
Timestamp: 2025-04-28T11:41:15.722Z
Learning: The minotari_console_wallet reports a fourth field timelocked_balance in the terminal output for the getBalance method, which should be reflected in the documentation for accuracy.

Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:721-735
Timestamp: 2025-07-09T08:13:37.206Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the 10-second STREAM_ITEM_TIMEOUT and retry logic are intentionally designed to handle service conflicts where other services kill seed peer connections during seedstrap operations. The underlying discovery_peer/dial_peer API timeouts are too lenient for seedstrap use cases, so the more aggressive timeout with retry logic is appropriate and necessary.

common/config/presets/c_base_node_c.toml (1)

Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:721-735
Timestamp: 2025-07-09T08:13:37.206Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the 10-second STREAM_ITEM_TIMEOUT and retry logic are intentionally designed to handle service conflicts where other services kill seed peer connections during seedstrap operations. The underlying discovery_peer/dial_peer API timeouts are too lenient for seedstrap use cases, so the more aggressive timeout with retry logic is appropriate and necessary.

comms/dht/src/network_discovery/config.rs (8)

Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:721-735
Timestamp: 2025-07-09T08:13:37.206Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the 10-second STREAM_ITEM_TIMEOUT and retry logic are intentionally designed to handle service conflicts where other services kill seed peer connections during seedstrap operations. The underlying discovery_peer/dial_peer API timeouts are too lenient for seedstrap use cases, so the more aggressive timeout with retry logic is appropriate and necessary.

Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the NUM_RETRIES logic in get_peers is specifically designed to handle peer connections that are closed while trying to RPC stream peer info, not general connection failures. The retry logic only applies when peers.is_empty() && !conn.is_connected() && attempt < NUM_RETRIES, which indicates a mid-stream disconnection.

Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the fetch_peers_from_connection and collect_peer_stream functions rely on RPC streaming, and when the main connection is closed by another process, collect_peer_stream times out after STREAM_ITEM_TIMEOUT because it cannot detect that the peer can no longer respond, returning an empty vector of peers. This is why the connection state check is important for the retry logic.

Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the context.connectivity.dial_peer method should fail fast and return an error if a peer cannot be dialed, rather than requiring retry logic for general connection failures.

Learnt from: hansieodendaal
PR: #7123
File: comms/core/src/peer_manager/storage/database.rs:1517-1541
Timestamp: 2025-05-29T09:42:20.881Z
Learning: In the hard_delete_all_stale_peers method in comms/core/src/peer_manager/storage/database.rs, the SQL query intentionally uses exact equality (peers.features = ?) rather than bitwise operations (peers.features & ? != 0) when matching COMMUNICATION_NODE features. This is the intended behavior to match only peers with exactly the COMMUNICATION_NODE feature, excluding those with additional feature flags.

Learnt from: SWvheerden
PR: #6951
File: base_layer/core/src/base_node/tari_pulse_service/mod.rs:327-352
Timestamp: 2025-04-16T07:06:53.981Z
Learning: The discovery_peer and dial_peer methods in the Tari codebase have built-in timeout mechanisms, so adding explicit timeouts with tokio::time::timeout is unnecessary.

Learnt from: SWvheerden
PR: #6951
File: base_layer/core/src/base_node/tari_pulse_service/mod.rs:327-352
Timestamp: 2025-04-16T07:06:53.981Z
Learning: The discovery_peer and dial_peer methods in the Tari codebase have built-in timeout mechanisms, so adding explicit timeouts with tokio::time::timeout is unnecessary.

Learnt from: hansieodendaal
PR: #7307
File: comms/core/src/connection_manager/peer_connection.rs:356-357
Timestamp: 2025-07-12T03:43:22.545Z
Learning: In base_layer/core/src/base_node/tari_pulse_service/mod.rs, the disconnect_if_unused call uses parameters (0, 2) because the tari_pulse_service creates exactly 2 substreams and 0 RPC sessions during health checks.

comms/dht/src/network_discovery/discovering.rs (8)

Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the fetch_peers_from_connection and collect_peer_stream functions rely on RPC streaming, and when the main connection is closed by another process, collect_peer_stream times out after STREAM_ITEM_TIMEOUT because it cannot detect that the peer can no longer respond, returning an empty vector of peers. This is why the connection state check is important for the retry logic.

Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the NUM_RETRIES logic in get_peers is specifically designed to handle peer connections that are closed while trying to RPC stream peer info, not general connection failures. The retry logic only applies when peers.is_empty() && !conn.is_connected() && attempt < NUM_RETRIES, which indicates a mid-stream disconnection.

Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the context.connectivity.dial_peer method should fail fast and return an error if a peer cannot be dialed, rather than requiring retry logic for general connection failures.

Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:721-735
Timestamp: 2025-07-09T08:13:37.206Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the 10-second STREAM_ITEM_TIMEOUT and retry logic are intentionally designed to handle service conflicts where other services kill seed peer connections during seedstrap operations. The underlying discovery_peer/dial_peer API timeouts are too lenient for seedstrap use cases, so the more aggressive timeout with retry logic is appropriate and necessary.

Learnt from: hansieodendaal
PR: #7123
File: comms/core/src/peer_manager/storage/database.rs:1517-1541
Timestamp: 2025-05-29T09:42:20.881Z
Learning: In the hard_delete_all_stale_peers method in comms/core/src/peer_manager/storage/database.rs, the SQL query intentionally uses exact equality (peers.features = ?) rather than bitwise operations (peers.features & ? != 0) when matching COMMUNICATION_NODE features. This is the intended behavior to match only peers with exactly the COMMUNICATION_NODE feature, excluding those with additional feature flags.

Learnt from: SWvheerden
PR: #6951
File: base_layer/core/src/base_node/tari_pulse_service/mod.rs:327-352
Timestamp: 2025-04-16T07:06:53.981Z
Learning: The discovery_peer and dial_peer methods in the Tari codebase have built-in timeout mechanisms, so adding explicit timeouts with tokio::time::timeout is unnecessary.

Learnt from: SWvheerden
PR: #6951
File: base_layer/core/src/base_node/tari_pulse_service/mod.rs:327-352
Timestamp: 2025-04-16T07:06:53.981Z
Learning: The discovery_peer and dial_peer methods in the Tari codebase have built-in timeout mechanisms, so adding explicit timeouts with tokio::time::timeout is unnecessary.

Learnt from: hansieodendaal
PR: #7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
  • GitHub Check: cargo check with stable
  • GitHub Check: ci
  • GitHub Check: ledger build tests
  • GitHub Check: test (nextnet, nextnet)
  • GitHub Check: test (mainnet, stagenet)
  • GitHub Check: test (testnet, esmeralda)
  • GitHub Check: Cucumber tests / Base Layer
  • GitHub Check: Cucumber tests / FFI
🔇 Additional comments (2)
comms/dht/src/network_discovery/discovering.rs (2)

318-336: Good refactoring of offence severity logic

The centralization of offence severity determination into a single method improves maintainability and consistency.


191-253: Well-implemented tolerance mechanism for peer syncing

The implementation successfully adds tolerance for banable offenses during peer sync by:

  • Accumulating offenses instead of failing immediately
  • Respecting the configurable threshold
  • Continuing to process valid peers despite some bad ones
  • Returning the highest severity error after processing

This approach aligns well with the PR objectives to optimize the accumulation of good peer information.

Collaborator

@SWvheerden SWvheerden left a comment

this is way overly complicated, the issue says to only count invalid peers

@hansieodendaal
Contributor Author

this is way overly complicated, the issue says to only count invalid peers

We can add a ban if the peer sends more than x bad peers in a batch, as a precaution against being flooded with bad peers.

I will make a new one.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
comms/dht/src/network_discovery/discovering.rs (1)

146-191: Fix typo in error message and approve timeout handling.

The timeout handling implementation is excellent - it prevents hanging on unresponsive peers and provides detailed logging. However, there's a typo on line 242.

Apply this diff to fix the typo:

-                    "RPC get_peers from sunc '{}' timed out after {:?}",
+                    "RPC get_peers from sync '{}' timed out after {:?}",
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3c88d18 and 1464005.

📒 Files selected for processing (5)
  • common/config/presets/c_base_node_c.toml (1 hunks)
  • common/config/presets/d_console_wallet.toml (1 hunks)
  • comms/dht/src/network_discovery/config.rs (2 hunks)
  • comms/dht/src/network_discovery/discovering.rs (10 hunks)
  • comms/dht/src/network_discovery/error.rs (2 hunks)
✅ Files skipped from review due to trivial changes (2)
  • common/config/presets/d_console_wallet.toml
  • common/config/presets/c_base_node_c.toml
🚧 Files skipped from review as they are similar to previous changes (2)
  • comms/dht/src/network_discovery/error.rs
  • comms/dht/src/network_discovery/config.rs
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
  • GitHub Check: test (nextnet, nextnet)
  • GitHub Check: test (testnet, esmeralda)
  • GitHub Check: test (mainnet, stagenet)
  • GitHub Check: cargo check with stable
  • GitHub Check: ledger build tests
  • GitHub Check: Cucumber tests / Base Layer
  • GitHub Check: ci
  • GitHub Check: Cucumber tests / FFI
🔇 Additional comments (7)
comms/dht/src/network_discovery/discovering.rs (7)

30-31: LGTM: Import additions support new functionality.

The added imports (PeerValidatorError, ClientStreaming, RpcStatus, Hex, DhtPeerValidatorError, UnvalidatedPeerInfo) are correctly aligned with the new timeout handling, streaming operations, and offence accumulation logic.

Also applies to: 35-35, 49-49, 52-52


193-270: LGTM: Well-structured helper methods improve code organization.

The get_stream and get_peer_response helper methods effectively separate concerns and apply consistent timeout handling patterns. The error handling is comprehensive and provides good debugging context.


358-381: LGTM: Method signature simplification improves code organization.

Removing the sync_peer parameter and associated logging from validate_and_add_peer is good refactoring - it follows single responsibility principle and makes the method more focused on validation logic only.


395-396: LGTM: Correctly adds new error variant to ban logic.

Adding NetworkDiscoveryError::BanableOffencesWhileSyncing to the high-severity ban offences is appropriate - it ensures peers exceeding the offence threshold are properly banned.


420-445: LGTM: Enhanced logging improves network monitoring.

The detailed logging for both successful bans and failures provides excellent visibility into network health and banning behavior. The context information (peer ID, duration, reason) will be valuable for debugging and monitoring.


133-135: LGTM: Simplified success counting improves code clarity.

The streamlined success counting logic is cleaner and more maintainable. Error logging is now properly handled in the appropriate methods with better context.


272-356: Pattern matching correctly isolates InvalidPeerSignature errors

The if let NetworkDiscoveryError::PeerValidationError( DhtPeerValidatorError::ValidatorError(PeerValidatorError::InvalidPeerSignature { .. }) ) clause only matches the InvalidPeerSignature variant and will not catch any other PeerValidatorError variants. No changes are needed.
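
For reference, the nested pattern described above has roughly this shape (a sketch; the enum names follow the review text, with the variant lists trimmed to stand-ins):

// Simplified stand-ins for the nested error enums named in the review; only
// the InvalidPeerSignature variant is treated as a tolerated sync offence.
enum PeerValidatorError {
    InvalidPeerSignature { peer: String },
    Other,
}

enum DhtPeerValidatorError {
    ValidatorError(PeerValidatorError),
}

enum NetworkDiscoveryError {
    PeerValidationError(DhtPeerValidatorError),
    Other,
}

fn is_invalid_peer_signature(err: &NetworkDiscoveryError) -> bool {
    matches!(
        err,
        NetworkDiscoveryError::PeerValidationError(DhtPeerValidatorError::ValidatorError(
            PeerValidatorError::InvalidPeerSignature { .. }
        ))
    )
}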

Collaborator

@SWvheerden SWvheerden left a comment

Replace all the match statements that exist just to log errors with inspect_err

Comment on lines +309 to +318
let GetPeersResponse { peer } = match resp {
Ok(val) => val,
Err(err) => {
warn!(
target: LOG_TARGET,
"Discovering: Sync peer `{}` sent invalid response: {:?}",
sync_peer, err
);
return Err(NetworkDiscoveryError::from(err));
},
Collaborator

Suggested change
let GetPeersResponse { peer } = match resp {
Ok(val) => val,
Err(err) => {
warn!(
target: LOG_TARGET,
"Discovering: Sync peer `{}` sent invalid response: {:?}",
sync_peer, err
);
return Err(NetworkDiscoveryError::from(err));
},
let GetPeersResponse { peer } = resp.inspect_err(|e|
    warn!(
        target: LOG_TARGET,
        "Discovering: Sync peer `{}` sent invalid response: {:?}",
        sync_peer, e
    ))?;

Contributor Author

Maybe I am missing something here, but the compiler does not allow it because the inspect_err method is designed for inspecting errors without consuming the Result, and in each case I need to return on error?

Collaborator

@SWvheerden SWvheerden Jul 25, 2025

hence the ? at the end?
Doing the following is the same:

let res = match function() {
    Ok(v) => v,
    Err(e) => {
        log(stuff);
        return Err(e);
    },
};

vs

let res = function().inspect_err(|e| log(stuff))?;
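
A self-contained demo of that equivalence; inspect_err only observes the error and returns the Result unchanged, so the trailing ? still propagates it (Result::inspect_err is stable since Rust 1.76):

fn function() -> Result<u32, String> {
    Err("boom".to_string())
}

fn caller() -> Result<u32, String> {
    // Log the error as it passes through, then let `?` return it early.
    let res = function().inspect_err(|e| eprintln!("logging: {e}"))?;
    Ok(res)
}

fn main() {
    assert!(caller().is_err());
}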

@stringhandler
Collaborator

I disagree with the approach here. We should not be adding complicated ban logic; we will never be able to test this properly.

);
let result = self.request_peers(peer_node_id, client).await;
self.ban_on_offence(peer_node_id.clone(), result).await?;

Ok(())
}

async fn request_peers(
async fn get_stream(
Contributor

Suggested change
async fn get_stream(
async fn get_peer_stream(

for more readable code
