feat: add tolerance for invalid peer info while busy with peer sync #7361
base: development
Conversation
Added tolerance for bannable offences while busy with peer sync. A sync session will be interrupted after a configurable number of bannable offences has been committed. When the sync session ends, whether due to this threshold or otherwise, the peer will be banned for the most severe ban offence. This scheme enables a local node to accumulate good peer info sets in an optimal manner.
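The accumulation scheme described above can be sketched as follows. This is a minimal illustrative model, not the PR's actual code: the names `OffenceSeverity`, `sync_with_tolerance`, and `max_offences` are hypothetical stand-ins for the real types in `comms/dht`.

```rust
// Hypothetical sketch of the tolerance scheme: offences are accumulated
// instead of failing immediately, the session stops at a configurable
// threshold, and the most severe offence determines the eventual ban.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum OffenceSeverity {
    Low,
    Medium,
    High,
}

/// Process peer results, tolerating up to `max_offences` bannable offences.
/// Returns the accepted peers and the most severe offence recorded, if any.
fn sync_with_tolerance(
    results: &[Result<&'static str, OffenceSeverity>],
    max_offences: usize,
) -> (Vec<&'static str>, Option<OffenceSeverity>) {
    let mut accepted = Vec::new();
    let mut offences = Vec::new();
    for r in results {
        match r {
            Ok(peer) => accepted.push(*peer),
            Err(severity) => {
                offences.push(*severity);
                if offences.len() >= max_offences {
                    // Interrupt the sync session once the threshold is reached
                    break;
                }
            },
        }
    }
    // Ban for the most severe offence committed during the session
    let worst = offences.into_iter().max();
    (accepted, worst)
}

fn main() {
    let results = [
        Ok("peer_a"),
        Err(OffenceSeverity::Low),
        Ok("peer_b"),
        Err(OffenceSeverity::High),
    ];
    let (accepted, worst) = sync_with_tolerance(&results, 3);
    println!("accepted={:?} worst={:?}", accepted, worst);
}
```

Note how valid peers received before the threshold is hit are still kept, which is what lets the node keep accumulating good peer info despite a few bad entries.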
Walkthrough

This change introduces a configurable threshold for banning peers during network discovery peer syncing. Offences are accumulated and only trigger a ban when the threshold is reached or after syncing completes. The offence severity handling is centralized, and related configuration, validation, and timeout logic are updated in both code and configuration files.

Changes
Sequence Diagram(s)

sequenceDiagram
participant Discovering
participant SyncPeer
participant DhtClient
Discovering->>DhtClient: request_peers()
loop For each peer response
DhtClient-->>Discovering: PeerInfo or Error
Discovering->>Discovering: validate_and_add_peer()
alt Offence detected
Discovering->>Discovering: Accumulate offence count
alt Threshold reached
Discovering->>Discovering: Stop processing, ban SyncPeer
end
end
end
alt Offences recorded
Discovering->>Discovering: Ban SyncPeer if necessary
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Actionable comments posted: 0
🧹 Nitpick comments (5)
comms/dht/src/network_discovery/config.rs (1)
105-105
: Fix typo in documentation

There's a typo in the documentation: "syc" should be "sync".

```diff
- /// Any banable offense will be punished after syc completes or when the threshold is reached; this will let a node
+ /// Any banable offense will be punished after sync completes or when the threshold is reached; this will let a node
```

common/config/presets/d_console_wallet.toml (1)
380-380
: Fix typo in configuration comment

Same typo as in the code: "syc" should be "sync".

```diff
-# Any banable offense will be punished after syc completes or when the threshold is reached; this will let a node
+# Any banable offense will be punished after sync completes or when the threshold is reached; this will let a node
```

common/config/presets/c_base_node_c.toml (1)
334-334
: Fix typo in configuration comment

Same typo: "syc" should be "sync".

```diff
-# Any banable offense will be punished after syc completes or when the threshold is reached; this will let a node
+# Any banable offense will be punished after sync completes or when the threshold is reached; this will let a node
```

comms/dht/src/actor.rs (1)
139-148
: Simplify the max method using the derived Ord trait

Since OffenceSeverity now derives Ord, you can simplify the max method implementation using the standard library's max function.

```diff
 impl OffenceSeverity {
     /// Compare two OffenceSeverity values and return the maximum severity
     pub fn max(self, other: OffenceSeverity) -> OffenceSeverity {
-        match (self, other) {
-            (OffenceSeverity::High, _) | (_, OffenceSeverity::High) => OffenceSeverity::High,
-            (OffenceSeverity::Medium, _) | (_, OffenceSeverity::Medium) => OffenceSeverity::Medium,
-            _ => OffenceSeverity::Low,
-        }
+        std::cmp::max(self, other)
     }
 }
```

comms/dht/src/network_discovery/discovering.rs (1)
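The suggestion works because `#[derive(Ord)]` on a fieldless enum orders variants by declaration order, so declaring `Low`, `Medium`, `High` in that order makes the derived comparison match the intended severity ranking. A minimal standalone sketch (the enum here is a stand-in for the real type in `comms/dht/src/actor.rs`):

```rust
// Variants are declared Low < Medium < High, so the derived Ord matches the
// intended severity ordering and std::cmp::max replaces the manual match.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum OffenceSeverity {
    Low,
    Medium,
    High,
}

impl OffenceSeverity {
    /// Compare two OffenceSeverity values and return the maximum severity.
    fn max(self, other: OffenceSeverity) -> OffenceSeverity {
        std::cmp::max(self, other)
    }
}

fn main() {
    println!("{:?}", OffenceSeverity::Low.max(OffenceSeverity::High));
}
```

One caveat of this approach: reordering the enum's variants silently changes the severity ordering, so the declaration order becomes part of the type's contract.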
256-266
: Optimize error return logic

The current implementation sorts all offences but only uses the first one. Consider using max_by for better performance.

```diff
-        // Return any error that has the highest severity offence if any were recorded
-        if !ban_offences.is_empty() {
-            // Sort the offences by severity, from highest to lowest
-            ban_offences.sort_by(|a, b| b.0.cmp(&a.0));
-            if let Some((severity, err)) = ban_offences.into_iter().next() {
-                trace!(
-                    target: LOG_TARGET,
-                    "Request peers from: `{}`, returning error: {:?}, severity: {:?}", sync_peer, err, severity
-                );
-                return Err(err);
-            }
+        // Return the error with the highest severity offence if any were recorded
+        if let Some((severity, err)) = ban_offences.into_iter().max_by(|a, b| a.0.cmp(&b.0)) {
+            trace!(
+                target: LOG_TARGET,
+                "Request peers from: `{}`, returning error: {:?}, severity: {:?}", sync_peer, err, severity
+            );
+            return Err(err);
+        }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
common/config/presets/c_base_node_c.toml
(1 hunks)common/config/presets/d_console_wallet.toml
(1 hunks)comms/dht/src/actor.rs
(1 hunks)comms/dht/src/network_discovery/config.rs
(2 hunks)comms/dht/src/network_discovery/discovering.rs
(7 hunks)
🧠 Learnings (5)
📓 Common learnings
Learnt from: hansieodendaal
PR: tari-project/tari#7358
File: comms/core/src/peer_manager/storage/database.rs:566-570
Timestamp: 2025-07-21T16:03:14.260Z
Learning: In the Tari peer database, there was a known issue with JSON serialization corruption in the `source` field of the `multi_addresses` table, causing approximately 0.4% of peer validation failures. The migration to Borsh serialization (2025-07-21-170500_peer_address_source) intentionally uses a destructive approach (dropping and recreating tables) because the existing JSON data contains corruption that cannot be reliably converted. This data loss is acceptable to ensure data integrity going forward.
Learnt from: hansieodendaal
PR: tari-project/tari#6963
File: comms/core/src/peer_manager/manager.rs:60-68
Timestamp: 2025-05-26T02:40:23.812Z
Learning: PeerDatabaseSql in the Tari codebase has been specifically refactored to handle concurrent access and mitigate blocking I/O concerns on async executor threads. The implementation has been tested under high load at both system level and through unit tests like test_concurrent_add_or_update_and_get_closest_peers which validates concurrent read/write operations.
common/config/presets/d_console_wallet.toml (3)
Learnt from: SolfataraEmit
PR: #6994
File: docs/src/API_GRPC_Explanation.md:236-244
Timestamp: 2025-04-28T11:41:15.722Z
Learning: The minotari_console_wallet reports a fourth field timelocked_balance in the terminal output for the getBalance method, which should be reflected in the documentation for accuracy. The wallet.proto file confirms that timelocked_balance is defined as the fourth field in the GetBalanceResponse message.
Learnt from: SolfataraEmit
PR: #6994
File: docs/src/API_GRPC_Explanation.md:236-244
Timestamp: 2025-04-28T11:41:15.722Z
Learning: The minotari_console_wallet reports a fourth field timelocked_balance in the terminal output for the getBalance method, which should be reflected in the documentation for accuracy.
Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:721-735
Timestamp: 2025-07-09T08:13:37.206Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the 10-second STREAM_ITEM_TIMEOUT and retry logic are intentionally designed to handle service conflicts where other services kill seed peer connections during seedstrap operations. The underlying discovery_peer/dial_peer API timeouts are too lenient for seedstrap use cases, so the more aggressive timeout with retry logic is appropriate and necessary.
common/config/presets/c_base_node_c.toml (1)
Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:721-735
Timestamp: 2025-07-09T08:13:37.206Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the 10-second STREAM_ITEM_TIMEOUT and retry logic are intentionally designed to handle service conflicts where other services kill seed peer connections during seedstrap operations. The underlying discovery_peer/dial_peer API timeouts are too lenient for seedstrap use cases, so the more aggressive timeout with retry logic is appropriate and necessary.
comms/dht/src/network_discovery/config.rs (8)
Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:721-735
Timestamp: 2025-07-09T08:13:37.206Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the 10-second STREAM_ITEM_TIMEOUT and retry logic are intentionally designed to handle service conflicts where other services kill seed peer connections during seedstrap operations. The underlying discovery_peer/dial_peer API timeouts are too lenient for seedstrap use cases, so the more aggressive timeout with retry logic is appropriate and necessary.
Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the NUM_RETRIES logic in get_peers is specifically designed to handle peer connections that are closed while trying to RPC stream peer info, not general connection failures. The retry logic only applies when peers.is_empty() && !conn.is_connected() && attempt < NUM_RETRIES, which indicates a mid-stream disconnection.
Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the fetch_peers_from_connection and collect_peer_stream functions rely on RPC streaming, and when the main connection is closed by another process, collect_peer_stream times out after STREAM_ITEM_TIMEOUT because it cannot detect that the peer can no longer respond, returning an empty vector of peers. This is why the connection state check is important for the retry logic.
Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the context.connectivity.dial_peer method should fail fast and return an error if a peer cannot be dialed, rather than requiring retry logic for general connection failures.
Learnt from: hansieodendaal
PR: #7123
File: comms/core/src/peer_manager/storage/database.rs:1517-1541
Timestamp: 2025-05-29T09:42:20.881Z
Learning: In the hard_delete_all_stale_peers method in comms/core/src/peer_manager/storage/database.rs, the SQL query intentionally uses exact equality (peers.features = ?) rather than bitwise operations (peers.features & ? != 0) when matching COMMUNICATION_NODE features. This is the intended behavior to match only peers with exactly the COMMUNICATION_NODE feature, excluding those with additional feature flags.
Learnt from: SWvheerden
PR: #6951
File: base_layer/core/src/base_node/tari_pulse_service/mod.rs:327-352
Timestamp: 2025-04-16T07:06:53.981Z
Learning: The discovery_peer and dial_peer methods in the Tari codebase have built-in timeout mechanisms, so adding explicit timeouts with tokio::time::timeout is unnecessary.
Learnt from: hansieodendaal
PR: #7307
File: comms/core/src/connection_manager/peer_connection.rs:356-357
Timestamp: 2025-07-12T03:43:22.545Z
Learning: In base_layer/core/src/base_node/tari_pulse_service/mod.rs, the disconnect_if_unused call uses parameters (0, 2) because the tari_pulse_service creates exactly 2 substreams and 0 RPC sessions during health checks.
comms/dht/src/network_discovery/discovering.rs (8)
Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the fetch_peers_from_connection and collect_peer_stream functions rely on RPC streaming, and when the main connection is closed by another process, collect_peer_stream times out after STREAM_ITEM_TIMEOUT because it cannot detect that the peer can no longer respond, returning an empty vector of peers. This is why the connection state check is important for the retry logic.
Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the NUM_RETRIES logic in get_peers is specifically designed to handle peer connections that are closed while trying to RPC stream peer info, not general connection failures. The retry logic only applies when peers.is_empty() && !conn.is_connected() && attempt < NUM_RETRIES, which indicates a mid-stream disconnection.
Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:352-456
Timestamp: 2025-07-09T08:33:29.320Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the context.connectivity.dial_peer method should fail fast and return an error if a peer cannot be dialed, rather than requiring retry logic for general connection failures.
Learnt from: hansieodendaal
PR: #7294
File: comms/dht/src/network_discovery/seed_strap.rs:721-735
Timestamp: 2025-07-09T08:13:37.206Z
Learning: In comms/dht/src/network_discovery/seed_strap.rs, the 10-second STREAM_ITEM_TIMEOUT and retry logic are intentionally designed to handle service conflicts where other services kill seed peer connections during seedstrap operations. The underlying discovery_peer/dial_peer API timeouts are too lenient for seedstrap use cases, so the more aggressive timeout with retry logic is appropriate and necessary.
Learnt from: hansieodendaal
PR: #7123
File: comms/core/src/peer_manager/storage/database.rs:1517-1541
Timestamp: 2025-05-29T09:42:20.881Z
Learning: In the hard_delete_all_stale_peers method in comms/core/src/peer_manager/storage/database.rs, the SQL query intentionally uses exact equality (peers.features = ?) rather than bitwise operations (peers.features & ? != 0) when matching COMMUNICATION_NODE features. This is the intended behavior to match only peers with exactly the COMMUNICATION_NODE feature, excluding those with additional feature flags.
Learnt from: SWvheerden
PR: #6951
File: base_layer/core/src/base_node/tari_pulse_service/mod.rs:327-352
Timestamp: 2025-04-16T07:06:53.981Z
Learning: The discovery_peer and dial_peer methods in the Tari codebase have built-in timeout mechanisms, so adding explicit timeouts with tokio::time::timeout is unnecessary.
Learnt from: hansieodendaal
PR: #7284
File: applications/minotari_console_wallet/src/automation/commands.rs:0-0
Timestamp: 2025-07-15T12:23:14.650Z
Learning: In applications/minotari_console_wallet/src/automation/commands.rs, the consistent error handling pattern for command execution is to use match statements that: 1) On success: log with debug!, print user feedback, and push tx_id to tx_ids vector for monitoring, 2) On error: print error message with eprintln! using the format "{CommandName} error! {}", rather than using .unwrap() which would panic.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: cargo check with stable
- GitHub Check: ci
- GitHub Check: ledger build tests
- GitHub Check: test (nextnet, nextnet)
- GitHub Check: test (mainnet, stagenet)
- GitHub Check: test (testnet, esmeralda)
- GitHub Check: Cucumber tests / Base Layer
- GitHub Check: Cucumber tests / FFI
🔇 Additional comments (2)
comms/dht/src/network_discovery/discovering.rs (2)
318-336
: Good refactoring of offence severity logic

The centralization of offence severity determination into a single method improves maintainability and consistency.
191-253
: Well-implemented tolerance mechanism for peer syncing

The implementation successfully adds tolerance for bannable offences during peer sync by:
- Accumulating offenses instead of failing immediately
- Respecting the configurable threshold
- Continuing to process valid peers despite some bad ones
- Returning the highest severity error after processing
This approach aligns well with the PR objectives to optimize the accumulation of good peer information.
This is way overly complicated; the issue says to only count invalid peers.

I will make a new one.
Actionable comments posted: 0
🧹 Nitpick comments (1)
comms/dht/src/network_discovery/discovering.rs (1)
146-191
: Fix typo in error message and approve timeout handling.

The timeout handling implementation is excellent - it prevents hanging on unresponsive peers and provides detailed logging. However, there's a typo on line 242.

Apply this diff to fix the typo:

```diff
- "RPC get_peers from sunc '{}' timed out after {:?}",
+ "RPC get_peers from sync '{}' timed out after {:?}",
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- common/config/presets/c_base_node_c.toml (1 hunks)
- common/config/presets/d_console_wallet.toml (1 hunks)
- comms/dht/src/network_discovery/config.rs (2 hunks)
- comms/dht/src/network_discovery/discovering.rs (10 hunks)
- comms/dht/src/network_discovery/error.rs (2 hunks)
✅ Files skipped from review due to trivial changes (2)
- common/config/presets/d_console_wallet.toml
- common/config/presets/c_base_node_c.toml
🚧 Files skipped from review as they are similar to previous changes (2)
- comms/dht/src/network_discovery/error.rs
- comms/dht/src/network_discovery/config.rs
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: test (nextnet, nextnet)
- GitHub Check: test (testnet, esmeralda)
- GitHub Check: test (mainnet, stagenet)
- GitHub Check: cargo check with stable
- GitHub Check: ledger build tests
- GitHub Check: Cucumber tests / Base Layer
- GitHub Check: ci
- GitHub Check: Cucumber tests / FFI
🔇 Additional comments (7)
comms/dht/src/network_discovery/discovering.rs (7)
30-31
: LGTM: Import additions support new functionality.

The added imports (PeerValidatorError, ClientStreaming, RpcStatus, Hex, DhtPeerValidatorError, UnvalidatedPeerInfo) are correctly aligned with the new timeout handling, streaming operations, and offence accumulation logic.
Also applies to: 35-35, 49-49, 52-52
193-270
: LGTM: Well-structured helper methods improve code organization.

The get_stream and get_peer_response helper methods effectively separate concerns and apply consistent timeout handling patterns. The error handling is comprehensive and provides good debugging context.
358-381
: LGTM: Method signature simplification improves code organization.

Removing the sync_peer parameter and associated logging from validate_and_add_peer is good refactoring - it follows the single responsibility principle and makes the method more focused on validation logic only.
395-396
: LGTM: Correctly adds new error variant to ban logic.

Adding NetworkDiscoveryError::BanableOffencesWhileSyncing to the high-severity ban offences is appropriate - it ensures peers exceeding the offence threshold are properly banned.
420-445
: LGTM: Enhanced logging improves network monitoring.

The detailed logging for both successful bans and failures provides excellent visibility into network health and banning behavior. The context information (peer ID, duration, reason) will be valuable for debugging and monitoring.
133-135
: LGTM: Simplified success counting improves code clarity. The streamlined success counting logic is cleaner and more maintainable. Error logging is now properly handled in the appropriate methods with better context.
272-356
: Pattern matching correctly isolates InvalidPeerSignature errors. The if let NetworkDiscoveryError::PeerValidationError(DhtPeerValidatorError::ValidatorError(PeerValidatorError::InvalidPeerSignature { .. })) clause only matches the InvalidPeerSignature variant and will not catch any other PeerValidatorError variants. No changes are needed.
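The selectivity of that nested pattern can be demonstrated with simplified stand-in enums. These types mirror the nesting named in the comment but are not the actual Tari definitions:

```rust
// Simplified stand-ins for the nested error types named in the review.
#[derive(Debug)]
enum PeerValidatorError {
    InvalidPeerSignature { peer: String },
    Other,
}

#[derive(Debug)]
enum DhtPeerValidatorError {
    ValidatorError(PeerValidatorError),
}

#[derive(Debug)]
enum NetworkDiscoveryError {
    PeerValidationError(DhtPeerValidatorError),
    Timeout,
}

// Only the InvalidPeerSignature variant matches; every other error,
// including other PeerValidatorError variants, falls through.
fn is_tolerated(err: &NetworkDiscoveryError) -> bool {
    matches!(
        err,
        NetworkDiscoveryError::PeerValidationError(DhtPeerValidatorError::ValidatorError(
            PeerValidatorError::InvalidPeerSignature { .. }
        ))
    )
}

fn main() {
    let bad_sig = NetworkDiscoveryError::PeerValidationError(
        DhtPeerValidatorError::ValidatorError(PeerValidatorError::InvalidPeerSignature {
            peer: "abc".to_string(),
        }),
    );
    let other = NetworkDiscoveryError::PeerValidationError(
        DhtPeerValidatorError::ValidatorError(PeerValidatorError::Other),
    );
    assert!(is_tolerated(&bad_sig));
    assert!(!is_tolerated(&other));
    assert!(!is_tolerated(&NetworkDiscoveryError::Timeout));
}
```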
Replace all the match blocks that exist just to log errors with inspect_err
let GetPeersResponse { peer } = match resp {
    Ok(val) => val,
    Err(err) => {
        warn!(
            target: LOG_TARGET,
            "Discovering: Sync peer `{}` sent invalid response: {:?}",
            sync_peer, err
        );
        return Err(NetworkDiscoveryError::from(err));
    },
let GetPeersResponse { peer } = match resp {
    Ok(val) => val,
    Err(err) => {
        warn!(
            target: LOG_TARGET,
            "Discovering: Sync peer `{}` sent invalid response: {:?}",
            sync_peer, err
        );
        return Err(NetworkDiscoveryError::from(err));
    },
let GetPeersResponse { peer } = resp.inspect_err(|e|
    warn!(
        target: LOG_TARGET,
        "Discovering: Sync peer `{}` sent invalid response: {:?}",
        sync_peer, e
    ))?;
Maybe I am missing something here, but the compiler does not allow it because the inspect_err
method is designed for inspecting errors without consuming the Result, and in each case I need to return on error?
hence the ? at the end?
Doing the following is the same:
let res = match function() {
    Ok(v) => v,
    Err(e) => {
        log(stuff);
        return Err(e);
    },
};
vs
let res = function().inspect_err(|e| log(stuff))?;
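The equivalence claimed above can be checked directly. This is a minimal sketch with a hypothetical fetch function standing in for the RPC call; it assumes Rust 1.76+, where Result::inspect_err was stabilized:

```rust
// Hypothetical fallible call standing in for the RPC request.
fn fetch(ok: bool) -> Result<u32, String> {
    if ok { Ok(7) } else { Err("bad response".to_string()) }
}

// Style 1: explicit match that logs and returns early on error.
fn with_match(ok: bool) -> Result<u32, String> {
    let v = match fetch(ok) {
        Ok(val) => val,
        Err(err) => {
            eprintln!("sync peer sent invalid response: {err}");
            return Err(err);
        },
    };
    Ok(v)
}

// Style 2: inspect_err logs without consuming the Result; `?` propagates.
fn with_inspect(ok: bool) -> Result<u32, String> {
    let v = fetch(ok).inspect_err(|e| eprintln!("sync peer sent invalid response: {e}"))?;
    Ok(v)
}

fn main() {
    // Both styles produce identical results on success and on failure.
    assert_eq!(with_match(true), with_inspect(true));
    assert_eq!(with_match(false), with_inspect(false));
}
```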
I disagree with the approach here. We should not be adding complicated ban logic; we will never be able to test this properly.
    );
    let result = self.request_peers(peer_node_id, client).await;
    self.ban_on_offence(peer_node_id.clone(), result).await?;

    Ok(())
}

async fn request_peers(
async fn get_stream(
async fn get_stream(
async fn get_peer_stream(
for more readable code
Description
Added tolerance for bannable offences while busy with peer sync. A sync session will be interrupted after a configurable number of bannable offences has been committed. When the sync session ends, due to this threshold or otherwise, the peer will be banned for the most severe ban offence. This scheme enables a local node to accumulate good peer info sets optimally.

Update: All sync peers emit invalid peer info sets due to random embedded invalid peer signature errors in the peer identity claims. This PR is an interim measure to combat large-scale banning of peers and to collect good peer info sets from peers that send up to a threshold number of bad peer identity claims. Those sync peers will not be banned during peer sync.
(See linked issue #7306.)
Note: This does not apply to the seedstrap phase as seed peer ban offences are ignored then.
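The tolerance scheme the description outlines can be sketched as a small accumulator: record offences during the session, interrupt at the configurable threshold, and ban for the most severe offence when the session ends. All names here (SyncSession, Severity, record) are illustrative, not the actual tari_comms_dht types:

```rust
// Illustrative severity ordering: later variants are more severe.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Severity {
    Low,
    High,
}

struct SyncSession {
    max_offences: usize,
    offences: Vec<Severity>,
}

impl SyncSession {
    // Records an offence; returns true when the session should be interrupted.
    fn record(&mut self, severity: Severity) -> bool {
        self.offences.push(severity);
        self.offences.len() >= self.max_offences
    }

    // When the session ends, ban for the most severe accumulated offence, if any.
    fn ban_severity(&self) -> Option<Severity> {
        self.offences.iter().copied().max()
    }
}

fn main() {
    let mut session = SyncSession { max_offences: 3, offences: Vec::new() };
    assert!(!session.record(Severity::Low)); // tolerated, keep syncing
    assert!(!session.record(Severity::High)); // tolerated, keep syncing
    assert!(session.record(Severity::Low)); // threshold reached: interrupt sync
    assert_eq!(session.ban_severity(), Some(Severity::High));
}
```

Until the threshold is hit, each tolerated offence still lets the node keep any valid peer info already received in the session.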
Motivation and Context
Peer sync efficiency can be improved if good peer info sets are accepted from sync peers that have committed this specific bannable offence.
How Has This Been Tested?
System-level testing.
What process can a PR reviewer use to test or verify this change?
Code review.
System-level testing.
Breaking Changes
Summary by CodeRabbit