Tree-sync friendly lookup sync tests #8592
base: unstable
Conversation
    // removed from the da_checker. Note that ALL components are removed from the da_checker
    // so when we re-download and process the block we get the error
    // MissingComponentsAfterAllProcessed and get stuck.
    lookup.reset_requests();
Bug found during testing: lookups may get stuck given this sequence of events.
    // sending retry requests to the disconnecting peer.
    for sync_request_id in self.network.peer_disconnected(peer_id) {
        self.inject_error(*peer_id, sync_request_id, RPCError::Disconnected);
    }
Minor bug: we need to remove the peer from the sync states (e.g. self.block_lookups) and only then inject the disconnect events. Otherwise we may send requests to peers that have already disconnected. I don't think there is a risk of sync getting stuck if libp2p rejects sending messages to disconnected peers, but it deserves a fix anyway.
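A minimal sketch of the intended ordering (the removal helper below is an assumption, not the real API; peer_disconnected and inject_error are from the quoted diff):

    // Sketch only: drop the peer from sync state before injecting errors,
    // so retried requests cannot be routed back to the disconnecting peer.
    let failed_requests = self.network.peer_disconnected(peer_id);
    self.block_lookups.remove_peer(peer_id); // assumed helper name
    for sync_request_id in failed_requests {
        // Re-issue each in-flight request against the remaining connected peers.
        self.inject_error(*peer_id, sync_request_id, RPCError::Disconnected);
    }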
This pull request has merge conflicts. Could you please resolve them @dapplion? 🙏
    // Should not penalize peer, but network is not clear because of the blocks_by_range requests
    rig.expect_no_penalty_for(peer_id);
    rig.assert_ignored_chain(chain_hash);
    assert_eq!(r.dropped_lookups(), 0, "no dropped lookups");
    }
    // Regression test for https://github.com/sigp/lighthouse/pull/7118
    // 8042 UPDATE: block was previously added to the failed_chains cache, now it's inserted into the
    // ignored chains cache. The regression test still applies as the chaild lookup is not created
minor typo
Suggested change:
    - // ignored chains cache. The regression test still applies as the chaild lookup is not created
    + // ignored chains cache. The regression test still applies as the child lookup is not created
Done
    node_custody_type: NodeCustodyType::Fullnode,
    test_config: TestConfig {
        disable_crypto: false,
        disable_fetch_blobs: false,
Does this currently do anything in the tests, other than skipping the attempt?
The attempt will always fail because the mock EL does not support getBlobs, right?
I made an attempt to add it in #7986, but we didn't end up merging it because it didn't feel useful.
Removed disable_fetch_blobs as it does not materially increase speed (I think I added it to make the logs less messy).
Also applied disable_crypto to block production in the TestRig.
    .get_blinded_block(block_root)
    .unwrap()
    .unwrap_or_else(|| {
        panic!("block root does not exist in external harness {block_root:?}")
This isn't always the "external" harness.
Suggested change:
    - panic!("block root does not exist in external harness {block_root:?}")
    + panic!("block root does not exist in harness {block_root:?}")
Done
    #[cfg(test)]
    #[derive(Debug)]
    /// Tuple of `SingleLookupId`, requested block root, awaiting parent block root (if any),
Please update the doc comment: it's no longer a tuple, and the awaiting-parent block root has been removed.
Done
beacon_node/network/Cargo.toml (outdated)
    k256 = "0.13.4"
    kzg = { workspace = true }
    matches = "0.1.8"
    paste = "1.0.15"
Suggested change:
    - paste = "1.0.15"
    + paste = { workspace = true }
    RECENT_FORKS_BEFORE_GLOAS=electra fulu

    # List of all recent hard forks. This list is used to set env variables for http_api tests
    # Include phase0 to test the code paths in sync that are pre blobs
We already have nightly tests that run the prior-fork tests: #8319
But I just realised they haven't been activated on the sigp fork, because GitHub only runs scheduled workflows from the main branch (stable). We can either wait until the release or open a separate PR to stable to activate them.
Made a PR to activate these nightly tests:
#8636
Could we keep this for network tests only? It's just one extra fork and makes it easy to debug and catch errors. For sync tests we should keep only the forks that add new objects, i.e. run only phase0, deneb, fulu.
    test-network-%:
    -	env FORK_NAME=$* cargo nextest run --release \
    +	env FORK_NAME=$* cargo nextest run --no-fail-fast --release \
Do you want to merge this, or is it just for your local testing? I think it's fine to not fail fast, as long as the job doesn't take forever to run, e.g. beacon-chain-tests.
The --no-fail-fast flag gives you more information on CI about which set of tests failed. A single fork run is not that long, so we don't save much time, and the full report is useful.
    /// Beacon chain harness
    harness: BeaconChainHarness<EphemeralHarnessType<E>>,
    /// External beacon chain harness to produce blocks that are not imported
    external_harness: BeaconChainHarness<EphemeralHarnessType<E>>,
Any reason to have this as a field on TestRig? I see that it's only used in build_chain
Good find! Moved to build_chain
    // Inject a Disconnected error on all requests associated with the disconnected peer
    // to retry all batches/lookups. Only after removing the peer from the data structures to
    // sending retry requests to the disconnecting peer.
Missing word, I think.
Suggested change:
    - // sending retry requests to the disconnecting peer.
    + // avoid sending retry requests to the disconnecting peer.
Some required checks have failed. Could you please take a look @dapplion? 🙏
Issue Addressed
Current lookup sync tests are written in an explicit style that assumes how the internals of lookup sync work. For example, a test would do:
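(Hypothetical sketch of that explicit style; the rig helper and type names below are illustrative assumptions, not the actual test API.)

    // Every RPC round-trip is spelled out, so the test encodes how lookup sync
    // happens to be implemented today (illustrative helper names).
    let id = rig.trigger_unknown_parent_block(peer_id, block.clone());
    rig.expect_block_parent_request(parent_root);
    rig.parent_lookup_block_response(id, peer_id, Some(parent_block));
    rig.expect_blob_parent_request(parent_root);
    rig.parent_lookup_blob_response(id, peer_id, parent_blobs);
    rig.expect_block_process();
    rig.parent_block_processed(block_root, ProcessingStatus::Imported);
    rig.expect_no_active_lookups();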
This is unnecessarily verbose, and it requires a complete re-write when something changes in the internals of lookup sync (this has happened a few times, mostly for Deneb and Fulu).
What we really want to assert is:
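(Again a hypothetical sketch of the outcome-oriented style; helper names are assumptions, except expect_no_penalty_for, which appears in the quoted tests above.)

    // Outcome-oriented: trigger the lookup, let the rig serve whatever requests
    // sync decides to make, then check only the end state we care about.
    rig.trigger_unknown_parent_block(peer_id, block.clone());
    rig.simulate(TestConfig::default()); // serve requests until sync goes idle
    rig.expect_block_imported(block_root);
    rig.expect_no_penalty_for(peer_id);
    rig.expect_no_active_lookups();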
Proposed Changes
Keep all existing tests and add new cases, written in the new style described above. The logic to serve and respond to requests is in the function fn simulate (https://github.com/dapplion/lighthouse/blob/2288a3aeb11164bb1960dc803f41696c984c69ff/beacon_node/network/src/sync/tests/lookups.rs#L301). Via CompleteStrategy / TestConfig you can set, for example, "respond to BlocksByRoot requests with empty". Along the way I found a couple of bugs, which I documented on the diff.
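(A hypothetical sketch of how such a knob might be used in a new-style test; the field and variant names are assumptions, only fn simulate, CompleteStrategy and TestConfig are named in this PR.)

    // Illustrative only: answer BlocksByRoot requests with an empty response,
    // then assert whatever end state the scenario expects.
    let config = TestConfig {
        blocks_by_root: CompleteStrategy::Empty, // assumed field/variant names
        ..TestConfig::default()
    };
    rig.trigger_unknown_block(peer_id, block_root);
    rig.simulate(config);
    rig.expect_no_active_lookups();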
Review guide
Look at lighthouse/beacon_node/network/src/sync/tests/lookups.rs directly (no diff). Other changes are very minor and should not affect production paths.