Remove a random subset of validators in net_dynamic_hb #374
Hello! I'm going to fix it and I have a few questions.
@RicoGit, thanks for taking this issue! You are right about breaking the correctness invariant. I think that if we remove a subset of validators from a correct network and it becomes incorrect as a result, then we can skip checking subsequent assertions, because they may not hold with more than a third of validators being malicious. However, for this test I would suggest always making sure that the network remains correct after removal of a subset of nodes. That is, if there is no subset that you can remove without making the network incorrect, there is no reason to try re-adding nodes, because the resulting incorrect network may in general fail to re-add the nodes in that subset.
Let me clarify. The test has to remove more than one validator, but the cluster should remain correct. For example, if we have N nodes in total and f malicious nodes, the test should remove at most N - (3f + 1) of them. But in the case N = 4, f = 1, the test would remove 0 nodes, and then it would check nothing.
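The arithmetic above can be sketched as follows; `max_removable` is a hypothetical helper for illustration, not part of the hbbft test suite, assuming the network stays correct as long as it keeps at least 3f + 1 nodes:

```rust
/// Hypothetical helper: the largest number of validators that can be
/// removed from a network of `n` nodes with `f` faulty ones while the
/// remaining network still satisfies the BFT bound `n' >= 3f + 1`.
fn max_removable(n: usize, f: usize) -> usize {
    n.saturating_sub(3 * f + 1)
}

fn main() {
    // With N = 4 and f = 1 nothing can be removed, so the test
    // would check nothing, as noted above.
    assert_eq!(max_removable(4, 1), 0);
    // With N = 7 and f = 1, up to 3 validators can be removed.
    assert_eq!(max_removable(7, 1), 3);
}
```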
These are all good points. On one hand, we want to remove nodes only if the removal results in a network that preserves the correctness invariant, or the assertions might fail due to the network having become incorrect. On the other hand, we still need to test cases with maximal assumptions on the number of correct and faulty nodes, such as …
I think we should actually test the most general case and replace one set of validators with a completely different set of validators, which may or may not even overlap with the first one. Of course, both sets need to satisfy the condition that fewer than a third of their members are malicious. But note that the correctness of the algorithm never depends on the intersection of the two sets. During the transition period, the old set is still fully performing the consensus algorithm, while the new set is running the distributed key generation (DKG). After DKG has completed, the new set takes over the consensus algorithm.
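The point about the two sets being validated independently can be sketched like this; `set_is_correct` is an illustrative predicate, not an hbbft API, assuming a set tolerates its faults iff strictly fewer than a third of its members are malicious:

```rust
/// Hypothetical check: a validator set of `n` nodes tolerates `faulty`
/// malicious members iff strictly fewer than a third are malicious,
/// i.e. `n > 3 * faulty`.
fn set_is_correct(n: usize, faulty: usize) -> bool {
    n > 3 * faulty
}

fn main() {
    // Old and new sets are checked independently; whether or how much
    // they overlap is irrelevant to correctness, as noted above.
    let (old_n, old_f) = (4, 1);
    let (new_n, new_f) = (7, 2);
    assert!(set_is_correct(old_n, old_f));
    assert!(set_is_correct(new_n, new_f));
    // Three nodes cannot tolerate even one fault.
    assert!(!set_is_correct(3, 1));
}
```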
How about making that a new test rather than modifying …
I agree, let's …
Makes sense. 👍
Instead of a single validator (a.k.a. pivot node), the test `net_dynamic_hb` should pick a subset of validators at random for removal. This will better reflect the interface to `DynamicHoneyBadger`, which instead of changing one node in fact changes all nodes by substituting one set of validators for another.
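A minimal sketch of picking such a random subset, bounded so the remaining network stays correct. The names and the tiny generator are illustrative only (stdlib-only stand-in for a real RNG), not hbbft API:

```rust
/// Tiny deterministic LCG standing in for a real RNG (illustrative only).
struct Lcg(u64);
impl Lcg {
    fn next(&mut self) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0
    }
}

/// Pick a random subset of validator ids to remove, never more than the
/// network can lose while staying correct, i.e. at most `n - (3f + 1)`.
fn pick_removal_subset(ids: &[usize], f: usize, rng: &mut Lcg) -> Vec<usize> {
    let max = ids.len().saturating_sub(3 * f + 1);
    let mut pool: Vec<usize> = ids.to_vec();
    // Partial Fisher-Yates shuffle: move `max` random picks to the front.
    for i in 0..max {
        let j = i + (rng.next() as usize) % (pool.len() - i);
        pool.swap(i, j);
    }
    pool.truncate(max);
    pool
}

fn main() {
    let ids: Vec<usize> = (0..7).collect();
    let mut rng = Lcg(42);
    let removed = pick_removal_subset(&ids, 1, &mut rng);
    // With N = 7 and f = 1, at most 3 validators may be removed.
    assert!(removed.len() <= 3);
}
```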