
Run distributed key generation on-chain, storing Acks and Parts in a contract. #35

Closed
afck opened this issue Nov 19, 2018 · 36 comments

afck commented Nov 19, 2018

Whenever the set of validators changes (due to a new staking epoch or a faulty node being removed), the new validators need to generate their threshold keys, so Honey Badger can be restarted with the new configuration.

To do this, each of the new validators needs to create a SyncKeyGen instance with all of the new validators' public keys, and publish their Part message on the contract. After each block, they handle all key generation messages stored on the contract. After the first block at which SyncKeyGen::count_complete is at least N - f (i.e. at least ⅔ of the new validators' parts are complete), the new keys are generated. All old validators terminate their Honey Badger instances, and the new ones start theirs, beginning at the next block.

afck commented Nov 19, 2018

@varasev: For this we would need an additional smart contract—not too smart, though: It just needs to record the history of messages that were exchanged during the key generation process.

Validators—and only the ones that were selected for the next staking epoch—can commit one Part message and up to N (number of validators) Ack messages each. They should probably be stored as events containing the serialized messages. The contract could even be oblivious of the message types. If that's useful, we could make it a bit smarter, though: It could count the number of Acks referring to each Part, and know that key generation is complete when more than ⅔ of the Parts received Acks from at least ⅓ of the validators. After a block where key generation completes, a new Honey Badger instance with the new validator set will be started.

afck commented Nov 19, 2018

Actually: Can Parity record events in blocks even without a specific contract? Then we could just store the key generation messages directly.

varasev commented Nov 19, 2018

Please take a look at the draft of the contract: https://github.com/poanetwork/pos-contracts/blob/master/contracts/KeyGenHistory.sol

Is this approximately what we need?

afck commented Nov 19, 2018

Looks great!
I think the / 3 is wrong, though: let's multiply by 3 on the other side instead, so integer division doesn't round down.

afck commented Nov 19, 2018

Also, we'll probably need events for the messages: It should be possible for a restarting node to read the key generation messages from the blockchain and reconstruct its key shares.

varasev commented Nov 20, 2018

Ok, I've made corresponding changes: https://github.com/poanetwork/pos-contracts/blob/master/contracts/KeyGenHistory.sol

The PartWritten and AckWritten events were added. Are they correct?

varasev commented Nov 20, 2018

Please note the following:

  1. I propose to have a changeRequestCount in the ValidatorSet contract which will be incremented each time a new validator set is requested by Parity (due to a new staking epoch or a faulty node being removed): https://github.com/poanetwork/pos-contracts/blob/bf1a041f1773b205bfd3739a121e66adcd22fb0c/contracts/ReportingValidatorSet.sol#L98 and https://github.com/poanetwork/pos-contracts/blob/bf1a041f1773b205bfd3739a121e66adcd22fb0c/contracts/ReportingValidatorSet.sol#L115

    The stakingEpoch counter is only incremented when newValidatorSet() is called: https://github.com/poanetwork/pos-contracts/blob/05e277e61fb9503da169dcbee6555b93f24267ed/contracts/ReportingValidatorSet.sol#L99

  2. The KeyGenHistory.writePart function must be called before KeyGenHistory.writeAck is called: https://github.com/poanetwork/pos-contracts/blob/1b0111d1a66a7c8a3471ea67e2b421f7a0861c0d/contracts/KeyGenHistory.sol#L58

  3. The writePart function can only be called once by a given validator within the current changeRequestCount: https://github.com/poanetwork/pos-contracts/blob/1b0111d1a66a7c8a3471ea67e2b421f7a0861c0d/contracts/KeyGenHistory.sol#L44

  4. The writeAck function can only be called once for the same ack message by a given validator within the current changeRequestCount: https://github.com/poanetwork/pos-contracts/blob/1b0111d1a66a7c8a3471ea67e2b421f7a0861c0d/contracts/KeyGenHistory.sol#L59

afck commented Nov 20, 2018

Thanks, that looks great, and these three requirements make perfect sense. 👍

I assume changeRequestCount is a counter for validator changes (banning faulty nodes) during a staking epoch, and we'd reset it to zero in each new staking epoch?

varasev commented Nov 20, 2018

I assume changeRequestCount is a counter for validator changes (banning faulty nodes) during a staking epoch, and we'd reset it to zero in each new staking epoch?

I propose to simply increment changeRequestCount each time the validator set changes (independently of the stakingEpoch counter) and not to reset it. But if that's necessary, we could reset it as you suggested.

afck commented Nov 20, 2018

You're right, always incrementing is simpler. 👍

afck commented Nov 20, 2018

I think there's one more thing we'll have to store on the blockchain (probably in ReportingValidatorSet?), namely the validators'/candidates' (serialized) threshold_crypto::PublicKeys. These are needed to initialize key generation, and also to start Honey Badger.

afck commented Nov 20, 2018

(On the other hand, it would also be feasible to exchange these via Whisper, so they don't necessarily need to go into the contract.)

varasev commented Nov 20, 2018

We could add the following function for that:

mapping(address => bytes) public publicKey;

function savePublicKey(address _validator, bytes _key) public onlySystem {
    publicKey[_validator] = _key;
}

or (for batch saving)

function savePublicKeys(address[] _validators, bytes[] _keys) public onlySystem {
    require(_validators.length == _keys.length);
    for (uint256 i = 0; i < _validators.length; i++) {
        publicKey[_validators[i]] = _keys[i];
    }
}

Is this correct?

Should there be a function to remove the key from the contract after it is no longer used? E.g.:

function removePublicKey(address _validator) public onlySystem {
    delete publicKey[_validator];
}

afck commented Nov 20, 2018

Yes, that would be perfect!
I don't think we'd batch-add public keys. The validators would rather publish theirs individually, probably as soon as they register? I also don't think we'll ever need to remove them. Possibly update, but for that, savePublicKey would do.

varasev commented Nov 20, 2018

Ok, the savePublicKey function will be called by SYSTEM_ADDRESS, right?

afck commented Nov 20, 2018

No, I think it would be called by the validator themselves.
Ideally they should include their public key as soon as they register with the contract as validators in the first place.

varasev commented Nov 20, 2018

Ok, the savePublicKey function will be called by the validator's (observer's) Parity node, and msg.sender will be the validator's address. If so, the savePublicKey function should look like the following:

function savePublicKey(bytes _key) public {
    require(doesPoolExist(msg.sender));
    publicKey[msg.sender] = _key;
}

I.e., savePublicKey should only be callable by an existing observer.

The second possible variant is

function savePublicKey(bytes _key) public {
    require(isValidator(msg.sender));
    publicKey[msg.sender] = _key;
}

I.e., savePublicKey will succeed only if msg.sender is a validator from the new validator set, which is true after the newValidatorSet() function is called or after the InitiateChange event is emitted (both cases assume building the new set of validators).

afck commented Nov 21, 2018

Sorry, I'm still confused about the notions "validator", "observer", "staker" and "pool". (Please also take a look at poanetwork/hbbft#347 — let's make sure we find a consistent naming scheme for all layers of the stack.)
I think as soon as a node joins the network and registers itself with the contract as a potential validator (i.e. it joins the pool?), it should also store its public key in the contract. There's no reason to delay its own (local, non-threshold) key generation until it becomes selected into a validator set, so ideally it wouldn't be possible to join the pool without also specifying one's public key.

(To clarify, this is a full public key, to which the node has the full secret key. These are different from the key shares for threshold encryption that are created in the distributed key generation process.)

varasev commented Nov 21, 2018

Thanks for the link! In the terminology of the current ReportingValidatorSet contract:

  • observer is an address which has received at least one stake (an observer may become a validator if it is chosen randomly at the beginning of a new staking epoch);
  • pool is the same as observer (it's another name for it);
  • staker is an address which places a stake on some observer (or on a few observers). A staker puts their stake into the observer's pool (the pool of stakes for that observer).

Should I rename observer to something else?

afck commented Nov 21, 2018

Thanks for the clarification! I wouldn't rename it just yet, but I'll quote your comment on the naming discussion thread.

With the notions as they are right now, I think that an observer should always publish their public key from the beginning. There shouldn't ever be an observer in the contract without a (BLS/threshold_crypto) public key.

afck commented Nov 21, 2018

I.e. I imagine the public key should be a field in ObserverState (but I only just started reading the contract in detail, so I might misunderstand the meaning).

varasev commented Nov 21, 2018

I imagine the public key should be a field in ObserverState

Good idea, I'll think about that.

I've added the checking of public key existence when an address tries to become an observer: poanetwork/posdao-contracts@935cb03

Will the public key have an exact length in bytes?

Can an address change its public key while it is an observer? E.g., it will call savePublicKey again: https://github.com/poanetwork/pos-contracts/blob/935cb03dee1158b7d491a1f91cec0172fd67e5ac/contracts/ReportingValidatorSet.sol#L112

afck commented Nov 21, 2018

Will the public key have an exact length in bytes?

Yes… I was going to say "48 bytes", but instead I opened this issue. 😩 (Edit: We'll definitely fix that, of course, so it's going to be 48 bytes.)

Can an address change its public key while it is an observer?

We don't have concrete plans for that yet, but I guess in general it's a good idea to allow them to periodically generate a new key?

varasev commented Nov 21, 2018

I guess in general it's a good idea to allow them to periodically generate a new key?

Yes, I think so too.

varasev commented Nov 21, 2018

We'll definitely fix that, of course, so it's going to be 48 bytes

Got it.

varasev commented Nov 21, 2018

I imagine the public key should be a field in ObserverState

The publicKey moved to the ObserverState struct: poanetwork/posdao-contracts@e125a7f

@c0gent c0gent self-assigned this Nov 29, 2018
@DemiMarie

@c0gent does this block my work on #21?

c0gent commented Dec 1, 2018

@DemiMarie Please see my comments in Slack.

afck commented Dec 5, 2018

There's an InitiateChange event now: #33 (comment)
A node that finds itself in newSet must start key generation and send its Part.

afck commented Dec 5, 2018

@varasev: I think this line isn't right: https://github.com/poanetwork/pos-contracts/blob/98ec9196ceb911891f78bd200c8cb953f0d0c146/contracts/KeyGenHistory.sol#L68
And now that I think of it, I'm not sure if the whole counting and detecting completeness is even feasible within the contract. (It's not strictly necessary either; we can do that in the Rust code.)

Each Ack acknowledges a Part, usually from a different sender. So: Each of us calls writePart, and when I see your PartWritten events, I call writeAck with an Ack for each of them (including my own).
Unfortunately it's hard to tell which Part any given Ack belongs to: its first field is the index of the Part's sender, in a list where all the participating nodes are sorted by their node ID.

It's probably not worth adding a lot of code for this: For our purposes it's enough if the contract just records the events, without any knowledge about the messages' meanings.
So isKeyGenComplete and most of the checks can probably be removed. The only thing the contract should enforce is that each of the N participants writes at most 1 Part and N Acks.

Sorry for the confusion! I should have given more details in the issue in the first place.

varasev commented Dec 5, 2018

afck commented Dec 6, 2018

That looks good, thanks! 👍

varasev commented Dec 7, 2018

There's an InitiateChange event now: #33 (comment)
A node that finds itself in newSet must start key generation and send its Part.

We agreed that we'll use the getValidators() getter instead of catching the event. Parity could just read the current validator set with getValidators() every block, and if it discovers that the set has changed in the contract, it triggers the validator-set change in the engine.

@DemiMarie

This is fixed, I believe.

@DemiMarie

Is this still relevant?

@DemiMarie DemiMarie reopened this Jan 12, 2019
afck commented Jun 25, 2019

Closed in favor of #157.

@afck afck closed this as completed Jun 25, 2019