Impact
An internal modification to the way struct `PeerState` is serialized to JSON introduced a deadlock when the new function `MarshalJSON` is called. This function can be called from two places:
1. Via logs
   - Setting the `consensus` logging module to "debug" level (should not happen in production), and
   - Setting the log output format to JSON
2. Via RPC `dump_consensus_state`
Case 1 above, which should not be hit in production, will eventually hit the deadlock in most goroutines, effectively halting the node.
In case 2, only the data structures related to the first peer will be deadlocked, together with the thread(s) dealing with the RPC request(s). This means that only one of the channels of communication to the node's peers will be blocked. Eventually the peer will time out and be excluded from the list (typically after 2 minutes). The goroutines involved in the deadlock will not be garbage collected, but they will not interfere with the system after the peer is excluded.
The theoretical worst case for case 2 is a network with only two validator nodes. In this case, each of the nodes only has one `PeerState` struct. If `dump_consensus_state` is called on either node (or both), the chain will halt until the peer connections time out, after which the nodes will reconnect (with different `PeerState` structs) and the chain will progress again. Then, the same process can be repeated.
As the number of nodes in a network increases, and thus the number of peer structs each node maintains, the possibility of reproducing the perturbation visible with 2 nodes decreases. Only the first `PeerState` struct will deadlock, and not the others (RPC `dump_consensus_state` accesses them in a `for` loop, so the deadlock at the first iteration causes the rest of the iterations of that loop to never be reached).
This regression was introduced in versions `v0.34.28` and `v0.37.1`, and will be fixed in `v0.34.29` and `v0.37.2`.
Patches
The PR containing the fix is here, and the corresponding issue is here
Workarounds
For case 1 (hitting the deadlock via logs):
- either don't set the log output format to "json"; leave it at "plain",
- or don't set the `consensus` logging module to "debug"; leave it at "info" or higher.
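In config-file terms, this workaround amounts to keeping the defaults (a sketch assuming the standard `config.toml` keys; check your node's actual configuration file):

```toml
# config.toml — keep both settings at their defaults to avoid case 1.
log_level = "info"     # do not raise the consensus module to "debug"
log_format = "plain"   # do not switch the output format to "json"
```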
For case 2 (hitting the deadlock via RPC `dump_consensus_state`):
- do not expose the `dump_consensus_state` RPC endpoint to the public internet (e.g., via rules in your nginx setup)
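If you front the RPC with nginx, a minimal rule for this might look like the following (a hypothetical sketch, assuming the RPC listens locally on the default port 26657 and other endpoints are proxied):

```nginx
# Block the vulnerable endpoint from the public internet,
# while still proxying the rest of the RPC surface.
location /dump_consensus_state {
    deny all;
}
location / {
    proxy_pass http://127.0.0.1:26657;
}
```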
References
- Issue that introduced the deadlock
- Issue reporting the bug via logs