Both the SDK and Tendermint are planning to release new versions of their software. Especially if we plan to adopt them for mainnet launch, we should be thinking ahead about how to integrate these releases with our current forks. Estimating the amount of work required and the associated risk should give us a good idea of whether it makes sense to make such a change or to delay it to a later point.
Most notable in the new releases is the first phase of ABCI++. One of the tasks, then, would be to consolidate the divergence in the PrepareProposal and ProcessProposal methods between our fork and Tendermint/SDK. If we are able to do so, this would allow us to remove the celestiaorg/cosmos-sdk fork and reduce our current maintenance burden. Given the extensive changes to celestiaorg/celestia-core (not to mention further planned modifications, e.g. the push/pull mempool and compact blocks), it seems likely that we will keep maintaining that fork.
Examining the methods more closely, we have the following differences:
Tendermint v0.37 PrepareProposal + ProcessProposal
```go
type RequestPrepareProposal struct {
	// the modified transactions cannot exceed this size.
	MaxTxBytes int64 `protobuf:"varint,1,opt,name=max_tx_bytes,json=maxTxBytes,proto3" json:"max_tx_bytes,omitempty"`
	// txs is an array of transactions that will be included in a block,
	// sent to the app for possible modifications.
	Txs                [][]byte           `protobuf:"bytes,2,rep,name=txs,proto3" json:"txs,omitempty"`
	LocalLastCommit    ExtendedCommitInfo `protobuf:"bytes,3,opt,name=local_last_commit,json=localLastCommit,proto3" json:"local_last_commit"`
	Misbehavior        []Misbehavior      `protobuf:"bytes,4,rep,name=misbehavior,proto3" json:"misbehavior"`
	Height             int64              `protobuf:"varint,5,opt,name=height,proto3" json:"height,omitempty"`
	Time               time.Time          `protobuf:"bytes,6,opt,name=time,proto3,stdtime" json:"time"`
	NextValidatorsHash []byte             `protobuf:"bytes,7,opt,name=next_validators_hash,json=nextValidatorsHash,proto3" json:"next_validators_hash,omitempty"`
	// address of the public key of the validator proposing the block.
	ProposerAddress []byte `protobuf:"bytes,8,opt,name=proposer_address,json=proposerAddress,proto3" json:"proposer_address,omitempty"`
}

type ResponsePrepareProposal struct {
	Txs [][]byte `protobuf:"bytes,1,rep,name=txs,proto3" json:"txs,omitempty"`
}

type RequestProcessProposal struct {
	Txs                [][]byte      `protobuf:"bytes,1,rep,name=txs,proto3" json:"txs,omitempty"`
	ProposedLastCommit CommitInfo    `protobuf:"bytes,2,opt,name=proposed_last_commit,json=proposedLastCommit,proto3" json:"proposed_last_commit"`
	Misbehavior        []Misbehavior `protobuf:"bytes,3,rep,name=misbehavior,proto3" json:"misbehavior"`
	// hash is the merkle root hash of the fields of the proposed block.
	Hash               []byte    `protobuf:"bytes,4,opt,name=hash,proto3" json:"hash,omitempty"`
	Height             int64     `protobuf:"varint,5,opt,name=height,proto3" json:"height,omitempty"`
	Time               time.Time `protobuf:"bytes,6,opt,name=time,proto3,stdtime" json:"time"`
	NextValidatorsHash []byte    `protobuf:"bytes,7,opt,name=next_validators_hash,json=nextValidatorsHash,proto3" json:"next_validators_hash,omitempty"`
	// address of the public key of the original proposer of the block.
	ProposerAddress []byte `protobuf:"bytes,8,opt,name=proposer_address,json=proposerAddress,proto3" json:"proposer_address,omitempty"`
}

type ResponseProcessProposal struct {
	Status ResponseProcessProposal_ProposalStatus `protobuf:"varint,1,opt,name=status,proto3,enum=tendermint.abci.ResponseProcessProposal_ProposalStatus" json:"status,omitempty"`
}
```
Celestia Core PrepareProposal + ProcessProposal
```go
type RequestPrepareProposal struct {
	// block_data is an array of transactions that will be included in a block,
	// sent to the app for possible modifications.
	// applications can not exceed the size of the data passed to it.
	BlockData *types1.Data `protobuf:"bytes,1,opt,name=block_data,json=blockData,proto3" json:"block_data,omitempty"`
	// If an application decides to populate block_data with extra information, they can not exceed this value.
	BlockDataSize int64 `protobuf:"varint,2,opt,name=block_data_size,json=blockDataSize,proto3" json:"block_data_size,omitempty"`
}

type ResponsePrepareProposal struct {
	BlockData *types1.Data `protobuf:"bytes,1,opt,name=block_data,json=blockData,proto3" json:"block_data,omitempty"`
}

type RequestProcessProposal struct {
	Header    types1.Header `protobuf:"bytes,1,opt,name=header,proto3" json:"header"`
	BlockData *types1.Data  `protobuf:"bytes,2,opt,name=block_data,json=blockData,proto3" json:"block_data,omitempty"`
}

type ResponseProcessProposal struct {
	Result   ResponseProcessProposal_Result `protobuf:"varint,1,opt,name=result,proto3,enum=tendermint.abci.ResponseProcessProposal_Result" json:"result,omitempty"`
	Evidence [][]byte                       `protobuf:"bytes,2,rep,name=evidence,proto3" json:"evidence,omitempty"`
}

type Data struct {
	// Txs that will be applied by state @ block.Height+1.
	// NOTE: not all txs here are valid. We're just agreeing on the order first.
	// This means that block.AppHash does not include these txs.
	Txs [][]byte `protobuf:"bytes,1,rep,name=txs,proto3" json:"txs,omitempty"`
	// field number 2 is reserved for intermediate state roots
	// field number 3 was previously used for evidence
	Blobs      []Blob `protobuf:"bytes,4,rep,name=blobs,proto3" json:"blobs"`
	SquareSize uint64 `protobuf:"varint,5,opt,name=square_size,json=squareSize,proto3" json:"square_size,omitempty"`
	Hash       []byte `protobuf:"bytes,6,opt,name=hash,proto3" json:"hash,omitempty"`
}

type Blob struct {
	NamespaceId  []byte `protobuf:"bytes,1,opt,name=namespace_id,json=namespaceId,proto3" json:"namespace_id,omitempty"`
	Data         []byte `protobuf:"bytes,2,opt,name=data,proto3" json:"data,omitempty"`
	ShareVersion uint32 `protobuf:"varint,3,opt,name=share_version,json=shareVersion,proto3" json:"share_version,omitempty"`
}
```
Since Celestia only passes the raw txs in PrepareProposal, the RequestPrepareProposal objects overlap and so aren't much of a concern. This is not the case for the ResponsePrepareProposal objects. The celestia-app takes the transactions, computes the extended square (working out the square size and data availability hash), and separates the blobs from the state-modifying transactions.
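For context, here is a rough sketch of what that flow looks like today; the helpers (Split, BuildExtendedSquare, NewDataAvailabilityHeader) and exact signatures are illustrative rather than the real celestia-app code:

```go
// Rough sketch of celestia-app's current PrepareProposal flow.
// Function and package names are illustrative, not the exact implementation.
func (app *App) PrepareProposal(req abci.RequestPrepareProposal) abci.ResponsePrepareProposal {
	// Separate the state-modifying transactions from their blobs.
	txs, blobs := Split(req.BlockData.Txs)

	// Lay everything out into shares and erasure-code it into the extended square.
	eds, squareSize := BuildExtendedSquare(txs, blobs)

	// The data availability hash commits to the row and column roots of the square.
	dah := NewDataAvailabilityHeader(eds)

	return abci.ResponsePrepareProposal{
		BlockData: &core.Data{
			Txs:        txs,
			Blobs:      blobs,
			SquareSize: squareSize,
			Hash:       dah.Hash(),
		},
	}
}
```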
The crux of the problem, then, is to reconcile the [][]byte response object in Tendermint v0.37 with the modified Data struct in Celestia Core. I have a few ideas in mind that I'd like to discuss. As a reminder, if we are using a forked version of Tendermint, we have full control over which transactions are actually delivered to the application. There are also a few other angles I'm trying to consider at the same time:
We may want to instead gossip the tags (normally the hash) of the transactions in a block and rely on the mempool to fetch the full data (i.e. compact blocks)
We want to keep PFBs and Blobs separate for the sake of indexers and transaction tracing.
We want users to be unhindered in how they construct transactions, e.g. multiple PFBs in a transaction alongside other sdk.Msgs, and multiple Blobs.
We may want a block sync protocol that does not include blobs (only state modifying transactions) as this will cut down the necessary bandwidth to sync.
Possible Solutions:
The first transaction is always of type Metadata and includes the data availability hash, square size and potentially other relevant data. Celestia Core will parse the first transaction returned in PrepareProposal and populate the Data struct accordingly.
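As a rough illustration of this idea (the Metadata type, its fields, and the UnmarshalMetadata helper below are hypothetical rather than existing definitions):

```go
// Hypothetical shape of the metadata carried in the first transaction.
// Names are illustrative; nothing like this exists yet.
type Metadata struct {
	SquareSize           uint64 // size of the original data square
	DataAvailabilityHash []byte // root that ends up in the header
}

// Sketch: Celestia Core parses the first tx returned by PrepareProposal
// and uses it to populate the Data struct for the block.
func dataFromPrepareProposal(txs [][]byte) (types.Data, error) {
	md, err := UnmarshalMetadata(txs[0]) // hypothetical decoder
	if err != nil {
		return types.Data{}, err
	}
	return types.Data{
		Txs:        txs[1:],
		SquareSize: md.SquareSize,
		Hash:       md.DataAvailabilityHash,
	}, nil
}
```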
Option 1
Transactions and blobs are merged together again as an array of either sdk.Tx or BlobTx (represented as [][]byte). This makes compact blocks somewhat easy because we can simply use hashes instead of the transaction bytes and fetch both regular transactions and the transactions that contain Blobs. We can then split the blobs out from the transactions when it comes to building the EDS, and we can parse the BlobTxs and only pass the sdk.Tx within them when running DeliverTx (so only state-affecting transactions are executed).
```go
// in the application's PrepareProposal and ProcessProposal, before building the square
txs, blobs := Split(req.Txs)
square.Split(core.Data{
	Txs:        txs,
	Blobs:      blobs,
	SquareSize: metadata.SquareSize,
})
```
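The delivery side of this option could then look roughly as follows; BlobTx (a wrapper holding an sdk.Tx plus its blobs) and UnmarshalBlobTx are assumptions rather than existing types:

```go
// Sketch: when applying the block, skip the metadata tx, unwrap any BlobTx,
// and execute only the embedded sdk.Tx; regular transactions go through as-is.
for _, rawTx := range block.Txs[1:] {
	if blobTx, ok := UnmarshalBlobTx(rawTx); ok { // hypothetical helper
		app.DeliverTx(blobTx.Tx) // only the state-affecting part is executed
	} else {
		app.DeliverTx(rawTx)
	}
}
```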
Option 2
We still merge the Txs and Blobs fields within the Data struct into a single [][]byte, but we use the aforementioned metadata transaction to record the index at which blobs start (and transactions end). This is similar to how it is currently done. PrepareProposal will still split the transactions out from the blobs. We will only deliver to the application the transactions ([]byte) that fall before the prescribed index.
```go
// in Tendermint's apply block
for _, tx := range block.Txs[1:metadata.Index] {
	app.DeliverTx(tx)
}
```
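On the Celestia Core side, the reconstruction might then look roughly like this (a sketch reusing the hypothetical Metadata type from above; parseBlobs and the index handling are also assumptions):

```go
// Sketch: rebuild the Data struct from the single merged slice returned by
// the application. index comes from the metadata tx and marks where blobs begin.
func dataFromMergedTxs(txs [][]byte, md Metadata, index int) types.Data {
	return types.Data{
		Txs:        txs[1:index],            // state-modifying transactions (after the metadata tx)
		Blobs:      parseBlobs(txs[index:]), // hypothetical helper decoding raw bytes into Blobs
		SquareSize: md.SquareSize,
		Hash:       md.DataAvailabilityHash,
	}
}
```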
Option 3
Update the versions but maintain the existing design, thus keeping the fork of the SDK. This could end up being the easiest of the options.
Other Areas of Note
Celestia's ResponseProcessProposal has an Evidence field that Tendermint's does not support, although this is not currently used AFAICS.
Celestia's RequestProcessProposal includes the entire header, whereas only certain fields are passed in Tendermint's RequestProcessProposal. Critically, this means the application can't verify the data availability root. However, the root can be passed in as the first transaction (i.e. the metadata), and Celestia Core can check that the hash in the first tx matches the one in the header.
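A minimal sketch of such a check on Celestia Core's side, assuming the hypothetical Metadata type from above and that the header's DataHash carries the data availability root:

```go
// Sketch: before accepting a proposal, ensure the DA root the application
// committed to in the metadata tx matches the one in the proposed header.
func verifyMetadataHash(header types.Header, txs [][]byte) error {
	md, err := UnmarshalMetadata(txs[0]) // hypothetical decoder from the sketch above
	if err != nil {
		return err
	}
	if !bytes.Equal(md.DataAvailabilityHash, header.DataHash) {
		return fmt.Errorf("data availability hash mismatch: metadata %X, header %X",
			md.DataAvailabilityHash, header.DataHash)
	}
	return nil
}
```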