Add Packet and Acknowledgement spec to ICS4 along with Commitments #1149
# Packet Specification

## Packet V2

The IBC packet sends application data from a source chain to a destination chain with a timeout that specifies when the packet is no longer valid. The packet will be committed to by the source chain as specified in the ICS-24 specification. The receiver chain will then verify the packet commitment under the ICS-24 specified packet commitment path. If the proof succeeds, the IBC handler delivers each payload's application data to the relevant application.
```typescript
interface Packet {
  sourceIdentifier: bytes,
  destIdentifier: bytes,
  sequence: uint64,
  timeout: uint64,
  data: [Payload]
}

interface Payload {
  sourcePort: bytes,
  destPort: bytes,
  version: string,
  encoding: Encoding,
  appData: bytes,
}

enum Encoding {
  NO_ENCODING_SPECIFIED,
  PROTO_3,
  JSON,
  RLP,
  BCS,
}
```

Review threads on this block:

On `timeout: uint64`:

Author: Interpret this as seconds on receiver block time. Reject timeouts that are too far in the future in implementations (ibc-go) to ensure nanoseconds don't accidentally get interpreted as seconds.

Reviewer: Would it make sense to make the timeout field opaque bytes? Ultimately, the semantics of the timeout is tightly coupled with the IBC client, so it would be better to allow the IBC client to customize it.

Author: It could make sense. We are trying to strike a balance between flexibility and just choosing a sane default. Our thinking was that timestamps are way easier to understand and interpret for users, and most chains can implement either through BFT time directly in their protocol or through a time oracle application.

On `data: [Payload]`:

Reviewer: Does this mean multiple payloads can be included in a single packet?

Author: It is different behaviour. The point is to enable new features for IBC applications. A key usecase is transfer + ICA. We want to eventually allow either partial failure or full atomic packets. However, for launch we will only allow a single payload in this list and enable multiple in a future release.

Reviewer: Hmm, makes sense. Now I get a better understanding behind the design rationale. Though I wonder what is supposed to be the API for enabling multi-payload packets? For the example case where we want to include both transfer and ICA payloads, would it be triggered by a multi-message transaction that sends multiple messages to transfer and ICA separately? Does this mean that the IBC stack needs to somehow wrap around the transaction engine, and build only one IBC packet per src/dst pair for the entire transaction? Or do we need separate message-level APIs for constructing multi-payload packets?

Comment on lines +33 to +34:

Contributor: Doesn't this open the door to a DoS attack through payload flooding? Shouldn't hosts consider a cap for that, depending on their characteristics?

Reviewer: I think we should think about DoS prevention at the full packet level, rather than the individual fields. An IBC packet should be encoded into a reasonable size when submitted as a transaction. An application should not assume that an IBC packet can always be successfully relayed to a counterparty chain. Aside from the packet size, there are many ways the packet may fail, such as when there is malformed packet data, or when errors occur while an application is processing the packet. Aside from that, a relayer should set its own limit on how large an IBC packet it is willing to relay, or how many retries it should make, so that it does not indefinitely try to relay a malformed packet. In general, a chain may contain arbitrary state machine logic that allows it to craft bad IBC packets, so neither the relayer nor the counterparty chain should trust that the packet is always well-formed.

Author: We are only allowing a single payload for now in ibc-go. Implementations should probably have a total packet size cap. And indeed relayers should also do this.

On `sourcePort` / `destPort`:

Reviewer: What is the semantics of allowing different combinations of source and destination ports? Should it require a definite binding between the source port and destination port, or should each combination of source/destination port be identified uniquely?

Author: Really all that matters is the portID on your side, generally speaking. In case you want a strict mapping from your portID to the counterparty portID, that must be enforced in the application.

Reviewer: I am still a bit confused on how the pairing would work, if there is no handshake and any application can send a packet to any other application. It seems like we would need to take both the source and destination identifier pair as unique identification. As an example, if we have two IBC transfer packets… I think it also probably makes sense to call this an application identifier instead of calling it a port. My understanding is that one application identifier is intended to be used for handling all packets of the same use case, so for example there should be just one identifier for transfer.

Contributor: I think I also prefer referring to this as an application identifier.

On `version: string`:

Reviewer: I'm not sure what the use is for the version field. Even if we keep it, it may make sense to keep it as raw bytes.

Author: It could, however this would require a significant change to the IBC apps to support it, which I think we would like to avoid.

Reviewer: How would this allow backward compatibility, if existing IBC apps assume that the version stays the same after the handshake? Actually, I'm a bit confused about how classic IBC apps can be initialized, if we remove the channel handshake?

The source and destination identifiers at the top level of the packet identify the two chains that are communicating. The source identifier **must** be unique on the source chain and is a pointer to the destination chain. The destination identifier **must** be a unique identifier on the destination chain and is a pointer to the source chain. The sequence is a monotonically incrementing nonce that uniquely identifies packets sent between the source and destination chain.
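
For illustration, a single-payload packet might look like the following sketch. All identifiers, ports, the version string, and the `transferPacketData` variable are placeholders invented for this example, and string literals stand in for the `bytes` fields of the pseudocode interface.

```typescript
// Illustrative single-payload packet; all values are placeholders.
const examplePacket: Packet = {
  sourceIdentifier: "src-id",    // unique on the source chain, points to the destination chain
  destIdentifier: "dst-id",      // unique on the destination chain, points to the source chain
  sequence: 42,                  // monotonically incrementing nonce for this chain pair
  timeout: 1735689600,           // UNIX seconds, evaluated against the destination chain's clock
  data: [{
    sourcePort: "transfer",
    destPort: "transfer",
    version: "ics20-2",          // placeholder app version
    encoding: Encoding.JSON,
    appData: transferPacketData, // opaque, pre-encoded application bytes (hypothetical variable)
  }],
}
```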

The timeout is the UNIX timestamp in seconds that must be passed on the **destination** chain before the packet is invalid and no longer capable of being received. Note that the timeout timestamp is assessed against the destination chain's clock, which may drift relative to the clocks of the sender chain or a third-party observer. If a packet is received on the destination chain after the timeout timestamp has passed relative to the destination chain's clock, the packet must be rejected so that it can be safely timed out and reverted by the sender chain.
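
A minimal sketch of that receive-side check, assuming a host function `currentTimestamp()` that returns the destination chain's block time in UNIX seconds:

```typescript
// Sketch: reject expired packets before any application processing.
// currentTimestamp() is an assumed host function (UNIX seconds on the destination chain).
func checkTimeoutOnRecv(packet: Packet) {
  // The packet may only be received while the destination clock is strictly before the timeout.
  abortTransactionUnless(currentTimestamp() < packet.timeout)
}
```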

Comment on lines +63 to +64:

Contributor: Isn't it restrictive to define the timeout as only a UNIX timestamp? Was there a specific reason we're no longer allowing timeout height?

Author: Unnecessary and unintuitive.

In version 2 of the IBC specification, implementations **MAY** support multiple application data within the same packet. This can be represented by a list of payloads. Implementations may choose to only support a single payload per packet, in which case they can just reject incoming packets sent with multiple payloads.
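
For an implementation that takes the single-payload option, the check is straightforward; a sketch:

```typescript
// Sketch: an implementation that only supports a single payload per packet
// rejects any packet whose payload list is not of length one.
func validateSinglePayload(packet: Packet) {
  abortTransactionUnless(packet.data.length === 1)
}
```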

Each payload will include its own `Encoding` and `AppVersion` that will be sent to the application to instruct it how to decode and interpret the opaque application data. The application must be able to support the provided `Encoding` and `AppVersion` in order to process the `AppData`. If the receiving application does not support the encoding or app version, then the application **must** return an error to IBC core. If the receiving application does support the provided encoding and app version, then it must decode the application data as specified by the `Encoding` enum and then process it as expected by the counterparty given the agreed-upon app version. Since the `Encoding` and `AppVersion` are now carried in each packet, they can be changed on a per-packet basis, and an application can simultaneously support many encodings and app versions from a counterparty. This is in stark contrast to IBC version 1, where the channel prenegotiated the channel version (which implicitly negotiates the encoding as well), making it very difficult to change the app version after the channel has been opened.
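
An illustrative sketch of the receive-side validation described above; the `app`, `supportsEncoding`, `supportsVersion`, `decode`, and `errorAcknowledgement` names are assumptions made for the example, not definitions from this spec:

```typescript
// Sketch: validate the per-payload encoding and app version before decoding
// the opaque appData. Helper names are hypothetical.
func onRecvPayload(payload: Payload): bytes {
  if (!app.supportsEncoding(payload.encoding) || !app.supportsVersion(payload.version)) {
    // Unsupported encoding or app version: return an error acknowledgement to IBC core.
    return errorAcknowledgement("unsupported encoding or app version")
  }
  // Decode appData as instructed by the Encoding enum, then process it under
  // the semantics of the agreed-upon app version.
  appData = decode(payload.encoding, payload.appData)
  return app.process(payload.version, appData)
}
```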

The packet must be committed to as specified in the ICS24 specification. In order to do this, we must first commit the packet data and timeout. The timeout is encoded in LittleEndian format. The packet data, which is a list of payloads, is first encoded in the canonical CBOR encoding format before being passed to the ICS24 packet commitment function. This ensures that a given packet will always produce exactly the same commitment in all compliant implementations, and that two different packets will never produce the same commitment in a compliant implementation.
```typescript
func commitV2Packet(packet: Packet) {
  timeoutBytes = LittleEndian(packet.timeout)
  // TODO: Decide on canonical encoding scheme
  // Suggested CBOR
  appBytes = cbor.encoding(packet.data)
  ics24.commitPacket(packet.destIdentifier, timeoutBytes, appBytes)
}
```

Review threads on this block:

Reviewer (on the timeout encoding): If the implementation is using CBOR, why not also use that to encode the timeout field?

Reviewer (on the suggested CBOR encoding): I think we should note that the use of CBOR encoding is specific to Cosmos SDK chains with the ibc-go implementation. During verification, we should pass the packet as structured data to the IBC client, so that it can choose the appropriate way to encode and verify the packet commitment on the counterparty chain.

Author: No, unfortunately the commitment has to be exactly the same across all chains, so we must all agree at least on the commitment paths and the commitment values.
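
For context, a sketch of how the destination chain might recompute and verify this commitment on receive. The `packetCommitmentValue`, `packetCommitmentPath`, and `verifyMembership` calls are assumed stand-ins for the ICS-24 and ICS-02 machinery rather than definitions made here:

```typescript
// Sketch: recompute the committed bytes from the relayed packet and verify
// them against a proof of the sender chain's store. Helper names are assumed.
func verifyPacketCommitment(client: Client, packet: Packet, proof: CommitmentProof, proofHeight: Height) {
  // Recompute exactly the bytes the sender committed to.
  timeoutBytes = LittleEndian(packet.timeout)
  appBytes = cbor.encoding(packet.data)
  commitment = ics24.packetCommitmentValue(packet.destIdentifier, timeoutBytes, appBytes)
  path = ics24.packetCommitmentPath(packet.sourceIdentifier, packet.sequence)
  // The proof must show that `commitment` is stored under `path` on the sender chain.
  abortTransactionUnless(client.verifyMembership(proofHeight, proof, path, commitment))
}
```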

## Acknowledgement V2

Reviewer: I think we should make the format of failure acknowledgements explicit. Regardless of which part of the IBC stack failed, the error should be formatted in a way that is standard for relayers and middlewares to understand. This would help in diagnostics for failures.

The acknowledgement in the version 2 specification is also modified to support multiple payloads in the packet, each of which goes to a separate application that can write its own acknowledgement. Each app acknowledgement will be contained within the final packet acknowledgement in the same order that the payloads were received in the original packet. Thus if a packet contains payloads for modules `A` and `B` in that order, the receiver will write an acknowledgement with the app acknowledgements of `A` and `B` in the same order.
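
A sketch of the receiving side assembling the aggregate acknowledgement in payload order; `router.lookup` and `onRecvPacket` are hypothetical names for the routing and application-callback machinery:

```typescript
// Sketch: collect one app acknowledgement per payload, preserving payload order.
func buildAcknowledgement(packet: Packet): Acknowledgement {
  appAcks = []
  for (payload of packet.data) {
    app = router.lookup(payload.destPort)
    appAcks.push(app.onRecvPacket(payload))
  }
  return {appAcknowledgement: appAcks}
}
```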

The acknowledgement, which is itself a list of app acknowledgement bytes, must first be encoded with the canonical CBOR encoding format. This ensures that all compliant implementations reach the same acknowledgement commitment and that two different acknowledgements never create the same commitment.

An application may not need to return an acknowledgement. In this case, it may return the sentinel acknowledgement value `SENTINEL_ACKNOWLEDGMENT`, which is the single byte in the byte array: `bytes(0x01)`. When the sentinel value is used, the IBC `acknowledgePacket` handler will still perform the core IBC acknowledgement logic but will not call the application's acknowledgePacket callback.
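
A sketch of that branch of the sender-side `acknowledgePacket` handling, with illustrative helper names:

```typescript
// The sentinel acknowledgement is the single byte 0x01.
SENTINEL_ACKNOWLEDGMENT = bytes(0x01)

// Sketch: core IBC acknowledgement logic still runs, but the application
// callback is skipped when the app acknowledgement equals the sentinel value.
func handleAppAcknowledgement(app: IBCApp, payload: Payload, appAck: bytes) {
  if (appAck === SENTINEL_ACKNOWLEDGMENT) {
    return // no application callback is invoked for the sentinel acknowledgement
  }
  app.onAcknowledgePacket(payload, appAck)
}
```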
```typescript
interface Acknowledgement {
  appAcknowledgement: [bytes]
}
```

Review threads on this block:

Contributor: EDIT: I noticed soareschen mentioned a point about an explicit format; I guess this is the same point.

Reviewer: What is the semantics if the acknowledgement list has a different size than the original payload count? If there are fewer, do the applications receive a default sentinel ack? If there are more, do the extra acks get ignored? Is the handling of acks meant to be also atomic, similar to receiving packets? If one of the ack processings failed, how would it affect other ack handlers? Similarly, how do we handle timeouts if one of the application handlers failed? Do we need to lower the protocol assumption, so that applications are designed to handle the case when neither ack nor timeout is called for a packet?

Contributor: I think that an acknowledgement should be rejected if it has a different size than the original payload count. I believe it would fail during packet acknowledgement verification if this was the case anyway. Regarding atomicity, my understanding of how this should work is that they are not atomic (we could have some failures and some successes; they would not all share the same result). However, I think if a single application was async, the sending chain should not be able to acknowledge the packet until all async acks have been written. (The receiving chain would not write the acknowledgement until the final application has written its async ack.)

All acknowledgements must be committed to and stored under the ICS24 acknowledgement path.
```typescript
func commitV2Acknowledgment(ack: Acknowledgement) {
  // TODO: Decide on canonical encoding scheme
  // Suggested CBOR
  ackBytes = cbor.encoding(ack)
  ics24.commitAcknowledgment(ackBytes)
}
```

Reviewer: As far as I see, with this new packet structure, we're no longer backward compatible with IBC v1. We can still reuse the same identifiers and sequence numbers from the v1 channel to create v2 packets and send them to the IBC v2 handlers. However, when IBC v2 interacts with applications, the callback functions still depend on the v1 packet structure. What's the reasoning behind this? Is this implicitly requiring that implementations now treat the packet field in callbacks as abstract and opaque, or has backward compatibility been dropped as a requirement for v2?

Reply: Backwards compatibility is not a strict requirement. We might break the application interfaces for the callbacks, though their general expected behaviour can remain the same.