Allow DA recorded blocks to come out-of-order #2415
base: master
Conversation
Looks good overall to me. A few suggestions, nits and questions.
crates/fuel-gas-price-algorithm/gas-price-analysis/src/charts.rs
crates/fuel-gas-price-algorithm/src/v1/tests/update_da_record_data_tests.rs
```diff
@@ -270,3 +228,132 @@ fn update_da_record_data__da_block_updates_projected_total_cost_with_known_and_g
     let expected = new_known_total_cost + guessed_part;
     assert_eq!(actual, expected as u128);
 }
+
+#[test]
+fn update_da_record_data__updates_known_total_cost_if_blocks_are_out_of_order() {
```
Maybe it's because it's late, but I don't see where we provide blocks out of order in this test case. Is it because we skip `1` in the `recorded_range`?
Could we be even more explicit and set `recorded_range` to `[3, 2]` instead?
That's a good idea. I've switched it. In reality, they'll probably be in order inside the vector, but they don't need to be. The point of this PR is that some heights can be skipped and returned to later. These tests skip `1` and do `2` and `3`.
Yes, I also spent a couple of minutes looking at this. Maybe renaming `recorded_range` to `recorded_range_with_skipped_height_1` would make it easier to follow the logic. Or indeed just change to `3, 2`.
crates/fuel-gas-price-algorithm/src/v1/tests/update_da_record_data_tests.rs
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
LGTM 👍
with just a few nits and minor comments
```rust
    self.latest_da_cost_per_byte = new_cost_per_byte;
    Ok(())
}

let recorded_bytes = self.drain_l2_block_bytes_for_range(heights)?;
```
Maybe just:

```diff
-let recorded_bytes = self.drain_l2_block_bytes_for_range(heights)?;
+let recorded_bytes = self.drain_l2_block_bytes(heights)?;
```

It'll save us from renaming the function when we change the datatype again. However, the intention here could be to not relate to the `Range<T>`, but to "range" as an arbitrary noun. Either way, just a small detail.
```rust
#[error("Could not calculate cost per byte: {bytes:?} bytes, {cost:?} cost")]
CouldNotCalculateCostPerByte { bytes: u128, cost: u128 },
#[error("Failed to include L2 block data: {0}")]
FailedTooIncludeL2BlockData(String),
#[error("L2 block expected but not found in unrecorded blocks: {0}")]
L2BlockExpectedNotFound(u32),
#[error("L2 block expected but not found in unrecorded blocks: {height:?}")]
```
Unrelated, but this could be just `{height}`, not `{height:?}`. But I also see that we use debug formatting for other messages, so maybe there's a reason.
crates/fuel-gas-price-algorithm/src/v1/tests/update_da_record_data_tests.rs
Thanks for updating! 🙏
…data_tests.rs Co-authored-by: Rafał Chabowski <[email protected]>
```rust
let bytes = self.unrecorded_blocks.remove(expected_height).ok_or(
    Error::L2BlockExpectedNotFound {
        height: *expected_height,
    },
)?;
```
Do we really need an error here? We could log an error here and use `0` =)
It's up to us. If this errors we have a serious problem with our code.
If the committer has problems and submits several bundles for the same height (they can in theory, because they do re-bundle), it will break the node.
That can't happen if it's finalized before they report to us.
I've been thinking about the (very unlikely) case that our gas price metadata is "ahead" of the chain on restart. This could in theory result in a bunch of the blocks being missing from the `unrecorded_blocks`. At that point we might have the da_committer reporting those blocks again.
Since the precision of the algorithm isn't that important, I think the simplest solution would be to just ignore re-reported blocks.
In that case we don't want it throwing an error here. So I'm going to change this.
Hmmmmmmmm.... Well it's kinda weird.
We don't get the values for each committed block, we just get a value for the entire bundle.
This means, for example, if the `unrecorded_blocks` are just `[(8, 100)]` (height, bytes) and the committed blocks come back for heights `[1,2,3,4,5,6,7,8]` with cost `10_000`, then we will remove the 8, subtract `100` from the total, and then add `10_000` to the total! Even though it's likely that the costs for 1-7 are already accounted for.
We can take 1/8 of the `10_000`, which would be better than nothing, but definitely inaccurate.
In terms of the algorithm, I think the best approach here is to ignore the batch. The problem is, if it ever occurs, we will carry those `unrecorded_blocks` forever. It's fine from the algorithm's perspective because the error will become negligible pretty quickly.
> The problem is if it ever occurs, we will carry those unrecorded_blocks forever.

Nevermind. We can both remove the blocks and ignore. I think that's the best.
```diff
 let projection_portion: u128 = self
     .unrecorded_blocks
     .iter()
-    .map(|(_, &bytes)| (bytes as u128))
+    .map(|(_, &bytes)| u128::from(bytes))
     .fold(0_u128, |acc, n| acc.saturating_add(n))
```
Do we really need to iterate each time here? Just curious why we don't use a structure that does the accounting internally and updates the value once per add/remove event.
This gets called semi-infrequently (only when we get a batch back from the committer). It could become more frequent I guess.
Yes. An alternative would be having the "total unrecorded bytes" tracked and when we remove blocks we subtract their bytes from that total. Then this would just be a multiplication of that. I can add that if you prefer.
Done.
had a look, only one question from my side. otherwise LGTM
…-be-received-out-of-order
```diff
@@ -538,16 +543,13 @@ impl AlgorithmUpdaterV1 {
         )?;
         total = total.saturating_add(bytes as u128);
     }
+    self.unrecorded_blocks_bytes = self.unrecorded_blocks_bytes.saturating_sub(total);
```
I'm thinking about the following scenario, not sure if this is an issue or not:
- we have 3 blocks in `self.unrecorded_blocks`
- we `remove()` 2 of them, but we get an error on the 3rd one
- now, the `unrecorded_blocks_bytes` becomes out-of-sync
Yeah. This is the inverse of the problem I mentioned above.
Linked Issues/PRs
Closes: #2414
Eliminates the need for: #2397
Description
It sounds like it is likely we will occasionally receive recorded block information from the committer out of order. There was no real reason to expect it in order, so I've generalized the algorithm to accept any order.
Checklist
Before requesting review