
Allow DA recorded blocks to come out-of-order #2415

Open · wants to merge 25 commits into base: master
Conversation

@MitchTurner (Member) commented Oct 30, 2024

Linked Issues/PRs

Closes: #2414

Eliminates the need for: #2397

Description

It sounds like we will occasionally receive recorded block information from the committer out of order. There was no real reason to expect the blocks in order, so I've generalized the algorithm to accept them in any order.
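
For illustration, a minimal sketch of the generalized approach, assuming the unrecorded blocks are keyed by height in a map; the function and types here are illustrative, not the crate's actual API:

```rust
use std::collections::BTreeMap;

// Sketch only: unrecorded L2 blocks are keyed by height, so a recorded
// batch can remove any subset of heights regardless of arrival order.
fn apply_recorded_heights(
    unrecorded_blocks: &mut BTreeMap<u32, u64>, // height -> bytes
    heights: &[u32],
) -> Result<u128, u32> {
    let mut freed_bytes: u128 = 0;
    for height in heights {
        // Lookup is by key, not by position in a contiguous range, so
        // [2, 3], [3, 2], or heights with gaps all behave the same.
        let bytes = unrecorded_blocks.remove(height).ok_or(*height)?;
        freed_bytes = freed_bytes.saturating_add(bytes as u128);
    }
    Ok(freed_bytes)
}
```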

Checklist

  • New behavior is reflected in tests

Before requesting review

  • I have reviewed the code myself

@MitchTurner changed the title from "Remove height parameter, change tests" to "Allow DA recorded blocks to come out-of-order" on Oct 30, 2024
@MitchTurner added the "no changelog" label (Skip the CI check of the changelog modification) on Oct 30, 2024
@MitchTurner MitchTurner marked this pull request as ready for review October 30, 2024 16:41
netrome previously approved these changes Oct 30, 2024

@netrome (Contributor) left a comment:

Looks good overall to me. A few suggestions, nits and questions.

On crates/fuel-gas-price-algorithm/src/v1.rs:
```rust
@@ -270,3 +228,132 @@ fn update_da_record_data__da_block_updates_projected_total_cost_with_known_and_g
    let expected = new_known_total_cost + guessed_part;
    assert_eq!(actual, expected as u128);
}

#[test]
fn update_da_record_data__updates_known_total_cost_if_blocks_are_out_of_order() {
```
Contributor:
Maybe it's because it's late, but I don't see where we provide blocks out of order in this test case. Is it because we skip 1 in the recorded_range?

Contributor:
Could we be even more explicit and set recorded_range to [3, 2] instead?

@MitchTurner (Member, Author):
That's a good idea. I've switched it. In reality, they'll probably be in order inside the vector, but they don't need to be. The point of this PR is that some heights can be skipped and returned to later. These tests skip 1 and do 2 and 3.

Contributor:
Yes, I also spent a couple of minutes looking at this. Maybe renaming recorded_range to recorded_range_with_skipped_height_1 would make the logic easier to follow. Or indeed just change it to 3, 2.
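
For illustration, a self-contained sketch of the test shape being discussed, with heights given as [3, 2]; the Updater stand-in here is hypothetical, not the crate's AlgorithmUpdaterV1:

```rust
use std::collections::BTreeMap;

// Hypothetical stand-in for the real updater, just to make the test runnable.
struct Updater {
    unrecorded_blocks: BTreeMap<u32, u64>, // height -> bytes
    known_total_cost: u128,
}

impl Updater {
    fn update_da_record_data(&mut self, heights: &[u32], cost: u128) -> Result<(), u32> {
        for h in heights {
            self.unrecorded_blocks.remove(h).ok_or(*h)?;
        }
        self.known_total_cost = self.known_total_cost.saturating_add(cost);
        Ok(())
    }
}

#[test]
fn update_da_record_data__accepts_heights_out_of_order() {
    // Three unrecorded blocks of 100 bytes each at heights 1, 2, 3.
    let mut updater = Updater {
        unrecorded_blocks: BTreeMap::from([(1, 100), (2, 100), (3, 100)]),
        known_total_cost: 0,
    };

    // Deliberately [3, 2]: out of order, and height 1 is skipped entirely.
    updater.update_da_record_data(&[3, 2], 1_000).unwrap();

    // Height 1 remains unrecorded and can be reported in a later batch.
    assert!(updater.unrecorded_blocks.contains_key(&1));
    assert_eq!(updater.known_total_cost, 1_000);
}
```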

rafal-ch previously approved these changes Oct 31, 2024

@rafal-ch (Contributor) left a comment:

LGTM 👍 with just a few nits and minor comments.

```rust
self.latest_da_cost_per_byte = new_cost_per_byte;
Ok(())
}
let recorded_bytes = self.drain_l2_block_bytes_for_range(heights)?;
```
Contributor:
Maybe just:

```diff
-let recorded_bytes = self.drain_l2_block_bytes_for_range(heights)?;
+let recorded_bytes = self.drain_l2_block_bytes(heights)?;
```
It'll save us from renaming the function when we change the datatype again. However, the intention here could be to refer not to Range<T> but to "range" as an arbitrary noun.

Either way, just a small detail.

```rust
#[error("Could not calculate cost per byte: {bytes:?} bytes, {cost:?} cost")]
CouldNotCalculateCostPerByte { bytes: u128, cost: u128 },
#[error("Failed to include L2 block data: {0}")]
FailedTooIncludeL2BlockData(String),
#[error("L2 block expected but not found in unrecorded blocks: {0}")]
L2BlockExpectedNotFound(u32),
#[error("L2 block expected but not found in unrecorded blocks: {height:?}")]
```
Contributor:
Unrelated, but this could be just {height}, not {height:?}. Though I also see that we use debug formatting for other messages, so maybe there's a reason.

netrome previously approved these changes Oct 31, 2024

@netrome (Contributor) left a comment:

Thanks for updating! 🙏

@MitchTurner dismissed stale reviews from netrome and rafal-ch via 922f3e0 on October 31, 2024
Comment on lines 534 to 538:

```rust
let bytes = self.unrecorded_blocks.remove(expected_height).ok_or(
    Error::L2BlockExpectedNotFound {
        height: *expected_height,
    },
)?;
```
Collaborator:
Do we really need an error here? We could log an error here and use 0 =)

@MitchTurner (Member, Author):
It's up to us. If this errors we have a serious problem with our code.

Collaborator:
If the committer has problems and submits several bundles for the same height (they can in theory, because they do re-bundle), it will break the node.

@MitchTurner (Member, Author):
That can't happen if it's finalized before they report to us.

@MitchTurner (Member, Author):
I've been thinking about the (very unlikely) case that our gas price metadata is "ahead" of the chain on restart. This could in theory result in a bunch of the blocks being missing from the unrecorded_blocks. At that point we might have the da_committer reporting those blocks again.

Since the precision of the algorithm isn't that important, I think the simplest solution would be to just ignore re-reported blocks.

In that case we don't want it throwing an error here. So I'm going to change this.

@MitchTurner (Member, Author):
Hmmmmmmmm.... Well it's kinda weird.

We don't get the values for each committed block, we just get a value for the entire bundle.

This means, for example, that if the unrecorded_blocks are just [(8, 100)] (height, bytes) and the committed blocks come back for heights [1,2,3,4,5,6,7,8] with cost 10_000, then we will remove the 8, subtract 100 from the total, and then add 10_000 to the total! Even though the costs for 1-7 are likely already accounted for.

We can take 1/8 of the 10_000, which would be better than nothing, but definitely inaccurate.

@MitchTurner (Member, Author):
In terms of the algorithm, I think the best approach here is to ignore the batch. The problem is if it ever occurs, we will carry those unrecorded_blocks forever. It's fine from the algorithm's perspective because the error will become negligible pretty quickly.

@MitchTurner (Member, Author) on Nov 19, 2024:
> The problem is if it ever occurs, we will carry those unrecorded_blocks forever.

Nevermind. We can both remove the blocks and ignore. I think that's the best.
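
For illustration, a sketch of the "remove if present, otherwise ignore" behavior settled on here, in contrast to the earlier erroring version; names and types are assumed, not the PR's exact code:

```rust
use std::collections::BTreeMap;

// Sketch only: re-reported or unknown heights are skipped rather than
// treated as errors, so a duplicate bundle from the committer is harmless.
fn drain_l2_block_bytes(
    unrecorded_blocks: &mut BTreeMap<u32, u64>,
    heights: &[u32],
) -> u128 {
    let mut total: u128 = 0;
    for height in heights {
        if let Some(bytes) = unrecorded_blocks.remove(height) {
            total = total.saturating_add(bytes as u128);
        }
        // else: the height was already recorded (or never tracked); ignore
        // it, since the algorithm's precision tolerates the small error.
    }
    total
}
```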

Comment on lines 546 to 550:

```diff
 let projection_portion: u128 = self
     .unrecorded_blocks
     .iter()
-    .map(|(_, &bytes)| (bytes as u128))
+    .map(|(_, &bytes)| u128::from(bytes))
     .fold(0_u128, |acc, n| acc.saturating_add(n))
```
Collaborator:
Do we really need to iterate each time here? Just curious why not use a structure that does the accounting internally and updates values once per add/remove event.

@MitchTurner (Member, Author):
This gets called semi-infrequently (only when we get a batch back from the committer). It could become more frequent I guess.

Yes. An alternative would be having the "total unrecorded bytes" tracked and when we remove blocks we subtract their bytes from that total. Then this would just be a multiplication of that. I can add that if you prefer.

@MitchTurner (Member, Author):
Done.
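
For illustration, a sketch of the bookkeeping that was adopted: a running byte total maintained on every add/remove, so the projection becomes a multiplication instead of a full iteration. The field and method names are assumed:

```rust
use std::collections::BTreeMap;

struct Updater {
    unrecorded_blocks: BTreeMap<u32, u64>, // height -> bytes
    unrecorded_blocks_bytes: u128,         // running total of the map's bytes
}

impl Updater {
    fn add_unrecorded_block(&mut self, height: u32, bytes: u64) {
        // Only count the bytes if the height wasn't already tracked.
        if self.unrecorded_blocks.insert(height, bytes).is_none() {
            self.unrecorded_blocks_bytes =
                self.unrecorded_blocks_bytes.saturating_add(bytes as u128);
        }
    }

    fn remove_unrecorded_block(&mut self, height: u32) {
        if let Some(bytes) = self.unrecorded_blocks.remove(&height) {
            self.unrecorded_blocks_bytes =
                self.unrecorded_blocks_bytes.saturating_sub(bytes as u128);
        }
    }
}
```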

@acerone85 (Contributor):

Had a look, only one question from my side. Otherwise LGTM.

acerone85 previously approved these changes Nov 1, 2024
rafal-ch previously approved these changes Nov 14, 2024
@MitchTurner dismissed stale reviews from rafal-ch and acerone85 via cfd1a71 on November 17, 2024
```rust
@@ -538,16 +543,13 @@ impl AlgorithmUpdaterV1 {
    )?;
    total = total.saturating_add(bytes as u128);
}
self.unrecorded_blocks_bytes = self.unrecorded_blocks_bytes.saturating_sub(total);
```
Contributor:
I'm thinking about the following scenario, not sure if this is an issue or not:

  1. we have 3 blocks in self.unrecorded_blocks
  2. we remove() 2 of them, but we get an error on the 3rd one
  3. now, the unrecorded_blocks_bytes becomes out-of-sync

@MitchTurner (Member, Author):
Yeah. This is the inverse of the problem I mentioned above.
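
One way to avoid that inconsistency, sketched under assumed types rather than as the PR's final fix: verify every height up front and only mutate the map and the running total once the whole batch is known to resolve:

```rust
use std::collections::BTreeMap;

fn drain_all_or_nothing(
    unrecorded_blocks: &mut BTreeMap<u32, u64>,
    unrecorded_blocks_bytes: &mut u128,
    heights: &[u32],
) -> Result<u128, u32> {
    // First pass: check every height exists before removing anything.
    for height in heights {
        if !unrecorded_blocks.contains_key(height) {
            return Err(*height); // nothing has been touched yet
        }
    }
    // Second pass: remove and account for the bytes in one consistent step.
    let mut total: u128 = 0;
    for height in heights {
        let bytes = unrecorded_blocks.remove(height).expect("checked above");
        total = total.saturating_add(bytes as u128);
    }
    *unrecorded_blocks_bytes = unrecorded_blocks_bytes.saturating_sub(total);
    Ok(total)
}
```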

Labels: no changelog (Skip the CI check of the changelog modification)
Projects: None yet
Development: Successfully merging this pull request may close: Allow DA recorded blocks to come in out-of-order
6 participants