Fix a logic bug in slot allocation within LvmtStore #16
Conversation
Force-pushed d28f85a to 491cbcd
Reviewable status: 0 of 2 files reviewed, 1 unresolved discussion (waiting on @rongma7)
src/lvmt/storage.rs
line 137 at r1 (raw file):
}

fn resolve_allocation_slot(
The current implementation is very weird. In the earlier loop rounds, you directly access the DB, but in the final round, you access the cache. Maybe it's logically correct, but it's no different from code that only works because of a bug.
This inappropriate logical structure also leads to a significant amount of duplicated code, the repeated part being the complex LVMT slot allocation logic. This reduces code readability and increases the difficulty of future maintenance.
Consider merging `&db` and `cache` into a single object, and use the `get` method of this object to replace the original `db.get`, similar to how the Conflux-Rust EVM state accesses storage. This is a typical read-through write-back cache policy, and the current implementation exposes a lack of fundamental knowledge.
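The read-through, write-back pattern the reviewer describes can be sketched as follows. This is a minimal illustration, not the actual LvmtStore code; the `CachedView` name, the key/value types, and the field layout are all assumptions:

```rust
use std::collections::HashMap;

// Hypothetical read-through, write-back wrapper: reads consult the
// in-memory overlay first and fall back to the immutable database;
// writes land only in the overlay.
struct CachedView<'a> {
    db: &'a HashMap<u64, String>, // immutable backing store
    cache: HashMap<u64, String>,  // pending writes for this commit
}

impl<'a> CachedView<'a> {
    fn new(db: &'a HashMap<u64, String>) -> Self {
        Self { db, cache: HashMap::new() }
    }

    // Read-through: overlay first, then the database.
    fn get(&self, key: &u64) -> Option<&String> {
        self.cache.get(key).or_else(|| self.db.get(key))
    }

    // Write-back: the database itself stays untouched until a later flush.
    fn set(&mut self, key: u64, value: String) {
        self.cache.insert(key, value);
    }
}
```

With such a wrapper, every loop round calls the same `get`, so the DB-versus-cache distinction disappears from the allocation logic.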
Reviewable status: 0 of 3 files reviewed, 2 unresolved discussions (waiting on @rongma7)
src/lvmt/storage.rs
line 58 at r3 (raw file):
for (key, value) in changes {
    if !set_of_keys.insert(key.clone()) {
        return Err(StorageError::DuplicateKeysInOneCommit);
I think we can just skip the duplicated keys, or panic.
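A minimal sketch of the "skip the duplicated keys" option, assuming `changes` is an ordered list of key-value pairs (the concrete types here are placeholders, not the real LVMT types):

```rust
use std::collections::HashSet;

// Keep only the first occurrence of each key; later duplicates are
// silently dropped instead of returning an error.
fn dedup_changes(changes: Vec<(String, u64)>) -> Vec<(String, u64)> {
    let mut seen = HashSet::new();
    changes
        .into_iter()
        .filter(|(key, _)| seen.insert(key.clone()))
        .collect()
}
```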
Reviewable status: 0 of 3 files reviewed, 2 unresolved discussions (waiting on @ChenxingLi)
src/lvmt/storage.rs
line 137 at r1 (raw file):
Previously, ChenxingLi (Chenxing Li) wrote…
The current implementation is very weird. In the earlier loop rounds, you directly access the DB, but in the final round, you access the cache. Maybe it's logically correct, but it's no different from code that only works because of a bug.
This inappropriate logical structure also leads to a significant amount of duplicated code, the repeated part being the complex LVMT slot allocation logic. This reduces code readability and increases the difficulty of future maintenance.
Consider merging `&db` and `cache` into a single object, and use the `get` method of this object to replace the original `db.get`, similar to how the Conflux-Rust EVM state accesses storage. This is a typical read-through write-back cache policy, and the current implementation exposes a lack of fundamental knowledge.
Done.
src/lvmt/storage.rs
line 58 at r3 (raw file):
Previously, ChenxingLi (Chenxing Li) wrote…
I think we can just skip the duplicated keys, or panic.
Done.
Reviewed 1 of 1 files at r3, 1 of 2 files at r4, all commit messages.
Reviewable status: 2 of 3 files reviewed, 2 unresolved discussions (waiting on @rongma7)
src/lvmt/storage.rs
line 122 at r4 (raw file):
}

fn set_cache(&mut self, amt_node_id: AmtNodeId, alloc_info: AllocationKeyInfo) {
rename to `set`
src/lvmt/storage.rs
line 124 at r4 (raw file):
fn set_cache(&mut self, amt_node_id: AmtNodeId, alloc_info: AllocationKeyInfo) {
    self.cache.insert(amt_node_id, alloc_info);
}

Implement `commit` to `WriteSchema` in this PR.
Reviewable status: 1 of 3 files reviewed, 2 unresolved discussions (waiting on @ChenxingLi)
src/lvmt/storage.rs
line 122 at r4 (raw file):
Previously, ChenxingLi (Chenxing Li) wrote…
rename to `set`
Done.
src/lvmt/storage.rs
line 124 at r4 (raw file):
Previously, ChenxingLi (Chenxing Li) wrote…
Implement `commit` to `WriteSchema` in this PR.
Done.
Reviewed all commit messages.
Reviewable status: 1 of 3 files reviewed, 1 unresolved discussion
…ful function naming
Force-pushed a95615f to ad3c7db
Reviewed 1 of 2 files at r6, all commit messages.
Reviewable status: 2 of 3 files reviewed, all discussions resolved (waiting on @rongma7)
Reviewed 1 of 2 files at r6.
Reviewable status: complete! All files reviewed, all discussions resolved (waiting on @rongma7)
Summary
This PR fixes a bug in the slot allocation logic within the `commit` function of `LvmtStore`.

Bug Description
The original logic processed each key in the `changes` parameter sequentially. If a key required a slot allocation, it would find the first slot not present in the `slot_alloc_view` (i.e., an immutable reference to the database) and allocate it to that key. Importantly, the database was not updated during this process. As a result, multiple keys in `changes` could receive the same slot allocation, which is incorrect.

Solution
The updated logic introduces a cache to track slots occupied by keys that have already been processed in the `changes` parameter. It uses a new structure, `AllocationCacheDb`, to integrate the database and the cache. If a key requires a slot allocation, the logic finds the first slot not present in this combined structure, allocates that slot to the key, and updates the cache accordingly.

Additionally, a related but separate change addresses a correctness requirement of the updated logic: the input must contain no duplicate keys, or different slots would be allocated to the same key. Since having no duplicate keys is a requirement, the implementation now skips any duplicate keys encountered in the input parameter `changes`.

Conclusion
These modifications correct the slot allocation logic by ensuring that each key receives a unique slot.
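The fix described above can be sketched as follows. The real `AllocationCacheDb` lives in `src/lvmt/storage.rs`; the field layout and slot representation below are assumptions for illustration only:

```rust
use std::collections::HashMap;

// Combined view over the immutable database and a per-commit cache.
// A slot is free only if neither side has claimed it, so two keys in
// the same `changes` batch can never receive the same slot.
struct AllocationCacheDb<'a> {
    db: &'a HashMap<u32, String>, // slot -> key already persisted
    cache: HashMap<u32, String>,  // slots allocated during this commit
}

impl<'a> AllocationCacheDb<'a> {
    fn is_free(&self, slot: u32) -> bool {
        !self.db.contains_key(&slot) && !self.cache.contains_key(&slot)
    }

    fn allocate(&mut self, key: &str) -> u32 {
        // Find the first slot absent from both the DB and the cache...
        let slot = (0u32..).find(|s| self.is_free(*s)).unwrap();
        // ...and record it so later keys in this commit skip it.
        self.cache.insert(slot, key.to_string());
        slot
    }
}
```

Without the cache half of this view, a second `allocate` call in the same batch would find the same "first free" slot as the first call, which is exactly the bug being fixed.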