feat(state): new state package + state clean cache #2864
base: main
Conversation
done event filter this works
use hashFn
…Eth/juno into maksym/state-refactor
Signed-off-by: MaksymMalicki <[email protected]>
Codecov Report. Attention: Patch coverage is

```
@@            Coverage Diff             @@
##             main    #2864      +/-   ##
==========================================
+ Coverage   74.35%   74.63%   +0.27%
==========================================
  Files         226      233       +7
  Lines       24688    25585     +897
==========================================
+ Hits        18357    19095     +738
- Misses       5105     5197      +92
- Partials     1226     1293      +67
```
core/state/cache.go
Outdated
```go
type stateCache struct {
	diffs map[felt.Felt]*diffCache // state root -> state diff
	links map[felt.Felt]felt.Felt  // child -> parent
```
Am I getting this correctly: does `links` represent a chain of the form `ch1 -> ch2 -> ch3 -> parent`, where `ch1` is the child of `ch2`, and so on?
If that's the case, I strongly believe a map is not the right data structure here. A linked list with two pointers looks like a better fit for the job. If I got it correctly, I think the best option is to implement our own; I didn't find any convincing package.
The pro is that we can do all the cache modifications in constant time; the con is that all getter methods need to iterate through the list. To solve this, we can add another map to the mix, of the form (key, pointer to linked-list node). Then we have O(1) (instead of O(128)) for pushing/popping a layer and O(log(128)) for getter operations (stays the same).
The space cost is 128 additional pointers (64 bits each) on top of the current implementation. I think we are fine.
Let me know what you think.
Yes, I was thinking about a linked list as well; I like the idea. This was supposed to mock a simpler version of the layertree. I will implement it.
core/state/state.go
Outdated
```go
	return &State{
		initRoot:     stateRoot,
		db:           db,
		contractTrie: contractTrie,
		classTrie:    classTrie,
		stateObjects: make(map[felt.Felt]*stateObject),
	}, nil
}
```
This is a type that contains references to other types; why are we returning it by reference and not by value?
This PR introduces a new `state` package, which should replace the `state.go` and `state_snapshot.go` files along with their test files. This package should be used throughout the project; the changes will appear in the following PR(s). The most important new pieces:

- A `StateDB` object, with a long-lived `TrieDB` as one of its fields. This will allow us to entirely get rid of `IndexedBatch` in the following PRs, whose slow writes negatively affect the performance of the node.
- A `stateCache`, which should speed up the reads. The clean cache was designed as a simple, "linked-list"-like structure, which should mimic the layertree approach without its complex nuances (i.e. we don't need a dirty cache, since we intend to write a new state on every block). The cache handles both state `Update` and `Revert`. The clean cache also allows us to get rid of `IndexedBatch`, since the performance benefit of `IndexedBatch` (quick reads from the in-memory cache layer) is no longer needed. This allows us to use a regular `Batch` with a much quicker write time.
- Writing to `TrieDB` and flushing the state changes is done concurrently.