fix: l1 handler message hash to txn hash mapping #3367
Conversation
Force-pushed from 9a9b02a to c92ac35
Codecov Report (❌ patch coverage check)

@@ Coverage Diff @@
##             main    #3367      +/-   ##
==========================================
- Coverage   75.63%   75.63%   -0.01%
==========================================
  Files         358      361       +3
  Lines       33846    33979     +133
==========================================
+ Hits        25601    25699      +98
- Misses       6430     6446      +16
- Partials     1815     1834      +19
Force-pushed from 8c1faac to 535f9d3
migration/migration.go (Outdated)
```go
var writeMu sync.Mutex
numWorkers := runtime.GOMAXPROCS(0)
workerPool := pool.New().WithErrors().WithMaxGoroutines(numWorkers)
batch := database.NewBatch()
```
How long does the migration take? If it's super quick, no need, but if not, what do you think of adding some logging to inform users? I leave it completely at your discretion.
Also, is this operation light enough to work under Juno's minimum requirements of 8 GB of RAM and 4 cores? I think yes, but asking just to be safe. An intuitive answer is good enough in this case in my opinion; let's not put more time into this PR than strictly required.
The migration was taking ~1 min 15 sec on my machine; let me run it in an environment similar to the minimum requirements.
I think how long it takes also matters because we're writing everything in a single batch. If the user kills the process in the middle, all progress is lost, and they have to pay the same migration cost all over again.
I'm not sure we should allow cancellation. If the migration is cancelled and re-run, it would redo all the operations; it won't resume from where it was cancelled. To be able to resume from where we left off, we would need to track additional state: by looking at the table where we write L1MessageHash to L2 txn hash, we cannot tell where we stopped.
Is it Sepolia or Mainnet?
It was Sepolia
I will add some logs as well; on Mainnet it will take a lot longer than on Sepolia.
Force-pushed from f09d1dc to 2f5b23e
Force-pushed from 2f5b23e to b3d6c64
Force-pushed from b3d6c64 to 436b125


Due to a bug, only the first L1Handler transaction in a block was written to the MsgHash to L2TxnHash mapping, so subsequent L1Handler transactions in the same block were missing, leading to errors in starknet_getMessageStatus. Includes a migration.