feat!: switch back to protons #468
Conversation
Force-pushed from 6143b8e to b20e007.
Codecov Report

Additional details and impacted files:

@@            Coverage Diff            @@
##           master     #468      +/-  ##
==========================================
- Coverage   81.39%   78.52%   -2.88%
==========================================
  Files           48       46       -2
  Lines        12333    10868    -1465
  Branches      1303     1058     -245
==========================================
- Hits         10039     8534    -1505
- Misses        2294     2334      +40

☔ View full report in Codecov by Sentry.
[Benchmark results comparing this branch against master, using the latest benchmark with different message lengths.]
Updates code from ChainSafe#468
@tuyennhv what's the status of this PR? Could it be moved forward?
Force-pushed from b20e007 to 31e0ddc.
Sorry @achingbrain for the delay, I just rebased against master and am testing it with lodestar now, cc @wemeetagain
Somehow the RSS in this branch is consistently higher than with gossipsub v11.1.0 when I test on different nodes.
That's interesting: protons is synchronous and stateless, so any memory it uses would be garbage collectable immediately. That said, it's going to have a different memory usage pattern to protobuf.js. Do you have a view on what is taking up the extra space?
@achingbrain how is it different from protobuf.js?
I think it's worth comparing https://github.com/protobufjs/protobuf.js/blob/master/src/reader.js and https://github.com/ipfs/protons/blob/main/packages/protons-runtime/src/utils/reader.ts. From a quick look they share a lot of similarities.
It isn't: the reader/writer parts of protons are from protobuf.js, ported to ESM.
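For reference, a minimal sketch of the tag/wire-type read loop both libraries share, written here against protobuf.js's minimal Reader (the field numbers and message shape are illustrative, not gossipsub's actual RPC schema):

```ts
import { Reader } from 'protobufjs/minimal'

// Illustrative only: decode a hypothetical message { 1: topic (string), 2: data (bytes) }
// using the same reader loop that protons-runtime ports from protobuf.js.
function decodeExample (bytes: Uint8Array): { topic?: string, data?: Uint8Array } {
  const reader = Reader.create(bytes)
  const end = reader.len
  const out: { topic?: string, data?: Uint8Array } = {}

  while (reader.pos < end) {
    const tag = reader.uint32()

    switch (tag >>> 3) {
      case 1:
        out.topic = reader.string()
        break
      case 2:
        out.data = reader.bytes()
        break
      default:
        // skip unknown fields based on their wire type
        reader.skipType(tag & 7)
        break
    }
  }

  return out
}
```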
Yeah, it's basically the same code, which is why it's surprising the RSS is larger. For the same workload the memory behaviour should be comparable. I don't have a view on the testing you've been doing with Lodestar, so I can only assume the workload is different. Would it be possible for you to record the inputs and outputs of the protobuf.js usage in the app (e.g. write the buffers being decoded and the objects being encoded out to a file)? Then we could replay them without the rest of lodestar present and observe the memory usage.
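A rough sketch of what that capture/replay could look like (the file format, tap point, and helper names here are assumptions, not existing gossipsub or lodestar hooks):

```ts
import { appendFileSync, readFileSync } from 'node:fs'
import { fromString as uint8ArrayFromString, toString as uint8ArrayToString } from 'uint8arrays'

// Capture: call this wherever the app decodes an incoming RPC buffer, writing one
// base64-encoded buffer per line so the exact workload can be replayed later.
export function recordRpcBuffer (bytes: Uint8Array, file = './rpc-buffers.log'): void {
  appendFileSync(file, uint8ArrayToString(bytes, 'base64') + '\n')
}

// Replay: run every recorded buffer through the codec in isolation, without the rest
// of lodestar, and print memory usage so RSS/heap growth can be observed directly.
export function replayRpcBuffers (decode: (bytes: Uint8Array) => unknown, file = './rpc-buffers.log'): void {
  const buffers = readFileSync(file, 'utf8')
    .split('\n')
    .filter(Boolean)
    .map((line) => uint8ArrayFromString(line, 'base64'))

  for (const buf of buffers) {
    decode(buf)
  }

  console.log(process.memoryUsage())
}
```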
Just found out I compared apples to oranges: I used a service deployment to test this branch and compared it to docker deployment instances, and the service deployments always have higher RSS somehow. So we don't need to worry about RSS. I'll need 1-2 more days to make sure the number of mesh peers is the same @achingbrain
Force-pushed from d99d3df to a82b44f.
looks good to me
src/message/decodeRpc.ts (outdated)
 */
-export function decodeRpc (bytes: Uint8Array, opts: DecodeRPCLimits): IRPC {
+export function decodeRpc (bytes: Uint8Array, opts: DecodeRPCLimits): RPC {
I think the protons runtime can now limit fields while decoding.
Might be worth looking into as a followup PR.
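A minimal sketch of what that followup could look like, assuming the generated protons codec accepts a `limits` option at decode time (the option shape, the `DecodeRPCLimits` field names, and the import paths below are assumptions based on this thread, not a verified API):

```ts
import { RPC } from './rpc.js'
import type { DecodeRPCLimits } from './decodeRpc.js'

// Hypothetical followup: let the generated protons codec enforce the limits while it
// decodes, instead of decoding everything and truncating afterwards. The `limits`
// option and its key names are assumptions; check the protons-runtime docs before use.
export function decodeRpcWithRuntimeLimits (bytes: Uint8Array, opts: DecodeRPCLimits): RPC {
  return RPC.decode(bytes, {
    limits: {
      subscriptions: opts.maxSubscriptions,
      messages: opts.maxMessages
    }
  })
}
```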
I can give it a try, thanks @wemeetagain 👍
Motivation
Switch from protobufjs to protons
Description
While the decode flow is comparable or a little faster, the encode flow is ~1.5x slower.
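For context, a rough sketch of the kind of micro-benchmark behind that comparison (the codec interface, message shape, and iteration counts are illustrative; the actual benchmark used for this PR may differ):

```ts
// Illustrative harness: time encode/decode round-trips for payloads of different lengths
// with whichever codec is plugged in (e.g. the generated RPC codec). The message shape
// passed to makeMessage is up to the caller and is not gossipsub's exact RPC schema.
interface Codec<T> {
  encode(obj: T): Uint8Array
  decode(buf: Uint8Array): T
}

function bench (label: string, fn: () => void, iterations = 10_000): void {
  const start = process.hrtime.bigint()
  for (let i = 0; i < iterations; i++) fn()
  const ms = Number(process.hrtime.bigint() - start) / 1e6
  console.log(`${label}: ${ms.toFixed(1)} ms / ${iterations} iterations`)
}

export function compareCodec<T> (codec: Codec<T>, makeMessage: (payloadLength: number) => T): void {
  for (const length of [100, 1_000, 10_000]) {
    const msg = makeMessage(length)
    const encoded = codec.encode(msg)

    bench(`encode (${length} byte payload)`, () => { codec.encode(msg) })
    bench(`decode (${length} byte payload)`, () => { codec.decode(encoded) })
  }
}
```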
cc @wemeetagain
Closes #453
Closes #318
TODOs
Test with lodestar to see how it affects the performance