Reduce allocations by using pooled memory and recycling memory streams #694
Comments
Some explanations:
So a 25% reduction with this specific run. Looks promising!
related to #452
I have System.IO.Pipelines working on the socket connection as well, and will push the PR soon for review and testing. The improvements are quite impressive; I'll add details and screenshots later :)
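For readers unfamiliar with Pipelines, the receive side can look roughly like the sketch below. This is an illustration of the technique, not the PR's actual code: `ProcessFrame` is a hypothetical handler, and the framing (7-byte header carrying a big-endian payload size, followed by the payload and an end byte) follows AMQP 0-9-1.

```csharp
using System;
using System.Buffers;
using System.Buffers.Binary;
using System.IO.Pipelines;
using System.Net.Sockets;
using System.Threading.Tasks;

static class PipelineReaderSketch
{
    // Reads AMQP 0-9-1 style frames from a connected socket. The pipe rents
    // its buffers from a pool, so steady-state reads allocate almost nothing.
    public static async Task ReadFramesAsync(Socket socket)
    {
        PipeReader reader = PipeReader.Create(new NetworkStream(socket));

        while (true)
        {
            ReadResult result = await reader.ReadAsync();
            ReadOnlySequence<byte> buffer = result.Buffer;

            while (TryReadFrame(ref buffer, out ReadOnlySequence<byte> frame))
            {
                ProcessFrame(frame);
            }

            // Report what was consumed/examined so the pipe can recycle
            // the consumed segments back into its pool.
            reader.AdvanceTo(buffer.Start, buffer.End);

            if (result.IsCompleted)
                break;
        }

        await reader.CompleteAsync();
    }

    private static bool TryReadFrame(ref ReadOnlySequence<byte> buffer, out ReadOnlySequence<byte> frame)
    {
        frame = default;
        if (buffer.Length < 7)
            return false; // not even a full header yet

        // Bytes 3-6 of the header hold the big-endian payload size.
        Span<byte> header = stackalloc byte[7];
        buffer.Slice(0, 7).CopyTo(header);
        uint payloadSize = BinaryPrimitives.ReadUInt32BigEndian(header.Slice(3));

        long frameLength = 7 + payloadSize + 1; // header + payload + frame-end byte
        if (buffer.Length < frameLength)
            return false; // wait for more data

        frame = buffer.Slice(0, frameLength);
        buffer = buffer.Slice(frameLength);
        return true;
    }

    // Hypothetical handler; frame parsing and dispatch are out of scope here.
    private static void ProcessFrame(in ReadOnlySequence<byte> frame) { }
}
```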
So here's where I'm at. Similar scenario as above: one sender, one receiver. Bulk send 50000 messages in 500x100-message batches, but now with 512-byte, 4 KB and 16 KB payloads.

Before: (profiler screenshots for the 512-byte, 4 KB and 16 KB payloads)

After (using pooled arrays, recyclable memory streams and System.IO.Pipelines): (profiler screenshots for the 512-byte, 4 KB and 16 KB payloads)
More progress :)

(profiler screenshots: 512-byte, 4 KB and 16 KB payloads)

To summarize: …

What's left: …
@stebet impressive 💪
#1445 appears to be the "final word" on this issue. Closing. |
The RabbitMQ client currently allocates a lot of unnecessary memory and incurs significant GC overhead.

I'm currently working on a PR to reduce the allocations being made, and will probably introduce some *Async overloads as well, since they help reduce lock contention.
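To make the title's two techniques concrete, here is a minimal sketch of pooled buffers plus recyclable streams. `RecyclableMemoryStreamManager` comes from the Microsoft.IO.RecyclableMemoryStream package; `WriteFrame` and its sizing are illustrative, not the client's actual API.

```csharp
using System;
using System.Buffers;
using Microsoft.IO;

static class PooledSerializationSketch
{
    // One manager per process; it owns the pooled blocks that back every
    // RecyclableMemoryStream it hands out.
    private static readonly RecyclableMemoryStreamManager StreamManager =
        new RecyclableMemoryStreamManager();

    public static void WriteFrame(byte[] payload)
    {
        // Rent a scratch buffer from the shared pool instead of allocating
        // a fresh array per frame.
        byte[] scratch = ArrayPool<byte>.Shared.Rent(payload.Length + 8);
        try
        {
            // ... serialize the frame header and payload into scratch ...
        }
        finally
        {
            // Returning the buffer is what makes Rent allocation-free
            // in steady state.
            ArrayPool<byte>.Shared.Return(scratch);
        }

        // Streams from the manager reuse pooled blocks instead of growing
        // fresh byte[]s; Dispose returns the blocks to the pool.
        using (var stream = StreamManager.GetStream("frame"))
        {
            stream.Write(payload, 0, payload.Length);
            // ... hand the stream off to the socket writer ...
        }
    }
}
```

Both pools trade a little bookkeeping for near-zero steady-state allocation, which is what shows up as reduced GC time in the profiles below.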
Here is the current progress I have made with a test app that opens two connections. One connection bulk-publishes 50000 messages in 100-message batches, each message containing just a 4-byte integer as the payload. The other connection receives those messages, so the test mostly measures the frame serialize/deserialize overhead.
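A comparable harness can be put together along these lines. This is a sketch against the classic `IModel` API; the `localhost` endpoint and the `alloc-test` queue name are assumptions, not details from the original test app.

```csharp
using System;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

static class PublishBenchmarkSketch
{
    public static void Run()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };

        using (var publishConnection = factory.CreateConnection())
        using (var consumeConnection = factory.CreateConnection())
        using (var publishChannel = publishConnection.CreateModel())
        using (var consumeChannel = consumeConnection.CreateModel())
        {
            publishChannel.QueueDeclare("alloc-test", durable: false,
                exclusive: false, autoDelete: true);

            // Receiver: drain messages on the second connection.
            var consumer = new EventingBasicConsumer(consumeChannel);
            consumer.Received += (_, ea) => { /* payload is in ea.Body */ };
            consumeChannel.BasicConsume("alloc-test", autoAck: true, consumer);

            // Sender: 500 batches x 100 messages = 50000 messages,
            // each carrying a single 4-byte integer as the payload.
            for (int batch = 0; batch < 500; batch++)
            {
                for (int i = 0; i < 100; i++)
                {
                    byte[] body = BitConverter.GetBytes(batch * 100 + i);
                    publishChannel.BasicPublish(exchange: "",
                        routingKey: "alloc-test", basicProperties: null, body: body);
                }
            }

            Console.ReadLine(); // keep the consumer alive while profiling
        }
    }
}
```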
Before: (profiler screenshot)

After: (profiler screenshot)
I'll follow up on this issue with discussion and link the PR once it's ready for review.