CRE-1323 #1711
base: main
Conversation
👋 mchain0, thanks for creating this pull request! To help reviewers, please consider creating future PRs as drafts first. This allows you to self-review and make any final changes before notifying the team. Once you're ready, you can mark it as "Ready for review" to request feedback. Thanks!
✅ API Diff Results - No breaking changes
subs.Go(func() {
	for {
		select {
		case update := <-poller.Updates():
updates are exposed by the poller here. The main goal of the updatesFanIn channel seems to be just to collect all the updates together. Changing its buffer from 0 to 1 does not really change anything on the poller side - the pollers will continue writing updates to their poller.Updates() channels.
It is a valid point that this may not be entirely sufficient.
Let's analyze: each poller goroutine has its own channel (buffered at 100), and they write independently. The fan-in goroutine reads from poller.Updates() and then tries to write to updatesFanIn; when that channel is unbuffered and the consumer is not receiving, the write blocks, and while blocked the goroutine can't loop back to Updates(). The problem, as I see it, is that we don't consume updates uniformly across the poller pool; with buffering we avoid this. The buffer could be 100 to match the pollers.
But I agree it's not a great improvement (although it does the job). In the code on line 67 there is a comment // TODO (dru) do we need a worker pool here? - maybe that's a better direction for the consumer?
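For context, here is a minimal sketch of the fan-in shape being discussed. Plain goroutines stand in for subs.Go, and the pollerBufSize constant and two-poller setup are assumptions for illustration, not code from this PR:

```go
package main

import "fmt"

const pollerBufSize = 100 // assumption: each poller buffers its own updates

type poller struct {
	updates chan interface{}
}

func (p *poller) Updates() <-chan interface{} { return p.updates }

func main() {
	pollers := []*poller{
		{updates: make(chan interface{}, pollerBufSize)},
		{updates: make(chan interface{}, pollerBufSize)},
	}

	// Fan-in channel; the buffer of 1 is the change under review.
	updatesFanIn := make(chan interface{}, 1)

	// One forwarding goroutine per poller. If updatesFanIn is full and
	// the consumer is busy, the send blocks, and the goroutine cannot
	// loop back to drain its own poller's channel.
	for _, p := range pollers {
		go func(p *poller) {
			for update := range p.Updates() {
				updatesFanIn <- update
			}
		}(p)
	}

	pollers[0].updates <- "a"
	pollers[1].updates <- "b"
	for i := 0; i < 2; i++ {
		fmt.Println(<-updatesFanIn)
	}
}
```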
Currently, I don't see how a buffer of 1 (or any other size) improves the setup over 0. It would be useful to have a test where it actually matters, or some profiling information that better justifies changing these files.
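A sketch of the kind of benchmark that could show whether the buffer size matters: it measures how long senders spend handing off while the receiver simulates exporter work. The 10µs workload, names, and overall shape are assumptions, not taken from this PR:

```go
package fanin_test

import (
	"testing"
	"time"
)

func benchFanIn(b *testing.B, buf int) {
	ch := make(chan interface{}, buf)
	done := make(chan struct{})

	// Consumer that is busy between receives, like the exporter loop.
	go func() {
		for range ch {
			time.Sleep(10 * time.Microsecond) // simulated exporter work
		}
		close(done)
	}()

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		ch <- i // measured: time the sender spends blocked on the hand-off
	}
	b.StopTimer()
	close(ch)
	<-done
}

func BenchmarkFanInUnbuffered(b *testing.B) { benchFanIn(b, 0) }
func BenchmarkFanInBuffer1(b *testing.B)    { benchFanIn(b, 1) }
```

If the two benchmarks report roughly the same ns/op, the buffer change is not buying anything observable.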
Is this module still used? If so, it needs proper code owners.
  // Listen for updates
- updatesFanIn := make(chan interface{})
+ updatesFanIn := make(chan interface{}, 1)
Unless this relieves a deadlock, I don't think it will have the effect that you intend. Blocking sends can be a feature in a system like this, since they become a natural scheduling point to swap over to executing the worker on the receiving end. The instant backpressure we had before this change leads to lower latency. Adding or increasing the buffer trades increased latency for what is effectively batched execution - do we really want/need that? And if so, why only 1 instead of 10, or some factor of the number of senders, etc.?
How do we identify this?
CRE-1323
Multiple pollers may send updates, but with a buffer of 1, at least one sender won't block waiting for the consumer.
With an unbuffered channel, a send blocks until a receiver is actively waiting. But the consumer may not be at the receive select yet - it may still be in the exporter loop.
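A minimal, hypothetical repro of that scenario: two senders try to hand off while the consumer is still busy. With buf = 0 both senders block until the consumer reaches the receive; with buf = 1 one send completes immediately:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const buf = 1 // change to 0: both senders stay blocked for the full sleep
	updatesFanIn := make(chan interface{}, buf)

	for i := 0; i < 2; i++ {
		go func(id int) {
			updatesFanIn <- id // blocks unless buffer space or a receiver is ready
			fmt.Printf("sender %d handed off\n", id)
		}(i)
	}

	// Consumer is "still in the exporter loop", not yet at the receive.
	time.Sleep(50 * time.Millisecond)

	for i := 0; i < 2; i++ {
		fmt.Println("consumed:", <-updatesFanIn)
	}
}
```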