
Conversation

Contributor

@mchain0 commented Nov 18, 2025

CRE-1323

Multiple pollers may send updates; with a buffer of 1, at least one sender can hand off an update without blocking on the consumer.

With an unbuffered channel, a send blocks until a receiver is actively waiting. But the consumer may not be at the receive select yet; it may still be in the exporter loop.
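
For context, a minimal sketch of the fan-in wiring this PR touches (only `poller.Updates()` and `updatesFanIn` appear in the diff; `subs`, `pollers`, and `ctx` are assumptions about the surrounding code):

```go
// One forwarding goroutine per poller copies updates from the poller's
// own buffered channel into the shared updatesFanIn channel.
updatesFanIn := make(chan interface{}, 1) // unbuffered before this PR

for _, poller := range pollers {
	poller := poller // pin loop variable (needed before Go 1.22)
	subs.Go(func() {
		for {
			select {
			case update := <-poller.Updates():
				// With an unbuffered updatesFanIn, this send parks the
				// goroutine until the consumer reaches its receive; with
				// a buffer of 1, one update can be handed off immediately.
				updatesFanIn <- update
			case <-ctx.Done():
				return
			}
		}
	})
}
```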

@github-actions

👋 mchain0, thanks for creating this pull request!

To help reviewers, please consider creating future PRs as drafts first. This allows you to self-review and make any final changes before notifying the team.

Once you're ready, you can mark it as "Ready for review" to request feedback. Thanks!

@github-actions bot commented Nov 18, 2025

✅ API Diff Results - No breaking changes


📄 View full apidiff report

```go
subs.Go(func() {
	for {
		select {
		case update := <-poller.Updates():
```
Contributor

Updates are exposed by the poller here. The main goal of the updatesFanIn channel seems to be just to collect all the updates together. Changing its buffer from 0 to 1 does not really change anything on the poller side: the pollers will continue writing updates to their poller.Updates() channels.

Contributor Author

It is a valid point that this may not be entirely sufficient.

Let's analyze: each poller goroutine has its own channel (buffered at 100) and writes to it independently. A forwarding goroutine reads from poller.Updates() and then tries to write to updatesFanIn; if that channel is unbuffered and the consumer is not receiving, the write blocks, and while blocked the goroutine can't loop back to Updates(). The problem, as I see it, is that we then don't drain uniformly across the poller pool; buffering avoids this. The buffer could be 100 to match the pollers.

But I agree it's not a great improvement (although it does the job). The code on line 67 has a comment, // TODO (dru) do we need a worker pool here? <- maybe that's a better direction for the consumer?
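
A rough sketch of that worker-pool direction (everything here is hypothetical: numWorkers, handleUpdate, and the reuse of subs and ctx are assumptions, not existing code):

```go
// Several consumers drain updatesFanIn concurrently, so one slow
// export no longer stalls every forwarding goroutine at its send.
const numWorkers = 4 // assumed; could be tuned to the poller pool size

for i := 0; i < numWorkers; i++ {
	subs.Go(func() {
		for {
			select {
			case update := <-updatesFanIn:
				handleUpdate(update) // hypothetical stand-in for the exporter loop body
			case <-ctx.Done():
				return
			}
		}
	})
}
```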

Contributor

Currently, I don't see how a buffer of 1 (or any other number) improves the situation over 0. It would be useful to have a test where it actually matters, or some profiling information that can better justify changing these files.
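
One way such a test could look, as a self-contained sketch (the package name and timeout are assumptions; it exercises only the channel semantics, not the real pollers):

```go
package fanin_test

import (
	"testing"
	"time"
)

// With a buffer of 1 the send completes even though no receiver is
// waiting yet; with an unbuffered channel the same send would block
// until a receiver arrived, and this test would time out.
func TestFanInSendWithBuffer(t *testing.T) {
	updatesFanIn := make(chan interface{}, 1)

	done := make(chan struct{})
	go func() {
		updatesFanIn <- "update"
		close(done)
	}()

	select {
	case <-done:
		// Send completed without a waiting receiver.
	case <-time.After(time.Second):
		t.Fatal("send blocked despite the buffered channel")
	}
}
```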

Contributor

@jmank88 commented Nov 18, 2025

Is this module still used? If so, it needs proper code owners.


```diff
 // Listen for updates
-updatesFanIn := make(chan interface{})
+updatesFanIn := make(chan interface{}, 1)
```
Contributor

Unless this relieves a deadlock, I don't think it will have the effect that you intend. Blocking sends can be a feature in a system like this: they become a natural scheduling point to swap over to executing the worker on the receiving end. The instant backpressure from before this change leads to lower latency. Adding/increasing the buffer trades increased latency for what is effectively batched execution - do we really want/need that? And if so, why a buffer of 1 instead of 10, or some factor of the number of senders?
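
A toy program (all names hypothetical) that makes the tradeoff visible: with capacity 0 each send completes only at the moment of handoff, while with capacity N the sender runs up to N items ahead and the consumer sees what is effectively a small batch:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ch := make(chan int, 1) // try 0, 1, and 10 to compare behavior

	go func() {
		for i := 0; i < 3; i++ {
			ch <- i
			fmt.Println("sent", i) // with capacity 0 this prints only after the handoff
		}
		close(ch)
	}()

	for v := range ch {
		time.Sleep(100 * time.Millisecond) // deliberately slow consumer
		fmt.Println("received", v)
	}
}
```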

Contributor Author

@mchain0 commented Nov 20, 2025

> Is this module still used? If so, it needs proper code owners.

How do we identify this?
