fix: Apply timeout on batch basis #1133
Open
What
Applies the timeout to the entire batch instead of on a per-message basis.
How
The individual consume calls share the batch timeout: each call may only use the time the batch has remaining.
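Conceptually, the change looks like the sketch below (illustrative TypeScript only, not the binding's actual code; consumeOne and Message are hypothetical stand-ins for a single-message fetch, while the real fix applies to the consumeNum loop referenced under Why):

```typescript
// Illustrative sketch: each single-message fetch is capped by whatever
// portion of the batch timeout is still left, so the whole batch never
// blocks longer than batchTimeoutMs.
interface Message {
  value: Buffer | null;
  offset: number;
}

async function consumeBatch(
  consumeOne: (timeoutMs: number) => Promise<Message | null>, // hypothetical single-message consume
  count: number,
  batchTimeoutMs: number
): Promise<Message[]> {
  const messages: Message[] = [];
  const deadline = Date.now() + batchTimeoutMs;

  for (let i = 0; i < count; i++) {
    const remaining = deadline - Date.now();
    if (remaining <= 0) break;               // batch timeout exhausted, return what we have
    const msg = await consumeOne(remaining); // previously each call got the full timeout
    if (msg === null) break;                 // nothing arrived within the remaining time
    messages.push(msg);
  }
  return messages;
}
```

The key point is that the deadline is computed once per batch, so a slow topic can no longer multiply the timeout by the batch count.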
Why
The existing behavior is described in detail in the issue referenced below.
In short, for a delay of d milliseconds, a count of c messages, and a blocking time of b: b = d * c, given a constant topic rpm of 60000 / d. Given any rpm > 60000 / d: b = c * (60 / rpm) seconds.

For example, a 1000ms delay with a batch count of 100 will cause the consumeNum loop to block for up to 100 seconds given a constant topic rpm of 60.

References
Issue: confluentinc/confluent-kafka-javascript#262
PR Introduced: #34
Test & Review
With c = 100, d = 1000, and rpm = 700, the expectation is that pre-change we would block for about 8.6 seconds (it seems to take longer than that in the example, though) before the 100 messages are returned.
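As a quick sanity check, plugging those parameters into the formula from the Why section (a minimal snippet, nothing library-specific):

```typescript
// Sanity check of the pre-change expectation using the formulas above.
const c = 100;    // batch count
const d = 1000;   // per-message timeout (ms); 60000 / d = 60 rpm threshold
const rpm = 700;  // topic rate in messages per minute, above the threshold

const preChangeBlockSeconds = c * (60 / rpm);
console.log(preChangeBlockSeconds.toFixed(2)); // "8.57", i.e. about 8.6 seconds
```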
My testing environment has a 50ms cooldown on calling consume, so after the change we expect a batch of messages to be returned every 1050ms.

Pre-Change

Post-Change
