Start partition producer in get metadata function #636
Conversation
Force-pushed from f42c4e3 to 8d26440.
@zmstone Hi, how about this PR?
Thank you for the PR. I just got back from vacation last week and am slowly catching up with the missed activity.
Force-pushed from 0e1f3a6 to 8287fef.
@@ -1,5 +1,8 @@
# Changelog
- 4.4.4
- Start producer processes for newly added partitions in the `do_get_metadata` function.
If the goal is to automatically start producers for newly discovered partitions, maybe brod_client should start a timer to periodically refresh partition counts for each topic which has a producers_sup started.
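The timer-based refresh suggested here could look roughly like the sketch below. This is a hedged illustration, not brod's actual internals: the module name, message name, interval, and the `refresh_topics/1` helper are all made up for illustration.

```erlang
-module(partition_refresh_sketch).
-behaviour(gen_server).
-export([start_link/0, init/1, handle_info/2, handle_call/3, handle_cast/2]).

%% Illustrative interval; a real implementation would make this configurable.
-define(REFRESH_INTERVAL, 30000).

start_link() ->
    gen_server:start_link(?MODULE, [], []).

init([]) ->
    erlang:send_after(?REFRESH_INTERVAL, self(), refresh_partition_counts),
    {ok, #{}}.

handle_info(refresh_partition_counts, State) ->
    %% For each topic that has a producers_sup, re-fetch the partition
    %% count and start producers for any newly added partitions.
    %% refresh_topics/1 is a placeholder, not a real brod function.
    NewState = refresh_topics(State),
    erlang:send_after(?REFRESH_INTERVAL, self(), refresh_partition_counts),
    {noreply, NewState}.

handle_call(_Req, _From, State) -> {reply, ok, State}.
handle_cast(_Msg, State) -> {noreply, State}.

refresh_topics(State) -> State. %% placeholder
```

Re-arming the timer at the end of the `handle_info` clause means a slow metadata call simply delays the next tick rather than stacking refresh messages in the mailbox.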
At the beginning I did consider this: perhaps a process should periodically fetch Kafka metadata and start producers for new partitions. I also noticed that PR #623 already implements that feature.
But two reasons led me to open this PR:
- The bug in brod_client:get_metadata/2: this function only updates the topic's partition count in the ETS table, but does not start the partition producers.
- In my situation, I fetch Kafka metadata and calculate the partition index I need to produce the message to; with the Support Create Partitions #623 feature, there may be a lag during which the partition producer is not yet started when I need to produce.
Of course, the #623 feature is good, and I don't think it conflicts with this PR.
What do you think?
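The first bullet above is the core idea of this PR. A hedged sketch of that idea (illustrative names only, not the actual `brod_client` code):

```erlang
%% Illustrative sketch: when a metadata refresh observes more partitions
%% than are currently known, start producers for the newly added partition
%% indexes. start_producer_for_partition/3 is a placeholder name, not a
%% real brod function.
maybe_start_new_partition_producers(Topic, OldCount, NewCount, State)
  when is_integer(OldCount), NewCount > OldCount ->
    lists:foreach(
      fun(Partition) ->
              start_producer_for_partition(Topic, Partition, State)
      end,
      %% Kafka partition indexes are 0-based, so the new partitions
      %% are OldCount .. NewCount - 1.
      lists:seq(OldCount, NewCount - 1));
maybe_start_new_partition_producers(_Topic, _OldCount, _NewCount, _State) ->
    ok.
```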
Yes, I think I can agree to this. But should we also stop producers if some partitions are deleted?
> Yes, I think I can agree to this. But should we also stop producers if some partitions are deleted?

Kafka topic partitions can only be increased, never reduced, so the producers never need to be stopped.
Yes, I'm referring to the case when a topic is deleted and then re-created.
Please kindly add some tests.
src/brod_client.erl (outdated)
@@ -414,7 +415,15 @@ handle_call({get_transactional_coordinator, TransactionId}, _From, State) ->
  {Result, NewState} = do_get_transactional_coordinator(State, TransactionId),
  {reply, Result, NewState};
handle_call({start_producer, TopicName, ProducerConfig}, _From, State) ->
  {Reply, NewState} = do_start_producer(TopicName, ProducerConfig, State),
  %% NOTE: Store ProducerConfig into the config in state
Producer config is per topic; it is already stored in the supervisor child spec.
We can port get_childspec to brod_supervisor3.
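For reference, stock OTP supervisors already expose `supervisor:get_childspec/2` (OTP 18+), which is presumably what would be ported here. A hedged sketch of how a producer's config could be recovered from the child spec — the child id and the shape of the start MFA shown below are assumptions for illustration, not brod's actual layout:

```erlang
%% Sketch only: assumes the child id is the topic name and the producer
%% config is the last argument of the child's start MFA, e.g.
%% {brod_producer, start_link, [ClientPid, Topic, Config]}.
get_producer_config(Sup, Topic) ->
    case supervisor:get_childspec(Sup, Topic) of
        {ok, #{start := {_Mod, _Fun, Args}}} ->
            {ok, lists:last(Args)};
        {error, not_found} = Error ->
            Error
    end.
```

Porting this to brod_supervisor3 would avoid duplicating the per-topic config in the client's state, since the supervisor is already the authoritative owner of the child specs.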
Done.
src/brod_client.erl (outdated)
-spec started_producers(pid()) -> #{topic() => non_neg_integer()}.
started_producers(ProducerSup) ->
  Producers = brod_supervisor3:which_children(ProducerSup),
brod_supervisor3:which_children(ProducerSup) will give you the per-topic supervisors, which supervise the per-partition workers.
I think you'll need to do it like this:
started_producers(ProducerSup) ->
  TopicSups = which_producers(ProducerSup),
  lists:foldl(
    fun({Topic, Pid, _Type, _Mods}, Acc) when is_pid(Pid) ->
            PartitionWorkers = which_producers(Pid),
            case length(PartitionWorkers) of
              0 -> Acc;
              L -> Acc#{Topic => L}
            end;
       (_, Acc) ->
            Acc
    end, #{}, TopicSups).
and this
which_producers(Sup) ->
try
brod_supervisor3:which_children(Sup)
catch
_:_ ->
[]
end.
Done.
Force-pushed from e9cbc7a to 65981a2.
Done.
This is good to merge. However, it would be nice to have another PR to address a decreased partition count.
Thank you for merging this PR. Would you release a tag for this feature?
Released 4.4.5.
Closes #635.