Replies: 1 comment
I eventually found that the deadlock was caused by my code not calling
I'm having an issue where `rd_kafka_destroy`, called to destroy a consumer, hangs indefinitely. When I attach gdb to the process, I find 3 threads, with the stack traces shown at the end of this post.

I couldn't reproduce this problem outside of my code base (when I extracted the rdkafka calls my application makes into a single, simpler test file, I didn't get the issue), so I suspect the issue is somewhere in my code, but I'd like to understand what happens when destroying a consumer and what could cause such a deadlock. From the stack traces, I'm assuming the main thread is trying to join the `rdk:main` thread, which is itself trying to join the `rdk:broker1` thread, which is blocked inside `rd_kafka_q_pop_serve`.

For reference, my code essentially does the following (producing and consuming to/from a new topic with a single partition):

- A producer `rd_kafka_t` is created and produces 100 events using a series of `rd_kafka_produce_batch` calls; `rd_kafka_poll` is called periodically.
- `rd_kafka_flush` is called after each call to `rd_kafka_produce_batch`, and delivery callbacks are used to ensure all the messages in the batch have been produced.
- A consumer `rd_kafka_t` is created, with `enable.auto.commit` set to `false` and `auto.offset.reset` set to `earliest`.
- `rd_kafka_subscribe` is used to subscribe it to the topic/partition.
- `rd_kafka_consumer_poll` is used to poll for messages repeatedly. I can see the 100 messages arriving correctly, and `rd_kafka_message_destroy` is called on each message after it is processed.
- `rd_kafka_unsubscribe` is called on the consumer.
- `rd_kafka_destroy` is called to destroy the consumer; this is where it blocks (a condensed sketch of this sequence follows below).
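For reference, here is a condensed, compilable sketch of the consumer-side flow described above. This is not the poster's actual code: the broker address, topic name, and `group.id` are placeholders. It also shows `rd_kafka_consumer_close()` before `rd_kafka_destroy()`, which the librdkafka documentation lists as part of an orderly shutdown of a subscribed consumer, even though the step list above does not mention it.

```c
/* Condensed sketch of the consumer flow described above (placeholders, not the
 * poster's code). Build with: gcc example.c -lrdkafka */
#include <stdio.h>
#include <librdkafka/rdkafka.h>

int main(void) {
    char errstr[512];

    /* Consumer configured as described: manual commits, earliest offset reset.
     * bootstrap.servers, group.id and the topic name are placeholders. */
    rd_kafka_conf_t *conf = rd_kafka_conf_new();
    rd_kafka_conf_set(conf, "bootstrap.servers", "localhost:9092", errstr, sizeof(errstr));
    rd_kafka_conf_set(conf, "group.id", "example-group", errstr, sizeof(errstr));
    rd_kafka_conf_set(conf, "enable.auto.commit", "false", errstr, sizeof(errstr));
    rd_kafka_conf_set(conf, "auto.offset.reset", "earliest", errstr, sizeof(errstr));

    rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_CONSUMER, conf, errstr, sizeof(errstr));
    if (!rk) {
        fprintf(stderr, "rd_kafka_new failed: %s\n", errstr);
        return 1;
    }

    /* Route rebalance and offset-commit events to the consumer queue so that
     * rd_kafka_consumer_poll() serves them. */
    rd_kafka_poll_set_consumer(rk);

    /* Subscribe to the single-partition topic. */
    rd_kafka_topic_partition_list_t *topics = rd_kafka_topic_partition_list_new(1);
    rd_kafka_topic_partition_list_add(topics, "test_topic", RD_KAFKA_PARTITION_UA);
    rd_kafka_subscribe(rk, topics);
    rd_kafka_topic_partition_list_destroy(topics);

    /* Poll until the 100 expected messages have been seen, destroying each
     * message after it has been processed, as in the description above. */
    int received = 0;
    while (received < 100) {
        rd_kafka_message_t *msg = rd_kafka_consumer_poll(rk, 1000);
        if (!msg)
            continue;              /* poll timeout, try again */
        if (!msg->err)
            received++;            /* process msg->payload / msg->len here */
        rd_kafka_message_destroy(msg);
    }

    /* Teardown: unsubscribe, close the consumer, then destroy the handle.
     * rd_kafka_consumer_close() is included here as the documented step before
     * rd_kafka_destroy() for a subscribed consumer; the step list above does
     * not mention it. */
    rd_kafka_unsubscribe(rk);
    rd_kafka_consumer_close(rk);
    rd_kafka_destroy(rk);

    return 0;
}
```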
GDB stack traces: