-
Hey, I've managed to run the connector locally (not the lightweight one). I inserted a large amount of data into my topic and noticed that the sink connector committed the latest offset after a short time, while the row count in ClickHouse was still incrementing. I then restarted Kafka Connect and saw that it stopped inserting rows into ClickHouse; for example, it ended up with 50k rows in ClickHouse whereas I originally had 800k messages in the topic. Am I missing something on the configuration side?
-
I see that there is some logic around handling offsets in acknowledgeRecords.
No, the lightweight version does not include any Kafka dependencies. The logic is straightforward: the threads in the thread pool need to notify the SinkTask when they have persisted to ClickHouse, and only then can the offsets be acknowledged to Kafka. There are examples of this in other Kafka sink connectors.
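For illustration, here is a minimal sketch of that pattern, not the connector's actual code: a SinkTask that tracks the highest offset persisted to ClickHouse per partition and overrides preCommit so Kafka Connect only commits offsets for rows that were really written. insertAsync is a hypothetical stand-in for whatever asynchronously performs the ClickHouse insert on a worker thread.

```java
import java.util.Collection;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class OffsetAwareSinkTaskSketch extends SinkTask {

    // Highest offset per partition that has actually been written to ClickHouse.
    private final Map<TopicPartition, OffsetAndMetadata> persistedOffsets =
            new ConcurrentHashMap<>();

    @Override
    public void put(Collection<SinkRecord> records) {
        // Hand records to worker threads; the callback runs only after the
        // insert has succeeded, which is how the workers "notify" the task.
        for (SinkRecord record : records) {
            insertAsync(record, () -> acknowledge(record));
        }
    }

    // Called by a worker thread once the record is persisted in ClickHouse.
    private void acknowledge(SinkRecord record) {
        TopicPartition tp = new TopicPartition(record.topic(), record.kafkaPartition());
        // The committed offset is the next offset to consume, hence +1.
        persistedOffsets.merge(tp,
                new OffsetAndMetadata(record.kafkaOffset() + 1),
                (old, next) -> next.offset() > old.offset() ? next : old);
    }

    @Override
    public Map<TopicPartition, OffsetAndMetadata> preCommit(
            Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
        // Return only offsets known to be persisted, instead of the default
        // (everything passed to put()). This prevents Kafka Connect from
        // committing past rows that never reached ClickHouse, which is the
        // failure mode described above (50k rows vs. 800k messages).
        return Map.copyOf(persistedOffsets);
    }

    // Hypothetical async insert path; a real task would submit to its pool.
    private void insertAsync(SinkRecord record, Runnable onPersisted) {
        throw new UnsupportedOperationException("illustration only");
    }

    @Override public String version() { return "sketch"; }
    @Override public void start(Map<String, String> props) { }
    @Override public void stop() { }
}
```

With this pattern, a restart resumes from the last offset that was actually persisted, so records that were in flight when the worker stopped get redelivered rather than silently skipped.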