Bug description
The source connector's batch acker commits offsets to Kafka only once every n records, where n is currently a hard-coded constant of 1000.
If the pipeline fails between commits, between 1 and n duplicate records can be introduced at the destination.
When the consumer restarts, it assumes the consumer group's committed offsets reflect the latest positions acknowledged by Conduit; because of the batching above, the committed offsets can lag behind the acks, so the records acked since the last commit are read and delivered again.
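
A minimal sketch of the failure pattern, using franz-go with manual commits. This is not the connector's actual code; the broker address, group ID, topic name, and the `batchSize` constant are illustrative assumptions, and it is simplified to a single partition:

```go
package main

import (
	"context"
	"log"

	"github.com/twmb/franz-go/pkg/kgo"
)

// batchSize stands in for the connector's hard-coded commit interval (1000).
const batchSize = 1000

func main() {
	client, err := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"),  // illustrative broker
		kgo.ConsumerGroup("conduit-group"), // illustrative group ID
		kgo.ConsumeTopics("example-topic"), // illustrative topic
		kgo.DisableAutoCommit(),            // offsets are committed manually
	)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	acked := 0
	for {
		fetches := client.PollFetches(context.Background())
		if errs := fetches.Errors(); len(errs) > 0 {
			log.Fatalf("fetch errors: %v", errs)
		}
		fetches.EachRecord(func(rec *kgo.Record) {
			// ... hand the record to the pipeline and receive an ack ...
			acked++
			// Offsets are committed only once a full batch has been acked.
			// If the process dies before acked reaches a multiple of
			// batchSize, every record acked since the last commit is
			// re-delivered on restart.
			if acked%batchSize == 0 {
				if err := client.CommitRecords(context.Background(), rec); err != nil {
					log.Printf("commit failed: %v", err)
				}
			}
		})
	}
}
```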
Steps to reproduce
- Create a Kafka pipeline with Conduit
- Produce 10 records (see the producer sketch after this list)
- Kill the pipeline process (use `kill`, not a graceful teardown, since teardown forces an offset flush)
- Restart the pipeline; the same 10 records are produced to the destination again
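
For the "produce 10 records" step, a minimal producer sketch with franz-go; the broker address and topic name are placeholders and must match the pipeline's source configuration:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	client, err := kgo.NewClient(kgo.SeedBrokers("localhost:9092"))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Produce 10 records to the pipeline's source topic.
	for i := 0; i < 10; i++ {
		rec := &kgo.Record{
			Topic: "example-topic", // placeholder topic name
			Value: []byte(fmt.Sprintf("record-%d", i)),
		}
		if err := client.ProduceSync(context.Background(), rec).FirstErr(); err != nil {
			log.Fatalf("produce failed: %v", err)
		}
	}
}
```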
Version
latest