This repository was archived by the owner on Oct 25, 2025. It is now read-only.

Description
When I run two instances of inadyn with the same config but in different networks (and therefore different public IP addresses), each inadyn instance only updates the DNS record once (unless its own public IP changes). This is due to inadyn's caching (--cache-dir=/var/cache/inadyn). After deleting the cache on one of the instances, it successfully updated the DNS record again.
Let me give you a timeline of how to trigger this scenario:
- inadyn instance 1 starts and sets the record
- inadyn instance 2 starts (different public IP but same config) and sets the record
- inadyn instance 1 thinks the record is correctly set according to its config and its public IP - but it isn't
- inadyn instance 2 thinks the record is set correctly - which is right from its perspective
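The timeline above can be sketched with a toy model of the cache check. The names below (`Instance`, `dns_record`) are illustrative, not inadyn internals; the assumption is only that each instance skips the provider update when its detected public IP matches its cached last-sent address:

```python
# Toy model of two inadyn instances sharing one DNS record.
# Hypothetical names; this only mirrors the cache-skip behavior
# described in the report, not inadyn's actual implementation.

dns_record = None  # the single record at the DDNS provider

class Instance:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.cached_ip = None  # persisted under --cache-dir in real inadyn

    def run(self):
        global dns_record
        # Only update when the detected IP differs from the cached
        # last-sent IP; otherwise assume the record is already correct.
        if self.public_ip != self.cached_ip:
            dns_record = self.public_ip
            self.cached_ip = self.public_ip

a = Instance("198.51.100.1")  # network A
b = Instance("203.0.113.2")   # network B

a.run()  # record -> 198.51.100.1
b.run()  # record -> 203.0.113.2
a.run()  # cache hit: a skips the update, so the record stays stale for a

assert dns_record == b.public_ip   # correct from b's perspective
assert dns_record != a.public_ip   # wrong from a's perspective
```

Neither instance ever sees an error: each one's cache agrees with its own public IP, so the record silently stays wrong for one of them.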
I hit this scenario when I replicated a Kubernetes cluster in a test environment without disabling inadyn or changing its config. The test cluster wasn't running for long, and I only noticed the issue a couple of days later. It was surprising to me that the inadyn CronJob wouldn't update the record anymore - until I removed the cache file.
The same bug occurs if the record is changed by means other than an inadyn instance, e.g.:
- inadyn starts and sets the record
- A human being sets the record manually
- inadyn thinks the record is correctly set according to its config and its public IP - but it isn't
My general idea for resolving this would be to invalidate the cache every now and then, so the record is re-checked even if it was changed by someone or something other than inadyn.
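One way to sketch this proposal: attach a timestamp to the cached address and treat it as valid only for a limited time, after which an update is sent even if the IP looks unchanged. The names and the TTL value below are illustrative assumptions, not existing inadyn behavior:

```python
import time

# Sketch of the proposed fix: a cached IP expires after CACHE_TTL
# seconds, forcing a periodic re-update of the record. All names
# here are hypothetical, not inadyn internals.

CACHE_TTL = 24 * 3600  # example: re-send the update at least daily

class CachedIP:
    def __init__(self):
        self.ip = None
        self.written_at = 0.0

    def store(self, ip, now=None):
        self.ip = ip
        self.written_at = time.time() if now is None else now

    def is_fresh(self, ip, now=None):
        """True if the update can be skipped: same IP and within TTL."""
        now = time.time() if now is None else now
        return ip == self.ip and (now - self.written_at) < CACHE_TTL

cache = CachedIP()
cache.store("198.51.100.1", now=0)

assert cache.is_fresh("198.51.100.1", now=3600)              # within TTL: skip
assert not cache.is_fresh("198.51.100.1", now=CACHE_TTL + 1)  # expired: update
assert not cache.is_fresh("203.0.113.2", now=3600)            # IP changed: update
```

With something like this, an out-of-band change to the record would be corrected within one TTL period at the latest, at the cost of one extra provider update per period.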