When we get Kubernetes API errors, the sync silently stops working, e.g.:

calling kubernetes: (410) Reason: Expired: The resourceVersion for the provided watch is too old.

Afterwards, at debug level you only see the message for secrets:

Performing watch-based sync on secret resources: {'label_selector': 'grafana_dashboard_v10=1', 'timeout_seconds': '300', '_request_timeout': '330'}

while the corresponding message for configmaps no longer appears:

Performing watch-based sync on configmap resources: {'label_selector': 'grafana_dashboard_v10=1', 'timeout_seconds': '300', '_request_timeout': '330'}

The same goes for all other debug messages related to configmaps. We only have matching configmaps in this cluster.

It looks like the watch loop for configmaps is dead, while the process itself is still running.
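For context, this is roughly how a watch loop built on the official Kubernetes Python client can die on a 410: the stream raises once the stored resourceVersion has expired, and unless the loop resets it and re-lists, no further events are ever received. A minimal sketch (not the sidecar's actual code; handle() is a hypothetical callback) of a loop that recovers instead of failing silently:

```python
from kubernetes import client, config, watch
from kubernetes.client.rest import ApiException


def handle(event):
    # Placeholder for the sidecar's actual processing of
    # ADDED/MODIFIED/DELETED events.
    print(event["type"], event["object"].metadata.name)


def watch_configmaps(label_selector: str):
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    resource_version = None
    while True:
        try:
            w = watch.Watch()
            # When timeout_seconds elapses normally, the stream simply
            # ends and the outer loop restarts from the last seen
            # resourceVersion.
            for event in w.stream(
                v1.list_config_map_for_all_namespaces,
                label_selector=label_selector,
                resource_version=resource_version,
                timeout_seconds=300,
                _request_timeout=330,
            ):
                resource_version = event["object"].metadata.resource_version
                handle(event)
        except ApiException as e:
            if e.status == 410:
                # resourceVersion expired: drop it so the next stream()
                # starts from a fresh LIST instead of dying silently.
                resource_version = None
                continue
            raise
```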
Would it make sense to introduce a liveness check (something like a dead man's switch), so that on such problems the whole container gets restarted?
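One way such a dead man's switch could look: every watcher iteration touches a heartbeat file, and an exec livenessProbe checks its freshness, so Kubernetes restarts the container when a watcher dies silently. A minimal sketch, assuming a hypothetical heartbeat path of /tmp/sidecar-heartbeat and the 300-second watch timeout from the logs above:

```python
import os
import sys
import time

HEARTBEAT_FILE = "/tmp/sidecar-heartbeat"  # hypothetical path
MAX_AGE_SECONDS = 600  # comfortably above the 300s watch timeout


def touch_heartbeat():
    """Call this from each watcher loop iteration."""
    with open(HEARTBEAT_FILE, "a"):
        os.utime(HEARTBEAT_FILE, None)


def check_heartbeat() -> int:
    """Exit 0 if the heartbeat is fresh, 1 otherwise."""
    try:
        age = time.time() - os.path.getmtime(HEARTBEAT_FILE)
    except OSError:
        # File missing: the watcher never ran or the volume was reset.
        return 1
    return 0 if age < MAX_AGE_SECONDS else 1


if __name__ == "__main__":
    sys.exit(check_heartbeat())
```

The container spec could then run this script as an exec livenessProbe (with a periodSeconds/failureThreshold budget larger than the watch timeout), so a stuck watcher eventually fails the probe and the kubelet restarts the container.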
container yaml: