CSI controller Health Check doesn't properly validate controller health. #1551
Comments
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle stale
Bummer! Any reason we can't keep this open? I think my concern is still valid here.
/lifecycle rotten
/remove-lifecycle rotten

@stobias123 yes, we should be keeping this open. I'll bring this issue back up at today's standup. Thank you for your patience.
/priority important-longterm
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/kind bug
What happened?
Volumes could not attach. I had a combination of these two errors. After restarting the CSI driver deployment, the volumes attached properly.
What you expected to happen?
I'd expect this timeout failure to surface in the health check and cause the pod to be restarted. If the health check had failed and restarted the pod, things would have auto-healed without user input.
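For context, the controller pod's liveness check goes through the CSI Identity service's Probe RPC (the livenessprobe sidecar translates the pod's HTTP health check into that gRPC call). Below is a minimal sketch, assuming a typical CSI identity server rather than quoting the actual aws-ebs-csi-driver source (the package and type names are illustrative), of why such a check can keep passing while attach calls hang:

```go
// Minimal sketch of a CSI Identity server's Probe RPC (illustrative,
// not the actual aws-ebs-csi-driver code). The livenessprobe sidecar
// calls this RPC to answer the pod's liveness probe.
package driver

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/protobuf/types/known/wrapperspb"
)

type identityServer struct {
	csi.UnimplementedIdentityServer
}

// Probe answers purely from local process state: nothing here inspects
// in-flight controller RPCs, so a controller whose ControllerPublishVolume
// calls are all timing out still reports Ready and is never restarted.
func (s *identityServer) Probe(ctx context.Context, req *csi.ProbeRequest) (*csi.ProbeResponse, error) {
	return &csi.ProbeResponse{Ready: wrapperspb.Bool(true)}, nil
}
```

A check that matched this expectation would have to feed attach-path health (for example, a deadline on in-flight ControllerPublishVolume calls) into the Probe response rather than returning a constant.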
How to reproduce it (as minimally and precisely as possible)?
Anything else we need to know?:
I had a support case open for this: 12358592881.
Environment
- Kubernetes version (use kubectl version): v1.23.16-eks-48e63af
- Driver version: v1.17.0-eksbuild.1