Report
Note: this was written up with the help of AI, but based on experience with several crashes that the operator couldn't recover from. I either had to restore from backup, or disable the operator and "hand recover" (and afterwards I'd reset from scratch to get an "operator operated" cluster again).
Bug Description
The Percona MySQL Operator's readiness probe treats RECOVERING nodes as unhealthy, causing Kubernetes to restart pods that are actively recovering data through MySQL Group Replication. This creates an infinite loop where nodes can never complete recovery because they are killed mid-process.
More about the problem
The Problem
When a MySQL node is in RECOVERING state (legitimately catching up with group replication), the readiness probe fails and Kubernetes restarts the pod. This prevents the node from ever completing recovery, creating a deadlock situation.
Current Readiness Probe Behavior
kubectl exec ntpdb-mysql-1 -n ntpdb -- /opt/percona/healthcheck readiness
# Output: 2025/09/27 20:46:18 readiness check failed: Member state: RECOVERING
# Exit code: 1
Group Replication Status
SELECT MEMBER_HOST, MEMBER_STATE, MEMBER_ROLE FROM performance_schema.replication_group_members;
+---------------------------------+--------------+-------------+
| MEMBER_HOST                     | MEMBER_STATE | MEMBER_ROLE |
+---------------------------------+--------------+-------------+
| ntpdb-mysql-0.ntpdb-mysql.ntpdb | ONLINE       | PRIMARY     |
| ntpdb-mysql-1.ntpdb-mysql.ntpdb | RECOVERING   | SECONDARY   |
+---------------------------------+--------------+-------------+
Steps to reproduce
The Deadlock Cycle
- Node starts recovery: MySQL node joins group replication in RECOVERING state
- Readiness probe fails: /opt/percona/healthcheck readiness returns exit code 1 for RECOVERING nodes
- Kubernetes restarts pod: after the readiness probe failure threshold, the pod is marked unready and restarted
- Recovery interruption: node loses recovery progress and must start over
- StatefulSet blocking: next pod (mysql-2) cannot start until mysql-1 is Ready
- Operator deadlock: the operator cannot complete initialization without all pods healthy
- Infinite loop: process repeats indefinitely (the commands after this list show how to observe it)
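The restart loop is visible in the pod's events; commands along these lines surface it (pod and namespace names are the ones from this report, adjust for your cluster):
# Readiness probe failures and restart-related events for the recovering pod:
kubectl describe pod ntpdb-mysql-1 -n ntpdb | grep -E 'Readiness probe failed|Killing|Back-off'
# The same events in chronological order:
kubectl get events -n ntpdb --field-selector involvedObject.name=ntpdb-mysql-1 --sort-by=.lastTimestamp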
Versions
- Operator Version: v0.12.0
- MySQL Version: percona/percona-server:8.0.43-34
- Kubernetes Version: v1.32.6
- Cluster Configuration: 3-node MySQL Group Replication with StatefulSet
Anything else?
StatefulSet Cascade Failure
- Pod ordering: StatefulSets require pod N to be Ready before creating pod N+1
- Blocked scaling: mysql-2 never gets created because mysql-1 never becomes Ready
- Cluster incomplete: Operator cannot complete initialization with missing pods
Operational Impact
- Extended outages: Clusters stuck in initialization for hours/days
- Data consistency issues: Repeated recovery interruptions
- Resource waste: Continuous pod restarts consume CPU/memory
- Manual intervention required: No automatic recovery possible
Expected Behavior
Readiness Probe Should Accept RECOVERING
The readiness probe should distinguish between:
- Healthy States (Ready=True):
  - ONLINE - Node is fully operational
  - RECOVERING - Node is actively syncing data (healthy progress)
- Unhealthy States (Ready=False):
  - ERROR - Node has encountered an error
  - OFFLINE - Node is not participating in the group
  - UNREACHABLE - Network connectivity issues
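For context, the state in question is the local member's MEMBER_STATE in performance_schema; a query along these lines returns it (shown here via the mysql client purely for illustration, not as a claim about how the healthcheck binary reads it):
# Read the local member's group replication state (credentials/socket options omitted):
mysql -N -e "SELECT MEMBER_STATE FROM performance_schema.replication_group_members WHERE MEMBER_ID = @@GLOBAL.server_uuid;"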
Rationale
- RECOVERING is expected: During cluster recovery, nodes legitimately spend time catching up
- Progress is healthy: RECOVERING means the node is actively syncing data
- Time is required: Large datasets may take hours to sync
- Interruption is harmful: Restarting during recovery loses progress
Proposed Solution
1. Modify Readiness Check Logic
# Current logic (problematic):
if [ "$MEMBER_STATE" != "ONLINE" ]; then
    echo "readiness check failed: Member state: $MEMBER_STATE"
    exit 1
fi
# Proposed logic:
case "$MEMBER_STATE" in
    "ONLINE"|"RECOVERING")
        exit 0  # Ready
        ;;
    "ERROR"|"OFFLINE"|"UNREACHABLE"|"")
        echo "readiness check failed: Member state: $MEMBER_STATE"
        exit 1  # Not ready
        ;;
    *)
        # Catch-all so an unexpected state never falls through as "ready"
        echo "readiness check failed: unexpected member state: $MEMBER_STATE"
        exit 1  # Not ready
        ;;
esac
2. Add Recovery Progress Monitoring
For RECOVERING nodes, optionally check whether progress is being made (see the sketch after this list):
- Monitor GTID_EXECUTED advancement
- Allow a reasonable timeout for large recoveries
- Only fail if recovery is truly stuck (no progress for an extended period)
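A minimal sketch of that idea, assuming the probe can persist a small state file between runs. The /tmp/gtid-progress path, the credential handling, and the single-interval comparison are all illustrative assumptions; a real check would require no progress over a longer window, not just one probe interval:
#!/bin/bash
# Illustrative only: fail the probe when a RECOVERING node's GTID_EXECUTED stops advancing.
STATE_FILE=/tmp/gtid-progress   # hypothetical location for the previous snapshot
MEMBER_STATE=$(mysql -N -e "SELECT MEMBER_STATE FROM performance_schema.replication_group_members WHERE MEMBER_ID = @@GLOBAL.server_uuid;")
CURRENT_GTID=$(mysql -N -e "SELECT @@GLOBAL.GTID_EXECUTED;")
PREVIOUS_GTID=$(cat "$STATE_FILE" 2>/dev/null || true)
echo "$CURRENT_GTID" > "$STATE_FILE"
case "$MEMBER_STATE" in
    "ONLINE")
        exit 0 ;;
    "RECOVERING")
        # Treat recovery as healthy as long as the applied GTID set keeps growing.
        if [ -n "$PREVIOUS_GTID" ] && [ "$CURRENT_GTID" = "$PREVIOUS_GTID" ]; then
            echo "readiness check failed: RECOVERING but GTID_EXECUTED is not advancing"
            exit 1
        fi
        exit 0 ;;
    *)
        echo "readiness check failed: Member state: $MEMBER_STATE"
        exit 1 ;;
esac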
3. Configuration Options
Add operator configuration to control readiness behavior:
spec:
  mysql:
    readinessProbe:
      allowRecovering: true    # Default: true
      recoveryTimeout: 3600    # Seconds to allow recovery before failing
Reproduction Steps
- Create a 3-node cluster with some data
- Simulate node failure (delete 2 pods, leaving the 1 with the most recent data; see the command after this list)
- Start recovery: let the operator attempt to recover the cluster
- Observe the deadlock:
  - Node 1 reaches RECOVERING state
  - Readiness probe fails repeatedly
  - Pod restarts interrupt recovery
  - Node 2 never starts due to StatefulSet ordering
  - Cluster never completes initialization
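For the node-failure step, something along these lines does the trick (pod names assume this report's cluster; pick the two pods that do not hold the most recent data):
# Delete two of the three MySQL pods so the operator has to run group replication recovery:
kubectl delete pod ntpdb-mysql-1 ntpdb-mysql-2 -n ntpdb
# Ungraceful variant, closer to a real crash:
kubectl delete pod ntpdb-mysql-1 ntpdb-mysql-2 -n ntpdb --grace-period=0 --force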
Current Workaround
None available - The issue is fundamental to the readiness probe logic and cannot be worked around without code changes.
Evidence
Pod Restart Pattern
kubectl get pods -n ntpdb | grep mysql-1
# ntpdb-mysql-1 1/2 Running 4 (2m ago) 8m
# Shows continuous restarts due to readiness failures
StatefulSet Status
kubectl get statefulset ntpdb-mysql -n ntpdb
# Shows replicas: 2/3 because mysql-2 cannot start
Operator Status
kubectl get perconaservermysql ntpdb -n ntpdb -o jsonpath='{.status.state}'
# Shows: "initializing" (stuck indefinitely)Related Issues
This issue compounds with the broader operator reconciliation problems:
- Status update conflicts prevent progress
- Crash recovery logic is overly aggressive
- Initialization state never completes
As best I can tell, this issue:
- Prevents cluster recovery in common failure scenarios
- Requires manual operator intervention/restart
- Can cause extended production outages
- Has no viable workaround