```yaml
    scaleDownThresholdDurationSeconds: 1800
    scaleDownCooldownSeconds: 3600
    diskUsagePercentScaledownWatermark: 80
  experimental:
    draining:
      maxRetries: 999
      maximumWaitTimeDurationSeconds: 30
      minimumWaitTimeDurationSeconds: 10
```
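
The excerpt above shows only the tail of the example manifest. For orientation, here is a sketch of a complete manifest carrying the experimental draining block, assuming the `zalando.org/v1` `ElasticsearchDataSet` CRD from this repository; the metadata name and the scaling values are illustrative assumptions, not recommended defaults:

```yaml
apiVersion: zalando.org/v1
kind: ElasticsearchDataSet
metadata:
  name: es-data-example        # illustrative name
spec:
  replicas: 2                  # initial StatefulSet size
  scaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 6
    minIndexReplicas: 1
    maxIndexReplicas: 2
    minShardsPerNode: 2
    maxShardsPerNode: 6
    scaleUpCPUBoundary: 50
    scaleUpThresholdDurationSeconds: 900
    scaleUpCooldownSeconds: 3600
    scaleDownCPUBoundary: 25
    scaleDownThresholdDurationSeconds: 1800
    scaleDownCooldownSeconds: 3600
    diskUsagePercentScaledownWatermark: 80
  experimental:
    draining:
      maxRetries: 999
      maximumWaitTimeDurationSeconds: 30
      minimumWaitTimeDurationSeconds: 10
```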
### Custom resource properties

| Key | Description | Type |
|----------|---------------|---------|
| spec.replicas | Initial size of the StatefulSet. If auto-scaling is disabled, this is your desired cluster size. | Int |
| spec.excludeSystemIndices | Enable or disable inclusion of system indices like '.kibana' when calculating the shard-per-node ratio and scaling index replica counts. These are usually managed internally by Elasticsearch. Default is false for backwards compatibility. | Boolean |
| spec.skipDraining | Allows the ES Operator to terminate an Elasticsearch node without re-allocating its data. This is useful for persistent disk setups, like EBS volumes. Beware that the ES Operator does not verify that you have more than one copy of your indices and therefore will not protect you from potential data loss. (default=false) | Boolean |
| spec.scaling.enabled | Enable or disable auto-scaling. May be necessary to enforce manual scaling. | Boolean |
| spec.scaling.minReplicas | Minimum Pod replicas. Lower bound (inclusive) when scaling down. | Int |
| spec.scaling.maxReplicas | Maximum Pod replicas. Upper bound (inclusive) when scaling up. | Int |
| spec.scaling.minIndexReplicas | Minimum index replicas. Lower bound (inclusive) when reducing index copies. (Reminder: the total number of copies is replicas+1 in Elasticsearch.) | Int |
| spec.scaling.maxIndexReplicas | Maximum index replicas. Upper bound (inclusive) when increasing index copies. | Int |
| spec.scaling.minShardsPerNode | Minimum shard-per-node ratio. When reached, scaling up also requires adding more index replicas. | Int |
| spec.scaling.maxShardsPerNode | Maximum shard-per-node ratio. Boundary for scaling down. | Int |
| spec.scaling.scaleUpCPUBoundary | (Median) CPU consumption/request ratio to consistently exceed in order to trigger scale-up. | Int |
| spec.scaling.scaleUpThresholdDurationSeconds | Duration in seconds required to meet the scale-up criteria before scaling. | Int |
| spec.scaling.scaleUpCooldownSeconds | Minimum duration in seconds between two scale-up operations. | Int |
| spec.scaling.scaleDownCPUBoundary | (Median) CPU consumption/request ratio to consistently fall below in order to trigger scale-down. | Int |
| spec.scaling.scaleDownThresholdDurationSeconds | Duration in seconds required to meet the scale-down criteria before scaling. | Int |
| spec.scaling.scaleDownCooldownSeconds | Minimum duration in seconds between two scale-down operations. | Int |
| spec.scaling.diskUsagePercentScaledownWatermark | If disk usage on one of the nodes exceeds this threshold, scaling down will be prevented. | Float |
| spec.experimental.draining.maxRetries | Maximum number of attempts to drain a node. | Int |
| spec.experimental.draining.maximumWaitTimeDurationSeconds | Maximum wait time in seconds between retry attempts after a failed node drain. | Int |
| spec.experimental.draining.minimumWaitTimeDurationSeconds | Minimum wait time in seconds between retry attempts after a failed node drain. | Int |
| status.lastScaleUpStarted | Timestamp of the start of the last scale-up activity. | Timestamp |
| status.lastScaleUpEnded | Timestamp of the end of the last scale-up activity. | Timestamp |
| status.lastScaleDownStarted | Timestamp of the start of the last scale-down activity. | Timestamp |
| status.lastScaleDownEnded | Timestamp of the end of the last scale-down activity. | Timestamp |
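
The four `status` fields are written by the operator and can be read back from the resource to see when scaling activity last ran. A minimal sketch of what the status block might look like, assuming standard RFC 3339 timestamps (all values below are made-up examples):

```yaml
status:
  # Written by the operator, not by the user
  lastScaleUpStarted: "2024-05-01T10:00:00Z"
  lastScaleUpEnded: "2024-05-01T10:04:30Z"
  lastScaleDownStarted: "2024-05-01T14:00:00Z"
  lastScaleDownEnded: "2024-05-01T14:12:00Z"
```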