- Make sure that you have a recent version of the [IBM Cloud CLI](https://cloud.ibm.com/docs/cli?topic=cli-getting-started)
- Make sure that you have a recent version of the [IBM Cloud Kubernetes service CLI](https://cloud.ibm.com/docs/containers?topic=containers-kubernetes-service-cli)
### Default Worker Pool Management
You can manage the default worker pool using Terraform, and make changes to it through this module. This option is enabled by default. Under the hood, the default worker pool is imported as an `ibm_container_vpc_worker_pool` resource. Advanced users may opt out by setting the `import_default_worker_pool_on_create` parameter to `false`. For most use cases it is recommended to leave this variable set to `true`.
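
For example, here is a minimal sketch of opting out; the module source, region, and the inputs other than `import_default_worker_pool_on_create` are illustrative assumptions, not taken from this README:

```hcl
# Illustrative only: opting out of importing the default worker pool.
# The source address and the other inputs are assumptions for this sketch.
module "ocp_all_inclusive" {
  source            = "terraform-ibm-modules/ocp-all-inclusive/ibm" # assumed registry path
  region            = "us-south"
  resource_group_id = var.resource_group_id

  # Advanced users only: manage the default worker pool as part of the cluster resource
  import_default_worker_pool_on_create = false
}
```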
#### Important Considerations for Terraform and Default Worker Pool
**Terraform Destroy**
When using the default behavior of handling the default worker pool as a stand-alone `ibm_container_vpc_worker_pool`, you must manually remove the default worker pool from the Terraform state before running a `terraform destroy` command on the module. This is due to a [known limitation](https://cloud.ibm.com/docs/containers?topic=containers-faqs#smallest_cluster) in IBM Cloud.
**Terraform CLI Example**
For a cluster with two worker pools named `default` and `secondarypool`, follow these steps:
```sh
# List the worker pool resources in the state to find the default pool's address
$ terraform state list | grep ibm_container_vpc_worker_pool
# Remove the default worker pool from the Terraform state
$ terraform state rm "module.ocp_all_inclusive.module.ocp_base.ibm_container_vpc_worker_pool.pool[\"default\"]"
```
**Schematics Example**

For a cluster with two worker pools named `default` and `secondarypool`, follow these steps:
```sh
# Remove the default worker pool from the Schematics workspace state
$ ibmcloud schematics workspace state rm --id <workspace_id> --address "module.ocp_all_inclusive.module.ocp_base.ibm_container_vpc_worker_pool.pool[\"default\"]"
```
**Changes Requiring Re-creation of Default Worker Pool**
If you need to make changes to the default worker pool that require its re-creation (for example, changing the worker node `operating_system`), you must set the `allow_default_worker_pool_replacement` variable to `true`, perform the apply, and then set it back to `false` in the code before the subsequent apply. This is **only** necessary for changes that require re-creating the entire default pool and is **not needed for scenarios that do not require recreating the worker pool, such as changing the number of workers in the default worker pool**.
This approach is due to a limitation in the Terraform provider that may be lifted in the future.
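
One way to drive the two-step workflow from the CLI, assuming the variable is passed on the command line rather than edited in a tfvars file (any other required inputs are omitted here):

```sh
# Step 1: allow the one-time replacement of the default worker pool
$ terraform apply -var="allow_default_worker_pool_replacement=true"

# Step 2: once the pool has been re-created, revert the variable before the next apply
$ terraform apply -var="allow_default_worker_pool_replacement=false"
```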
<!-- Below content is automatically populated via pre-commit hook -->
<!-- BEGIN OVERVIEW HOOK -->
## Overview
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| <a name="input_additional_lb_security_group_ids"></a> [additional\_lb\_security\_group\_ids](#input\_additional\_lb\_security\_group\_ids) | Additional security group IDs to add to the load balancers associated with the cluster. These security groups are in addition to the IBM-maintained security group. | `list(string)` | `[]` | no |
| <a name="input_additional_vpe_security_group_ids"></a> [additional\_vpe\_security\_group\_ids](#input\_additional\_vpe\_security\_group\_ids) | Additional security groups to add to all the virtual private endpoints (VPEs). These come in addition to the IBM-maintained security group. | <pre>object({<br> master = optional(list(string), [])<br> registry = optional(list(string), [])<br> api = optional(list(string), [])<br> })</pre> | `{}` | no |
| <a name="input_addons"></a> [addons](#input\_addons) | List of all addons supported by the OCP cluster. | <pre>object({<br> debug-tool = optional(string)<br> image-key-synchronizer = optional(string)<br> openshift-data-foundation = optional(string)<br> vpc-file-csi-driver = optional(string)<br> static-route = optional(string)<br> cluster-autoscaler = optional(string)<br> vpc-block-csi-driver = optional(string)<br> })</pre> | `null` | no |
| <a name="input_allow_default_worker_pool_replacement"></a> [allow\_default\_worker\_pool\_replacement](#input\_allow\_default\_worker\_pool\_replacement) | (Advanced users) Set to true to allow the module to recreate a default worker pool. Only use in the case where you are getting an error indicating that the default worker pool cannot be replaced on apply. Once the default worker pool is handled as a stand-alone ibm\_container\_vpc\_worker\_pool, if you wish to make any change to the default worker pool which requires the re-creation of the default pool, set this variable to true. | `bool` | `false` | no |
| <a name="input_attach_ibm_managed_security_group"></a> [attach\_ibm\_managed\_security\_group](#input\_attach\_ibm\_managed\_security\_group) | Whether to attach the IBM-defined default security group (named `kube-<clusterid>`) to all worker nodes. Applies only if `custom_security_group_ids` is set. | `bool` | `true` | no |
| <a name="input_cloud_monitoring_access_key"></a> [cloud\_monitoring\_access\_key](#input\_cloud\_monitoring\_access\_key) | Access key for the Cloud Monitoring agent to communicate with the instance. | `string` | `null` | no |
| <a name="input_cloud_monitoring_add_cluster_name"></a> [cloud\_monitoring\_add\_cluster\_name](#input\_cloud\_monitoring\_add\_cluster\_name) | If true, configure the cloud monitoring agent to attach a tag containing the cluster name to all metric data. | `bool` | `true` | no |
| <a name="input_existing_kms_root_key_id"></a> [existing\_kms\_root\_key\_id](#input\_existing\_kms\_root\_key\_id) | The Key ID of a root key, existing in the KMS instance passed in var.existing\_kms\_instance\_guid, which will be used to encrypt the data encryption keys (DEKs) which are then used to encrypt the secrets in the cluster. Required if a value is passed for var.existing\_kms\_instance\_guid. | `string` | `null` | no |
| <a name="input_force_delete_storage"></a> [force\_delete\_storage](#input\_force\_delete\_storage) | Delete attached storage when destroying the cluster. Default: false. | `bool` | `false` | no |
| <a name="input_ignore_worker_pool_size_changes"></a> [ignore\_worker\_pool\_size\_changes](#input\_ignore\_worker\_pool\_size\_changes) | Enable if using worker autoscaling. Stops Terraform from managing the worker count. | `bool` | `false` | no |
| <a name="input_import_default_worker_pool_on_create"></a> [import\_default\_worker\_pool\_on\_create](#input\_import\_default\_worker\_pool\_on\_create) | (Advanced users) Whether to handle the default worker pool as a stand-alone ibm\_container\_vpc\_worker\_pool resource on cluster creation. Only set to false if you understand the implications of managing the default worker pool as part of the cluster resource. Set to true to import the default worker pool as a separate resource. Set to false to manage the default worker pool as part of the cluster resource. | `bool` | `true` | no |
| <a name="input_kms_account_id"></a> [kms\_account\_id](#input\_kms\_account\_id) | ID of the account that owns the KMS instance to encrypt the cluster. It is only required if the KMS instance is in another account. | `string` | `null` | no |
| <a name="input_kms_use_private_endpoint"></a> [kms\_use\_private\_endpoint](#input\_kms\_use\_private\_endpoint) | Set to true to use the private endpoint when communicating between the cluster and the KMS instance. | `bool` | `true` | no |
| <a name="input_kms_wait_for_apply"></a> [kms\_wait\_for\_apply](#input\_kms\_wait\_for\_apply) | Set to true to make Terraform wait until KMS is applied to the master and it is ready and deployed. Default value is true. | `bool` | `true` | no |
The related variable definitions in `variables.tf` (the opening lines of the first block and the default of the second are filled in from the inputs table above):

```hcl
// workaround for the issue https://github.ibm.com/GoldenEye/issues/issues/10743
// when the issue is fixed on IKS, so that the destruction of the default worker pool is correctly managed by the provider/clusters service, the next two entries should be removed
variable "import_default_worker_pool_on_create" {
  type        = bool
  description = "(Advanced users) Whether to handle the default worker pool as a stand-alone ibm_container_vpc_worker_pool resource on cluster creation. Only set to false if you understand the implications of managing the default worker pool as part of the cluster resource. Set to true to import the default worker pool as a separate resource. Set to false to manage the default worker pool as part of the cluster resource."
  default     = true
  nullable    = false
}

variable "allow_default_worker_pool_replacement" {
  type        = bool
  description = "(Advanced users) Set to true to allow the module to recreate a default worker pool. Only use in the case where you are getting an error indicating that the default worker pool cannot be replaced on apply. Once the default worker pool is handled as a stand-alone ibm_container_vpc_worker_pool, if you wish to make any change to the default worker pool which requires the re-creation of the default pool, set this variable to true."
  default     = false
}
```