This repository was archived by the owner on May 21, 2025. It is now read-only.

Commit df56753

feat: added new variables import_default_worker_pool_on_create and allow_default_worker_pool_replacement. For more information on these, see https://github.com/terraform-ibm-modules/terraform-ibm-ocp-all-inclusive/tree/main?tab=readme-ov-file#default-worker-pool-management (#325)
1 parent: 6577eec

File tree: 5 files changed (+96, -38 lines)


README.md

Lines changed: 41 additions & 2 deletions
@@ -20,6 +20,43 @@ This module is a wrapper module that groups the following modules:
- Make sure that you have a recent version of the [IBM Cloud CLI](https://cloud.ibm.com/docs/cli?topic=cli-getting-started)
- Make sure that you have a recent version of the [IBM Cloud Kubernetes service CLI](https://cloud.ibm.com/docs/containers?topic=containers-kubernetes-service-cli)

### Default Worker Pool management

You can manage the default worker pool with Terraform and make changes to it through this module. This option is enabled by default. Under the hood, the default worker pool is imported as an `ibm_container_vpc_worker_pool` resource. Advanced users can opt out of this behavior by setting the `import_default_worker_pool_on_create` parameter to `false`. For most use cases it is recommended to keep this variable set to `true`.
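For illustration only, here is a minimal sketch of a calling configuration that opts out of this behavior. The module source string and the elided inputs are assumptions made for the example; only the `import_default_worker_pool_on_create` variable and its default come from this module.

```hcl
module "ocp_all_inclusive" {
  # Assumed registry source for this wrapper module.
  source = "terraform-ibm-modules/ocp-all-inclusive/ibm"

  # ... other required inputs (for example cluster_name, region,
  # resource_group_id, vpc_id, vpc_subnets, worker_pools) ...

  # Advanced use only: keep the default worker pool managed as part of the
  # cluster resource instead of importing it as a stand-alone
  # ibm_container_vpc_worker_pool resource (the default is true).
  import_default_worker_pool_on_create = false
}
```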
#### Important Considerations for Terraform and Default Worker Pool

**Terraform Destroy**

When using the default behavior of handling the default worker pool as a stand-alone `ibm_container_vpc_worker_pool`, you must manually remove the default worker pool from the Terraform state before running a `terraform destroy` command on the module. This is due to a [known limitation](https://cloud.ibm.com/docs/containers?topic=containers-faqs#smallest_cluster) in IBM Cloud.

Terraform CLI Example

For a cluster with 2 worker pools, named 'default' and 'secondarypool', follow these steps:
```sh
$ terraform state list | grep ibm_container_vpc_worker_pool
> module.ocp_all_inclusive.module.ocp_base.data.ibm_container_vpc_worker_pool.all_pools["default"]
> module.ocp_all_inclusive.module.ocp_base.data.ibm_container_vpc_worker_pool.all_pools["secondarypool"]
> module.ocp_all_inclusive.module.ocp_base.ibm_container_vpc_worker_pool.pool["default"]
> module.ocp_all_inclusive.module.ocp_base.ibm_container_vpc_worker_pool.pool["secondarypool"]
> ...

$ terraform state rm "module.ocp_all_inclusive.module.ocp_base.ibm_container_vpc_worker_pool.pool[\"default\"]"
```

Schematics Example: For a cluster with 2 worker pools, named 'default' and 'secondarypool', follow these steps:

```sh
$ ibmcloud schematics workspace state rm --id <workspace_id> --address "module.ocp_all_inclusive.module.ocp_base.ibm_container_vpc_worker_pool.pool[\"default\"]"
```

**Changes Requiring Re-creation of Default Worker Pool**

If you need to make changes to the default worker pool that require its re-creation (for example, changing the worker node `operating_system`), you must set the `allow_default_worker_pool_replacement` variable to `true`, perform the apply, and then set it back to `false` in the code before the subsequent apply. This is **only** necessary for changes that require re-creating the entire default pool and is **not needed for scenarios that do not require recreating the worker pool, such as changing the number of workers in the default worker pool**.

This approach is due to a limitation in the Terraform provider that may be lifted in the future.
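For illustration only, a sketch of that temporary toggle in a calling configuration; the module source string and the elided inputs are assumptions, and only the `allow_default_worker_pool_replacement` variable comes from this module.

```hcl
module "ocp_all_inclusive" {
  # Assumed registry source for this wrapper module.
  source = "terraform-ibm-modules/ocp-all-inclusive/ibm"

  # ... other required inputs, plus the change that forces the default
  # worker pool to be re-created (for example, a new operating_system) ...

  # Allow the default worker pool to be destroyed and re-created for this
  # apply only (the default is false).
  allow_default_worker_pool_replacement = true
}
```

After that apply completes, set `allow_default_worker_pool_replacement` back to `false` before the next apply, as described above.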
<!-- Below content is automatically populated via pre-commit hook -->
<!-- BEGIN OVERVIEW HOOK -->
## Overview
@@ -127,8 +164,8 @@ You need the following permissions to run this module.

| Name | Source | Version |
|------|--------|---------|
- | <a name="module_observability_agents"></a> [observability\_agents](#module\_observability\_agents) | terraform-ibm-modules/observability-agents/ibm | 1.28.7 |
- | <a name="module_ocp_base"></a> [ocp\_base](#module\_ocp\_base) | terraform-ibm-modules/base-ocp-vpc/ibm | 3.29.3 |
+ | <a name="module_observability_agents"></a> [observability\_agents](#module\_observability\_agents) | terraform-ibm-modules/observability-agents/ibm | 1.29.0 |
+ | <a name="module_ocp_base"></a> [ocp\_base](#module\_ocp\_base) | terraform-ibm-modules/base-ocp-vpc/ibm | 3.30.1 |

### Resources

@@ -142,6 +179,7 @@ No resources.
| <a name="input_additional_lb_security_group_ids"></a> [additional\_lb\_security\_group\_ids](#input\_additional\_lb\_security\_group\_ids) | Additional security group IDs to add to the load balancers associated with the cluster. These security groups are in addition to the IBM-maintained security group. | `list(string)` | `[]` | no |
| <a name="input_additional_vpe_security_group_ids"></a> [additional\_vpe\_security\_group\_ids](#input\_additional\_vpe\_security\_group\_ids) | Additional security groups to add to all the load balancers. This comes in addition to the IBM maintained security group. | <pre>object({<br> master = optional(list(string), [])<br> registry = optional(list(string), [])<br> api = optional(list(string), [])<br> })</pre> | `{}` | no |
| <a name="input_addons"></a> [addons](#input\_addons) | List of all addons supported by the ocp cluster. | <pre>object({<br> debug-tool = optional(string)<br> image-key-synchronizer = optional(string)<br> openshift-data-foundation = optional(string)<br> vpc-file-csi-driver = optional(string)<br> static-route = optional(string)<br> cluster-autoscaler = optional(string)<br> vpc-block-csi-driver = optional(string)<br> })</pre> | `null` | no |
+ | <a name="input_allow_default_worker_pool_replacement"></a> [allow\_default\_worker\_pool\_replacement](#input\_allow\_default\_worker\_pool\_replacement) | (Advanced users) Set to true to allow the module to recreate a default worker pool. Only use in the case where you are getting an error indicating that the default worker pool cannot be replaced on apply. Once the default worker pool is handled as a stand-alone ibm\_container\_vpc\_worker\_pool, if you wish to make any change to the default worker pool which requires the re-creation of the default pool, set this variable to true. | `bool` | `false` | no |
| <a name="input_attach_ibm_managed_security_group"></a> [attach\_ibm\_managed\_security\_group](#input\_attach\_ibm\_managed\_security\_group) | Whether to attach the IBM-defined default security group (named `kube-<clusterid>`) to all worker nodes. Applies only if `custom_security_group_ids` is set. | `bool` | `true` | no |
| <a name="input_cloud_monitoring_access_key"></a> [cloud\_monitoring\_access\_key](#input\_cloud\_monitoring\_access\_key) | Access key for the Cloud Monitoring agent to communicate with the instance. | `string` | `null` | no |
| <a name="input_cloud_monitoring_add_cluster_name"></a> [cloud\_monitoring\_add\_cluster\_name](#input\_cloud\_monitoring\_add\_cluster\_name) | If true, configure the cloud monitoring agent to attach a tag containing the cluster name to all metric data. | `bool` | `true` | no |
@@ -168,6 +206,7 @@ No resources.
| <a name="input_existing_kms_root_key_id"></a> [existing\_kms\_root\_key\_id](#input\_existing\_kms\_root\_key\_id) | The Key ID of a root key, existing in the KMS instance passed in var.existing\_kms\_instance\_guid, which will be used to encrypt the data encryption keys (DEKs) which are then used to encrypt the secrets in the cluster. Required if value passed for var.existing\_kms\_instance\_guid. | `string` | `null` | no |
| <a name="input_force_delete_storage"></a> [force\_delete\_storage](#input\_force\_delete\_storage) | Delete attached storage when destroying the cluster - Default: false | `bool` | `false` | no |
| <a name="input_ignore_worker_pool_size_changes"></a> [ignore\_worker\_pool\_size\_changes](#input\_ignore\_worker\_pool\_size\_changes) | Enable if using worker autoscaling. Stops Terraform managing worker count | `bool` | `false` | no |
+ | <a name="input_import_default_worker_pool_on_create"></a> [import\_default\_worker\_pool\_on\_create](#input\_import\_default\_worker\_pool\_on\_create) | (Advanced users) Whether to handle the default worker pool as a stand-alone ibm\_container\_vpc\_worker\_pool resource on cluster creation. Only set to false if you understand the implications of managing the default worker pool as part of the cluster resource. Set to true to import the default worker pool as a separate resource. Set to false to manage the default worker pool as part of the cluster resource. | `bool` | `true` | no |
| <a name="input_kms_account_id"></a> [kms\_account\_id](#input\_kms\_account\_id) | Id of the account that owns the KMS instance to encrypt the cluster. It is only required if the KMS instance is in another account. | `string` | `null` | no |
| <a name="input_kms_use_private_endpoint"></a> [kms\_use\_private\_endpoint](#input\_kms\_use\_private\_endpoint) | Set as true to use the Private endpoint when communicating between cluster and KMS instance. | `bool` | `true` | no |
| <a name="input_kms_wait_for_apply"></a> [kms\_wait\_for\_apply](#input\_kms\_wait\_for\_apply) | Set true to make terraform wait until KMS is applied to master and it is ready and deployed. Default value is true. | `bool` | `true` | no |

examples/end-to-end-example/main.tf

Lines changed: 2 additions & 3 deletions
@@ -154,7 +154,7 @@ module "vpc" {

module "observability_instances" {
  source = "terraform-ibm-modules/observability-instances/ibm"
-   version = "2.14.1"
+   version = "2.18.0"
  providers = {
    logdna.at = logdna.at
    logdna.ld = logdna.ld
@@ -168,7 +168,6 @@ module "observability_instances" {
  cloud_monitoring_plan = "graduated-tier"
  enable_platform_logs = false
  enable_platform_metrics = false
-   cloud_logs_provision = false
  log_analysis_tags = var.resource_tags
  cloud_monitoring_tags = var.resource_tags
}
@@ -184,7 +183,7 @@ locals {

module "key_protect_all_inclusive" {
  source = "terraform-ibm-modules/kms-all-inclusive/ibm"
-   version = "4.15.9"
+   version = "4.15.13"
  resource_group_id = module.resource_group.resource_group_id
  region = var.region
  key_protect_instance_name = "${var.prefix}-kp"

main.tf

Lines changed: 34 additions & 32 deletions
@@ -22,37 +22,39 @@ locals {
}

module "ocp_base" {
-   source = "terraform-ibm-modules/base-ocp-vpc/ibm"
-   version = "3.29.3"
-   cluster_name = var.cluster_name
-   ocp_version = var.ocp_version
-   resource_group_id = var.resource_group_id
-   region = var.region
-   tags = var.cluster_tags
-   access_tags = var.access_tags
-   force_delete_storage = var.force_delete_storage
-   vpc_id = var.vpc_id
-   vpc_subnets = var.vpc_subnets
-   worker_pools = var.worker_pools
-   cluster_ready_when = var.cluster_ready_when
-   cos_name = var.cos_name
-   existing_cos_id = var.existing_cos_id
-   ocp_entitlement = var.ocp_entitlement
-   disable_public_endpoint = var.disable_public_endpoint
-   ignore_worker_pool_size_changes = var.ignore_worker_pool_size_changes
-   attach_ibm_managed_security_group = var.attach_ibm_managed_security_group
-   custom_security_group_ids = var.custom_security_group_ids
-   additional_lb_security_group_ids = var.additional_lb_security_group_ids
-   number_of_lbs = var.number_of_lbs
-   additional_vpe_security_group_ids = var.additional_vpe_security_group_ids
-   kms_config = local.kms_config
-   addons = var.addons
-   manage_all_addons = var.manage_all_addons
-   verify_worker_network_readiness = var.verify_worker_network_readiness
-   cluster_config_endpoint_type = var.cluster_config_endpoint_type
-   enable_registry_storage = var.enable_registry_storage
-   disable_outbound_traffic_protection = var.disable_outbound_traffic_protection
-   operating_system = var.operating_system
+   source = "terraform-ibm-modules/base-ocp-vpc/ibm"
+   version = "3.30.1"
+   cluster_name = var.cluster_name
+   ocp_version = var.ocp_version
+   resource_group_id = var.resource_group_id
+   region = var.region
+   tags = var.cluster_tags
+   access_tags = var.access_tags
+   force_delete_storage = var.force_delete_storage
+   vpc_id = var.vpc_id
+   vpc_subnets = var.vpc_subnets
+   worker_pools = var.worker_pools
+   cluster_ready_when = var.cluster_ready_when
+   cos_name = var.cos_name
+   existing_cos_id = var.existing_cos_id
+   ocp_entitlement = var.ocp_entitlement
+   disable_public_endpoint = var.disable_public_endpoint
+   ignore_worker_pool_size_changes = var.ignore_worker_pool_size_changes
+   attach_ibm_managed_security_group = var.attach_ibm_managed_security_group
+   custom_security_group_ids = var.custom_security_group_ids
+   additional_lb_security_group_ids = var.additional_lb_security_group_ids
+   number_of_lbs = var.number_of_lbs
+   additional_vpe_security_group_ids = var.additional_vpe_security_group_ids
+   kms_config = local.kms_config
+   addons = var.addons
+   manage_all_addons = var.manage_all_addons
+   verify_worker_network_readiness = var.verify_worker_network_readiness
+   cluster_config_endpoint_type = var.cluster_config_endpoint_type
+   enable_registry_storage = var.enable_registry_storage
+   disable_outbound_traffic_protection = var.disable_outbound_traffic_protection
+   operating_system = var.operating_system
+   import_default_worker_pool_on_create = var.import_default_worker_pool_on_create
+   allow_default_worker_pool_replacement = var.allow_default_worker_pool_replacement
}

##############################################################################
@@ -62,7 +64,7 @@ module "ocp_base" {
module "observability_agents" {
  count = var.log_analysis_enabled == true || var.cloud_monitoring_enabled == true ? 1 : 0
  source = "terraform-ibm-modules/observability-agents/ibm"
-   version = "1.28.7"
+   version = "1.29.0"
  cluster_id = module.ocp_base.cluster_id
  cluster_resource_group_id = var.resource_group_id
  cluster_config_endpoint_type = var.cluster_config_endpoint_type

tests/pr_test.go

Lines changed: 5 additions & 1 deletion
@@ -46,6 +46,10 @@ func setupOptions(t *testing.T, prefix string, terraformVars map[string]interfac
        ImplicitDestroy: []string{ // Ignore full destroy to speed up tests
            "module.ocp_all_inclusive.module.observability_agents",
            "module.ocp_all_inclusive.module.ocp_base.null_resource.confirm_network_healthy",
+             // Workaround for https://github.ibm.com/GoldenEye/issues/issues/10743.
+             // When the issue is fixed on IKS, so that the destruction of the default worker pool is correctly managed by the provider/clusters service, the next two entries should be removed.
+             "'module.ocp_all_inclusive.module.ocp_base.ibm_container_vpc_worker_pool.autoscaling_pool[\"default\"]'",
+             "'module.ocp_all_inclusive.module.ocp_base.ibm_container_vpc_worker_pool.pool[\"default\"]'",
        },
        ImplicitRequired: false,
        TerraformVars: terraformVars,
@@ -84,7 +88,7 @@ func TestRunCompleteExample(t *testing.T) {
    t.Parallel()

    // This test should always test the latest and the earliest supported OCP versions.
-     versions := []string{"4.12", "4.13", "4.15"}
+     versions := []string{"4.15"}
    for _, version := range versions {
        t.Run(version, func(t *testing.T) { testRunComplete(t, version) })
    }

variables.tf

Lines changed: 14 additions & 0 deletions
@@ -290,6 +290,20 @@ variable "operating_system" {
  }
}

+ variable "import_default_worker_pool_on_create" {
+   type        = bool
+   description = "(Advanced users) Whether to handle the default worker pool as a stand-alone ibm_container_vpc_worker_pool resource on cluster creation. Only set to false if you understand the implications of managing the default worker pool as part of the cluster resource. Set to true to import the default worker pool as a separate resource. Set to false to manage the default worker pool as part of the cluster resource."
+   default     = true
+   nullable    = false
+ }
+
+ variable "allow_default_worker_pool_replacement" {
+   type        = bool
+   description = "(Advanced users) Set to true to allow the module to recreate a default worker pool. Only use in the case where you are getting an error indicating that the default worker pool cannot be replaced on apply. Once the default worker pool is handled as a stand-alone ibm_container_vpc_worker_pool, if you wish to make any change to the default worker pool which requires the re-creation of the default pool, set this variable to true."
+   default     = false
+   nullable    = false
+ }
+

##############################################################################
# KMS Variables
##############################################################################
