changed platform version to standard-v2
malibora committed Nov 2, 2023
1 parent 5ee70ac commit 2bf4ff8
Showing 2 changed files with 6 additions and 6 deletions.
8 changes: 4 additions & 4 deletions README.md
@@ -197,8 +197,8 @@ No modules.
| <a name="input_network_id"></a> [network\_id](#input\_network\_id) | The ID of the cluster network. | `string` | n/a | yes |
| <a name="input_network_policy_provider"></a> [network\_policy\_provider](#input\_network\_policy\_provider) | Network policy provider for Kubernetes cluster | `string` | `"CALICO"` | no |
| <a name="input_node_account_name"></a> [node\_account\_name](#input\_node\_account\_name) | IAM node account name. | `string` | `"k8s-node-account"` | no |
- | <a name="input_node_groups"></a> [node\_groups](#input\_node\_groups) | Kubernetes node groups map of maps. It could contain all parameters of nebius\_kubernetes\_node\_group resource,<br> many of them could be NULL and have default values.<br><br> Notes:<br> - If node groups version isn't defined, cluster version will be used instead of.<br> - A master locations list must have only one location for zonal cluster and three locations for a regional.<br> - All node groups are able to define own locations. These locations will be used at first.<br> - If own location aren't defined for node groups with auto scale policy, locations for these groups will be automatically generated from master locations. If node groups list have more than three groups, locations for them will be assigned from the beggining of the master locations list. So, all node groups will be distributed in a range of master locations. <br> - Master locations will be used for fixed scale node groups.<br> - Auto repair and upgrade values will be used master\_auto\_upgrade value.<br> - Master maintenance windows will be used for Node groups also!<br> - Only one max\_expansion OR max\_unavailable values should be specified for the deployment policy.<br><br> Documentation - https://registry.terraform.io/providers/nebius-cloud/nebius/latest/docs/resources/kubernetes_node_group<br><br> Default values:<pre>platform_id = "standard-v3"<br> node_cores = 4<br> node_memory = 8<br> node_gpus = 0<br> core_fraction = 100<br> disk_type = "network-ssd"<br> disk_size = 32<br> preemptible = false<br> nat = false<br> auto_repair = true<br> auto_upgrade = true<br> maintenance_day = "monday"<br> maintenance_start_time = "20:00"<br> maintenance_duration = "3h30m"<br> network_acceleration_type = "standard"<br> container_runtime_type = "containerd"</pre>Example:<pre>node_groups = {<br> "yc-k8s-ng-01" = {<br> cluster_name = "k8s-kube-cluster"<br> description = "Kubernetes nodes group with fixed scale policy and one 
maintenance window"<br> fixed_scale = {<br> size = 3<br> }<br> labels = {<br> owner = "nebius"<br> service = "kubernetes"<br> }<br> node_labels = {<br> role = "worker-01"<br> environment = "dev"<br> }<br> },<br> "yc-k8s-ng-02" = {<br> description = "Kubernetes nodes group with auto scale policy"<br> auto_scale = {<br> min = 2<br> max = 4<br> initial = 2<br> }<br> node_locations = [<br> {<br> zone = "ru-central1-b"<br> subnet_id = "e2lu07tr481h35012c8p"<br> }<br> ]<br> labels = {<br> owner = "example"<br> service = "kubernetes"<br> }<br> node_labels = {<br> role = "worker-02"<br> environment = "testing"<br> }<br> }<br> }</pre> | `any` | `{}` | no |
- | <a name="input_node_groups_defaults"></a> [node\_groups\_defaults](#input\_node\_groups\_defaults) | Map of common default values for Node groups. | `map(any)` | <pre>{<br> "core_fraction": 100,<br> "disk_size": 32,<br> "disk_type": "network-ssd",<br> "ipv4": true,<br> "ipv6": false,<br> "nat": false,<br> "node_cores": 4,<br> "node_gpus": 0,<br> "node_memory": 8,<br> "platform_id": "standard-v3",<br> "preemptible": false<br>}</pre> | no |
+ | <a name="input_node_groups"></a> [node\_groups](#input\_node\_groups) | Kubernetes node groups map of maps. It could contain all parameters of nebius\_kubernetes\_node\_group resource,<br> many of them could be NULL and have default values.<br><br> Notes:<br> - If node groups version isn't defined, cluster version will be used instead of.<br> - A master locations list must have only one location for zonal cluster and three locations for a regional.<br> - All node groups are able to define own locations. These locations will be used at first.<br> - If own location aren't defined for node groups with auto scale policy, locations for these groups will be automatically generated from master locations. If node groups list have more than three groups, locations for them will be assigned from the beggining of the master locations list. So, all node groups will be distributed in a range of master locations. <br> - Master locations will be used for fixed scale node groups.<br> - Auto repair and upgrade values will be used master\_auto\_upgrade value.<br> - Master maintenance windows will be used for Node groups also!<br> - Only one max\_expansion OR max\_unavailable values should be specified for the deployment policy.<br><br> Documentation - https://registry.terraform.io/providers/nebius-cloud/nebius/latest/docs/resources/kubernetes_node_group<br><br> Default values:<pre>platform_id = "standard-v2"<br> node_cores = 4<br> node_memory = 8<br> node_gpus = 0<br> core_fraction = 100<br> disk_type = "network-ssd"<br> disk_size = 32<br> preemptible = false<br> nat = false<br> auto_repair = true<br> auto_upgrade = true<br> maintenance_day = "monday"<br> maintenance_start_time = "20:00"<br> maintenance_duration = "3h30m"<br> network_acceleration_type = "standard"<br> container_runtime_type = "containerd"</pre>Example:<pre>node_groups = {<br> "yc-k8s-ng-01" = {<br> cluster_name = "k8s-kube-cluster"<br> description = "Kubernetes nodes group with fixed scale policy and one 
maintenance window"<br> fixed_scale = {<br> size = 3<br> }<br> labels = {<br> owner = "nebius"<br> service = "kubernetes"<br> }<br> node_labels = {<br> role = "worker-01"<br> environment = "dev"<br> }<br> },<br> "yc-k8s-ng-02" = {<br> description = "Kubernetes nodes group with auto scale policy"<br> auto_scale = {<br> min = 2<br> max = 4<br> initial = 2<br> }<br> node_locations = [<br> {<br> zone = "ru-central1-b"<br> subnet_id = "e2lu07tr481h35012c8p"<br> }<br> ]<br> labels = {<br> owner = "example"<br> service = "kubernetes"<br> }<br> node_labels = {<br> role = "worker-02"<br> environment = "testing"<br> }<br> }<br> }</pre> | `any` | `{}` | no |
+ | <a name="input_node_groups_defaults"></a> [node\_groups\_defaults](#input\_node\_groups\_defaults) | Map of common default values for Node groups. | `map(any)` | <pre>{<br> "core_fraction": 100,<br> "disk_size": 32,<br> "disk_type": "network-ssd",<br> "ipv4": true,<br> "ipv6": false,<br> "nat": false,<br> "node_cores": 4,<br> "node_gpus": 0,<br> "node_memory": 8,<br> "platform_id": "standard-v2",<br> "preemptible": false<br>}</pre> | no |
| <a name="input_node_ipv4_cidr_mask_size"></a> [node\_ipv4\_cidr\_mask\_size](#input\_node\_ipv4\_cidr\_mask\_size) | (Optional) Size of the masks that are assigned to each node in the cluster.<br> This efficiently limits the maximum number of pods for each node. | `number` | `24` | no |
| <a name="input_public_access"></a> [public\_access](#input\_public\_access) | Public or private Kubernetes cluster | `bool` | `true` | no |
| <a name="input_release_channel"></a> [release\_channel](#input\_release\_channel) | Kubernetes cluster release channel name | `string` | `"REGULAR"` | no |
@@ -288,8 +288,8 @@ No modules.
| <a name="input_network_id"></a> [network\_id](#input\_network\_id) | The ID of the cluster network. | `string` | n/a | yes |
| <a name="input_network_policy_provider"></a> [network\_policy\_provider](#input\_network\_policy\_provider) | Kubernetes cluster network policy provider | `string` | `"CALICO"` | no |
| <a name="input_node_account_name"></a> [node\_account\_name](#input\_node\_account\_name) | IAM node account name. | `string` | `"k8s-node-account"` | no |
- | <a name="input_node_groups"></a> [node\_groups](#input\_node\_groups) | Kubernetes node groups map of maps. It could contain all parameters of nebius\_kubernetes\_node\_group resource,<br> many of them could be NULL and have default values.<br><br> Notes:<br> - If node groups version isn't defined, cluster version will be used instead of.<br> - A master locations list must have only one location for zonal cluster and three locations for a regional.<br> - All node groups are able to define own locations. These locations will be used at first.<br> - If own location aren't defined for node groups with auto scale policy, locations for these groups will be automatically generated from master locations. If node groups list have more than three groups, locations for them will be assigned from the beggining of the master locations list. So, all node groups will be distributed in a range of master locations. <br> - Master locations will be used for fixed scale node groups.<br> - Auto repair and upgrade values will be used master\_auto\_upgrade value.<br> - Master maintenance windows will be used for Node groups also!<br> - Only one max\_expansion OR max\_unavailable values should be specified for the deployment policy.<br><br> Documentation - https://registry.terraform.io/providers/nebius-cloud/nebius/latest/docs/resources/kubernetes_node_group<br><br> Default values:<pre>platform_id = "standard-v3"<br> node_cores = 4<br> node_memory = 8<br> node_gpus = 0<br> core_fraction = 100<br> disk_type = "network-ssd"<br> disk_size = 32<br> preemptible = false<br> nat = false<br> auto_repair = true<br> auto_upgrade = true<br> maintenance_day = "monday"<br> maintenance_start_time = "20:00"<br> maintenance_duration = "3h30m"<br> network_acceleration_type = "standard"<br> container_runtime_type = "containerd"</pre>Example:<pre>node_groups = {<br> "yc-k8s-ng-01" = {<br> cluster_name = "k8s-kube-cluster"<br> description = "Kubernetes nodes group with fixed scale policy and one 
maintenance window"<br> fixed_scale = {<br> size = 3<br> }<br> labels = {<br> owner = "nebius"<br> service = "kubernetes"<br> }<br> node_labels = {<br> role = "worker-01"<br> environment = "dev"<br> }<br> },<br> "yc-k8s-ng-02" = {<br> description = "Kubernetes nodes group with auto scale policy"<br> auto_scale = {<br> min = 2<br> max = 4<br> initial = 2<br> }<br> node_locations = [<br> {<br> zone = "ru-central1-b"<br> subnet_id = "e2lu07tr481h35012c8p"<br> }<br> ]<br> labels = {<br> owner = "example"<br> service = "kubernetes"<br> }<br> node_labels = {<br> role = "worker-02"<br> environment = "testing"<br> }<br> }<br> }</pre> | `any` | `{}` | no |
- | <a name="input_node_groups_defaults"></a> [node\_groups\_defaults](#input\_node\_groups\_defaults) | A map of common default values for Node groups. | `map` | <pre>{<br> "core_fraction": 100,<br> "disk_size": 32,<br> "disk_type": "network-ssd",<br> "ipv4": true,<br> "ipv6": false,<br> "nat": false,<br> "node_cores": 4,<br> "node_gpus": 0,<br> "node_memory": 8,<br> "platform_id": "standard-v3",<br> "preemptible": false<br>}</pre> | no |
+ | <a name="input_node_groups"></a> [node\_groups](#input\_node\_groups) | Kubernetes node groups map of maps. It could contain all parameters of nebius\_kubernetes\_node\_group resource,<br> many of them could be NULL and have default values.<br><br> Notes:<br> - If node groups version isn't defined, cluster version will be used instead of.<br> - A master locations list must have only one location for zonal cluster and three locations for a regional.<br> - All node groups are able to define own locations. These locations will be used at first.<br> - If own location aren't defined for node groups with auto scale policy, locations for these groups will be automatically generated from master locations. If node groups list have more than three groups, locations for them will be assigned from the beggining of the master locations list. So, all node groups will be distributed in a range of master locations. <br> - Master locations will be used for fixed scale node groups.<br> - Auto repair and upgrade values will be used master\_auto\_upgrade value.<br> - Master maintenance windows will be used for Node groups also!<br> - Only one max\_expansion OR max\_unavailable values should be specified for the deployment policy.<br><br> Documentation - https://registry.terraform.io/providers/nebius-cloud/nebius/latest/docs/resources/kubernetes_node_group<br><br> Default values:<pre>platform_id = "standard-v2"<br> node_cores = 4<br> node_memory = 8<br> node_gpus = 0<br> core_fraction = 100<br> disk_type = "network-ssd"<br> disk_size = 32<br> preemptible = false<br> nat = false<br> auto_repair = true<br> auto_upgrade = true<br> maintenance_day = "monday"<br> maintenance_start_time = "20:00"<br> maintenance_duration = "3h30m"<br> network_acceleration_type = "standard"<br> container_runtime_type = "containerd"</pre>Example:<pre>node_groups = {<br> "yc-k8s-ng-01" = {<br> cluster_name = "k8s-kube-cluster"<br> description = "Kubernetes nodes group with fixed scale policy and one 
maintenance window"<br> fixed_scale = {<br> size = 3<br> }<br> labels = {<br> owner = "nebius"<br> service = "kubernetes"<br> }<br> node_labels = {<br> role = "worker-01"<br> environment = "dev"<br> }<br> },<br> "yc-k8s-ng-02" = {<br> description = "Kubernetes nodes group with auto scale policy"<br> auto_scale = {<br> min = 2<br> max = 4<br> initial = 2<br> }<br> node_locations = [<br> {<br> zone = "ru-central1-b"<br> subnet_id = "e2lu07tr481h35012c8p"<br> }<br> ]<br> labels = {<br> owner = "example"<br> service = "kubernetes"<br> }<br> node_labels = {<br> role = "worker-02"<br> environment = "testing"<br> }<br> }<br> }</pre> | `any` | `{}` | no |
+ | <a name="input_node_groups_defaults"></a> [node\_groups\_defaults](#input\_node\_groups\_defaults) | A map of common default values for Node groups. | `map` | <pre>{<br> "core_fraction": 100,<br> "disk_size": 32,<br> "disk_type": "network-ssd",<br> "ipv4": true,<br> "ipv6": false,<br> "nat": false,<br> "node_cores": 4,<br> "node_gpus": 0,<br> "node_memory": 8,<br> "platform_id": "standard-v2",<br> "preemptible": false<br>}</pre> | no |
| <a name="input_node_ipv4_cidr_mask_size"></a> [node\_ipv4\_cidr\_mask\_size](#input\_node\_ipv4\_cidr\_mask\_size) | (Optional) Size of the masks that are assigned to each node in the cluster.<br> Effectively limits maximum number of pods for each node. | `number` | `24` | no |
| <a name="input_public_access"></a> [public\_access](#input\_public\_access) | Public or private Kubernetes cluster | `bool` | `true` | no |
| <a name="input_release_channel"></a> [release\_channel](#input\_release\_channel) | Kubernetes cluster release channel name | `string` | `"REGULAR"` | no |
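For reference, the `node_groups` input documented above would be supplied roughly like this. This is a sketch only: the module block name, `source` path, and `network_id` value are illustrative (not from this repo); the group definitions mirror the example in the table.

```hcl
# Sketch: consuming the module's node_groups input (module name/source hypothetical).
module "kube" {
  source     = "./modules/kubernetes" # illustrative path
  network_id = "enpexamplenetworkid"  # illustrative ID

  node_groups = {
    "yc-k8s-ng-01" = {
      description = "Kubernetes nodes group with fixed scale policy"
      fixed_scale = {
        size = 3
      }
      node_labels = {
        role        = "worker-01"
        environment = "dev"
      }
    }
    "yc-k8s-ng-02" = {
      description = "Kubernetes nodes group with auto scale policy"
      auto_scale = {
        min     = 2
        max     = 4
        initial = 2
      }
    }
  }
}
```

Per the notes above, the fixed-scale group inherits master locations, while the auto-scale group gets locations generated from them unless it defines its own.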
4 changes: 2 additions & 2 deletions variables.tf
@@ -233,7 +233,7 @@ variable "node_groups" {
Default values:
```
-    platform_id = "standard-v3"
+    platform_id = "standard-v2"
node_cores = 4
node_memory = 8
node_gpus = 0
@@ -301,7 +301,7 @@ variable "node_groups_defaults" {
description = "Map of common default values for Node groups."
type = map(any)
default = {
-    platform_id = "standard-v3"
+    platform_id = "standard-v2"
node_cores = 4
node_memory = 8
node_gpus = 0
Expand Down
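Since `node_groups_defaults` is an ordinary input map, consumers who still need the previous platform after this commit can pin it explicitly. A sketch, with a hypothetical module block name and source path:

```hcl
# Sketch: overriding the module's new "standard-v2" default back to "standard-v3"
# for all node groups (module name/source hypothetical).
module "kube" {
  source     = "./modules/kubernetes"
  network_id = var.network_id

  node_groups_defaults = {
    platform_id = "standard-v3" # overrides the module default of "standard-v2"
  }
}
```

Values set here apply to every node group unless a group overrides them individually.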
