- Create a zonal or regional Kubernetes cluster
- Create user-defined Kubernetes node groups
- Easy to use in other resources via outputs
First, you need to create a VPC network with three subnets!
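If you don't have one yet, a minimal sketch of that prerequisite is shown below. The `nebius_vpc_network` and `nebius_vpc_subnet` resource names and the `v4_cidr_blocks` argument are assumptions modeled on the provider's `nebius_kubernetes_node_group` naming referenced later in this README; verify them against the provider documentation.

```hcl
# Hypothetical prerequisite: one VPC network with a subnet in each of three zones.
# Resource and argument names are assumptions; check the nebius provider docs.
resource "nebius_vpc_network" "kube" {
  name = "kube-network"
}

resource "nebius_vpc_subnet" "kube" {
  for_each = {
    "eu-north1-a" = "10.10.1.0/24"
    "eu-north1-b" = "10.10.2.0/24"
    "eu-north1-c" = "10.10.3.0/24"
  }

  name           = "kube-subnet-${each.key}"
  zone           = each.key
  network_id     = nebius_vpc_network.kube.id
  v4_cidr_blocks = [each.value]
}
```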
The Kubernetes module requires the following input variables:
- VPC network ID
- VPC network subnet IDs
- Master locations: a list of maps with a zone name and subnet ID for each location
- Node groups: a map of node group definitions, each accepting any number of parameters
The master locations list may only contain one or three entries: one for a zonal cluster and three for a regional cluster.
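For example, a zonal cluster takes a single master location. A minimal sketch, reusing the placeholder subnet ID from the full example below:

```hcl
# One location produces a zonal cluster; three locations produce a regional cluster.
master_locations = [
  {
    zone      = "eu-north1-a"
    subnet_id = "e9b3k97pr2nh1i80as04"
  }
]
```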
Notes:
- If the node group version is missing, the cluster version will be used instead.
- All node groups can define their own locations. These locations will be used instead of the master locations.
- If a node group with the `auto_scale` policy does not define its own locations, the locations for this group will be automatically generated from the master location list. If the node group list has more than three groups, their locations will be assigned from the beginning of the master location list, so all node groups will be distributed across the range of the master locations.
- All three master locations will be used for the `fixed_scale` node groups.
The `node_groups` section defines a map with one entry per node group. You can set any parameter for each node group, but all parameters have default values, so an empty node group object will be created from those defaults.

For instance, in example 2, we define seven node groups with their own parameters. You can create any number of node groups, limited only by the Nebius Kubernetes service capacity. If the `node_locations` parameter is not provided, the locations will be automatically assigned from the master location list.
```hcl
node_groups = {
  "yc-k8s-ng-01" = {
    description = "Kubernetes nodes group 01"
    fixed_scale = {
      size = 2
    }
  },
  "yc-k8s-ng-02" = {
    description = "Kubernetes nodes group 02"
    auto_scale = {
      min     = 3
      max     = 5
      initial = 3
    }
  }
}
```
module "kube" {
source = "./modules/kubernetes"
network_id = "enpmff6ah2bvi0k10j66"
master_locations = [
{
zone = "eu-north1-a"
subnet_id = "e9b3k97pr2nh1i80as04"
},
{
zone = "eu-north1-b"
subnet_id = "e2laaglsc7u99ur8c4j1"
},
{
zone = "eu-north1-c"
subnet_id = "b0ckjm3olbpmk2t6c28o"
}
]
master_maintenance_windows = [
{
day = "monday"
start_time = "23:00"
duration = "3h"
}
]
node_groups = {
"yc-k8s-ng-01" = {
description = "Kubernetes nodes group 01"
fixed_scale = {
size = 3
}
node_labels = {
role = "worker-01"
environment = "testing"
}
},
"yc-k8s-ng-02" = {
description = "Kubernetes nodes group 02"
auto_scale = {
min = 2
max = 4
initial = 2
}
node_locations = [
{
zone = "eu-north1-b"
subnet_id = "e2lu07tr481h35012c8p"
}
]
node_labels = {
role = "worker-02"
environment = "dev"
}
max_expansion = 1
max_unavailable = 1
}
}
}
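The module's outputs make it easy to wire the cluster into other resources. For example, the cluster identifiers from the outputs listed below can be re-exported from the root module:

```hcl
# Pass the module's outputs through to the root module.
output "kube_cluster_id" {
  value = module.kube.cluster_id
}

output "kube_cluster_name" {
  value = module.kube.cluster_name
}
```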
- Install the NCP CLI
- Authenticate using a Service Account authorization key
- Add environment variables for Terraform authentication in Nebius Cloud
```bash
export NCP_TOKEN=$(ncp iam create-token)
```
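With the token exported, the usual Terraform workflow applies:

```bash
terraform init    # download the providers listed in the requirements below
terraform plan    # preview the cluster and node group changes
terraform apply   # create the resources
```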
Requirements:

| Name | Version |
|------|---------|
| terraform | >= 1.0.0 |
| random | > 3.3 |
| nebius | > 0.8 |
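These constraints correspond to a `required_providers` block along the following lines. This is a sketch: the `nebius-cloud/nebius` source address is taken from the provider documentation link in the `node_groups` input description, and `hashicorp/random` is the standard random provider.

```hcl
terraform {
  required_version = ">= 1.0.0"

  required_providers {
    nebius = {
      source  = "nebius-cloud/nebius" # from the provider docs URL cited below
      version = "> 0.8"
    }
    random = {
      source  = "hashicorp/random"
      version = "> 3.3"
    }
  }
}
```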
Providers:

| Name | Version |
|------|---------|
| random | 3.5.1 |
| nebius | 0.91.0 |
No modules.
Inputs:

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| allow_public_load_balancers | Flag for creating a new IAM role with `load-balancer.admin` access. | `bool` | `true` | no |
| allowed_ips | List of allowed IPv4 CIDR blocks. | `list(string)` | `[ ... ]` | no |
| allowed_ips_ssh | List of allowed IPv4 CIDR blocks for access via SSH. | `list(string)` | `[ ... ]` | no |
| cluster_ipv4_range | CIDR block. IP range for allocating pod addresses. It should not overlap with any subnet in the network the Kubernetes cluster is located in. Static routes will be set up for this CIDR block in node subnets. | `string` | `"172.17.0.0/16"` | no |
| cluster_ipv6_range | IPv6 CIDR block. IP range for allocating pod addresses. | `string` | `null` | no |
| cluster_name | Name of a specific Kubernetes cluster. | `string` | `"k8s-cluster"` | no |
| cluster_version | Kubernetes cluster version. | `string` | `"1.23"` | no |
| container_runtime_type | Kubernetes node group container runtime type. | `string` | `"containerd"` | no |
| custom_egress_rules | Map definition of custom security egress rules. Example: `custom_egress_rules = { ... }` | `any` | `{}` | no |
| custom_ingress_rules | Map definition of custom security ingress rules. Example: `custom_ingress_rules = { ... }` | `any` | `{}` | no |
| description | Description of the Kubernetes cluster. | `string` | `"Nebius Managed K8S cluster"` | no |
| enable_cilium_policy | Flag for enabling or disabling the Cilium CNI. | `bool` | `false` | no |
| enable_default_rules | Manages creation of the default security rules, which: allow all incoming traffic from any protocol; allow master-to-node and node-to-node communication inside a security group; allow pod-to-pod and service-to-service communication; allow debugging ICMP packets from internal subnets; allow incoming traffic from the Internet to the NodePort port range; allow all outgoing traffic (nodes can connect to Nebius Container Registry, Nebius Object Storage, Docker Hub, etc.); allow access to the Kubernetes API via port 6443 from the subnet; allow access to the Kubernetes API via port 443 from the subnet; allow access to worker nodes via SSH from the allowed IP range. | `bool` | `true` | no |
| folder_id | The ID of the folder that the Kubernetes cluster belongs to. | `string` | `null` | no |
| master_labels | Set of key/value label pairs to assign to Kubernetes master nodes. | `map(string)` | `{}` | no |
| master_locations | List of locations where the cluster will be created. If the list contains only one location, a zonal cluster will be created; if it contains three locations, a regional cluster will be created. Note: the master locations list may only have one or three locations. | `list(object({ ... }))` | n/a | yes |
| master_logging | (Optional) Master logging options. | `map(any)` | `{ ... }` | no |
| master_maintenance_windows | List of structures specifying the maintenance windows in which auto update for the master is allowed. Example: `master_maintenance_windows = [ ... ]` | `list(map(string))` | `[]` | no |
| master_region | Name of the region where the cluster will be created. Required for a regional cluster; not used for a zonal cluster. | `string` | `"ru-central1"` | no |
| network_acceleration_type | Network acceleration type for the Kubernetes node group. | `string` | `"standard"` | no |
| network_id | The ID of the cluster network. | `string` | n/a | yes |
| network_policy_provider | Network policy provider for the Kubernetes cluster. | `string` | `"CALICO"` | no |
| node_account_name | IAM node account name. | `string` | `"k8s-node-account"` | no |
| node_groups | Map of maps describing Kubernetes node groups. It can contain all parameters of the `nebius_kubernetes_node_group` resource; many of them may be null and have default values. Notes: if a node group version isn't defined, the cluster version will be used instead; a master locations list must have only one location for a zonal cluster and three for a regional one; all node groups can define their own locations, which will take precedence; if no locations are defined for a node group with an auto scale policy, its locations will be automatically generated from the master locations (if the node group list has more than three groups, locations will be assigned from the beginning of the master locations list, so all node groups are distributed across the master locations); master locations will be used for fixed scale node groups; auto repair and upgrade settings take the `master_auto_upgrade` value; master maintenance windows are used for node groups as well; only one of `max_expansion` or `max_unavailable` should be specified for the deployment policy. Documentation: https://registry.terraform.io/providers/nebius-cloud/nebius/latest/docs/resources/kubernetes_node_group. Default values: `platform_id = "standard-v2"`. Example: `node_groups = { ... }` | `any` | `{}` | no |
| node_groups_defaults | Map of common default values for node groups. | `map(any)` | `{ ... }` | no |
| node_ipv4_cidr_mask_size | (Optional) Size of the masks assigned to each node in the cluster. This effectively limits the maximum number of pods on each node. | `number` | `24` | no |
| public_access | Public or private Kubernetes cluster. | `bool` | `true` | no |
| release_channel | Kubernetes cluster release channel name. | `string` | `"REGULAR"` | no |
| security_groups_ids_list | List of security group IDs to which the Kubernetes cluster belongs. | `list(string)` | `[]` | no |
| service_account_name | IAM service account name. | `string` | `"k8s-service-account"` | no |
| service_ipv4_range | CIDR block. IP range from which Kubernetes service cluster IP addresses will be allocated. It should not overlap with any subnet in the network the Kubernetes cluster is located in. | `string` | `"172.18.0.0/16"` | no |
| service_ipv6_range | IPv6 CIDR block. IP range for allocating pod addresses. | `string` | `null` | no |
| timeouts | Timeouts. | `map(string)` | `{ ... }` | no |
| ssh_username | SSH username. | `map(string)` | `{ ... }` | no |
| ssh_public_key | SSH public key content. | `map(string)` | `{ ... }` | no |
| ssh_public_key_path | Path to the SSH public key file. | `map(string)` | `{ ... }` | no |
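The custom rule examples in the table above are truncated. The sketch below illustrates one plausible shape; the field names (`protocol`, `v4_cidr_blocks`, `port`) are assumptions modeled on common security group rule schemas, not confirmed by this module:

```hcl
# Hypothetical custom ingress rule; verify field names against the module's variables.
custom_ingress_rules = {
  "allow-https" = {
    protocol       = "TCP"           # assumed field name
    description    = "HTTPS from anywhere"
    v4_cidr_blocks = ["0.0.0.0/0"]   # assumed field name
    port           = 443             # assumed field name
  }
}
```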
Outputs:

| Name | Description |
|------|-------------|
| cluster_id | Kubernetes cluster ID. |
| cluster_name | Kubernetes cluster name. |
| external_cluster_cmd | Kubernetes cluster public IP address. Use the following command to download the kube config and start working with the Nebius Managed Kubernetes cluster: `yc managed-kubernetes cluster get-credentials --id <cluster_id> --external`. This command automatically adds the kube config for your user; after that, you can test it with the `kubectl cluster-info` command. |
| internal_cluster_cmd | Kubernetes cluster private IP address. Use the following command to download the kube config and start working with the Nebius Managed Kubernetes cluster: `yc managed-kubernetes cluster get-credentials --id <cluster_id> --internal`. Note: Kubernetes internal cluster nodes are available from virtual machines in the same VPC as the cluster nodes. |
Requirements:

| Name | Version |
|------|---------|
| terraform | >= 1.0.0 |
| random | > 3.3 |
| nebius | > 0.8 |
Providers:

| Name | Version |
|------|---------|
| random | > 3.3 |
| nebius | 0.86.0 |
No modules.
Inputs:

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| allow_public_load_balancers | Flag for creating a new IAM role with `load-balancer.admin` access. | `bool` | `true` | no |
| allowed_ips | A list of allowed IPv4 CIDR blocks. | `list(string)` | `[ ... ]` | no |
| allowed_ips_ssh | A list of allowed IPv4 CIDR blocks for access via SSH. | `list(string)` | `[ ... ]` | no |
| cluster_ipv4_range | CIDR block. IP range for allocating pod addresses. It should not overlap with any subnet in the network the Kubernetes cluster is located in. Static routes will be set up for this CIDR block in node subnets. | `string` | `"172.17.0.0/16"` | no |
| cluster_ipv6_range | IPv6 CIDR block. IP range for allocating pod addresses. | `string` | `null` | no |
| cluster_name | Name of a specific Kubernetes cluster. | `string` | `"k8s-cluster"` | no |
| cluster_version | Kubernetes cluster version. | `string` | `"1.23"` | no |
| container_runtime_type | Kubernetes node group container runtime type. | `string` | `"containerd"` | no |
| custom_egress_rules | A map definition of custom security egress rules. Example: `custom_egress_rules = { ... }` | `any` | `{}` | no |
| custom_ingress_rules | A map definition of custom security ingress rules. Example: `custom_ingress_rules = { ... }` | `any` | `{}` | no |
| description | A description of the Kubernetes cluster. | `string` | `"nebius Managed K8S cluster"` | no |
| enable_cilium_policy | Flag for enabling / disabling the Cilium CNI. | `bool` | `false` | no |
| enable_default_rules | Controls creation of the default security rules, which: allow all incoming traffic from any protocol; allow master-to-node and node-to-node communication inside a security group; allow pod-to-pod and service-to-service communication; allow debugging ICMP packets from internal subnets; allow incoming traffic from the Internet to the NodePort port range; allow all outgoing traffic (nodes can connect to Nebius Container Registry, Nebius Object Storage, Docker Hub, and so on); allow access to the Kubernetes API via port 6443 from the subnet; allow access to the Kubernetes API via port 443 from the subnet; allow access to worker nodes via SSH from the allowed IP range. | `bool` | `true` | no |
| folder_id | The ID of the folder that the Kubernetes cluster belongs to. | `string` | `null` | no |
| master_auto_upgrade | Boolean flag that specifies whether the master can be upgraded automatically. | `bool` | `true` | no |
| master_labels | A set of key/value label pairs to assign to Kubernetes master nodes. | `map(string)` | `{}` | no |
| master_locations | List of locations where the cluster will be created. If the list contains only one location, a zonal cluster will be created; if it contains three locations, a regional cluster will be created. Note: the master locations list must have only one or three locations. | `list(object({ ... }))` | n/a | yes |
| master_logging | (Optional) Master logging options. | `map` | `{ ... }` | no |
| master_maintenance_windows | List of structures specifying the maintenance windows in which auto update for the master is allowed. Example: `master_maintenance_windows = [ ... ]` | `list(map(string))` | `[]` | no |
| master_region | Name of the region where the cluster will be created. Required for a regional cluster; not used for a zonal cluster. | `string` | `"ru-central1"` | no |
| network_acceleration_type | Kubernetes node group network acceleration type. | `string` | `"standard"` | no |
| network_id | The ID of the cluster network. | `string` | n/a | yes |
| network_policy_provider | Kubernetes cluster network policy provider. | `string` | `"CALICO"` | no |
| node_account_name | IAM node account name. | `string` | `"k8s-node-account"` | no |
| node_groups | Map of maps describing Kubernetes node groups. It can contain all parameters of the `nebius_kubernetes_node_group` resource; many of them may be null and have default values. Notes: if a node group version isn't defined, the cluster version will be used instead; a master locations list must have only one location for a zonal cluster and three for a regional one; all node groups can define their own locations, which will take precedence; if no locations are defined for a node group with an auto scale policy, its locations will be automatically generated from the master locations (if the node group list has more than three groups, locations will be assigned from the beginning of the master locations list, so all node groups are distributed across the master locations); master locations will be used for fixed scale node groups; auto repair and upgrade settings take the `master_auto_upgrade` value; master maintenance windows are used for node groups as well; only one of `max_expansion` or `max_unavailable` should be specified for the deployment policy. Documentation: https://registry.terraform.io/providers/nebius-cloud/nebius/latest/docs/resources/kubernetes_node_group. Default values: `platform_id = "standard-v2"`. Example: `node_groups = { ... }` | `any` | `{}` | no |
| node_groups_defaults | A map of common default values for node groups. | `map` | `{ ... }` | no |
| node_ipv4_cidr_mask_size | (Optional) Size of the masks assigned to each node in the cluster. Effectively limits the maximum number of pods on each node. | `number` | `24` | no |
| public_access | Public or private Kubernetes cluster. | `bool` | `true` | no |
| release_channel | Kubernetes cluster release channel name. | `string` | `"REGULAR"` | no |
| security_groups_ids_list | List of security group IDs to which the Kubernetes cluster belongs. | `list(string)` | `[]` | no |
| service_account_name | IAM service account name. | `string` | `"k8s-service-account"` | no |
| service_ipv4_range | CIDR block. IP range from which Kubernetes service cluster IP addresses will be allocated. It should not overlap with any subnet in the network the Kubernetes cluster is located in. | `string` | `"172.18.0.0/16"` | no |
| service_ipv6_range | IPv6 CIDR block. IP range for allocating pod addresses. | `string` | `null` | no |
| timeouts | Timeouts. | `map(string)` | `{ ... }` | no |
Outputs:

| Name | Description |
|------|-------------|
| cluster_id | Kubernetes cluster ID. |
| cluster_name | Kubernetes cluster name. |
| external_cluster_cmd | Kubernetes cluster public IP address. Use the following command to download the kube config and start working with the Nebius Managed Kubernetes cluster: `yc managed-kubernetes cluster get-credentials --id <cluster_id> --external`. This command automatically adds the kube config for your user; after that, you can test it with the `kubectl cluster-info` command. |
| internal_cluster_cmd | Kubernetes cluster private IP address. Use the following command to download the kube config and start working with the Nebius Managed Kubernetes cluster: `yc managed-kubernetes cluster get-credentials --id <cluster_id> --internal`. Note: Kubernetes internal cluster nodes are only available from nodes in the same subnet as the cluster nodes. |