terraform-harvester-modules

Terraform modules for deploying virtual machines and k3s clusters on Harvester.

virtual-machine module

The virtual-machine module allows you to create and manage virtual machines on Harvester. To deploy a basic VM with a single boot disk and an IP provided by DHCP:

module "vm" {
  source = "github.com/UCL-ARC/terraform-harvester-modules//modules/virtual-machine"

  cpu       = 2
  efi_boot  = true
  memory    = "8Gi"
  name      = "my-vm"
  namespace = "default"
  networks = [
    {
      iface   = "nic-1"
      network = "default/net"
    }
  ]
  vm_image           = "almalinux-9.5"
  vm_image_namespace = "harvester-public"
  vm_username        = "almalinux"
}

To deploy a VM with a data disk in addition to the boot disk, a static IP address and an SSH key:

  source = "github.com/UCL-ARC/terraform-harvester-modules//modules/virtual-machine"

  additional_disks = [
    {
      boot_order = 2
      bus        = "virtio"
      name       = "data"
      mount      = "/data"
      size       = "100Gi"
      type       = "disk"
    }
  ]
  cpu              = 2
  efi_boot         = true
  memory           = "8Gi"
  name             = "my-vm"
  namespace        = "default"
  networks = [
    {
      cidr    = 24
      dns     = "10.0.0.1"
      gateway = "10.0.0.1"
      iface   = "nic-1"
      ip      = "10.0.0.2"
      network = "default/net"
    }
  ]
  ssh_public_key     = file(pathexpand("~/.ssh/id_rsa.pub"))
  vm_image           = "almalinux-9.5"
  vm_image_namespace = "harvester-public"
  vm_username        = "almalinux"
}

It is also possible to completely customise the cloud-config data by providing your own files via the network_data and user_data variables.
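
For example, pre-rendered cloud-init documents can be passed straight through. A minimal sketch (assuming both variables accept complete cloud-init documents as strings; the file paths are illustrative):

  user_data    = file("${path.root}/cloud-init/user-data.yaml")
  network_data = file("${path.root}/cloud-init/network-data.yaml")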

Note that the IP addresses and namespaces given here are illustrative only and should be changed.

k3s-cluster module

The k3s-cluster module helps you deploy a high-availability k3s Kubernetes cluster on Harvester. Internally it uses the virtual-machine module to create the necessary VMs. The module provides its own user_data to the virtual machines and installs k3s as the default operating system user of the machine image in use (set with the vm_image variable), so the vm_username variable must be set accordingly (e.g. cloud-user for a RHEL machine image).

Example usage that deploys a three-node cluster:

module "k3s_cluster" {
  source = "github.com/UCL-ARC/terraform-harvester-modules//modules/k3s-cluster"

  cluster_name        = "my-cluster"
  cluster_api_vip     = "10.0.0.5"
  cluster_ingress_vip = "10.0.0.6"
  namespace           = "default"
  networks = {
    eth0 = {
      ips     = ["10.0.0.2", "10.0.0.3", "10.0.0.4"]
      cidr    = 24
      gateway = "10.0.0.1"
      dns     = "10.0.0.1"
      network = "default/net"
    }
  }
  vm_image           = "rhel-9.4"
  vm_image_namespace = "default"
  vm_username        = "cloud-user"
}

kairos-k3s-cluster module

The kairos-k3s-cluster module helps you deploy a high-availability k3s Kubernetes cluster on Harvester whose virtual machines run an immutable operating system. Kairos provides a means to turn a Linux system, together with a preferred Kubernetes distribution, into a secure bootable image. Users of the module can specify both the Kairos ISO and the container image to be deployed, which together form the final OS running in the VMs. Although Kairos supports multiple Kubernetes distributions, we strongly encourage the use of k3s. In the example below the Kairos Alpine ISO is used to deploy a Rocky Linux container image with k3s baked in. The system-upgrade-controller is installed in the cluster to manage OS and Kubernetes distribution upgrades in a zero-downtime manner; a Plan resource must be provided by the consumer of the module to trigger the upgrade process (an example is shown in the k8s manifests section below).

Networking

Here edgevpn and kubevip configure a peer-to-peer mesh and a virtual IP address for the cluster (instead of metallb, which is used in the k3s-cluster module). Note that k3s uses Flannel as its default CNI plugin, and this is what the module uses. If you would like to use a different CNI plugin, or something that can manage network policies (such as Calico), you will need to deploy it yourself once the cluster is up and running. The k3s arguments required to use a different CNI plugin can be set using the k3s_extra_args variable. For example, to use Calico, you would set:

  k3s_extra_args = [
    "--disable-network-policy",
    "--flannel-backend=none",
  ]

Note that when the default CNI plugin is disabled, k3s can take a while to start; make sure the k3s service is available before attempting to install a custom CNI plugin.
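
A minimal sketch of one way to wait, using a null_resource with a remote-exec provisioner (the node address, user and key path are illustrative and assume SSH access to one of the cluster nodes; on a systemd-based image the service is typically named k3s):

resource "null_resource" "wait_for_k3s" {
  # module.cluster refers to the kairos-k3s-cluster module instance shown
  # further down in this README.
  depends_on = [module.cluster]

  connection {
    type        = "ssh"
    host        = "10.0.0.2"                    # illustrative node address
    user        = "kairos"                      # illustrative VM user
    private_key = file("${path.root}/ssh-key")  # illustrative key path
  }

  provisioner "remote-exec" {
    inline = [
      "until systemctl is-active --quiet k3s; do echo 'waiting for k3s'; sleep 10; done",
    ]
  }
}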

SSH

This module supports certificate-based authentication for SSH. To enable it, the consumer of the module must provide a CA certificate and, if required, the authorised principals via the ssh_admin_principals variable.

module "cluster" {
  source = "github.com/UCL-ARC/terraform-harvester-modules//modules/kairos-k3s-cluster"

  cluster_name             = "my-cluster"
  cluster_namespace        = "default"
  cluster_vip              = "10.0.0.5"
  efi_boot                 = true
  iso_disk_image           = "kairos-alpine"
  iso_disk_image_namespace = "default"
  iso_disk_name            = "bootstrap"
  iso_disk_size            = "10Gi"
  networks = {
    eth0 = {
      alias   = "enp1s0"
      ips     = ["10.0.0.2", "10.0.0.3", "10.0.0.4"]
      cidr    = 24
      gateway = "10.0.0.1"
      dns     = "10.0.0.1"
      network = "default/net"
    }
  }
  root_disk_container_image = "docker:quay.io/kairos/rockylinux:9-standard-amd64-generic-v3.4.2-k3sv1.32.3-k3s1"
  ssh_public_key            = file("${path.root}/ssh-key.pub")
  vm_username               = "kairos"
  vm_tags = {
    ssh-user = "kairos"
  }
}
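
If certificate-based authentication is used instead of a plain public key, the relevant inputs might look like the fragment below. Note that ssh_ca_public_key is a hypothetical variable name used purely for illustration (check the module README for the actual input); ssh_admin_principals is the variable described above.

  # "ssh_ca_public_key" is a hypothetical input name shown for illustration;
  # ssh_admin_principals is the module variable described in the SSH section.
  ssh_ca_public_key    = file("${path.root}/ssh-ca.pub")
  ssh_admin_principals = ["admin"]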

k8s manifests

Additional manifests to be deployed when creating the cluster can be passed to the module using the additional_manifests variable:

  additional_manifests = [{
    name = "upgrade-plan"
    content = templatefile("${path.root}/templates/upgrade-plan.yaml.tftpl", {
      image   = "9-standard-amd64-generic-v3.4.2-k3sv1.32.3-k3s1"
      version = "latest"
    })
  }]

The example shows how a user of the module might create a template manifest and pass the rendered result to the kairos-k3s-cluster module. The manifest will be written to /var/lib/rancher/k3s/server/manifests/ and applied automatically after the cluster is created.
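
For reference, the Plan mentioned in the kairos-k3s-cluster section follows the system-upgrade-controller API. A minimal sketch, passed inline rather than via templatefile (the namespace, service account, upgrade image and command are illustrative and should be taken from the Kairos upgrade documentation):

  additional_manifests = [{
    name    = "os-upgrade-plan"
    content = <<-EOT
      apiVersion: upgrade.cattle.io/v1
      kind: Plan
      metadata:
        name: os-upgrade
        namespace: system-upgrade
      spec:
        concurrency: 1
        version: "9-standard-amd64-generic-v3.4.2-k3sv1.32.3-k3s1"
        nodeSelector:
          matchExpressions:
            - { key: kubernetes.io/os, operator: In, values: ["linux"] }
        serviceAccountName: system-upgrade
        cordon: false
        upgrade:
          image: quay.io/kairos/rockylinux
          command: ["/usr/sbin/suc-upgrade"]
    EOT
  }]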

Manifests can also be passed to the cluster using Kairos bundles. These are container images that can be applied on first boot, before Kubernetes starts, providing a way to customise the cluster (e.g. by deploying manifests or Helm charts). Several popular Kubernetes tools are provided by the Kairos community bundles repository; the system-upgrade-controller is installed in the cluster using this mechanism. To add more bundles to the cluster, use the additional_bundles variable:

  additional_bundles = [
    {
      target = "quay.io/kairos/community-bundles/nginx_latest"
      values = {
        nginx = {
          version = "4.12.3"
        }
      }
    }
  ]

kubeconfig module

This module can be used to fetch the kubeconfig file for a k3s cluster deployed using the k3s-cluster or kairos-k3s-cluster modules. It uses the ansible_playbook resource from the Terraform Ansible Provider to SSH into one of the k3s nodes and retrieve the kubeconfig file. The kubeconfig module requires an SSH private key, so the consumer of the module should ensure that either the corresponding public key is present on the node or the key has been signed by the CA certificate trusted by the node.

resource "tls_private_key" "ssh" {
  algorithm = "ED25519"
}

resource "local_sensitive_file" "ssh_key" {
  filename = "${path.root}/ssh-key"
  content  = tls_private_key.ssh.private_key_openssh
}

module "kubeconfig" {
  depends_on = [ module.cluster ]
  source = "../modules/kubeconfig"

  cluster_vip          = "10.0.0.5"
  ssh_private_key_path = local_sensitive_file.ssh_key.filename # make sure the public key is present on the node
  ssh_common_args = join(" ", [
    "-o ProxyCommand=\"ssh -W %h:%p jumphost\"",
  ])
  vm_ip       = "10.0.0.2"
  vm_username = "kairos"
}
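
The generated key pair is only useful if its public half is accepted by the node. One option (a sketch, assuming the plain ssh_public_key input shown in the kairos-k3s-cluster example above) is to feed the public key straight into the cluster module:

  # In module "cluster" above, reference the generated key pair so that the
  # private key used by the kubeconfig module is accepted by the nodes.
  ssh_public_key = tls_private_key.ssh.public_key_openssh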

For detailed information about each module's variables and outputs, please refer to the README files in their respective directories.
