Description
What happened?
I wanted to manage my Hetzner Cloud project with Terraform on a fresh admin machine. I ran terraform init (which installed provider "registry.terraform.io/hetznercloud/hcloud" version 1.51.0) and ran into this problem:
Because I had no tfstate, I ran terraform import for all my objects.
After that, terraform insisted on destroying and recreating all servers (which I didn't allow). The reason: my tf configuration sets a list of ssh-keys for each server, but terraform import did not import the ssh_keys attribute. Terraform therefore believed the running servers had no ssh-keys set and tried to "fix" this by destroying and recreating the servers.
Proof: if I comment out the ssh_keys setting in my server configs, terraform is happy and no longer wants to change the servers (but then no ssh-keys would be set if I ever actually recreated the servers through terraform).
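As a stopgap, the ssh_keys can stay in the config while terraform is told to ignore diffs on that attribute. This is only a sketch of Terraform's generic lifecycle ignore_changes mechanism, not a fix for the import bug itself:

```hcl
resource "hcloud_server" "machine1" {
  # ... name, server_type, image etc. as in the full example below ...
  ssh_keys = var.ssh_keys_root

  lifecycle {
    prevent_destroy = true
    # Workaround sketch: suppress the spurious destroy/recreate caused by
    # the un-imported ssh_keys attribute. ssh_keys stays in the config for
    # future creates, but diffs on it are ignored.
    ignore_changes = [ssh_keys]
  }
}
```

The downside is that real ssh_keys changes would also be silently ignored until the lifecycle block is removed again.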
So the bug seems to be that terraform import hcloud_server.NAME SERVERID does not recognize and import the ssh keys of a server, and terraform therefore wants to recreate the server in order to "correct" this.
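For reference, the imports were done with commands of this shape (terraform import takes the resource address first, then the Hetzner server ID; the ID below is a placeholder):

```shell
# Placeholder ID; the real one comes from the Hetzner Cloud console
# or from `hcloud server list`.
terraform import hcloud_server.machine1 12345678
```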
What did you expect to happen?
Properly import the servers and recognize that the ssh-keys are already set as configured and do not need to be changed.
Please provide a minimal working example
```hcl
resource "hcloud_server" "machine1" {
  name               = "machine1"
  server_type        = "cx22"
  datacenter         = "nbg1-dc3"
  image              = "ubuntu-24.04"
  ssh_keys           = var.ssh_keys_root
  delete_protection  = true
  rebuild_protection = true

  lifecycle {
    prevent_destroy = true
  }

  connection {
    host  = self.ipv4_address
    agent = true
  }
}

variable "ssh_keys_root" {
  description = "SSH keys for root access"
  type        = list(string)
  default     = ["key1", "key2"]
}

resource "hcloud_ssh_key" "key1" {
  name       = "key1"
  public_key = file("key1.pub")
}

resource "hcloud_ssh_key" "key2" {
  name       = "key2"
  public_key = file("key2.pub")
}
```