Control plane/worker nodes with the same hostname fail to install #636

Open
@tonyd33

Description

With basically vanilla settings, the control-plane nodes fail to join the cluster at the step that verifies all nodes have actually joined.

I'm fairly new to setting up a k8s/k3s cluster from scratch, so assigning unique hostnames before running the playbooks may be an obvious prerequisite to those with more experience, but it cost me a few hours of headache before I finally dug into the k3s and flannel docs and realized this might be the issue.

As it turned out, running a playbook that assigns unique hostnames first resolved it for me.
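
For reference, what I ran was roughly the following. This is only a sketch: the group name (k3s_cluster) and the choice of deriving each hostname from the inventory name are my own assumptions, not something this repo prescribes.

```yaml
# set-hostnames.yml - sketch: give each node a unique hostname derived
# from its inventory name (group name and approach are assumptions)
- name: Assign unique hostnames to all cluster nodes
  hosts: k3s_cluster
  become: true
  tasks:
    - name: Set the system hostname to the short inventory name
      ansible.builtin.hostname:
        name: "{{ inventory_hostname_short }}"
```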

While I'm not sure whether the site.yml playbook should be changing the hostnames automatically to achieve this, I think it would be helpful for future users to have an assertion that the hostnames are unique, perhaps in the "Pre tasks" block of site.yml alongside the Ansible version assert (see the sketch below). Even a warning about this somewhere in the documentation would go a long way.
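
Something along these lines could work as a pre-task. It is only a sketch under my assumptions (a k3s_cluster inventory group and facts gathered for every host so ansible_hostname is available), not necessarily how the playbook actually structures its plays:

```yaml
# site.yml (pre_tasks) - sketch: fail early if any two nodes share a hostname
- name: Verify hostname uniqueness before installing k3s
  hosts: k3s_cluster
  gather_facts: true
  pre_tasks:
    - name: Assert that all cluster hostnames are unique
      ansible.builtin.assert:
        that:
          # Compare the de-duplicated hostname count against the host count.
          - >-
            groups['k3s_cluster']
            | map('extract', hostvars, 'ansible_hostname')
            | list | unique | length == groups['k3s_cluster'] | length
        fail_msg: >-
          Two or more nodes report the same hostname. k3s/flannel need
          unique node names; set distinct hostnames before running site.yml.
      run_once: true
```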
