Output of `k0s sysinfo`:
Total memory: 15.5 GiB (pass)
File system of /var/lib/k0s: btrfs (pass)
Disk space available for /var/lib/k0s: 26.7 GiB (pass)
Relative disk space available for /var/lib/k0s: 26% (pass)
Name resolution: localhost: [::1 127.0.0.1] (pass)
Operating system: Linux (pass)
Linux kernel release: 6.11.5-300.fc41.x86_64 (pass)
Max. file descriptors per process: current: 524288 / max: 524288 (pass)
AppArmor: unavailable (pass)
Executable in PATH: modprobe: /usr/sbin/modprobe (pass)
Executable in PATH: mount: /usr/bin/mount (pass)
Executable in PATH: umount: /usr/bin/umount (pass)
/proc file system: mounted (0x9fa0) (pass)
Control Groups: version 2 (pass)
cgroup controller "cpu": available (is a listed root controller) (pass)
cgroup controller "cpuacct": available (via cpu in version 2) (pass)
cgroup controller "cpuset": available (is a listed root controller) (pass)
cgroup controller "memory": available (is a listed root controller) (pass)
cgroup controller "devices": available (device filters attachable) (pass)
cgroup controller "freezer": available (cgroup.freeze exists) (pass)
cgroup controller "pids": available (is a listed root controller) (pass)
cgroup controller "hugetlb": available (is a listed root controller) (pass)
cgroup controller "blkio": available (via io in version 2) (pass)
CONFIG_CGROUPS: Control Group support: built-in (pass)
CONFIG_CGROUP_FREEZER: Freezer cgroup subsystem: built-in (pass)
CONFIG_CGROUP_PIDS: PIDs cgroup subsystem: built-in (pass)
CONFIG_CGROUP_DEVICE: Device controller for cgroups: built-in (pass)
CONFIG_CPUSETS: Cpuset support: built-in (pass)
CONFIG_CGROUP_CPUACCT: Simple CPU accounting cgroup subsystem: built-in (pass)
CONFIG_MEMCG: Memory Resource Controller for Control Groups: built-in (pass)
CONFIG_CGROUP_HUGETLB: HugeTLB Resource Controller for Control Groups: built-in (pass)
CONFIG_CGROUP_SCHED: Group CPU scheduler: built-in (pass)
CONFIG_FAIR_GROUP_SCHED: Group scheduling for SCHED_OTHER: built-in (pass)
CONFIG_CFS_BANDWIDTH: CPU bandwidth provisioning for FAIR_GROUP_SCHED: built-in (pass)
CONFIG_BLK_CGROUP: Block IO controller: built-in (pass)
CONFIG_NAMESPACES: Namespaces support: built-in (pass)
CONFIG_UTS_NS: UTS namespace: built-in (pass)
CONFIG_IPC_NS: IPC namespace: built-in (pass)
CONFIG_PID_NS: PID namespace: built-in (pass)
CONFIG_NET_NS: Network namespace: built-in (pass)
CONFIG_NET: Networking support: built-in (pass)
CONFIG_INET: TCP/IP networking: built-in (pass)
CONFIG_IPV6: The IPv6 protocol: built-in (pass)
CONFIG_NETFILTER: Network packet filtering framework (Netfilter): built-in (pass)
CONFIG_NETFILTER_ADVANCED: Advanced netfilter configuration: built-in (pass)
CONFIG_NF_CONNTRACK: Netfilter connection tracking support: module (pass)
CONFIG_NETFILTER_XTABLES: Netfilter Xtables support: built-in (pass)
CONFIG_NETFILTER_XT_TARGET_REDIRECT: REDIRECT target support: module (pass)
CONFIG_NETFILTER_XT_MATCH_COMMENT: "comment" match support: module (pass)
CONFIG_NETFILTER_XT_MARK: nfmark target and match support: module (pass)
CONFIG_NETFILTER_XT_SET: set target and match support: module (pass)
CONFIG_NETFILTER_XT_TARGET_MASQUERADE: MASQUERADE target support: module (pass)
CONFIG_NETFILTER_XT_NAT: "SNAT and DNAT" targets support: module (pass)
CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: "addrtype" address type match support: module (pass)
CONFIG_NETFILTER_XT_MATCH_CONNTRACK: "conntrack" connection tracking match support: module (pass)
CONFIG_NETFILTER_XT_MATCH_MULTIPORT: "multiport" Multiple port match support: module (pass)
CONFIG_NETFILTER_XT_MATCH_RECENT: "recent" match support: module (pass)
CONFIG_NETFILTER_XT_MATCH_STATISTIC: "statistic" match support: module (pass)
CONFIG_NETFILTER_NETLINK: module (pass)
CONFIG_NF_NAT: module (pass)
CONFIG_IP_SET: IP set support: module (pass)
CONFIG_IP_SET_HASH_IP: hash:ip set support: module (pass)
CONFIG_IP_SET_HASH_NET: hash:net set support: module (pass)
CONFIG_IP_VS: IP virtual server support: module (pass)
CONFIG_IP_VS_NFCT: Netfilter connection tracking: built-in (pass)
CONFIG_IP_VS_SH: Source hashing scheduling: module (pass)
CONFIG_IP_VS_RR: Round-robin scheduling: module (pass)
CONFIG_IP_VS_WRR: Weighted round-robin scheduling: module (pass)
CONFIG_NF_CONNTRACK_IPV4: IPv4 connetion tracking support (required for NAT): unknown (warning)
CONFIG_NF_REJECT_IPV4: IPv4 packet rejection: module (pass)
CONFIG_NF_NAT_IPV4: IPv4 NAT: unknown (warning)
CONFIG_IP_NF_IPTABLES: IP tables support: module (pass)
CONFIG_IP_NF_FILTER: Packet filtering: module (pass)
CONFIG_IP_NF_TARGET_REJECT: REJECT target support: module (pass)
CONFIG_IP_NF_NAT: iptables NAT support: module (pass)
CONFIG_IP_NF_MANGLE: Packet mangling: module (pass)
CONFIG_NF_DEFRAG_IPV4: module (pass)
CONFIG_NF_CONNTRACK_IPV6: IPv6 connetion tracking support (required for NAT): unknown (warning)
CONFIG_NF_NAT_IPV6: IPv6 NAT: unknown (warning)
CONFIG_IP6_NF_IPTABLES: IP6 tables support: module (pass)
CONFIG_IP6_NF_FILTER: Packet filtering: module (pass)
CONFIG_IP6_NF_MANGLE: Packet mangling: module (pass)
CONFIG_IP6_NF_NAT: ip6tables NAT support: module (pass)
CONFIG_NF_DEFRAG_IPV6: module (pass)
CONFIG_BRIDGE: 802.1d Ethernet Bridging: module (pass)
CONFIG_LLC: module (pass)
CONFIG_STP: module (pass)
CONFIG_EXT4_FS: The Extended 4 (ext4) filesystem: built-in (pass)
CONFIG_PROC_FS: /proc file system support: built-in (pass)
What happened?
Hi!
I have a single-node k0s cluster that has been running for several months. Today I configured tailscale on the host; when I first started it, k0s continued to work without any problems, but after a restart of k0s it stopped working.
In particular, kubeapi continues to run, but the kube-router and metallb-controller pods cannot complete their startup: both fail their readiness probes.
If I try to get the logs of kube-router, kubeapi returns the following error:
Error from server: Get "https://100.96.48.75:10250/containerLogs/kube-system/kube-router-nhz22/kube-router": dial tcp 100.96.48.75:10250: i/o timeout
You can see that kubeapi now tries to contact the kubelet through the host's IP on the tailscale network instead of the IP of the physical interface.
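A quick way to confirm which address kubeapi will use is to check the InternalIP the node advertises (a sketch; `k0s kubectl` ships with k0s, and the node name is a placeholder):

```sh
# INTERNAL-IP is the address kubeapi uses to reach the kubelet; here it
# shows the tailscale 100.x address instead of the physical one.
sudo k0s kubectl get nodes -o wide

# Or print just the advertised addresses (node name is hypothetical):
sudo k0s kubectl get node my-node -o jsonpath='{.status.addresses}'
```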
Possible workaround: disabling tailscale, restarting k0s, and then restarting tailscale is enough to get the system working again.
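For reference, the workaround spelled out as commands (a sketch that assumes tailscaled runs under systemd; adjust to your service manager):

```sh
sudo systemctl stop tailscaled    # take the tailscale interface down
sudo k0s stop
sudo k0s start                    # kubelet advertises the physical interface again
sudo systemctl start tailscaled   # bring tailscale back up afterwards
```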
I have not been able to force k0s to bind the kubelet to the host IP of the physical interface while ignoring the tailscale interface.
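For illustration, this is the kind of override one might expect to pin the kubelet address (a sketch, untested: it assumes `--kubelet-extra-args` is honored in single-node controller mode, and 10.46.34.1 stands in for the physical interface IP):

```sh
# Pass --node-ip to the embedded kubelet so it advertises the physical
# interface instead of the tailscale one (flag pass-through is an assumption;
# on an existing install the service would have to be recreated first).
sudo k0s install controller --single \
  --kubelet-extra-args="--node-ip=10.46.34.1"
sudo k0s start
```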
Steps to reproduce
Have a working k0s single-node cluster in which kubeapi contacts the kubelet through a physical network interface (e.g. 10.46.34.1/24)
tailscale is configured on the same host but is not running
k0s stop
Start the tailscaled service, which creates the virtual tailscale network interface
k0s start
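The same sequence as a script (assuming tailscaled is managed via systemd):

```sh
sudo k0s stop
sudo systemctl start tailscaled   # creates the virtual tailscale interface
sudo k0s start                    # kubelet now advertises the tailscale IP
```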
Expected behavior
k0s, after tailscale is started, continues to use the physical host interface for the kubelet.
Actual behavior
kubeapi tries to contact the kubelet on the virtual network interface, and it appears that the kubelet does not respond on it.
Screenshots and logs
No response
Additional context
Before tailscale was installed, the host had a WireGuard interface, but it never interfered with the kubelet or k0s.
Platform
Linux 6.11.5-300.fc41.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Oct 22 20:11:15 UTC 2024 x86_64 GNU/Linux
NAME="Fedora Linux"
VERSION="41.20241104.1 (IoT Edition)"
RELEASE_TYPE=stable
ID=fedora
VERSION_ID=41
VERSION_CODENAME=""
PLATFORM_ID="platform:f41"
PRETTY_NAME="Fedora Linux 41.20241104.1 (IoT Edition)"
CPE_NAME="cpe:/o:fedoraproject:fedora:41"
HOME_URL="https://fedoraproject.org/"
DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f41/system-administrators-guide/"
SUPPORT_URL="https://ask.fedoraproject.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=41
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=41
SUPPORT_END=2025-05-13
VARIANT="IoT Edition"
VARIANT_ID=iot
OSTREE_VERSION='41.20241104.1'
Version
v1.31.2+k0s.0
Sysinfo
(see the `k0s sysinfo` output at the top of this report)
`k0s.yaml`
tailscale version: 1.76.1
Updates
I applied the workaround, but after some hours kubeapi again tried to use the IP of the tailscale interface to retrieve logs.