Description
Overview
Kube-bench tests 4.2.1, 4.2.2, and 4.2.3 fail on K3s clusters with exit status 1 even when the cluster is correctly configured and secured. These tests search for security flags (--anonymous-auth, --authorization-mode, --client-ca-file) by grepping the "Running kubelet" message in journal logs. However, K3s configures the kubelet via YAML configuration files loaded through --config-dir, not via command-line arguments, so these security flags never appear in the journal logs. The result is spurious FAIL results despite the cluster being properly secured according to the CIS benchmark.
How did you run kube-bench?
Kube-bench was run as a Kubernetes Job using the following configuration:
---
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        app: kube-bench
    spec:
      hostPID: true
      containers:
        - name: kube-bench
          image: docker.io/aquasec/kube-bench:v0.13.0-ubi
          command: ["kube-bench", "--json", "--benchmark", "k3s-cis-1.7"]
          securityContext:
            privileged: true
            runAsUser: 0
            readOnlyRootFilesystem: false
          volumeMounts:
            - name: var-lib-rancher
              mountPath: /var/lib/rancher
              readOnly: true
            - name: app-data-rancher
              mountPath: /app/data/rancher
              readOnly: true
            - name: etc-systemd
              mountPath: /etc/systemd
              readOnly: true
            - name: lib-systemd
              mountPath: /lib/systemd/
              readOnly: true
            - name: srv-kubernetes
              mountPath: /srv/kubernetes/
              readOnly: true
            - name: etc-kubernetes
              mountPath: /etc/kubernetes
              readOnly: true
            - name: usr-bin
              mountPath: /usr/local/mount-from-host/bin
              readOnly: true
            - name: etc-cni-netd
              mountPath: /etc/cni/net.d/
              readOnly: true
            - name: opt-cni-bin
              mountPath: /opt/cni/bin/
              readOnly: true
            - name: etc-passwd
              mountPath: /etc/passwd
              readOnly: true
            - name: etc-group
              mountPath: /etc/group
              readOnly: true
            - name: run-log
              mountPath: /run/log
              readOnly: true
      restartPolicy: Never
      volumes:
        - name: var-lib-rancher
          hostPath:
            path: "/var/lib/rancher"
        - name: app-data-rancher
          hostPath:
            path: "/app/data/rancher"
        - name: etc-systemd
          hostPath:
            path: "/etc/systemd"
        - name: lib-systemd
          hostPath:
            path: "/lib/systemd"
        - name: srv-kubernetes
          hostPath:
            path: "/srv/kubernetes"
        - name: etc-kubernetes
          hostPath:
            path: "/etc/kubernetes"
        - name: usr-bin
          hostPath:
            path: "/usr/bin"
        - name: etc-cni-netd
          hostPath:
            path: "/etc/cni/net.d/"
        - name: opt-cni-bin
          hostPath:
            path: "/opt/cni/bin/"
        - name: etc-passwd
          hostPath:
            path: "/etc/passwd"
        - name: etc-group
          hostPath:
            path: "/etc/group"
        - name: run-log
          hostPath:
            path: "/run/log"

What happened?
Tests 4.2.1, 4.2.2, and 4.2.3 all failed with exit status 1 and empty output:
{
  "test_number": "4.2.1",
  "test_desc": "Ensure that the --anonymous-auth argument is set to false (Automated)",
  "audit": "/bin/sh -c 'if test $(journalctl -m -u k3s | grep \"Running kubelet\" | wc -l) -gt 0; then journalctl -m -u k3s -u k3s-agent | grep \"Running kubelet\" | tail -n1 | grep \"anonymous-auth\" | grep -v grep; else echo \"--anonymous-auth=false\"; fi'",
  "type": "",
  "remediation": "If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to false.\nIf using executable arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.\n--anonymous-auth=false\nBased on your system, restart the kubelet service. For example:\nsystemctl daemon-reload\nsystemctl restart kubelet.service\n",
  "test_info": [
    "If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to false.\nIf using executable arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.\n--anonymous-auth=false\nBased on your system, restart the kubelet service. For example:\nsystemctl daemon-reload\nsystemctl restart kubelet.service\n"
  ],
  "status": "FAIL",
  "actual_value": "",
  "scored": true,
  "IsMultiple": false,
  "expected_result": "",
  "reason": "failed to run: \"/bin/sh -c 'if test $(journalctl -m -u k3s | grep \\\"Running kubelet\\\" | wc -l) -gt 0; then journalctl -m -u k3s -u k3s-agent | grep \\\"Running kubelet\\\" | tail -n1 | grep \\\"anonymous-auth\\\" | grep -v grep; else echo \\\"--anonymous-auth=false\\\"; fi'\", output: \"\", error: exit status 1"
}

Similar failures occurred for tests 4.2.2 and 4.2.3.
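The exit status 1 comes directly from grep: the audit pipeline's final grep "anonymous-auth" finds no match in the "Running kubelet" line and exits 1, which the shell propagates as the test failure. A minimal demonstration (the journal line is abbreviated here):

$ echo 'Running kubelet --cloud-provider=external --config-dir=/var/lib/rancher/k3s/agent/etc/kubelet.conf.d' | grep "anonymous-auth"
$ echo $?
1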
Example journal log messages from two different K3s clusters:
Cluster 1 (with node labels - log truncated):
Oct 03 09:58:41 cluster-1 k3s[2977124]: time="2025-10-03T09:58:41Z" level=info msg="Running kubelet --cloud-provider=external --config-dir=/app/data/rancher/k3s/agent/etc/kubelet.conf.d --containerd=/run/k3s/containerd/containerd.sock --hostname-override=cluster-1 --kubeconfig=/app/data/rancher/k3s/agent/kubelet.kubeconfig --node-ip=10.44.33.50 --node-labels="
Cluster 2 (without node labels - complete log):
Oct 02 00:26:34 cluster-2 k3s[2103247]: time="2025-10-02T00:26:34Z" level=info msg="Running kubelet --cloud-provider=external --config-dir=/var/lib/rancher/k3s/agent/etc/kubelet.conf.d --containerd=/run/k3s/containerd/containerd.sock --hostname-override=node-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --make-iptables-util-chains=true --node-ip=192.168.182.43 --node-labels= --read-only-port=0 --streaming-connection-idle-timeout=4h"
Key observation: Neither log contains --anonymous-auth, --authorization-mode, or --client-ca-file flags.
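To confirm which flags do appear, a rough one-liner like this can be run on a node (it assumes journalctl access; the regex is an approximate flag matcher, not part of kube-bench):

$ journalctl -m -u k3s -u k3s-agent | grep "Running kubelet" | tail -n1 | grep -oE -- '--[a-z-]+' | sort -u
# lists --cloud-provider, --config-dir, --containerd, etc., but none of the three audited security flags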
What did you expect to happen:
Tests should PASS when the cluster is correctly configured according to CIS benchmarks. The security settings exist in K3s kubelet configuration files and are properly enforced, as verified by:
- Runtime verification - anonymous auth is disabled:

  $ curl -k https://localhost:10250/metrics
  Unauthorized

- Config file shows correct security settings:

  $ cat /var/lib/rancher/k3s/agent/etc/kubelet.conf.d/00-k3s-defaults.conf
  authentication:
    anonymous:
      enabled: false
    webhook:
      enabled: true
    x509:
      clientCAFile: /var/lib/rancher/k3s/agent/client-ca.crt
  authorization:
    mode: Webhook

- Certificate files exist with proper permissions:

  $ ls -la /var/lib/rancher/k3s/agent/client-ca.crt
  -rw------- 1 root root 1131 Oct 3 09:58 /var/lib/rancher/k3s/agent/client-ca.crt
The cluster is properly secured, but the tests fail because they're looking in the wrong place (journal logs instead of config files).
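The kubelet's effective (merged) configuration can also be confirmed through the API server's node proxy, which reads the running kubelet rather than logs. A hedged check, assuming kubectl access with node-proxy permission and jq installed:

$ NODE=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
$ kubectl get --raw "/api/v1/nodes/${NODE}/proxy/configz" | jq '.kubeletconfig.authentication.anonymous'
{
  "enabled": false
}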
Environment
Kube-bench version:
kube-bench version: 0.13.0
Also reproduced with v0.9.4-ubi
Kubernetes version:
$ kubectl version --short
Client Version: v1.32.2+k3s1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.32.2+k3s1
Distribution: K3s v1.32.2+k3s1
CIS Benchmark: k3s-cis-1.7
Running processes
$ ps -eaf | grep kube
root 2977124 1 14 09:58 ? 00:32:34 /usr/local/bin/k3s server --cluster-init --data-dir /app/data/rancher/k3s --token changeme! --disable=traefik --embedded-registry

Key detail: K3s runs as a single binary that embeds the kubelet. The kubelet is not a separate process with visible command-line arguments.
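A quick check on a node confirms this (pgrep exits 1 when no process with that exact name exists):

$ pgrep -x kubelet; echo $?
1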
Configuration files
K3s kubelet configuration at /var/lib/rancher/k3s/agent/etc/kubelet.conf.d/00-k3s-defaults.conf (on Cluster 1, it's at /app/data/rancher/k3s/agent/etc/kubelet.conf.d/00-k3s-defaults.conf):
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /var/lib/rancher/k3s/agent/client-ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
clusterDNS:
  - 10.43.0.10
clusterDomain: cluster.local
kind: KubeletConfiguration
rotateCertificates: true
runtimeRequestTimeout: 15m0s
shutdownGracePeriod: 30s
shutdownGracePeriodCriticalPods: 10s
streamingConnectionIdleTimeout: 5m0s
tlsCertFile: /var/lib/rancher/k3s/agent/serving-kubelet.crt
tlsPrivateKeyFile: /var/lib/rancher/k3s/agent/serving-kubelet.key

How K3s configures kubelet:
K3s uses the --config-dir flag (visible in journal logs) to load kubelet configuration from YAML files:
--config-dir=/var/lib/rancher/k3s/agent/etc/kubelet.conf.d
This means security-critical flags like --anonymous-auth, --authorization-mode, and --client-ca-file are not passed as command-line arguments and therefore do not appear in the "Running kubelet" journal message.
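For clarity, here is the rough correspondence between the flag form the audits grep for and the declarative form K3s actually writes (the YAML below is excerpted from 00-k3s-defaults.conf above; the flag line is what the audit would need to see):

# Flags the audits expect on the kubelet command line (absent on K3s):
#   --anonymous-auth=false --authorization-mode=Webhook --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt
# Equivalent KubeletConfiguration that K3s loads via --config-dir:
authentication:
  anonymous:
    enabled: false
  x509:
    clientCAFile: /var/lib/rancher/k3s/agent/client-ca.crt
authorization:
  mode: Webhook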
Anything else you would like to add:
- Test 4.2.4 (--read-only-port) works correctly on some K3s clusters because this flag is sometimes passed as a command-line argument and appears in journal logs.
- This affects all K3s deployments regardless of how they're configured, because K3s's architecture uses config files for kubelet settings.
- Tests 4.2.9 and 4.2.11 also fail on K3s for similar architectural reasons (K3s manages certificates differently than standard Kubernetes).
- Reproduction without a live cluster: you can reproduce this issue by simulating K3s journal logs:

#!/bin/bash
# Simulate K3s journal log (without security flags in the command line)
cat > /tmp/fake-k3s-journal.txt << 'EOF'
Oct 03 09:58:41 cluster-1 k3s[2977124]: time="2025-10-03T09:58:41Z" level=info msg="Running kubelet --cloud-provider=external --config-dir=/app/data/rancher/k3s/agent/etc/kubelet.conf.d --containerd=/run/k3s/containerd/containerd.sock --hostname-override=cluster-1 --kubeconfig=/app/data/rancher/k3s/agent/kubelet.kubeconfig --node-ip=10.44.33.50 --node-labels="
EOF

# Run the audit command
/bin/sh -c 'if test $(cat /tmp/fake-k3s-journal.txt | grep "Running kubelet" | wc -l) -gt 0; then cat /tmp/fake-k3s-journal.txt | grep "Running kubelet" | tail -n1 | grep "anonymous-auth" | grep -v grep; else echo "--anonymous-auth=false"; fi'
echo "Exit code: $?"

# Result: Exit code 1 (FAIL) despite correct configuration
- Impact:
  - Security teams may incorrectly conclude K3s clusters are insecure
  - Compliance reports show false failures
  - Manual verification is required to confirm the actual security posture
  - CI/CD pipelines may fail unnecessarily
- Related issue: #1501 ("CIS-1.6-k3s benchmark does not match K3s documentation and attempts to remediate causes K3s to fail to start") discusses other K3s CIS benchmark issues.
The core issue is that kube-bench's audit methodology for these tests assumes kubelet security flags are passed as command-line arguments, which is not how K3s implements kubelet configuration. A K3s-aware approach that checks configuration files would correctly identify the security posture.
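For illustration only, a config-aware audit for 4.2.1 might look like the sketch below. This is a sketch under assumptions: the drop-in path is the default K3s data dir (Cluster 1 above uses a custom --data-dir, so the path would need to be derived), and the grep-based YAML matching is approximate; a YAML-aware check would be more robust.

/bin/sh -c '
  conf_dir=/var/lib/rancher/k3s/agent/etc/kubelet.conf.d
  if [ -d "$conf_dir" ]; then
    # Check the kubelet drop-in config files K3s actually uses
    grep -A2 "anonymous:" "$conf_dir"/*.conf | grep -q "enabled: false" \
      && echo "--anonymous-auth=false"
  else
    # Fall back to the existing journal-based check
    journalctl -m -u k3s -u k3s-agent | grep "Running kubelet" | tail -n1 | grep "anonymous-auth"
  fi
'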