
Error when executing quick start #1428

@Gidi233

Description

What happened:
When I try to run the quick start, the log repeatedly shows:
level=info msg="[PROBE] ERR: on connect: bpf_sk_storage_get failed\n" subsys=ebpf
Is this normal?
What you expected to happen:

How to reproduce it (as minimally and precisely as possible):
The following commands were executed (a short verification sketch follows the block):

kubectl create namespace istio-system
helm install istio-base istio/base -n istio-system
helm install istiod istio/istiod --namespace istio-system --set pilot.env.PILOT_ENABLE_AMBIENT=true
kubectl get crd gateways.gateway.networking.k8s.io &> /dev/null || \
  { kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd/experimental?ref=444631bfe06f3bcca5d0eadf1857eac1d369421d" | kubectl apply -f -; }
helm install kmesh ./deploy/charts/kmesh-helm -n kmesh-system --create-namespace
kubectl label namespace default istio.io/dataplane-mode=Kmesh
kubectl apply -f ./samples/httpbin/httpbin.yaml
kubectl apply -f ./samples/sleep/sleep.yaml
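
To confirm the samples came under Kmesh management and still have connectivity, here is a minimal verification sketch (assuming the standard httpbin/sleep sample manifests; the pod names in your cluster will differ):

# check the Kmesh daemon pods and the annotations added to the sample pods
kubectl get pods -n kmesh-system -o wide
kubectl get pods -n default -o custom-columns=NAME:.metadata.name,ANNOTATIONS:.metadata.annotations
# confirm traffic still flows between the samples despite the probe message
kubectl exec deploy/sleep -n default -- curl -s http://httpbin:8000/headers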

Full log:

time="2025-05-26T18:32:00Z" level=info msg="FLAG: --bpf-fs-path=\"/sys/fs/bpf\"" subsys=manager
time="2025-05-26T18:32:00Z" level=info msg="FLAG: --cgroup2-path=\"/mnt/kmesh_cgroup2\"" subsys=manager
time="2025-05-26T18:32:00Z" level=info msg="FLAG: --cni-etc-path=\"/etc/cni/net.d\"" subsys=manager
time="2025-05-26T18:32:00Z" level=info msg="FLAG: --conflist-name=\"\"" subsys=manager
time="2025-05-26T18:32:00Z" level=info msg="FLAG: --enable-bypass=\"false\"" subsys=manager
time="2025-05-26T18:32:00Z" level=info msg="FLAG: --enable-ipsec=\"false\"" subsys=manager
time="2025-05-26T18:32:00Z" level=info msg="FLAG: --enable-mda=\"false\"" subsys=manager
time="2025-05-26T18:32:00Z" level=info msg="FLAG: --enable-secret-manager=\"false\"" subsys=manager
time="2025-05-26T18:32:00Z" level=info msg="FLAG: --help=\"false\"" subsys=manager
time="2025-05-26T18:32:00Z" level=info msg="FLAG: --mode=\"dual-engine\"" subsys=manager
time="2025-05-26T18:32:00Z" level=info msg="FLAG: --monitoring=\"true\"" subsys=manager
time="2025-05-26T18:32:00Z" level=info msg="FLAG: --plugin-cni-chained=\"true\"" subsys=manager
time="2025-05-26T18:32:00Z" level=info msg="FLAG: --profiling=\"false\"" subsys=manager
time="2025-05-26T18:32:00Z" level=info msg="kmesh start with Normal" subsys=bpf
time="2025-05-26T18:32:00Z" level=info msg="bpf loader start successfully" subsys=manager
time="2025-05-26T18:32:00Z" level=info msg="start kmesh manage controller successfully" subsys=controller
time="2025-05-26T18:32:00Z" level=info msg="proxy ztunnel~10.244.0.6~kmesh-frj7j.kmesh-system~kmesh-system.svc.cluster.local connect to discovery address istiod.istio-system.svc:15012" subsys=controller/config
time="2025-05-26T18:32:00Z" level=info msg="Disabling Kmesh manage for all pods in namespace: default" subsys=manage_controller
time="2025-05-26T18:32:00Z" level=info msg="Disabling Kmesh manage for all pods in namespace: istio-system" subsys=manage_controller
time="2025-05-26T18:32:00Z" level=info msg="Disabling Kmesh manage for all pods in namespace: kmesh-system" subsys=manage_controller
time="2025-05-26T18:32:00Z" level=info msg="Disabling Kmesh manage for all pods in namespace: kube-node-lease" subsys=manage_controller
time="2025-05-26T18:32:00Z" level=info msg="Disabling Kmesh manage for all pods in namespace: kube-public" subsys=manage_controller
time="2025-05-26T18:32:00Z" level=info msg="Disabling Kmesh manage for all pods in namespace: kube-system" subsys=manage_controller
time="2025-05-26T18:32:00Z" level=info msg="Disabling Kmesh manage for all pods in namespace: local-path-storage" subsys=manage_controller
time="2025-05-26T18:32:00Z" level=info msg="controller start successfully" subsys=manager
time="2025-05-26T18:32:00Z" level=info msg="start write CNI config" subsys="cni installer"
time="2025-05-26T18:32:00Z" level=info msg="kmesh cni use chained\n" subsys="cni installer"
time="2025-05-26T18:32:00Z" level=info msg="reload authz config from last epoch" subsys=workload_controller
time="2025-05-26T18:32:01Z" level=info msg="Copied /usr/bin/kmesh-cni to /opt/cni/bin." subsys="cni installer"
time="2025-05-26T18:32:01Z" level=info msg="wrote kubeconfig file /etc/cni/net.d/kmesh-cni-kubeconfig" subsys="cni installer"
time="2025-05-26T18:32:01Z" level=info msg="cni config file: /etc/cni/net.d/10-kindnet.conflist" subsys="cni installer"
time="2025-05-26T18:32:01Z" level=info msg="start cni successfully" subsys=manager
time="2025-05-26T18:32:01Z" level=info msg="start watching file /var/run/secrets/kubernetes.io/serviceaccount/token" subsys="cni installer"
time="2025-05-26T18:50:42Z" level=info msg="Enabling Kmesh for all pods in namespace: default" subsys=manage_controller
time="2025-05-26T18:50:48Z" level=info msg="[PROBE] ERR: on connect: bpf_sk_storage_get failed\n" subsys=ebpf
time="2025-05-26T18:50:48Z" level=info msg="[PROBE] ERR: on connect: bpf_sk_storage_get failed\n" subsys=ebpf
time="2025-05-26T18:52:11Z" level=info msg="[PROBE] ERR: on connect: bpf_sk_storage_get failed\n" subsys=ebpf
time="2025-05-26T18:52:11Z" level=info msg="add annotation for pod default/httpbin-65975d4c6f-nkgc8" subsys=manage_controller
time="2025-05-26T18:52:26Z" level=info msg="[PROBE] ERR: on connect: bpf_sk_storage_get failed\n" subsys=ebpf
time="2025-05-26T18:52:26Z" level=info msg="add annotation for pod default/sleep-7656cf8794-gh9kf" subsys=manage_controller

Anything else we need to know?:

Environment:
The cluster was created using kind.
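
A hedged sketch of the cluster setup (the kind version and node image were not recorded; the cluster name here is only an example):

kind create cluster --name kmesh-testing
kubectl cluster-info --context kind-kmesh-testing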

Labels

kind/bug (Something isn't working)
