Traefik + linkerd gateway-api CRD collision #13334

Closed

brandonros opened this issue Nov 15, 2024 · 4 comments

@brandonros (Contributor)

What is the issue?

Deploying linkerd-control-plane version 2024.11.3 does not work because the policy container in the linkerd-destination deployment goes into CrashLoopBackOff.
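
To see the crash loop and grab the policy container's last logs, standard kubectl (the label selector comes from the pod spec posted below):

$ kubectl -n linkerd get pods -l linkerd.io/control-plane-component=destination
$ kubectl -n linkerd logs deploy/linkerd-destination -c policy --previous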

How can it be reproduced?

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: linkerd-self-signed-issuer
  namespace: cert-manager
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: linkerd-trust-anchor
  namespace: cert-manager
spec:
  isCA: true
  commonName: root.linkerd.cluster.local
  secretName: linkerd-identity-trust-roots
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: linkerd-self-signed-issuer
    kind: ClusterIssuer
    group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: linkerd-trust-anchor
  namespace: cert-manager
spec:
  ca:
    secretName: linkerd-identity-trust-roots
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: linkerd-crds
  namespace: kube-system
spec:
  repo: https://helm.linkerd.io/edge
  chart: linkerd-crds
  version: 2024.11.3
  targetNamespace: linkerd
  createNamespace: true
  valuesContent: |-
    # TODO: CRD collision as per https://github.com/linkerd/linkerd2/issues/12232
    enableHttpRoutes: false
    enableTcpRoutes: false
    enableTlsRoutes: false
---
apiVersion: v1
kind: Namespace
metadata:
  name: linkerd
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: linkerd-identity-issuer
  namespace: linkerd
spec:
  secretName: linkerd-identity-issuer
  duration: 48h
  renewBefore: 25h
  issuerRef:
    name: linkerd-trust-anchor
    kind: ClusterIssuer
  commonName: identity.linkerd.cluster.local
  dnsNames:
  - identity.linkerd.cluster.local
  isCA: true
  privateKey:
    algorithm: ECDSA
  usages:
  - cert sign
  - crl sign
  - server auth
  - client auth
---
apiVersion: trust.cert-manager.io/v1alpha1
kind: Bundle
metadata:
  name: linkerd-identity-trust-roots
  namespace: linkerd
spec:
  sources:
  - secret:
      name: "linkerd-identity-trust-roots"
      key: "ca.crt"
  target:
    configMap:
      key: "ca-bundle.crt"
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: linkerd-control-plane
  namespace: kube-system
spec:
  repo: https://helm.linkerd.io/edge
  chart: linkerd-control-plane
  version: 2024.11.3
  targetNamespace: linkerd
  createNamespace: true
  valuesContent: |-
    prometheusUrl: "http://kube-prometheus-stack-prometheus.monitoring.svc.cluster.local:9090"
    identity:
      issuer:
        scheme: kubernetes.io/tls
      externalCA: true
    podMonitor:
      enabled: true
      controller:
        enabled: true
        namespaceSelector: |
          matchNames:
            - {{ .Release.Namespace }}
            - traefik
            - chess-bot
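
To check which Gateway API CRDs the cluster already has and which versions they serve (same kind of check as the grpcroutes one in the comments below, not part of the repro itself):

$ kubectl get crd | grep gateway.networking.k8s.io
$ kubectl get crd httproutes.gateway.networking.k8s.io -o json | jq -r '.spec.versions[] | .name + " served=" + (.served | tostring)'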

Logs, error output, etc

2024-11-15T14:41:15.476983Z  INFO linkerd_policy_controller: Lease already exists, no need to create it
thread 'main' panicked at policy-controller/src/main.rs:522:10:
Failed to list API group resources: Api(ErrorResponse { status: "404 Not Found", message: "\"404 page not found\\n\"", reason: "Failed to parse error data", code: 404 })
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
2024-11-15T14:41:15.609899Z  WARN kube_client::client: Unsuccessful data error parse: 404 page not found

Stream closed EOF for linkerd/linkerd-destination-88c64746d-pg9mw (policy)
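
The discovery call the policy controller panics on can be hit directly; the URLs come from the trace logs below, and the output will depend on which CRD versions the cluster serves:

$ kubectl get --raw /apis/gateway.networking.k8s.io/v1
$ kubectl get --raw /apis/gateway.networking.k8s.io/v1alpha2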

output of linkerd check -o short

kubernetes-api

can query the Kubernetes API

kubernetes-version

is running the minimum Kubernetes API version

linkerd-existence

'linkerd-config' config map exists

heartbeat ServiceAccount exist

control plane replica sets are ready

no unschedulable pods

control plane pods are ready

pod/linkerd-destination-7bcddb76f9-tdn5p container policy is not ready

see https://linkerd.io/2/checks/#l5d-api-control-ready for hints

Environment

  • Kubernetes version: v1.30.6+k3s1 (k3s)

Possible solution

No response

Additional context

No response

Would you like to work on fixing this bug?

None

@brandonros brandonros added the bug label Nov 15, 2024
@brandonros (Contributor, Author)

linkerd-destination pod YAML (from the deployment):

apiVersion: v1
kind: Pod
metadata:
  annotations:
    checksum/config: 479771c2ce6010a2faf0a4bec704170432415bab2da2efbe31f23537740ef63a
    cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
    config.linkerd.io/default-inbound-policy: all-unauthenticated
    linkerd.io/created-by: linkerd/helm edge-24.11.3
    linkerd.io/proxy-version: edge-24.11.3
    linkerd.io/trust-root-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
    viz.linkerd.io/tap-enabled: "true"
  creationTimestamp: "2024-11-15T14:47:48Z"
  generateName: linkerd-destination-8656f6cbdd-
  labels:
    linkerd.io/control-plane-component: destination
    linkerd.io/control-plane-ns: linkerd
    linkerd.io/proxy-deployment: linkerd-destination
    linkerd.io/workload-ns: linkerd
    pod-template-hash: 8656f6cbdd
  name: linkerd-destination-8656f6cbdd-lpv9r
  namespace: linkerd
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: linkerd-destination-8656f6cbdd
    uid: fbd44058-4794-4133-acd4-0ce2bb00f1a5
  resourceVersion: "2939"
  uid: 96590179-48a0-424b-ae3e-ccf183542fef
spec:
  automountServiceAccountToken: false
  containers:
  - env:
    - name: _pod_name
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: _pod_ns
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: _pod_nodeName
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.nodeName
    - name: LINKERD2_PROXY_SHUTDOWN_ENDPOINT_ENABLED
      value: "false"
    - name: LINKERD2_PROXY_LOG
      value: warn,linkerd=info,hickory=error,[{headers}]=off,[{request}]=off
    - name: LINKERD2_PROXY_LOG_FORMAT
      value: plain
    - name: LINKERD2_PROXY_DESTINATION_SVC_ADDR
      value: localhost.:8086
    - name: LINKERD2_PROXY_DESTINATION_PROFILE_NETWORKS
      value: 10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16,fd00::/8
    - name: LINKERD2_PROXY_POLICY_SVC_ADDR
      value: localhost.:8090
    - name: LINKERD2_PROXY_POLICY_WORKLOAD
      value: |
        {"ns":"$(_pod_ns)", "pod":"$(_pod_name)"}
    - name: LINKERD2_PROXY_INBOUND_DEFAULT_POLICY
      value: all-unauthenticated
    - name: LINKERD2_PROXY_POLICY_CLUSTER_NETWORKS
      value: 10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16,fd00::/8
    - name: LINKERD2_PROXY_CONTROL_STREAM_INITIAL_TIMEOUT
      value: 3s
    - name: LINKERD2_PROXY_CONTROL_STREAM_IDLE_TIMEOUT
      value: 5m
    - name: LINKERD2_PROXY_CONTROL_STREAM_LIFETIME
      value: 1h
    - name: LINKERD2_PROXY_INBOUND_CONNECT_TIMEOUT
      value: 100ms
    - name: LINKERD2_PROXY_OUTBOUND_CONNECT_TIMEOUT
      value: 1000ms
    - name: LINKERD2_PROXY_OUTBOUND_DISCOVERY_IDLE_TIMEOUT
      value: 5s
    - name: LINKERD2_PROXY_INBOUND_DISCOVERY_IDLE_TIMEOUT
      value: 90s
    - name: LINKERD2_PROXY_CONTROL_LISTEN_ADDR
      value: 0.0.0.0:4190
    - name: LINKERD2_PROXY_ADMIN_LISTEN_ADDR
      value: 0.0.0.0:4191
    - name: LINKERD2_PROXY_OUTBOUND_LISTEN_ADDR
      value: 127.0.0.1:4140
    - name: LINKERD2_PROXY_OUTBOUND_LISTEN_ADDRS
      value: 127.0.0.1:4140
    - name: LINKERD2_PROXY_INBOUND_LISTEN_ADDR
      value: 0.0.0.0:4143
    - name: LINKERD2_PROXY_INBOUND_IPS
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.podIPs
    - name: LINKERD2_PROXY_INBOUND_PORTS
      value: 8086,8090,8443,9443,9990,9996,9997
    - name: LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES
      value: svc.cluster.local.
    - name: LINKERD2_PROXY_INBOUND_ACCEPT_KEEPALIVE
      value: 10000ms
    - name: LINKERD2_PROXY_OUTBOUND_CONNECT_KEEPALIVE
      value: 10000ms
    - name: LINKERD2_PROXY_INBOUND_ACCEPT_USER_TIMEOUT
      value: 30s
    - name: LINKERD2_PROXY_OUTBOUND_CONNECT_USER_TIMEOUT
      value: 30s
    - name: LINKERD2_PROXY_INBOUND_SERVER_HTTP2_KEEP_ALIVE_INTERVAL
      value: 10s
    - name: LINKERD2_PROXY_INBOUND_SERVER_HTTP2_KEEP_ALIVE_TIMEOUT
      value: 3s
    - name: LINKERD2_PROXY_OUTBOUND_SERVER_HTTP2_KEEP_ALIVE_INTERVAL
      value: 10s
    - name: LINKERD2_PROXY_OUTBOUND_SERVER_HTTP2_KEEP_ALIVE_TIMEOUT
      value: 3s
    - name: LINKERD2_PROXY_INBOUND_PORTS_DISABLE_PROTOCOL_DETECTION
      value: 25,587,3306,4444,5432,6379,9300,11211
    - name: LINKERD2_PROXY_DESTINATION_CONTEXT
      value: |
        {"ns":"$(_pod_ns)", "nodeName":"$(_pod_nodeName)", "pod":"$(_pod_name)"}
    - name: _pod_sa
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: spec.serviceAccountName
    - name: _l5d_ns
      value: linkerd
    - name: _l5d_trustdomain
      value: cluster.local
    - name: LINKERD2_PROXY_IDENTITY_DIR
      value: /var/run/linkerd/identity/end-entity
    - name: LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS
      valueFrom:
        configMapKeyRef:
          key: ca-bundle.crt
          name: linkerd-identity-trust-roots
    - name: LINKERD2_PROXY_IDENTITY_TOKEN_FILE
      value: /var/run/secrets/tokens/linkerd-identity-token
    - name: LINKERD2_PROXY_IDENTITY_SVC_ADDR
      value: linkerd-identity-headless.linkerd.svc.cluster.local.:8080
    - name: LINKERD2_PROXY_IDENTITY_LOCAL_NAME
      value: $(_pod_sa).$(_pod_ns).serviceaccount.identity.linkerd.cluster.local
    - name: LINKERD2_PROXY_IDENTITY_SVC_NAME
      value: linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local
    - name: LINKERD2_PROXY_DESTINATION_SVC_NAME
      value: linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local
    - name: LINKERD2_PROXY_POLICY_SVC_NAME
      value: linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local
    - name: LINKERD2_PROXY_TAP_SVC_NAME
      value: tap.linkerd.serviceaccount.identity.linkerd.cluster.local
    image: cr.l5d.io/linkerd/proxy:edge-24.11.3
    imagePullPolicy: IfNotPresent
    lifecycle:
      postStart:
        exec:
          command:
          - /usr/lib/linkerd/linkerd-await
          - --timeout=2m
          - --port=4191
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /live
        port: 4191
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: linkerd-proxy
    ports:
    - containerPort: 4143
      name: linkerd-proxy
      protocol: TCP
    - containerPort: 4191
      name: linkerd-admin
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /ready
        port: 4191
        scheme: HTTP
      initialDelaySeconds: 2
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 2102
      seccompProfile:
        type: RuntimeDefault
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
    volumeMounts:
    - mountPath: /var/run/linkerd/identity/end-entity
      name: linkerd-identity-end-entity
    - mountPath: /var/run/secrets/tokens
      name: linkerd-identity-token
  - args:
    - destination
    - -addr=:8086
    - -controller-namespace=linkerd
    - -enable-h2-upgrade=true
    - -log-level=trace
    - -log-format=plain
    - -enable-endpoint-slices=true
    - -cluster-domain=cluster.local
    - -identity-trust-domain=cluster.local
    - -default-opaque-ports=25,587,3306,4444,5432,6379,9300,11211
    - -enable-ipv6=false
    - -enable-pprof=false
    - --meshed-http2-client-params={"keep_alive":{"interval":{"seconds":10},"timeout":{"seconds":3},"while_idle":true}}
    image: cr.l5d.io/linkerd/controller:edge-24.11.3
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /ping
        port: 9996
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: destination
    ports:
    - containerPort: 8086
      name: grpc
      protocol: TCP
    - containerPort: 9996
      name: admin-http
      protocol: TCP
    readinessProbe:
      failureThreshold: 7
      httpGet:
        path: /ready
        port: 9996
        scheme: HTTP
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 2103
      seccompProfile:
        type: RuntimeDefault
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access
      readOnly: true
  - args:
    - sp-validator
    - -log-level=trace
    - -log-format=plain
    - -enable-pprof=false
    image: cr.l5d.io/linkerd/controller:edge-24.11.3
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /ping
        port: 9997
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: sp-validator
    ports:
    - containerPort: 8443
      name: sp-validator
      protocol: TCP
    - containerPort: 9997
      name: admin-http
      protocol: TCP
    readinessProbe:
      failureThreshold: 7
      httpGet:
        path: /ready
        port: 9997
        scheme: HTTP
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 2103
      seccompProfile:
        type: RuntimeDefault
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/linkerd/tls
      name: sp-tls
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access
      readOnly: true
  - args:
    - --admin-addr=0.0.0.0:9990
    - --control-plane-namespace=linkerd
    - --grpc-addr=0.0.0.0:8090
    - --server-addr=0.0.0.0:9443
    - --server-tls-key=/var/run/linkerd/tls/tls.key
    - --server-tls-certs=/var/run/linkerd/tls/tls.crt
    - --cluster-networks=10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16,fd00::/8
    - --identity-domain=cluster.local
    - --cluster-domain=cluster.local
    - --default-policy=all-unauthenticated
    - --log-level=info
    - --log-format=plain
    - --default-opaque-ports=25,587,3306,4444,5432,6379,9300,11211
    - --global-egress-network-namespace=linkerd-egress
    - --probe-networks=0.0.0.0/0,::/0
    image: cr.l5d.io/linkerd/policy-controller:edge-24.11.3
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /live
        port: admin-http
        scheme: HTTP
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: policy
    ports:
    - containerPort: 8090
      name: grpc
      protocol: TCP
    - containerPort: 9990
      name: admin-http
      protocol: TCP
    - containerPort: 9443
      name: policy-https
      protocol: TCP
    readinessProbe:
      failureThreshold: 7
      httpGet:
        path: /ready
        port: admin-http
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 2103
      seccompProfile:
        type: RuntimeDefault
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/linkerd/tls
      name: policy-tls
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  initContainers:
  - args:
    - --ipv6=false
    - --incoming-proxy-port
    - "4143"
    - --outgoing-proxy-port
    - "4140"
    - --proxy-uid
    - "2102"
    - --inbound-ports-to-ignore
    - 4190,4191,4567,4568
    - --outbound-ports-to-ignore
    - 443,6443
    image: cr.l5d.io/linkerd/proxy-init:v2.4.1
    imagePullPolicy: IfNotPresent
    name: linkerd-init
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
      privileged: false
      readOnlyRootFilesystem: true
      runAsGroup: 65534
      runAsNonRoot: true
      runAsUser: 65534
      seccompProfile:
        type: RuntimeDefault
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
    volumeMounts:
    - mountPath: /run
      name: linkerd-proxy-init-xtables-lock
  nodeName: lima-debian-k3s
  nodeSelector:
    kubernetes.io/os: linux
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  serviceAccount: linkerd-destination
  serviceAccountName: linkerd-destination
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: sp-tls
    secret:
      defaultMode: 420
      secretName: linkerd-sp-validator-k8s-tls
  - name: policy-tls
    secret:
      defaultMode: 420
      secretName: linkerd-policy-validator-k8s-tls
  - name: kube-api-access
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
  - emptyDir: {}
    name: linkerd-proxy-init-xtables-lock
  - name: linkerd-identity-token
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          audience: identity.l5d.io
          expirationSeconds: 86400
          path: linkerd-identity-token
  - emptyDir:
      medium: Memory
    name: linkerd-identity-end-entity
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2024-11-15T14:47:50Z"
    status: "True"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2024-11-15T14:47:50Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2024-11-15T14:47:48Z"
    message: 'containers with unready status: [policy]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2024-11-15T14:47:48Z"
    message: 'containers with unready status: [policy]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2024-11-15T14:47:48Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://0a0581c7ee30d5a17d1e9f6d255dfe3fa4fbdd684b986b9957984731d9df62c4
    image: cr.l5d.io/linkerd/controller:edge-24.11.3
    imageID: cr.l5d.io/linkerd/controller@sha256:96d2c9df51b798a40b26d14a5f4c05291377a8e87187a6227c0ecb53a05fd5c8
    lastState: {}
    name: destination
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2024-11-15T14:47:51Z"
  - containerID: containerd://ff74cfb6b79726463eee09705e110ae961d8c2b22add1099a208cec7df761b5f
    image: cr.l5d.io/linkerd/proxy:edge-24.11.3
    imageID: cr.l5d.io/linkerd/proxy@sha256:1bad85f55bd5a7f937ae49b5fa648a73a8f9df9c024ebeaabfba2e64b4be9e5a
    lastState: {}
    name: linkerd-proxy
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2024-11-15T14:47:50Z"
  - containerID: containerd://bfff70639e6a74131e30ecafff04c8cdb03fca41e2b4f2c3178673900da9f848
    image: cr.l5d.io/linkerd/policy-controller:edge-24.11.3
    imageID: cr.l5d.io/linkerd/policy-controller@sha256:2c852eb06962af6323490fe8a304bcdba6f99ae81c9b1176f8506a09b419f48a
    lastState:
      terminated:
        containerID: containerd://bfff70639e6a74131e30ecafff04c8cdb03fca41e2b4f2c3178673900da9f848
        exitCode: 101
        finishedAt: "2024-11-15T14:50:51Z"
        reason: Error
        startedAt: "2024-11-15T14:50:51Z"
    name: policy
    ready: false
    restartCount: 5
    started: false
    state:
      waiting:
        message: back-off 2m40s restarting failed container=policy pod=linkerd-destination-8656f6cbdd-lpv9r_linkerd(96590179-48a0-424b-ae3e-ccf183542fef)
        reason: CrashLoopBackOff
  - containerID: containerd://85851abc3d63614a0be6b24254fe686e1a6bbeab8b3d4abb0a11172dd96a4396
    image: cr.l5d.io/linkerd/controller:edge-24.11.3
    imageID: cr.l5d.io/linkerd/controller@sha256:96d2c9df51b798a40b26d14a5f4c05291377a8e87187a6227c0ecb53a05fd5c8
    lastState: {}
    name: sp-validator
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2024-11-15T14:47:51Z"
  hostIP: 192.168.5.15
  hostIPs:
  - ip: 192.168.5.15
  initContainerStatuses:
  - containerID: containerd://e056b7f42703cc62b827f3c394188d4162536dad3338f8f37d9e9810348d2193
    image: cr.l5d.io/linkerd/proxy-init:v2.4.1
    imageID: cr.l5d.io/linkerd/proxy-init@sha256:e4ef473f52c453ea7895e9258738909ded899d20a252744cc0b9459b36f987ca
    lastState: {}
    name: linkerd-init
    ready: true
    restartCount: 0
    started: false
    state:
      terminated:
        containerID: containerd://e056b7f42703cc62b827f3c394188d4162536dad3338f8f37d9e9810348d2193
        exitCode: 0
        finishedAt: "2024-11-15T14:47:49Z"
        reason: Completed
        startedAt: "2024-11-15T14:47:49Z"
  phase: Running
  podIP: 10.42.0.36
  podIPs:
  - ip: 10.42.0.36
  qosClass: BestEffort
  startTime: "2024-11-15T14:47:48Z"

@brandonros (Contributor, Author)

Trace logs:

2024-11-15T14:53:55.978754Z TRACE hyper::proto::h1::io: received 3558 bytes
2024-11-15T14:53:55.978756Z TRACE parse_headers: hyper::proto::h1::role: Response.parse bytes=3558
2024-11-15T14:53:55.978757Z TRACE parse_headers: hyper::proto::h1::role: Response.parse Complete(341)
2024-11-15T14:53:55.978760Z DEBUG hyper::proto::h1::io: parsed 7 headers
2024-11-15T14:53:55.978760Z DEBUG hyper::proto::h1::conn: incoming body is chunked encoding
2024-11-15T14:53:55.978762Z TRACE hyper::proto::h1::decode: decode; state=Chunked { state: Start, chunk_len: 0, extensions_cnt: 0 }
2024-11-15T14:53:55.978763Z TRACE hyper::proto::h1::decode: Read chunk start
2024-11-15T14:53:55.978764Z TRACE hyper::proto::h1::decode: Read chunk hex size
2024-11-15T14:53:55.978765Z TRACE hyper::proto::h1::decode: Read chunk hex size
2024-11-15T14:53:55.978765Z TRACE hyper::proto::h1::decode: Read chunk hex size
2024-11-15T14:53:55.978766Z TRACE hyper::proto::h1::decode: Read chunk hex size
2024-11-15T14:53:55.978767Z TRACE hyper::proto::h1::decode: Chunk size is 4770
2024-11-15T14:53:55.978767Z DEBUG hyper::proto::h1::decode: incoming chunked header: 0x12A2 (4770 bytes)
2024-11-15T14:53:55.978768Z TRACE hyper::proto::h1::decode: Chunked read, remaining=4770
2024-11-15T14:53:55.978769Z TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Body(Chunked { state: Body, chunk_len: 1559, extensions_cnt: 0 }), writing: KeepAlive, keep_alive: Busy }
2024-11-15T14:53:55.978780Z TRACE hyper::proto::h1::decode: decode; state=Chunked { state: Body, chunk_len: 1559, extensions_cnt: 0 }
2024-11-15T14:53:55.978783Z TRACE hyper::proto::h1::decode: Chunked read, remaining=1559
2024-11-15T14:53:55.978789Z TRACE hyper::proto::h1::io: received 1566 bytes
2024-11-15T14:53:55.978790Z TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Body(Chunked { state: BodyCr, chunk_len: 0, extensions_cnt: 0 }), writing: KeepAlive, keep_alive: Busy }
2024-11-15T14:53:55.978794Z TRACE hyper::proto::h1::decode: decode; state=Chunked { state: BodyCr, chunk_len: 0, extensions_cnt: 0 }
2024-11-15T14:53:55.978794Z TRACE hyper::proto::h1::decode: Read chunk hex size
2024-11-15T14:53:55.978795Z TRACE hyper::proto::h1::decode: Read chunk hex size
2024-11-15T14:53:55.978796Z TRACE hyper::proto::h1::decode: Chunk size is 0
2024-11-15T14:53:55.978797Z TRACE hyper::proto::h1::decode: end of chunked
2024-11-15T14:53:55.978797Z DEBUG hyper::proto::h1::conn: incoming body completed
2024-11-15T14:53:55.978800Z TRACE hyper::proto::h1::conn: maybe_notify; read_from_io blocked
2024-11-15T14:53:55.978804Z TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
2024-11-15T14:53:55.978805Z TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
2024-11-15T14:53:55.978806Z TRACE hyper::client::pool: put; add idle connection for ("https", 10.43.0.1)
2024-11-15T14:53:55.978807Z DEBUG hyper::client::pool: pooling idle connection for ("https", 10.43.0.1)
2024-11-15T14:53:55.978929Z TRACE httproutes.gateway.networking.k8s.io: kubert::index: event=Restarted([HttpRoute { metadata: ObjectMeta { annotations: Some({"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"gateway.networking.k8s.io/v1\",\"kind\":\"HTTPRoute\",\"metadata\":{\"annotations\":{},\"name\":\"docker-registry-httproute\",\"namespace\":\"traefik\"},\"spec\":{\"hostnames\":[\"docker-registry.debian-k3s\"],\"parentRefs\":[{\"kind\":\"Gateway\",\"name\":\"gateway\",\"namespace\":\"traefik\",\"sectionName\":\"websecure\"}],\"rules\":[{\"backendRefs\":[{\"kind\":\"Service\",\"name\":\"docker-registry\",\"namespace\":\"docker-registry\",\"port\":5000,\"weight\":100}],\"matches\":[{\"path\":{\"type\":\"PathPrefix\",\"value\":\"/\"}}]}]}}\n"}), cluster_name: None, creation_timestamp: Some(Time(2024-11-15T14:45:20Z)), deletion_grace_period_seconds: None, deletion_timestamp: None, finalizers: None, generate_name: None, generation: Some(1), labels: None, managed_fields: Some([ManagedFieldsEntry { api_version: Some("gateway.networking.k8s.io/v1"), fields_type: Some("FieldsV1"), fields_v1: Some(FieldsV1(Object {"f:metadata": Object {"f:annotations": Object {".": Object {}, "f:kubectl.kubernetes.io/last-applied-configuration": Object {}}}, "f:spec": Object {".": Object {}, "f:hostnames": Object {}, "f:parentRefs": Object {}, "f:rules": Object {}}})), manager: Some("kubectl-client-side-apply"), operation: Some("Update"), subresource: None, time: Some(Time(2024-11-15T14:45:20Z)) }, ManagedFieldsEntry { api_version: Some("gateway.networking.k8s.io/v1"), fields_type: Some("FieldsV1"), fields_v1: Some(FieldsV1(Object {"f:status": Object {".": Object {}, "f:parents": Object {}}})), manager: Some("traefik"), operation: Some("Update"), subresource: Some("status"), time: Some(Time(2024-11-15T14:45:20Z)) }]), name: Some("docker-registry-httproute"), namespace: Some("traefik"), owner_references: None, resource_version: Some("1092"), self_link: None, uid: Some("887280e2-01f1-46d7-ad2f-7e223b6009c9") }, spec: HttpRouteSpec { inner: CommonRouteSpec { parent_refs: Some([ParentReference { group: Some("gateway.networking.k8s.io"), kind: Some("Gateway"), namespace: Some("traefik"), name: "gateway", section_name: Some("websecure"), port: None }]) }, hostnames: Some(["docker-registry.debian-k3s"]), rules: Some([HttpRouteRule { matches: Some([HttpRouteMatch { path: Some(PathPrefix { value: "/" }), headers: None, query_params: None, method: None }]), filters: None, backend_refs: Some([HttpBackendRef { backend_ref: Some(BackendRef { weight: Some(100), inner: BackendObjectReference { group: Some(""), kind: Some("Service"), name: "docker-registry", namespace: Some("docker-registry"), port: Some(5000) } }), filters: None }]) }]) }, status: Some(HttpRouteStatus { inner: RouteStatus { parents: [RouteParentStatus { parent_ref: ParentReference { group: Some("gateway.networking.k8s.io"), kind: Some("Gateway"), namespace: Some("traefik"), name: "gateway", section_name: Some("websecure"), port: None }, controller_name: "traefik.io/gateway-controller", conditions: [Condition { last_transition_time: Time(2024-11-15T14:45:20Z), message: "", observed_generation: Some(1), reason: "Accepted", status: "True", type_: "Accepted" }, Condition { last_transition_time: Time(2024-11-15T14:45:20Z), message: "", observed_generation: Some(1), reason: "ResolvedRefs", status: "True", type_: "ResolvedRefs" }] }] } }) }, HttpRoute { metadata: ObjectMeta { annotations: Some({"kubectl.kubernetes.io/last-applied-configuration": 
"{\"apiVersion\":\"gateway.networking.k8s.io/v1\",\"kind\":\"HTTPRoute\",\"metadata\":{\"annotations\":{},\"name\":\"linkerd-viz-httproute\",\"namespace\":\"traefik\"},\"spec\":{\"hostnames\":[\"linkerd-viz.debian-k3s\"],\"parentRefs\":[{\"kind\":\"Gateway\",\"name\":\"gateway\",\"namespace\":\"traefik\",\"sectionName\":\"websecure\"}],\"rules\":[{\"backendRefs\":[{\"kind\":\"Service\",\"name\":\"web\",\"namespace\":\"linkerd\",\"port\":8084,\"weight\":100}],\"matches\":[{\"path\":{\"type\":\"PathPrefix\",\"value\":\"/\"}}]}]}}\n"}), cluster_name: None, creation_timestamp: Some(Time(2024-11-15T14:45:50Z)), deletion_grace_period_seconds: None, deletion_timestamp: None, finalizers: None, generate_name: None, generation: Some(1), labels: None, managed_fields: Some([ManagedFieldsEntry { api_version: Some("gateway.networking.k8s.io/v1"), fields_type: Some("FieldsV1"), fields_v1: Some(FieldsV1(Object {"f:metadata": Object {"f:annotations": Object {".": Object {}, "f:kubectl.kubernetes.io/last-applied-configuration": Object {}}}, "f:spec": Object {".": Object {}, "f:hostnames": Object {}, "f:parentRefs": Object {}, "f:rules": Object {}}})), manager: Some("kubectl-client-side-apply"), operation: Some("Update"), subresource: None, time: Some(Time(2024-11-15T14:45:50Z)) }, ManagedFieldsEntry { api_version: Some("gateway.networking.k8s.io/v1"), fields_type: Some("FieldsV1"), fields_v1: Some(FieldsV1(Object {"f:status": Object {".": Object {}, "f:parents": Object {}}})), manager: Some("traefik"), operation: Some("Update"), subresource: Some("status"), time: Some(Time(2024-11-15T14:45:50Z)) }]), name: Some("linkerd-viz-httproute"), namespace: Some("traefik"), owner_references: None, resource_version: Some("2091"), self_link: None, uid: Some("0046ed41-7ee4-4901-acd9-9452a7560811") }, spec: HttpRouteSpec { inner: CommonRouteSpec { parent_refs: Some([ParentReference { group: Some("gateway.networking.k8s.io"), kind: Some("Gateway"), namespace: Some("traefik"), name: "gateway", section_name: Some("websecure"), port: None }]) }, hostnames: Some(["linkerd-viz.debian-k3s"]), rules: Some([HttpRouteRule { matches: Some([HttpRouteMatch { path: Some(PathPrefix { value: "/" }), headers: None, query_params: None, method: None }]), filters: None, backend_refs: Some([HttpBackendRef { backend_ref: Some(BackendRef { weight: Some(100), inner: BackendObjectReference { group: Some(""), kind: Some("Service"), name: "web", namespace: Some("linkerd"), port: Some(8084) } }), filters: None }]) }]) }, status: Some(HttpRouteStatus { inner: RouteStatus { parents: [RouteParentStatus { parent_ref: ParentReference { group: Some("gateway.networking.k8s.io"), kind: Some("Gateway"), namespace: Some("traefik"), name: "gateway", section_name: Some("websecure"), port: None }, controller_name: "traefik.io/gateway-controller", conditions: [Condition { last_transition_time: Time(2024-11-15T14:45:50Z), message: "", observed_generation: Some(1), reason: "Accepted", status: "True", type_: "Accepted" }, Condition { last_transition_time: Time(2024-11-15T14:45:50Z), message: "", observed_generation: Some(1), reason: "ResolvedRefs", status: "True", type_: "ResolvedRefs" }] }] } }) }])
2024-11-15T14:53:55.979013Z DEBUG httproutes.gateway.networking.k8s.io: linkerd_policy_controller_k8s_index::outbound::index: indexing httproute name="docker-registry-httproute"
2024-11-15T14:53:55.979036Z DEBUG httproutes.gateway.networking.k8s.io: linkerd_policy_controller_k8s_index::outbound::index: route=GatewayHttp(HttpRoute { metadata: ObjectMeta { annotations: Some({"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"gateway.networking.k8s.io/v1\",\"kind\":\"HTTPRoute\",\"metadata\":{\"annotations\":{},\"name\":\"docker-registry-httproute\",\"namespace\":\"traefik\"},\"spec\":{\"hostnames\":[\"docker-registry.debian-k3s\"],\"parentRefs\":[{\"kind\":\"Gateway\",\"name\":\"gateway\",\"namespace\":\"traefik\",\"sectionName\":\"websecure\"}],\"rules\":[{\"backendRefs\":[{\"kind\":\"Service\",\"name\":\"docker-registry\",\"namespace\":\"docker-registry\",\"port\":5000,\"weight\":100}],\"matches\":[{\"path\":{\"type\":\"PathPrefix\",\"value\":\"/\"}}]}]}}\n"}), cluster_name: None, creation_timestamp: Some(Time(2024-11-15T14:45:20Z)), deletion_grace_period_seconds: None, deletion_timestamp: None, finalizers: None, generate_name: None, generation: Some(1), labels: None, managed_fields: Some([ManagedFieldsEntry { api_version: Some("gateway.networking.k8s.io/v1"), fields_type: Some("FieldsV1"), fields_v1: Some(FieldsV1(Object {"f:metadata": Object {"f:annotations": Object {".": Object {}, "f:kubectl.kubernetes.io/last-applied-configuration": Object {}}}, "f:spec": Object {".": Object {}, "f:hostnames": Object {}, "f:parentRefs": Object {}, "f:rules": Object {}}})), manager: Some("kubectl-client-side-apply"), operation: Some("Update"), subresource: None, time: Some(Time(2024-11-15T14:45:20Z)) }, ManagedFieldsEntry { api_version: Some("gateway.networking.k8s.io/v1"), fields_type: Some("FieldsV1"), fields_v1: Some(FieldsV1(Object {"f:status": Object {".": Object {}, "f:parents": Object {}}})), manager: Some("traefik"), operation: Some("Update"), subresource: Some("status"), time: Some(Time(2024-11-15T14:45:20Z)) }]), name: Some("docker-registry-httproute"), namespace: Some("traefik"), owner_references: None, resource_version: Some("1092"), self_link: None, uid: Some("887280e2-01f1-46d7-ad2f-7e223b6009c9") }, spec: HttpRouteSpec { inner: CommonRouteSpec { parent_refs: Some([ParentReference { group: Some("gateway.networking.k8s.io"), kind: Some("Gateway"), namespace: Some("traefik"), name: "gateway", section_name: Some("websecure"), port: None }]) }, hostnames: Some(["docker-registry.debian-k3s"]), rules: Some([HttpRouteRule { matches: Some([HttpRouteMatch { path: Some(PathPrefix { value: "/" }), headers: None, query_params: None, method: None }]), filters: None, backend_refs: Some([HttpBackendRef { backend_ref: Some(BackendRef { weight: Some(100), inner: BackendObjectReference { group: Some(""), kind: Some("Service"), name: "docker-registry", namespace: Some("docker-registry"), port: Some(5000) } }), filters: None }]) }]) }, status: Some(HttpRouteStatus { inner: RouteStatus { parents: [RouteParentStatus { parent_ref: ParentReference { group: Some("gateway.networking.k8s.io"), kind: Some("Gateway"), namespace: Some("traefik"), name: "gateway", section_name: Some("websecure"), port: None }, controller_name: "traefik.io/gateway-controller", conditions: [Condition { last_transition_time: Time(2024-11-15T14:45:20Z), message: "", observed_generation: Some(1), reason: "Accepted", status: "True", type_: "Accepted" }, Condition { last_transition_time: Time(2024-11-15T14:45:20Z), message: "", observed_generation: Some(1), reason: "ResolvedRefs", status: "True", type_: "ResolvedRefs" }] }] } }) })
2024-11-15T14:53:55.979084Z DEBUG httproutes.gateway.networking.k8s.io: linkerd_policy_controller_k8s_index::outbound::index: outbound_route=OutboundRoute { hostnames: [Exact("docker-registry.debian-k3s")], rules: [OutboundRouteRule { matches: [HttpRouteMatch { path: Some(Prefix("/")), headers: ], query_params: ], method: None }], backends: [Service(WeightedService { weight: 100, authority: "docker-registry.docker-registry.svc.cluster.local:5000", name: "docker-registry", namespace: "docker-registry", port: 5000, filters: ], exists: false })], retry: None, timeouts: RouteTimeouts { response: None, request: None, idle: None }, filters: ] }], creation_timestamp: Some(2024-11-15T14:45:20Z) }
2024-11-15T14:53:55.979112Z DEBUG httproutes.gateway.networking.k8s.io: linkerd_policy_controller_k8s_status::index: Lease non-holder skipping controller update self.name=linkerd-destination-75f56b6586-79dwm
2024-11-15T14:53:55.979128Z DEBUG httproutes.gateway.networking.k8s.io: linkerd_policy_controller_k8s_index::outbound::index: indexing httproute name="linkerd-viz-httproute"
2024-11-15T14:53:55.979147Z DEBUG httproutes.gateway.networking.k8s.io: linkerd_policy_controller_k8s_index::outbound::index: route=GatewayHttp(HttpRoute { metadata: ObjectMeta { annotations: Some({"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"gateway.networking.k8s.io/v1\",\"kind\":\"HTTPRoute\",\"metadata\":{\"annotations\":{},\"name\":\"linkerd-viz-httproute\",\"namespace\":\"traefik\"},\"spec\":{\"hostnames\":[\"linkerd-viz.debian-k3s\"],\"parentRefs\":[{\"kind\":\"Gateway\",\"name\":\"gateway\",\"namespace\":\"traefik\",\"sectionName\":\"websecure\"}],\"rules\":[{\"backendRefs\":[{\"kind\":\"Service\",\"name\":\"web\",\"namespace\":\"linkerd\",\"port\":8084,\"weight\":100}],\"matches\":[{\"path\":{\"type\":\"PathPrefix\",\"value\":\"/\"}}]}]}}\n"}), cluster_name: None, creation_timestamp: Some(Time(2024-11-15T14:45:50Z)), deletion_grace_period_seconds: None, deletion_timestamp: None, finalizers: None, generate_name: None, generation: Some(1), labels: None, managed_fields: Some([ManagedFieldsEntry { api_version: Some("gateway.networking.k8s.io/v1"), fields_type: Some("FieldsV1"), fields_v1: Some(FieldsV1(Object {"f:metadata": Object {"f:annotations": Object {".": Object {}, "f:kubectl.kubernetes.io/last-applied-configuration": Object {}}}, "f:spec": Object {".": Object {}, "f:hostnames": Object {}, "f:parentRefs": Object {}, "f:rules": Object {}}})), manager: Some("kubectl-client-side-apply"), operation: Some("Update"), subresource: None, time: Some(Time(2024-11-15T14:45:50Z)) }, ManagedFieldsEntry { api_version: Some("gateway.networking.k8s.io/v1"), fields_type: Some("FieldsV1"), fields_v1: Some(FieldsV1(Object {"f:status": Object {".": Object {}, "f:parents": Object {}}})), manager: Some("traefik"), operation: Some("Update"), subresource: Some("status"), time: Some(Time(2024-11-15T14:45:50Z)) }]), name: Some("linkerd-viz-httproute"), namespace: Some("traefik"), owner_references: None, resource_version: Some("2091"), self_link: None, uid: Some("0046ed41-7ee4-4901-acd9-9452a7560811") }, spec: HttpRouteSpec { inner: CommonRouteSpec { parent_refs: Some([ParentReference { group: Some("gateway.networking.k8s.io"), kind: Some("Gateway"), namespace: Some("traefik"), name: "gateway", section_name: Some("websecure"), port: None }]) }, hostnames: Some(["linkerd-viz.debian-k3s"]), rules: Some([HttpRouteRule { matches: Some([HttpRouteMatch { path: Some(PathPrefix { value: "/" }), headers: None, query_params: None, method: None }]), filters: None, backend_refs: Some([HttpBackendRef { backend_ref: Some(BackendRef { weight: Some(100), inner: BackendObjectReference { group: Some(""), kind: Some("Service"), name: "web", namespace: Some("linkerd"), port: Some(8084) } }), filters: None }]) }]) }, status: Some(HttpRouteStatus { inner: RouteStatus { parents: [RouteParentStatus { parent_ref: ParentReference { group: Some("gateway.networking.k8s.io"), kind: Some("Gateway"), namespace: Some("traefik"), name: "gateway", section_name: Some("websecure"), port: None }, controller_name: "traefik.io/gateway-controller", conditions: [Condition { last_transition_time: Time(2024-11-15T14:45:50Z), message: "", observed_generation: Some(1), reason: "Accepted", status: "True", type_: "Accepted" }, Condition { last_transition_time: Time(2024-11-15T14:45:50Z), message: "", observed_generation: Some(1), reason: "ResolvedRefs", status: "True", type_: "ResolvedRefs" }] }] } }) })
2024-11-15T14:53:55.979178Z DEBUG httproutes.gateway.networking.k8s.io: linkerd_policy_controller_k8s_index::outbound::index: outbound_route=OutboundRoute { hostnames: [Exact("linkerd-viz.debian-k3s")], rules: [OutboundRouteRule { matches: [HttpRouteMatch { path: Some(Prefix("/")), headers: ], query_params: ], method: None }], backends: [Service(WeightedService { weight: 100, authority: "web.linkerd.svc.cluster.local:8084", name: "web", namespace: "linkerd", port: 8084, filters: ], exists: false })], retry: None, timeouts: RouteTimeouts { response: None, request: None, idle: None }, filters: ] }], creation_timestamp: Some(2024-11-15T14:45:50Z) }
2024-11-15T14:53:55.979189Z DEBUG httproutes.gateway.networking.k8s.io: linkerd_policy_controller_k8s_status::index: Lease non-holder skipping controller update self.name=linkerd-destination-75f56b6586-79dwm
2024-11-15T14:53:55.979202Z TRACE httproutes.gateway.networking.k8s.io: tower::buffer::service: sending request to buffer worker
2024-11-15T14:53:55.979212Z TRACE tower::buffer::worker: worker polling for next message
2024-11-15T14:53:55.979213Z TRACE tower::buffer::worker: processing new request
2024-11-15T14:53:55.979213Z TRACE httproutes.gateway.networking.k8s.io: tower::buffer::worker: resumed=false worker received request; waiting for service readiness
2024-11-15T14:53:55.979214Z DEBUG httproutes.gateway.networking.k8s.io: tower::buffer::worker: service.ready=true processing request
2024-11-15T14:53:55.979216Z TRACE httproutes.gateway.networking.k8s.io: tower::buffer::worker: returning response future
2024-11-15T14:53:55.979217Z TRACE tower::buffer::worker: worker polling for next message
2024-11-15T14:53:55.979221Z DEBUG httproutes.gateway.networking.k8s.io:HTTP{http.method=GET http.url=https://10.43.0.1/apis/gateway.networking.k8s.io/v1beta1/httproutes?&watch=true&timeoutSeconds=290&allowWatchBookmarks=true&resourceVersion=3348 otel.name="watch" otel.kind="client"}: kube_client::client::builder: requesting
2024-11-15T14:53:55.979223Z TRACE httproutes.gateway.networking.k8s.io:HTTP{http.method=GET http.url=https://10.43.0.1/apis/gateway.networking.k8s.io/v1beta1/httproutes?&watch=true&timeoutSeconds=290&allowWatchBookmarks=true&resourceVersion=3348 otel.name="watch" otel.kind="client"}: hyper::client::pool: take? ("https", 10.43.0.1): expiration = Some(90s)
2024-11-15T14:53:55.979224Z DEBUG httproutes.gateway.networking.k8s.io:HTTP{http.method=GET http.url=https://10.43.0.1/apis/gateway.networking.k8s.io/v1beta1/httproutes?&watch=true&timeoutSeconds=290&allowWatchBookmarks=true&resourceVersion=3348 otel.name="watch" otel.kind="client"}: hyper::client::pool: reuse idle connection for ("https", 10.43.0.1)
2024-11-15T14:53:55.979228Z TRACE encode_headers: hyper::proto::h1::role: Client::encode method=GET, body=None
2024-11-15T14:53:55.979244Z DEBUG hyper::proto::h1::io: flushed 1423 bytes
2024-11-15T14:53:55.979245Z TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: KeepAlive, keep_alive: Busy }
2024-11-15T14:53:55.979390Z TRACE hyper::proto::h1::conn: Conn::read_head
2024-11-15T14:53:55.979397Z TRACE hyper::proto::h1::io: received 341 bytes
2024-11-15T14:53:55.979398Z TRACE parse_headers: hyper::proto::h1::role: Response.parse bytes=341
2024-11-15T14:53:55.979399Z TRACE parse_headers: hyper::proto::h1::role: Response.parse Complete(341)
2024-11-15T14:53:55.979402Z DEBUG hyper::proto::h1::io: parsed 7 headers
2024-11-15T14:53:55.979402Z DEBUG hyper::proto::h1::conn: incoming body is chunked encoding
2024-11-15T14:53:55.979404Z TRACE hyper::proto::h1::decode: decode; state=Chunked { state: Start, chunk_len: 0, extensions_cnt: 0 }
2024-11-15T14:53:55.979405Z TRACE hyper::proto::h1::decode: Read chunk start
2024-11-15T14:53:55.979408Z TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Body(Chunked { state: Start, chunk_len: 0, extensions_cnt: 0 }), writing: KeepAlive, keep_alive: Busy }
2024-11-15T14:53:55.979412Z TRACE httproutes.gateway.networking.k8s.io: kube_client::client: headers: {"audit-id": "da47feee-f470-458f-9649-ec58d97b0d28", "cache-control": "no-cache, private", "content-type": "application/json", "x-kubernetes-pf-flowschema-uid": "ae6c4127-3746-4f58-a817-f7d4538eac88", "x-kubernetes-pf-prioritylevel-uid": "118c1612-bbd1-49d1-91ce-1ff8752d378e", "date": "Fri, 15 Nov 2024 14:53:55 GMT", "transfer-encoding": "chunked"}
2024-11-15T14:53:56.022948Z TRACE hyper::proto::h1::conn: Conn::read_head
2024-11-15T14:53:56.022957Z TRACE hyper::proto::h1::io: received 401 bytes
2024-11-15T14:53:56.022958Z TRACE parse_headers: hyper::proto::h1::role: Response.parse bytes=401
2024-11-15T14:53:56.022960Z TRACE parse_headers: hyper::proto::h1::role: Response.parse Complete(382)
2024-11-15T14:53:56.022963Z DEBUG hyper::proto::h1::io: parsed 8 headers
2024-11-15T14:53:56.022964Z DEBUG hyper::proto::h1::conn: incoming body is content-length (19 bytes)
2024-11-15T14:53:56.022971Z TRACE hyper::proto::h1::decode: decode; state=Length(19)
2024-11-15T14:53:56.022972Z DEBUG hyper::proto::h1::conn: incoming body completed
2024-11-15T14:53:56.022974Z TRACE hyper::proto::h1::conn: maybe_notify; read_from_io blocked
2024-11-15T14:53:56.022976Z TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
2024-11-15T14:53:56.022977Z TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
2024-11-15T14:53:56.022983Z TRACE HTTP{http.method=GET http.url=https://10.43.0.1/apis/gateway.networking.k8s.io/v1alpha2 otel.name="HTTP" otel.kind="client"}: hyper::client::pool: put; add idle connection for ("https", 10.43.0.1)
2024-11-15T14:53:56.022986Z DEBUG HTTP{http.method=GET http.url=https://10.43.0.1/apis/gateway.networking.k8s.io/v1alpha2 otel.name="HTTP" otel.kind="client"}: hyper::client::pool: pooling idle connection for ("https", 10.43.0.1)
2024-11-15T14:53:56.023002Z  WARN kube_client::client: Unsuccessful data error parse: 404 page not found

2024-11-15T14:53:56.023008Z DEBUG kube_client::client: Unsuccessful: ErrorResponse { status: "404 Not Found", message: "\"404 page not found\\n\"", reason: "Failed to parse error data", code: 404 } (reconstruct)
2024-11-15T14:53:56.023013Z TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Init, writing: Init, keep_alive: Idle }
thread 'main' panicked at policy-controller/src/main.rs:522:10:
Failed to list API group resources: Api(ErrorResponse { status: "404 Not Found", message: "\"404 page not found\\n\"", reason: "Failed to parse error data", code: 404 })
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
2024-11-15T14:53:56.023152Z TRACE hyper::proto::h1::dispatch: body receiver dropped before eof, draining or closing
2024-11-15T14:53:56.023158Z TRACE hyper::proto::h1::decode: decode; state=Chunked { state: Start, chunk_len: 0, extensions_cnt: 0 }
2024-11-15T14:53:56.023159Z TRACE hyper::proto::h1::decode: Read chunk start
2024-11-15T14:53:56.023161Z TRACE hyper::proto::h1::conn: State::close_read()
2024-11-15T14:53:56.023165Z TRACE hyper::proto::h1::conn: State::close()
2024-11-15T14:53:56.023166Z TRACE hyper::proto::h1::conn: flushed({role=client}): State { reading: Closed, writing: Closed, keep_alive: Disabled }
2024-11-15T14:53:56.023167Z DEBUG rustls::common_state: Sending warning alert CloseNotify    
2024-11-15T14:53:56.023191Z TRACE hyper::proto::h1::conn: shut down IO complete
2024-11-15T14:53:56.023205Z DEBUG tower::buffer::worker: buffer closing; waking pending tasks
2024-11-15T14:53:56.023217Z DEBUG authorizationpolicies: kube_runtime::watcher: watch initlist error: Service(Closed)
2024-11-15T14:53:56.023230Z  INFO authorizationpolicies: kubert::errors: stream failed error=failed to start watching object: ServiceError: buffer's worker closed unexpectedly
2024-11-15T14:53:56.023236Z DEBUG authorizationpolicies: kube_runtime::watcher: watch initlist error: Service(Closed)
2024-11-15T14:53:56.023238Z  INFO authorizationpolicies: kubert::errors: stream failed error=failed to start watching object: ServiceError: buffer's worker closed unexpectedly
2024-11-15T14:53:56.023483Z DEBUG external_workloads: kube_runtime::watcher: watch initlist error: Service(Closed)
2024-11-15T14:53:56.023487Z  INFO external_workloads: kubert::errors: stream failed error=failed to start watching object: ServiceError: buffer's worker closed unexpectedly
2024-11-15T14:53:56.023490Z DEBUG external_workloads: kube_runtime::watcher: watch initlist error: Service(Closed)
2024-11-15T14:53:56.023491Z  INFO external_workloads: kubert::errors: stream failed error=failed to start watching object: ServiceError: buffer's worker closed unexpectedly
Stream closed EOF for linkerd/linkerd-destination-75f56b6586-79dwm (policy)

@brandonros (Contributor, Author)

For sure related to #13032, but it isn't clear how to solve it: the policy controller is making an HTTP request to https://10.43.0.1/apis/gateway.networking.k8s.io/v1alpha2 instead of https://10.43.0.1/apis/gateway.networking.k8s.io/v1.

$ kubectl get crd grpcroutes.gateway.networking.k8s.io -o json | jq -r '.spec.versions[] | .name + " served=" + (.served | tostring)'
v1 served=true

linkerd-crds collides with the cluster already shipping its own (newer?) Gateway API CRDs (https://gateway-api.sigs.k8s.io/)... hmm...
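
To check who actually installed/owns the colliding CRDs (Helm release metadata vs. a plain kubectl apply), something like this should tell (not from the original repro, just a way to verify):

$ kubectl get crd httproutes.gateway.networking.k8s.io -o json | jq '.metadata | {annotations, labels}'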

@brandonros changed the title from 'policy-controller Failed to list API group resources: Api(ErrorResponse { status: "404 Not Found' to 'Traefik + linkerd gateway-api CRD collision' on Nov 15, 2024
@brandonros (Contributor, Author)

The problem with the bundled CRDs is #13032, which was done for compatibility with Google Kubernetes Engine (GKE).

as per traefik/traefik-helm-chart#1209
