Traefik + linkerd gateway-api CRD collision #13334
linkerd-destination deployment yaml:

apiVersion: v1
kind: Pod
metadata:
annotations:
checksum/config: 479771c2ce6010a2faf0a4bec704170432415bab2da2efbe31f23537740ef63a
cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
config.linkerd.io/default-inbound-policy: all-unauthenticated
linkerd.io/created-by: linkerd/helm edge-24.11.3
linkerd.io/proxy-version: edge-24.11.3
linkerd.io/trust-root-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
viz.linkerd.io/tap-enabled: "true"
creationTimestamp: "2024-11-15T14:47:48Z"
generateName: linkerd-destination-8656f6cbdd-
labels:
linkerd.io/control-plane-component: destination
linkerd.io/control-plane-ns: linkerd
linkerd.io/proxy-deployment: linkerd-destination
linkerd.io/workload-ns: linkerd
pod-template-hash: 8656f6cbdd
name: linkerd-destination-8656f6cbdd-lpv9r
namespace: linkerd
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: ReplicaSet
name: linkerd-destination-8656f6cbdd
uid: fbd44058-4794-4133-acd4-0ce2bb00f1a5
resourceVersion: "2939"
uid: 96590179-48a0-424b-ae3e-ccf183542fef
spec:
automountServiceAccountToken: false
containers:
- env:
- name: _pod_name
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: _pod_ns
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: _pod_nodeName
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: LINKERD2_PROXY_SHUTDOWN_ENDPOINT_ENABLED
value: "false"
- name: LINKERD2_PROXY_LOG
value: warn,linkerd=info,hickory=error,[{headers}]=off,[{request}]=off
- name: LINKERD2_PROXY_LOG_FORMAT
value: plain
- name: LINKERD2_PROXY_DESTINATION_SVC_ADDR
value: localhost.:8086
- name: LINKERD2_PROXY_DESTINATION_PROFILE_NETWORKS
value: 10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16,fd00::/8
- name: LINKERD2_PROXY_POLICY_SVC_ADDR
value: localhost.:8090
- name: LINKERD2_PROXY_POLICY_WORKLOAD
value: |
{"ns":"$(_pod_ns)", "pod":"$(_pod_name)"}
- name: LINKERD2_PROXY_INBOUND_DEFAULT_POLICY
value: all-unauthenticated
- name: LINKERD2_PROXY_POLICY_CLUSTER_NETWORKS
value: 10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16,fd00::/8
- name: LINKERD2_PROXY_CONTROL_STREAM_INITIAL_TIMEOUT
value: 3s
- name: LINKERD2_PROXY_CONTROL_STREAM_IDLE_TIMEOUT
value: 5m
- name: LINKERD2_PROXY_CONTROL_STREAM_LIFETIME
value: 1h
- name: LINKERD2_PROXY_INBOUND_CONNECT_TIMEOUT
value: 100ms
- name: LINKERD2_PROXY_OUTBOUND_CONNECT_TIMEOUT
value: 1000ms
- name: LINKERD2_PROXY_OUTBOUND_DISCOVERY_IDLE_TIMEOUT
value: 5s
- name: LINKERD2_PROXY_INBOUND_DISCOVERY_IDLE_TIMEOUT
value: 90s
- name: LINKERD2_PROXY_CONTROL_LISTEN_ADDR
value: 0.0.0.0:4190
- name: LINKERD2_PROXY_ADMIN_LISTEN_ADDR
value: 0.0.0.0:4191
- name: LINKERD2_PROXY_OUTBOUND_LISTEN_ADDR
value: 127.0.0.1:4140
- name: LINKERD2_PROXY_OUTBOUND_LISTEN_ADDRS
value: 127.0.0.1:4140
- name: LINKERD2_PROXY_INBOUND_LISTEN_ADDR
value: 0.0.0.0:4143
- name: LINKERD2_PROXY_INBOUND_IPS
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIPs
- name: LINKERD2_PROXY_INBOUND_PORTS
value: 8086,8090,8443,9443,9990,9996,9997
- name: LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES
value: svc.cluster.local.
- name: LINKERD2_PROXY_INBOUND_ACCEPT_KEEPALIVE
value: 10000ms
- name: LINKERD2_PROXY_OUTBOUND_CONNECT_KEEPALIVE
value: 10000ms
- name: LINKERD2_PROXY_INBOUND_ACCEPT_USER_TIMEOUT
value: 30s
- name: LINKERD2_PROXY_OUTBOUND_CONNECT_USER_TIMEOUT
value: 30s
- name: LINKERD2_PROXY_INBOUND_SERVER_HTTP2_KEEP_ALIVE_INTERVAL
value: 10s
- name: LINKERD2_PROXY_INBOUND_SERVER_HTTP2_KEEP_ALIVE_TIMEOUT
value: 3s
- name: LINKERD2_PROXY_OUTBOUND_SERVER_HTTP2_KEEP_ALIVE_INTERVAL
value: 10s
- name: LINKERD2_PROXY_OUTBOUND_SERVER_HTTP2_KEEP_ALIVE_TIMEOUT
value: 3s
- name: LINKERD2_PROXY_INBOUND_PORTS_DISABLE_PROTOCOL_DETECTION
value: 25,587,3306,4444,5432,6379,9300,11211
- name: LINKERD2_PROXY_DESTINATION_CONTEXT
value: |
{"ns":"$(_pod_ns)", "nodeName":"$(_pod_nodeName)", "pod":"$(_pod_name)"}
- name: _pod_sa
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.serviceAccountName
- name: _l5d_ns
value: linkerd
- name: _l5d_trustdomain
value: cluster.local
- name: LINKERD2_PROXY_IDENTITY_DIR
value: /var/run/linkerd/identity/end-entity
- name: LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS
valueFrom:
configMapKeyRef:
key: ca-bundle.crt
name: linkerd-identity-trust-roots
- name: LINKERD2_PROXY_IDENTITY_TOKEN_FILE
value: /var/run/secrets/tokens/linkerd-identity-token
- name: LINKERD2_PROXY_IDENTITY_SVC_ADDR
value: linkerd-identity-headless.linkerd.svc.cluster.local.:8080
- name: LINKERD2_PROXY_IDENTITY_LOCAL_NAME
value: $(_pod_sa).$(_pod_ns).serviceaccount.identity.linkerd.cluster.local
- name: LINKERD2_PROXY_IDENTITY_SVC_NAME
value: linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local
- name: LINKERD2_PROXY_DESTINATION_SVC_NAME
value: linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local
- name: LINKERD2_PROXY_POLICY_SVC_NAME
value: linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local
- name: LINKERD2_PROXY_TAP_SVC_NAME
value: tap.linkerd.serviceaccount.identity.linkerd.cluster.local
image: cr.l5d.io/linkerd/proxy:edge-24.11.3
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
exec:
command:
- /usr/lib/linkerd/linkerd-await
- --timeout=2m
- --port=4191
livenessProbe:
failureThreshold: 3
httpGet:
path: /live
port: 4191
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: linkerd-proxy
ports:
- containerPort: 4143
name: linkerd-proxy
protocol: TCP
- containerPort: 4191
name: linkerd-admin
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /ready
port: 4191
scheme: HTTP
initialDelaySeconds: 2
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources: {}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 2102
seccompProfile:
type: RuntimeDefault
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /var/run/linkerd/identity/end-entity
name: linkerd-identity-end-entity
- mountPath: /var/run/secrets/tokens
name: linkerd-identity-token
- args:
- destination
- -addr=:8086
- -controller-namespace=linkerd
- -enable-h2-upgrade=true
- -log-level=trace
- -log-format=plain
- -enable-endpoint-slices=true
- -cluster-domain=cluster.local
- -identity-trust-domain=cluster.local
- -default-opaque-ports=25,587,3306,4444,5432,6379,9300,11211
- -enable-ipv6=false
- -enable-pprof=false
- --meshed-http2-client-params={"keep_alive":{"interval":{"seconds":10},"timeout":{"seconds":3},"while_idle":true}}
image: cr.l5d.io/linkerd/controller:edge-24.11.3
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /ping
port: 9996
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: destination
ports:
- containerPort: 8086
name: grpc
protocol: TCP
- containerPort: 9996
name: admin-http
protocol: TCP
readinessProbe:
failureThreshold: 7
httpGet:
path: /ready
port: 9996
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources: {}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 2103
seccompProfile:
type: RuntimeDefault
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access
readOnly: true
- args:
- sp-validator
- -log-level=trace
- -log-format=plain
- -enable-pprof=false
image: cr.l5d.io/linkerd/controller:edge-24.11.3
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /ping
port: 9997
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: sp-validator
ports:
- containerPort: 8443
name: sp-validator
protocol: TCP
- containerPort: 9997
name: admin-http
protocol: TCP
readinessProbe:
failureThreshold: 7
httpGet:
path: /ready
port: 9997
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources: {}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 2103
seccompProfile:
type: RuntimeDefault
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/linkerd/tls
name: sp-tls
readOnly: true
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access
readOnly: true
- args:
- --admin-addr=0.0.0.0:9990
- --control-plane-namespace=linkerd
- --grpc-addr=0.0.0.0:8090
- --server-addr=0.0.0.0:9443
- --server-tls-key=/var/run/linkerd/tls/tls.key
- --server-tls-certs=/var/run/linkerd/tls/tls.crt
- --cluster-networks=10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16,fd00::/8
- --identity-domain=cluster.local
- --cluster-domain=cluster.local
- --default-policy=all-unauthenticated
- --log-level=info
- --log-format=plain
- --default-opaque-ports=25,587,3306,4444,5432,6379,9300,11211
- --global-egress-network-namespace=linkerd-egress
- --probe-networks=0.0.0.0/0,::/0
image: cr.l5d.io/linkerd/policy-controller:edge-24.11.3
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /live
port: admin-http
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: policy
ports:
- containerPort: 8090
name: grpc
protocol: TCP
- containerPort: 9990
name: admin-http
protocol: TCP
- containerPort: 9443
name: policy-https
protocol: TCP
readinessProbe:
failureThreshold: 7
httpGet:
path: /ready
port: admin-http
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources: {}
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 2103
seccompProfile:
type: RuntimeDefault
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/linkerd/tls
name: policy-tls
readOnly: true
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
initContainers:
- args:
- --ipv6=false
- --incoming-proxy-port
- "4143"
- --outgoing-proxy-port
- "4140"
- --proxy-uid
- "2102"
- --inbound-ports-to-ignore
- 4190,4191,4567,4568
- --outbound-ports-to-ignore
- 443,6443
image: cr.l5d.io/linkerd/proxy-init:v2.4.1
imagePullPolicy: IfNotPresent
name: linkerd-init
resources: {}
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_ADMIN
- NET_RAW
privileged: false
readOnlyRootFilesystem: true
runAsGroup: 65534
runAsNonRoot: true
runAsUser: 65534
seccompProfile:
type: RuntimeDefault
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /run
name: linkerd-proxy-init-xtables-lock
nodeName: lima-debian-k3s
nodeSelector:
kubernetes.io/os: linux
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
seccompProfile:
type: RuntimeDefault
serviceAccount: linkerd-destination
serviceAccountName: linkerd-destination
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: sp-tls
secret:
defaultMode: 420
secretName: linkerd-sp-validator-k8s-tls
- name: policy-tls
secret:
defaultMode: 420
secretName: linkerd-policy-validator-k8s-tls
- name: kube-api-access
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
- emptyDir: {}
name: linkerd-proxy-init-xtables-lock
- name: linkerd-identity-token
projected:
defaultMode: 420
sources:
- serviceAccountToken:
audience: identity.l5d.io
expirationSeconds: 86400
path: linkerd-identity-token
- emptyDir:
medium: Memory
name: linkerd-identity-end-entity
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2024-11-15T14:47:50Z"
status: "True"
type: PodReadyToStartContainers
- lastProbeTime: null
lastTransitionTime: "2024-11-15T14:47:50Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2024-11-15T14:47:48Z"
message: 'containers with unready status: [policy]'
reason: ContainersNotReady
status: "False"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2024-11-15T14:47:48Z"
message: 'containers with unready status: [policy]'
reason: ContainersNotReady
status: "False"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2024-11-15T14:47:48Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: containerd://0a0581c7ee30d5a17d1e9f6d255dfe3fa4fbdd684b986b9957984731d9df62c4
image: cr.l5d.io/linkerd/controller:edge-24.11.3
imageID: cr.l5d.io/linkerd/controller@sha256:96d2c9df51b798a40b26d14a5f4c05291377a8e87187a6227c0ecb53a05fd5c8
lastState: {}
name: destination
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2024-11-15T14:47:51Z"
- containerID: containerd://ff74cfb6b79726463eee09705e110ae961d8c2b22add1099a208cec7df761b5f
image: cr.l5d.io/linkerd/proxy:edge-24.11.3
imageID: cr.l5d.io/linkerd/proxy@sha256:1bad85f55bd5a7f937ae49b5fa648a73a8f9df9c024ebeaabfba2e64b4be9e5a
lastState: {}
name: linkerd-proxy
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2024-11-15T14:47:50Z"
- containerID: containerd://bfff70639e6a74131e30ecafff04c8cdb03fca41e2b4f2c3178673900da9f848
image: cr.l5d.io/linkerd/policy-controller:edge-24.11.3
imageID: cr.l5d.io/linkerd/policy-controller@sha256:2c852eb06962af6323490fe8a304bcdba6f99ae81c9b1176f8506a09b419f48a
lastState:
terminated:
containerID: containerd://bfff70639e6a74131e30ecafff04c8cdb03fca41e2b4f2c3178673900da9f848
exitCode: 101
finishedAt: "2024-11-15T14:50:51Z"
reason: Error
startedAt: "2024-11-15T14:50:51Z"
name: policy
ready: false
restartCount: 5
started: false
state:
waiting:
message: back-off 2m40s restarting failed container=policy pod=linkerd-destination-8656f6cbdd-lpv9r_linkerd(96590179-48a0-424b-ae3e-ccf183542fef)
reason: CrashLoopBackOff
- containerID: containerd://85851abc3d63614a0be6b24254fe686e1a6bbeab8b3d4abb0a11172dd96a4396
image: cr.l5d.io/linkerd/controller:edge-24.11.3
imageID: cr.l5d.io/linkerd/controller@sha256:96d2c9df51b798a40b26d14a5f4c05291377a8e87187a6227c0ecb53a05fd5c8
lastState: {}
name: sp-validator
ready: true
restartCount: 0
started: true
state:
running:
startedAt: "2024-11-15T14:47:51Z"
hostIP: 192.168.5.15
hostIPs:
- ip: 192.168.5.15
initContainerStatuses:
- containerID: containerd://e056b7f42703cc62b827f3c394188d4162536dad3338f8f37d9e9810348d2193
image: cr.l5d.io/linkerd/proxy-init:v2.4.1
imageID: cr.l5d.io/linkerd/proxy-init@sha256:e4ef473f52c453ea7895e9258738909ded899d20a252744cc0b9459b36f987ca
lastState: {}
name: linkerd-init
ready: true
restartCount: 0
started: false
state:
terminated:
containerID: containerd://e056b7f42703cc62b827f3c394188d4162536dad3338f8f37d9e9810348d2193
exitCode: 0
finishedAt: "2024-11-15T14:47:49Z"
reason: Completed
startedAt: "2024-11-15T14:47:49Z"
phase: Running
podIP: 10.42.0.36
podIPs:
- ip: 10.42.0.36
qosClass: BestEffort
startTime: "2024-11-15T14:47:48Z"
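The status above shows the policy container exiting with code 101 and sitting in CrashLoopBackOff. A hedged diagnostic sketch for pulling the crash logs and seeing which Gateway API CRDs the cluster already carries (exact CRD names and annotations can vary with the Gateway API version in use):

```shell
# Pull the logs of the previous (crashed) policy container instance.
kubectl logs -n linkerd deploy/linkerd-destination -c policy --previous

# List any Gateway API CRDs already present on the cluster.
kubectl get crds -o name | grep gateway.networking.k8s.io

# Inspect the annotations on one of them (e.g. httproutes) to see which
# component installed it and at which Gateway API bundle version.
kubectl get crd httproutes.gateway.networking.k8s.io \
  -o jsonpath='{.metadata.annotations}'
```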
Trace logs:

For sure related to #13032, but it isn't clear how to solve it: linkerd-crds collides with your cluster shipping its own (newer?) Gateway API CRDs (https://gateway-api.sigs.k8s.io/)... hmm...
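If the cluster (here k3s, which bundles Traefik) already ships the Gateway API CRDs, one possible workaround — assuming the linkerd-crds Helm chart in this release still exposes an `enableHttpRoutes` value, which should be verified against `helm show values linkerd/linkerd-crds` — is to skip Linkerd's bundled copies and keep the pre-existing CRDs:

```yaml
# values.yaml override for the linkerd-crds chart.
# Assumption: the flag is named enableHttpRoutes in this chart version;
# check the chart's values before relying on it.
# Skips installing Linkerd's bundled gateway.networking.k8s.io CRDs so
# they don't collide with the copies the cluster already ships.
enableHttpRoutes: false
```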
What is the issue?

Deploying version 2024.11.3 of linkerd-control-plane does not work because the policy container in the linkerd-destination deployment goes into CrashLoopBackOff.

How can it be reproduced?
Logs, error output, etc

Output of linkerd check -o short
Environment
Possible solution
No response
Additional context
No response
Would you like to work on fixing this bug?
None