
Allowing linkerd-proxy to accept inbound traffic despite Pod not being ready #13247

Closed
@tjorri

Description


What problem are you trying to solve?

We have a clustered service which, at boot time, requires all cluster nodes to establish connections with each other before the service (or any of its Pods) can pass its readinessProbes. Specifically, we run StatefulSets with a headless Service set to publishNotReadyAddresses: true, which lets nodes discover each other and then communicate directly on each Pod's clustering port. A separate, regular Service provides the interface other services use to reach this service; it refuses traffic until the clustered service is ready, as signalled by the readinessProbes.
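
For context, a minimal sketch of this kind of setup (the names and clustering port 7000 are illustrative, not our actual configuration):

```yaml
# Headless Service used for peer discovery; Pod IPs are published via DNS
# even before the Pods pass their readiness probes.
apiVersion: v1
kind: Service
metadata:
  name: my-cluster-peers
spec:
  clusterIP: None                 # headless
  publishNotReadyAddresses: true  # expose not-yet-ready Pods for discovery
  selector:
    app: my-cluster
  ports:
    - name: clustering
      port: 7000
---
# StatefulSet whose Pods must reach each other on the clustering port
# before any of them can become Ready.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-cluster
spec:
  serviceName: my-cluster-peers
  replicas: 3
  selector:
    matchLabels:
      app: my-cluster
  template:
    metadata:
      labels:
        app: my-cluster
    spec:
      containers:
        - name: node
          image: example/cluster-node:latest
          ports:
            - name: clustering
              containerPort: 7000
          readinessProbe:
            tcpSocket:
              port: clustering
```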

When we enable the linkerd-proxy sidecars, direct Pod-to-Pod communication is rejected by linkerd-proxy. If we remove the readinessProbes from the Pods (or set config.linkerd.io/skip-inbound-ports to the clustering port), the cluster setup works, so we believe it is the Pods' unreadiness that causes the proxy to reject traffic.

In our situation we are attempting to use Linkerd mainly for convenient traffic encryption and for mTLS-based authentication and authorization.

How should the problem be solved?

Ideally, we could flag certain ports so that linkerd-proxy accepts traffic on them even while the Pod's readiness checks are failing.

I'm not fully familiar with Linkerd's internals, so my initial, naive end-user view is something similar to the config.linkerd.io/skip-inbound-ports annotation. I understand that annotation is mainly consumed by linkerd-proxy-init for the iptables setup, so this would probably require coordination with other resources; for example, a Server resource could be allowed to state that its target port should receive traffic irrespective of readiness checks, as sketched below.
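
Purely as an illustration of that idea (the field below does not exist in Linkerd today; acceptWhenNotReady is a made-up name), a Server resource might carry the opt-in:

```yaml
# Hypothetical sketch: acceptWhenNotReady is NOT an existing Linkerd field.
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  name: my-cluster-clustering
spec:
  podSelector:
    matchLabels:
      app: my-cluster
  port: clustering
  proxyProtocol: opaque
  # Proposed: keep accepting inbound connections on this port even while
  # the Pod's readiness probes are failing.
  acceptWhenNotReady: true
```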

Any alternatives you've considered?

Right now our less-than-elegant workaround is to exempt the clustering ports from linkerd-proxy via config.linkerd.io/skip-inbound-ports (i.e. --inbound-ports-to-ignore). Because the clustering ports matter from a security standpoint, we then overlay a NetworkPolicy that restricts access to them (a sketch follows below). This works in a single-cluster context but is a bit ugly, and it would fail in multi-cluster setups, where multi-cluster-capable NetworkPolicies or similar constructs would be needed; having this functionality in Linkerd itself would keep the toolset smaller and more elegant.
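
A rough sketch of the workaround, again with illustrative names and port 7000 (only the relevant pieces shown):

```yaml
# Pod template annotation on the StatefulSet: the clustering port bypasses
# the proxy entirely, so it gets no mTLS from Linkerd.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-cluster
spec:
  template:
    metadata:
      annotations:
        config.linkerd.io/skip-inbound-ports: "7000"
  # ...rest of the StatefulSet unchanged...
---
# Compensating NetworkPolicy: only cluster peers may reach the clustering port.
# (A real policy would also need to allow ingress on the application's
# regular service ports, omitted here for brevity.)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-clustering-port
spec:
  podSelector:
    matchLabels:
      app: my-cluster
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: my-cluster
      ports:
        - port: 7000
          protocol: TCP
```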

How would users interact with this feature?

No response

Would you like to work on this feature?

None
